
Merge tag 'pci-v4.4-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"Resource management:
- Add support for Enhanced Allocation devices (Sean O. Stalley)
- Add Enhanced Allocation register entries (Sean O. Stalley)
- Handle IORESOURCE_PCI_FIXED when sizing resources (David Daney)
- Handle IORESOURCE_PCI_FIXED when assigning resources (David Daney)
- Handle Enhanced Allocation capability for SR-IOV devices (David Daney)
- Clear IORESOURCE_UNSET when reverting to firmware-assigned address (Bjorn Helgaas)
- Make Enhanced Allocation bitmasks more obvious (Bjorn Helgaas)
- Expand Enhanced Allocation BAR output (Bjorn Helgaas)
- Add of_pci_check_probe_only to parse "linux,pci-probe-only" (Marc Zyngier)
- Fix lookup of linux,pci-probe-only property (Marc Zyngier)
- Add sparc mem64 resource parsing for root bus (Yinghai Lu)

PCI device hotplug:
- pciehp: Queue power work requests in dedicated function (Guenter Roeck)

Driver binding:
- Add builtin_pci_driver() to avoid registration boilerplate (Paul Gortmaker)

Virtualization:
- Set SR-IOV NumVFs to zero after enumeration (Alexander Duyck)
- Remove redundant validation of SR-IOV offset/stride registers (Alexander Duyck)
- Remove VFs in reverse order if virtfn_add() fails (Alexander Duyck)
- Reorder pcibios_sriov_disable() (Alexander Duyck)
- Wait 1 second between disabling VFs and clearing NumVFs (Alexander Duyck)
- Fix sriov_enable() error path for pcibios_enable_sriov() failures (Alexander Duyck)
- Enable SR-IOV ARI Capable Hierarchy before reading TotalVFs (Ben Shelton)
- Don't try to restore VF BARs (Wei Yang)

MSI:
- Don't alloc pcibios-irq when MSI is enabled (Joerg Roedel)
- Add msi_controller setup_irqs() method for special multivector setup (Lucas Stach)
- Export all remapped MSIs to sysfs attributes (Romain Bezut)
- Disable MSI on SiS 761 (Ondrej Zary)

AER:
- Clear error status registers during enumeration and restore (Taku Izumi)

Generic host bridge driver:
- Fix lookup of linux,pci-probe-only property (Marc Zyngier)
- Allow multiple hosts with different map_bus() methods (David Daney)
- Pass starting bus number to pci_scan_root_bus() (David Daney)
- Fix address window calculation for non-zero starting bus (David Daney)

Altera host bridge driver:
- Add msi.h to ARM Kbuild (Ley Foon Tan)
- Add Altera PCIe host controller driver (Ley Foon Tan)
- Add Altera PCIe MSI driver (Ley Foon Tan)

APM X-Gene host bridge driver:
- Remove msi_controller assignment (Duc Dang)

Broadcom iProc host bridge driver:
- Fix header comment "Corporation" misspelling (Florian Fainelli)
- Fix code comment to match code (Ray Jui)
- Remove unused struct iproc_pcie.irqs[] (Ray Jui)
- Call pci_fixup_irqs() for ARM64 as well as ARM (Ray Jui)
- Fix PCIe reset logic (Ray Jui)
- Improve link detection logic (Ray Jui)
- Update PCIe device tree bindings (Ray Jui)
- Add outbound mapping support (Ray Jui)

Freescale i.MX6 host bridge driver:
- Return real error code from imx6_add_pcie_port() (Fabio Estevam)
- Add PCIE_PHY_RX_ASIC_OUT_VALID definition (Fabio Estevam)

Freescale Layerscape host bridge driver:
- Remove ls_pcie_establish_link() (Minghuan Lian)
- Ignore PCIe controllers in Endpoint mode (Minghuan Lian)
- Factor out SCFG related function (Minghuan Lian)
- Update ls_add_pcie_port() (Minghuan Lian)
- Remove unused fields from struct ls_pcie (Minghuan Lian)
- Add support for LS1043a and LS2080a (Minghuan Lian)
- Add ls_pcie_msi_host_init() (Minghuan Lian)

HiSilicon host bridge driver:
- Add HiSilicon SoC Hip05 PCIe driver (Zhou Wang)

Marvell MVEBU host bridge driver:
- Return zero for reserved or unimplemented config space (Russell King)
- Use exact config access size; don't read/modify/write (Russell King)
- Use of_get_available_child_count() (Russell King)
- Use for_each_available_child_of_node() to walk child nodes (Russell King)
- Report full node name when reporting a DT error (Russell King)
- Use port->name rather than "PCIe%d.%d" (Russell King)
- Move port parsing and resource claiming to separate function (Russell King)
- Fix memory leaks and refcount leaks (Russell King)
- Split port parsing and resource claiming from port setup (Russell King)
- Use gpio_set_value_cansleep() (Russell King)
- Use devm_kcalloc() to allocate an array (Russell King)
- Use gpio_desc to carry around gpio (Russell King)
- Improve clock/reset handling (Russell King)
- Add PCI Express root complex capability block (Russell King)
- Remove code restricting accesses to slot 0 (Russell King)

NVIDIA Tegra host bridge driver:
- Wrap static pgprot_t initializer with __pgprot() (Ard Biesheuvel)

Renesas R-Car host bridge driver:
- Build pci-rcar-gen2.c only on ARM (Geert Uytterhoeven)
- Build pcie-rcar.c only on ARM (Geert Uytterhoeven)
- Make PCI aware of the I/O resources (Phil Edworthy)
- Remove dependency on ARM-specific struct hw_pci (Phil Edworthy)
- Set root bus nr to that provided in DT (Phil Edworthy)
- Fix I/O offset for multiple host bridges (Phil Edworthy)

ST Microelectronics SPEAr13xx host bridge driver:
- Fix dw_pcie_cfg_read/write() usage (Gabriele Paoloni)

Synopsys DesignWare host bridge driver:
- Make "clocks" and "clock-names" optional DT properties (Bhupesh Sharma)
- Use exact access size in dw_pcie_cfg_read() (Gabriele Paoloni)
- Simplify dw_pcie_cfg_read/write() interfaces (Gabriele Paoloni)
- Require config accesses to be naturally aligned (Gabriele Paoloni)
- Make "num-lanes" an optional DT property (Gabriele Paoloni)
- Move calculation of bus addresses to DRA7xx (Gabriele Paoloni)
- Replace ARM pci_sys_data->align_resource with global function pointer (Gabriele Paoloni)
- Factor out MSI msg setup (Lucas Stach)
- Implement multivector MSI IRQ setup (Lucas Stach)
- Make get_msi_addr() return phys_addr_t, not u32 (Lucas Stach)
- Set up high part of MSI target address (Lucas Stach)
- Fix PORT_LOGIC_LINK_WIDTH_MASK (Zhou Wang)
- Revert "PCI: designware: Program ATU with untranslated address" (Zhou Wang)
- Use of_pci_get_host_bridge_resources() to parse DT (Zhou Wang)
- Make driver arch-agnostic (Zhou Wang)

Miscellaneous:
- Make x86 pci_subsys_init() static (Alexander Kuleshov)
- Turn off Request Attributes to avoid Chelsio T5 Completion erratum (Hariprasad Shenai)"

* tag 'pci-v4.4-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (94 commits)
PCI: altera: Add Altera PCIe MSI driver
PCI: hisi: Add HiSilicon SoC Hip05 PCIe driver
PCI: layerscape: Add ls_pcie_msi_host_init()
PCI: layerscape: Add support for LS1043a and LS2080a
PCI: layerscape: Remove unused fields from struct ls_pcie
PCI: layerscape: Update ls_add_pcie_port()
PCI: layerscape: Factor out SCFG related function
PCI: layerscape: Ignore PCIe controllers in Endpoint mode
PCI: layerscape: Remove ls_pcie_establish_link()
PCI: designware: Make "clocks" and "clock-names" optional DT properties
PCI: designware: Make driver arch-agnostic
ARM/PCI: Replace pci_sys_data->align_resource with global function pointer
PCI: designware: Use of_pci_get_host_bridge_resources() to parse DT
Revert "PCI: designware: Program ATU with untranslated address"
PCI: designware: Move calculation of bus addresses to DRA7xx
PCI: designware: Make "num-lanes" an optional DT property
PCI: designware: Require config accesses to be naturally aligned
PCI: designware: Simplify dw_pcie_cfg_read/write() interfaces
PCI: designware: Use exact access size in dw_pcie_cfg_read()
PCI: spear: Fix dw_pcie_cfg_read/write() usage
...

+2904 -685
+17
Documentation/devicetree/bindings/arm/hisilicon/hisilicon.txt
··· 167 167 }; 168 168 169 169 ----------------------------------------------------------------------- 170 + Hisilicon HiP05 PCIe-SAS system controller 171 + 172 + Required properties: 173 + - compatible : "hisilicon,pcie-sas-subctrl", "syscon"; 174 + - reg : Register address and size 175 + 176 + The HiP05 PCIe-SAS system controller is shared by PCIe and SAS controllers in 177 + HiP05 Soc to implement some basic configurations. 178 + 179 + Example: 180 + /* for HiP05 PCIe-SAS system */ 181 + pcie_sas: system_controller@0xb0000000 { 182 + compatible = "hisilicon,pcie-sas-subctrl", "syscon"; 183 + reg = <0xb0000000 0x10000>; 184 + }; 185 + 186 + ----------------------------------------------------------------------- 170 187 Hisilicon CPU controller 171 188 172 189 Required properties:
+28
Documentation/devicetree/bindings/pci/altera-pcie-msi.txt
··· 1 + * Altera PCIe MSI controller 2 + 3 + Required properties: 4 + - compatible: should contain "altr,msi-1.0" 5 + - reg: specifies the physical base address of the controller and 6 + the length of the memory mapped region. 7 + - reg-names: must include the following entries: 8 + "csr": CSR registers 9 + "vector_slave": vectors slave port region 10 + - interrupt-parent: interrupt source phandle. 11 + - interrupts: specifies the interrupt source of the parent interrupt 12 + controller. The format of the interrupt specifier depends on the 13 + parent interrupt controller. 14 + - num-vectors: number of vectors, range 1 to 32. 15 + - msi-controller: indicates that this is MSI controller node 16 + 17 + 18 + Example 19 + msi0: msi@0xFF200000 { 20 + compatible = "altr,msi-1.0"; 21 + reg = <0xFF200000 0x00000010 22 + 0xFF200010 0x00000080>; 23 + reg-names = "csr", "vector_slave"; 24 + interrupt-parent = <&hps_0_arm_gic_0>; 25 + interrupts = <0 42 4>; 26 + msi-controller; 27 + num-vectors = <32>; 28 + };
+49
Documentation/devicetree/bindings/pci/altera-pcie.txt
··· 1 + * Altera PCIe controller 2 + 3 + Required properties: 4 + - compatible : should contain "altr,pcie-root-port-1.0" 5 + - reg: a list of physical base address and length for TXS and CRA. 6 + - reg-names: must include the following entries: 7 + "Txs": TX slave port region 8 + "Cra": Control register access region 9 + - interrupt-parent: interrupt source phandle. 10 + - interrupts: specifies the interrupt source of the parent interrupt controller. 11 + The format of the interrupt specifier depends on the parent interrupt 12 + controller. 13 + - device_type: must be "pci" 14 + - #address-cells: set to <3> 15 + - #size-cells: set to <2> 16 + - #interrupt-cells: set to <1> 17 + - ranges: describes the translation of addresses for root ports and standard 18 + PCI regions. 19 + - interrupt-map-mask and interrupt-map: standard PCI properties to define the 20 + mapping of the PCIe interface to interrupt numbers. 21 + 22 + Optional properties: 23 + - msi-parent: Link to the hardware entity that serves as the MSI controller for this PCIe 24 + controller. 25 + - bus-range: PCI bus numbers covered 26 + 27 + Example 28 + pcie_0: pcie@0xc00000000 { 29 + compatible = "altr,pcie-root-port-1.0"; 30 + reg = <0xc0000000 0x20000000>, 31 + <0xff220000 0x00004000>; 32 + reg-names = "Txs", "Cra"; 33 + interrupt-parent = <&hps_0_arm_gic_0>; 34 + interrupts = <0 40 4>; 35 + interrupt-controller; 36 + #interrupt-cells = <1>; 37 + bus-range = <0x0 0xFF>; 38 + device_type = "pci"; 39 + msi-parent = <&msi_to_gic_gen_0>; 40 + #address-cells = <3>; 41 + #size-cells = <2>; 42 + interrupt-map-mask = <0 0 0 7>; 43 + interrupt-map = <0 0 0 1 &pcie_0 1>, 44 + <0 0 0 2 &pcie_0 2>, 45 + <0 0 0 3 &pcie_0 3>, 46 + <0 0 0 4 &pcie_0 4>; 47 + ranges = <0x82000000 0x00000000 0x00000000 0xc0000000 0x00000000 0x10000000 48 + 0x82000000 0x00000000 0x10000000 0xd0000000 0x00000000 0x10000000>; 49 + };
+20
Documentation/devicetree/bindings/pci/brcm,iproc-pcie.txt
··· 17 17 - phys: phandle of the PCIe PHY device 18 18 - phy-names: must be "pcie-phy" 19 19 20 + - brcm,pcie-ob: Some iProc SoCs do not have the outbound address mapping done 21 + by the ASIC after power on reset. In this case, SW needs to configure it 22 + 23 + If the brcm,pcie-ob property is present, the following properties become 24 + effective: 25 + 26 + Required: 27 + - brcm,pcie-ob-axi-offset: The offset from the AXI address to the internal 28 + address used by the iProc PCIe core (not the PCIe address) 29 + - brcm,pcie-ob-window-size: The outbound address mapping window size (in MB) 30 + 31 + Optional: 32 + - brcm,pcie-ob-oarr-size: Some iProc SoCs need the OARR size bit to be set to 33 + increase the outbound window size 34 + 20 35 Example: 21 36 pcie0: pcie@18012000 { 22 37 compatible = "brcm,iproc-pcie"; ··· 53 38 54 39 phys = <&phy 0 5>; 55 40 phy-names = "pcie-phy"; 41 + 42 + brcm,pcie-ob; 43 + brcm,pcie-ob-oarr-size; 44 + brcm,pcie-ob-axi-offset = <0x00000000>; 45 + brcm,pcie-ob-window-size = <256>; 56 46 }; 57 47 58 48 pcie1: pcie@18013000 {
+8 -6
Documentation/devicetree/bindings/pci/designware-pcie.txt
··· 15 15 to define the mapping of the PCIe interface to interrupt 16 16 numbers. 17 17 - num-lanes: number of lanes to use 18 + 19 + Optional properties: 20 + - num-lanes: number of lanes to use (this property should be specified unless 21 + the link is brought already up in BIOS) 22 + - reset-gpio: gpio pin number of power good signal 23 + - bus-range: PCI bus numbers covered (it is recommended for new devicetrees to 24 + specify this property, to keep backwards compatibility a range of 0x00-0xff 25 + is assumed if not present) 18 26 - clocks: Must contain an entry for each entry in clock-names. 19 27 See ../clocks/clock-bindings.txt for details. 20 28 - clock-names: Must include the following entries: 21 29 - "pcie" 22 30 - "pcie_bus" 23 - 24 - Optional properties: 25 - - reset-gpio: gpio pin number of power good signal 26 - - bus-range: PCI bus numbers covered (it is recommended for new devicetrees to 27 - specify this property, to keep backwards compatibility a range of 0x00-0xff 28 - is assumed if not present)
+44
Documentation/devicetree/bindings/pci/hisilicon-pcie.txt
··· 1 + HiSilicon PCIe host bridge DT description 2 + 3 + HiSilicon PCIe host controller is based on Designware PCI core. 4 + It shares common functions with PCIe Designware core driver and inherits 5 + common properties defined in 6 + Documentation/devicetree/bindings/pci/designware-pci.txt. 7 + 8 + Additional properties are described here: 9 + 10 + Required properties: 11 + - compatible: Should contain "hisilicon,hip05-pcie". 12 + - reg: Should contain rc_dbi, config registers location and length. 13 + - reg-names: Must include the following entries: 14 + "rc_dbi": controller configuration registers; 15 + "config": PCIe configuration space registers. 16 + - msi-parent: Should be its_pcie which is an ITS receiving MSI interrupts. 17 + - port-id: Should be 0, 1, 2 or 3. 18 + 19 + Optional properties: 20 + - status: Either "ok" or "disabled". 21 + - dma-coherent: Present if DMA operations are coherent. 22 + 23 + Example: 24 + pcie@0xb0080000 { 25 + compatible = "hisilicon,hip05-pcie", "snps,dw-pcie"; 26 + reg = <0 0xb0080000 0 0x10000>, <0x220 0x00000000 0 0x2000>; 27 + reg-names = "rc_dbi", "config"; 28 + bus-range = <0 15>; 29 + msi-parent = <&its_pcie>; 30 + #address-cells = <3>; 31 + #size-cells = <2>; 32 + device_type = "pci"; 33 + dma-coherent; 34 + ranges = <0x82000000 0 0x00000000 0x220 0x00000000 0 0x10000000>; 35 + num-lanes = <8>; 36 + port-id = <1>; 37 + #interrupts-cells = <1>; 38 + interrupts-map-mask = <0xf800 0 0 7>; 39 + interrupts-map = <0x0 0 0 1 &mbigen_pcie 1 10 40 + 0x0 0 0 2 &mbigen_pcie 2 11 41 + 0x0 0 0 3 &mbigen_pcie 3 12 42 + 0x0 0 0 4 &mbigen_pcie 4 13>; 43 + status = "ok"; 44 + };
+3 -2
Documentation/devicetree/bindings/pci/host-generic-pci.txt
··· 34 34 - #size-cells : Must be 2. 35 35 36 36 - reg : The Configuration Space base address and size, as accessed 37 - from the parent bus. 38 - 37 + from the parent bus. The base address corresponds to 38 + the first bus in the "bus-range" property. If no 39 + "bus-range" is specified, this will be bus 0 (the default). 39 40 40 41 Properties of the /chosen node: 41 42
+12 -2
Documentation/devicetree/bindings/pci/layerscape-pci.txt
··· 1 1 Freescale Layerscape PCIe controller 2 2 3 - This PCIe host controller is based on the Synopsis Designware PCIe IP 3 + This PCIe host controller is based on the Synopsys DesignWare PCIe IP 4 4 and thus inherits all the common properties defined in designware-pcie.txt. 5 5 6 + This controller derives its clocks from the Reset Configuration Word (RCW) 7 + which is used to describe the PLL settings at the time of chip-reset. 8 + 9 + Also as per the available Reference Manuals, there is no specific 'version' 10 + register available in the Freescale PCIe controller register set, 11 + which can allow determining the underlying DesignWare PCIe controller version 12 + information. 13 + 6 14 Required properties: 7 - - compatible: should contain the platform identifier such as "fsl,ls1021a-pcie" 15 + - compatible: should contain the platform identifier such as: 16 + "fsl,ls1021a-pcie", "snps,dw-pcie" 17 + "fsl,ls2080a-pcie", "snps,dw-pcie" 8 18 - reg: base addresses and lengths of the PCIe controller 9 19 - interrupts: A list of interrupt outputs of the controller. Must contain an 10 20 entry for each entry in the interrupt-names property.
+23
MAINTAINERS
··· 8063 8063 F: arch/x86/pci/ 8064 8064 F: arch/x86/kernel/quirks.c 8065 8065 8066 + PCI DRIVER FOR ALTERA PCIE IP 8067 + M: Ley Foon Tan <lftan@altera.com> 8068 + L: rfi@lists.rocketboards.org (moderated for non-subscribers) 8069 + L: linux-pci@vger.kernel.org 8070 + S: Supported 8071 + F: Documentation/devicetree/bindings/pci/altera-pcie.txt 8072 + F: drivers/pci/host/pcie-altera.c 8073 + 8066 8074 PCI DRIVER FOR ARM VERSATILE PLATFORM 8067 8075 M: Rob Herring <robh@kernel.org> 8068 8076 L: linux-pci@vger.kernel.org ··· 8172 8164 S: Maintained 8173 8165 F: drivers/pci/host/*spear* 8174 8166 8167 + PCI MSI DRIVER FOR ALTERA MSI IP 8168 + M: Ley Foon Tan <lftan@altera.com> 8169 + L: rfi@lists.rocketboards.org (moderated for non-subscribers) 8170 + L: linux-pci@vger.kernel.org 8171 + S: Supported 8172 + F: Documentation/devicetree/bindings/pci/altera-pcie-msi.txt 8173 + F: drivers/pci/host/pcie-altera-msi.c 8174 + 8175 8175 PCI MSI DRIVER FOR APPLIEDMICRO XGENE 8176 8176 M: Duc Dang <dhdang@apm.com> 8177 8177 L: linux-pci@vger.kernel.org ··· 8187 8171 S: Maintained 8188 8172 F: Documentation/devicetree/bindings/pci/xgene-pci-msi.txt 8189 8173 F: drivers/pci/host/pci-xgene-msi.c 8174 + 8175 + PCIE DRIVER FOR HISILICON 8176 + M: Zhou Wang <wangzhou1@hisilicon.com> 8177 + L: linux-pci@vger.kernel.org 8178 + S: Maintained 8179 + F: Documentation/devicetree/bindings/pci/hisilicon-pcie.txt 8180 + F: drivers/pci/host/pcie-hisi.c 8190 8181 8191 8182 PCMCIA SUBSYSTEM 8192 8183 P: Linux PCMCIA Team
+1
arch/arm/include/asm/Kbuild
··· 14 14 generic-y += local64.h 15 15 generic-y += mm-arch-hooks.h 16 16 generic-y += msgbuf.h 17 + generic-y += msi.h 17 18 generic-y += param.h 18 19 generic-y += parport.h 19 20 generic-y += poll.h
-6
arch/arm/include/asm/mach/pci.h
··· 52 52 u8 (*swizzle)(struct pci_dev *, u8 *); 53 53 /* IRQ mapping */ 54 54 int (*map_irq)(const struct pci_dev *, u8, u8); 55 - /* Resource alignement requirements */ 56 - resource_size_t (*align_resource)(struct pci_dev *dev, 57 - const struct resource *res, 58 - resource_size_t start, 59 - resource_size_t size, 60 - resource_size_t align); 61 55 void *private_data; /* platform controller private data */ 62 56 }; 63 57
+8 -4
arch/arm/kernel/bios32.c
··· 17 17 #include <asm/mach/pci.h> 18 18 19 19 static int debug_pci; 20 + static resource_size_t (*align_resource)(struct pci_dev *dev, 21 + const struct resource *res, 22 + resource_size_t start, 23 + resource_size_t size, 24 + resource_size_t align) = NULL; 20 25 21 26 /* 22 27 * We can't use pci_get_device() here since we are ··· 461 456 sys->busnr = busnr; 462 457 sys->swizzle = hw->swizzle; 463 458 sys->map_irq = hw->map_irq; 464 - sys->align_resource = hw->align_resource; 459 + align_resource = hw->align_resource; 465 460 INIT_LIST_HEAD(&sys->resources); 466 461 467 462 if (hw->private_data) ··· 577 572 resource_size_t size, resource_size_t align) 578 573 { 579 574 struct pci_dev *dev = data; 580 - struct pci_sys_data *sys = dev->sysdata; 581 575 resource_size_t start = res->start; 582 576 583 577 if (res->flags & IORESOURCE_IO && start & 0x300) ··· 584 580 585 581 start = (start + align - 1) & ~(align - 1); 586 582 587 - if (sys->align_resource) 588 - return sys->align_resource(dev, res, start, size, align); 583 + if (align_resource) 584 + return align_resource(dev, res, start, size, align); 589 585 590 586 return start; 591 587 }
-1
arch/arm64/boot/dts/amd/amd-overdrive.dts
··· 14 14 15 15 chosen { 16 16 stdout-path = &serial0; 17 - linux,pci-probe-only; 18 17 }; 19 18 }; 20 19
+2 -12
arch/powerpc/platforms/pseries/setup.c
··· 40 40 #include <linux/seq_file.h> 41 41 #include <linux/root_dev.h> 42 42 #include <linux/of.h> 43 + #include <linux/of_pci.h> 43 44 #include <linux/kexec.h> 44 45 45 46 #include <asm/mmu.h> ··· 496 495 * PCI_PROBE_ONLY and PCI_REASSIGN_ALL_BUS can be set via properties 497 496 * in chosen. 498 497 */ 499 - if (of_chosen) { 500 - const int *prop; 501 - 502 - prop = of_get_property(of_chosen, 503 - "linux,pci-probe-only", NULL); 504 - if (prop) { 505 - if (*prop) 506 - pci_add_flags(PCI_PROBE_ONLY); 507 - else 508 - pci_clear_flags(PCI_PROBE_ONLY); 509 - } 510 - } 498 + of_pci_check_probe_only(); 511 499 } 512 500 513 501 static void __init pSeries_setup_arch(void)
+6 -1
arch/sparc/kernel/pci.c
··· 185 185 186 186 if (addr0 & 0x02000000) { 187 187 flags = IORESOURCE_MEM | PCI_BASE_ADDRESS_SPACE_MEMORY; 188 - flags |= (addr0 >> 22) & PCI_BASE_ADDRESS_MEM_TYPE_64; 189 188 flags |= (addr0 >> 28) & PCI_BASE_ADDRESS_MEM_TYPE_1M; 189 + if (addr0 & 0x01000000) 190 + flags |= IORESOURCE_MEM_64 191 + | PCI_BASE_ADDRESS_MEM_TYPE_64; 190 192 if (addr0 & 0x40000000) 191 193 flags |= IORESOURCE_PREFETCH 192 194 | PCI_BASE_ADDRESS_MEM_PREFETCH; ··· 657 655 pbm->io_space.start); 658 656 pci_add_resource_offset(&resources, &pbm->mem_space, 659 657 pbm->mem_space.start); 658 + if (pbm->mem64_space.flags) 659 + pci_add_resource_offset(&resources, &pbm->mem64_space, 660 + pbm->mem_space.start); 660 661 pbm->busn.start = pbm->pci_first_busno; 661 662 pbm->busn.end = pbm->pci_last_busno; 662 663 pbm->busn.flags = IORESOURCE_BUS;
+15 -2
arch/sparc/kernel/pci_common.c
··· 406 406 } 407 407 408 408 num_pbm_ranges = i / sizeof(*pbm_ranges); 409 + memset(&pbm->mem64_space, 0, sizeof(struct resource)); 409 410 410 411 for (i = 0; i < num_pbm_ranges; i++) { 411 412 const struct linux_prom_pci_ranges *pr = &pbm_ranges[i]; ··· 452 451 break; 453 452 454 453 case 3: 455 - /* XXX 64-bit MEM handling XXX */ 454 + /* 64-bit MEM handling */ 455 + pbm->mem64_space.start = a; 456 + pbm->mem64_space.end = a + size - 1UL; 457 + pbm->mem64_space.flags = IORESOURCE_MEM; 458 + saw_mem = 1; 459 + break; 456 460 457 461 default: 458 462 break; ··· 471 465 prom_halt(); 472 466 } 473 467 474 - printk("%s: PCI IO[%llx] MEM[%llx]\n", 468 + printk("%s: PCI IO[%llx] MEM[%llx]", 475 469 pbm->name, 476 470 pbm->io_space.start, 477 471 pbm->mem_space.start); 472 + if (pbm->mem64_space.flags) 473 + printk(" MEM64[%llx]", 474 + pbm->mem64_space.start); 475 + printk("\n"); 478 476 479 477 pbm->io_space.name = pbm->mem_space.name = pbm->name; 478 + pbm->mem64_space.name = pbm->name; 480 479 481 480 request_resource(&ioport_resource, &pbm->io_space); 482 481 request_resource(&iomem_resource, &pbm->mem_space); 482 + if (pbm->mem64_space.flags) 483 + request_resource(&iomem_resource, &pbm->mem64_space); 483 484 484 485 pci_register_legacy_regions(&pbm->io_space, 485 486 &pbm->mem_space);
+1
arch/sparc/kernel/pci_impl.h
··· 97 97 /* PBM I/O and Memory space resources. */ 98 98 struct resource io_space; 99 99 struct resource mem_space; 100 + struct resource mem64_space; 100 101 struct resource busn; 101 102 102 103 /* Base of PCI Config space, can be per-PBM or shared. */
+8
arch/x86/pci/common.c
··· 675 675 676 676 int pcibios_alloc_irq(struct pci_dev *dev) 677 677 { 678 + /* 679 + * If the PCI device was already claimed by core code and has 680 + * MSI enabled, probing of the pcibios IRQ will overwrite 681 + * dev->irq. So bail out if MSI is already enabled. 682 + */ 683 + if (pci_dev_msi_enabled(dev)) 684 + return -EBUSY; 685 + 678 686 return pcibios_enable_irq(dev); 679 687 } 680 688
+1 -1
arch/x86/pci/legacy.c
··· 54 54 } 55 55 EXPORT_SYMBOL_GPL(pcibios_scan_specific_bus); 56 56 57 - int __init pci_subsys_init(void) 57 + static int __init pci_subsys_init(void) 58 58 { 59 59 /* 60 60 * The init function returns an non zero value when
+26
drivers/of/of_pci.c
··· 5 5 #include <linux/of_device.h> 6 6 #include <linux/of_pci.h> 7 7 #include <linux/slab.h> 8 + #include <asm-generic/pci-bridge.h> 8 9 9 10 static inline int __of_pci_pci_compare(struct device_node *node, 10 11 unsigned int data) ··· 117 116 return domain; 118 117 } 119 118 EXPORT_SYMBOL_GPL(of_get_pci_domain_nr); 119 + 120 + /** 121 + * of_pci_check_probe_only - Setup probe only mode if linux,pci-probe-only 122 + * is present and valid 123 + */ 124 + void of_pci_check_probe_only(void) 125 + { 126 + u32 val; 127 + int ret; 128 + 129 + ret = of_property_read_u32(of_chosen, "linux,pci-probe-only", &val); 130 + if (ret) { 131 + if (ret == -ENODATA || ret == -EOVERFLOW) 132 + pr_warn("linux,pci-probe-only without valid value, ignoring\n"); 133 + return; 134 + } 135 + 136 + if (val) 137 + pci_add_flags(PCI_PROBE_ONLY); 138 + else 139 + pci_clear_flags(PCI_PROBE_ONLY); 140 + 141 + pr_info("PCI: PROBE_ONLY %sabled\n", val ? "en" : "dis"); 142 + } 143 + EXPORT_SYMBOL_GPL(of_pci_check_probe_only); 120 144 121 145 /** 122 146 * of_pci_dma_configure - Setup DMA configuration
+30 -3
drivers/pci/host/Kconfig
··· 39 39 40 40 config PCI_RCAR_GEN2 41 41 bool "Renesas R-Car Gen2 Internal PCI controller" 42 - depends on ARCH_SHMOBILE || (ARM && COMPILE_TEST) 42 + depends on ARM 43 + depends on ARCH_SHMOBILE || COMPILE_TEST 43 44 help 44 45 Say Y here if you want internal PCI support on R-Car Gen2 SoC. 45 46 There are 3 internal PCI controllers available with a single ··· 48 47 49 48 config PCI_RCAR_GEN2_PCIE 50 49 bool "Renesas R-Car PCIe controller" 51 - depends on ARCH_SHMOBILE || (ARM && COMPILE_TEST) 50 + depends on ARM 51 + depends on ARCH_SHMOBILE || COMPILE_TEST 52 52 help 53 53 Say Y here if you want PCIe controller support on R-Car Gen2 SoCs. 54 54 ··· 107 105 108 106 config PCI_LAYERSCAPE 109 107 bool "Freescale Layerscape PCIe controller" 110 - depends on OF && ARM 108 + depends on OF && (ARM || ARCH_LAYERSCAPE) 111 109 select PCIE_DW 112 110 select MFD_SYSCON 113 111 help ··· 146 144 help 147 145 Say Y here if you want to use the Broadcom iProc PCIe controller 148 146 through the BCMA bus interface 147 + 148 + config PCIE_ALTERA 149 + bool "Altera PCIe controller" 150 + depends on ARM || NIOS2 151 + depends on OF_PCI 152 + select PCI_DOMAINS 153 + help 154 + Say Y here if you want to enable PCIe controller support on Altera 155 + FPGA. 156 + 157 + config PCIE_ALTERA_MSI 158 + bool "Altera PCIe MSI feature" 159 + depends on PCIE_ALTERA && PCI_MSI 160 + select PCI_MSI_IRQ_DOMAIN 161 + help 162 + Say Y here if you want PCIe MSI support for the Altera FPGA. 163 + This MSI driver supports Altera MSI to GIC controller IP. 164 + 165 + config PCI_HISI 166 + depends on OF && ARM64 167 + bool "HiSilicon SoC HIP05 PCIe controller" 168 + select PCIEPORTBUS 169 + select PCIE_DW 170 + help 171 + Say Y here if you want PCIe controller support on HiSilicon HIP05 SoC 149 172 150 173 endmenu
+3
drivers/pci/host/Makefile
··· 17 17 obj-$(CONFIG_PCIE_IPROC) += pcie-iproc.o 18 18 obj-$(CONFIG_PCIE_IPROC_PLATFORM) += pcie-iproc-platform.o 19 19 obj-$(CONFIG_PCIE_IPROC_BCMA) += pcie-iproc-bcma.o 20 + obj-$(CONFIG_PCIE_ALTERA) += pcie-altera.o 21 + obj-$(CONFIG_PCIE_ALTERA_MSI) += pcie-altera-msi.o 22 + obj-$(CONFIG_PCI_HISI) += pcie-hisi.o
+7
drivers/pci/host/pci-dra7xx.c
··· 62 62 63 63 #define PCIECTRL_DRA7XX_CONF_PHY_CS 0x010C 64 64 #define LINK_UP BIT(16) 65 + #define DRA7XX_CPU_TO_BUS_ADDR 0x0FFFFFFF 65 66 66 67 struct dra7xx_pcie { 67 68 void __iomem *base; ··· 152 151 static void dra7xx_pcie_host_init(struct pcie_port *pp) 153 152 { 154 153 dw_pcie_setup_rc(pp); 154 + 155 + pp->io_base &= DRA7XX_CPU_TO_BUS_ADDR; 156 + pp->mem_base &= DRA7XX_CPU_TO_BUS_ADDR; 157 + pp->cfg0_base &= DRA7XX_CPU_TO_BUS_ADDR; 158 + pp->cfg1_base &= DRA7XX_CPU_TO_BUS_ADDR; 159 + 155 160 dra7xx_pcie_establish_link(pp); 156 161 if (IS_ENABLED(CONFIG_PCI_MSI)) 157 162 dw_pcie_msi_init(pp);
+2 -3
drivers/pci/host/pci-exynos.c
··· 454 454 int ret; 455 455 456 456 exynos_pcie_sideband_dbi_r_mode(pp, true); 457 - ret = dw_pcie_cfg_read(pp->dbi_base + (where & ~0x3), where, size, val); 457 + ret = dw_pcie_cfg_read(pp->dbi_base + where, size, val); 458 458 exynos_pcie_sideband_dbi_r_mode(pp, false); 459 459 return ret; 460 460 } ··· 465 465 int ret; 466 466 467 467 exynos_pcie_sideband_dbi_w_mode(pp, true); 468 - ret = dw_pcie_cfg_write(pp->dbi_base + (where & ~0x3), 469 - where, size, val); 468 + ret = dw_pcie_cfg_write(pp->dbi_base + where, size, val); 470 469 exynos_pcie_sideband_dbi_w_mode(pp, false); 471 470 return ret; 472 471 }
+19 -22
drivers/pci/host/pci-host-generic.c
··· 27 27 28 28 struct gen_pci_cfg_bus_ops { 29 29 u32 bus_shift; 30 - void __iomem *(*map_bus)(struct pci_bus *, unsigned int, int); 30 + struct pci_ops ops; 31 31 }; 32 32 33 33 struct gen_pci_cfg_windows { ··· 35 35 struct resource *bus_range; 36 36 void __iomem **win; 37 37 38 - const struct gen_pci_cfg_bus_ops *ops; 38 + struct gen_pci_cfg_bus_ops *ops; 39 39 }; 40 40 41 41 /* ··· 65 65 66 66 static struct gen_pci_cfg_bus_ops gen_pci_cfg_cam_bus_ops = { 67 67 .bus_shift = 16, 68 - .map_bus = gen_pci_map_cfg_bus_cam, 68 + .ops = { 69 + .map_bus = gen_pci_map_cfg_bus_cam, 70 + .read = pci_generic_config_read, 71 + .write = pci_generic_config_write, 72 + } 69 73 }; 70 74 71 75 static void __iomem *gen_pci_map_cfg_bus_ecam(struct pci_bus *bus, ··· 84 80 85 81 static struct gen_pci_cfg_bus_ops gen_pci_cfg_ecam_bus_ops = { 86 82 .bus_shift = 20, 87 - .map_bus = gen_pci_map_cfg_bus_ecam, 88 - }; 89 - 90 - static struct pci_ops gen_pci_ops = { 91 - .read = pci_generic_config_read, 92 - .write = pci_generic_config_write, 83 + .ops = { 84 + .map_bus = gen_pci_map_cfg_bus_ecam, 85 + .read = pci_generic_config_read, 86 + .write = pci_generic_config_write, 87 + } 93 88 }; 94 89 95 90 static const struct of_device_id gen_pci_of_match[] = { ··· 169 166 struct resource *bus_range; 170 167 struct device *dev = pci->host.dev.parent; 171 168 struct device_node *np = dev->of_node; 169 + u32 sz = 1 << pci->cfg.ops->bus_shift; 172 170 173 171 err = of_address_to_resource(np, 0, &pci->cfg.res); 174 172 if (err) { ··· 197 193 bus_range = pci->cfg.bus_range; 198 194 for (busn = bus_range->start; busn <= bus_range->end; ++busn) { 199 195 u32 idx = busn - bus_range->start; 200 - u32 sz = 1 << pci->cfg.ops->bus_shift; 201 196 202 197 pci->cfg.win[idx] = devm_ioremap(dev, 203 - pci->cfg.res.start + busn * sz, 198 + pci->cfg.res.start + idx * sz, 204 199 sz); 205 200 if (!pci->cfg.win[idx]) 206 201 return -ENOMEM; ··· 213 210 int err; 214 211 const char *type; 215 212 const struct 
of_device_id *of_id; 216 - const int *prop; 217 213 struct device *dev = &pdev->dev; 218 214 struct device_node *np = dev->of_node; 219 215 struct gen_pci *pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL); ··· 227 225 return -EINVAL; 228 226 } 229 227 230 - prop = of_get_property(of_chosen, "linux,pci-probe-only", NULL); 231 - if (prop) { 232 - if (*prop) 233 - pci_add_flags(PCI_PROBE_ONLY); 234 - else 235 - pci_clear_flags(PCI_PROBE_ONLY); 236 - } 228 + of_pci_check_probe_only(); 237 229 238 230 of_id = of_match_node(gen_pci_of_match, np); 239 - pci->cfg.ops = of_id->data; 240 - gen_pci_ops.map_bus = pci->cfg.ops->map_bus; 231 + pci->cfg.ops = (struct gen_pci_cfg_bus_ops *)of_id->data; 241 232 pci->host.dev.parent = dev; 242 233 INIT_LIST_HEAD(&pci->host.windows); 243 234 INIT_LIST_HEAD(&pci->resources); ··· 251 256 if (!pci_has_flag(PCI_PROBE_ONLY)) 252 257 pci_add_flags(PCI_REASSIGN_ALL_RSRC | PCI_REASSIGN_ALL_BUS); 253 258 254 - bus = pci_scan_root_bus(dev, 0, &gen_pci_ops, pci, &pci->resources); 259 + 260 + bus = pci_scan_root_bus(dev, pci->cfg.bus_range->start, 261 + &pci->cfg.ops->ops, pci, &pci->resources); 255 262 if (!bus) { 256 263 dev_err(dev, "Scanning rootbus failed"); 257 264 return -ENODEV;
+3 -2
drivers/pci/host/pci-imx6.c
···
 
 /* PHY registers (not memory-mapped) */
 #define PCIE_PHY_RX_ASIC_OUT 0x100D
+#define PCIE_PHY_RX_ASIC_OUT_VALID	(1 << 0)
 
 #define PHY_RX_OVRD_IN_LO 0x1005
 #define PHY_RX_OVRD_IN_LO_RX_DATA_EN (1 << 5)
···
 	pcie_phy_read(pp->dbi_base, PCIE_PHY_RX_ASIC_OUT, &rx_valid);
 	debug_r0 = readl(pp->dbi_base + PCIE_PHY_DEBUG_R0);
 
-	if (rx_valid & 0x01)
+	if (rx_valid & PCIE_PHY_RX_ASIC_OUT_VALID)
 		return 0;
 
 	if ((debug_r0 & 0x3f) != 0x0d)
···
 				       IRQF_SHARED, "mx6-pcie-msi", pp);
 		if (ret) {
 			dev_err(&pdev->dev, "failed to request MSI irq\n");
-			return -ENODEV;
+			return ret;
 		}
 	}
 
+4 -4
drivers/pci/host/pci-keystone-dw.c
···
 	*bit_pos = offset >> 3;
 }
 
-u32 ks_dw_pcie_get_msi_addr(struct pcie_port *pp)
+phys_addr_t ks_dw_pcie_get_msi_addr(struct pcie_port *pp)
 {
 	struct keystone_pcie *ks_pcie = to_keystone_pcie(pp);
 
···
 void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie)
 {
 	struct pcie_port *pp = &ks_pcie->pp;
-	u32 start = pp->mem.start, end = pp->mem.end;
+	u32 start = pp->mem->start, end = pp->mem->end;
 	int i, tr_size;
 
 	/* Disable BARs for inbound access */
···
 
 	addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn);
 
-	return dw_pcie_cfg_read(addr + (where & ~0x3), where, size, val);
+	return dw_pcie_cfg_read(addr + where, size, val);
 }
 
 int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
···
 
 	addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn);
 
-	return dw_pcie_cfg_write(addr + (where & ~0x3), where, size, val);
+	return dw_pcie_cfg_write(addr + where, size, val);
 }
 
 /**
+1 -1
drivers/pci/host/pci-keystone.h
···
 
 /* Keystone DW specific MSI controller APIs/definitions */
 void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset);
-u32 ks_dw_pcie_get_msi_addr(struct pcie_port *pp);
+phys_addr_t ks_dw_pcie_get_msi_addr(struct pcie_port *pp);
 
 /* Keystone specific PCI controller APIs */
 void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie);
+152 -55
drivers/pci/host/pci-layerscape.c
···
 *
 * Copyright (C) 2014 Freescale Semiconductor.
 *
- * Author: Minghuan Lian <Minghuan.Lian@freescale.com>
+ * Author: Minghuan Lian <Minghuan.Lian@freescale.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
···
 */
 
 #include <linux/kernel.h>
-#include <linux/delay.h>
 #include <linux/interrupt.h>
 #include <linux/module.h>
 #include <linux/of_pci.h>
···
 #define LTSSM_STATE_MASK	0x3f
 #define LTSSM_PCIE_L0		0x11 /* L0 state */
 
-/* Symbol Timer Register and Filter Mask Register 1 */
-#define PCIE_STRFMR1 0x71c
+/* PEX Internal Configuration Registers */
+#define PCIE_STRFMR1		0x71c /* Symbol Timer & Filter Mask Register1 */
+#define PCIE_DBI_RO_WR_EN	0x8bc /* DBI Read-Only Write Enable Register */
+
+/* PEX LUT registers */
+#define PCIE_LUT_DBG		0x7FC /* PEX LUT Debug Register */
+
+struct ls_pcie_drvdata {
+	u32 lut_offset;
+	u32 ltssm_shift;
+	struct pcie_host_ops *ops;
+};
 
 struct ls_pcie {
-	struct list_head node;
-	struct device *dev;
-	struct pci_bus *bus;
 	void __iomem *dbi;
+	void __iomem *lut;
 	struct regmap *scfg;
 	struct pcie_port pp;
+	const struct ls_pcie_drvdata *drvdata;
 	int index;
-	int msi_irq;
 };
 
 #define to_ls_pcie(x)	container_of(x, struct ls_pcie, pp)
 
-static int ls_pcie_link_up(struct pcie_port *pp)
+static bool ls_pcie_is_bridge(struct ls_pcie *pcie)
+{
+	u32 header_type;
+
+	header_type = ioread8(pcie->dbi + PCI_HEADER_TYPE);
+	header_type &= 0x7f;
+
+	return header_type == PCI_HEADER_TYPE_BRIDGE;
+}
+
+/* Clear multi-function bit */
+static void ls_pcie_clear_multifunction(struct ls_pcie *pcie)
+{
+	iowrite8(PCI_HEADER_TYPE_BRIDGE, pcie->dbi + PCI_HEADER_TYPE);
+}
+
+/* Fix class value */
+static void ls_pcie_fix_class(struct ls_pcie *pcie)
+{
+	iowrite16(PCI_CLASS_BRIDGE_PCI, pcie->dbi + PCI_CLASS_DEVICE);
+}
+
+static int ls1021_pcie_link_up(struct pcie_port *pp)
 {
 	u32 state;
 	struct ls_pcie *pcie = to_ls_pcie(pp);
+
+	if (!pcie->scfg)
+		return 0;
 
 	regmap_read(pcie->scfg, SCFG_PEXMSCPORTSR(pcie->index), &state);
 	state = (state >> LTSSM_STATE_SHIFT) & LTSSM_STATE_MASK;
···
 	return 1;
 }
 
-static int ls_pcie_establish_link(struct pcie_port *pp)
-{
-	unsigned int retries;
-
-	for (retries = 0; retries < 200; retries++) {
-		if (dw_pcie_link_up(pp))
-			return 0;
-		usleep_range(100, 1000);
-	}
-
-	dev_err(pp->dev, "phy link never came up\n");
-	return -EINVAL;
-}
-
-static void ls_pcie_host_init(struct pcie_port *pp)
+static void ls1021_pcie_host_init(struct pcie_port *pp)
 {
 	struct ls_pcie *pcie = to_ls_pcie(pp);
-	u32 val;
+	u32 val, index[2];
+
+	pcie->scfg = syscon_regmap_lookup_by_phandle(pp->dev->of_node,
+						     "fsl,pcie-scfg");
+	if (IS_ERR(pcie->scfg)) {
+		dev_err(pp->dev, "No syscfg phandle specified\n");
+		pcie->scfg = NULL;
+		return;
+	}
+
+	if (of_property_read_u32_array(pp->dev->of_node,
+				       "fsl,pcie-scfg", index, 2)) {
+		pcie->scfg = NULL;
+		return;
+	}
+	pcie->index = index[1];
 
 	dw_pcie_setup_rc(pp);
-	ls_pcie_establish_link(pp);
 
 	/*
 	 * LS1021A Workaround for internal TKT228622
···
 	iowrite32(val, pcie->dbi + PCIE_STRFMR1);
 }
 
+static int ls_pcie_link_up(struct pcie_port *pp)
+{
+	struct ls_pcie *pcie = to_ls_pcie(pp);
+	u32 state;
+
+	state = (ioread32(pcie->lut + PCIE_LUT_DBG) >>
+		 pcie->drvdata->ltssm_shift) &
+		 LTSSM_STATE_MASK;
+
+	if (state < LTSSM_PCIE_L0)
+		return 0;
+
+	return 1;
+}
+
+static void ls_pcie_host_init(struct pcie_port *pp)
+{
+	struct ls_pcie *pcie = to_ls_pcie(pp);
+
+	iowrite32(1, pcie->dbi + PCIE_DBI_RO_WR_EN);
+	ls_pcie_fix_class(pcie);
+	ls_pcie_clear_multifunction(pcie);
+	iowrite32(0, pcie->dbi + PCIE_DBI_RO_WR_EN);
+}
+
+static int ls_pcie_msi_host_init(struct pcie_port *pp,
+				 struct msi_controller *chip)
+{
+	struct device_node *msi_node;
+	struct device_node *np = pp->dev->of_node;
+
+	/*
+	 * The MSI domain is set by the generic of_msi_configure().  This
+	 * .msi_host_init() function keeps us from doing the default MSI
+	 * domain setup in dw_pcie_host_init() and also enforces the
+	 * requirement that "msi-parent" exists.
+	 */
+	msi_node = of_parse_phandle(np, "msi-parent", 0);
+	if (!msi_node) {
+		dev_err(pp->dev, "failed to find msi-parent\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static struct pcie_host_ops ls1021_pcie_host_ops = {
+	.link_up = ls1021_pcie_link_up,
+	.host_init = ls1021_pcie_host_init,
+	.msi_host_init = ls_pcie_msi_host_init,
+};
+
 static struct pcie_host_ops ls_pcie_host_ops = {
 	.link_up = ls_pcie_link_up,
 	.host_init = ls_pcie_host_init,
+	.msi_host_init = ls_pcie_msi_host_init,
 };
 
-static int ls_add_pcie_port(struct ls_pcie *pcie)
-{
-	struct pcie_port *pp;
-	int ret;
+static struct ls_pcie_drvdata ls1021_drvdata = {
+	.ops = &ls1021_pcie_host_ops,
+};
 
-	pp = &pcie->pp;
-	pp->dev = pcie->dev;
+static struct ls_pcie_drvdata ls1043_drvdata = {
+	.lut_offset = 0x10000,
+	.ltssm_shift = 24,
+	.ops = &ls_pcie_host_ops,
+};
+
+static struct ls_pcie_drvdata ls2080_drvdata = {
+	.lut_offset = 0x80000,
+	.ltssm_shift = 0,
+	.ops = &ls_pcie_host_ops,
+};
+
+static const struct of_device_id ls_pcie_of_match[] = {
+	{ .compatible = "fsl,ls1021a-pcie", .data = &ls1021_drvdata },
+	{ .compatible = "fsl,ls1043a-pcie", .data = &ls1043_drvdata },
+	{ .compatible = "fsl,ls2080a-pcie", .data = &ls2080_drvdata },
+	{ },
+};
+MODULE_DEVICE_TABLE(of, ls_pcie_of_match);
+
+static int __init ls_add_pcie_port(struct pcie_port *pp,
+				   struct platform_device *pdev)
+{
+	int ret;
+	struct ls_pcie *pcie = to_ls_pcie(pp);
+
+	pp->dev = &pdev->dev;
 	pp->dbi_base = pcie->dbi;
-	pp->root_bus_nr = -1;
-	pp->ops = &ls_pcie_host_ops;
+	pp->ops = pcie->drvdata->ops;
 
 	ret = dw_pcie_host_init(pp);
 	if (ret) {
···
 
 static int __init ls_pcie_probe(struct platform_device *pdev)
 {
+	const struct of_device_id *match;
 	struct ls_pcie *pcie;
 	struct resource *dbi_base;
-	u32 index[2];
 	int ret;
+
+	match = of_match_device(ls_pcie_of_match, &pdev->dev);
+	if (!match)
+		return -ENODEV;
 
 	pcie = devm_kzalloc(&pdev->dev, sizeof(*pcie), GFP_KERNEL);
 	if (!pcie)
 		return -ENOMEM;
-
-	pcie->dev = &pdev->dev;
 
 	dbi_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
 	pcie->dbi = devm_ioremap_resource(&pdev->dev, dbi_base);
···
 		return PTR_ERR(pcie->dbi);
 	}
 
-	pcie->scfg = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
-						     "fsl,pcie-scfg");
-	if (IS_ERR(pcie->scfg)) {
-		dev_err(&pdev->dev, "No syscfg phandle specified\n");
-		return PTR_ERR(pcie->scfg);
-	}
+	pcie->drvdata = match->data;
+	pcie->lut = pcie->dbi + pcie->drvdata->lut_offset;
 
-	ret = of_property_read_u32_array(pdev->dev.of_node,
-					 "fsl,pcie-scfg", index, 2);
-	if (ret)
-		return ret;
-	pcie->index = index[1];
+	if (!ls_pcie_is_bridge(pcie))
+		return -ENODEV;
 
-	ret = ls_add_pcie_port(pcie);
+	ret = ls_add_pcie_port(&pcie->pp, pdev);
 	if (ret < 0)
 		return ret;
···
 
 	return 0;
 }
-
-static const struct of_device_id ls_pcie_of_match[] = {
-	{ .compatible = "fsl,ls1021a-pcie" },
-	{ },
-};
-MODULE_DEVICE_TABLE(of, ls_pcie_of_match);
 
 static struct platform_driver ls_pcie_driver = {
 	.driver = {
+343 -130
drivers/pci/host/pci-mvebu.c
···
 #define PCIE_DEV_REV_OFF	0x0008
 #define PCIE_BAR_LO_OFF(n)	(0x0010 + ((n) << 3))
 #define PCIE_BAR_HI_OFF(n)	(0x0014 + ((n) << 3))
+#define PCIE_CAP_PCIEXP		0x0060
 #define PCIE_HEADER_LOG_4_OFF	0x0128
 #define PCIE_BAR_CTRL_OFF(n)	(0x1804 + (((n) - 1) * 4))
 #define PCIE_WIN04_CTRL_OFF(n)	(0x1820 + ((n) << 4))
···
 #define PCIE_STAT_BUS		0xff00
 #define PCIE_STAT_DEV		0x1f0000
 #define PCIE_STAT_LINK_DOWN	BIT(0)
+#define PCIE_RC_RTSTA		0x1a14
 #define PCIE_DEBUG_CTRL		0x1a60
 #define PCIE_DEBUG_SOFT_RESET	BIT(20)
+
+enum {
+	PCISWCAP = PCI_BRIDGE_CONTROL + 2,
+	PCISWCAP_EXP_LIST_ID	= PCISWCAP + PCI_CAP_LIST_ID,
+	PCISWCAP_EXP_DEVCAP	= PCISWCAP + PCI_EXP_DEVCAP,
+	PCISWCAP_EXP_DEVCTL	= PCISWCAP + PCI_EXP_DEVCTL,
+	PCISWCAP_EXP_LNKCAP	= PCISWCAP + PCI_EXP_LNKCAP,
+	PCISWCAP_EXP_LNKCTL	= PCISWCAP + PCI_EXP_LNKCTL,
+	PCISWCAP_EXP_SLTCAP	= PCISWCAP + PCI_EXP_SLTCAP,
+	PCISWCAP_EXP_SLTCTL	= PCISWCAP + PCI_EXP_SLTCTL,
+	PCISWCAP_EXP_RTCTL	= PCISWCAP + PCI_EXP_RTCTL,
+	PCISWCAP_EXP_RTSTA	= PCISWCAP + PCI_EXP_RTSTA,
+	PCISWCAP_EXP_DEVCAP2	= PCISWCAP + PCI_EXP_DEVCAP2,
+	PCISWCAP_EXP_DEVCTL2	= PCISWCAP + PCI_EXP_DEVCTL2,
+	PCISWCAP_EXP_LNKCAP2	= PCISWCAP + PCI_EXP_LNKCAP2,
+	PCISWCAP_EXP_LNKCTL2	= PCISWCAP + PCI_EXP_LNKCTL2,
+	PCISWCAP_EXP_SLTCAP2	= PCISWCAP + PCI_EXP_SLTCAP2,
+	PCISWCAP_EXP_SLTCTL2	= PCISWCAP + PCI_EXP_SLTCTL2,
+};
 
 /* PCI configuration space of a PCI-to-PCI bridge */
 struct mvebu_sw_pci_bridge {
 	u16 vendor;
 	u16 device;
 	u16 command;
+	u16 status;
 	u16 class;
 	u8 interface;
 	u8 revision;
···
 	u16 memlimit;
 	u16 iobaseupper;
 	u16 iolimitupper;
-	u8 cappointer;
-	u8 reserved1;
-	u16 reserved2;
 	u32 romaddr;
 	u8 intline;
 	u8 intpin;
 	u16 bridgectrl;
+
+	/* PCI express capability */
+	u32 pcie_sltcap;
+	u16 pcie_devctl;
+	u16 pcie_rtctl;
 };
 
 struct mvebu_pcie_port;
···
 	unsigned int io_target;
 	unsigned int io_attr;
 	struct clk *clk;
-	int reset_gpio;
-	int reset_active_low;
+	struct gpio_desc *reset_gpio;
 	char *reset_name;
 	struct mvebu_sw_pci_bridge bridge;
 	struct device_node *dn;
···
 				 struct pci_bus *bus,
 				 u32 devfn, int where, int size, u32 *val)
 {
+	void __iomem *conf_data = port->base + PCIE_CONF_DATA_OFF;
+
 	mvebu_writel(port, PCIE_CONF_ADDR(bus->number, devfn, where),
 		     PCIE_CONF_ADDR_OFF);
 
-	*val = mvebu_readl(port, PCIE_CONF_DATA_OFF);
-
-	if (size == 1)
-		*val = (*val >> (8 * (where & 3))) & 0xff;
-	else if (size == 2)
-		*val = (*val >> (8 * (where & 3))) & 0xffff;
+	switch (size) {
+	case 1:
+		*val = readb_relaxed(conf_data + (where & 3));
+		break;
+	case 2:
+		*val = readw_relaxed(conf_data + (where & 2));
+		break;
+	case 4:
+		*val = readl_relaxed(conf_data);
+		break;
+	}
 
 	return PCIBIOS_SUCCESSFUL;
 }
···
 				 struct pci_bus *bus,
 				 u32 devfn, int where, int size, u32 val)
 {
-	u32 _val, shift = 8 * (where & 3);
+	void __iomem *conf_data = port->base + PCIE_CONF_DATA_OFF;
 
 	mvebu_writel(port, PCIE_CONF_ADDR(bus->number, devfn, where),
 		     PCIE_CONF_ADDR_OFF);
-	_val = mvebu_readl(port, PCIE_CONF_DATA_OFF);
 
-	if (size == 4)
-		_val = val;
-	else if (size == 2)
-		_val = (_val & ~(0xffff << shift)) | ((val & 0xffff) << shift);
-	else if (size == 1)
-		_val = (_val & ~(0xff << shift)) | ((val & 0xff) << shift);
-	else
+	switch (size) {
+	case 1:
+		writeb(val, conf_data + (where & 3));
+		break;
+	case 2:
+		writew(val, conf_data + (where & 2));
+		break;
+	case 4:
+		writel(val, conf_data);
+		break;
+	default:
 		return PCIBIOS_BAD_REGISTER_NUMBER;
-
-	mvebu_writel(port, _val, PCIE_CONF_DATA_OFF);
+	}
 
 	return PCIBIOS_SUCCESSFUL;
 }
···
 	/* We support 32 bits I/O addressing */
 	bridge->iobase = PCI_IO_RANGE_TYPE_32;
 	bridge->iolimit = PCI_IO_RANGE_TYPE_32;
+
+	/* Add capabilities */
+	bridge->status = PCI_STATUS_CAP_LIST;
 }
 
 /*
···
 		break;
 
 	case PCI_COMMAND:
-		*value = bridge->command;
+		*value = bridge->command | bridge->status << 16;
 		break;
 
 	case PCI_CLASS_REVISION:
···
 		*value = (bridge->iolimitupper << 16 | bridge->iobaseupper);
 		break;
 
+	case PCI_CAPABILITY_LIST:
+		*value = PCISWCAP;
+		break;
+
 	case PCI_ROM_ADDRESS1:
 		*value = 0;
 		break;
···
 		*value = 0;
 		break;
 
+	case PCISWCAP_EXP_LIST_ID:
+		/* Set PCIe v2, root port, slot support */
+		*value = (PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
+			  PCI_EXP_FLAGS_SLOT) << 16 | PCI_CAP_ID_EXP;
+		break;
+
+	case PCISWCAP_EXP_DEVCAP:
+		*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCAP);
+		break;
+
+	case PCISWCAP_EXP_DEVCTL:
+		*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL) &
+				 ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE |
+				   PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE);
+		*value |= bridge->pcie_devctl;
+		break;
+
+	case PCISWCAP_EXP_LNKCAP:
+		/*
+		 * PCIe requires the clock power management capability to be
+		 * hard-wired to zero for downstream ports
+		 */
+		*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCAP) &
+			 ~PCI_EXP_LNKCAP_CLKPM;
+		break;
+
+	case PCISWCAP_EXP_LNKCTL:
+		*value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL);
+		break;
+
+	case PCISWCAP_EXP_SLTCAP:
+		*value = bridge->pcie_sltcap;
+		break;
+
+	case PCISWCAP_EXP_SLTCTL:
+		*value = PCI_EXP_SLTSTA_PDS << 16;
+		break;
+
+	case PCISWCAP_EXP_RTCTL:
+		*value = bridge->pcie_rtctl;
+		break;
+
+	case PCISWCAP_EXP_RTSTA:
+		*value = mvebu_readl(port, PCIE_RC_RTSTA);
+		break;
+
+	/* PCIe requires the v2 fields to be hard-wired to zero */
+	case PCISWCAP_EXP_DEVCAP2:
+	case PCISWCAP_EXP_DEVCTL2:
+	case PCISWCAP_EXP_LNKCAP2:
+	case PCISWCAP_EXP_LNKCTL2:
+	case PCISWCAP_EXP_SLTCAP2:
+	case PCISWCAP_EXP_SLTCTL2:
 	default:
-		*value = 0xffffffff;
-		return PCIBIOS_BAD_REGISTER_NUMBER;
+		/*
+		 * PCI defines configuration read accesses to reserved or
+		 * unimplemented registers to read as zero and complete
+		 * normally.
+		 */
+		*value = 0;
+		return PCIBIOS_SUCCESSFUL;
 	}
 
 	if (size == 2)
···
 		mvebu_pcie_set_local_bus_nr(port, bridge->secondary_bus);
 		break;
 
+	case PCISWCAP_EXP_DEVCTL:
+		/*
+		 * Armada370 data says these bits must always
+		 * be zero when in root complex mode.
+		 */
+		value &= ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE |
+			   PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE);
+
+		/*
+		 * If the mask is 0xffff0000, then we only want to write
+		 * the device control register, rather than clearing the
+		 * RW1C bits in the device status register.  Mask out the
+		 * status register bits.
+		 */
+		if (mask == 0xffff0000)
+			value &= 0xffff;
+
+		mvebu_writel(port, value, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL);
+		break;
+
+	case PCISWCAP_EXP_LNKCTL:
+		/*
+		 * If we don't support CLKREQ, we must ensure that the
+		 * CLKREQ enable bit always reads zero.  Since we haven't
+		 * had this capability, and it's dependent on board wiring,
+		 * disable it for the time being.
+		 */
+		value &= ~PCI_EXP_LNKCTL_CLKREQ_EN;
+
+		/*
+		 * If the mask is 0xffff0000, then we only want to write
+		 * the link control register, rather than clearing the
+		 * RW1C bits in the link status register.  Mask out the
+		 * status register bits.
+		 */
+		if (mask == 0xffff0000)
+			value &= 0xffff;
+
+		mvebu_writel(port, value, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL);
+		break;
+
+	case PCISWCAP_EXP_RTSTA:
+		mvebu_writel(port, value, PCIE_RC_RTSTA);
+		break;
+
 	default:
 		break;
 	}
···
 	if (!mvebu_pcie_link_up(port))
 		return PCIBIOS_DEVICE_NOT_FOUND;
 
-	/*
-	 * On the secondary bus, we don't want to expose any other
-	 * device than the device physically connected in the PCIe
-	 * slot, visible in slot 0. In slot 1, there's a special
-	 * Marvell device that only makes sense when the Armada is
-	 * used as a PCIe endpoint.
-	 */
-	if (bus->number == port->bridge.secondary_bus &&
-	    PCI_SLOT(devfn) != 0)
-		return PCIBIOS_DEVICE_NOT_FOUND;
-
 	/* Access the real PCIe interface */
 	ret = mvebu_pcie_hw_wr_conf(port, bus, devfn,
 				    where, size, val);
···
 		return mvebu_sw_pci_bridge_read(port, where, size, val);
 
 	if (!mvebu_pcie_link_up(port)) {
-		*val = 0xffffffff;
-		return PCIBIOS_DEVICE_NOT_FOUND;
-	}
-
-	/*
-	 * On the secondary bus, we don't want to expose any other
-	 * device than the device physically connected in the PCIe
-	 * slot, visible in slot 0. In slot 1, there's a special
-	 * Marvell device that only makes sense when the Armada is
-	 * used as a PCIe endpoint.
-	 */
-	if (bus->number == port->bridge.secondary_bus &&
-	    PCI_SLOT(devfn) != 0) {
 		*val = 0xffffffff;
 		return PCIBIOS_DEVICE_NOT_FOUND;
 	}
···
 	return 0;
 }
 
+static void mvebu_pcie_port_clk_put(void *data)
+{
+	struct mvebu_pcie_port *port = data;
+
+	clk_put(port->clk);
+}
+
+static int mvebu_pcie_parse_port(struct mvebu_pcie *pcie,
+	struct mvebu_pcie_port *port, struct device_node *child)
+{
+	struct device *dev = &pcie->pdev->dev;
+	enum of_gpio_flags flags;
+	int reset_gpio, ret;
+
+	port->pcie = pcie;
+
+	if (of_property_read_u32(child, "marvell,pcie-port", &port->port)) {
+		dev_warn(dev, "ignoring %s, missing pcie-port property\n",
+			 of_node_full_name(child));
+		goto skip;
+	}
+
+	if (of_property_read_u32(child, "marvell,pcie-lane", &port->lane))
+		port->lane = 0;
+
+	port->name = devm_kasprintf(dev, GFP_KERNEL, "pcie%d.%d", port->port,
+				    port->lane);
+	if (!port->name) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	port->devfn = of_pci_get_devfn(child);
+	if (port->devfn < 0)
+		goto skip;
+
+	ret = mvebu_get_tgt_attr(dev->of_node, port->devfn, IORESOURCE_MEM,
+				 &port->mem_target, &port->mem_attr);
+	if (ret < 0) {
+		dev_err(dev, "%s: cannot get tgt/attr for mem window\n",
+			port->name);
+		goto skip;
+	}
+
+	if (resource_size(&pcie->io) != 0) {
+		mvebu_get_tgt_attr(dev->of_node, port->devfn, IORESOURCE_IO,
+				   &port->io_target, &port->io_attr);
+	} else {
+		port->io_target = -1;
+		port->io_attr = -1;
+	}
+
+	reset_gpio = of_get_named_gpio_flags(child, "reset-gpios", 0, &flags);
+	if (reset_gpio == -EPROBE_DEFER) {
+		ret = reset_gpio;
+		goto err;
+	}
+
+	if (gpio_is_valid(reset_gpio)) {
+		unsigned long gpio_flags;
+
+		port->reset_name = devm_kasprintf(dev, GFP_KERNEL, "%s-reset",
+						  port->name);
+		if (!port->reset_name) {
+			ret = -ENOMEM;
+			goto err;
+		}
+
+		if (flags & OF_GPIO_ACTIVE_LOW) {
+			dev_info(dev, "%s: reset gpio is active low\n",
+				 of_node_full_name(child));
+			gpio_flags = GPIOF_ACTIVE_LOW |
+				     GPIOF_OUT_INIT_LOW;
+		} else {
+			gpio_flags = GPIOF_OUT_INIT_HIGH;
+		}
+
+		ret = devm_gpio_request_one(dev, reset_gpio, gpio_flags,
+					    port->reset_name);
+		if (ret) {
+			if (ret == -EPROBE_DEFER)
+				goto err;
+			goto skip;
+		}
+
+		port->reset_gpio = gpio_to_desc(reset_gpio);
+	}
+
+	port->clk = of_clk_get_by_name(child, NULL);
+	if (IS_ERR(port->clk)) {
+		dev_err(dev, "%s: cannot get clock\n", port->name);
+		goto skip;
+	}
+
+	ret = devm_add_action(dev, mvebu_pcie_port_clk_put, port);
+	if (ret < 0) {
+		clk_put(port->clk);
+		goto err;
+	}
+
+	return 1;
+
+skip:
+	ret = 0;
+
+	/* In the case of skipping, we need to free these */
+	devm_kfree(dev, port->reset_name);
+	port->reset_name = NULL;
+	devm_kfree(dev, port->name);
+	port->name = NULL;
+
+err:
+	return ret;
+}
+
+/*
+ * Power up a PCIe port.  PCIe requires the refclk to be stable for 100µs
+ * prior to releasing PERST.  See table 2-4 in section 2.6.2 AC Specifications
+ * of the PCI Express Card Electromechanical Specification, 1.1.
+ */
+static int mvebu_pcie_powerup(struct mvebu_pcie_port *port)
+{
+	int ret;
+
+	ret = clk_prepare_enable(port->clk);
+	if (ret < 0)
+		return ret;
+
+	if (port->reset_gpio) {
+		u32 reset_udelay = 20000;
+
+		of_property_read_u32(port->dn, "reset-delay-us",
+				     &reset_udelay);
+
+		udelay(100);
+
+		gpiod_set_value_cansleep(port->reset_gpio, 0);
+		msleep(reset_udelay / 1000);
+	}
+
+	return 0;
+}
+
+/*
+ * Power down a PCIe port.  Strictly, PCIe requires us to place the card
+ * in D3hot state before asserting PERST#.
+ */
+static void mvebu_pcie_powerdown(struct mvebu_pcie_port *port)
+{
+	if (port->reset_gpio)
+		gpiod_set_value_cansleep(port->reset_gpio, 1);
+
+	clk_disable_unprepare(port->clk);
+}
+
 static int mvebu_pcie_probe(struct platform_device *pdev)
 {
 	struct mvebu_pcie *pcie;
 	struct device_node *np = pdev->dev.of_node;
 	struct device_node *child;
-	int i, ret;
+	int num, i, ret;
 
 	pcie = devm_kzalloc(&pdev->dev, sizeof(struct mvebu_pcie),
 			    GFP_KERNEL);
···
 		return ret;
 	}
 
-	i = 0;
-	for_each_child_of_node(pdev->dev.of_node, child) {
-		if (!of_device_is_available(child))
-			continue;
-		i++;
-	}
+	num = of_get_available_child_count(pdev->dev.of_node);
 
-	pcie->ports = devm_kzalloc(&pdev->dev, i *
-				   sizeof(struct mvebu_pcie_port),
+	pcie->ports = devm_kcalloc(&pdev->dev, num, sizeof(*pcie->ports),
 				   GFP_KERNEL);
 	if (!pcie->ports)
 		return -ENOMEM;
 
 	i = 0;
-	for_each_child_of_node(pdev->dev.of_node, child) {
+	for_each_available_child_of_node(pdev->dev.of_node, child) {
 		struct mvebu_pcie_port *port = &pcie->ports[i];
-		enum of_gpio_flags flags;
 
-		if (!of_device_is_available(child))
-			continue;
-
-		port->pcie = pcie;
-
-		if (of_property_read_u32(child, "marvell,pcie-port",
-					 &port->port)) {
-			dev_warn(&pdev->dev,
-				 "ignoring PCIe DT node, missing pcie-port property\n");
-			continue;
-		}
-
-		if (of_property_read_u32(child, "marvell,pcie-lane",
-					 &port->lane))
-			port->lane = 0;
-
-		port->name = kasprintf(GFP_KERNEL, "pcie%d.%d",
-				       port->port, port->lane);
-
-		port->devfn = of_pci_get_devfn(child);
-		if (port->devfn < 0)
-			continue;
-
-		ret = mvebu_get_tgt_attr(np, port->devfn, IORESOURCE_MEM,
-					 &port->mem_target, &port->mem_attr);
+		ret = mvebu_pcie_parse_port(pcie, port, child);
 		if (ret < 0) {
-			dev_err(&pdev->dev, "PCIe%d.%d: cannot get tgt/attr for mem window\n",
-				port->port, port->lane);
+			of_node_put(child);
+			return ret;
+		} else if (ret == 0) {
 			continue;
 		}
 
-		if (resource_size(&pcie->io) != 0)
-			mvebu_get_tgt_attr(np, port->devfn, IORESOURCE_IO,
-					   &port->io_target, &port->io_attr);
-		else {
-			port->io_target = -1;
-			port->io_attr = -1;
-		}
+		port->dn = child;
+		i++;
+	}
+	pcie->nports = i;
 
-		port->reset_gpio = of_get_named_gpio_flags(child,
-						   "reset-gpios", 0, &flags);
-		if (gpio_is_valid(port->reset_gpio)) {
-			u32 reset_udelay = 20000;
+	for (i = 0; i < pcie->nports; i++) {
+		struct mvebu_pcie_port *port = &pcie->ports[i];
 
-			port->reset_active_low = flags & OF_GPIO_ACTIVE_LOW;
-			port->reset_name = kasprintf(GFP_KERNEL,
-				     "pcie%d.%d-reset", port->port, port->lane);
-			of_property_read_u32(child, "reset-delay-us",
-					     &reset_udelay);
-
-			ret = devm_gpio_request_one(&pdev->dev,
-				    port->reset_gpio, GPIOF_DIR_OUT, port->reset_name);
-			if (ret) {
-				if (ret == -EPROBE_DEFER)
-					return ret;
-				continue;
-			}
-
-			gpio_set_value(port->reset_gpio,
-				       (port->reset_active_low) ? 1 : 0);
-			msleep(reset_udelay/1000);
-		}
-
-		port->clk = of_clk_get_by_name(child, NULL);
-		if (IS_ERR(port->clk)) {
-			dev_err(&pdev->dev, "PCIe%d.%d: cannot get clock\n",
-				port->port, port->lane);
+		child = port->dn;
+		if (!child)
 			continue;
-		}
 
-		ret = clk_prepare_enable(port->clk);
-		if (ret)
+		ret = mvebu_pcie_powerup(port);
+		if (ret < 0)
 			continue;
 
 		port->base = mvebu_pcie_map_registers(pdev, child, port);
 		if (IS_ERR(port->base)) {
-			dev_err(&pdev->dev, "PCIe%d.%d: cannot map registers\n",
-				port->port, port->lane);
+			dev_err(&pdev->dev, "%s: cannot map registers\n",
+				port->name);
 			port->base = NULL;
-			clk_disable_unprepare(port->clk);
+			mvebu_pcie_powerdown(port);
 			continue;
 		}
 
 		mvebu_pcie_set_local_dev_nr(port, 1);
-
-		port->dn = child;
 		mvebu_sw_pci_bridge_init(port);
-		i++;
 	}
 
 	pcie->nports = i;
+2 -2
drivers/pci/host/pci-tegra.c
···
 static struct tegra_pcie_bus *tegra_pcie_bus_alloc(struct tegra_pcie *pcie,
 						   unsigned int busnr)
 {
-	pgprot_t prot = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY | L_PTE_XN |
-			L_PTE_MT_DEV_SHARED | L_PTE_SHARED;
+	pgprot_t prot = __pgprot(L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
+				 L_PTE_XN | L_PTE_MT_DEV_SHARED | L_PTE_SHARED);
 	phys_addr_t cs = pcie->cs->start;
 	struct tegra_pcie_bus *bus;
 	unsigned int i;
-22
drivers/pci/host/pci-xgene.c
···
 	return 0;
 }
 
-static int xgene_pcie_msi_enable(struct pci_bus *bus)
-{
-	struct device_node *msi_node;
-
-	msi_node = of_parse_phandle(bus->dev.of_node,
-					"msi-parent", 0);
-	if (!msi_node)
-		return -ENODEV;
-
-	bus->msi = of_pci_find_msi_chip_by_node(msi_node);
-	if (!bus->msi)
-		return -ENODEV;
-
-	of_node_put(msi_node);
-	bus->msi->dev = &bus->dev;
-	return 0;
-}
-
 static int xgene_pcie_probe_bridge(struct platform_device *pdev)
 {
 	struct device_node *dn = pdev->dev.of_node;
···
 			      &xgene_pcie_ops, port, &res);
 	if (!bus)
 		return -ENOMEM;
-
-	if (IS_ENABLED(CONFIG_PCI_MSI))
-		if (xgene_pcie_msi_enable(bus))
-			dev_info(port->dev, "failed to enable MSI\n");
 
 	pci_scan_child_bus(bus);
 	pci_assign_unassigned_bus_resources(bus);
+314
drivers/pci/host/pcie-altera-msi.c
/*
 * Copyright Altera Corporation (C) 2013-2015. All rights reserved
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms and conditions of the GNU General Public License,
 * version 2, as published by the Free Software Foundation.
 *
 * This program is distributed in the hope it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#include <linux/interrupt.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/module.h>
#include <linux/msi.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

#define MSI_STATUS		0x0
#define MSI_ERROR		0x4
#define MSI_INTMASK		0x8

#define MAX_MSI_VECTORS		32

struct altera_msi {
	DECLARE_BITMAP(used, MAX_MSI_VECTORS);
	struct mutex		lock;	/* protect "used" bitmap */
	struct platform_device	*pdev;
	struct irq_domain	*msi_domain;
	struct irq_domain	*inner_domain;
	void __iomem		*csr_base;
	void __iomem		*vector_base;
	phys_addr_t		vector_phy;
	u32			num_of_vectors;
	int			irq;
};

static inline void msi_writel(struct altera_msi *msi, const u32 value,
			      const u32 reg)
{
	writel_relaxed(value, msi->csr_base + reg);
}

static inline u32 msi_readl(struct altera_msi *msi, const u32 reg)
{
	return readl_relaxed(msi->csr_base + reg);
}

static void altera_msi_isr(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct altera_msi *msi;
	unsigned long status;
	u32 num_of_vectors;
	u32 bit;
	u32 virq;

	chained_irq_enter(chip, desc);
	msi = irq_desc_get_handler_data(desc);
	num_of_vectors = msi->num_of_vectors;

	while ((status = msi_readl(msi, MSI_STATUS)) != 0) {
		for_each_set_bit(bit, &status, msi->num_of_vectors) {
			/* Dummy read from vector to clear the interrupt */
			readl_relaxed(msi->vector_base + (bit * sizeof(u32)));

			virq = irq_find_mapping(msi->inner_domain, bit);
			if (virq)
				generic_handle_irq(virq);
			else
				dev_err(&msi->pdev->dev, "unexpected MSI\n");
		}
	}

	chained_irq_exit(chip, desc);
}

static struct irq_chip altera_msi_irq_chip = {
	.name = "Altera PCIe MSI",
	.irq_mask = pci_msi_mask_irq,
	.irq_unmask = pci_msi_unmask_irq,
};

static struct msi_domain_info altera_msi_domain_info = {
	.flags	= (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
		   MSI_FLAG_PCI_MSIX),
	.chip	= &altera_msi_irq_chip,
};

static void altera_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
	struct altera_msi *msi = irq_data_get_irq_chip_data(data);
	phys_addr_t addr = msi->vector_phy + (data->hwirq * sizeof(u32));

	msg->address_lo = lower_32_bits(addr);
	msg->address_hi = upper_32_bits(addr);
	msg->data = data->hwirq;

	dev_dbg(&msi->pdev->dev, "msi#%d address_hi %#x address_lo %#x\n",
		(int)data->hwirq, msg->address_hi, msg->address_lo);
}

static int altera_msi_set_affinity(struct irq_data *irq_data,
				   const struct cpumask *mask, bool force)
{
	return -EINVAL;
}

static struct irq_chip altera_msi_bottom_irq_chip = {
	.name			= "Altera MSI",
	.irq_compose_msi_msg	= altera_compose_msi_msg,
	.irq_set_affinity	= altera_msi_set_affinity,
};

static int altera_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
				   unsigned int nr_irqs, void *args)
{
	struct altera_msi *msi = domain->host_data;
	unsigned long bit;
	u32 mask;

	WARN_ON(nr_irqs != 1);
	mutex_lock(&msi->lock);

	bit = find_first_zero_bit(msi->used, msi->num_of_vectors);
	if (bit >= msi->num_of_vectors) {
		mutex_unlock(&msi->lock);
		return -ENOSPC;
	}

	set_bit(bit, msi->used);

	mutex_unlock(&msi->lock);

	irq_domain_set_info(domain, virq, bit, &altera_msi_bottom_irq_chip,
			    domain->host_data, handle_simple_irq,
			    NULL, NULL);

	mask = msi_readl(msi, MSI_INTMASK);
	mask |= 1 << bit;
	msi_writel(msi, mask, MSI_INTMASK);

	return 0;
}

static void altera_irq_domain_free(struct irq_domain *domain,
				   unsigned int virq, unsigned int nr_irqs)
{
	struct irq_data *d = irq_domain_get_irq_data(domain, virq);
	struct altera_msi *msi = irq_data_get_irq_chip_data(d);
	u32 mask;

	mutex_lock(&msi->lock);

	if (!test_bit(d->hwirq, msi->used)) {
		dev_err(&msi->pdev->dev, "trying to free unused MSI#%lu\n",
			d->hwirq);
	} else {
		__clear_bit(d->hwirq, msi->used);
		mask = msi_readl(msi, MSI_INTMASK);
		mask &= ~(1 << d->hwirq);
		msi_writel(msi, mask, MSI_INTMASK);
	}

	mutex_unlock(&msi->lock);
}

static const struct irq_domain_ops msi_domain_ops = {
	.alloc	= altera_irq_domain_alloc,
	.free	= altera_irq_domain_free,
};

static int altera_allocate_domains(struct altera_msi *msi)
{
	struct fwnode_handle *fwnode = of_node_to_fwnode(msi->pdev->dev.of_node);

	msi->inner_domain = irq_domain_add_linear(NULL, msi->num_of_vectors,
						  &msi_domain_ops, msi);
	if (!msi->inner_domain) {
		dev_err(&msi->pdev->dev, "failed to create IRQ domain\n");
		return -ENOMEM;
	}

	msi->msi_domain = pci_msi_create_irq_domain(fwnode,
				&altera_msi_domain_info, msi->inner_domain);
	if (!msi->msi_domain) {
		dev_err(&msi->pdev->dev, "failed to create MSI domain\n");
		irq_domain_remove(msi->inner_domain);
		return -ENOMEM;
	}

	return 0;
}

static void altera_free_domains(struct altera_msi *msi)
{
	irq_domain_remove(msi->msi_domain);
	irq_domain_remove(msi->inner_domain);
}

static int altera_msi_remove(struct platform_device *pdev)
{
	struct altera_msi *msi = platform_get_drvdata(pdev);

	msi_writel(msi, 0, MSI_INTMASK);
	irq_set_chained_handler(msi->irq, NULL);
	irq_set_handler_data(msi->irq, NULL);

	altera_free_domains(msi);

	platform_set_drvdata(pdev, NULL);
	return 0;
}

static int altera_msi_probe(struct platform_device *pdev)
{
	struct altera_msi *msi;
	struct device_node *np = pdev->dev.of_node;
	struct resource *res;
	int ret;

	msi = devm_kzalloc(&pdev->dev, sizeof(struct altera_msi),
			   GFP_KERNEL);
	if (!msi)
		return -ENOMEM;

	mutex_init(&msi->lock);
	msi->pdev = pdev;

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "csr");
	if (!res) {
		dev_err(&pdev->dev, "no csr memory resource defined\n");
		return -ENODEV;
	}

	msi->csr_base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(msi->csr_base)) {
		dev_err(&pdev->dev, "failed to map csr memory\n");
		return PTR_ERR(msi->csr_base);
	}

	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
					   "vector_slave");
	if (!res) {
		dev_err(&pdev->dev, "no vector_slave memory resource defined\n");
		return -ENODEV;
	}

	msi->vector_base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(msi->vector_base)) {
		dev_err(&pdev->dev, "failed to map vector_slave memory\n");
		return PTR_ERR(msi->vector_base);
	}

	msi->vector_phy = res->start;

	if (of_property_read_u32(np, "num-vectors", &msi->num_of_vectors)) {
		dev_err(&pdev->dev, "failed to parse the number of vectors\n");
		return -EINVAL;
	}

	ret = altera_allocate_domains(msi);
	if (ret)
		return ret;

	msi->irq = platform_get_irq(pdev, 0);
	if (msi->irq <= 0) {
		dev_err(&pdev->dev, "failed to map IRQ: %d\n", msi->irq);
		ret = -ENODEV;
		goto err;
	}

	irq_set_chained_handler_and_data(msi->irq, altera_msi_isr, msi);
	platform_set_drvdata(pdev, msi);

	return 0;

err:
	altera_msi_remove(pdev);
	return ret;
}

static const struct of_device_id altera_msi_of_match[] = {
	{ .compatible = "altr,msi-1.0", NULL },
	{ },
};

static struct platform_driver altera_msi_driver = {
	.driver = {
		.name = "altera-msi",
		.of_match_table = altera_msi_of_match,
	},
	.probe = altera_msi_probe,
	.remove = altera_msi_remove,
};

static int __init altera_msi_init(void)
{
	return platform_driver_register(&altera_msi_driver);
}
subsys_initcall(altera_msi_init);

MODULE_AUTHOR("Ley Foon Tan <lftan@altera.com>");
MODULE_DESCRIPTION("Altera PCIe MSI support");
MODULE_LICENSE("GPL v2");
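The driver above hands out MSI vectors from a 32-entry bitmap guarded by a mutex: altera_irq_domain_alloc() finds the first clear bit, sets it, and unmasks that vector in MSI_INTMASK; altera_irq_domain_free() reverses both steps. A minimal userspace sketch of the same allocate/free discipline (plain C; `used`, `msi_alloc`, and `msi_free` are hypothetical stand-ins for the kernel's DECLARE_BITMAP/find_first_zero_bit helpers, and locking and register writes are omitted):

```c
#include <stdint.h>

#define MAX_MSI_VECTORS 32

/* Hypothetical stand-in for the driver's "used" bitmap. */
static uint32_t used;

/* Return the first free vector and mark it used, or -1 if exhausted
 * (the kernel code returns -ENOSPC in that case). */
static int msi_alloc(void)
{
	int bit;

	for (bit = 0; bit < MAX_MSI_VECTORS; bit++) {
		if (!(used & (1u << bit))) {
			used |= 1u << bit;
			return bit;
		}
	}
	return -1;
}

/* Release a previously allocated vector. */
static void msi_free(int bit)
{
	used &= ~(1u << bit);
}
```

In the real driver the same bit index doubles as the hardware vector number: it selects the word within the `vector_slave` window that the endpoint writes to, and the matching bit in MSI_INTMASK.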
+579
drivers/pci/host/pcie-altera.c
/*
 * Copyright Altera Corporation (C) 2013-2015. All rights reserved
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms and conditions of the GNU General Public License,
 * version 2, as published by the Free Software Foundation.
 *
 * This program is distributed in the hope it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program.  If not, see <http://www.gnu.org/licenses/>.
 */

#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

#define RP_TX_REG0			0x2000
#define RP_TX_REG1			0x2004
#define RP_TX_CNTRL			0x2008
#define RP_TX_EOP			0x2
#define RP_TX_SOP			0x1
#define RP_RXCPL_STATUS			0x2010
#define RP_RXCPL_EOP			0x2
#define RP_RXCPL_SOP			0x1
#define RP_RXCPL_REG0			0x2014
#define RP_RXCPL_REG1			0x2018
#define P2A_INT_STATUS			0x3060
#define P2A_INT_STS_ALL			0xf
#define P2A_INT_ENABLE			0x3070
#define P2A_INT_ENA_ALL			0xf
#define RP_LTSSM			0x3c64
#define LTSSM_L0			0xf

/* TLP configuration type 0 and 1 */
#define TLP_FMTTYPE_CFGRD0		0x04	/* Configuration Read Type 0 */
#define TLP_FMTTYPE_CFGWR0		0x44	/* Configuration Write Type 0 */
#define TLP_FMTTYPE_CFGRD1		0x05	/* Configuration Read Type 1 */
#define TLP_FMTTYPE_CFGWR1		0x45	/* Configuration Write Type 1 */
#define TLP_PAYLOAD_SIZE		0x01
#define TLP_READ_TAG			0x1d
#define TLP_WRITE_TAG			0x10
#define TLP_CFG_DW0(fmttype)		(((fmttype) << 24) | TLP_PAYLOAD_SIZE)
#define TLP_CFG_DW1(reqid, tag, be)	(((reqid) << 16) | (tag << 8) | (be))
#define TLP_CFG_DW2(bus, devfn, offset)	\
				(((bus) << 24) | ((devfn) << 16) | (offset))
#define TLP_REQ_ID(bus, devfn)		(((bus) << 8) | (devfn))
#define TLP_HDR_SIZE			3
#define TLP_LOOP			500

#define INTX_NUM			4

#define DWORD_MASK			3

struct altera_pcie {
	struct platform_device	*pdev;
	void __iomem		*cra_base;
	int			irq;
	u8			root_bus_nr;
	struct irq_domain	*irq_domain;
	struct resource		bus_range;
	struct list_head	resources;
};

struct tlp_rp_regpair_t {
	u32 ctrl;
	u32 reg0;
	u32 reg1;
};

static void altera_pcie_retrain(struct pci_dev *dev)
{
	u16 linkcap, linkstat;

	/*
	 * Set the retrain bit if the PCIe root port supports > 2.5 GB/s,
	 * but the current link speed is 2.5 GB/s.
	 */
	pcie_capability_read_word(dev, PCI_EXP_LNKCAP, &linkcap);

	if ((linkcap & PCI_EXP_LNKCAP_SLS) <= PCI_EXP_LNKCAP_SLS_2_5GB)
		return;

	pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &linkstat);
	if ((linkstat & PCI_EXP_LNKSTA_CLS) == PCI_EXP_LNKSTA_CLS_2_5GB)
		pcie_capability_set_word(dev, PCI_EXP_LNKCTL,
					 PCI_EXP_LNKCTL_RL);
}
DECLARE_PCI_FIXUP_EARLY(0x1172, PCI_ANY_ID, altera_pcie_retrain);

/*
 * The Altera PCIe port uses BAR0 of the RC's configuration space as the
 * translation from the PCI bus to the native bus. The entire DDR region is
 * mapped into PCIe space using these registers, so it can be reached by DMA
 * from EP devices. BAR0 is also used to access the MSI vectors when an
 * MSI/MSI-X interrupt is received from an EP device, eventually triggering
 * an interrupt to the GIC. BAR0 of the bridge should therefore be hidden
 * during enumeration to avoid sizing and resource allocation by the PCI core.
 */
static bool altera_pcie_hide_rc_bar(struct pci_bus *bus, unsigned int devfn,
				    int offset)
{
	if (pci_is_root_bus(bus) && (devfn == 0) &&
	    (offset == PCI_BASE_ADDRESS_0))
		return true;

	return false;
}

static inline void cra_writel(struct altera_pcie *pcie, const u32 value,
			      const u32 reg)
{
	writel_relaxed(value, pcie->cra_base + reg);
}

static inline u32 cra_readl(struct altera_pcie *pcie, const u32 reg)
{
	return readl_relaxed(pcie->cra_base + reg);
}

static void tlp_write_tx(struct altera_pcie *pcie,
			 struct tlp_rp_regpair_t *tlp_rp_regdata)
{
	cra_writel(pcie, tlp_rp_regdata->reg0, RP_TX_REG0);
	cra_writel(pcie, tlp_rp_regdata->reg1, RP_TX_REG1);
	cra_writel(pcie, tlp_rp_regdata->ctrl, RP_TX_CNTRL);
}

static bool altera_pcie_link_is_up(struct altera_pcie *pcie)
{
	return !!(cra_readl(pcie, RP_LTSSM) & LTSSM_L0);
}

static bool altera_pcie_valid_config(struct altera_pcie *pcie,
				     struct pci_bus *bus, int dev)
{
	/* If there is no link, then there is no device */
	if (bus->number != pcie->root_bus_nr) {
		if (!altera_pcie_link_is_up(pcie))
			return false;
	}

	/* access only one slot on each root port */
	if (bus->number == pcie->root_bus_nr && dev > 0)
		return false;

	/*
	 * Do not read more than one device on the bus directly attached
	 * to the root port, since the root port can only attach to one
	 * downstream port.
	 */
	if (bus->primary == pcie->root_bus_nr && dev > 0)
		return false;

	return true;
}

static int tlp_read_packet(struct altera_pcie *pcie, u32 *value)
{
	u8 loop;
	bool sop = false;
	u32 ctrl;
	u32 reg0, reg1;

	/*
	 * Minimum 2 loops to read TLP headers and 1 loop to read data
	 * payload.
	 */
	for (loop = 0; loop < TLP_LOOP; loop++) {
		ctrl = cra_readl(pcie, RP_RXCPL_STATUS);
		if ((ctrl & RP_RXCPL_SOP) || (ctrl & RP_RXCPL_EOP) || sop) {
			reg0 = cra_readl(pcie, RP_RXCPL_REG0);
			reg1 = cra_readl(pcie, RP_RXCPL_REG1);

			if (ctrl & RP_RXCPL_SOP)
				sop = true;

			if (ctrl & RP_RXCPL_EOP) {
				if (value)
					*value = reg0;
				return PCIBIOS_SUCCESSFUL;
			}
		}
		udelay(5);
	}

	return -ENOENT;
}

static void tlp_write_packet(struct altera_pcie *pcie, u32 *headers,
			     u32 data, bool align)
{
	struct tlp_rp_regpair_t tlp_rp_regdata;

	tlp_rp_regdata.reg0 = headers[0];
	tlp_rp_regdata.reg1 = headers[1];
	tlp_rp_regdata.ctrl = RP_TX_SOP;
	tlp_write_tx(pcie, &tlp_rp_regdata);

	if (align) {
		tlp_rp_regdata.reg0 = headers[2];
		tlp_rp_regdata.reg1 = 0;
		tlp_rp_regdata.ctrl = 0;
		tlp_write_tx(pcie, &tlp_rp_regdata);

		tlp_rp_regdata.reg0 = data;
		tlp_rp_regdata.reg1 = 0;
	} else {
		tlp_rp_regdata.reg0 = headers[2];
		tlp_rp_regdata.reg1 = data;
	}

	tlp_rp_regdata.ctrl = RP_TX_EOP;
	tlp_write_tx(pcie, &tlp_rp_regdata);
}

static int tlp_cfg_dword_read(struct altera_pcie *pcie, u8 bus, u32 devfn,
			      int where, u8 byte_en, u32 *value)
{
	u32 headers[TLP_HDR_SIZE];

	if (bus == pcie->root_bus_nr)
		headers[0] = TLP_CFG_DW0(TLP_FMTTYPE_CFGRD0);
	else
		headers[0] = TLP_CFG_DW0(TLP_FMTTYPE_CFGRD1);

	headers[1] = TLP_CFG_DW1(TLP_REQ_ID(pcie->root_bus_nr, devfn),
				 TLP_READ_TAG, byte_en);
	headers[2] = TLP_CFG_DW2(bus, devfn, where);

	tlp_write_packet(pcie, headers, 0, false);

	return tlp_read_packet(pcie, value);
}

static int tlp_cfg_dword_write(struct altera_pcie *pcie, u8 bus, u32 devfn,
			       int where, u8 byte_en, u32 value)
{
	u32 headers[TLP_HDR_SIZE];
	int ret;

	if (bus == pcie->root_bus_nr)
		headers[0] = TLP_CFG_DW0(TLP_FMTTYPE_CFGWR0);
	else
		headers[0] = TLP_CFG_DW0(TLP_FMTTYPE_CFGWR1);

	headers[1] = TLP_CFG_DW1(TLP_REQ_ID(pcie->root_bus_nr, devfn),
				 TLP_WRITE_TAG, byte_en);
	headers[2] = TLP_CFG_DW2(bus, devfn, where);

	/* check alignment to Qword */
	if ((where & 0x7) == 0)
		tlp_write_packet(pcie, headers, value, true);
	else
		tlp_write_packet(pcie, headers, value, false);

	ret = tlp_read_packet(pcie, NULL);
	if (ret != PCIBIOS_SUCCESSFUL)
		return ret;

	/*
	 * Monitor changes to PCI_PRIMARY_BUS register on root port
	 * and update local copy of root bus number accordingly.
	 */
	if ((bus == pcie->root_bus_nr) && (where == PCI_PRIMARY_BUS))
		pcie->root_bus_nr = (u8)(value);

	return PCIBIOS_SUCCESSFUL;
}

static int altera_pcie_cfg_read(struct pci_bus *bus, unsigned int devfn,
				int where, int size, u32 *value)
{
	struct altera_pcie *pcie = bus->sysdata;
	int ret;
	u32 data;
	u8 byte_en;

	if (altera_pcie_hide_rc_bar(bus, devfn, where))
		return PCIBIOS_BAD_REGISTER_NUMBER;

	if (!altera_pcie_valid_config(pcie, bus, PCI_SLOT(devfn))) {
		*value = 0xffffffff;
		return PCIBIOS_DEVICE_NOT_FOUND;
	}

	switch (size) {
	case 1:
		byte_en = 1 << (where & 3);
		break;
	case 2:
		byte_en = 3 << (where & 3);
		break;
	default:
		byte_en = 0xf;
		break;
	}

	ret = tlp_cfg_dword_read(pcie, bus->number, devfn,
				 (where & ~DWORD_MASK), byte_en, &data);
	if (ret != PCIBIOS_SUCCESSFUL)
		return ret;

	switch (size) {
	case 1:
		*value = (data >> (8 * (where & 0x3))) & 0xff;
		break;
	case 2:
		*value = (data >> (8 * (where & 0x2))) & 0xffff;
		break;
	default:
		*value = data;
		break;
	}

	return PCIBIOS_SUCCESSFUL;
}

static int altera_pcie_cfg_write(struct pci_bus *bus, unsigned int devfn,
				 int where, int size, u32 value)
{
	struct altera_pcie *pcie = bus->sysdata;
	u32 data32;
	u32 shift = 8 * (where & 3);
	u8 byte_en;

	if (altera_pcie_hide_rc_bar(bus, devfn, where))
		return PCIBIOS_BAD_REGISTER_NUMBER;

	if (!altera_pcie_valid_config(pcie, bus, PCI_SLOT(devfn)))
		return PCIBIOS_DEVICE_NOT_FOUND;

	switch (size) {
	case 1:
		data32 = (value & 0xff) << shift;
		byte_en = 1 << (where & 3);
		break;
	case 2:
		data32 = (value & 0xffff) << shift;
		byte_en = 3 << (where & 3);
		break;
	default:
		data32 = value;
		byte_en = 0xf;
		break;
	}

	return tlp_cfg_dword_write(pcie, bus->number, devfn,
				   (where & ~DWORD_MASK), byte_en, data32);
}

static struct pci_ops altera_pcie_ops = {
	.read = altera_pcie_cfg_read,
	.write = altera_pcie_cfg_write,
};

static int altera_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
				irq_hw_number_t hwirq)
{
	irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_simple_irq);
	irq_set_chip_data(irq, domain->host_data);

	return 0;
}

static const struct irq_domain_ops intx_domain_ops = {
	.map = altera_pcie_intx_map,
};

static void altera_pcie_isr(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct altera_pcie *pcie;
	unsigned long status;
	u32 bit;
	u32 virq;

	chained_irq_enter(chip, desc);
	pcie = irq_desc_get_handler_data(desc);

	while ((status = cra_readl(pcie, P2A_INT_STATUS)
		& P2A_INT_STS_ALL) != 0) {
		for_each_set_bit(bit, &status, INTX_NUM) {
			/* clear interrupts */
			cra_writel(pcie, 1 << bit, P2A_INT_STATUS);

			virq = irq_find_mapping(pcie->irq_domain, bit + 1);
			if (virq)
				generic_handle_irq(virq);
			else
				dev_err(&pcie->pdev->dev,
					"unexpected IRQ, INT%d\n", bit);
		}
	}

	chained_irq_exit(chip, desc);
}

static void altera_pcie_release_of_pci_ranges(struct altera_pcie *pcie)
{
	pci_free_resource_list(&pcie->resources);
}

static int altera_pcie_parse_request_of_pci_ranges(struct altera_pcie *pcie)
{
	int err, res_valid = 0;
	struct device *dev = &pcie->pdev->dev;
	struct device_node *np = dev->of_node;
	struct resource_entry *win;

	err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pcie->resources,
					       NULL);
	if (err)
		return err;

	resource_list_for_each_entry(win, &pcie->resources) {
		struct resource *parent, *res = win->res;

		switch (resource_type(res)) {
		case IORESOURCE_MEM:
			parent = &iomem_resource;
			res_valid |= !(res->flags & IORESOURCE_PREFETCH);
			break;
		default:
			continue;
		}

		err = devm_request_resource(dev, parent, res);
		if (err)
			goto out_release_res;
	}

	if (!res_valid) {
		dev_err(dev, "non-prefetchable memory resource required\n");
		err = -EINVAL;
		goto out_release_res;
	}

	return 0;

out_release_res:
	altera_pcie_release_of_pci_ranges(pcie);
	return err;
}

static int altera_pcie_init_irq_domain(struct altera_pcie *pcie)
{
	struct device *dev = &pcie->pdev->dev;
	struct device_node *node = dev->of_node;

	/* Setup INTx */
	pcie->irq_domain = irq_domain_add_linear(node, INTX_NUM,
						 &intx_domain_ops, pcie);
	if (!pcie->irq_domain) {
		dev_err(dev, "Failed to get a INTx IRQ domain\n");
		return -ENOMEM;
	}

	return 0;
}

static int altera_pcie_parse_dt(struct altera_pcie *pcie)
{
	struct resource *cra;
	struct platform_device *pdev = pcie->pdev;

	cra = platform_get_resource_byname(pdev, IORESOURCE_MEM, "Cra");
	if (!cra) {
		dev_err(&pdev->dev, "no Cra memory resource defined\n");
		return -ENODEV;
	}

	pcie->cra_base = devm_ioremap_resource(&pdev->dev, cra);
	if (IS_ERR(pcie->cra_base)) {
		dev_err(&pdev->dev, "failed to map cra memory\n");
		return PTR_ERR(pcie->cra_base);
	}

	/* setup IRQ */
	pcie->irq = platform_get_irq(pdev, 0);
	if (pcie->irq <= 0) {
		dev_err(&pdev->dev, "failed to get IRQ: %d\n", pcie->irq);
		return -EINVAL;
	}

	irq_set_chained_handler_and_data(pcie->irq, altera_pcie_isr, pcie);

	return 0;
}

static int altera_pcie_probe(struct platform_device *pdev)
{
	struct altera_pcie *pcie;
	struct pci_bus *bus;
	struct pci_bus *child;
	int ret;

	pcie = devm_kzalloc(&pdev->dev, sizeof(*pcie), GFP_KERNEL);
	if (!pcie)
		return -ENOMEM;

	pcie->pdev = pdev;

	ret = altera_pcie_parse_dt(pcie);
	if (ret) {
		dev_err(&pdev->dev, "Parsing DT failed\n");
		return ret;
	}

	INIT_LIST_HEAD(&pcie->resources);

	ret = altera_pcie_parse_request_of_pci_ranges(pcie);
	if (ret) {
		dev_err(&pdev->dev, "Failed add resources\n");
		return ret;
	}

	ret = altera_pcie_init_irq_domain(pcie);
	if (ret) {
		dev_err(&pdev->dev, "Failed creating IRQ Domain\n");
		return ret;
	}

	/* clear all interrupts */
	cra_writel(pcie, P2A_INT_STS_ALL, P2A_INT_STATUS);
	/* enable all interrupts */
	cra_writel(pcie, P2A_INT_ENA_ALL, P2A_INT_ENABLE);

	bus = pci_scan_root_bus(&pdev->dev, pcie->root_bus_nr, &altera_pcie_ops,
				pcie, &pcie->resources);
	if (!bus)
		return -ENOMEM;

	pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci);
	pci_assign_unassigned_bus_resources(bus);

	/* Configure PCI Express setting. */
	list_for_each_entry(child, &bus->children, node)
		pcie_bus_configure_settings(child);

	pci_bus_add_devices(bus);

	platform_set_drvdata(pdev, pcie);
	return ret;
}

static const struct of_device_id altera_pcie_of_match[] = {
	{ .compatible = "altr,pcie-root-port-1.0", },
	{},
};
MODULE_DEVICE_TABLE(of, altera_pcie_of_match);

static struct platform_driver altera_pcie_driver = {
	.probe		= altera_pcie_probe,
	.driver = {
		.name	= "altera-pcie",
		.of_match_table = altera_pcie_of_match,
		.suppress_bind_attrs = true,
	},
};

static int altera_pcie_init(void)
{
	return platform_driver_register(&altera_pcie_driver);
}
module_init(altera_pcie_init);

MODULE_AUTHOR("Ley Foon Tan <lftan@altera.com>");
MODULE_DESCRIPTION("Altera PCIe host controller driver");
MODULE_LICENSE("GPL v2");
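altera_pcie_cfg_read()/cfg_write() above always issue an aligned 32-bit config TLP and use the first-DW byte enables to select the lanes touched by 1- and 2-byte accesses; the read path then shifts and masks the returned dword. That lane math can be checked in isolation (a sketch; `byte_en_for` and `extract` are hypothetical helper names, not functions in the driver):

```c
#include <stdint.h>

/* Byte enables for an access of `size` bytes at config offset `where`,
 * matching the switch statements in altera_pcie_cfg_read/write. */
static uint8_t byte_en_for(int where, int size)
{
	switch (size) {
	case 1:
		return 1 << (where & 3);
	case 2:
		return 3 << (where & 3);
	default:
		return 0xf;	/* full dword */
	}
}

/* Extract the requested sub-dword value from the aligned 32-bit read. */
static uint32_t extract(uint32_t data, int where, int size)
{
	switch (size) {
	case 1:
		return (data >> (8 * (where & 0x3))) & 0xff;
	case 2:
		return (data >> (8 * (where & 0x2))) & 0xffff;
	default:
		return data;
	}
}
```

Note that 2-byte extraction masks `where` with 0x2 rather than 0x3, since the PCI core only issues naturally aligned config accesses.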
+175 -208
drivers/pci/host/pcie-designware.c
···
 #define PCIE_LINK_WIDTH_SPEED_CONTROL	0x80C
 #define PORT_LOGIC_SPEED_CHANGE		(0x1 << 17)
-#define PORT_LOGIC_LINK_WIDTH_MASK	(0x1ff << 8)
+#define PORT_LOGIC_LINK_WIDTH_MASK	(0x1f << 8)
 #define PORT_LOGIC_LINK_WIDTH_1_LANES	(0x1 << 8)
 #define PORT_LOGIC_LINK_WIDTH_2_LANES	(0x2 << 8)
 #define PORT_LOGIC_LINK_WIDTH_4_LANES	(0x4 << 8)
···
 #define PCIE_ATU_FUNC(x)		(((x) & 0x7) << 16)
 #define PCIE_ATU_UPPER_TARGET		0x91C

-static struct hw_pci dw_pci;
+static struct pci_ops dw_pcie_ops;

-static unsigned long global_io_offset;
-
-static inline struct pcie_port *sys_to_pcie(struct pci_sys_data *sys)
+int dw_pcie_cfg_read(void __iomem *addr, int size, u32 *val)
 {
-	BUG_ON(!sys->private_data);
-
-	return sys->private_data;
-}
-
-int dw_pcie_cfg_read(void __iomem *addr, int where, int size, u32 *val)
-{
-	*val = readl(addr);
-
-	if (size == 1)
-		*val = (*val >> (8 * (where & 3))) & 0xff;
-	else if (size == 2)
-		*val = (*val >> (8 * (where & 3))) & 0xffff;
-	else if (size != 4)
+	if ((uintptr_t)addr & (size - 1)) {
+		*val = 0;
 		return PCIBIOS_BAD_REGISTER_NUMBER;
+	}
+
+	if (size == 4)
+		*val = readl(addr);
+	else if (size == 2)
+		*val = readw(addr);
+	else if (size == 1)
+		*val = readb(addr);
+	else {
+		*val = 0;
+		return PCIBIOS_BAD_REGISTER_NUMBER;
+	}

 	return PCIBIOS_SUCCESSFUL;
 }

-int dw_pcie_cfg_write(void __iomem *addr, int where, int size, u32 val)
+int dw_pcie_cfg_write(void __iomem *addr, int size, u32 val)
 {
+	if ((uintptr_t)addr & (size - 1))
+		return PCIBIOS_BAD_REGISTER_NUMBER;
+
 	if (size == 4)
 		writel(val, addr);
 	else if (size == 2)
-		writew(val, addr + (where & 2));
+		writew(val, addr);
 	else if (size == 1)
-		writeb(val, addr + (where & 3));
+		writeb(val, addr);
 	else
 		return PCIBIOS_BAD_REGISTER_NUMBER;
···
 	if (pp->ops->rd_own_conf)
 		ret = pp->ops->rd_own_conf(pp, where, size, val);
 	else
-		ret = dw_pcie_cfg_read(pp->dbi_base + (where & ~0x3), where,
-				       size, val);
+		ret = dw_pcie_cfg_read(pp->dbi_base + where, size, val);

 	return ret;
 }
···
 	if (pp->ops->wr_own_conf)
 		ret = pp->ops->wr_own_conf(pp, where, size, val);
 	else
-		ret = dw_pcie_cfg_write(pp->dbi_base + (where & ~0x3), where,
-				       size, val);
+		ret = dw_pcie_cfg_write(pp->dbi_base + where, size, val);

 	return ret;
 }
···
 void dw_pcie_msi_init(struct pcie_port *pp)
 {
+	u64 msi_target;
+
 	pp->msi_data = __get_free_pages(GFP_KERNEL, 0);
+	msi_target = virt_to_phys((void *)pp->msi_data);

 	/* program the msi_data */
 	dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_LO, 4,
-			    virt_to_phys((void *)pp->msi_data));
-	dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_HI, 4, 0);
+			    (u32)(msi_target & 0xffffffff));
+	dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_HI, 4,
+			    (u32)(msi_target >> 32 & 0xffffffff));
 }

 static void dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq)
···
 static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos)
 {
 	int irq, pos0, i;
-	struct pcie_port *pp = sys_to_pcie(msi_desc_to_pci_sysdata(desc));
+	struct pcie_port *pp = (struct pcie_port *) msi_desc_to_pci_sysdata(desc);

 	pos0 = bitmap_find_free_region(pp->msi_irq_in_use, MAX_MSI_IRQS,
 				       order_base_2(no_irqs));
···
 	}

 	*pos = pos0;
+	desc->nvec_used = no_irqs;
+	desc->msi_attrib.multiple = order_base_2(no_irqs);
+
 	return irq;

 no_valid_irq:
···
 	return -ENOSPC;
 }

+static void dw_msi_setup_msg(struct pcie_port *pp, unsigned int irq, u32 pos)
+{
+	struct msi_msg msg;
+	u64 msi_target;
+
+	if (pp->ops->get_msi_addr)
+		msi_target = pp->ops->get_msi_addr(pp);
+	else
+		msi_target = virt_to_phys((void *)pp->msi_data);
+
+	msg.address_lo = (u32)(msi_target & 0xffffffff);
+	msg.address_hi = (u32)(msi_target >> 32 & 0xffffffff);
+
+	if (pp->ops->get_msi_data)
+		msg.data = pp->ops->get_msi_data(pp, pos);
+	else
+		msg.data = pos;
+
+	pci_write_msi_msg(irq, &msg);
+}
+
 static int dw_msi_setup_irq(struct msi_controller *chip, struct pci_dev *pdev,
 			    struct msi_desc *desc)
 {
 	int irq, pos;
-	struct msi_msg msg;
-	struct pcie_port *pp = sys_to_pcie(pdev->bus->sysdata);
+	struct pcie_port *pp = pdev->bus->sysdata;

 	if (desc->msi_attrib.is_msix)
 		return -EINVAL;
···
 	if (irq < 0)
 		return irq;

-	if (pp->ops->get_msi_addr)
-		msg.address_lo = pp->ops->get_msi_addr(pp);
-	else
-		msg.address_lo = virt_to_phys((void *)pp->msi_data);
-	msg.address_hi = 0x0;
-
-	if (pp->ops->get_msi_data)
-		msg.data = pp->ops->get_msi_data(pp, pos);
-	else
-		msg.data = pos;
-
-	pci_write_msi_msg(irq, &msg);
+	dw_msi_setup_msg(pp, irq, pos);

 	return 0;
+}
+
+static int dw_msi_setup_irqs(struct msi_controller *chip, struct pci_dev *pdev,
+			     int nvec, int type)
+{
+#ifdef CONFIG_PCI_MSI
+	int irq, pos;
+	struct msi_desc *desc;
+	struct pcie_port *pp = pdev->bus->sysdata;
+
+	/* MSI-X interrupts are not supported */
+	if (type == PCI_CAP_ID_MSIX)
+		return -EINVAL;
+
+	WARN_ON(!list_is_singular(&pdev->dev.msi_list));
+	desc = list_entry(pdev->dev.msi_list.next, struct msi_desc, list);
+
+	irq = assign_irq(nvec, desc, &pos);
+	if (irq < 0)
+		return irq;
+
+	dw_msi_setup_msg(pp, irq, pos);
+
+	return 0;
+#else
+	return -EINVAL;
+#endif
 }

 static void dw_msi_teardown_irq(struct msi_controller *chip, unsigned int irq)
 {
 	struct irq_data *data = irq_get_irq_data(irq);
 	struct msi_desc *msi = irq_data_get_msi_desc(data);
-	struct pcie_port *pp = sys_to_pcie(msi_desc_to_pci_sysdata(msi));
+	struct pcie_port *pp = (struct pcie_port *) msi_desc_to_pci_sysdata(msi);

 	clear_irq_range(pp, irq, 1, data->hwirq);
 }

 static struct msi_controller dw_pcie_msi_chip = {
 	.setup_irq = dw_msi_setup_irq,
+	.setup_irqs = dw_msi_setup_irqs,
 	.teardown_irq = dw_msi_teardown_irq,
 };
···
 {
 	struct device_node *np = pp->dev->of_node;
 	struct platform_device *pdev = to_platform_device(pp->dev);
-	struct of_pci_range range;
-	struct of_pci_range_parser parser;
+	struct pci_bus *bus, *child;
 	struct resource *cfg_res;
-	u32 val, na, ns;
-	const __be32 *addrp;
-	int i, index, ret;
-
-	/* Find the address cell size and the number of cells in order to get
-	 * the untranslated address.
-	 */
-	of_property_read_u32(np, "#address-cells", &na);
-	ns = of_n_size_cells(np);
+	u32 val;
+	int i, ret;
+	LIST_HEAD(res);
+	struct resource_entry *win;

 	cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
 	if (cfg_res) {
···
 		pp->cfg1_size = resource_size(cfg_res)/2;
 		pp->cfg0_base = cfg_res->start;
 		pp->cfg1_base = cfg_res->start + pp->cfg0_size;
-
-		/* Find the untranslated configuration space address */
-		index = of_property_match_string(np, "reg-names", "config");
-		addrp = of_get_address(np, index, NULL, NULL);
-		pp->cfg0_mod_base = of_read_number(addrp, ns);
-		pp->cfg1_mod_base = pp->cfg0_mod_base + pp->cfg0_size;
 	} else if (!pp->va_cfg0_base) {
 		dev_err(pp->dev, "missing *config* reg space\n");
 	}

-	if (of_pci_range_parser_init(&parser, np)) {
-		dev_err(pp->dev, "missing ranges property\n");
-		return -EINVAL;
-	}
+	ret = of_pci_get_host_bridge_resources(np, 0, 0xff, &res, &pp->io_base);
+	if (ret)
+		return ret;

 	/* Get the I/O and memory ranges from DT */
-	for_each_of_pci_range(&parser, &range) {
-		unsigned long restype = range.flags & IORESOURCE_TYPE_BITS;
-
-		if (restype == IORESOURCE_IO) {
-			of_pci_range_to_resource(&range, np, &pp->io);
-			pp->io.name = "I/O";
-			pp->io.start = max_t(resource_size_t,
-					     PCIBIOS_MIN_IO,
-					     range.pci_addr + global_io_offset);
-			pp->io.end = min_t(resource_size_t,
-					   IO_SPACE_LIMIT,
-					   range.pci_addr + range.size
-					   + global_io_offset - 1);
-			pp->io_size = resource_size(&pp->io);
-			pp->io_bus_addr = range.pci_addr;
-			pp->io_base = range.cpu_addr;
-
-			/* Find the untranslated IO space address */
-			pp->io_mod_base = of_read_number(parser.range -
-							 parser.np + na, ns);
+	resource_list_for_each_entry(win, &res) {
+		switch (resource_type(win->res)) {
+		case IORESOURCE_IO:
+			pp->io = win->res;
+			pp->io->name = "I/O";
+			pp->io_size = resource_size(pp->io);
+			pp->io_bus_addr = pp->io->start - win->offset;
+			ret = pci_remap_iospace(pp->io, pp->io_base);
+			if (ret) {
+				dev_warn(pp->dev, "error %d: failed to map resource %pR\n",
+					 ret, pp->io);
+				continue;
+			}
+			pp->io_base = pp->io->start;
+			break;
+		case IORESOURCE_MEM:
+			pp->mem = win->res;
+			pp->mem->name = "MEM";
+			pp->mem_size = resource_size(pp->mem);
+			pp->mem_bus_addr = pp->mem->start - win->offset;
+			break;
+		case 0:
+			pp->cfg = win->res;
+			pp->cfg0_size = resource_size(pp->cfg)/2;
+			pp->cfg1_size = resource_size(pp->cfg)/2;
+			pp->cfg0_base = pp->cfg->start;
+			pp->cfg1_base = pp->cfg->start + pp->cfg0_size;
+			break;
+		case IORESOURCE_BUS:
+			pp->busn = win->res;
+			break;
+		default:
+			continue;
 		}
-		if (restype == IORESOURCE_MEM) {
-			of_pci_range_to_resource(&range, np, &pp->mem);
-			pp->mem.name = "MEM";
-			pp->mem_size = resource_size(&pp->mem);
-			pp->mem_bus_addr = range.pci_addr;
-
-			/* Find the untranslated MEM space address */
-			pp->mem_mod_base = of_read_number(parser.range -
-							  parser.np + na, ns);
-		}
-		if (restype == 0) {
-			of_pci_range_to_resource(&range, np, &pp->cfg);
-			pp->cfg0_size = resource_size(&pp->cfg)/2;
-			pp->cfg1_size = resource_size(&pp->cfg)/2;
-			pp->cfg0_base = pp->cfg.start;
-			pp->cfg1_base = pp->cfg.start + pp->cfg0_size;
-
-			/* Find the untranslated configuration space address */
-			pp->cfg0_mod_base = of_read_number(parser.range -
-							   parser.np + na, ns);
-			pp->cfg1_mod_base = pp->cfg0_mod_base +
-					    pp->cfg0_size;
-		}
-	}
-
-	ret = of_pci_parse_bus_range(np, &pp->busn);
-	if (ret < 0) {
-		pp->busn.name = np->name;
-		pp->busn.start = 0;
-		pp->busn.end = 0xff;
-		pp->busn.flags = IORESOURCE_BUS;
489 - dev_dbg(pp->dev, "failed to parse bus-range property: %d, using default %pR\n", 490 - ret, &pp->busn); 491 427 } 492 428 493 429 if (!pp->dbi_base) { 494 - pp->dbi_base = devm_ioremap(pp->dev, pp->cfg.start, 495 - resource_size(&pp->cfg)); 430 + pp->dbi_base = devm_ioremap(pp->dev, pp->cfg->start, 431 + resource_size(pp->cfg)); 496 432 if (!pp->dbi_base) { 497 433 dev_err(pp->dev, "error with ioremap\n"); 498 434 return -ENOMEM; 499 435 } 500 436 } 501 437 502 - pp->mem_base = pp->mem.start; 438 + pp->mem_base = pp->mem->start; 503 439 504 440 if (!pp->va_cfg0_base) { 505 441 pp->va_cfg0_base = devm_ioremap(pp->dev, pp->cfg0_base, ··· 492 482 } 493 483 } 494 484 495 - if (of_property_read_u32(np, "num-lanes", &pp->lanes)) { 496 - dev_err(pp->dev, "Failed to parse the number of lanes\n"); 497 - return -EINVAL; 498 - } 485 + ret = of_property_read_u32(np, "num-lanes", &pp->lanes); 486 + if (ret) 487 + pp->lanes = 0; 499 488 500 489 if (IS_ENABLED(CONFIG_PCI_MSI)) { 501 490 if (!pp->ops->msi_host_init) { ··· 520 511 521 512 if (!pp->ops->rd_other_conf) 522 513 dw_pcie_prog_outbound_atu(pp, PCIE_ATU_REGION_INDEX1, 523 - PCIE_ATU_TYPE_MEM, pp->mem_mod_base, 514 + PCIE_ATU_TYPE_MEM, pp->mem_base, 524 515 pp->mem_bus_addr, pp->mem_size); 525 516 526 517 dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0); ··· 532 523 val |= PORT_LOGIC_SPEED_CHANGE; 533 524 dw_pcie_wr_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, val); 534 525 535 - #ifdef CONFIG_PCI_MSI 536 - dw_pcie_msi_chip.dev = pp->dev; 526 + pp->root_bus_nr = pp->busn->start; 527 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 528 + bus = pci_scan_root_bus_msi(pp->dev, pp->root_bus_nr, 529 + &dw_pcie_ops, pp, &res, 530 + &dw_pcie_msi_chip); 531 + dw_pcie_msi_chip.dev = pp->dev; 532 + } else 533 + bus = pci_scan_root_bus(pp->dev, pp->root_bus_nr, &dw_pcie_ops, 534 + pp, &res); 535 + if (!bus) 536 + return -ENOMEM; 537 + 538 + if (pp->ops->scan_bus) 539 + pp->ops->scan_bus(pp); 540 + 541 + #ifdef CONFIG_ARM 542 + /* support 
old dtbs that incorrectly describe IRQs */ 543 + pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci); 537 544 #endif 538 545 539 - dw_pci.nr_controllers = 1; 540 - dw_pci.private_data = (void **)&pp; 546 + if (!pci_has_flag(PCI_PROBE_ONLY)) { 547 + pci_bus_size_bridges(bus); 548 + pci_bus_assign_resources(bus); 541 549 542 - pci_common_init_dev(pp->dev, &dw_pci); 550 + list_for_each_entry(child, &bus->children, node) 551 + pcie_bus_configure_settings(child); 552 + } 543 553 554 + pci_bus_add_devices(bus); 544 555 return 0; 545 556 } 546 557 ··· 568 539 u32 devfn, int where, int size, u32 *val) 569 540 { 570 541 int ret, type; 571 - u32 address, busdev, cfg_size; 542 + u32 busdev, cfg_size; 572 543 u64 cpu_addr; 573 544 void __iomem *va_cfg_base; 574 545 575 546 busdev = PCIE_ATU_BUS(bus->number) | PCIE_ATU_DEV(PCI_SLOT(devfn)) | 576 547 PCIE_ATU_FUNC(PCI_FUNC(devfn)); 577 - address = where & ~0x3; 578 548 579 549 if (bus->parent->number == pp->root_bus_nr) { 580 550 type = PCIE_ATU_TYPE_CFG0; 581 - cpu_addr = pp->cfg0_mod_base; 551 + cpu_addr = pp->cfg0_base; 582 552 cfg_size = pp->cfg0_size; 583 553 va_cfg_base = pp->va_cfg0_base; 584 554 } else { 585 555 type = PCIE_ATU_TYPE_CFG1; 586 - cpu_addr = pp->cfg1_mod_base; 556 + cpu_addr = pp->cfg1_base; 587 557 cfg_size = pp->cfg1_size; 588 558 va_cfg_base = pp->va_cfg1_base; 589 559 } ··· 590 562 dw_pcie_prog_outbound_atu(pp, PCIE_ATU_REGION_INDEX0, 591 563 type, cpu_addr, 592 564 busdev, cfg_size); 593 - ret = dw_pcie_cfg_read(va_cfg_base + address, where, size, val); 565 + ret = dw_pcie_cfg_read(va_cfg_base + where, size, val); 594 566 dw_pcie_prog_outbound_atu(pp, PCIE_ATU_REGION_INDEX0, 595 - PCIE_ATU_TYPE_IO, pp->io_mod_base, 567 + PCIE_ATU_TYPE_IO, pp->io_base, 596 568 pp->io_bus_addr, pp->io_size); 597 569 598 570 return ret; ··· 602 574 u32 devfn, int where, int size, u32 val) 603 575 { 604 576 int ret, type; 605 - u32 address, busdev, cfg_size; 577 + u32 busdev, cfg_size; 606 578 u64 cpu_addr; 607 
579 void __iomem *va_cfg_base; 608 580 609 581 busdev = PCIE_ATU_BUS(bus->number) | PCIE_ATU_DEV(PCI_SLOT(devfn)) | 610 582 PCIE_ATU_FUNC(PCI_FUNC(devfn)); 611 - address = where & ~0x3; 612 583 613 584 if (bus->parent->number == pp->root_bus_nr) { 614 585 type = PCIE_ATU_TYPE_CFG0; 615 - cpu_addr = pp->cfg0_mod_base; 586 + cpu_addr = pp->cfg0_base; 616 587 cfg_size = pp->cfg0_size; 617 588 va_cfg_base = pp->va_cfg0_base; 618 589 } else { 619 590 type = PCIE_ATU_TYPE_CFG1; 620 - cpu_addr = pp->cfg1_mod_base; 591 + cpu_addr = pp->cfg1_base; 621 592 cfg_size = pp->cfg1_size; 622 593 va_cfg_base = pp->va_cfg1_base; 623 594 } ··· 624 597 dw_pcie_prog_outbound_atu(pp, PCIE_ATU_REGION_INDEX0, 625 598 type, cpu_addr, 626 599 busdev, cfg_size); 627 - ret = dw_pcie_cfg_write(va_cfg_base + address, where, size, val); 600 + ret = dw_pcie_cfg_write(va_cfg_base + where, size, val); 628 601 dw_pcie_prog_outbound_atu(pp, PCIE_ATU_REGION_INDEX0, 629 - PCIE_ATU_TYPE_IO, pp->io_mod_base, 602 + PCIE_ATU_TYPE_IO, pp->io_base, 630 603 pp->io_bus_addr, pp->io_size); 631 604 632 605 return ret; ··· 658 631 static int dw_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where, 659 632 int size, u32 *val) 660 633 { 661 - struct pcie_port *pp = sys_to_pcie(bus->sysdata); 634 + struct pcie_port *pp = bus->sysdata; 662 635 int ret; 663 636 664 637 if (dw_pcie_valid_config(pp, bus, PCI_SLOT(devfn)) == 0) { ··· 682 655 static int dw_pcie_wr_conf(struct pci_bus *bus, u32 devfn, 683 656 int where, int size, u32 val) 684 657 { 685 - struct pcie_port *pp = sys_to_pcie(bus->sysdata); 658 + struct pcie_port *pp = bus->sysdata; 686 659 int ret; 687 660 688 661 if (dw_pcie_valid_config(pp, bus, PCI_SLOT(devfn)) == 0) ··· 704 677 static struct pci_ops dw_pcie_ops = { 705 678 .read = dw_pcie_rd_conf, 706 679 .write = dw_pcie_wr_conf, 707 - }; 708 - 709 - static int dw_pcie_setup(int nr, struct pci_sys_data *sys) 710 - { 711 - struct pcie_port *pp; 712 - 713 - pp = sys_to_pcie(sys); 714 - 715 - if 
(global_io_offset < SZ_1M && pp->io_size > 0) { 716 - sys->io_offset = global_io_offset - pp->io_bus_addr; 717 - pci_ioremap_io(global_io_offset, pp->io_base); 718 - global_io_offset += SZ_64K; 719 - pci_add_resource_offset(&sys->resources, &pp->io, 720 - sys->io_offset); 721 - } 722 - 723 - sys->mem_offset = pp->mem.start - pp->mem_bus_addr; 724 - pci_add_resource_offset(&sys->resources, &pp->mem, sys->mem_offset); 725 - pci_add_resource(&sys->resources, &pp->busn); 726 - 727 - return 1; 728 - } 729 - 730 - static struct pci_bus *dw_pcie_scan_bus(int nr, struct pci_sys_data *sys) 731 - { 732 - struct pci_bus *bus; 733 - struct pcie_port *pp = sys_to_pcie(sys); 734 - 735 - pp->root_bus_nr = sys->busnr; 736 - 737 - if (IS_ENABLED(CONFIG_PCI_MSI)) 738 - bus = pci_scan_root_bus_msi(pp->dev, sys->busnr, &dw_pcie_ops, 739 - sys, &sys->resources, 740 - &dw_pcie_msi_chip); 741 - else 742 - bus = pci_scan_root_bus(pp->dev, sys->busnr, &dw_pcie_ops, 743 - sys, &sys->resources); 744 - 745 - if (!bus) 746 - return NULL; 747 - 748 - if (bus && pp->ops->scan_bus) 749 - pp->ops->scan_bus(pp); 750 - 751 - return bus; 752 - } 753 - 754 - static int dw_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 755 - { 756 - struct pcie_port *pp = sys_to_pcie(dev->bus->sysdata); 757 - int irq; 758 - 759 - irq = of_irq_parse_and_map_pci(dev, slot, pin); 760 - if (!irq) 761 - irq = pp->irq; 762 - 763 - return irq; 764 - } 765 - 766 - static struct hw_pci dw_pci = { 767 - .setup = dw_pcie_setup, 768 - .scan = dw_pcie_scan_bus, 769 - .map_irq = dw_pcie_map_irq, 770 680 }; 771 681 772 682 void dw_pcie_setup_rc(struct pcie_port *pp) ··· 728 764 case 8: 729 765 val |= PORT_LINK_MODE_8_LANES; 730 766 break; 767 + default: 768 + dev_err(pp->dev, "num-lanes %u: invalid value\n", pp->lanes); 769 + return; 731 770 } 732 771 dw_pcie_writel_rc(pp, val, PCIE_PORT_LINK_CONTROL); 733 772
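The reworked dw_pcie_host_init() above stops re-deriving PCI bus addresses by hand: each resource_entry from of_pci_get_host_bridge_resources() carries a CPU-to-bus offset, and the config window is simply halved into cfg0/cfg1. A minimal userspace sketch of that arithmetic, for illustration only (struct win, win_size, bus_addr and split_cfg are hypothetical names, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of one host-bridge window: a CPU-visible range
 * [start, end] plus the CPU-to-PCI offset, mirroring the res/offset
 * pair in struct resource_entry. */
struct win {
	uint64_t start, end;	/* CPU (AXI) addresses, inclusive */
	uint64_t offset;	/* cpu_addr - pci_addr for this window */
};

static uint64_t win_size(const struct win *w)
{
	return w->end - w->start + 1;
}

/* Bus address the device sees, as in pp->mem_bus_addr =
 * pp->mem->start - win->offset in the hunk above. */
static uint64_t bus_addr(const struct win *w)
{
	return w->start - w->offset;
}

/* The config region is split into two equal halves: cfg0 for the root
 * port's secondary bus (type 0 cycles), cfg1 for buses beyond it. */
static void split_cfg(const struct win *w, uint64_t *cfg0_base,
		      uint64_t *cfg1_base, uint64_t *half)
{
	*half = win_size(w) / 2;
	*cfg0_base = w->start;
	*cfg1_base = w->start + *half;
}
```

The same offset subtraction fills io_bus_addr for the I/O case; only the remap step differs.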
+8 -12
drivers/pci/host/pcie-designware.h
··· 27 27 u8 root_bus_nr; 28 28 void __iomem *dbi_base; 29 29 u64 cfg0_base; 30 - u64 cfg0_mod_base; 31 30 void __iomem *va_cfg0_base; 32 31 u32 cfg0_size; 33 32 u64 cfg1_base; 34 - u64 cfg1_mod_base; 35 33 void __iomem *va_cfg1_base; 36 34 u32 cfg1_size; 37 - u64 io_base; 38 - u64 io_mod_base; 35 + resource_size_t io_base; 39 36 phys_addr_t io_bus_addr; 40 37 u32 io_size; 41 38 u64 mem_base; 42 - u64 mem_mod_base; 43 39 phys_addr_t mem_bus_addr; 44 40 u32 mem_size; 45 - struct resource cfg; 46 - struct resource io; 47 - struct resource mem; 48 - struct resource busn; 41 + struct resource *cfg; 42 + struct resource *io; 43 + struct resource *mem; 44 + struct resource *busn; 49 45 int irq; 50 46 u32 lanes; 51 47 struct pcie_host_ops *ops; ··· 66 70 void (*host_init)(struct pcie_port *pp); 67 71 void (*msi_set_irq)(struct pcie_port *pp, int irq); 68 72 void (*msi_clear_irq)(struct pcie_port *pp, int irq); 69 - u32 (*get_msi_addr)(struct pcie_port *pp); 73 + phys_addr_t (*get_msi_addr)(struct pcie_port *pp); 70 74 u32 (*get_msi_data)(struct pcie_port *pp, int pos); 71 75 void (*scan_bus)(struct pcie_port *pp); 72 76 int (*msi_host_init)(struct pcie_port *pp, struct msi_controller *chip); 73 77 }; 74 78 75 - int dw_pcie_cfg_read(void __iomem *addr, int where, int size, u32 *val); 76 - int dw_pcie_cfg_write(void __iomem *addr, int where, int size, u32 val); 79 + int dw_pcie_cfg_read(void __iomem *addr, int size, u32 *val); 80 + int dw_pcie_cfg_write(void __iomem *addr, int size, u32 val); 77 81 irqreturn_t dw_handle_msi_irq(struct pcie_port *pp); 78 82 void dw_pcie_msi_init(struct pcie_port *pp); 79 83 int dw_pcie_link_up(struct pcie_port *pp);
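The header change drops the `where` parameter from dw_pcie_cfg_read()/dw_pcie_cfg_write(): callers now pass a fully offset address (dbi_base + where), as the spear13xx hunk later in this merge shows. A hedged userspace model of the new accessor shape, over plain memory rather than MMIO (cfg_read here is a stand-in, not the kernel function; byte-order behavior matches a little-endian host):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the simplified accessor: the caller has already added the
 * register offset, so only the access size is needed. */
static int cfg_read(const void *addr, int size, uint32_t *val)
{
	if (size == 4) {
		uint32_t v; memcpy(&v, addr, 4); *val = v;
	} else if (size == 2) {
		uint16_t v; memcpy(&v, addr, 2); *val = v;
	} else if (size == 1) {
		uint8_t v; memcpy(&v, addr, 1); *val = v;
	} else {
		return -1;	/* PCIBIOS_BAD_REGISTER_NUMBER in the kernel */
	}
	return 0;
}
```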
+198
drivers/pci/host/pcie-hisi.c
··· 1 + /* 2 + * PCIe host controller driver for HiSilicon Hip05 SoC 3 + * 4 + * Copyright (C) 2015 HiSilicon Co., Ltd. http://www.hisilicon.com 5 + * 6 + * Author: Zhou Wang <wangzhou1@hisilicon.com> 7 + * Dacai Zhu <zhudacai@hisilicon.com> 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License version 2 as 11 + * published by the Free Software Foundation. 12 + */ 13 + #include <linux/interrupt.h> 14 + #include <linux/module.h> 15 + #include <linux/mfd/syscon.h> 16 + #include <linux/of_address.h> 17 + #include <linux/of_pci.h> 18 + #include <linux/platform_device.h> 19 + #include <linux/regmap.h> 20 + 21 + #include "pcie-designware.h" 22 + 23 + #define PCIE_SUBCTRL_SYS_STATE4_REG 0x6818 24 + #define PCIE_LTSSM_LINKUP_STATE 0x11 25 + #define PCIE_LTSSM_STATE_MASK 0x3F 26 + 27 + #define to_hisi_pcie(x) container_of(x, struct hisi_pcie, pp) 28 + 29 + struct hisi_pcie { 30 + struct regmap *subctrl; 31 + void __iomem *reg_base; 32 + u32 port_id; 33 + struct pcie_port pp; 34 + }; 35 + 36 + static inline void hisi_pcie_apb_writel(struct hisi_pcie *pcie, 37 + u32 val, u32 reg) 38 + { 39 + writel(val, pcie->reg_base + reg); 40 + } 41 + 42 + static inline u32 hisi_pcie_apb_readl(struct hisi_pcie *pcie, u32 reg) 43 + { 44 + return readl(pcie->reg_base + reg); 45 + } 46 + 47 + /* Hip05 PCIe host only supports 32-bit config access */ 48 + static int hisi_pcie_cfg_read(struct pcie_port *pp, int where, int size, 49 + u32 *val) 50 + { 51 + u32 reg; 52 + u32 reg_val; 53 + struct hisi_pcie *pcie = to_hisi_pcie(pp); 54 + void *walker = &reg_val; 55 + 56 + walker += (where & 0x3); 57 + reg = where & ~0x3; 58 + reg_val = hisi_pcie_apb_readl(pcie, reg); 59 + 60 + if (size == 1) 61 + *val = *(u8 __force *) walker; 62 + else if (size == 2) 63 + *val = *(u16 __force *) walker; 64 + else if (size == 4) *val = reg_val; 65 + else return PCIBIOS_BAD_REGISTER_NUMBER; 66 + 67 + return PCIBIOS_SUCCESSFUL; 68 + } 69 + 70 + /* Hip05
PCIe host only supports 32-bit config access */ 71 + static int hisi_pcie_cfg_write(struct pcie_port *pp, int where, int size, 72 + u32 val) 73 + { 74 + u32 reg_val; 75 + u32 reg; 76 + struct hisi_pcie *pcie = to_hisi_pcie(pp); 77 + void *walker = &reg_val; 78 + 79 + walker += (where & 0x3); 80 + reg = where & ~0x3; 81 + if (size == 4) 82 + hisi_pcie_apb_writel(pcie, val, reg); 83 + else if (size == 2) { 84 + reg_val = hisi_pcie_apb_readl(pcie, reg); 85 + *(u16 __force *) walker = val; 86 + hisi_pcie_apb_writel(pcie, reg_val, reg); 87 + } else if (size == 1) { 88 + reg_val = hisi_pcie_apb_readl(pcie, reg); 89 + *(u8 __force *) walker = val; 90 + hisi_pcie_apb_writel(pcie, reg_val, reg); 91 + } else 92 + return PCIBIOS_BAD_REGISTER_NUMBER; 93 + 94 + return PCIBIOS_SUCCESSFUL; 95 + } 96 + 97 + static int hisi_pcie_link_up(struct pcie_port *pp) 98 + { 99 + u32 val; 100 + struct hisi_pcie *hisi_pcie = to_hisi_pcie(pp); 101 + 102 + regmap_read(hisi_pcie->subctrl, PCIE_SUBCTRL_SYS_STATE4_REG + 103 + 0x100 * hisi_pcie->port_id, &val); 104 + 105 + return ((val & PCIE_LTSSM_STATE_MASK) == PCIE_LTSSM_LINKUP_STATE); 106 + } 107 + 108 + static struct pcie_host_ops hisi_pcie_host_ops = { 109 + .rd_own_conf = hisi_pcie_cfg_read, 110 + .wr_own_conf = hisi_pcie_cfg_write, 111 + .link_up = hisi_pcie_link_up, 112 + }; 113 + 114 + static int __init hisi_add_pcie_port(struct pcie_port *pp, 115 + struct platform_device *pdev) 116 + { 117 + int ret; 118 + u32 port_id; 119 + struct hisi_pcie *hisi_pcie = to_hisi_pcie(pp); 120 + 121 + if (of_property_read_u32(pdev->dev.of_node, "port-id", &port_id)) { 122 + dev_err(&pdev->dev, "failed to read port-id\n"); 123 + return -EINVAL; 124 + } 125 + if (port_id > 3) { 126 + dev_err(&pdev->dev, "Invalid port-id: %d\n", port_id); 127 + return -EINVAL; 128 + } 129 + hisi_pcie->port_id = port_id; 130 + 131 + pp->ops = &hisi_pcie_host_ops; 132 + 133 + ret = dw_pcie_host_init(pp); 134 + if (ret) { 135 + dev_err(&pdev->dev, "failed to initialize 
host\n"); 136 + return ret; 137 + } 138 + 139 + return 0; 140 + } 141 + 142 + static int __init hisi_pcie_probe(struct platform_device *pdev) 143 + { 144 + struct hisi_pcie *hisi_pcie; 145 + struct pcie_port *pp; 146 + struct resource *reg; 147 + int ret; 148 + 149 + hisi_pcie = devm_kzalloc(&pdev->dev, sizeof(*hisi_pcie), GFP_KERNEL); 150 + if (!hisi_pcie) 151 + return -ENOMEM; 152 + 153 + pp = &hisi_pcie->pp; 154 + pp->dev = &pdev->dev; 155 + 156 + hisi_pcie->subctrl = 157 + syscon_regmap_lookup_by_compatible("hisilicon,pcie-sas-subctrl"); 158 + if (IS_ERR(hisi_pcie->subctrl)) { 159 + dev_err(pp->dev, "cannot get subctrl base\n"); 160 + return PTR_ERR(hisi_pcie->subctrl); 161 + } 162 + 163 + reg = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rc_dbi"); 164 + hisi_pcie->reg_base = devm_ioremap_resource(&pdev->dev, reg); 165 + if (IS_ERR(hisi_pcie->reg_base)) { 166 + dev_err(pp->dev, "cannot get rc_dbi base\n"); 167 + return PTR_ERR(hisi_pcie->reg_base); 168 + } 169 + 170 + hisi_pcie->pp.dbi_base = hisi_pcie->reg_base; 171 + 172 + ret = hisi_add_pcie_port(pp, pdev); 173 + if (ret) 174 + return ret; 175 + 176 + platform_set_drvdata(pdev, hisi_pcie); 177 + 178 + dev_warn(pp->dev, "only 32-bit config accesses supported; smaller writes may corrupt adjacent RW1C fields\n"); 179 + 180 + return 0; 181 + } 182 + 183 + static const struct of_device_id hisi_pcie_of_match[] = { 184 + {.compatible = "hisilicon,hip05-pcie",}, 185 + {}, 186 + }; 187 + 188 + MODULE_DEVICE_TABLE(of, hisi_pcie_of_match); 189 + 190 + static struct platform_driver hisi_pcie_driver = { 191 + .probe = hisi_pcie_probe, 192 + .driver = { 193 + .name = "hisi-pcie", 194 + .of_match_table = hisi_pcie_of_match, 195 + }, 196 + }; 197 + 198 + module_platform_driver(hisi_pcie_driver);
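hisi_pcie_cfg_write() above emulates sub-word config writes over a 32-bit-only APB window with a read-modify-write, which is exactly why the probe path warns that smaller writes may clobber adjacent RW1C fields: the RMW writes back status bits that clear on a write of 1. A userspace sketch of the technique using shifts instead of the kernel's byte-pointer walker (equivalent on a little-endian host; all names here are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

static uint32_t regs[64];	/* fake register file, 32-bit access only */

static uint32_t apb_readl(uint32_t reg)		   { return regs[reg / 4]; }
static void apb_writel(uint32_t val, uint32_t reg) { regs[reg / 4] = val; }

/* Emulate a 1/2/4-byte config write via aligned 32-bit read-modify-write,
 * the same shape as hisi_pcie_cfg_write() in the diff above. */
static int cfg_write(int where, int size, uint32_t val)
{
	uint32_t reg = where & ~0x3;		/* aligned register */
	uint32_t reg_val = apb_readl(reg);
	int shift = (where & 0x3) * 8;		/* byte lane (little-endian) */

	if (size == 4)
		reg_val = val;
	else if (size == 2)
		reg_val = (reg_val & ~(0xffffu << shift)) |
			  ((val & 0xffffu) << shift);
	else if (size == 1)
		reg_val = (reg_val & ~(0xffu << shift)) |
			  ((val & 0xffu) << shift);
	else
		return -1;

	apb_writel(reg_val, reg);		/* RW1C bits get re-written here */
	return 0;
}
```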
+27
drivers/pci/host/pcie-iproc-platform.c
··· 54 54 return -ENOMEM; 55 55 } 56 56 57 + if (of_property_read_bool(np, "brcm,pcie-ob")) { 58 + u32 val; 59 + 60 + ret = of_property_read_u32(np, "brcm,pcie-ob-axi-offset", 61 + &val); 62 + if (ret) { 63 + dev_err(pcie->dev, 64 + "missing brcm,pcie-ob-axi-offset property\n"); 65 + return ret; 66 + } 67 + pcie->ob.axi_offset = val; 68 + 69 + ret = of_property_read_u32(np, "brcm,pcie-ob-window-size", 70 + &val); 71 + if (ret) { 72 + dev_err(pcie->dev, 73 + "missing brcm,pcie-ob-window-size property\n"); 74 + return ret; 75 + } 76 + pcie->ob.window_size = (resource_size_t)val * SZ_1M; 77 + 78 + if (of_property_read_bool(np, "brcm,pcie-ob-oarr-size")) 79 + pcie->ob.set_oarr_size = true; 80 + 81 + pcie->need_ob_cfg = true; 82 + } 83 + 57 84 /* PHY use is optional */ 58 85 pcie->phy = devm_phy_get(&pdev->dev, "pcie-phy"); 59 86 if (IS_ERR(pcie->phy)) {
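of_property_read_u32() hands back the outbound window size in MB as a u32; the hunk above casts to resource_size_t before multiplying by SZ_1M so a window of 4 GB or more cannot overflow 32-bit arithmetic. A tiny sketch of that guard (ob_window_bytes is a hypothetical helper):

```c
#include <assert.h>
#include <stdint.h>

#define SZ_1M 0x00100000

/* Widen *before* the multiply, as the patch does with the
 * (resource_size_t) cast; a u32 multiply would truncate at 4096 MB. */
static uint64_t ob_window_bytes(uint32_t mb)
{
	return (uint64_t)mb * SZ_1M;
}
```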
+149 -14
drivers/pci/host/pcie-iproc.c
··· 1 1 /* 2 2 * Copyright (C) 2014 Hauke Mehrtens <hauke@hauke-m.de> 3 - * Copyright (C) 2015 Broadcom Corporatcommon ion 3 + * Copyright (C) 2015 Broadcom Corporation 4 4 * 5 5 * This program is free software; you can redistribute it and/or 6 6 * modify it under the terms of the GNU General Public License as ··· 31 31 #include "pcie-iproc.h" 32 32 33 33 #define CLK_CONTROL_OFFSET 0x000 34 + #define EP_PERST_SOURCE_SELECT_SHIFT 2 35 + #define EP_PERST_SOURCE_SELECT BIT(EP_PERST_SOURCE_SELECT_SHIFT) 34 36 #define EP_MODE_SURVIVE_PERST_SHIFT 1 35 37 #define EP_MODE_SURVIVE_PERST BIT(EP_MODE_SURVIVE_PERST_SHIFT) 36 38 #define RC_PCIE_RST_OUTPUT_SHIFT 0 ··· 59 57 60 58 #define SYS_RC_INTX_EN 0x330 61 59 #define SYS_RC_INTX_MASK 0xf 60 + 61 + #define PCIE_LINK_STATUS_OFFSET 0xf0c 62 + #define PCIE_PHYLINKUP_SHIFT 3 63 + #define PCIE_PHYLINKUP BIT(PCIE_PHYLINKUP_SHIFT) 64 + #define PCIE_DL_ACTIVE_SHIFT 2 65 + #define PCIE_DL_ACTIVE BIT(PCIE_DL_ACTIVE_SHIFT) 66 + 67 + #define OARR_VALID_SHIFT 0 68 + #define OARR_VALID BIT(OARR_VALID_SHIFT) 69 + #define OARR_SIZE_CFG_SHIFT 1 70 + #define OARR_SIZE_CFG BIT(OARR_SIZE_CFG_SHIFT) 71 + 72 + #define OARR_LO(window) (0xd20 + (window) * 8) 73 + #define OARR_HI(window) (0xd24 + (window) * 8) 74 + #define OMAP_LO(window) (0xd40 + (window) * 8) 75 + #define OMAP_HI(window) (0xd44 + (window) * 8) 76 + 77 + #define MAX_NUM_OB_WINDOWS 2 62 78 63 79 static inline struct iproc_pcie *iproc_data(struct pci_bus *bus) 64 80 { ··· 139 119 u32 val; 140 120 141 121 /* 142 - * Configure the PCIe controller as root complex and send a downstream 143 - * reset 122 + * Select perst_b signal as reset source. 
Put the device into reset, 123 + * and then bring it out of reset 144 124 */ 145 - val = EP_MODE_SURVIVE_PERST | RC_PCIE_RST_OUTPUT; 125 + val = readl(pcie->base + CLK_CONTROL_OFFSET); 126 + val &= ~EP_PERST_SOURCE_SELECT & ~EP_MODE_SURVIVE_PERST & 127 + ~RC_PCIE_RST_OUTPUT; 146 128 writel(val, pcie->base + CLK_CONTROL_OFFSET); 147 129 udelay(250); 148 - val &= ~EP_MODE_SURVIVE_PERST; 130 + 131 + val |= RC_PCIE_RST_OUTPUT; 149 132 writel(val, pcie->base + CLK_CONTROL_OFFSET); 150 - msleep(250); 133 + msleep(100); 151 134 } 152 135 153 136 static int iproc_pcie_check_link(struct iproc_pcie *pcie, struct pci_bus *bus) 154 137 { 155 138 u8 hdr_type; 156 - u32 link_ctrl; 139 + u32 link_ctrl, class, val; 157 140 u16 pos, link_status; 158 - int link_is_active = 0; 141 + bool link_is_active = false; 142 + 143 + val = readl(pcie->base + PCIE_LINK_STATUS_OFFSET); 144 + if (!(val & PCIE_PHYLINKUP) || !(val & PCIE_DL_ACTIVE)) { 145 + dev_err(pcie->dev, "PHY or data link is INACTIVE!\n"); 146 + return -ENODEV; 147 + } 159 148 160 149 /* make sure we are not in EP mode */ 161 150 pci_bus_read_config_byte(bus, 0, PCI_HEADER_TYPE, &hdr_type); ··· 174 145 } 175 146 176 147 /* force class to PCI_CLASS_BRIDGE_PCI (0x0604) */ 177 - pci_bus_write_config_word(bus, 0, PCI_CLASS_DEVICE, 178 - PCI_CLASS_BRIDGE_PCI); 148 + #define PCI_BRIDGE_CTRL_REG_OFFSET 0x43c 149 + #define PCI_CLASS_BRIDGE_MASK 0xffff00 150 + #define PCI_CLASS_BRIDGE_SHIFT 8 151 + pci_bus_read_config_dword(bus, 0, PCI_BRIDGE_CTRL_REG_OFFSET, &class); 152 + class &= ~PCI_CLASS_BRIDGE_MASK; 153 + class |= (PCI_CLASS_BRIDGE_PCI << PCI_CLASS_BRIDGE_SHIFT); 154 + pci_bus_write_config_dword(bus, 0, PCI_BRIDGE_CTRL_REG_OFFSET, class); 179 155 180 156 /* check link status to see if link is active */ 181 157 pos = pci_bus_find_capability(bus, 0, PCI_CAP_ID_EXP); 182 158 pci_bus_read_config_word(bus, 0, pos + PCI_EXP_LNKSTA, &link_status); 183 159 if (link_status & PCI_EXP_LNKSTA_NLW) 184 - link_is_active = 1; 160 + 
link_is_active = true; 185 161 186 162 if (!link_is_active) { 187 163 /* try GEN 1 link speed */ ··· 210 176 pci_bus_read_config_word(bus, 0, pos + PCI_EXP_LNKSTA, 211 177 &link_status); 212 178 if (link_status & PCI_EXP_LNKSTA_NLW) 213 - link_is_active = 1; 179 + link_is_active = true; 214 180 } 215 181 } 216 182 ··· 222 188 static void iproc_pcie_enable(struct iproc_pcie *pcie) 223 189 { 224 190 writel(SYS_RC_INTX_MASK, pcie->base + SYS_RC_INTX_EN); 191 + } 192 + 193 + /** 194 + * Some iProc SoCs require the SW to configure the outbound address mapping 195 + * 196 + * Outbound address translation: 197 + * 198 + * iproc_pcie_address = axi_address - axi_offset 199 + * OARR = iproc_pcie_address 200 + * OMAP = pci_addr 201 + * 202 + * axi_addr -> iproc_pcie_address -> OARR -> OMAP -> pci_address 203 + */ 204 + static int iproc_pcie_setup_ob(struct iproc_pcie *pcie, u64 axi_addr, 205 + u64 pci_addr, resource_size_t size) 206 + { 207 + struct iproc_pcie_ob *ob = &pcie->ob; 208 + unsigned i; 209 + u64 max_size = (u64)ob->window_size * MAX_NUM_OB_WINDOWS; 210 + u64 remainder; 211 + 212 + if (size > max_size) { 213 + dev_err(pcie->dev, 214 + "res size 0x%pap exceeds max supported size 0x%llx\n", 215 + &size, max_size); 216 + return -EINVAL; 217 + } 218 + 219 + div64_u64_rem(size, ob->window_size, &remainder); 220 + if (remainder) { 221 + dev_err(pcie->dev, 222 + "res size %pap needs to be multiple of window size %pap\n", 223 + &size, &ob->window_size); 224 + return -EINVAL; 225 + } 226 + 227 + if (axi_addr < ob->axi_offset) { 228 + dev_err(pcie->dev, 229 + "axi address %pap less than offset %pap\n", 230 + &axi_addr, &ob->axi_offset); 231 + return -EINVAL; 232 + } 233 + 234 + /* 235 + * Translate the AXI address to the internal address used by the iProc 236 + * PCIe core before programming the OARR 237 + */ 238 + axi_addr -= ob->axi_offset; 239 + 240 + for (i = 0; i < MAX_NUM_OB_WINDOWS; i++) { 241 + writel(lower_32_bits(axi_addr) | OARR_VALID | 242 + (ob->set_oarr_size ? 
OARR_SIZE_CFG : 0), pcie->base + OARR_LO(i)); 243 + writel(upper_32_bits(axi_addr), pcie->base + OARR_HI(i)); 244 + writel(lower_32_bits(pci_addr), pcie->base + OMAP_LO(i)); 245 + writel(upper_32_bits(pci_addr), pcie->base + OMAP_HI(i)); 246 + 247 + size -= ob->window_size; 248 + if (size == 0) 249 + break; 250 + 251 + axi_addr += ob->window_size; 252 + pci_addr += ob->window_size; 253 + } 254 + 255 + return 0; 256 + } 257 + 258 + static int iproc_pcie_map_ranges(struct iproc_pcie *pcie, 259 + struct list_head *resources) 260 + { 261 + struct resource_entry *window; 262 + int ret; 263 + 264 + resource_list_for_each_entry(window, resources) { 265 + struct resource *res = window->res; 266 + u64 res_type = resource_type(res); 267 + 268 + switch (res_type) { 269 + case IORESOURCE_IO: 270 + case IORESOURCE_BUS: 271 + break; 272 + case IORESOURCE_MEM: 273 + ret = iproc_pcie_setup_ob(pcie, res->start, 274 + res->start - window->offset, 275 + resource_size(res)); 276 + if (ret) 277 + return ret; 278 + break; 279 + default: 280 + dev_err(pcie->dev, "invalid resource %pR\n", res); 281 + return -EINVAL; 282 + } 283 + } 284 + 285 + return 0; 225 286 } 226 287 227 288 int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res) ··· 341 212 } 342 213 343 214 iproc_pcie_reset(pcie); 215 + 216 + if (pcie->need_ob_cfg) { 217 + ret = iproc_pcie_map_ranges(pcie, res); 218 + if (ret) { 219 + dev_err(pcie->dev, "map failed\n"); 220 + goto err_power_off_phy; 221 + } 222 + } 344 223 345 224 #ifdef CONFIG_ARM 346 225 pcie->sysdata.private_data = pcie; ··· 375 238 376 239 pci_scan_child_bus(bus); 377 240 pci_assign_unassigned_bus_resources(bus); 378 - #ifdef CONFIG_ARM 379 241 pci_fixup_irqs(pci_common_swizzle, pcie->map_irq); 380 - #endif 381 242 pci_bus_add_devices(bus); 382 243 383 244 return 0;
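iproc_pcie_setup_ob() validates a mapping request against the fixed window size, subtracts the AXI offset, then splits the range across up to MAX_NUM_OB_WINDOWS OARR/OMAP pairs. A userspace sketch of that control flow that records each window's addresses instead of writing registers (setup_ob and struct ob_win are hypothetical names):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_NUM_OB_WINDOWS 2

struct ob_win { uint64_t axi, pci; };

/* Validate and split one outbound mapping, mirroring the checks and the
 * per-window loop in iproc_pcie_setup_ob() above. */
static int setup_ob(uint64_t axi_addr, uint64_t pci_addr, uint64_t size,
		    uint64_t axi_offset, uint64_t window_size,
		    struct ob_win out[MAX_NUM_OB_WINDOWS])
{
	unsigned i;

	if (size > window_size * MAX_NUM_OB_WINDOWS)
		return -1;	/* more than the windows can cover */
	if (size % window_size)
		return -1;	/* must be a multiple of the window size */
	if (axi_addr < axi_offset)
		return -1;

	/* translate AXI address to the core's internal address */
	axi_addr -= axi_offset;

	for (i = 0; i < MAX_NUM_OB_WINDOWS; i++) {
		out[i].axi = axi_addr;	/* would go into OARR_LO/HI */
		out[i].pci = pci_addr;	/* would go into OMAP_LO/HI */
		size -= window_size;
		if (size == 0)
			break;
		axi_addr += window_size;
		pci_addr += window_size;
	}
	return 0;
}
```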
+17 -3
drivers/pci/host/pcie-iproc.h
··· 14 14 #ifndef _PCIE_IPROC_H 15 15 #define _PCIE_IPROC_H 16 16 17 - #define IPROC_PCIE_MAX_NUM_IRQS 6 17 + /** 18 + * iProc PCIe outbound mapping 19 + * @set_oarr_size: indicates the OARR size bit needs to be set 20 + * @axi_offset: offset from the AXI address to the internal address used by 21 + * the iProc PCIe core 22 + * @window_size: outbound window size 23 + */ 24 + struct iproc_pcie_ob { 25 + bool set_oarr_size; 26 + resource_size_t axi_offset; 27 + resource_size_t window_size; 28 + }; 18 29 19 30 /** 20 31 * iProc PCIe device 21 32 * @dev: pointer to device data structure 22 33 * @base: PCIe host controller I/O register base 23 - * @resources: linked list of all PCI resources 24 34 * @sysdata: Per PCI controller data (ARM-specific) 25 35 * @root_bus: pointer to root bus 26 36 * @phy: optional PHY device that controls the Serdes 27 37 * @irqs: interrupt IDs 38 + * @map_irq: function callback to map interrupts 39 + * @need_ob_cfg: indicates SW needs to configure the outbound mapping window 40 + * @ob: outbound mapping parameters 28 41 */ 29 42 struct iproc_pcie { 30 43 struct device *dev; ··· 47 34 #endif 48 35 struct pci_bus *root_bus; 49 36 struct phy *phy; 50 - int irqs[IPROC_PCIE_MAX_NUM_IRQS]; 51 37 int (*map_irq)(const struct pci_dev *, u8, u8); 38 + bool need_ob_cfg; 39 + struct iproc_pcie_ob ob; 52 40 }; 53 41 54 42 int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res);
+55 -31
drivers/pci/host/pcie-rcar.c
··· 108 108 #define RCAR_PCI_MAX_RESOURCES 4 109 109 #define MAX_NR_INBOUND_MAPS 6 110 110 111 + static unsigned long global_io_offset; 112 + 111 113 struct rcar_msi { 112 114 DECLARE_BITMAP(used, INT_PCI_MSI_NR); 113 115 struct irq_domain *domain; ··· 126 124 } 127 125 128 126 /* Structure representing the PCIe interface */ 127 + /* 128 + * ARM pcibios functions expect the ARM struct pci_sys_data as the PCI 129 + * sysdata. Add pci_sys_data as the first element in struct gen_pci so 130 + * that when we use a gen_pci pointer as sysdata, it is also a pointer to 131 + * a struct pci_sys_data. 132 + */ 129 133 struct rcar_pcie { 134 + #ifdef CONFIG_ARM 135 + struct pci_sys_data sys; 136 + #endif 130 137 struct device *dev; 131 138 void __iomem *base; 132 139 struct resource res[RCAR_PCI_MAX_RESOURCES]; ··· 145 134 struct clk *bus_clk; 146 135 struct rcar_msi msi; 147 136 }; 148 - 149 - static inline struct rcar_pcie *sys_to_pcie(struct pci_sys_data *sys) 150 - { 151 - return sys->private_data; 152 - } 153 137 154 138 static void rcar_pci_write_reg(struct rcar_pcie *pcie, unsigned long val, 155 139 unsigned long reg) ··· 264 258 static int rcar_pcie_read_conf(struct pci_bus *bus, unsigned int devfn, 265 259 int where, int size, u32 *val) 266 260 { 267 - struct rcar_pcie *pcie = sys_to_pcie(bus->sysdata); 261 + struct rcar_pcie *pcie = bus->sysdata; 268 262 int ret; 269 263 270 264 ret = rcar_pcie_config_access(pcie, RCAR_PCI_ACCESS_READ, ··· 289 283 static int rcar_pcie_write_conf(struct pci_bus *bus, unsigned int devfn, 290 284 int where, int size, u32 val) 291 285 { 292 - struct rcar_pcie *pcie = sys_to_pcie(bus->sysdata); 286 + struct rcar_pcie *pcie = bus->sysdata; 293 287 int shift, ret; 294 288 u32 data; 295 289 ··· 359 353 rcar_pci_write_reg(pcie, mask, PCIEPTCTLR(win)); 360 354 } 361 355 362 - static int rcar_pcie_setup(int nr, struct pci_sys_data *sys) 356 + static int rcar_pcie_setup(struct list_head *resource, struct rcar_pcie *pcie) 363 357 { 364 - struct 
rcar_pcie *pcie = sys_to_pcie(sys); 365 358 struct resource *res; 366 359 int i; 367 360 368 - pcie->root_bus_nr = -1; 361 + pcie->root_bus_nr = pcie->busn.start; 369 362 370 363 /* Setup PCI resources */ 371 364 for (i = 0; i < RCAR_PCI_MAX_RESOURCES; i++) { ··· 377 372 378 373 if (res->flags & IORESOURCE_IO) { 379 374 phys_addr_t io_start = pci_pio_to_address(res->start); 380 - pci_ioremap_io(nr * SZ_64K, io_start); 381 - } else 382 - pci_add_resource(&sys->resources, res); 375 + pci_ioremap_io(global_io_offset, io_start); 376 + global_io_offset += SZ_64K; 377 + } 378 + 379 + pci_add_resource(resource, res); 383 380 } 384 - pci_add_resource(&sys->resources, &pcie->busn); 381 + pci_add_resource(resource, &pcie->busn); 385 382 386 383 return 1; 387 384 } 388 385 389 - static struct hw_pci rcar_pci = { 390 - .setup = rcar_pcie_setup, 391 - .map_irq = of_irq_parse_and_map_pci, 392 - .ops = &rcar_pcie_ops, 393 - }; 394 - 395 - static void rcar_pcie_enable(struct rcar_pcie *pcie) 386 + static int rcar_pcie_enable(struct rcar_pcie *pcie) 396 387 { 397 - struct platform_device *pdev = to_platform_device(pcie->dev); 388 + struct pci_bus *bus, *child; 389 + LIST_HEAD(res); 398 390 399 - rcar_pci.nr_controllers = 1; 400 - rcar_pci.private_data = (void **)&pcie; 401 - #ifdef CONFIG_PCI_MSI 402 - rcar_pci.msi_ctrl = &pcie->msi.chip; 403 - #endif 391 + rcar_pcie_setup(&res, pcie); 404 392 405 - pci_common_init_dev(&pdev->dev, &rcar_pci); 393 + /* Do not reassign resources if probe only */ 394 + if (!pci_has_flag(PCI_PROBE_ONLY)) 395 + pci_add_flags(PCI_REASSIGN_ALL_RSRC | PCI_REASSIGN_ALL_BUS); 396 + 397 + if (IS_ENABLED(CONFIG_PCI_MSI)) 398 + bus = pci_scan_root_bus_msi(pcie->dev, pcie->root_bus_nr, 399 + &rcar_pcie_ops, pcie, &res, &pcie->msi.chip); 400 + else 401 + bus = pci_scan_root_bus(pcie->dev, pcie->root_bus_nr, 402 + &rcar_pcie_ops, pcie, &res); 403 + 404 + if (!bus) { 405 + dev_err(pcie->dev, "Scanning rootbus failed"); 406 + return -ENODEV; 407 + } 408 + 409 + 
pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci); 410 + 411 + if (!pci_has_flag(PCI_PROBE_ONLY)) { 412 + pci_bus_size_bridges(bus); 413 + pci_bus_assign_resources(bus); 414 + 415 + list_for_each_entry(child, &bus->children, node) 416 + pcie_bus_configure_settings(child); 417 + } 418 + 419 + pci_bus_add_devices(bus); 420 + 421 + return 0; 406 422 } 407 423 408 424 static int phy_wait_for_ack(struct rcar_pcie *pcie) ··· 996 970 data = rcar_pci_read_reg(pcie, MACSR); 997 971 dev_info(&pdev->dev, "PCIe x%d: link up\n", (data >> 20) & 0x3f); 998 972 999 - rcar_pcie_enable(pcie); 1000 - 1001 - return 0; 973 + return rcar_pcie_enable(pcie); 1002 974 } 1003 975 1004 976 static struct platform_driver rcar_pcie_driver = {
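With struct hw_pci gone, the r-car driver keeps its own global_io_offset and hands each I/O window the next free 64K slot in the fixed kernel I/O space via pci_ioremap_io(). A trivial sketch of that bookkeeping (claim_io_slot is a hypothetical name for the increment the patch open-codes):

```c
#include <assert.h>

#define SZ_64K 0x10000

static unsigned long global_io_offset;

/* Each host bridge's I/O window is remapped at the next 64K-aligned
 * offset, matching the global_io_offset += SZ_64K in the hunk above. */
static unsigned long claim_io_slot(void)
{
	unsigned long off = global_io_offset;
	global_io_offset += SZ_64K;
	return off;
}
```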
+12 -12
drivers/pci/host/pcie-spear13xx.c
···
 	 * default value in capability register is 512 bytes. So force
 	 * it to 128 here.
 	 */
-	dw_pcie_cfg_read(pp->dbi_base, exp_cap_off + PCI_EXP_DEVCTL, 4, &val);
+	dw_pcie_cfg_read(pp->dbi_base + exp_cap_off + PCI_EXP_DEVCTL, 2, &val);
 	val &= ~PCI_EXP_DEVCTL_READRQ;
-	dw_pcie_cfg_write(pp->dbi_base, exp_cap_off + PCI_EXP_DEVCTL, 4, val);
+	dw_pcie_cfg_write(pp->dbi_base + exp_cap_off + PCI_EXP_DEVCTL, 2, val);
 
-	dw_pcie_cfg_write(pp->dbi_base, PCI_VENDOR_ID, 2, 0x104A);
-	dw_pcie_cfg_write(pp->dbi_base, PCI_DEVICE_ID, 2, 0xCD80);
+	dw_pcie_cfg_write(pp->dbi_base + PCI_VENDOR_ID, 2, 0x104A);
+	dw_pcie_cfg_write(pp->dbi_base + PCI_DEVICE_ID, 2, 0xCD80);
 
 	/*
 	 * if is_gen1 is set then handle it, so that some buggy card
 	 * also works
 	 */
 	if (spear13xx_pcie->is_gen1) {
-		dw_pcie_cfg_read(pp->dbi_base, exp_cap_off + PCI_EXP_LNKCAP, 4,
-				 &val);
+		dw_pcie_cfg_read(pp->dbi_base + exp_cap_off + PCI_EXP_LNKCAP,
+				 4, &val);
 		if ((val & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
 			val &= ~((u32)PCI_EXP_LNKCAP_SLS);
 			val |= PCI_EXP_LNKCAP_SLS_2_5GB;
-			dw_pcie_cfg_write(pp->dbi_base, exp_cap_off +
-					  PCI_EXP_LNKCAP, 4, val);
+			dw_pcie_cfg_write(pp->dbi_base + exp_cap_off +
+					  PCI_EXP_LNKCAP, 4, val);
 		}
 
-		dw_pcie_cfg_read(pp->dbi_base, exp_cap_off + PCI_EXP_LNKCTL2, 4,
-				 &val);
+		dw_pcie_cfg_read(pp->dbi_base + exp_cap_off + PCI_EXP_LNKCTL2,
+				 2, &val);
 		if ((val & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
 			val &= ~((u32)PCI_EXP_LNKCAP_SLS);
 			val |= PCI_EXP_LNKCAP_SLS_2_5GB;
-			dw_pcie_cfg_write(pp->dbi_base, exp_cap_off +
-					  PCI_EXP_LNKCTL2, 4, val);
+			dw_pcie_cfg_write(pp->dbi_base + exp_cap_off +
+					  PCI_EXP_LNKCTL2, 2, val);
 		}
 	}
+23 -52
drivers/pci/hotplug/pciehp_ctrl.c
···
 	kfree(info);
 }
 
-void pciehp_queue_pushbutton_work(struct work_struct *work)
+static void pciehp_queue_power_work(struct slot *p_slot, int req)
 {
-	struct slot *p_slot = container_of(work, struct slot, work.work);
 	struct power_work_info *info;
+
+	p_slot->state = (req == ENABLE_REQ) ? POWERON_STATE : POWEROFF_STATE;
 
 	info = kmalloc(sizeof(*info), GFP_KERNEL);
 	if (!info) {
-		ctrl_err(p_slot->ctrl, "%s: Cannot allocate memory\n",
-			 __func__);
+		ctrl_err(p_slot->ctrl, "no memory to queue %s request\n",
+			 (req == ENABLE_REQ) ? "poweron" : "poweroff");
 		return;
 	}
 	info->p_slot = p_slot;
 	INIT_WORK(&info->work, pciehp_power_thread);
+	info->req = req;
+	queue_work(p_slot->wq, &info->work);
+}
+
+void pciehp_queue_pushbutton_work(struct work_struct *work)
+{
+	struct slot *p_slot = container_of(work, struct slot, work.work);
 
 	mutex_lock(&p_slot->lock);
 	switch (p_slot->state) {
 	case BLINKINGOFF_STATE:
-		p_slot->state = POWEROFF_STATE;
-		info->req = DISABLE_REQ;
+		pciehp_queue_power_work(p_slot, DISABLE_REQ);
 		break;
 	case BLINKINGON_STATE:
-		p_slot->state = POWERON_STATE;
-		info->req = ENABLE_REQ;
+		pciehp_queue_power_work(p_slot, ENABLE_REQ);
 		break;
 	default:
-		kfree(info);
-		goto out;
+		break;
 	}
-	queue_work(p_slot->wq, &info->work);
- out:
 	mutex_unlock(&p_slot->lock);
 }
···
 static void handle_surprise_event(struct slot *p_slot)
 {
 	u8 getstatus;
-	struct power_work_info *info;
-
-	info = kmalloc(sizeof(*info), GFP_KERNEL);
-	if (!info) {
-		ctrl_err(p_slot->ctrl, "%s: Cannot allocate memory\n",
-			 __func__);
-		return;
-	}
-	info->p_slot = p_slot;
-	INIT_WORK(&info->work, pciehp_power_thread);
 
 	pciehp_get_adapter_status(p_slot, &getstatus);
-	if (!getstatus) {
-		p_slot->state = POWEROFF_STATE;
-		info->req = DISABLE_REQ;
-	} else {
-		p_slot->state = POWERON_STATE;
-		info->req = ENABLE_REQ;
-	}
-
-	queue_work(p_slot->wq, &info->work);
+	if (!getstatus)
+		pciehp_queue_power_work(p_slot, DISABLE_REQ);
+	else
+		pciehp_queue_power_work(p_slot, ENABLE_REQ);
 }
 
 /*
···
 static void handle_link_event(struct slot *p_slot, u32 event)
 {
 	struct controller *ctrl = p_slot->ctrl;
-	struct power_work_info *info;
-
-	info = kmalloc(sizeof(*info), GFP_KERNEL);
-	if (!info) {
-		ctrl_err(p_slot->ctrl, "%s: Cannot allocate memory\n",
-			 __func__);
-		return;
-	}
-	info->p_slot = p_slot;
-	info->req = event == INT_LINK_UP ? ENABLE_REQ : DISABLE_REQ;
-	INIT_WORK(&info->work, pciehp_power_thread);
 
 	switch (p_slot->state) {
 	case BLINKINGON_STATE:
···
 		cancel_delayed_work(&p_slot->work);
 		/* Fall through */
 	case STATIC_STATE:
-		p_slot->state = event == INT_LINK_UP ?
-				POWERON_STATE : POWEROFF_STATE;
-		queue_work(p_slot->wq, &info->work);
+		pciehp_queue_power_work(p_slot, event == INT_LINK_UP ?
+					ENABLE_REQ : DISABLE_REQ);
 		break;
 	case POWERON_STATE:
 		if (event == INT_LINK_UP) {
 			ctrl_info(ctrl,
 				  "Link Up event ignored on slot(%s): already powering on\n",
 				  slot_name(p_slot));
-			kfree(info);
 		} else {
 			ctrl_info(ctrl,
 				  "Link Down event queued on slot(%s): currently getting powered on\n",
 				  slot_name(p_slot));
-			p_slot->state = POWEROFF_STATE;
-			queue_work(p_slot->wq, &info->work);
+			pciehp_queue_power_work(p_slot, DISABLE_REQ);
 		}
 		break;
 	case POWEROFF_STATE:
···
 			ctrl_info(ctrl,
 				  "Link Up event queued on slot(%s): currently getting powered off\n",
 				  slot_name(p_slot));
-			p_slot->state = POWERON_STATE;
-			queue_work(p_slot->wq, &info->work);
+			pciehp_queue_power_work(p_slot, ENABLE_REQ);
 		} else {
 			ctrl_info(ctrl,
 				  "Link Down event ignored on slot(%s): already powering off\n",
 				  slot_name(p_slot));
-			kfree(info);
 		}
 		break;
 	default:
 		ctrl_err(ctrl, "ignoring invalid state %#x on slot(%s)\n",
 			 p_slot->state, slot_name(p_slot));
-		kfree(info);
 		break;
 	}
 }
+52 -49
drivers/pci/iov.c
···
  * The PF consumes one bus number.  NumVFs, First VF Offset, and VF Stride
  * determine how many additional bus numbers will be consumed by VFs.
  *
- * Iterate over all valid NumVFs and calculate the maximum number of bus
- * numbers that could ever be required.
+ * Iterate over all valid NumVFs, validate offset and stride, and calculate
+ * the maximum number of bus numbers that could ever be required.
  */
-static inline u8 virtfn_max_buses(struct pci_dev *dev)
+static int compute_max_vf_buses(struct pci_dev *dev)
 {
 	struct pci_sriov *iov = dev->sriov;
-	int nr_virtfn;
-	u8 max = 0;
-	int busnr;
+	int nr_virtfn, busnr, rc = 0;
 
-	for (nr_virtfn = 1; nr_virtfn <= iov->total_VFs; nr_virtfn++) {
+	for (nr_virtfn = iov->total_VFs; nr_virtfn; nr_virtfn--) {
 		pci_iov_set_numvfs(dev, nr_virtfn);
+		if (!iov->offset || (nr_virtfn > 1 && !iov->stride)) {
+			rc = -EIO;
+			goto out;
+		}
+
 		busnr = pci_iov_virtfn_bus(dev, nr_virtfn - 1);
-		if (busnr > max)
-			max = busnr;
+		if (busnr > iov->max_VF_buses)
+			iov->max_VF_buses = busnr;
 	}
 
-	return max;
+out:
+	pci_iov_set_numvfs(dev, 0);
+	return rc;
 }
 
 static struct pci_bus *virtfn_add_bus(struct pci_bus *bus, int busnr)
···
 
 int __weak pcibios_sriov_enable(struct pci_dev *pdev, u16 num_vfs)
 {
-	return 0;
+	return 0;
+}
+
+int __weak pcibios_sriov_disable(struct pci_dev *pdev)
+{
+	return 0;
 }
 
 static int sriov_enable(struct pci_dev *dev, int nr_virtfn)
 {
 	int rc;
-	int i, j;
+	int i;
 	int nres;
-	u16 offset, stride, initial;
+	u16 initial;
 	struct resource *res;
 	struct pci_dev *pdev;
 	struct pci_sriov *iov = dev->sriov;
 	int bars = 0;
 	int bus;
-	int retval;
 
 	if (!nr_virtfn)
 		return 0;
···
 	    (!(iov->cap & PCI_SRIOV_CAP_VFM) && (nr_virtfn > initial)))
 		return -EINVAL;
 
-	pci_read_config_word(dev, iov->pos + PCI_SRIOV_VF_OFFSET, &offset);
-	pci_read_config_word(dev, iov->pos + PCI_SRIOV_VF_STRIDE, &stride);
-	if (!offset || (nr_virtfn > 1 && !stride))
-		return -EIO;
-
 	nres = 0;
 	for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
 		bars |= (1 << (i + PCI_IOV_RESOURCES));
···
 		dev_err(&dev->dev, "not enough MMIO resources for SR-IOV\n");
 		return -ENOMEM;
 	}
-
-	iov->offset = offset;
-	iov->stride = stride;
 
 	bus = pci_iov_virtfn_bus(dev, nr_virtfn - 1);
 	if (bus > dev->bus->busn_res.end) {
···
 	if (nr_virtfn < initial)
 		initial = nr_virtfn;
 
-	if ((retval = pcibios_sriov_enable(dev, initial))) {
-		dev_err(&dev->dev, "failure %d from pcibios_sriov_enable()\n",
-			retval);
-		return retval;
+	rc = pcibios_sriov_enable(dev, initial);
+	if (rc) {
+		dev_err(&dev->dev, "failure %d from pcibios_sriov_enable()\n", rc);
+		goto err_pcibios;
 	}
 
 	for (i = 0; i < initial; i++) {
···
 	return 0;
 
 failed:
-	for (j = 0; j < i; j++)
-		virtfn_remove(dev, j, 0);
+	while (i--)
+		virtfn_remove(dev, i, 0);
 
+	pcibios_sriov_disable(dev);
+err_pcibios:
 	iov->ctrl &= ~(PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE);
 	pci_cfg_access_lock(dev);
 	pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl);
-	pci_iov_set_numvfs(dev, 0);
 	ssleep(1);
 	pci_cfg_access_unlock(dev);
 
 	if (iov->link != dev->devfn)
 		sysfs_remove_link(&dev->dev.kobj, "dep_link");
 
+	pci_iov_set_numvfs(dev, 0);
 	return rc;
-}
-
-int __weak pcibios_sriov_disable(struct pci_dev *pdev)
-{
-	return 0;
 }
 
 static void sriov_disable(struct pci_dev *dev)
···
 	int rc;
 	int nres;
 	u32 pgsz;
-	u16 ctrl, total, offset, stride;
+	u16 ctrl, total;
 	struct pci_sriov *iov;
 	struct resource *res;
 	struct pci_dev *pdev;
···
 		ssleep(1);
 	}
 
-	pci_read_config_word(dev, pos + PCI_SRIOV_TOTAL_VF, &total);
-	if (!total)
-		return 0;
-
 	ctrl = 0;
 	list_for_each_entry(pdev, &dev->bus->devices, bus_list)
 		if (pdev->is_physfn)
···
 
 found:
 	pci_write_config_word(dev, pos + PCI_SRIOV_CTRL, ctrl);
-	pci_write_config_word(dev, pos + PCI_SRIOV_NUM_VF, 0);
-	pci_read_config_word(dev, pos + PCI_SRIOV_VF_OFFSET, &offset);
-	pci_read_config_word(dev, pos + PCI_SRIOV_VF_STRIDE, &stride);
-	if (!offset || (total > 1 && !stride))
-		return -EIO;
+
+	pci_read_config_word(dev, pos + PCI_SRIOV_TOTAL_VF, &total);
+	if (!total)
+		return 0;
 
 	pci_read_config_dword(dev, pos + PCI_SRIOV_SUP_PGSIZE, &pgsz);
 	i = PAGE_SHIFT > 12 ? PAGE_SHIFT - 12 : 0;
···
 	nres = 0;
 	for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
 		res = &dev->resource[i + PCI_IOV_RESOURCES];
-		bar64 = __pci_read_base(dev, pci_bar_unknown, res,
-					pos + PCI_SRIOV_BAR + i * 4);
+		/*
+		 * If it is already FIXED, don't change it, something
+		 * (perhaps EA or header fixups) wants it this way.
+		 */
+		if (res->flags & IORESOURCE_PCI_FIXED)
+			bar64 = (res->flags & IORESOURCE_MEM_64) ? 1 : 0;
+		else
+			bar64 = __pci_read_base(dev, pci_bar_unknown, res,
+						pos + PCI_SRIOV_BAR + i * 4);
 		if (!res->flags)
 			continue;
 		if (resource_size(res) & (PAGE_SIZE - 1)) {
···
 	iov->nres = nres;
 	iov->ctrl = ctrl;
 	iov->total_VFs = total;
-	iov->offset = offset;
-	iov->stride = stride;
 	iov->pgsz = pgsz;
 	iov->self = dev;
 	pci_read_config_dword(dev, pos + PCI_SRIOV_CAP, &iov->cap);
···
 
 	dev->sriov = iov;
 	dev->is_physfn = 1;
-	iov->max_VF_buses = virtfn_max_buses(dev);
+	rc = compute_max_vf_buses(dev);
+	if (rc)
+		goto fail_max_buses;
 
 	return 0;
 
+fail_max_buses:
+	dev->sriov = NULL;
+	dev->is_physfn = 0;
 failed:
 	for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
 		res = &dev->resource[i + PCI_IOV_RESOURCES];
+19 -13
drivers/pci/msi.c
···
 
 int __weak arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 {
+	struct msi_controller *chip = dev->bus->msi;
 	struct msi_desc *entry;
 	int ret;
 
+	if (chip && chip->setup_irqs)
+		return chip->setup_irqs(chip, dev, nvec, type);
 	/*
 	 * If an architecture wants to support multiple MSI, it needs to
 	 * override arch_setup_msi_irqs()
···
 	int ret = -ENOMEM;
 	int num_msi = 0;
 	int count = 0;
+	int i;
 
 	/* Determine how many msi entries we have */
 	for_each_pci_msi_entry(entry, pdev)
-		++num_msi;
+		num_msi += entry->nvec_used;
 	if (!num_msi)
 		return 0;
···
 	if (!msi_attrs)
 		return -ENOMEM;
 	for_each_pci_msi_entry(entry, pdev) {
-		msi_dev_attr = kzalloc(sizeof(*msi_dev_attr), GFP_KERNEL);
-		if (!msi_dev_attr)
-			goto error_attrs;
-		msi_attrs[count] = &msi_dev_attr->attr;
+		for (i = 0; i < entry->nvec_used; i++) {
+			msi_dev_attr = kzalloc(sizeof(*msi_dev_attr), GFP_KERNEL);
+			if (!msi_dev_attr)
+				goto error_attrs;
+			msi_attrs[count] = &msi_dev_attr->attr;
 
-		sysfs_attr_init(&msi_dev_attr->attr);
-		msi_dev_attr->attr.name = kasprintf(GFP_KERNEL, "%d",
-						    entry->irq);
-		if (!msi_dev_attr->attr.name)
-			goto error_attrs;
-		msi_dev_attr->attr.mode = S_IRUGO;
-		msi_dev_attr->show = msi_mode_show;
-		++count;
+			sysfs_attr_init(&msi_dev_attr->attr);
+			msi_dev_attr->attr.name = kasprintf(GFP_KERNEL, "%d",
+							    entry->irq + i);
+			if (!msi_dev_attr->attr.name)
+				goto error_attrs;
+			msi_dev_attr->attr.mode = S_IRUGO;
+			msi_dev_attr->show = msi_mode_show;
+			++count;
+		}
 	}
 
 	msi_irq_group = kzalloc(sizeof(*msi_irq_group), GFP_KERNEL);
+3 -5
drivers/pci/pci-driver.c
···
 	__u32 vendor, device, subvendor = PCI_ANY_ID,
 		subdevice = PCI_ANY_ID, class = 0, class_mask = 0;
 	int fields = 0;
-	int retval = -ENODEV;
+	size_t retval = -ENODEV;
 
 	fields = sscanf(buf, "%x %x %x %x %x %x",
 			&vendor, &device, &subvendor, &subdevice,
···
 		    !((id->class ^ class) & class_mask)) {
 			list_del(&dynid->node);
 			kfree(dynid);
-			retval = 0;
+			retval = count;
 			break;
 		}
 	}
 	spin_unlock(&pdrv->dynids.lock);
 
-	if (retval)
-		return retval;
-	return count;
+	return retval;
 }
 static DRIVER_ATTR(remove_id, S_IWUSR, NULL, store_remove_id);
+224 -1
drivers/pci/pci.c
···
 #include <linux/pci_hotplug.h>
 #include <asm-generic/pci-bridge.h>
 #include <asm/setup.h>
+#include <linux/aer.h>
 #include "pci.h"
 
 const char *pci_power_names[] = {
···
 EXPORT_SYMBOL(pci_find_parent_resource);
 
 /**
+ * pci_find_pcie_root_port - return PCIe Root Port
+ * @dev: PCI device to query
+ *
+ * Traverse up the parent chain and return the PCIe Root Port PCI Device
+ * for a given PCI Device.
+ */
+struct pci_dev *pci_find_pcie_root_port(struct pci_dev *dev)
+{
+	struct pci_dev *bridge, *highest_pcie_bridge = NULL;
+
+	bridge = pci_upstream_bridge(dev);
+	while (bridge && pci_is_pcie(bridge)) {
+		highest_pcie_bridge = bridge;
+		bridge = pci_upstream_bridge(bridge);
+	}
+
+	if (pci_pcie_type(highest_pcie_bridge) != PCI_EXP_TYPE_ROOT_PORT)
+		return NULL;
+
+	return highest_pcie_bridge;
+}
+EXPORT_SYMBOL(pci_find_pcie_root_port);
+
+/**
  * pci_wait_for_pending - wait for @mask bit(s) to clear in status word @pos
  * @dev: the PCI device to operate on
  * @pos: config space offset of status word
···
 }
 
 /**
- * pci_restore_bars - restore a devices BAR values (e.g. after wake-up)
+ * pci_restore_bars - restore a device's BAR values (e.g. after wake-up)
  * @dev: PCI device to have its BARs restored
  *
  * Restore the BAR values for a given device, so as to make it
···
 static void pci_restore_bars(struct pci_dev *dev)
 {
 	int i;
+
+	/* Per SR-IOV spec 3.4.1.11, VF BARs are RO zero */
+	if (dev->is_virtfn)
+		return;
 
 	for (i = 0; i < PCI_BRIDGE_RESOURCES; i++)
 		pci_update_resource(dev, i);
···
 	pci_restore_pcie_state(dev);
 	pci_restore_ats_state(dev);
 	pci_restore_vc_state(dev);
+
+	pci_cleanup_aer_error_status_regs(dev);
 
 	pci_restore_config_space(dev);
···
 		/* Disable the PME# generation functionality */
 		pci_pme_active(dev, false);
 	}
+}
+
+static unsigned long pci_ea_flags(struct pci_dev *dev, u8 prop)
+{
+	unsigned long flags = IORESOURCE_PCI_FIXED;
+
+	switch (prop) {
+	case PCI_EA_P_MEM:
+	case PCI_EA_P_VF_MEM:
+		flags |= IORESOURCE_MEM;
+		break;
+	case PCI_EA_P_MEM_PREFETCH:
+	case PCI_EA_P_VF_MEM_PREFETCH:
+		flags |= IORESOURCE_MEM | IORESOURCE_PREFETCH;
+		break;
+	case PCI_EA_P_IO:
+		flags |= IORESOURCE_IO;
+		break;
+	default:
+		return 0;
+	}
+
+	return flags;
+}
+
+static struct resource *pci_ea_get_resource(struct pci_dev *dev, u8 bei,
+					    u8 prop)
+{
+	if (bei <= PCI_EA_BEI_BAR5 && prop <= PCI_EA_P_IO)
+		return &dev->resource[bei];
+#ifdef CONFIG_PCI_IOV
+	else if (bei >= PCI_EA_BEI_VF_BAR0 && bei <= PCI_EA_BEI_VF_BAR5 &&
+		 (prop == PCI_EA_P_VF_MEM || prop == PCI_EA_P_VF_MEM_PREFETCH))
+		return &dev->resource[PCI_IOV_RESOURCES +
+				      bei - PCI_EA_BEI_VF_BAR0];
+#endif
+	else if (bei == PCI_EA_BEI_ROM)
+		return &dev->resource[PCI_ROM_RESOURCE];
+	else
+		return NULL;
+}
+
+/* Read an Enhanced Allocation (EA) entry */
+static int pci_ea_read(struct pci_dev *dev, int offset)
+{
+	struct resource *res;
+	int ent_size, ent_offset = offset;
+	resource_size_t start, end;
+	unsigned long flags;
+	u32 dw0, bei, base, max_offset;
+	u8 prop;
+	bool support_64 = (sizeof(resource_size_t) >= 8);
+
+	pci_read_config_dword(dev, ent_offset, &dw0);
+	ent_offset += 4;
+
+	/* Entry size field indicates DWORDs after 1st */
+	ent_size = ((dw0 & PCI_EA_ES) + 1) << 2;
+
+	if (!(dw0 & PCI_EA_ENABLE))	/* Entry not enabled */
+		goto out;
+
+	bei = (dw0 & PCI_EA_BEI) >> 4;
+	prop = (dw0 & PCI_EA_PP) >> 8;
+
+	/*
+	 * If the Property is in the reserved range, try the Secondary
+	 * Property instead.
+	 */
+	if (prop > PCI_EA_P_BRIDGE_IO && prop < PCI_EA_P_MEM_RESERVED)
+		prop = (dw0 & PCI_EA_SP) >> 16;
+	if (prop > PCI_EA_P_BRIDGE_IO)
+		goto out;
+
+	res = pci_ea_get_resource(dev, bei, prop);
+	if (!res) {
+		dev_err(&dev->dev, "Unsupported EA entry BEI: %u\n", bei);
+		goto out;
+	}
+
+	flags = pci_ea_flags(dev, prop);
+	if (!flags) {
+		dev_err(&dev->dev, "Unsupported EA properties: %#x\n", prop);
+		goto out;
+	}
+
+	/* Read Base */
+	pci_read_config_dword(dev, ent_offset, &base);
+	start = (base & PCI_EA_FIELD_MASK);
+	ent_offset += 4;
+
+	/* Read MaxOffset */
+	pci_read_config_dword(dev, ent_offset, &max_offset);
+	ent_offset += 4;
+
+	/* Read Base MSBs (if 64-bit entry) */
+	if (base & PCI_EA_IS_64) {
+		u32 base_upper;
+
+		pci_read_config_dword(dev, ent_offset, &base_upper);
+		ent_offset += 4;
+
+		flags |= IORESOURCE_MEM_64;
+
+		/* entry starts above 32-bit boundary, can't use */
+		if (!support_64 && base_upper)
+			goto out;
+
+		if (support_64)
+			start |= ((u64)base_upper << 32);
+	}
+
+	end = start + (max_offset | 0x03);
+
+	/* Read MaxOffset MSBs (if 64-bit entry) */
+	if (max_offset & PCI_EA_IS_64) {
+		u32 max_offset_upper;
+
+		pci_read_config_dword(dev, ent_offset, &max_offset_upper);
+		ent_offset += 4;
+
+		flags |= IORESOURCE_MEM_64;
+
+		/* entry too big, can't use */
+		if (!support_64 && max_offset_upper)
+			goto out;
+
+		if (support_64)
+			end += ((u64)max_offset_upper << 32);
+	}
+
+	if (end < start) {
+		dev_err(&dev->dev, "EA Entry crosses address boundary\n");
+		goto out;
+	}
+
+	if (ent_size != ent_offset - offset) {
+		dev_err(&dev->dev,
+			"EA Entry Size (%d) does not match length read (%d)\n",
+			ent_size, ent_offset - offset);
+		goto out;
+	}
+
+	res->name = pci_name(dev);
+	res->start = start;
+	res->end = end;
+	res->flags = flags;
+
+	if (bei <= PCI_EA_BEI_BAR5)
+		dev_printk(KERN_DEBUG, &dev->dev, "BAR %d: %pR (from Enhanced Allocation, properties %#02x)\n",
+			   bei, res, prop);
+	else if (bei == PCI_EA_BEI_ROM)
+		dev_printk(KERN_DEBUG, &dev->dev, "ROM: %pR (from Enhanced Allocation, properties %#02x)\n",
+			   res, prop);
+	else if (bei >= PCI_EA_BEI_VF_BAR0 && bei <= PCI_EA_BEI_VF_BAR5)
+		dev_printk(KERN_DEBUG, &dev->dev, "VF BAR %d: %pR (from Enhanced Allocation, properties %#02x)\n",
+			   bei - PCI_EA_BEI_VF_BAR0, res, prop);
+	else
+		dev_printk(KERN_DEBUG, &dev->dev, "BEI %d res: %pR (from Enhanced Allocation, properties %#02x)\n",
+			   bei, res, prop);
+
+out:
+	return offset + ent_size;
+}
+
+/* Enhanced Allocation Initialization */
+void pci_ea_init(struct pci_dev *dev)
+{
+	int ea;
+	u8 num_ent;
+	int offset;
+	int i;
+
+	/* find PCI EA capability in list */
+	ea = pci_find_capability(dev, PCI_CAP_ID_EA);
+	if (!ea)
+		return;
+
+	/* determine the number of entries */
+	pci_bus_read_config_byte(dev->bus, dev->devfn, ea + PCI_EA_NUM_ENT,
+				 &num_ent);
+	num_ent &= PCI_EA_NUM_ENT_MASK;
+
+	offset = ea + PCI_EA_FIRST_ENT;
+
+	/* Skip DWORD 2 for type 1 functions */
+	if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE)
+		offset += 4;
+
+	/* parse each EA entry */
+	for (i = 0; i < num_ent; ++i)
+		offset = pci_ea_read(dev, offset);
 }
 
 static void pci_add_saved_cap(struct pci_dev *pci_dev,
+1
drivers/pci/pci.h
···
 void pci_config_pm_runtime_get(struct pci_dev *dev);
 void pci_config_pm_runtime_put(struct pci_dev *dev);
 void pci_pm_init(struct pci_dev *dev);
+void pci_ea_init(struct pci_dev *dev);
 void pci_allocate_cap_save_buffers(struct pci_dev *dev);
 void pci_free_cap_save_buffers(struct pci_dev *dev);
+28
drivers/pci/pcie/aer/aerdrv_core.c
···
 }
 EXPORT_SYMBOL_GPL(pci_cleanup_aer_uncorrect_error_status);
 
+int pci_cleanup_aer_error_status_regs(struct pci_dev *dev)
+{
+	int pos;
+	u32 status;
+	int port_type;
+
+	if (!pci_is_pcie(dev))
+		return -ENODEV;
+
+	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
+	if (!pos)
+		return -EIO;
+
+	port_type = pci_pcie_type(dev);
+	if (port_type == PCI_EXP_TYPE_ROOT_PORT) {
+		pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &status);
+		pci_write_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, status);
+	}
+
+	pci_read_config_dword(dev, pos + PCI_ERR_COR_STATUS, &status);
+	pci_write_config_dword(dev, pos + PCI_ERR_COR_STATUS, status);
+
+	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
+	pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status);
+
+	return 0;
+}
+
 /**
  * add_error_device - list device to be handled
  * @e_info: pointer to error info
+6
drivers/pci/probe.c
···
 #include <linux/module.h>
 #include <linux/cpumask.h>
 #include <linux/pci-aspm.h>
+#include <linux/aer.h>
 #include <asm-generic/pci-bridge.h>
 #include "pci.h"
···
 
 static void pci_init_capabilities(struct pci_dev *dev)
 {
+	/* Enhanced Allocation */
+	pci_ea_init(dev);
+
 	/* MSI/MSI-X list */
 	pci_msi_init_pci_dev(dev);
···
 
 	/* Enable ACS P2P upstream forwarding */
 	pci_enable_acs(dev);
+
+	pci_cleanup_aer_error_status_regs(dev);
 }
 
 /*
+58
drivers/pci/quirks.c
···
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_VT3351, quirk_disable_all_msi);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_VT3364, quirk_disable_all_msi);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8380_0, quirk_disable_all_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SI, 0x0761, quirk_disable_all_msi);
 
 /* Disable MSI on chipsets that are known to not support it */
 static void quirk_disable_msi(struct pci_dev *dev)
···
 			      quirk_tw686x_class);
 DECLARE_PCI_FIXUP_CLASS_EARLY(0x1797, 0x6869, PCI_CLASS_NOT_DEFINED, 8,
 			      quirk_tw686x_class);
+
+/*
+ * Per PCIe r3.0, sec 2.2.9, "Completion headers must supply the same
+ * values for the Attribute as were supplied in the header of the
+ * corresponding Request, except as explicitly allowed when IDO is used."
+ *
+ * If a non-compliant device generates a completion with a different
+ * attribute than the request, the receiver may accept it (which itself
+ * seems non-compliant based on sec 2.3.2), or it may handle it as a
+ * Malformed TLP or an Unexpected Completion, which will probably lead to a
+ * device access timeout.
+ *
+ * If the non-compliant device generates completions with zero attributes
+ * (instead of copying the attributes from the request), we can work around
+ * this by disabling the "Relaxed Ordering" and "No Snoop" attributes in
+ * upstream devices so they always generate requests with zero attributes.
+ *
+ * This affects other devices under the same Root Port, but since these
+ * attributes are performance hints, there should be no functional problem.
+ *
+ * Note that Configuration Space accesses are never supposed to have TLP
+ * Attributes, so we're safe waiting till after any Configuration Space
+ * accesses to do the Root Port fixup.
+ */
+static void quirk_disable_root_port_attributes(struct pci_dev *pdev)
+{
+	struct pci_dev *root_port = pci_find_pcie_root_port(pdev);
+
+	if (!root_port) {
+		dev_warn(&pdev->dev, "PCIe Completion erratum may cause device errors\n");
+		return;
+	}
+
+	dev_info(&root_port->dev, "Disabling No Snoop/Relaxed Ordering Attributes to avoid PCIe Completion erratum in %s\n",
+		 dev_name(&pdev->dev));
+	pcie_capability_clear_and_set_word(root_port, PCI_EXP_DEVCTL,
+					   PCI_EXP_DEVCTL_RELAX_EN |
+					   PCI_EXP_DEVCTL_NOSNOOP_EN, 0);
+}
+
+/*
+ * The Chelsio T5 chip fails to copy TLP Attributes from a Request to the
+ * Completion it generates.
+ */
+static void quirk_chelsio_T5_disable_root_port_attributes(struct pci_dev *pdev)
+{
+	/*
+	 * This mask/compare operation selects for Physical Function 4 on a
+	 * T5.  We only need to fix up the Root Port once for any of the
+	 * PFs.  PF[0..3] have PCI Device IDs of 0x50xx, but PF4 is uniquely
+	 * 0x54xx so we use that one.
+	 */
+	if ((pdev->device & 0xff00) == 0x5400)
+		quirk_disable_root_port_attributes(pdev);
+}
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
+			 quirk_chelsio_T5_disable_root_port_attributes);
 
 /*
  * AMD has indicated that the devices below do not support peer-to-peer
+47 -3
drivers/pci/setup-bus.c
···
 		struct resource *r = &dev->resource[i];
 		resource_size_t r_size;
 
-		if (r->parent || ((r->flags & mask) != type &&
-				  (r->flags & mask) != type2 &&
-				  (r->flags & mask) != type3))
+		if (r->parent || (r->flags & IORESOURCE_PCI_FIXED) ||
+		    ((r->flags & mask) != type &&
+		     (r->flags & mask) != type2 &&
+		     (r->flags & mask) != type3))
 			continue;
 		r_size = resource_size(r);
 #ifdef CONFIG_PCI_IOV
···
 }
 EXPORT_SYMBOL(pci_bus_size_bridges);
 
+static void assign_fixed_resource_on_bus(struct pci_bus *b, struct resource *r)
+{
+	int i;
+	struct resource *parent_r;
+	unsigned long mask = IORESOURCE_IO | IORESOURCE_MEM |
+			     IORESOURCE_PREFETCH;
+
+	pci_bus_for_each_resource(b, parent_r, i) {
+		if (!parent_r)
+			continue;
+
+		if ((r->flags & mask) == (parent_r->flags & mask) &&
+		    resource_contains(parent_r, r))
+			request_resource(parent_r, r);
+	}
+}
+
+/*
+ * Try to assign any resources marked as IORESOURCE_PCI_FIXED, as they
+ * are skipped by pbus_assign_resources_sorted().
+ */
+static void pdev_assign_fixed_resources(struct pci_dev *dev)
+{
+	int i;
+
+	for (i = 0; i < PCI_NUM_RESOURCES; i++) {
+		struct pci_bus *b;
+		struct resource *r = &dev->resource[i];
+
+		if (r->parent || !(r->flags & IORESOURCE_PCI_FIXED) ||
+		    !(r->flags & (IORESOURCE_IO | IORESOURCE_MEM)))
+			continue;
+
+		b = dev->bus;
+		while (b && !r->parent) {
+			assign_fixed_resource_on_bus(b, r);
+			b = b->parent;
+		}
+	}
+}
+
 void __pci_bus_assign_resources(const struct pci_bus *bus,
 				struct list_head *realloc_head,
 				struct list_head *fail_head)
···
 	pbus_assign_resources_sorted(bus, realloc_head, fail_head);
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
+		pdev_assign_fixed_resources(dev);
+
 		b = dev->subordinate;
 		if (!b)
 			continue;
+7
drivers/pci/setup-res.c
···
 	enum pci_bar_type type;
 	struct resource *res = dev->resource + resno;
 
+	if (dev->is_virtfn) {
+		dev_warn(&dev->dev, "can't update VF BAR%d\n", resno);
+		return;
+	}
+
 	/*
 	 * Ignore resources for unimplemented BARs and unused resource slots
 	 * for 64 bit BARs.
···
 	end = res->end;
 	res->start = fw_addr;
 	res->end = res->start + size - 1;
+	res->flags &= ~IORESOURCE_UNSET;
 
 	root = pci_find_parent_resource(dev, res);
 	if (!root) {
···
 			 resno, res, conflict->name, conflict);
 		res->start = start;
 		res->end = end;
+		res->flags |= IORESOURCE_UNSET;
 		return -EBUSY;
 	}
 	return 0;
+5
include/linux/aer.h
···
 int pci_enable_pcie_error_reporting(struct pci_dev *dev);
 int pci_disable_pcie_error_reporting(struct pci_dev *dev);
 int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev);
+int pci_cleanup_aer_error_status_regs(struct pci_dev *dev);
 #else
 static inline int pci_enable_pcie_error_reporting(struct pci_dev *dev)
 {
···
 	return -EINVAL;
 }
 static inline int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev)
+{
+	return -EINVAL;
+}
+static inline int pci_cleanup_aer_error_status_regs(struct pci_dev *dev)
 {
 	return -EINVAL;
 }
+2
include/linux/msi.h
···
 
 	int (*setup_irq)(struct msi_controller *chip, struct pci_dev *dev,
 			 struct msi_desc *desc);
+	int (*setup_irqs)(struct msi_controller *chip, struct pci_dev *dev,
+			  int nvec, int type);
 	void (*teardown_irq)(struct msi_controller *chip, unsigned int irq);
 };
 
+3
include/linux/of_pci.h
···
 int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
 int of_get_pci_domain_nr(struct device_node *node);
 void of_pci_dma_configure(struct pci_dev *pci_dev);
+void of_pci_check_probe_only(void);
 #else
 static inline int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq)
 {
···
 }
 
 static inline void of_pci_dma_configure(struct pci_dev *pci_dev) { }
+
+static inline void of_pci_check_probe_only(void) { }
 #endif
 
 #if defined(CONFIG_OF_ADDRESS)
+12
include/linux/pci.h
···
 void pci_read_bridge_bases(struct pci_bus *child);
 struct resource *pci_find_parent_resource(const struct pci_dev *dev,
 					  struct resource *res);
+struct pci_dev *pci_find_pcie_root_port(struct pci_dev *dev);
 u8 pci_swizzle_interrupt_pin(const struct pci_dev *dev, u8 pin);
 int pci_get_interrupt_pin(struct pci_dev *dev, struct pci_dev **bridge);
 u8 pci_common_swizzle(struct pci_dev *dev, u8 *pinp);
···
 #define module_pci_driver(__pci_driver) \
 	module_driver(__pci_driver, pci_register_driver, \
 		       pci_unregister_driver)
+
+/**
+ * builtin_pci_driver() - Helper macro for registering a PCI driver
+ * @__pci_driver: pci_driver struct
+ *
+ * Helper macro for PCI drivers which do not do anything special in their
+ * init code. This eliminates a lot of boilerplate. Each driver may only
+ * use this macro once, and calling it replaces device_initcall(...)
+ */
+#define builtin_pci_driver(__pci_driver) \
+	builtin_driver(__pci_driver, pci_register_driver)
 
 struct pci_driver *pci_dev_driver(const struct pci_dev *dev);
 int pci_add_dynid(struct pci_driver *drv,
+42 -1
include/uapi/linux/pci_regs.h
···
 #define  PCI_CAP_ID_MSIX	0x11	/* MSI-X */
 #define  PCI_CAP_ID_SATA	0x12	/* SATA Data/Index Conf. */
 #define  PCI_CAP_ID_AF		0x13	/* PCI Advanced Features */
-#define  PCI_CAP_ID_MAX		PCI_CAP_ID_AF
+#define  PCI_CAP_ID_EA		0x14	/* PCI Enhanced Allocation */
+#define  PCI_CAP_ID_MAX		PCI_CAP_ID_EA
 #define PCI_CAP_LIST_NEXT	1	/* Next capability in the list */
 #define PCI_CAP_FLAGS		2	/* Capability defined flags (16 bits) */
 #define PCI_CAP_SIZEOF		4
···
 #define PCI_AF_STATUS		5
 #define  PCI_AF_STATUS_TP	0x01
 #define PCI_CAP_AF_SIZEOF	6	/* size of AF registers */
+
+/* PCI Enhanced Allocation registers */
+
+#define PCI_EA_NUM_ENT		2	/* Number of Capability Entries */
+#define  PCI_EA_NUM_ENT_MASK	0x3f	/* Num Entries Mask */
+#define PCI_EA_FIRST_ENT	4	/* First EA Entry in List */
+#define PCI_EA_FIRST_ENT_BRIDGE	8	/* First EA Entry for Bridges */
+#define  PCI_EA_ES		0x00000007	/* Entry Size */
+#define  PCI_EA_BEI		0x000000f0	/* BAR Equivalent Indicator */
+/* 0-5 map to BARs 0-5 respectively */
+#define   PCI_EA_BEI_BAR0	0
+#define   PCI_EA_BEI_BAR5	5
+#define   PCI_EA_BEI_BRIDGE	6	/* Resource behind bridge */
+#define   PCI_EA_BEI_ENI	7	/* Equivalent Not Indicated */
+#define   PCI_EA_BEI_ROM	8	/* Expansion ROM */
+/* 9-14 map to VF BARs 0-5 respectively */
+#define   PCI_EA_BEI_VF_BAR0	9
+#define   PCI_EA_BEI_VF_BAR5	14
+#define   PCI_EA_BEI_RESERVED	15	/* Reserved - Treat like ENI */
+#define  PCI_EA_PP		0x0000ff00	/* Primary Properties */
+#define  PCI_EA_SP		0x00ff0000	/* Secondary Properties */
+#define   PCI_EA_P_MEM		0x00	/* Non-Prefetch Memory */
+#define   PCI_EA_P_MEM_PREFETCH	0x01	/* Prefetchable Memory */
+#define   PCI_EA_P_IO		0x02	/* I/O Space */
+#define   PCI_EA_P_VF_MEM_PREFETCH	0x03	/* VF Prefetchable Memory */
+#define   PCI_EA_P_VF_MEM	0x04	/* VF Non-Prefetch Memory */
+#define   PCI_EA_P_BRIDGE_MEM	0x05	/* Bridge Non-Prefetch Memory */
+#define   PCI_EA_P_BRIDGE_MEM_PREFETCH	0x06	/* Bridge Prefetchable Memory */
+#define   PCI_EA_P_BRIDGE_IO	0x07	/* Bridge I/O Space */
+/* 0x08-0xfc reserved */
+#define   PCI_EA_P_MEM_RESERVED	0xfd	/* Reserved Memory */
+#define   PCI_EA_P_IO_RESERVED	0xfe	/* Reserved I/O Space */
+#define   PCI_EA_P_UNAVAILABLE	0xff	/* Entry Unavailable */
+#define  PCI_EA_WRITABLE	0x40000000	/* Writable: 1 = RW, 0 = HwInit */
+#define  PCI_EA_ENABLE		0x80000000	/* Enable for this entry */
+#define PCI_EA_BASE		4	/* Base Address Offset */
+#define PCI_EA_MAX_OFFSET	8	/* MaxOffset (resource length) */
+/* bit 0 is reserved */
+#define  PCI_EA_IS_64		0x00000002	/* 64-bit field flag */
+#define  PCI_EA_FIELD_MASK	0xfffffffc	/* For Base & Max Offset */
 
 /* PCI-X registers (Type 0 (non-bridge) devices) */
 