Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pci-v3.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"The interesting things here are:

- Turn on Config Request Retry Status Software Visibility. This
caused hangs last time, but we included a fix this time.
- Rework PCI device configuration to use _HPP/_HPX more aggressively
- Allow PCI devices to be put into D3cold during system suspend
- Add arm64 PCI support
- Add APM X-Gene host bridge driver
- Add TI Keystone host bridge driver
- Add Xilinx AXI host bridge driver

More detailed summary:

Enumeration
- Check Vendor ID only for Config Request Retry Status (Rajat Jain)
- Enable Config Request Retry Status when supported (Rajat Jain)
- Add generic domain handling (Catalin Marinas)
- Generate uppercase hex for modalias interface class (Ricardo Ribalda Delgado)

Resource management
- Add missing MEM_64 mask in pci_assign_unassigned_bridge_resources() (Yinghai Lu)
- Increase IBM ipr SAS Crocodile BARs to at least system page size (Douglas Lehr)

PCI device hotplug
- Prevent NULL dereference during pciehp probe (Andreas Noever)
- Move _HPP & _HPX handling into core (Bjorn Helgaas)
- Apply _HPP to PCIe devices as well as PCI (Bjorn Helgaas)
- Apply _HPP/_HPX to display devices (Bjorn Helgaas)
- Preserve SERR & PARITY settings when applying _HPP/_HPX (Bjorn Helgaas)
- Preserve MPS and MRRS settings when applying _HPP/_HPX (Bjorn Helgaas)
- Apply _HPP/_HPX to all devices, not just hot-added ones (Bjorn Helgaas)
- Fix wait time in pciehp timeout message (Yinghai Lu)
- Add more pciehp Slot Control debug output (Yinghai Lu)
- Stop disabling pciehp notifications during init (Yinghai Lu)

MSI
- Remove arch_msi_check_device() (Alexander Gordeev)
- Rename pci_msi_check_device() to pci_msi_supported() (Alexander Gordeev)
- Move D0 check into pci_msi_check_device() (Alexander Gordeev)
- Remove unused kobject from struct msi_desc (Yijing Wang)
- Remove "pos" from the struct msi_desc msi_attrib (Yijing Wang)
- Add "msi_bus" sysfs MSI/MSI-X control for endpoints (Yijing Wang)
- Use __get_cached_msi_msg() instead of get_cached_msi_msg() (Yijing Wang)
- Use __read_msi_msg() instead of read_msi_msg() (Yijing Wang)
- Use __write_msi_msg() instead of write_msi_msg() (Yijing Wang)

Power management
- Drop unused runtime PM support code for PCIe ports (Rafael J. Wysocki)
- Allow PCI devices to be put into D3cold during system suspend (Rafael J. Wysocki)

AER
- Add additional AER error strings (Gong Chen)
- Make <linux/aer.h> standalone includable (Thierry Reding)

Virtualization
- Add ACS quirk for Solarflare SFC9120 & SFC9140 (Alex Williamson)
- Add ACS quirk for Intel 10G NICs (Alex Williamson)
- Add ACS quirk for AMD A88X southbridge (Marti Raudsepp)
- Remove unused pci_find_upstream_pcie_bridge(), pci_get_dma_source() (Alex Williamson)
- Add device flag helpers (Ethan Zhao)
- Assume all Mellanox devices have broken INTx masking (Gavin Shan)

Generic host bridge driver
- Fix ioport_map() for !CONFIG_GENERIC_IOMAP (Liviu Dudau)
- Add pci_register_io_range() and pci_pio_to_address() (Liviu Dudau)
- Define PCI_IOBASE as the base of virtual PCI IO space (Liviu Dudau)
- Fix the conversion of IO ranges into IO resources (Liviu Dudau)
- Add pci_get_new_domain_nr() and of_get_pci_domain_nr() (Liviu Dudau)
- Add support for parsing PCI host bridge resources from DT (Liviu Dudau)
- Add pci_remap_iospace() to map bus I/O resources (Liviu Dudau)
- Add arm64 architectural support for PCI (Liviu Dudau)

APM X-Gene
- Add APM X-Gene PCIe driver (Tanmay Inamdar)
- Add arm64 DT APM X-Gene PCIe device tree nodes (Tanmay Inamdar)

Freescale i.MX6
- Probe in module_init(), not fs_initcall() (Lucas Stach)
- Delay enabling reference clock for SS until it stabilizes (Tim Harvey)

Marvell MVEBU
- Fix uninitialized variable in mvebu_get_tgt_attr() (Thomas Petazzoni)

NVIDIA Tegra
- Make sure the PCIe PLL is really reset (Eric Yuen)
- Add error path tegra_msi_teardown_irq() cleanup (Jisheng Zhang)
- Fix extended configuration space mapping (Peter Daifuku)
- Implement resource hierarchy (Thierry Reding)
- Clear CLKREQ# enable on port disable (Thierry Reding)
- Add Tegra124 support (Thierry Reding)

ST Microelectronics SPEAr13xx
- Pass config resource through reg property (Pratyush Anand)

Synopsys DesignWare
- Use NULL instead of false (Fabio Estevam)
- Parse bus-range property from devicetree (Lucas Stach)
- Use pci_create_root_bus() instead of pci_scan_root_bus() (Lucas Stach)
- Remove pci_assign_unassigned_resources() (Lucas Stach)
- Check private_data validity in single place (Lucas Stach)
- Setup and clear exactly one MSI at a time (Lucas Stach)
- Remove open-coded bitmap operations (Lucas Stach)
- Fix configuration base address when using 'reg' (Minghuan Lian)
- Fix IO resource end address calculation (Minghuan Lian)
- Rename get_msi_data() to get_msi_addr() (Minghuan Lian)
- Add get_msi_data() to pcie_host_ops (Minghuan Lian)
- Add support for v3.65 hardware (Murali Karicheri)
- Fold struct pcie_port_info into struct pcie_port (Pratyush Anand)

TI Keystone
- Add TI Keystone PCIe driver (Murali Karicheri)
- Limit MRSS for all downstream devices (Murali Karicheri)
- Assume controller is already in RC mode (Murali Karicheri)
- Set device ID based on SoC to support multiple ports (Murali Karicheri)

Xilinx AXI
- Add Xilinx AXI PCIe driver (Srikanth Thokala)
- Fix xilinx_pcie_assign_msi() return value test (Dan Carpenter)

Miscellaneous
- Clean up whitespace (Quentin Lambert)
- Remove assignments from "if" conditions (Quentin Lambert)
- Move PCI_VENDOR_ID_VMWARE to pci_ids.h (Francesco Ruggeri)
- x86: Mark DMI tables as initialization data (Mathias Krause)
- x86: Move __init annotation to the correct place (Mathias Krause)
- x86: Mark constants of pci_mmcfg_nvidia_mcp55() as __initconst (Mathias Krause)
- x86: Constify pci_mmcfg_probes[] array (Mathias Krause)
- x86: Mark PCI BIOS initialization code as such (Mathias Krause)
- Parenthesize PCI_DEVID and PCI_VPD_LRDT_ID parameters (Megan Kamiya)
- Remove unnecessary variable in pci_add_dynid() (Tobias Klauser)"

* tag 'pci-v3.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (109 commits)
arm64: dts: Add APM X-Gene PCIe device tree nodes
PCI: Add ACS quirk for AMD A88X southbridge devices
PCI: xgene: Add APM X-Gene PCIe driver
PCI: designware: Remove open-coded bitmap operations
PCI/MSI: Remove unnecessary temporary variable
PCI/MSI: Use __write_msi_msg() instead of write_msi_msg()
MSI/powerpc: Use __read_msi_msg() instead of read_msi_msg()
PCI/MSI: Use __get_cached_msi_msg() instead of get_cached_msi_msg()
PCI/MSI: Add "msi_bus" sysfs MSI/MSI-X control for endpoints
PCI/MSI: Remove "pos" from the struct msi_desc msi_attrib
PCI/MSI: Remove unused kobject from struct msi_desc
PCI/MSI: Rename pci_msi_check_device() to pci_msi_supported()
PCI/MSI: Move D0 check into pci_msi_check_device()
PCI/MSI: Remove arch_msi_check_device()
irqchip: armada-370-xp: Remove arch_msi_check_device()
PCI/MSI/PPC: Remove arch_msi_check_device()
arm64: Add architectural support for PCI
PCI: Add pci_remap_iospace() to map bus I/O resources
of/pci: Add support for parsing PCI host bridge resources from DT
of/pci: Add pci_get_new_domain_nr() and of_get_pci_domain_nr()
...

Conflicts:
arch/arm64/boot/dts/apm-storm.dtsi

Diffstat: +4910 -1332
Documentation/ABI/testing/sysfs-bus-pci | +10
···
 		force a rescan of all PCI buses in the system, and
 		re-discover previously removed devices.
 
+What:		/sys/bus/pci/devices/.../msi_bus
+Date:		September 2014
+Contact:	Linux PCI developers <linux-pci@vger.kernel.org>
+Description:
+		Writing a zero value to this attribute disallows MSI and
+		MSI-X for any future drivers of the device.  If the device
+		is a bridge, MSI and MSI-X will be disallowed for future
+		drivers of all child devices under the bridge.  Drivers
+		must be reloaded for the new setting to take effect.
+
 What:		/sys/bus/pci/devices/.../msi_irqs/
 Date:		September, 2011
 Contact:	Neil Horman <nhorman@tuxdriver.com>
Documentation/devicetree/bindings/pci/designware-pcie.txt | +3
···
 
 Optional properties:
 - reset-gpio: gpio pin number of power good signal
+- bus-range: PCI bus numbers covered (it is recommended for new devicetrees to
+  specify this property, to keep backwards compatibility a range of 0x00-0xff
+  is assumed if not present)
Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt | +24 -1
···
 NVIDIA Tegra PCIe controller
 
 Required properties:
-- compatible: "nvidia,tegra20-pcie" or "nvidia,tegra30-pcie"
+- compatible: Must be one of:
+  - "nvidia,tegra20-pcie"
+  - "nvidia,tegra30-pcie"
+  - "nvidia,tegra124-pcie"
 - device_type: Must be "pci"
 - reg: A list of physical base address and length for each set of controller
   registers. Must contain an entry for each entry in the reg-names property.
···
   - afi
   - pcie_x
 
+Required properties on Tegra124 and later:
+- phys: Must contain an entry for each entry in phy-names.
+- phy-names: Must include the following entries:
+  - pcie
+
 Power supplies for Tegra20:
 - avdd-pex-supply: Power supply for analog PCIe logic. Must supply 1.05 V.
 - vdd-pex-supply: Power supply for digital PCIe I/O. Must supply 1.05 V.
···
 - If lanes 4 or 5 are used:
   - avdd-pexb-supply: Power supply for analog PCIe logic. Must supply 1.05 V.
   - vdd-pexb-supply: Power supply for digital PCIe I/O. Must supply 1.05 V.
+
+Power supplies for Tegra124:
+- Required:
+  - avddio-pex-supply: Power supply for analog PCIe logic. Must supply 1.05 V.
+  - dvddio-pex-supply: Power supply for digital PCIe I/O. Must supply 1.05 V.
+  - avdd-pex-pll-supply: Power supply for dedicated (internal) PCIe PLL. Must
+    supply 1.05 V.
+  - hvdd-pex-supply: High-voltage supply for PCIe I/O and PCIe output clocks.
+    Must supply 3.3 V.
+  - hvdd-pex-pll-e-supply: High-voltage supply for PLLE (shared with USB3).
+    Must supply 3.3 V.
+  - vddio-pex-ctl-supply: Power supply for PCIe control I/O partition. Must
+    supply 2.8-3.3 V.
+  - avdd-pll-erefe-supply: Power supply for PLLE (shared with USB3). Must
+    supply 1.05 V.
 
 Root ports are defined as subnodes of the PCIe controller node.
Documentation/devicetree/bindings/pci/pci-keystone.txt | +63
···
+TI Keystone PCIe interface
+
+Keystone PCI host Controller is based on Designware PCI h/w version 3.65.
+It shares common functions with PCIe Designware core driver and inherit
+common properties defined in
+Documentation/devicetree/bindings/pci/designware-pci.txt
+
+Please refer to Documentation/devicetree/bindings/pci/designware-pci.txt
+for the details of Designware DT bindings.  Additional properties are
+described here as well as properties that are not applicable.
+
+Required Properties:-
+
+compatibility: "ti,keystone-pcie"
+reg:	index 1 is the base address and length of DW application registers.
+	index 2 is the base address and length of PCI device ID register.
+
+pcie_msi_intc : Interrupt controller device node for MSI IRQ chip
+	interrupt-cells: should be set to 1
+	interrupt-parent: Parent interrupt controller phandle
+	interrupts: GIC interrupt lines connected to PCI MSI interrupt lines
+
+Example:
+	pcie_msi_intc: msi-interrupt-controller {
+		interrupt-controller;
+		#interrupt-cells = <1>;
+		interrupt-parent = <&gic>;
+		interrupts = <GIC_SPI 30 IRQ_TYPE_EDGE_RISING>,
+			<GIC_SPI 31 IRQ_TYPE_EDGE_RISING>,
+			<GIC_SPI 32 IRQ_TYPE_EDGE_RISING>,
+			<GIC_SPI 33 IRQ_TYPE_EDGE_RISING>,
+			<GIC_SPI 34 IRQ_TYPE_EDGE_RISING>,
+			<GIC_SPI 35 IRQ_TYPE_EDGE_RISING>,
+			<GIC_SPI 36 IRQ_TYPE_EDGE_RISING>,
+			<GIC_SPI 37 IRQ_TYPE_EDGE_RISING>;
+	};
+
+pcie_intc: Interrupt controller device node for Legacy IRQ chip
+	interrupt-cells: should be set to 1
+	interrupt-parent: Parent interrupt controller phandle
+	interrupts: GIC interrupt lines connected to PCI Legacy interrupt lines
+
+Example:
+	pcie_intc: legacy-interrupt-controller {
+		interrupt-controller;
+		#interrupt-cells = <1>;
+		interrupt-parent = <&gic>;
+		interrupts = <GIC_SPI 26 IRQ_TYPE_EDGE_RISING>,
+			<GIC_SPI 27 IRQ_TYPE_EDGE_RISING>,
+			<GIC_SPI 28 IRQ_TYPE_EDGE_RISING>,
+			<GIC_SPI 29 IRQ_TYPE_EDGE_RISING>;
+	};
+
+Optional properties:-
+	phys: phandle to Generic Keystone SerDes phy for PCI
+	phy-names: name of the Generic Keystine SerDes phy for PCI
+	- If boot loader already does PCI link establishment, then phys and
+	  phy-names shouldn't be present.
+
+Designware DT Properties not applicable for Keystone PCI
+
+1. pcie_bus clock-names not used. Instead, a phandle to phys is used.
+
Documentation/devicetree/bindings/pci/xgene-pci.txt | +57
···
+* AppliedMicro X-Gene PCIe interface
+
+Required properties:
+- device_type: set to "pci"
+- compatible: should contain "apm,xgene-pcie" to identify the core.
+- reg: A list of physical base address and length for each set of controller
+  registers. Must contain an entry for each entry in the reg-names
+  property.
+- reg-names: Must include the following entries:
+  "csr": controller configuration registers.
+  "cfg": pcie configuration space registers.
+- #address-cells: set to <3>
+- #size-cells: set to <2>
+- ranges: ranges for the outbound memory, I/O regions.
+- dma-ranges: ranges for the inbound memory regions.
+- #interrupt-cells: set to <1>
+- interrupt-map-mask and interrupt-map: standard PCI properties
+  to define the mapping of the PCIe interface to interrupt
+  numbers.
+- clocks: from common clock binding: handle to pci clock.
+
+Optional properties:
+- status: Either "ok" or "disabled".
+- dma-coherent: Present if dma operations are coherent
+
+Example:
+
+SoC specific DT Entry:
+
+	pcie0: pcie@1f2b0000 {
+		status = "disabled";
+		device_type = "pci";
+		compatible = "apm,xgene-storm-pcie", "apm,xgene-pcie";
+		#interrupt-cells = <1>;
+		#size-cells = <2>;
+		#address-cells = <3>;
+		reg = < 0x00 0x1f2b0000 0x0 0x00010000   /* Controller registers */
+			0xe0 0xd0000000 0x0 0x00040000>; /* PCI config space */
+		reg-names = "csr", "cfg";
+		ranges = <0x01000000 0x00 0x00000000 0xe0 0x10000000 0x00 0x00010000   /* io */
+			  0x02000000 0x00 0x80000000 0xe1 0x80000000 0x00 0x80000000>; /* mem */
+		dma-ranges = <0x42000000 0x80 0x00000000 0x80 0x00000000 0x00 0x80000000
+			      0x42000000 0x00 0x00000000 0x00 0x00000000 0x80 0x00000000>;
+		interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+		interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0xc2 0x1
+				 0x0 0x0 0x0 0x2 &gic 0x0 0xc3 0x1
+				 0x0 0x0 0x0 0x3 &gic 0x0 0xc4 0x1
+				 0x0 0x0 0x0 0x4 &gic 0x0 0xc5 0x1>;
+		dma-coherent;
+		clocks = <&pcie0clk 0>;
+	};
+
+
+Board specific DT Entry:
+	&pcie0 {
+		status = "ok";
+	};
Documentation/devicetree/bindings/pci/xilinx-pcie.txt | +62
···
+* Xilinx AXI PCIe Root Port Bridge DT description
+
+Required properties:
+- #address-cells: Address representation for root ports, set to <3>
+- #size-cells: Size representation for root ports, set to <2>
+- #interrupt-cells: specifies the number of cells needed to encode an
+  interrupt source. The value must be 1.
+- compatible: Should contain "xlnx,axi-pcie-host-1.00.a"
+- reg: Should contain AXI PCIe registers location and length
+- device_type: must be "pci"
+- interrupts: Should contain AXI PCIe interrupt
+- interrupt-map-mask,
+  interrupt-map: standard PCI properties to define the mapping of the
+  PCI interface to interrupt numbers.
+- ranges: ranges for the PCI memory regions (I/O space region is not
+  supported by hardware)
+  Please refer to the standard PCI bus binding document for a more
+  detailed explanation
+
+Optional properties:
+- bus-range: PCI bus numbers covered
+
+Interrupt controller child node
++++++++++++++++++++++++++++++++
+Required properties:
+- interrupt-controller: identifies the node as an interrupt controller
+- #address-cells: specifies the number of cells needed to encode an
+  address. The value must be 0.
+- #interrupt-cells: specifies the number of cells needed to encode an
+  interrupt source. The value must be 1.
+
+NOTE:
+The core provides a single interrupt for both INTx/MSI messages. So,
+created a interrupt controller node to support 'interrupt-map' DT
+functionality. The driver will create an IRQ domain for this map, decode
+the four INTx interrupts in ISR and route them to this domain.
+
+
+Example:
+++++++++
+
+	pci_express: axi-pcie@50000000 {
+		#address-cells = <3>;
+		#size-cells = <2>;
+		#interrupt-cells = <1>;
+		compatible = "xlnx,axi-pcie-host-1.00.a";
+		reg = < 0x50000000 0x10000000 >;
+		device_type = "pci";
+		interrupts = < 0 52 4 >;
+		interrupt-map-mask = <0 0 0 7>;
+		interrupt-map = <0 0 0 1 &pcie_intc 1>,
+				<0 0 0 2 &pcie_intc 2>,
+				<0 0 0 3 &pcie_intc 3>,
+				<0 0 0 4 &pcie_intc 4>;
+		ranges = < 0x02000000 0 0x60000000 0x60000000 0 0x10000000 >;
+
+		pcie_intc: interrupt-controller {
+			interrupt-controller;
+			#address-cells = <0>;
+			#interrupt-cells = <1>;
+		}
+	};
Documentation/driver-model/devres.txt | +2
···
 IO region
   devm_release_mem_region()
   devm_release_region()
+  devm_release_resource()
   devm_request_mem_region()
   devm_request_region()
+  devm_request_resource()
 
 IOMAP
   devm_ioport_map()
MAINTAINERS | +15
···
 F:	arch/x86/pci/
 F:	arch/x86/kernel/quirks.c
 
+PCI DRIVER FOR APPLIEDMICRO XGENE
+M:	Tanmay Inamdar <tinamdar@apm.com>
+L:	linux-pci@vger.kernel.org
+L:	linux-arm-kernel@lists.infradead.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/pci/xgene-pci.txt
+F:	drivers/pci/host/pci-xgene.c
+
 PCI DRIVER FOR IMX6
 M:	Richard Zhu <r65037@freescale.com>
 M:	Lucas Stach <l.stach@pengutronix.de>
···
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	drivers/pci/host/*imx6*
+
+PCI DRIVER FOR TI KEYSTONE
+M:	Murali Karicheri <m-karicheri2@ti.com>
+L:	linux-pci@vger.kernel.org
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+F:	drivers/pci/host/*keystone*
 
 PCI DRIVER FOR MVEBU (Marvell Armada 370 and Armada XP SOC support)
 M:	Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
arch/arm/boot/dts/spear1310.dtsi | +9 -9
···
 
 		pcie0: pcie@b1000000 {
 			compatible = "st,spear1340-pcie", "snps,dw-pcie";
-			reg = <0xb1000000 0x4000>;
+			reg = <0xb1000000 0x4000>, <0x80000000 0x20000>;
+			reg-names = "dbi", "config";
 			interrupts = <0 68 0x4>;
 			interrupt-map-mask = <0 0 0 0>;
 			interrupt-map = <0x0 0 &gic 0 68 0x4>;
···
 			#address-cells = <3>;
 			#size-cells = <2>;
 			device_type = "pci";
-			ranges = <0x00000800 0 0x80000000 0x80000000 0 0x00020000 /* configuration space */
-				  0x81000000 0 0 0x80020000 0 0x00010000 /* downstream I/O */
+			ranges = <0x81000000 0 0 0x80020000 0 0x00010000 /* downstream I/O */
 				  0x82000000 0 0x80030000 0xc0030000 0 0x0ffd0000>; /* non-prefetchable memory */
 			status = "disabled";
 		};
 
 		pcie1: pcie@b1800000 {
 			compatible = "st,spear1340-pcie", "snps,dw-pcie";
-			reg = <0xb1800000 0x4000>;
+			reg = <0xb1800000 0x4000>, <0x90000000 0x20000>;
+			reg-names = "dbi", "config";
 			interrupts = <0 69 0x4>;
 			interrupt-map-mask = <0 0 0 0>;
 			interrupt-map = <0x0 0 &gic 0 69 0x4>;
···
 			#address-cells = <3>;
 			#size-cells = <2>;
 			device_type = "pci";
-			ranges = <0x00000800 0 0x90000000 0x90000000 0 0x00020000 /* configuration space */
-				  0x81000000 0 0 0x90020000 0 0x00010000 /* downstream I/O */
+			ranges = <0x81000000 0 0 0x90020000 0 0x00010000 /* downstream I/O */
 				  0x82000000 0 0x90030000 0x90030000 0 0x0ffd0000>; /* non-prefetchable memory */
 			status = "disabled";
 		};
 
 		pcie2: pcie@b4000000 {
 			compatible = "st,spear1340-pcie", "snps,dw-pcie";
-			reg = <0xb4000000 0x4000>;
+			reg = <0xb4000000 0x4000>, <0xc0000000 0x20000>;
+			reg-names = "dbi", "config";
 			interrupts = <0 70 0x4>;
 			interrupt-map-mask = <0 0 0 0>;
 			interrupt-map = <0x0 0 &gic 0 70 0x4>;
···
 			#address-cells = <3>;
 			#size-cells = <2>;
 			device_type = "pci";
-			ranges = <0x00000800 0 0xc0000000 0xc0000000 0 0x00020000 /* configuration space */
-				  0x81000000 0 0 0xc0020000 0 0x00010000 /* downstream I/O */
+			ranges = <0x81000000 0 0 0xc0020000 0 0x00010000 /* downstream I/O */
 				  0x82000000 0 0xc0030000 0xc0030000 0 0x0ffd0000>; /* non-prefetchable memory */
 			status = "disabled";
 		};
arch/arm/boot/dts/spear1340.dtsi | +3 -3
···
 
 		pcie0: pcie@b1000000 {
 			compatible = "st,spear1340-pcie", "snps,dw-pcie";
-			reg = <0xb1000000 0x4000>;
+			reg = <0xb1000000 0x4000>, <0x80000000 0x20000>;
+			reg-names = "dbi", "config";
 			interrupts = <0 68 0x4>;
 			interrupt-map-mask = <0 0 0 0>;
 			interrupt-map = <0x0 0 &gic 0 68 0x4>;
···
 			#address-cells = <3>;
 			#size-cells = <2>;
 			device_type = "pci";
-			ranges = <0x00000800 0 0x80000000 0x80000000 0 0x00020000 /* configuration space */
-				  0x81000000 0 0 0x80020000 0 0x00010000 /* downstream I/O */
+			ranges = <0x81000000 0 0 0x80020000 0 0x00010000 /* downstream I/O */
 				  0x82000000 0 0x80030000 0xc0030000 0 0x0ffd0000>; /* non-prefetchable memory */
 			status = "disabled";
 		};
arch/arm/include/asm/io.h | +1
···
 
 /* PCI fixed i/o mapping */
 #define PCI_IO_VIRT_BASE	0xfee00000
+#define PCI_IOBASE		((void __iomem *)PCI_IO_VIRT_BASE)
 
 #if defined(CONFIG_PCI)
 void pci_ioremap_set_mem_type(int mem_type);
arch/arm/mach-integrator/pci_v3.c | +12 -11
···
 {
 	unsigned long flags;
 	unsigned int temp;
+	phys_addr_t io_address = pci_pio_to_address(io_mem.start);
 
 	pcibios_min_mem = 0x00100000;
 
···
 	/*
 	 * Setup window 2 - PCI IO
 	 */
-	v3_writel(V3_LB_BASE2, v3_addr_to_lb_base2(io_mem.start) |
+	v3_writel(V3_LB_BASE2, v3_addr_to_lb_base2(io_address) |
 		  V3_LB_BASE_ENABLE);
 	v3_writew(V3_LB_MAP2, v3_addr_to_lb_map2(0));
 
···
 static void __init pci_v3_postinit(void)
 {
 	unsigned int pci_cmd;
+	phys_addr_t io_address = pci_pio_to_address(io_mem.start);
 
 	pci_cmd = PCI_COMMAND_MEMORY |
 		  PCI_COMMAND_MASTER | PCI_COMMAND_INVALIDATE;
···
 		"interrupt: %d\n", ret);
 #endif
 
-	register_isa_ports(non_mem.start, io_mem.start, 0);
+	register_isa_ports(non_mem.start, io_address, 0);
 }
 
 /*
···
 
 	for_each_of_pci_range(&parser, &range) {
 		if (!range.flags) {
-			of_pci_range_to_resource(&range, np, &conf_mem);
+			ret = of_pci_range_to_resource(&range, np, &conf_mem);
 			conf_mem.name = "PCIv3 config";
 		}
 		if (range.flags & IORESOURCE_IO) {
-			of_pci_range_to_resource(&range, np, &io_mem);
+			ret = of_pci_range_to_resource(&range, np, &io_mem);
 			io_mem.name = "PCIv3 I/O";
 		}
 		if ((range.flags & IORESOURCE_MEM) &&
 		    !(range.flags & IORESOURCE_PREFETCH)) {
 			non_mem_pci = range.pci_addr;
 			non_mem_pci_sz = range.size;
-			of_pci_range_to_resource(&range, np, &non_mem);
+			ret = of_pci_range_to_resource(&range, np, &non_mem);
 			non_mem.name = "PCIv3 non-prefetched mem";
 		}
 		if ((range.flags & IORESOURCE_MEM) &&
 		    (range.flags & IORESOURCE_PREFETCH)) {
 			pre_mem_pci = range.pci_addr;
 			pre_mem_pci_sz = range.size;
-			of_pci_range_to_resource(&range, np, &pre_mem);
+			ret = of_pci_range_to_resource(&range, np, &pre_mem);
 			pre_mem.name = "PCIv3 prefetched mem";
 		}
-	}
 
-	if (!conf_mem.start || !io_mem.start ||
-	    !non_mem.start || !pre_mem.start) {
-		dev_err(&pdev->dev, "missing ranges in device node\n");
-		return -EINVAL;
+		if (ret < 0) {
+			dev_err(&pdev->dev, "missing ranges in device node\n");
+			return ret;
+		}
 	}
 
 	pci_v3.map_irq = of_irq_parse_and_map_pci;
arch/arm64/Kconfig | +21 -1
···
 	def_bool y
 
 config NO_IOPORT_MAP
-	def_bool y
+	def_bool y if !PCI
 
 config STACKTRACE_SUPPORT
 	def_bool y
···
 
 config ARM_AMBA
 	bool
+
+config PCI
+	bool "PCI support"
+	help
+	  This feature enables support for PCI bus system. If you say Y
+	  here, the kernel will include drivers and infrastructure code
+	  to support PCI bus devices.
+
+config PCI_DOMAINS
+	def_bool PCI
+
+config PCI_DOMAINS_GENERIC
+	def_bool PCI
+
+config PCI_SYSCALL
+	def_bool PCI
+
+source "drivers/pci/Kconfig"
+source "drivers/pci/pcie/Kconfig"
+source "drivers/pci/hotplug/Kconfig"
 
 endmenu
arch/arm64/boot/dts/apm-mustang.dts | +8
···
 	};
 };
 
+&pcie0clk {
+	status = "ok";
+};
+
+&pcie0 {
+	status = "ok";
+};
+
 &serial0 {
 	status = "ok";
 };
arch/arm64/boot/dts/apm-storm.dtsi | +165
···
 				enable-mask = <0x10>;
 				clock-output-names = "rngpkaclk";
 			};
+
+			pcie0clk: pcie0clk@1f2bc000 {
+				status = "disabled";
+				compatible = "apm,xgene-device-clock";
+				#clock-cells = <1>;
+				clocks = <&socplldiv2 0>;
+				reg = <0x0 0x1f2bc000 0x0 0x1000>;
+				reg-names = "csr-reg";
+				clock-output-names = "pcie0clk";
+			};
+
+			pcie1clk: pcie1clk@1f2cc000 {
+				status = "disabled";
+				compatible = "apm,xgene-device-clock";
+				#clock-cells = <1>;
+				clocks = <&socplldiv2 0>;
+				reg = <0x0 0x1f2cc000 0x0 0x1000>;
+				reg-names = "csr-reg";
+				clock-output-names = "pcie1clk";
+			};
+
+			pcie2clk: pcie2clk@1f2dc000 {
+				status = "disabled";
+				compatible = "apm,xgene-device-clock";
+				#clock-cells = <1>;
+				clocks = <&socplldiv2 0>;
+				reg = <0x0 0x1f2dc000 0x0 0x1000>;
+				reg-names = "csr-reg";
+				clock-output-names = "pcie2clk";
+			};
+
+			pcie3clk: pcie3clk@1f50c000 {
+				status = "disabled";
+				compatible = "apm,xgene-device-clock";
+				#clock-cells = <1>;
+				clocks = <&socplldiv2 0>;
+				reg = <0x0 0x1f50c000 0x0 0x1000>;
+				reg-names = "csr-reg";
+				clock-output-names = "pcie3clk";
+			};
+
+			pcie4clk: pcie4clk@1f51c000 {
+				status = "disabled";
+				compatible = "apm,xgene-device-clock";
+				#clock-cells = <1>;
+				clocks = <&socplldiv2 0>;
+				reg = <0x0 0x1f51c000 0x0 0x1000>;
+				reg-names = "csr-reg";
+				clock-output-names = "pcie4clk";
+			};
+		};
+
+		pcie0: pcie@1f2b0000 {
+			status = "disabled";
+			device_type = "pci";
+			compatible = "apm,xgene-storm-pcie", "apm,xgene-pcie";
+			#interrupt-cells = <1>;
+			#size-cells = <2>;
+			#address-cells = <3>;
+			reg = < 0x00 0x1f2b0000 0x0 0x00010000   /* Controller registers */
+				0xe0 0xd0000000 0x0 0x00040000>; /* PCI config space */
+			reg-names = "csr", "cfg";
+			ranges = <0x01000000 0x00 0x00000000 0xe0 0x10000000 0x00 0x00010000   /* io */
+				  0x02000000 0x00 0x80000000 0xe1 0x80000000 0x00 0x80000000>; /* mem */
+			dma-ranges = <0x42000000 0x80 0x00000000 0x80 0x00000000 0x00 0x80000000
+				      0x42000000 0x00 0x00000000 0x00 0x00000000 0x80 0x00000000>;
+			interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+			interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0xc2 0x1
+					 0x0 0x0 0x0 0x2 &gic 0x0 0xc3 0x1
+					 0x0 0x0 0x0 0x3 &gic 0x0 0xc4 0x1
+					 0x0 0x0 0x0 0x4 &gic 0x0 0xc5 0x1>;
+			dma-coherent;
+			clocks = <&pcie0clk 0>;
+		};
+
+		pcie1: pcie@1f2c0000 {
+			status = "disabled";
+			device_type = "pci";
+			compatible = "apm,xgene-storm-pcie", "apm,xgene-pcie";
+			#interrupt-cells = <1>;
+			#size-cells = <2>;
+			#address-cells = <3>;
+			reg = < 0x00 0x1f2c0000 0x0 0x00010000   /* Controller registers */
+				0xd0 0xd0000000 0x0 0x00040000>; /* PCI config space */
+			reg-names = "csr", "cfg";
+			ranges = <0x01000000 0x0 0x00000000 0xd0 0x10000000 0x00 0x00010000   /* io */
+				  0x02000000 0x0 0x80000000 0xd1 0x80000000 0x00 0x80000000>; /* mem */
+			dma-ranges = <0x42000000 0x80 0x00000000 0x80 0x00000000 0x00 0x80000000
+				      0x42000000 0x00 0x00000000 0x00 0x00000000 0x80 0x00000000>;
+			interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+			interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0xc8 0x1
+					 0x0 0x0 0x0 0x2 &gic 0x0 0xc9 0x1
+					 0x0 0x0 0x0 0x3 &gic 0x0 0xca 0x1
+					 0x0 0x0 0x0 0x4 &gic 0x0 0xcb 0x1>;
+			dma-coherent;
+			clocks = <&pcie1clk 0>;
+		};
+
+		pcie2: pcie@1f2d0000 {
+			status = "disabled";
+			device_type = "pci";
+			compatible = "apm,xgene-storm-pcie", "apm,xgene-pcie";
+			#interrupt-cells = <1>;
+			#size-cells = <2>;
+			#address-cells = <3>;
+			reg = < 0x00 0x1f2d0000 0x0 0x00010000   /* Controller registers */
+				0x90 0xd0000000 0x0 0x00040000>; /* PCI config space */
+			reg-names = "csr", "cfg";
+			ranges = <0x01000000 0x0 0x00000000 0x90 0x10000000 0x0 0x00010000   /* io */
+				  0x02000000 0x0 0x80000000 0x91 0x80000000 0x0 0x80000000>; /* mem */
+			dma-ranges = <0x42000000 0x80 0x00000000 0x80 0x00000000 0x00 0x80000000
+				      0x42000000 0x00 0x00000000 0x00 0x00000000 0x80 0x00000000>;
+			interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+			interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0xce 0x1
+					 0x0 0x0 0x0 0x2 &gic 0x0 0xcf 0x1
+					 0x0 0x0 0x0 0x3 &gic 0x0 0xd0 0x1
+					 0x0 0x0 0x0 0x4 &gic 0x0 0xd1 0x1>;
+			dma-coherent;
+			clocks = <&pcie2clk 0>;
+		};
+
+		pcie3: pcie@1f500000 {
+			status = "disabled";
+			device_type = "pci";
+			compatible = "apm,xgene-storm-pcie", "apm,xgene-pcie";
+			#interrupt-cells = <1>;
+			#size-cells = <2>;
+			#address-cells = <3>;
+			reg = < 0x00 0x1f500000 0x0 0x00010000   /* Controller registers */
+				0xa0 0xd0000000 0x0 0x00040000>; /* PCI config space */
+			reg-names = "csr", "cfg";
+			ranges = <0x01000000 0x0 0x00000000 0xa0 0x10000000 0x0 0x00010000   /* io */
+				  0x02000000 0x0 0x80000000 0xa1 0x80000000 0x0 0x80000000>; /* mem */
+			dma-ranges = <0x42000000 0x80 0x00000000 0x80 0x00000000 0x00 0x80000000
+				      0x42000000 0x00 0x00000000 0x00 0x00000000 0x80 0x00000000>;
+			interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+			interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0xd4 0x1
+					 0x0 0x0 0x0 0x2 &gic 0x0 0xd5 0x1
+					 0x0 0x0 0x0 0x3 &gic 0x0 0xd6 0x1
+					 0x0 0x0 0x0 0x4 &gic 0x0 0xd7 0x1>;
+			dma-coherent;
+			clocks = <&pcie3clk 0>;
+		};
+
+		pcie4: pcie@1f510000 {
+			status = "disabled";
+			device_type = "pci";
+			compatible = "apm,xgene-storm-pcie", "apm,xgene-pcie";
+			#interrupt-cells = <1>;
+			#size-cells = <2>;
+			#address-cells = <3>;
+			reg = < 0x00 0x1f510000 0x0 0x00010000   /* Controller registers */
+				0xc0 0xd0000000 0x0 0x00200000>; /* PCI config space */
+			reg-names = "csr", "cfg";
+			ranges = <0x01000000 0x0 0x00000000 0xc0 0x10000000 0x0 0x00010000   /* io */
+				  0x02000000 0x0 0x80000000 0xc1 0x80000000 0x0 0x80000000>; /* mem */
+			dma-ranges = <0x42000000 0x80 0x00000000 0x80 0x00000000 0x00 0x80000000
+				      0x42000000 0x00 0x00000000 0x00 0x00000000 0x80 0x00000000>;
+			interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+			interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0xda 0x1
+					 0x0 0x0 0x0 0x2 &gic 0x0 0xdb 0x1
+					 0x0 0x0 0x0 0x3 &gic 0x0 0xdc 0x1
+					 0x0 0x0 0x0 0x4 &gic 0x0 0xdd 0x1>;
+			dma-coherent;
+			clocks = <&pcie4clk 0>;
 		};
 
 		serial0: serial@1c020000 {
+1
arch/arm64/include/asm/Kbuild
···
 generic-y += msgbuf.h
 generic-y += mutex.h
 generic-y += pci.h
+generic-y += pci-bridge.h
 generic-y += poll.h
 generic-y += preempt.h
 generic-y += resource.h
+2 -1
arch/arm64/include/asm/io.h
···
 /*
  * I/O port access primitives.
  */
-#define IO_SPACE_LIMIT	0xffff
+#define arch_has_dev_port()	(1)
+#define IO_SPACE_LIMIT	(SZ_32M - 1)
 #define PCI_IOBASE	((void __iomem *)(MODULES_VADDR - SZ_32M))
 
 static inline u8 inb(unsigned long addr)
+37
arch/arm64/include/asm/pci.h
···
+#ifndef __ASM_PCI_H
+#define __ASM_PCI_H
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+#include <asm-generic/pci-bridge.h>
+#include <asm-generic/pci-dma-compat.h>
+
+#define PCIBIOS_MIN_IO		0x1000
+#define PCIBIOS_MIN_MEM		0
+
+/*
+ * Set to 1 if the kernel should re-assign all PCI bus numbers
+ */
+#define pcibios_assign_all_busses() \
+	(pci_has_flag(PCI_REASSIGN_ALL_BUS))
+
+/*
+ * PCI address space differs from physical memory address space
+ */
+#define PCI_DMA_BUS_IS_PHYS	(0)
+
+extern int isa_dma_bridge_buggy;
+
+#ifdef CONFIG_PCI
+static inline int pci_proc_domain(struct pci_bus *bus)
+{
+	return 1;
+}
+#endif  /* CONFIG_PCI */
+
+#endif  /* __KERNEL__ */
+#endif  /* __ASM_PCI_H */
+2
arch/arm64/include/asm/pgtable.h
···
 	__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_nGnRnE) | PTE_PXN | PTE_UXN)
 #define pgprot_writecombine(prot) \
 	__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN)
+#define pgprot_device(prot) \
+	__pgprot_modify(prot, PTE_ATTRINDX_MASK, PTE_ATTRINDX(MT_DEVICE_nGnRE) | PTE_PXN | PTE_UXN)
 #define __HAVE_PHYS_MEM_ACCESS_PROT
 struct file;
 extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
+1
arch/arm64/kernel/Makefile
···
 arm64-obj-$(CONFIG_JUMP_LABEL)	+= jump_label.o
 arm64-obj-$(CONFIG_KGDB)	+= kgdb.o
 arm64-obj-$(CONFIG_EFI)		+= efi.o efi-stub.o efi-entry.o
+arm64-obj-$(CONFIG_PCI)		+= pci.o
 
 obj-y				+= $(arm64-obj-y) vdso/
 obj-m				+= $(arm64-obj-m)
+70
arch/arm64/kernel/pci.c
···
+/*
+ * Code borrowed from powerpc/kernel/pci-common.c
+ *
+ * Copyright (C) 2003 Anton Blanchard <anton@au.ibm.com>, IBM
+ * Copyright (C) 2014 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/of_pci.h>
+#include <linux/of_platform.h>
+#include <linux/slab.h>
+
+#include <asm/pci-bridge.h>
+
+/*
+ * Called after each bus is probed, but before its children are examined
+ */
+void pcibios_fixup_bus(struct pci_bus *bus)
+{
+	/* nothing to do, expected to be removed in the future */
+}
+
+/*
+ * We don't have to worry about legacy ISA devices, so nothing to do here
+ */
+resource_size_t pcibios_align_resource(void *data, const struct resource *res,
+				resource_size_t size, resource_size_t align)
+{
+	return res->start;
+}
+
+/*
+ * Try to assign the IRQ number from DT when adding a new device
+ */
+int pcibios_add_device(struct pci_dev *dev)
+{
+	dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
+
+	return 0;
+}
+
+#ifdef CONFIG_PCI_DOMAINS_GENERIC
+static bool dt_domain_found = false;
+
+void pci_bus_assign_domain_nr(struct pci_bus *bus, struct device *parent)
+{
+	int domain = of_get_pci_domain_nr(parent->of_node);
+
+	if (domain >= 0) {
+		dt_domain_found = true;
+	} else if (dt_domain_found == true) {
+		dev_err(parent, "Node %s is missing \"linux,pci-domain\" property in DT\n",
+			parent->of_node->full_name);
+		return;
+	} else {
+		domain = pci_get_new_domain_nr();
+	}
+
+	bus->domain_nr = domain;
+}
+#endif
+1 -1
arch/ia64/kernel/msi_ia64.c
···
 	if (irq_prepare_move(irq, cpu))
 		return -1;
 
-	get_cached_msi_msg(irq, &msg);
+	__get_cached_msi_msg(idata->msi_desc, &msg);
 
 	addr = msg.address_lo;
 	addr &= MSI_ADDR_DEST_ID_MASK;
+2 -2
arch/ia64/sn/kernel/msi_sn.c
···
 	 * Release XIO resources for the old MSI PCI address
 	 */
 
-	get_cached_msi_msg(irq, &msg);
-	sn_pdev = (struct pcidev_info *)sn_irq_info->irq_pciioinfo;
+	__get_cached_msi_msg(data->msi_desc, &msg);
+	sn_pdev = (struct pcidev_info *)sn_irq_info->irq_pciioinfo;
 	pdev = sn_pdev->pdi_linux_pcidev;
 	provider = SN_PCIDEV_BUSPROVIDER(pdev);
+2 -4
arch/mips/pci/msi-octeon.c
···
 	 * wants.  Most devices only want 1, which will give
 	 * configured_private_bits and request_private_bits equal 0.
 	 */
-	pci_read_config_word(dev, desc->msi_attrib.pos + PCI_MSI_FLAGS,
-			     &control);
+	pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control);
 
 	/*
 	 * If the number of private bits has been configured then use
···
 	/* Update the number of IRQs the device has available to it */
 	control &= ~PCI_MSI_FLAGS_QSIZE;
 	control |= request_private_bits << 4;
-	pci_write_config_word(dev, desc->msi_attrib.pos + PCI_MSI_FLAGS,
-			      control);
+	pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control);
 
 	irq_set_msi_desc(irq, desc);
 	write_msi_msg(irq, &msg);
-2
arch/powerpc/include/asm/machdep.h
···
 	int (*pci_setup_phb)(struct pci_controller *host);
 
 #ifdef CONFIG_PCI_MSI
-	int (*msi_check_device)(struct pci_dev* dev,
-				int nvec, int type);
 	int (*setup_msi_irqs)(struct pci_dev *dev,
 			      int nvec, int type);
 	void (*teardown_msi_irqs)(struct pci_dev *dev);
+1 -11
arch/powerpc/kernel/msi.c
···
 
 #include <asm/machdep.h>
 
-int arch_msi_check_device(struct pci_dev* dev, int nvec, int type)
+int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 {
 	if (!ppc_md.setup_msi_irqs || !ppc_md.teardown_msi_irqs) {
 		pr_debug("msi: Platform doesn't provide MSI callbacks.\n");
···
 	if (type == PCI_CAP_ID_MSI && nvec > 1)
 		return 1;
 
-	if (ppc_md.msi_check_device) {
-		pr_debug("msi: Using platform check routine.\n");
-		return ppc_md.msi_check_device(dev, nvec, type);
-	}
-
-	return 0;
-}
-
-int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
-{
 	return ppc_md.setup_msi_irqs(dev, nvec, type);
 }
-9
arch/powerpc/platforms/cell/axon_msi.c
···
 	return msic;
 }
 
-static int axon_msi_check_device(struct pci_dev *dev, int nvec, int type)
-{
-	if (!find_msi_translator(dev))
-		return -ENODEV;
-
-	return 0;
-}
-
 static int setup_msi_msg_address(struct pci_dev *dev, struct msi_msg *msg)
 {
 	struct device_node *dn;
···
 
 	ppc_md.setup_msi_irqs = axon_msi_setup_msi_irqs;
 	ppc_md.teardown_msi_irqs = axon_msi_teardown_msi_irqs;
-	ppc_md.msi_check_device = axon_msi_check_device;
 
 	axon_msi_debug_setup(dn, msic);
 
+5 -14
arch/powerpc/platforms/powernv/pci.c
···
 //#define cfg_dbg(fmt...)	printk(fmt)
 
 #ifdef CONFIG_PCI_MSI
-static int pnv_msi_check_device(struct pci_dev* pdev, int nvec, int type)
-{
-	struct pci_controller *hose = pci_bus_to_host(pdev->bus);
-	struct pnv_phb *phb = hose->private_data;
-	struct pci_dn *pdn = pci_get_pdn(pdev);
-
-	if (pdn && pdn->force_32bit_msi && !phb->msi32_support)
-		return -ENODEV;
-
-	return (phb && phb->msi_bmp.bitmap) ? 0 : -ENODEV;
-}
-
 static int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
 {
 	struct pci_controller *hose = pci_bus_to_host(pdev->bus);
 	struct pnv_phb *phb = hose->private_data;
+	struct pci_dn *pdn = pci_get_pdn(pdev);
 	struct msi_desc *entry;
 	struct msi_msg msg;
 	int hwirq;
 	unsigned int virq;
 	int rc;
 
-	if (WARN_ON(!phb))
+	if (WARN_ON(!phb) || !phb->msi_bmp.bitmap)
+		return -ENODEV;
+
+	if (pdn && pdn->force_32bit_msi && !phb->msi32_support)
 		return -ENODEV;
 
 	list_for_each_entry(entry, &pdev->msi_list, list) {
···
 
 	/* Configure MSIs */
 #ifdef CONFIG_PCI_MSI
-	ppc_md.msi_check_device = pnv_msi_check_device;
 	ppc_md.setup_msi_irqs = pnv_setup_msi_irqs;
 	ppc_md.teardown_msi_irqs = pnv_teardown_msi_irqs;
 #endif
+17 -27
arch/powerpc/platforms/pseries/msi.c
···
 	return request;
 }
 
-static int rtas_msi_check_device(struct pci_dev *pdev, int nvec, int type)
-{
-	int quota, rc;
-
-	if (type == PCI_CAP_ID_MSIX)
-		rc = check_req_msix(pdev, nvec);
-	else
-		rc = check_req_msi(pdev, nvec);
-
-	if (rc)
-		return rc;
-
-	quota = msi_quota_for_device(pdev, nvec);
-
-	if (quota && quota < nvec)
-		return quota;
-
-	return 0;
-}
-
 static int check_msix_entries(struct pci_dev *pdev)
 {
 	struct msi_desc *entry;
···
 static int rtas_setup_msi_irqs(struct pci_dev *pdev, int nvec_in, int type)
 {
 	struct pci_dn *pdn;
-	int hwirq, virq, i, rc;
+	int hwirq, virq, i, quota, rc;
 	struct msi_desc *entry;
 	struct msi_msg msg;
 	int nvec = nvec_in;
 	int use_32bit_msi_hack = 0;
 
-	pdn = pci_get_pdn(pdev);
-	if (!pdn)
-		return -ENODEV;
+	if (type == PCI_CAP_ID_MSIX)
+		rc = check_req_msix(pdev, nvec);
+	else
+		rc = check_req_msi(pdev, nvec);
+
+	if (rc)
+		return rc;
+
+	quota = msi_quota_for_device(pdev, nvec);
+
+	if (quota && quota < nvec)
+		return quota;
 
 	if (type == PCI_CAP_ID_MSIX && check_msix_entries(pdev))
 		return -EINVAL;
···
 	 */
 	if (type == PCI_CAP_ID_MSIX) {
 		int m = roundup_pow_of_two(nvec);
-		int quota = msi_quota_for_device(pdev, m);
+		quota = msi_quota_for_device(pdev, m);
 
 		if (quota >= m)
 			nvec = m;
 	}
+
+	pdn = pci_get_pdn(pdev);
 
 	/*
 	 * Try the new more explicit firmware interface, if that fails fall
···
 		irq_set_msi_desc(virq, entry);
 
 		/* Read config space back so we can restore after reset */
-		read_msi_msg(virq, &msg);
+		__read_msi_msg(entry, &msg);
 		entry->msg = msg;
 	}
 
···
 	WARN_ON(ppc_md.setup_msi_irqs);
 	ppc_md.setup_msi_irqs = rtas_setup_msi_irqs;
 	ppc_md.teardown_msi_irqs = rtas_teardown_msi_irqs;
-	ppc_md.msi_check_device = rtas_msi_check_device;
 
 	WARN_ON(ppc_md.pci_irq_fixup);
 	ppc_md.pci_irq_fixup = rtas_msi_pci_irq_fixup;
+3 -9
arch/powerpc/sysdev/fsl_msi.c
···
 	return 0;
 }
 
-static int fsl_msi_check_device(struct pci_dev *pdev, int nvec, int type)
-{
-	if (type == PCI_CAP_ID_MSIX)
-		pr_debug("fslmsi: MSI-X untested, trying anyway.\n");
-
-	return 0;
-}
-
 static void fsl_teardown_msi_irqs(struct pci_dev *pdev)
 {
 	struct msi_desc *entry;
···
 	struct msi_desc *entry;
 	struct msi_msg msg;
 	struct fsl_msi *msi_data;
+
+	if (type == PCI_CAP_ID_MSIX)
+		pr_debug("fslmsi: MSI-X untested, trying anyway.\n");
 
 	/*
 	 * If the PCI node has an fsl,msi property, then we need to use it
···
 	if (!ppc_md.setup_msi_irqs) {
 		ppc_md.setup_msi_irqs = fsl_setup_msi_irqs;
 		ppc_md.teardown_msi_irqs = fsl_teardown_msi_irqs;
-		ppc_md.msi_check_device = fsl_msi_check_device;
 	} else if (ppc_md.setup_msi_irqs != fsl_setup_msi_irqs) {
 		dev_err(&dev->dev, "Different MSI driver already installed!\n");
 		err = -ENODEV;
+2 -9
arch/powerpc/sysdev/mpic_pasemi_msi.c
···
 	.name = "PASEMI-MSI",
 };
 
-static int pasemi_msi_check_device(struct pci_dev *pdev, int nvec, int type)
-{
-	if (type == PCI_CAP_ID_MSIX)
-		pr_debug("pasemi_msi: MSI-X untested, trying anyway\n");
-
-	return 0;
-}
-
 static void pasemi_msi_teardown_msi_irqs(struct pci_dev *pdev)
 {
 	struct msi_desc *entry;
···
 	struct msi_msg msg;
 	int hwirq;
 
+	if (type == PCI_CAP_ID_MSIX)
+		pr_debug("pasemi_msi: MSI-X untested, trying anyway\n");
 	pr_debug("pasemi_msi_setup_msi_irqs, pdev %p nvec %d type %d\n",
 		 pdev, nvec, type);
 
···
 	WARN_ON(ppc_md.setup_msi_irqs);
 	ppc_md.setup_msi_irqs = pasemi_msi_setup_msi_irqs;
 	ppc_md.teardown_msi_irqs = pasemi_msi_teardown_msi_irqs;
-	ppc_md.msi_check_device = pasemi_msi_check_device;
 
 	return 0;
 }
+11 -17
arch/powerpc/sysdev/mpic_u3msi.c
···
 	return 0;
 }
 
-static int u3msi_msi_check_device(struct pci_dev *pdev, int nvec, int type)
-{
-	if (type == PCI_CAP_ID_MSIX)
-		pr_debug("u3msi: MSI-X untested, trying anyway.\n");
-
-	/* If we can't find a magic address then MSI ain't gonna work */
-	if (find_ht_magic_addr(pdev, 0) == 0 &&
-	    find_u4_magic_addr(pdev, 0) == 0) {
-		pr_debug("u3msi: no magic address found for %s\n",
-			 pci_name(pdev));
-		return -ENXIO;
-	}
-
-	return 0;
-}
-
 static void u3msi_teardown_msi_irqs(struct pci_dev *pdev)
 {
 	struct msi_desc *entry;
···
 	struct msi_msg msg;
 	u64 addr;
 	int hwirq;
+
+	if (type == PCI_CAP_ID_MSIX)
+		pr_debug("u3msi: MSI-X untested, trying anyway.\n");
+
+	/* If we can't find a magic address then MSI ain't gonna work */
+	if (find_ht_magic_addr(pdev, 0) == 0 &&
+	    find_u4_magic_addr(pdev, 0) == 0) {
+		pr_debug("u3msi: no magic address found for %s\n",
+			 pci_name(pdev));
+		return -ENXIO;
+	}
 
 	list_for_each_entry(entry, &pdev->msi_list, list) {
 		hwirq = msi_bitmap_alloc_hwirqs(&msi_mpic->msi_bitmap, 1);
···
 	WARN_ON(ppc_md.setup_msi_irqs);
 	ppc_md.setup_msi_irqs = u3msi_setup_msi_irqs;
 	ppc_md.teardown_msi_irqs = u3msi_teardown_msi_irqs;
-	ppc_md.msi_check_device = u3msi_msi_check_device;
 
 	return 0;
 }
+6 -12
arch/powerpc/sysdev/ppc4xx_hsta_msi.c
···
 	int irq, hwirq;
 	u64 addr;
 
+	/* We don't support MSI-X */
+	if (type == PCI_CAP_ID_MSIX) {
+		pr_debug("%s: MSI-X not supported.\n", __func__);
+		return -EINVAL;
+	}
+
 	list_for_each_entry(entry, &dev->msi_list, list) {
 		irq = msi_bitmap_alloc_hwirqs(&ppc4xx_hsta_msi.bmp, 1);
 		if (irq < 0) {
···
 	}
 }
 
-static int hsta_msi_check_device(struct pci_dev *pdev, int nvec, int type)
-{
-	/* We don't support MSI-X */
-	if (type == PCI_CAP_ID_MSIX) {
-		pr_debug("%s: MSI-X not supported.\n", __func__);
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
 static int hsta_msi_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
···
 
 	ppc_md.setup_msi_irqs = hsta_setup_msi_irqs;
 	ppc_md.teardown_msi_irqs = hsta_teardown_msi_irqs;
-	ppc_md.msi_check_device = hsta_msi_check_device;
 	return 0;
 
 out2:
+6 -13
arch/powerpc/sysdev/ppc4xx_msi.c
···
 	struct msi_desc *entry;
 	struct ppc4xx_msi *msi_data = &ppc4xx_msi;
 
-	msi_data->msi_virqs = kmalloc((msi_irqs) * sizeof(int),
-				      GFP_KERNEL);
+	dev_dbg(&dev->dev, "PCIE-MSI:%s called. vec %x type %d\n",
+		__func__, nvec, type);
+	if (type == PCI_CAP_ID_MSIX)
+		pr_debug("ppc4xx msi: MSI-X untested, trying anyway.\n");
+
+	msi_data->msi_virqs = kmalloc((msi_irqs) * sizeof(int), GFP_KERNEL);
 	if (!msi_data->msi_virqs)
 		return -ENOMEM;
 
···
 			       virq_to_hw(entry->irq), 1);
 		irq_dispose_mapping(entry->irq);
 	}
-}
-
-static int ppc4xx_msi_check_device(struct pci_dev *pdev, int nvec, int type)
-{
-	dev_dbg(&pdev->dev, "PCIE-MSI:%s called. vec %x type %d\n",
-		__func__, nvec, type);
-	if (type == PCI_CAP_ID_MSIX)
-		pr_debug("ppc4xx msi: MSI-X untested, trying anyway.\n");
-
-	return 0;
 }
 
 static int ppc4xx_setup_pcieh_hw(struct platform_device *dev,
···
 
 	ppc_md.setup_msi_irqs = ppc4xx_setup_msi_irqs;
 	ppc_md.teardown_msi_irqs = ppc4xx_teardown_msi_irqs;
-	ppc_md.msi_check_device = ppc4xx_msi_check_device;
 	return err;
 
 error_out:
+10 -10
arch/x86/pci/common.c
···
  */
 DEFINE_RAW_SPINLOCK(pci_config_lock);
 
-static int can_skip_ioresource_align(const struct dmi_system_id *d)
+static int __init can_skip_ioresource_align(const struct dmi_system_id *d)
 {
 	pci_probe |= PCI_CAN_SKIP_ISA_ALIGN;
 	printk(KERN_INFO "PCI: %s detected, can skip ISA alignment\n", d->ident);
 	return 0;
 }
 
-static const struct dmi_system_id can_skip_pciprobe_dmi_table[] = {
+static const struct dmi_system_id can_skip_pciprobe_dmi_table[] __initconst = {
 	/*
 	 * Systems where PCI IO resource ISA alignment can be skipped
 	 * when the ISA enable bit in the bridge control is not set
···
  * on the kernel command line (which was parsed earlier).
  */
 
-static int set_bf_sort(const struct dmi_system_id *d)
+static int __init set_bf_sort(const struct dmi_system_id *d)
 {
 	if (pci_bf_sort == pci_bf_sort_default) {
 		pci_bf_sort = pci_dmi_bf;
···
 	return 0;
 }
 
-static void read_dmi_type_b1(const struct dmi_header *dm,
-			     void *private_data)
+static void __init read_dmi_type_b1(const struct dmi_header *dm,
+				    void *private_data)
 {
 	u8 *d = (u8 *)dm + 4;
 
···
 	}
 }
 
-static int find_sort_method(const struct dmi_system_id *d)
+static int __init find_sort_method(const struct dmi_system_id *d)
 {
 	dmi_walk(read_dmi_type_b1, NULL);
 
···
  * Enable renumbering of PCI bus# ranges to reach all PCI busses (Cardbus)
  */
 #ifdef __i386__
-static int assign_all_busses(const struct dmi_system_id *d)
+static int __init assign_all_busses(const struct dmi_system_id *d)
 {
 	pci_probe |= PCI_ASSIGN_ALL_BUSSES;
 	printk(KERN_INFO "%s detected: enabling PCI bus# renumbering"
···
 }
 #endif
 
-static int set_scan_all(const struct dmi_system_id *d)
+static int __init set_scan_all(const struct dmi_system_id *d)
 {
 	printk(KERN_INFO "PCI: %s detected, enabling pci=pcie_scan_all\n",
 	       d->ident);
···
 	return 0;
 }
 
-static const struct dmi_system_id pciprobe_dmi_table[] = {
+static const struct dmi_system_id pciprobe_dmi_table[] __initconst = {
 #ifdef __i386__
 	/*
 	 * Laptops which need pci=assign-busses to see Cardbus cards
···
 	return 0;
 }
 
-char * __init pcibios_setup(char *str)
+char *__init pcibios_setup(char *str)
 {
 	if (!strcmp(str, "off")) {
 		pci_probe = 0;
+22 -18
arch/x86/pci/mmconfig-shared.c
···
 
 LIST_HEAD(pci_mmcfg_list);
 
-static __init void pci_mmconfig_remove(struct pci_mmcfg_region *cfg)
+static void __init pci_mmconfig_remove(struct pci_mmcfg_region *cfg)
 {
 	if (cfg->res.parent)
 		release_resource(&cfg->res);
···
 	kfree(cfg);
 }
 
-static __init void free_all_mmcfg(void)
+static void __init free_all_mmcfg(void)
 {
 	struct pci_mmcfg_region *cfg, *tmp;
 
···
 	return new;
 }
 
-static __init struct pci_mmcfg_region *pci_mmconfig_add(int segment, int start,
+static struct pci_mmcfg_region *__init pci_mmconfig_add(int segment, int start,
 							int end, u64 addr)
 {
 	struct pci_mmcfg_region *new;
···
 	return NULL;
 }
 
-static const char __init *pci_mmcfg_e7520(void)
+static const char *__init pci_mmcfg_e7520(void)
 {
 	u32 win;
 	raw_pci_ops->read(0, 0, PCI_DEVFN(0, 0), 0xce, 2, &win);
···
 	return "Intel Corporation E7520 Memory Controller Hub";
 }
 
-static const char __init *pci_mmcfg_intel_945(void)
+static const char *__init pci_mmcfg_intel_945(void)
 {
 	u32 pciexbar, mask = 0, len = 0;
 
···
 	return "Intel Corporation 945G/GZ/P/PL Express Memory Controller Hub";
 }
 
-static const char __init *pci_mmcfg_amd_fam10h(void)
+static const char *__init pci_mmcfg_amd_fam10h(void)
 {
 	u32 low, high, address;
 	u64 base, msr;
···
 }
 
 static bool __initdata mcp55_checked;
-static const char __init *pci_mmcfg_nvidia_mcp55(void)
+static const char *__init pci_mmcfg_nvidia_mcp55(void)
 {
 	int bus;
 	int mcp55_mmconf_found = 0;
 
-	static const u32 extcfg_regnum		= 0x90;
-	static const u32 extcfg_regsize		= 4;
-	static const u32 extcfg_enable_mask	= 1<<31;
-	static const u32 extcfg_start_mask	= 0xff<<16;
-	static const int extcfg_start_shift	= 16;
-	static const u32 extcfg_size_mask	= 0x3<<28;
-	static const int extcfg_size_shift	= 28;
-	static const int extcfg_sizebus[]	= {0x100, 0x80, 0x40, 0x20};
-	static const u32 extcfg_base_mask[]	= {0x7ff8, 0x7ffc, 0x7ffe, 0x7fff};
-	static const int extcfg_base_lshift	= 25;
+	static const u32 extcfg_regnum __initconst		= 0x90;
+	static const u32 extcfg_regsize __initconst		= 4;
+	static const u32 extcfg_enable_mask __initconst		= 1 << 31;
+	static const u32 extcfg_start_mask __initconst		= 0xff << 16;
+	static const int extcfg_start_shift __initconst		= 16;
+	static const u32 extcfg_size_mask __initconst		= 0x3 << 28;
+	static const int extcfg_size_shift __initconst		= 28;
+	static const int extcfg_sizebus[] __initconst		= {
+		0x100, 0x80, 0x40, 0x20
+	};
+	static const u32 extcfg_base_mask[] __initconst		= {
+		0x7ff8, 0x7ffc, 0x7ffe, 0x7fff
+	};
+	static const int extcfg_base_lshift __initconst		= 25;
 
 	/*
 	 * do check if amd fam10h already took over
···
 	const char *(*probe)(void);
 };
 
-static struct pci_mmcfg_hostbridge_probe pci_mmcfg_probes[] __initdata = {
+static const struct pci_mmcfg_hostbridge_probe pci_mmcfg_probes[] __initconst = {
 	{ 0, PCI_DEVFN(0, 0), PCI_VENDOR_ID_INTEL,
 	  PCI_DEVICE_ID_INTEL_E7520_MCH, pci_mmcfg_e7520 },
 	{ 0, PCI_DEVFN(0, 0), PCI_VENDOR_ID_INTEL,
+4 -4
arch/x86/pci/pcbios.c
···
 static struct {
 	unsigned long address;
 	unsigned short segment;
-} bios32_indirect = { 0, __KERNEL_CS };
+} bios32_indirect __initdata = { 0, __KERNEL_CS };
 
 /*
  * Returns the entry point for the given service, NULL on error
  */
 
-static unsigned long bios32_service(unsigned long service)
+static unsigned long __init bios32_service(unsigned long service)
 {
 	unsigned char return_code;	/* %al */
 	unsigned long address;		/* %ebx */
···
 
 static int pci_bios_present;
 
-static int check_pcibios(void)
+static int __init check_pcibios(void)
 {
 	u32 signature, eax, ebx, ecx;
 	u8 status, major_ver, minor_ver, hw_mech;
···
  * Try to find PCI BIOS.
  */
 
-static const struct pci_raw_ops *pci_find_bios(void)
+static const struct pci_raw_ops *__init pci_find_bios(void)
 {
 	union bios32 *check;
 	unsigned char sum;
-1
drivers/gpu/drm/vmwgfx/svga_reg.h
···
 /*
  * PCI device IDs.
  */
-#define PCI_VENDOR_ID_VMWARE            0x15AD
 #define PCI_DEVICE_ID_VMWARE_SVGA2      0x0405
 
 /*
+4 -10
drivers/irqchip/irq-armada-370-xp.c
···
 	struct msi_msg msg;
 	int virq, hwirq;
 
+	/* We support MSI, but not MSI-X */
+	if (desc->msi_attrib.is_msix)
+		return -EINVAL;
+
 	hwirq = armada_370_xp_alloc_msi();
 	if (hwirq < 0)
 		return hwirq;
···
 
 	irq_dispose_mapping(irq);
 	armada_370_xp_free_msi(hwirq);
-}
-
-static int armada_370_xp_check_msi_device(struct msi_chip *chip, struct pci_dev *dev,
-					  int nvec, int type)
-{
-	/* We support MSI, but not MSI-X */
-	if (type == PCI_CAP_ID_MSI)
-		return 0;
-	return -EINVAL;
 }
 
 static struct irq_chip armada_370_xp_msi_irq_chip = {
···
 
 	msi_chip->setup_irq = armada_370_xp_setup_msi_irq;
 	msi_chip->teardown_irq = armada_370_xp_teardown_msi_irq;
-	msi_chip->check_device = armada_370_xp_check_msi_device;
 	msi_chip->of_node = node;
 
 	armada_370_xp_msi_domain =
-1
drivers/misc/vmw_vmci/vmci_guest.c
···
 #include "vmci_driver.h"
 #include "vmci_event.h"
 
-#define PCI_VENDOR_ID_VMWARE		0x15AD
 #define PCI_DEVICE_ID_VMWARE_VMCI	0x0740
 
 #define VMCI_UTIL_NUM_RESOURCES 1
-1
drivers/net/vmxnet3/vmxnet3_int.h
···
 /*
  * PCI vendor and device IDs.
  */
-#define PCI_VENDOR_ID_VMWARE  0x15AD
 #define PCI_DEVICE_ID_VMWARE_VMXNET3  0x07B0
 #define MAX_ETHERNET_CARDS		10
 #define MAX_PCI_PASSTHRU_DEVICE		6
+154
drivers/of/address.c
···
 #include <linux/module.h>
 #include <linux/of_address.h>
 #include <linux/pci_regs.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
 #include <linux/string.h>
 
 /* Max address size we deal with */
···
 }
 EXPORT_SYMBOL_GPL(of_pci_range_parser_one);
 
+/*
+ * of_pci_range_to_resource - Create a resource from an of_pci_range
+ * @range:	the PCI range that describes the resource
+ * @np:		device node where the range belongs to
+ * @res:	pointer to a valid resource that will be updated to
+ *		reflect the values contained in the range.
+ *
+ * Returns -EINVAL if the range cannot be converted to a resource.
+ *
+ * Note that if the range is an IO range, the resource will be converted
+ * using pci_address_to_pio() which can fail if it is called too early or
+ * if the range cannot be matched to any host bridge IO space (our case here).
+ * To guard against that we try to register the IO range first.
+ * If that fails we know that pci_address_to_pio() will do too.
+ */
+int of_pci_range_to_resource(struct of_pci_range *range,
+			     struct device_node *np, struct resource *res)
+{
+	int err;
+
+	res->flags = range->flags;
+	res->parent = res->child = res->sibling = NULL;
+	res->name = np->full_name;
+
+	if (res->flags & IORESOURCE_IO) {
+		unsigned long port;
+
+		err = pci_register_io_range(range->cpu_addr, range->size);
+		if (err)
+			goto invalid_range;
+		port = pci_address_to_pio(range->cpu_addr);
+		if (port == (unsigned long)-1) {
+			err = -EINVAL;
+			goto invalid_range;
+		}
+		res->start = port;
+	} else {
+		res->start = range->cpu_addr;
+	}
+	res->end = res->start + range->size - 1;
+	return 0;
+
+invalid_range:
+	res->start = (resource_size_t)OF_BAD_ADDR;
+	res->end = (resource_size_t)OF_BAD_ADDR;
+	return err;
+}
 #endif /* CONFIG_PCI */
 
 /*
···
 }
 EXPORT_SYMBOL(of_get_address);
 
+#ifdef PCI_IOBASE
+struct io_range {
+	struct list_head list;
+	phys_addr_t start;
+	resource_size_t size;
+};
+
+static LIST_HEAD(io_range_list);
+static DEFINE_SPINLOCK(io_range_lock);
+#endif
+
+/*
+ * Record the PCI IO range (expressed as CPU physical address + size).
+ * Return a negative value if an error has occurred, zero otherwise.
+ */
+int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
+{
+	int err = 0;
+
+#ifdef PCI_IOBASE
+	struct io_range *range;
+	resource_size_t allocated_size = 0;
+
+	/* check if the range hasn't been previously recorded */
+	spin_lock(&io_range_lock);
+	list_for_each_entry(range, &io_range_list, list) {
+		if (addr >= range->start && addr + size <= range->start + size) {
+			/* range already registered, bail out */
+			goto end_register;
+		}
+		allocated_size += range->size;
+	}
+
+	/* range not registered yet, check for available space */
+	if (allocated_size + size - 1 > IO_SPACE_LIMIT) {
+		/* if it's too big check if 64K space can be reserved */
+		if (allocated_size + SZ_64K - 1 > IO_SPACE_LIMIT) {
+			err = -E2BIG;
+			goto end_register;
+		}
+
+		size = SZ_64K;
+		pr_warn("Requested IO range too big, new size set to 64K\n");
+	}
+
+	/* add the range to the list */
+	range = kzalloc(sizeof(*range), GFP_KERNEL);
+	if (!range) {
+		err = -ENOMEM;
+		goto end_register;
+	}
+
+	range->start = addr;
+	range->size = size;
+
+	list_add_tail(&range->list, &io_range_list);
+
+end_register:
+	spin_unlock(&io_range_lock);
+#endif
+
+	return err;
+}
+
+phys_addr_t pci_pio_to_address(unsigned long pio)
+{
+	phys_addr_t address = (phys_addr_t)OF_BAD_ADDR;
+
+#ifdef PCI_IOBASE
+	struct io_range *range;
+	resource_size_t allocated_size = 0;
+
+	if (pio > IO_SPACE_LIMIT)
+		return address;
+
+	spin_lock(&io_range_lock);
+	list_for_each_entry(range, &io_range_list, list) {
+		if (pio >= allocated_size && pio < allocated_size + range->size) {
+			address = range->start + pio - allocated_size;
+			break;
+		}
+		allocated_size += range->size;
+	}
+	spin_unlock(&io_range_lock);
+#endif
+
+	return address;
+}
+
 unsigned long __weak pci_address_to_pio(phys_addr_t address)
 {
+#ifdef PCI_IOBASE
+	struct io_range *res;
+	resource_size_t offset = 0;
+	unsigned long addr = -1;
+
+	spin_lock(&io_range_lock);
+	list_for_each_entry(res, &io_range_list, list) {
+		if (address >= res->start && address < res->start + res->size) {
+			addr = res->start - address + offset;
+			break;
+		}
+		offset += res->size;
+	}
+	spin_unlock(&io_range_lock);
+
+	return addr;
+#else
 	if (address > IO_SPACE_LIMIT)
 		return (unsigned long)-1;
 
 	return (unsigned long) address;
+#endif
 }
 
 static int __of_address_to_resource(struct device_node *dev,
+142
drivers/of/of_pci.c
···
 #include <linux/kernel.h>
 #include <linux/export.h>
 #include <linux/of.h>
+#include <linux/of_address.h>
 #include <linux/of_pci.h>
+#include <linux/slab.h>
 
 static inline int __of_pci_pci_compare(struct device_node *node,
 				       unsigned int data)
···
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_pci_parse_bus_range);
+
+/**
+ * This function will try to obtain the host bridge domain number by
+ * finding a property called "linux,pci-domain" of the given device node.
+ *
+ * @node: device tree node with the domain information
+ *
+ * Returns the associated domain number from DT in the range [0-0xffff], or
+ * a negative value if the required property is not found.
+ */
+int of_get_pci_domain_nr(struct device_node *node)
+{
+	const __be32 *value;
+	int len;
+	u16 domain;
+
+	value = of_get_property(node, "linux,pci-domain", &len);
+	if (!value || len < sizeof(*value))
+		return -EINVAL;
+
+	domain = (u16)be32_to_cpup(value);
+
+	return domain;
+}
+EXPORT_SYMBOL_GPL(of_get_pci_domain_nr);
+
+#if defined(CONFIG_OF_ADDRESS)
+/**
+ * of_pci_get_host_bridge_resources - Parse PCI host bridge resources from DT
+ * @dev: device node of the host bridge having the range property
+ * @busno: bus number associated with the bridge root bus
+ * @bus_max: maximum number of buses for this bridge
+ * @resources: list where the range of resources will be added after DT parsing
+ * @io_base: pointer to a variable that will contain on return the physical
+ * address for the start of the I/O range. Can be NULL if the caller doesn't
+ * expect IO ranges to be present in the device tree.
+ *
+ * It is the caller's job to free the @resources list.
+ *
+ * This function will parse the "ranges" property of a PCI host bridge device
+ * node and setup the resource mapping based on its content. It is expected
+ * that the property conforms with the Power ePAPR document.
+ *
+ * It returns zero if the range parsing has been successful or a standard error
+ * value if it failed.
+ */
+int of_pci_get_host_bridge_resources(struct device_node *dev,
+			unsigned char busno, unsigned char bus_max,
+			struct list_head *resources, resource_size_t *io_base)
+{
+	struct resource *res;
+	struct resource *bus_range;
+	struct of_pci_range range;
+	struct of_pci_range_parser parser;
+	char range_type[4];
+	int err;
+
+	if (io_base)
+		*io_base = (resource_size_t)OF_BAD_ADDR;
+
+	bus_range = kzalloc(sizeof(*bus_range), GFP_KERNEL);
+	if (!bus_range)
+		return -ENOMEM;
+
+	pr_info("PCI host bridge %s ranges:\n", dev->full_name);
+
+	err = of_pci_parse_bus_range(dev, bus_range);
+	if (err) {
+		bus_range->start = busno;
+		bus_range->end = bus_max;
+		bus_range->flags = IORESOURCE_BUS;
+		pr_info("  No bus range found for %s, using %pR\n",
+			dev->full_name, bus_range);
+	} else {
+		if (bus_range->end > bus_range->start + bus_max)
+			bus_range->end = bus_range->start + bus_max;
+	}
+	pci_add_resource(resources, bus_range);
+
+	/* Check for ranges property */
+	err = of_pci_range_parser_init(&parser, dev);
+	if (err)
+		goto parse_failed;
+
+	pr_debug("Parsing ranges property...\n");
+	for_each_of_pci_range(&parser, &range) {
+		/* Read next ranges element */
+		if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_IO)
+			snprintf(range_type, 4, " IO");
+		else if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_MEM)
+			snprintf(range_type, 4, "MEM");
+		else
+			snprintf(range_type, 4, "err");
+		pr_info("  %s %#010llx..%#010llx -> %#010llx\n", range_type,
+			range.cpu_addr, range.cpu_addr + range.size - 1,
+			range.pci_addr);
+
+		/*
+		 * If we failed translation or got a zero-sized region
+		 * then skip this range
+		 */
+		if (range.cpu_addr == OF_BAD_ADDR || range.size == 0)
+			continue;
+
+		res = kzalloc(sizeof(struct resource), GFP_KERNEL);
+		if (!res) {
+			err = -ENOMEM;
+			goto parse_failed;
+		}
+
+		err = of_pci_range_to_resource(&range, dev, res);
+		if (err)
+			goto conversion_failed;
+
+		if (resource_type(res) == IORESOURCE_IO) {
+			if (!io_base) {
+				pr_err("I/O range found for %s. Please provide an io_base pointer to save CPU base address\n",
+					dev->full_name);
+				err = -EINVAL;
+				goto conversion_failed;
+			}
+			if (*io_base != (resource_size_t)OF_BAD_ADDR)
+				pr_warn("More than one I/O resource converted for %s. CPU base address for old range lost!\n",
+					dev->full_name);
+			*io_base = range.cpu_addr;
+		}
+
+		pci_add_resource_offset(resources, res, res->start - range.pci_addr);
+	}
+
+	return 0;
+
+conversion_failed:
+	kfree(res);
+parse_failed:
+	pci_free_resource_list(resources);
+	return err;
+}
+EXPORT_SYMBOL_GPL(of_pci_get_host_bridge_resources);
+#endif /* CONFIG_OF_ADDRESS */
 
 #ifdef CONFIG_PCI_MSI
 
+28
drivers/pci/host/Kconfig
··· 63 63 help 64 64 Say Y here if you want PCIe support on SPEAr13XX SoCs. 65 65 66 + config PCI_KEYSTONE 67 + bool "TI Keystone PCIe controller" 68 + depends on ARCH_KEYSTONE 69 + select PCIE_DW 70 + select PCIEPORTBUS 71 + help 72 + Say Y here if you want to enable PCI controller support on Keystone 73 + SoCs. The PCI controller on Keystone is based on Designware hardware 74 + and therefore the driver re-uses the Designware core functions to 75 + implement the driver. 76 + 77 + config PCIE_XILINX 78 + bool "Xilinx AXI PCIe host bridge support" 79 + depends on ARCH_ZYNQ 80 + help 81 + Say 'Y' here if you want kernel to support the Xilinx AXI PCIe 82 + Host Bridge driver. 83 + 84 + config PCI_XGENE 85 + bool "X-Gene PCIe controller" 86 + depends on ARCH_XGENE 87 + depends on OF 88 + select PCIEPORTBUS 89 + help 90 + Say Y here if you want internal PCI support on APM X-Gene SoC. 91 + There are 5 internal PCIe ports available. Each port is GEN3 capable 92 + and have varied lanes from x1 to x8. 93 + 66 94 endmenu
+3
drivers/pci/host/Makefile
··· 8 8 obj-$(CONFIG_PCI_RCAR_GEN2_PCIE) += pcie-rcar.o 9 9 obj-$(CONFIG_PCI_HOST_GENERIC) += pci-host-generic.o 10 10 obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o 11 + obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o 12 + obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o 13 + obj-$(CONFIG_PCI_XGENE) += pci-xgene.o
+7 -6
drivers/pci/host/pci-imx6.c
··· 257 257 struct imx6_pcie *imx6_pcie = to_imx6_pcie(pp); 258 258 int ret; 259 259 260 - regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1, 261 - IMX6Q_GPR1_PCIE_TEST_PD, 0 << 18); 262 - regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1, 263 - IMX6Q_GPR1_PCIE_REF_CLK_EN, 1 << 16); 264 - 265 260 ret = clk_prepare_enable(imx6_pcie->pcie_phy); 266 261 if (ret) { 267 262 dev_err(pp->dev, "unable to enable pcie_phy clock\n"); ··· 277 282 278 283 /* allow the clocks to stabilize */ 279 284 usleep_range(200, 500); 285 + 286 + /* power up core phy and enable ref clock */ 287 + regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1, 288 + IMX6Q_GPR1_PCIE_TEST_PD, 0 << 18); 289 + regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1, 290 + IMX6Q_GPR1_PCIE_REF_CLK_EN, 1 << 16); 280 291 281 292 /* Some boards don't have PCIe reset GPIO. */ 282 293 if (gpio_is_valid(imx6_pcie->reset_gpio)) { ··· 648 647 { 649 648 return platform_driver_probe(&imx6_pcie_driver, imx6_pcie_probe); 650 649 } 651 - fs_initcall(imx6_pcie_init); 650 + module_init(imx6_pcie_init); 652 651 653 652 MODULE_AUTHOR("Sean Cross <xobs@kosagi.com>"); 654 653 MODULE_DESCRIPTION("Freescale i.MX6 PCIe host controller driver");
+516
drivers/pci/host/pci-keystone-dw.c
··· 1 + /* 2 + * Designware application register space functions for Keystone PCI controller 3 + * 4 + * Copyright (C) 2013-2014 Texas Instruments., Ltd. 5 + * http://www.ti.com 6 + * 7 + * Author: Murali Karicheri <m-karicheri2@ti.com> 8 + * 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License version 2 as 12 + * published by the Free Software Foundation. 13 + */ 14 + 15 + #include <linux/irq.h> 16 + #include <linux/irqdomain.h> 17 + #include <linux/module.h> 18 + #include <linux/of.h> 19 + #include <linux/of_pci.h> 20 + #include <linux/pci.h> 21 + #include <linux/platform_device.h> 22 + 23 + #include "pcie-designware.h" 24 + #include "pci-keystone.h" 25 + 26 + /* Application register defines */ 27 + #define LTSSM_EN_VAL 1 28 + #define LTSSM_STATE_MASK 0x1f 29 + #define LTSSM_STATE_L0 0x11 30 + #define DBI_CS2_EN_VAL 0x20 31 + #define OB_XLAT_EN_VAL 2 32 + 33 + /* Application registers */ 34 + #define CMD_STATUS 0x004 35 + #define CFG_SETUP 0x008 36 + #define OB_SIZE 0x030 37 + #define CFG_PCIM_WIN_SZ_IDX 3 38 + #define CFG_PCIM_WIN_CNT 32 39 + #define SPACE0_REMOTE_CFG_OFFSET 0x1000 40 + #define OB_OFFSET_INDEX(n) (0x200 + (8 * n)) 41 + #define OB_OFFSET_HI(n) (0x204 + (8 * n)) 42 + 43 + /* IRQ register defines */ 44 + #define IRQ_EOI 0x050 45 + #define IRQ_STATUS 0x184 46 + #define IRQ_ENABLE_SET 0x188 47 + #define IRQ_ENABLE_CLR 0x18c 48 + 49 + #define MSI_IRQ 0x054 50 + #define MSI0_IRQ_STATUS 0x104 51 + #define MSI0_IRQ_ENABLE_SET 0x108 52 + #define MSI0_IRQ_ENABLE_CLR 0x10c 53 + #define IRQ_STATUS 0x184 54 + #define MSI_IRQ_OFFSET 4 55 + 56 + /* Config space registers */ 57 + #define DEBUG0 0x728 58 + 59 + #define to_keystone_pcie(x) container_of(x, struct keystone_pcie, pp) 60 + 61 + static inline struct pcie_port *sys_to_pcie(struct pci_sys_data *sys) 62 + { 63 + return sys->private_data; 64 + } 65 + 66 + static inline void update_reg_offset_bit_pos(u32 offset, u32 
*reg_offset, 67 + u32 *bit_pos) 68 + { 69 + *reg_offset = offset % 8; 70 + *bit_pos = offset >> 3; 71 + } 72 + 73 + u32 ks_dw_pcie_get_msi_addr(struct pcie_port *pp) 74 + { 75 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pp); 76 + 77 + return ks_pcie->app.start + MSI_IRQ; 78 + } 79 + 80 + void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset) 81 + { 82 + struct pcie_port *pp = &ks_pcie->pp; 83 + u32 pending, vector; 84 + int src, virq; 85 + 86 + pending = readl(ks_pcie->va_app_base + MSI0_IRQ_STATUS + (offset << 4)); 87 + 88 + /* 89 + * MSI0 status bit 0-3 shows vectors 0, 8, 16, 24, MSI1 status bit 90 + * shows 1, 9, 17, 25 and so forth 91 + */ 92 + for (src = 0; src < 4; src++) { 93 + if (BIT(src) & pending) { 94 + vector = offset + (src << 3); 95 + virq = irq_linear_revmap(pp->irq_domain, vector); 96 + dev_dbg(pp->dev, "irq: bit %d, vector %d, virq %d\n", 97 + src, vector, virq); 98 + generic_handle_irq(virq); 99 + } 100 + } 101 + } 102 + 103 + static void ks_dw_pcie_msi_irq_ack(struct irq_data *d) 104 + { 105 + u32 offset, reg_offset, bit_pos; 106 + struct keystone_pcie *ks_pcie; 107 + unsigned int irq = d->irq; 108 + struct msi_desc *msi; 109 + struct pcie_port *pp; 110 + 111 + msi = irq_get_msi_desc(irq); 112 + pp = sys_to_pcie(msi->dev->bus->sysdata); 113 + ks_pcie = to_keystone_pcie(pp); 114 + offset = irq - irq_linear_revmap(pp->irq_domain, 0); 115 + update_reg_offset_bit_pos(offset, &reg_offset, &bit_pos); 116 + 117 + writel(BIT(bit_pos), 118 + ks_pcie->va_app_base + MSI0_IRQ_STATUS + (reg_offset << 4)); 119 + writel(reg_offset + MSI_IRQ_OFFSET, ks_pcie->va_app_base + IRQ_EOI); 120 + } 121 + 122 + void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq) 123 + { 124 + u32 reg_offset, bit_pos; 125 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pp); 126 + 127 + update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos); 128 + writel(BIT(bit_pos), 129 + ks_pcie->va_app_base + MSI0_IRQ_ENABLE_SET + (reg_offset << 4)); 130 + } 131 + 
132 + void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq) 133 + { 134 + u32 reg_offset, bit_pos; 135 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pp); 136 + 137 + update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos); 138 + writel(BIT(bit_pos), 139 + ks_pcie->va_app_base + MSI0_IRQ_ENABLE_CLR + (reg_offset << 4)); 140 + } 141 + 142 + static void ks_dw_pcie_msi_irq_mask(struct irq_data *d) 143 + { 144 + struct keystone_pcie *ks_pcie; 145 + unsigned int irq = d->irq; 146 + struct msi_desc *msi; 147 + struct pcie_port *pp; 148 + u32 offset; 149 + 150 + msi = irq_get_msi_desc(irq); 151 + pp = sys_to_pcie(msi->dev->bus->sysdata); 152 + ks_pcie = to_keystone_pcie(pp); 153 + offset = irq - irq_linear_revmap(pp->irq_domain, 0); 154 + 155 + /* Mask the end point if PVM implemented */ 156 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 157 + if (msi->msi_attrib.maskbit) 158 + mask_msi_irq(d); 159 + } 160 + 161 + ks_dw_pcie_msi_clear_irq(pp, offset); 162 + } 163 + 164 + static void ks_dw_pcie_msi_irq_unmask(struct irq_data *d) 165 + { 166 + struct keystone_pcie *ks_pcie; 167 + unsigned int irq = d->irq; 168 + struct msi_desc *msi; 169 + struct pcie_port *pp; 170 + u32 offset; 171 + 172 + msi = irq_get_msi_desc(irq); 173 + pp = sys_to_pcie(msi->dev->bus->sysdata); 174 + ks_pcie = to_keystone_pcie(pp); 175 + offset = irq - irq_linear_revmap(pp->irq_domain, 0); 176 + 177 + /* Mask the end point if PVM implemented */ 178 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 179 + if (msi->msi_attrib.maskbit) 180 + unmask_msi_irq(d); 181 + } 182 + 183 + ks_dw_pcie_msi_set_irq(pp, offset); 184 + } 185 + 186 + static struct irq_chip ks_dw_pcie_msi_irq_chip = { 187 + .name = "Keystone-PCIe-MSI-IRQ", 188 + .irq_ack = ks_dw_pcie_msi_irq_ack, 189 + .irq_mask = ks_dw_pcie_msi_irq_mask, 190 + .irq_unmask = ks_dw_pcie_msi_irq_unmask, 191 + }; 192 + 193 + static int ks_dw_pcie_msi_map(struct irq_domain *domain, unsigned int irq, 194 + irq_hw_number_t hwirq) 195 + { 196 + irq_set_chip_and_handler(irq, 
&ks_dw_pcie_msi_irq_chip, 197 + handle_level_irq); 198 + irq_set_chip_data(irq, domain->host_data); 199 + set_irq_flags(irq, IRQF_VALID); 200 + 201 + return 0; 202 + } 203 + 204 + const struct irq_domain_ops ks_dw_pcie_msi_domain_ops = { 205 + .map = ks_dw_pcie_msi_map, 206 + }; 207 + 208 + int ks_dw_pcie_msi_host_init(struct pcie_port *pp, struct msi_chip *chip) 209 + { 210 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pp); 211 + int i; 212 + 213 + pp->irq_domain = irq_domain_add_linear(ks_pcie->msi_intc_np, 214 + MAX_MSI_IRQS, 215 + &ks_dw_pcie_msi_domain_ops, 216 + chip); 217 + if (!pp->irq_domain) { 218 + dev_err(pp->dev, "irq domain init failed\n"); 219 + return -ENXIO; 220 + } 221 + 222 + for (i = 0; i < MAX_MSI_IRQS; i++) 223 + irq_create_mapping(pp->irq_domain, i); 224 + 225 + return 0; 226 + } 227 + 228 + void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie) 229 + { 230 + int i; 231 + 232 + for (i = 0; i < MAX_LEGACY_IRQS; i++) 233 + writel(0x1, ks_pcie->va_app_base + IRQ_ENABLE_SET + (i << 4)); 234 + } 235 + 236 + void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset) 237 + { 238 + struct pcie_port *pp = &ks_pcie->pp; 239 + u32 pending; 240 + int virq; 241 + 242 + pending = readl(ks_pcie->va_app_base + IRQ_STATUS + (offset << 4)); 243 + 244 + if (BIT(0) & pending) { 245 + virq = irq_linear_revmap(ks_pcie->legacy_irq_domain, offset); 246 + dev_dbg(pp->dev, ": irq: irq_offset %d, virq %d\n", offset, 247 + virq); 248 + generic_handle_irq(virq); 249 + } 250 + 251 + /* EOI the INTx interrupt */ 252 + writel(offset, ks_pcie->va_app_base + IRQ_EOI); 253 + } 254 + 255 + static void ks_dw_pcie_ack_legacy_irq(struct irq_data *d) 256 + { 257 + } 258 + 259 + static void ks_dw_pcie_mask_legacy_irq(struct irq_data *d) 260 + { 261 + } 262 + 263 + static void ks_dw_pcie_unmask_legacy_irq(struct irq_data *d) 264 + { 265 + } 266 + 267 + static struct irq_chip ks_dw_pcie_legacy_irq_chip = { 268 + .name = "Keystone-PCI-Legacy-IRQ", 269 
+ .irq_ack = ks_dw_pcie_ack_legacy_irq, 270 + .irq_mask = ks_dw_pcie_mask_legacy_irq, 271 + .irq_unmask = ks_dw_pcie_unmask_legacy_irq, 272 + }; 273 + 274 + static int ks_dw_pcie_init_legacy_irq_map(struct irq_domain *d, 275 + unsigned int irq, irq_hw_number_t hw_irq) 276 + { 277 + irq_set_chip_and_handler(irq, &ks_dw_pcie_legacy_irq_chip, 278 + handle_level_irq); 279 + irq_set_chip_data(irq, d->host_data); 280 + set_irq_flags(irq, IRQF_VALID); 281 + 282 + return 0; 283 + } 284 + 285 + static const struct irq_domain_ops ks_dw_pcie_legacy_irq_domain_ops = { 286 + .map = ks_dw_pcie_init_legacy_irq_map, 287 + .xlate = irq_domain_xlate_onetwocell, 288 + }; 289 + 290 + /** 291 + * ks_dw_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask 292 + * registers 293 + * 294 + * Since modification of dbi_cs2 involves different clock domain, read the 295 + * status back to ensure the transition is complete. 296 + */ 297 + static void ks_dw_pcie_set_dbi_mode(void __iomem *reg_virt) 298 + { 299 + u32 val; 300 + 301 + writel(DBI_CS2_EN_VAL | readl(reg_virt + CMD_STATUS), 302 + reg_virt + CMD_STATUS); 303 + 304 + do { 305 + val = readl(reg_virt + CMD_STATUS); 306 + } while (!(val & DBI_CS2_EN_VAL)); 307 + } 308 + 309 + /** 310 + * ks_dw_pcie_clear_dbi_mode() - Disable DBI mode 311 + * 312 + * Since modification of dbi_cs2 involves different clock domain, read the 313 + * status back to ensure the transition is complete. 
314 + */ 315 + static void ks_dw_pcie_clear_dbi_mode(void __iomem *reg_virt) 316 + { 317 + u32 val; 318 + 319 + writel(~DBI_CS2_EN_VAL & readl(reg_virt + CMD_STATUS), 320 + reg_virt + CMD_STATUS); 321 + 322 + do { 323 + val = readl(reg_virt + CMD_STATUS); 324 + } while (val & DBI_CS2_EN_VAL); 325 + } 326 + 327 + void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie) 328 + { 329 + struct pcie_port *pp = &ks_pcie->pp; 330 + u32 start = pp->mem.start, end = pp->mem.end; 331 + int i, tr_size; 332 + 333 + /* Disable BARs for inbound access */ 334 + ks_dw_pcie_set_dbi_mode(ks_pcie->va_app_base); 335 + writel(0, pp->dbi_base + PCI_BASE_ADDRESS_0); 336 + writel(0, pp->dbi_base + PCI_BASE_ADDRESS_1); 337 + ks_dw_pcie_clear_dbi_mode(ks_pcie->va_app_base); 338 + 339 + /* Set outbound translation size per window division */ 340 + writel(CFG_PCIM_WIN_SZ_IDX & 0x7, ks_pcie->va_app_base + OB_SIZE); 341 + 342 + tr_size = (1 << (CFG_PCIM_WIN_SZ_IDX & 0x7)) * SZ_1M; 343 + 344 + /* Using Direct 1:1 mapping of RC <-> PCI memory space */ 345 + for (i = 0; (i < CFG_PCIM_WIN_CNT) && (start < end); i++) { 346 + writel(start | 1, ks_pcie->va_app_base + OB_OFFSET_INDEX(i)); 347 + writel(0, ks_pcie->va_app_base + OB_OFFSET_HI(i)); 348 + start += tr_size; 349 + } 350 + 351 + /* Enable OB translation */ 352 + writel(OB_XLAT_EN_VAL | readl(ks_pcie->va_app_base + CMD_STATUS), 353 + ks_pcie->va_app_base + CMD_STATUS); 354 + } 355 + 356 + /** 357 + * ks_pcie_cfg_setup() - Set up configuration space address for a device 358 + * 359 + * @ks_pcie: ptr to keystone_pcie structure 360 + * @bus: Bus number the device is residing on 361 + * @devfn: device, function number info 362 + * 363 + * Forms and returns the address of configuration space mapped in PCIESS 364 + * address space 0. Also configures CFG_SETUP for remote configuration space 365 + * access. 366 + * 367 + * The address space has two regions to access configuration - local and remote. 
368 + * We access local region for bus 0 (as RC is attached on bus 0) and remote 369 + * region for others with TYPE 1 access when bus > 1. As for device on bus = 1, 370 + * we will do TYPE 0 access as it will be on our secondary bus (logical). 371 + * CFG_SETUP is needed only for remote configuration access. 372 + */ 373 + static void __iomem *ks_pcie_cfg_setup(struct keystone_pcie *ks_pcie, u8 bus, 374 + unsigned int devfn) 375 + { 376 + u8 device = PCI_SLOT(devfn), function = PCI_FUNC(devfn); 377 + struct pcie_port *pp = &ks_pcie->pp; 378 + u32 regval; 379 + 380 + if (bus == 0) 381 + return pp->dbi_base; 382 + 383 + regval = (bus << 16) | (device << 8) | function; 384 + 385 + /* 386 + * Since Bus#1 will be a virtual bus, we need to have TYPE0 387 + * access only. 388 + * TYPE 1 389 + */ 390 + if (bus != 1) 391 + regval |= BIT(24); 392 + 393 + writel(regval, ks_pcie->va_app_base + CFG_SETUP); 394 + return pp->va_cfg0_base; 395 + } 396 + 397 + int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, 398 + unsigned int devfn, int where, int size, u32 *val) 399 + { 400 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pp); 401 + u8 bus_num = bus->number; 402 + void __iomem *addr; 403 + 404 + addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn); 405 + 406 + return dw_pcie_cfg_read(addr + (where & ~0x3), where, size, val); 407 + } 408 + 409 + int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, 410 + unsigned int devfn, int where, int size, u32 val) 411 + { 412 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pp); 413 + u8 bus_num = bus->number; 414 + void __iomem *addr; 415 + 416 + addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn); 417 + 418 + return dw_pcie_cfg_write(addr + (where & ~0x3), where, size, val); 419 + } 420 + 421 + /** 422 + * ks_dw_pcie_v3_65_scan_bus() - keystone scan_bus post initialization 423 + * 424 + * This sets BAR0 to enable inbound access for MSI_IRQ register 425 + */ 426 + void 
ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp) 427 + { 428 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pp); 429 + 430 + /* Configure and set up BAR0 */ 431 + ks_dw_pcie_set_dbi_mode(ks_pcie->va_app_base); 432 + 433 + /* Enable BAR0 */ 434 + writel(1, pp->dbi_base + PCI_BASE_ADDRESS_0); 435 + writel(SZ_4K - 1, pp->dbi_base + PCI_BASE_ADDRESS_0); 436 + 437 + ks_dw_pcie_clear_dbi_mode(ks_pcie->va_app_base); 438 + 439 + /* 440 + * For BAR0, just setting bus address for inbound writes (MSI) should 441 + * be sufficient. Use physical address to avoid any conflicts. 442 + */ 443 + writel(ks_pcie->app.start, pp->dbi_base + PCI_BASE_ADDRESS_0); 444 + } 445 + 446 + /** 447 + * ks_dw_pcie_link_up() - Check if link up 448 + */ 449 + int ks_dw_pcie_link_up(struct pcie_port *pp) 450 + { 451 + u32 val = readl(pp->dbi_base + DEBUG0); 452 + 453 + return (val & LTSSM_STATE_MASK) == LTSSM_STATE_L0; 454 + } 455 + 456 + void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie) 457 + { 458 + u32 val; 459 + 460 + /* Disable Link training */ 461 + val = readl(ks_pcie->va_app_base + CMD_STATUS); 462 + val &= ~LTSSM_EN_VAL; 463 + writel(LTSSM_EN_VAL | val, ks_pcie->va_app_base + CMD_STATUS); 464 + 465 + /* Initiate Link Training */ 466 + val = readl(ks_pcie->va_app_base + CMD_STATUS); 467 + writel(LTSSM_EN_VAL | val, ks_pcie->va_app_base + CMD_STATUS); 468 + } 469 + 470 + /** 471 + * ks_dw_pcie_host_init() - initialize host for v3_65 dw hardware 472 + * 473 + * Ioremap the register resources, initialize legacy irq domain 474 + * and call dw_pcie_v3_65_host_init() API to initialize the Keystone 475 + * PCI host controller. 476 + */ 477 + int __init ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie, 478 + struct device_node *msi_intc_np) 479 + { 480 + struct pcie_port *pp = &ks_pcie->pp; 481 + struct platform_device *pdev = to_platform_device(pp->dev); 482 + struct resource *res; 483 + 484 + /* Index 0 is the config reg. 
space address */ 485 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 486 + pp->dbi_base = devm_ioremap_resource(pp->dev, res); 487 + if (IS_ERR(pp->dbi_base)) 488 + return PTR_ERR(pp->dbi_base); 489 + 490 + /* 491 + * We set these same and is used in pcie rd/wr_other_conf 492 + * functions 493 + */ 494 + pp->va_cfg0_base = pp->dbi_base + SPACE0_REMOTE_CFG_OFFSET; 495 + pp->va_cfg1_base = pp->va_cfg0_base; 496 + 497 + /* Index 1 is the application reg. space address */ 498 + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 499 + ks_pcie->app = *res; 500 + ks_pcie->va_app_base = devm_ioremap_resource(pp->dev, res); 501 + if (IS_ERR(ks_pcie->va_app_base)) 502 + return PTR_ERR(ks_pcie->va_app_base); 503 + 504 + /* Create legacy IRQ domain */ 505 + ks_pcie->legacy_irq_domain = 506 + irq_domain_add_linear(ks_pcie->legacy_intc_np, 507 + MAX_LEGACY_IRQS, 508 + &ks_dw_pcie_legacy_irq_domain_ops, 509 + NULL); 510 + if (!ks_pcie->legacy_irq_domain) { 511 + dev_err(pp->dev, "Failed to add irq domain for legacy irqs\n"); 512 + return -EINVAL; 513 + } 514 + 515 + return dw_pcie_host_init(pp); 516 + }
+415
drivers/pci/host/pci-keystone.c
··· 1 + /* 2 + * PCIe host controller driver for Texas Instruments Keystone SoCs 3 + * 4 + * Copyright (C) 2013-2014 Texas Instruments., Ltd. 5 + * http://www.ti.com 6 + * 7 + * Author: Murali Karicheri <m-karicheri2@ti.com> 8 + * Implementation based on pci-exynos.c and pcie-designware.c 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License version 2 as 12 + * published by the Free Software Foundation. 13 + */ 14 + 15 + #include <linux/irqchip/chained_irq.h> 16 + #include <linux/clk.h> 17 + #include <linux/delay.h> 18 + #include <linux/irqdomain.h> 19 + #include <linux/module.h> 20 + #include <linux/msi.h> 21 + #include <linux/of_irq.h> 22 + #include <linux/of.h> 23 + #include <linux/of_pci.h> 24 + #include <linux/platform_device.h> 25 + #include <linux/phy/phy.h> 26 + #include <linux/resource.h> 27 + #include <linux/signal.h> 28 + 29 + #include "pcie-designware.h" 30 + #include "pci-keystone.h" 31 + 32 + #define DRIVER_NAME "keystone-pcie" 33 + 34 + /* driver specific constants */ 35 + #define MAX_MSI_HOST_IRQS 8 36 + #define MAX_LEGACY_HOST_IRQS 4 37 + 38 + /* DEV_STAT_CTRL */ 39 + #define PCIE_CAP_BASE 0x70 40 + 41 + /* PCIE controller device IDs */ 42 + #define PCIE_RC_K2HK 0xb008 43 + #define PCIE_RC_K2E 0xb009 44 + #define PCIE_RC_K2L 0xb00a 45 + 46 + #define to_keystone_pcie(x) container_of(x, struct keystone_pcie, pp) 47 + 48 + static void quirk_limit_mrrs(struct pci_dev *dev) 49 + { 50 + struct pci_bus *bus = dev->bus; 51 + struct pci_dev *bridge = bus->self; 52 + static const struct pci_device_id rc_pci_devids[] = { 53 + { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2HK), 54 + .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, 55 + { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2E), 56 + .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, 57 + { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2L), 58 + .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, 59 + { 0, }, 60 + }; 
61 + 62 + if (pci_is_root_bus(bus)) 63 + return; 64 + 65 + /* look for the host bridge */ 66 + while (!pci_is_root_bus(bus)) { 67 + bridge = bus->self; 68 + bus = bus->parent; 69 + } 70 + 71 + if (bridge) { 72 + /* 73 + * Keystone PCI controller has a h/w limitation of 74 + * 256 bytes maximum read request size. It can't handle 75 + * anything higher than this. So force this limit on 76 + * all downstream devices. 77 + */ 78 + if (pci_match_id(rc_pci_devids, bridge)) { 79 + if (pcie_get_readrq(dev) > 256) { 80 + dev_info(&dev->dev, "limiting MRRS to 256\n"); 81 + pcie_set_readrq(dev, 256); 82 + } 83 + } 84 + } 85 + } 86 + DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, quirk_limit_mrrs); 87 + 88 + static int ks_pcie_establish_link(struct keystone_pcie *ks_pcie) 89 + { 90 + struct pcie_port *pp = &ks_pcie->pp; 91 + int count = 200; 92 + 93 + dw_pcie_setup_rc(pp); 94 + 95 + if (dw_pcie_link_up(pp)) { 96 + dev_err(pp->dev, "Link already up\n"); 97 + return 0; 98 + } 99 + 100 + ks_dw_pcie_initiate_link_train(ks_pcie); 101 + /* check if the link is up or not */ 102 + while (!dw_pcie_link_up(pp)) { 103 + usleep_range(100, 1000); 104 + if (--count) { 105 + ks_dw_pcie_initiate_link_train(ks_pcie); 106 + continue; 107 + } 108 + dev_err(pp->dev, "phy link never came up\n"); 109 + return -EINVAL; 110 + } 111 + 112 + return 0; 113 + } 114 + 115 + static void ks_pcie_msi_irq_handler(unsigned int irq, struct irq_desc *desc) 116 + { 117 + struct keystone_pcie *ks_pcie = irq_desc_get_handler_data(desc); 118 + u32 offset = irq - ks_pcie->msi_host_irqs[0]; 119 + struct pcie_port *pp = &ks_pcie->pp; 120 + struct irq_chip *chip = irq_desc_get_chip(desc); 121 + 122 + dev_dbg(pp->dev, "ks_pci_msi_irq_handler, irq %d\n", irq); 123 + 124 + /* 125 + * The chained irq handler installation would have replaced normal 126 + * interrupt driver handler so we need to take care of mask/unmask and 127 + * ack operation. 
128 + */ 129 + chained_irq_enter(chip, desc); 130 + ks_dw_pcie_handle_msi_irq(ks_pcie, offset); 131 + chained_irq_exit(chip, desc); 132 + } 133 + 134 + /** 135 + * ks_pcie_legacy_irq_handler() - Handle legacy interrupt 136 + * @irq: IRQ line for legacy interrupts 137 + * @desc: Pointer to irq descriptor 138 + * 139 + * Traverse through pending legacy interrupts and invoke handler for each. Also 140 + * takes care of interrupt controller level mask/ack operation. 141 + */ 142 + static void ks_pcie_legacy_irq_handler(unsigned int irq, struct irq_desc *desc) 143 + { 144 + struct keystone_pcie *ks_pcie = irq_desc_get_handler_data(desc); 145 + struct pcie_port *pp = &ks_pcie->pp; 146 + u32 irq_offset = irq - ks_pcie->legacy_host_irqs[0]; 147 + struct irq_chip *chip = irq_desc_get_chip(desc); 148 + 149 + dev_dbg(pp->dev, ": Handling legacy irq %d\n", irq); 150 + 151 + /* 152 + * The chained irq handler installation would have replaced normal 153 + * interrupt driver handler so we need to take care of mask/unmask and 154 + * ack operation. 
155 + */ 156 + chained_irq_enter(chip, desc); 157 + ks_dw_pcie_handle_legacy_irq(ks_pcie, irq_offset); 158 + chained_irq_exit(chip, desc); 159 + } 160 + 161 + static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie, 162 + char *controller, int *num_irqs) 163 + { 164 + int temp, max_host_irqs, legacy = 1, *host_irqs, ret = -EINVAL; 165 + struct device *dev = ks_pcie->pp.dev; 166 + struct device_node *np_pcie = dev->of_node, **np_temp; 167 + 168 + if (!strcmp(controller, "msi-interrupt-controller")) 169 + legacy = 0; 170 + 171 + if (legacy) { 172 + np_temp = &ks_pcie->legacy_intc_np; 173 + max_host_irqs = MAX_LEGACY_HOST_IRQS; 174 + host_irqs = &ks_pcie->legacy_host_irqs[0]; 175 + } else { 176 + np_temp = &ks_pcie->msi_intc_np; 177 + max_host_irqs = MAX_MSI_HOST_IRQS; 178 + host_irqs = &ks_pcie->msi_host_irqs[0]; 179 + } 180 + 181 + /* interrupt controller is in a child node */ 182 + *np_temp = of_find_node_by_name(np_pcie, controller); 183 + if (!(*np_temp)) { 184 + dev_err(dev, "Node for %s is absent\n", controller); 185 + goto out; 186 + } 187 + temp = of_irq_count(*np_temp); 188 + if (!temp) 189 + goto out; 190 + if (temp > max_host_irqs) 191 + dev_warn(dev, "Too many %s interrupts defined %u\n", 192 + (legacy ? "legacy" : "MSI"), temp); 193 + 194 + /* 195 + * support upto max_host_irqs. 
In dt from index 0 to 3 (legacy) or 0 to 196 + * 7 (MSI) 197 + */ 198 + for (temp = 0; temp < max_host_irqs; temp++) { 199 + host_irqs[temp] = irq_of_parse_and_map(*np_temp, temp); 200 + if (host_irqs[temp] < 0) 201 + break; 202 + } 203 + if (temp) { 204 + *num_irqs = temp; 205 + ret = 0; 206 + } 207 + out: 208 + return ret; 209 + } 210 + 211 + static void ks_pcie_setup_interrupts(struct keystone_pcie *ks_pcie) 212 + { 213 + int i; 214 + 215 + /* Legacy IRQ */ 216 + for (i = 0; i < ks_pcie->num_legacy_host_irqs; i++) { 217 + irq_set_handler_data(ks_pcie->legacy_host_irqs[i], ks_pcie); 218 + irq_set_chained_handler(ks_pcie->legacy_host_irqs[i], 219 + ks_pcie_legacy_irq_handler); 220 + } 221 + ks_dw_pcie_enable_legacy_irqs(ks_pcie); 222 + 223 + /* MSI IRQ */ 224 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 225 + for (i = 0; i < ks_pcie->num_msi_host_irqs; i++) { 226 + irq_set_chained_handler(ks_pcie->msi_host_irqs[i], 227 + ks_pcie_msi_irq_handler); 228 + irq_set_handler_data(ks_pcie->msi_host_irqs[i], 229 + ks_pcie); 230 + } 231 + } 232 + } 233 + 234 + /* 235 + * When a PCI device does not exist during config cycles, keystone host gets a 236 + * bus error instead of returning 0xffffffff. This handler always returns 0 237 + * for this kind of faults. 
238 + */ 239 + static int keystone_pcie_fault(unsigned long addr, unsigned int fsr, 240 + struct pt_regs *regs) 241 + { 242 + unsigned long instr = *(unsigned long *) instruction_pointer(regs); 243 + 244 + if ((instr & 0x0e100090) == 0x00100090) { 245 + int reg = (instr >> 12) & 15; 246 + 247 + regs->uregs[reg] = -1; 248 + regs->ARM_pc += 4; 249 + } 250 + 251 + return 0; 252 + } 253 + 254 + static void __init ks_pcie_host_init(struct pcie_port *pp) 255 + { 256 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pp); 257 + u32 val; 258 + 259 + ks_pcie_establish_link(ks_pcie); 260 + ks_dw_pcie_setup_rc_app_regs(ks_pcie); 261 + ks_pcie_setup_interrupts(ks_pcie); 262 + writew(PCI_IO_RANGE_TYPE_32 | (PCI_IO_RANGE_TYPE_32 << 8), 263 + pp->dbi_base + PCI_IO_BASE); 264 + 265 + /* update the Vendor ID */ 266 + writew(ks_pcie->device_id, pp->dbi_base + PCI_DEVICE_ID); 267 + 268 + /* update the DEV_STAT_CTRL to publish right mrrs */ 269 + val = readl(pp->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL); 270 + val &= ~PCI_EXP_DEVCTL_READRQ; 271 + /* set the mrrs to 256 bytes */ 272 + val |= BIT(12); 273 + writel(val, pp->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL); 274 + 275 + /* 276 + * PCIe access errors that result into OCP errors are caught by ARM as 277 + * "External aborts" 278 + */ 279 + hook_fault_code(17, keystone_pcie_fault, SIGBUS, 0, 280 + "Asynchronous external abort"); 281 + } 282 + 283 + static struct pcie_host_ops keystone_pcie_host_ops = { 284 + .rd_other_conf = ks_dw_pcie_rd_other_conf, 285 + .wr_other_conf = ks_dw_pcie_wr_other_conf, 286 + .link_up = ks_dw_pcie_link_up, 287 + .host_init = ks_pcie_host_init, 288 + .msi_set_irq = ks_dw_pcie_msi_set_irq, 289 + .msi_clear_irq = ks_dw_pcie_msi_clear_irq, 290 + .get_msi_addr = ks_dw_pcie_get_msi_addr, 291 + .msi_host_init = ks_dw_pcie_msi_host_init, 292 + .scan_bus = ks_dw_pcie_v3_65_scan_bus, 293 + }; 294 + 295 + static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie, 296 + struct platform_device *pdev) 297 + { 
298 + struct pcie_port *pp = &ks_pcie->pp; 299 + int ret; 300 + 301 + ret = ks_pcie_get_irq_controller_info(ks_pcie, 302 + "legacy-interrupt-controller", 303 + &ks_pcie->num_legacy_host_irqs); 304 + if (ret) 305 + return ret; 306 + 307 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 308 + ret = ks_pcie_get_irq_controller_info(ks_pcie, 309 + "msi-interrupt-controller", 310 + &ks_pcie->num_msi_host_irqs); 311 + if (ret) 312 + return ret; 313 + } 314 + 315 + pp->root_bus_nr = -1; 316 + pp->ops = &keystone_pcie_host_ops; 317 + ret = ks_dw_pcie_host_init(ks_pcie, ks_pcie->msi_intc_np); 318 + if (ret) { 319 + dev_err(&pdev->dev, "failed to initialize host\n"); 320 + return ret; 321 + } 322 + 323 + return ret; 324 + } 325 + 326 + static const struct of_device_id ks_pcie_of_match[] = { 327 + { 328 + .type = "pci", 329 + .compatible = "ti,keystone-pcie", 330 + }, 331 + { }, 332 + }; 333 + MODULE_DEVICE_TABLE(of, ks_pcie_of_match); 334 + 335 + static int __exit ks_pcie_remove(struct platform_device *pdev) 336 + { 337 + struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev); 338 + 339 + clk_disable_unprepare(ks_pcie->clk); 340 + 341 + return 0; 342 + } 343 + 344 + static int __init ks_pcie_probe(struct platform_device *pdev) 345 + { 346 + struct device *dev = &pdev->dev; 347 + struct keystone_pcie *ks_pcie; 348 + struct pcie_port *pp; 349 + struct resource *res; 350 + void __iomem *reg_p; 351 + struct phy *phy; 352 + int ret = 0; 353 + 354 + ks_pcie = devm_kzalloc(&pdev->dev, sizeof(*ks_pcie), 355 + GFP_KERNEL); 356 + if (!ks_pcie) { 357 + dev_err(dev, "no memory for keystone pcie\n"); 358 + return -ENOMEM; 359 + } 360 + pp = &ks_pcie->pp; 361 + 362 + /* initialize SerDes Phy if present */ 363 + phy = devm_phy_get(dev, "pcie-phy"); 364 + if (!IS_ERR_OR_NULL(phy)) { 365 + ret = phy_init(phy); 366 + if (ret < 0) 367 + return ret; 368 + } 369 + 370 + /* index 2 is to read PCI DEVICE_ID */ 371 + res = platform_get_resource(pdev, IORESOURCE_MEM, 2); 372 + reg_p = 
devm_ioremap_resource(dev, res); 373 + if (IS_ERR(reg_p)) 374 + return PTR_ERR(reg_p); 375 + ks_pcie->device_id = readl(reg_p) >> 16; 376 + devm_iounmap(dev, reg_p); 377 + devm_release_mem_region(dev, res->start, resource_size(res)); 378 + 379 + pp->dev = dev; 380 + platform_set_drvdata(pdev, ks_pcie); 381 + ks_pcie->clk = devm_clk_get(dev, "pcie"); 382 + if (IS_ERR(ks_pcie->clk)) { 383 + dev_err(dev, "Failed to get pcie rc clock\n"); 384 + return PTR_ERR(ks_pcie->clk); 385 + } 386 + ret = clk_prepare_enable(ks_pcie->clk); 387 + if (ret) 388 + return ret; 389 + 390 + ret = ks_add_pcie_port(ks_pcie, pdev); 391 + if (ret < 0) 392 + goto fail_clk; 393 + 394 + return 0; 395 + fail_clk: 396 + clk_disable_unprepare(ks_pcie->clk); 397 + 398 + return ret; 399 + } 400 + 401 + static struct platform_driver ks_pcie_driver __refdata = { 402 + .probe = ks_pcie_probe, 403 + .remove = __exit_p(ks_pcie_remove), 404 + .driver = { 405 + .name = "keystone-pcie", 406 + .owner = THIS_MODULE, 407 + .of_match_table = of_match_ptr(ks_pcie_of_match), 408 + }, 409 + }; 410 + 411 + module_platform_driver(ks_pcie_driver); 412 + 413 + MODULE_AUTHOR("Murali Karicheri <m-karicheri2@ti.com>"); 414 + MODULE_DESCRIPTION("Keystone PCIe host controller driver"); 415 + MODULE_LICENSE("GPL v2");
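In ks_pcie_host_init() above, the driver clears PCI_EXP_DEVCTL_READRQ and sets BIT(12) to advertise a 256-byte max read request size. BIT(12) is not magic: the READRQ field (bits 14:12 of the Device Control register) encodes log2(size) − 7, so 128 → 0, 256 → 1, and so on up to 4096 → 5. A userspace sketch of that encoding (the helper name is illustrative, not a kernel API):

```c
#include <stdint.h>

#define PCI_EXP_DEVCTL_READRQ 0x7000 /* bits 14:12: max read request size */

/* Encode a max read request size in bytes into the DEVCTL READRQ field.
 * The field holds log2(size) - 7: 128 -> 0, 256 -> 1, ..., 4096 -> 5. */
static uint32_t devctl_set_readrq(uint32_t devctl, unsigned int bytes)
{
	unsigned int v = 0;

	while ((128u << v) < bytes)
		v++;

	devctl &= ~PCI_EXP_DEVCTL_READRQ;
	devctl |= v << 12;
	return devctl;
}
```

For 256 bytes this yields exactly the `val |= BIT(12)` the patch writes.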
+58
drivers/pci/host/pci-keystone.h
··· 1 + /* 2 + * Keystone PCI Controller's common includes 3 + * 4 + * Copyright (C) 2013-2014 Texas Instruments., Ltd. 5 + * http://www.ti.com 6 + * 7 + * Author: Murali Karicheri <m-karicheri2@ti.com> 8 + * 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License version 2 as 12 + * published by the Free Software Foundation. 13 + */ 14 + 15 + #define MAX_LEGACY_IRQS 4 16 + #define MAX_MSI_HOST_IRQS 8 17 + #define MAX_LEGACY_HOST_IRQS 4 18 + 19 + struct keystone_pcie { 20 + struct clk *clk; 21 + struct pcie_port pp; 22 + /* PCI Device ID */ 23 + u32 device_id; 24 + int num_legacy_host_irqs; 25 + int legacy_host_irqs[MAX_LEGACY_HOST_IRQS]; 26 + struct device_node *legacy_intc_np; 27 + 28 + int num_msi_host_irqs; 29 + int msi_host_irqs[MAX_MSI_HOST_IRQS]; 30 + struct device_node *msi_intc_np; 31 + struct irq_domain *legacy_irq_domain; 32 + 33 + /* Application register space */ 34 + void __iomem *va_app_base; 35 + struct resource app; 36 + }; 37 + 38 + /* Keystone DW specific MSI controller APIs/definitions */ 39 + void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset); 40 + u32 ks_dw_pcie_get_msi_addr(struct pcie_port *pp); 41 + 42 + /* Keystone specific PCI controller APIs */ 43 + void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie); 44 + void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset); 45 + int ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie, 46 + struct device_node *msi_intc_np); 47 + int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, 48 + unsigned int devfn, int where, int size, u32 val); 49 + int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, 50 + unsigned int devfn, int where, int size, u32 *val); 51 + void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie); 52 + int ks_dw_pcie_link_up(struct pcie_port *pp); 53 + void ks_dw_pcie_initiate_link_train(struct 
keystone_pcie *ks_pcie); 54 + void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq); 55 + void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq); 56 + void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp); 57 + int ks_dw_pcie_msi_host_init(struct pcie_port *pp, 58 + struct msi_chip *chip);
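The header embeds the generic `struct pcie_port pp` inside `struct keystone_pcie`, and the driver recovers the wrapper with `to_keystone_pcie(pp)`. That is the usual container_of() pattern: subtract the member's offset from the member pointer. A minimal userspace sketch, with stand-in struct fields rather than the real kernel definitions:

```c
#include <stddef.h>

/* Illustrative stand-ins, not the real kernel types. */
struct pcie_port { int root_bus_nr; };

struct keystone_pcie {
	int device_id;
	struct pcie_port pp;	/* embedded generic port */
};

/* container_of: recover the wrapper from a pointer to its member. */
#define to_keystone_pcie(port) \
	((struct keystone_pcie *)((char *)(port) - offsetof(struct keystone_pcie, pp)))

/* Returns 1 if recovering the wrapper from &ks->pp round-trips. */
static int container_of_roundtrip(void)
{
	static struct keystone_pcie ks;

	return to_keystone_pcie(&ks.pp) == &ks;
}
```

This is why the generic DesignWare core can hand callbacks a plain `struct pcie_port *` while each SoC driver still reaches its own private state.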
+3 -3
drivers/pci/host/pci-mvebu.c
··· 873 873 rangesz = pna + na + ns; 874 874 nranges = rlen / sizeof(__be32) / rangesz; 875 875 876 - for (i = 0; i < nranges; i++) { 876 + for (i = 0; i < nranges; i++, range += rangesz) { 877 877 u32 flags = of_read_number(range, 1); 878 878 u32 slot = of_read_number(range + 1, 1); 879 879 u64 cpuaddr = of_read_number(range + na, pna); ··· 883 883 rtype = IORESOURCE_IO; 884 884 else if (DT_FLAGS_TO_TYPE(flags) == DT_TYPE_MEM32) 885 885 rtype = IORESOURCE_MEM; 886 + else 887 + continue; 886 888 887 889 if (slot == PCI_SLOT(devfn) && type == rtype) { 888 890 *tgt = DT_CPUADDR_TO_TARGET(cpuaddr); 889 891 *attr = DT_CPUADDR_TO_ATTR(cpuaddr); 890 892 return 0; 891 893 } 892 - 893 - range += rangesz; 894 894 } 895 895 896 896 return -ENOENT;
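The mvebu hunk above fixes a classic stride bug: the patch adds a `continue` for unrecognized flag types, so the `range += rangesz` advance had to move from the loop tail into the `for()` increment, where it runs on every path. A small userspace demonstration of the corrected shape (data and helper names are illustrative):

```c
#include <stdint.h>

/* Walk an array of fixed-stride records, skipping unknown types.
 * Advancing the cursor in the for() increment (as the mvebu fix does)
 * keeps `continue` safe: the stride is applied on every iteration. */
static int count_type(const uint32_t *cells, int nranges, int rangesz,
		      uint32_t type)
{
	int i, n = 0;

	for (i = 0; i < nranges; i++, cells += rangesz) {
		if (cells[0] != type)
			continue;	/* still advances via the for() increment */
		n++;
	}
	return n;
}

/* Three 2-cell records of types 1, 2, 1: counting type 1 must give 2. */
static int count_type_demo(void)
{
	const uint32_t cells[] = { 1, 10, 2, 20, 1, 30 };

	return count_type(cells, 3, 2, 1);
}
```

Had the advance stayed at the bottom of the loop body, the new `continue` would have re-read the same record forever.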
+237 -40
drivers/pci/host/pci-tegra.c
··· 38 38 #include <linux/of_pci.h> 39 39 #include <linux/of_platform.h> 40 40 #include <linux/pci.h> 41 + #include <linux/phy/phy.h> 41 42 #include <linux/platform_device.h> 42 43 #include <linux/reset.h> 43 44 #include <linux/sizes.h> ··· 116 115 117 116 #define AFI_INTR_CODE 0xb8 118 117 #define AFI_INTR_CODE_MASK 0xf 119 - #define AFI_INTR_AXI_SLAVE_ERROR 1 120 - #define AFI_INTR_AXI_DECODE_ERROR 2 118 + #define AFI_INTR_INI_SLAVE_ERROR 1 119 + #define AFI_INTR_INI_DECODE_ERROR 2 121 120 #define AFI_INTR_TARGET_ABORT 3 122 121 #define AFI_INTR_MASTER_ABORT 4 123 122 #define AFI_INTR_INVALID_WRITE 5 124 123 #define AFI_INTR_LEGACY 6 125 124 #define AFI_INTR_FPCI_DECODE_ERROR 7 125 + #define AFI_INTR_AXI_DECODE_ERROR 8 126 + #define AFI_INTR_FPCI_TIMEOUT 9 127 + #define AFI_INTR_PE_PRSNT_SENSE 10 128 + #define AFI_INTR_PE_CLKREQ_SENSE 11 129 + #define AFI_INTR_CLKCLAMP_SENSE 12 130 + #define AFI_INTR_RDY4PD_SENSE 13 131 + #define AFI_INTR_P2P_ERROR 14 126 132 127 133 #define AFI_INTR_SIGNATURE 0xbc 128 134 #define AFI_UPPER_FPCI_ADDRESS 0xc0 ··· 160 152 #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_MASK (0xf << 20) 161 153 #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_SINGLE (0x0 << 20) 162 154 #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_420 (0x0 << 20) 155 + #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_X2_X1 (0x0 << 20) 163 156 #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_DUAL (0x1 << 20) 164 157 #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_222 (0x1 << 20) 158 + #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_X4_X1 (0x1 << 20) 165 159 #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_411 (0x2 << 20) 166 160 167 161 #define AFI_FUSE 0x104 ··· 175 165 #define AFI_PEX_CTRL_RST (1 << 0) 176 166 #define AFI_PEX_CTRL_CLKREQ_EN (1 << 1) 177 167 #define AFI_PEX_CTRL_REFCLK_EN (1 << 3) 168 + #define AFI_PEX_CTRL_OVERRIDE_EN (1 << 4) 169 + 170 + #define AFI_PLLE_CONTROL 0x160 171 + #define AFI_PLLE_CONTROL_BYPASS_PADS2PLLE_CONTROL (1 << 9) 172 + #define 
AFI_PLLE_CONTROL_PADS2PLLE_CONTROL_EN (1 << 1) 178 173 179 174 #define AFI_PEXBIAS_CTRL_0 0x168 180 175 181 176 #define RP_VEND_XP 0x00000F00 182 177 #define RP_VEND_XP_DL_UP (1 << 30) 178 + 179 + #define RP_PRIV_MISC 0x00000FE0 180 + #define RP_PRIV_MISC_PRSNT_MAP_EP_PRSNT (0xE << 0) 181 + #define RP_PRIV_MISC_PRSNT_MAP_EP_ABSNT (0xF << 0) 183 182 184 183 #define RP_LINK_CONTROL_STATUS 0x00000090 185 184 #define RP_LINK_CONTROL_STATUS_DL_LINK_ACTIVE 0x20000000 ··· 216 197 217 198 #define PADS_REFCLK_CFG0 0x000000C8 218 199 #define PADS_REFCLK_CFG1 0x000000CC 200 + #define PADS_REFCLK_BIAS 0x000000D0 219 201 220 202 /* 221 203 * Fields in PADS_REFCLK_CFG*. Those registers form an array of 16-bit ··· 256 236 bool has_pex_bias_ctrl; 257 237 bool has_intr_prsnt_sense; 258 238 bool has_cml_clk; 239 + bool has_gen2; 259 240 }; 260 241 261 242 static inline struct tegra_msi *to_tegra_msi(struct msi_chip *chip) ··· 274 253 struct list_head buses; 275 254 struct resource *cs; 276 255 256 + struct resource all; 277 257 struct resource io; 278 258 struct resource mem; 279 259 struct resource prefetch; ··· 288 266 struct reset_control *pex_rst; 289 267 struct reset_control *afi_rst; 290 268 struct reset_control *pcie_xrst; 269 + 270 + struct phy *phy; 291 271 292 272 struct tegra_msi msi; 293 273 ··· 406 382 for (i = 0; i < 16; i++) { 407 383 unsigned long virt = (unsigned long)bus->area->addr + 408 384 i * SZ_64K; 409 - phys_addr_t phys = cs + i * SZ_1M + busnr * SZ_64K; 385 + phys_addr_t phys = cs + i * SZ_16M + busnr * SZ_64K; 410 386 411 387 err = ioremap_page_range(virt, virt + SZ_64K, phys, prot); 412 388 if (err < 0) { ··· 585 561 if (soc->has_pex_clkreq_en) 586 562 value |= AFI_PEX_CTRL_CLKREQ_EN; 587 563 564 + value |= AFI_PEX_CTRL_OVERRIDE_EN; 565 + 588 566 afi_writel(port->pcie, value, ctrl); 589 567 590 568 tegra_pcie_port_reset(port); ··· 594 568 595 569 static void tegra_pcie_port_disable(struct tegra_pcie_port *port) 596 570 { 571 + const struct 
tegra_pcie_soc_data *soc = port->pcie->soc_data; 597 572 unsigned long ctrl = tegra_pcie_port_get_pex_ctrl(port); 598 573 unsigned long value; 599 574 ··· 605 578 606 579 /* disable reference clock */ 607 580 value = afi_readl(port->pcie, ctrl); 581 + 582 + if (soc->has_pex_clkreq_en) 583 + value &= ~AFI_PEX_CTRL_CLKREQ_EN; 584 + 608 585 value &= ~AFI_PEX_CTRL_REFCLK_EN; 609 586 afi_writel(port->pcie, value, ctrl); 610 587 } ··· 657 626 static int tegra_pcie_setup(int nr, struct pci_sys_data *sys) 658 627 { 659 628 struct tegra_pcie *pcie = sys_to_pcie(sys); 629 + int err; 630 + phys_addr_t io_start; 631 + 632 + err = devm_request_resource(pcie->dev, &pcie->all, &pcie->mem); 633 + if (err < 0) 634 + return err; 635 + 636 + err = devm_request_resource(pcie->dev, &pcie->all, &pcie->prefetch); 637 + if (err) 638 + return err; 639 + 640 + io_start = pci_pio_to_address(pcie->io.start); 660 641 661 642 pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset); 662 643 pci_add_resource_offset(&sys->resources, &pcie->prefetch, 663 644 sys->mem_offset); 664 645 pci_add_resource(&sys->resources, &pcie->busn); 665 646 666 - pci_ioremap_io(nr * SZ_64K, pcie->io.start); 647 + pci_ioremap_io(nr * SZ_64K, io_start); 667 648 668 649 return 1; 669 650 } ··· 727 684 "Target abort", 728 685 "Master abort", 729 686 "Invalid write", 687 + "Legacy interrupt", 730 688 "Response decoding error", 731 689 "AXI response decoding error", 732 690 "Transaction timeout", 691 + "Slot present pin change", 692 + "Slot clock request change", 693 + "TMS clock ramp change", 694 + "TMS ready for power down", 695 + "Peer2Peer error", 733 696 }; 734 697 struct tegra_pcie *pcie = arg; 735 698 u32 code, signature; ··· 786 737 static void tegra_pcie_setup_translations(struct tegra_pcie *pcie) 787 738 { 788 739 u32 fpci_bar, size, axi_address; 740 + phys_addr_t io_start = pci_pio_to_address(pcie->io.start); 789 741 790 742 /* Bar 0: type 1 extended configuration space */ 791 743 fpci_bar = 
0xfe100000; ··· 799 749 /* Bar 1: downstream IO bar */ 800 750 fpci_bar = 0xfdfc0000; 801 751 size = resource_size(&pcie->io); 802 - axi_address = pcie->io.start; 752 + axi_address = io_start; 803 753 afi_writel(pcie, axi_address, AFI_AXI_BAR1_START); 804 754 afi_writel(pcie, size >> 12, AFI_AXI_BAR1_SZ); 805 755 afi_writel(pcie, fpci_bar, AFI_FPCI_BAR1); ··· 842 792 afi_writel(pcie, 0, AFI_MSI_BAR_SZ); 843 793 } 844 794 845 - static int tegra_pcie_enable_controller(struct tegra_pcie *pcie) 795 + static int tegra_pcie_pll_wait(struct tegra_pcie *pcie, unsigned long timeout) 846 796 { 847 797 const struct tegra_pcie_soc_data *soc = pcie->soc_data; 848 - struct tegra_pcie_port *port; 849 - unsigned int timeout; 850 - unsigned long value; 798 + u32 value; 851 799 852 - /* power down PCIe slot clock bias pad */ 853 - if (soc->has_pex_bias_ctrl) 854 - afi_writel(pcie, 0, AFI_PEXBIAS_CTRL_0); 800 + timeout = jiffies + msecs_to_jiffies(timeout); 855 801 856 - /* configure mode and disable all ports */ 857 - value = afi_readl(pcie, AFI_PCIE_CONFIG); 858 - value &= ~AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_MASK; 859 - value |= AFI_PCIE_CONFIG_PCIE_DISABLE_ALL | pcie->xbar_config; 802 + while (time_before(jiffies, timeout)) { 803 + value = pads_readl(pcie, soc->pads_pll_ctl); 804 + if (value & PADS_PLL_CTL_LOCKDET) 805 + return 0; 806 + } 860 807 861 - list_for_each_entry(port, &pcie->ports, list) 862 - value &= ~AFI_PCIE_CONFIG_PCIE_DISABLE(port->index); 808 + return -ETIMEDOUT; 809 + } 863 810 864 - afi_writel(pcie, value, AFI_PCIE_CONFIG); 865 - 866 - value = afi_readl(pcie, AFI_FUSE); 867 - value |= AFI_FUSE_PCIE_T0_GEN2_DIS; 868 - afi_writel(pcie, value, AFI_FUSE); 811 + static int tegra_pcie_phy_enable(struct tegra_pcie *pcie) 812 + { 813 + const struct tegra_pcie_soc_data *soc = pcie->soc_data; 814 + u32 value; 815 + int err; 869 816 870 817 /* initialize internal PHY, enable up to 16 PCIE lanes */ 871 818 pads_writel(pcie, 0x0, PADS_CTL_SEL); ··· 881 834 value |= 
PADS_PLL_CTL_REFCLK_INTERNAL_CML | soc->tx_ref_sel; 882 835 pads_writel(pcie, value, soc->pads_pll_ctl); 883 836 837 + /* reset PLL */ 838 + value = pads_readl(pcie, soc->pads_pll_ctl); 839 + value &= ~PADS_PLL_CTL_RST_B4SM; 840 + pads_writel(pcie, value, soc->pads_pll_ctl); 841 + 842 + usleep_range(20, 100); 843 + 884 844 /* take PLL out of reset */ 885 845 value = pads_readl(pcie, soc->pads_pll_ctl); 886 846 value |= PADS_PLL_CTL_RST_B4SM; ··· 900 846 pads_writel(pcie, PADS_REFCLK_CFG_VALUE, PADS_REFCLK_CFG1); 901 847 902 848 /* wait for the PLL to lock */ 903 - timeout = 300; 904 - do { 905 - value = pads_readl(pcie, soc->pads_pll_ctl); 906 - usleep_range(1000, 2000); 907 - if (--timeout == 0) { 908 - pr_err("Tegra PCIe error: timeout waiting for PLL\n"); 909 - return -EBUSY; 910 - } 911 - } while (!(value & PADS_PLL_CTL_LOCKDET)); 849 + err = tegra_pcie_pll_wait(pcie, 500); 850 + if (err < 0) { 851 + dev_err(pcie->dev, "PLL failed to lock: %d\n", err); 852 + return err; 853 + } 912 854 913 855 /* turn off IDDQ override */ 914 856 value = pads_readl(pcie, PADS_CTL); ··· 915 865 value = pads_readl(pcie, PADS_CTL); 916 866 value |= PADS_CTL_TX_DATA_EN_1L | PADS_CTL_RX_DATA_EN_1L; 917 867 pads_writel(pcie, value, PADS_CTL); 868 + 869 + return 0; 870 + } 871 + 872 + static int tegra_pcie_enable_controller(struct tegra_pcie *pcie) 873 + { 874 + const struct tegra_pcie_soc_data *soc = pcie->soc_data; 875 + struct tegra_pcie_port *port; 876 + unsigned long value; 877 + int err; 878 + 879 + /* enable PLL power down */ 880 + if (pcie->phy) { 881 + value = afi_readl(pcie, AFI_PLLE_CONTROL); 882 + value &= ~AFI_PLLE_CONTROL_BYPASS_PADS2PLLE_CONTROL; 883 + value |= AFI_PLLE_CONTROL_PADS2PLLE_CONTROL_EN; 884 + afi_writel(pcie, value, AFI_PLLE_CONTROL); 885 + } 886 + 887 + /* power down PCIe slot clock bias pad */ 888 + if (soc->has_pex_bias_ctrl) 889 + afi_writel(pcie, 0, AFI_PEXBIAS_CTRL_0); 890 + 891 + /* configure mode and disable all ports */ 892 + value = 
afi_readl(pcie, AFI_PCIE_CONFIG); 893 + value &= ~AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_MASK; 894 + value |= AFI_PCIE_CONFIG_PCIE_DISABLE_ALL | pcie->xbar_config; 895 + 896 + list_for_each_entry(port, &pcie->ports, list) 897 + value &= ~AFI_PCIE_CONFIG_PCIE_DISABLE(port->index); 898 + 899 + afi_writel(pcie, value, AFI_PCIE_CONFIG); 900 + 901 + if (soc->has_gen2) { 902 + value = afi_readl(pcie, AFI_FUSE); 903 + value &= ~AFI_FUSE_PCIE_T0_GEN2_DIS; 904 + afi_writel(pcie, value, AFI_FUSE); 905 + } else { 906 + value = afi_readl(pcie, AFI_FUSE); 907 + value |= AFI_FUSE_PCIE_T0_GEN2_DIS; 908 + afi_writel(pcie, value, AFI_FUSE); 909 + } 910 + 911 + if (!pcie->phy) 912 + err = tegra_pcie_phy_enable(pcie); 913 + else 914 + err = phy_power_on(pcie->phy); 915 + 916 + if (err < 0) { 917 + dev_err(pcie->dev, "failed to power on PHY: %d\n", err); 918 + return err; 919 + } 918 920 919 921 /* take the PCIe interface module out of reset */ 920 922 reset_control_deassert(pcie->pcie_xrst); ··· 1000 898 int err; 1001 899 1002 900 /* TODO: disable and unprepare clocks? 
*/ 901 + 902 + err = phy_power_off(pcie->phy); 903 + if (err < 0) 904 + dev_warn(pcie->dev, "failed to power off PHY: %d\n", err); 1003 905 1004 906 reset_control_assert(pcie->pcie_xrst); 1005 907 reset_control_assert(pcie->afi_rst); ··· 1126 1020 return err; 1127 1021 } 1128 1022 1023 + pcie->phy = devm_phy_optional_get(pcie->dev, "pcie"); 1024 + if (IS_ERR(pcie->phy)) { 1025 + err = PTR_ERR(pcie->phy); 1026 + dev_err(&pdev->dev, "failed to get PHY: %d\n", err); 1027 + return err; 1028 + } 1029 + 1030 + err = phy_init(pcie->phy); 1031 + if (err < 0) { 1032 + dev_err(&pdev->dev, "failed to initialize PHY: %d\n", err); 1033 + return err; 1034 + } 1035 + 1129 1036 err = tegra_pcie_power_on(pcie); 1130 1037 if (err) { 1131 1038 dev_err(&pdev->dev, "failed to power up: %d\n", err); ··· 1197 1078 1198 1079 static int tegra_pcie_put_resources(struct tegra_pcie *pcie) 1199 1080 { 1081 + int err; 1082 + 1200 1083 if (pcie->irq > 0) 1201 1084 free_irq(pcie->irq, pcie); 1202 1085 1203 1086 tegra_pcie_power_off(pcie); 1087 + 1088 + err = phy_exit(pcie->phy); 1089 + if (err < 0) 1090 + dev_err(pcie->dev, "failed to teardown PHY: %d\n", err); 1091 + 1204 1092 return 0; 1205 1093 } 1206 1094 ··· 1296 1170 return hwirq; 1297 1171 1298 1172 irq = irq_create_mapping(msi->domain, hwirq); 1299 - if (!irq) 1173 + if (!irq) { 1174 + tegra_msi_free(msi, hwirq); 1300 1175 return -EINVAL; 1176 + } 1301 1177 1302 1178 irq_set_msi_desc(irq, desc); 1303 1179 ··· 1317 1189 { 1318 1190 struct tegra_msi *msi = to_tegra_msi(chip); 1319 1191 struct irq_data *d = irq_get_irq_data(irq); 1192 + irq_hw_number_t hwirq = irqd_to_hwirq(d); 1320 1193 1321 - tegra_msi_free(msi, d->hwirq); 1194 + irq_dispose_mapping(irq); 1195 + tegra_msi_free(msi, hwirq); 1322 1196 } 1323 1197 1324 1198 static struct irq_chip tegra_msi_irq_chip = { ··· 1457 1327 { 1458 1328 struct device_node *np = pcie->dev->of_node; 1459 1329 1460 - if (of_device_is_compatible(np, "nvidia,tegra30-pcie")) { 1330 + if 
(of_device_is_compatible(np, "nvidia,tegra124-pcie")) { 1331 + switch (lanes) { 1332 + case 0x0000104: 1333 + dev_info(pcie->dev, "4x1, 1x1 configuration\n"); 1334 + *xbar = AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_X4_X1; 1335 + return 0; 1336 + 1337 + case 0x0000102: 1338 + dev_info(pcie->dev, "2x1, 1x1 configuration\n"); 1339 + *xbar = AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_X2_X1; 1340 + return 0; 1341 + } 1342 + } else if (of_device_is_compatible(np, "nvidia,tegra30-pcie")) { 1461 1343 switch (lanes) { 1462 1344 case 0x00000204: 1463 1345 dev_info(pcie->dev, "4x1, 2x1 configuration\n"); ··· 1577 1435 struct device_node *np = pcie->dev->of_node; 1578 1436 unsigned int i = 0; 1579 1437 1580 - if (of_device_is_compatible(np, "nvidia,tegra30-pcie")) { 1438 + if (of_device_is_compatible(np, "nvidia,tegra124-pcie")) { 1439 + pcie->num_supplies = 7; 1440 + 1441 + pcie->supplies = devm_kcalloc(pcie->dev, pcie->num_supplies, 1442 + sizeof(*pcie->supplies), 1443 + GFP_KERNEL); 1444 + if (!pcie->supplies) 1445 + return -ENOMEM; 1446 + 1447 + pcie->supplies[i++].supply = "avddio-pex"; 1448 + pcie->supplies[i++].supply = "dvddio-pex"; 1449 + pcie->supplies[i++].supply = "avdd-pex-pll"; 1450 + pcie->supplies[i++].supply = "hvdd-pex"; 1451 + pcie->supplies[i++].supply = "hvdd-pex-pll-e"; 1452 + pcie->supplies[i++].supply = "vddio-pex-ctl"; 1453 + pcie->supplies[i++].supply = "avdd-pll-erefe"; 1454 + } else if (of_device_is_compatible(np, "nvidia,tegra30-pcie")) { 1581 1455 bool need_pexa = false, need_pexb = false; 1582 1456 1583 1457 /* VDD_PEXA and AVDD_PEXA supply lanes 0 to 3 */ ··· 1672 1514 struct resource res; 1673 1515 int err; 1674 1516 1517 + memset(&pcie->all, 0, sizeof(pcie->all)); 1518 + pcie->all.flags = IORESOURCE_MEM; 1519 + pcie->all.name = np->full_name; 1520 + pcie->all.start = ~0; 1521 + pcie->all.end = 0; 1522 + 1675 1523 if (of_pci_range_parser_init(&parser, np)) { 1676 1524 dev_err(pcie->dev, "missing \"ranges\" property\n"); 1677 1525 return -EINVAL; 1678 1526 
} 1679 1527 1680 1528 for_each_of_pci_range(&parser, &range) { 1681 - of_pci_range_to_resource(&range, np, &res); 1529 + err = of_pci_range_to_resource(&range, np, &res); 1530 + if (err < 0) 1531 + return err; 1682 1532 1683 1533 switch (res.flags & IORESOURCE_TYPE_BITS) { 1684 1534 case IORESOURCE_IO: 1685 1535 memcpy(&pcie->io, &res, sizeof(res)); 1686 - pcie->io.name = "I/O"; 1536 + pcie->io.name = np->full_name; 1687 1537 break; 1688 1538 1689 1539 case IORESOURCE_MEM: 1690 1540 if (res.flags & IORESOURCE_PREFETCH) { 1691 1541 memcpy(&pcie->prefetch, &res, sizeof(res)); 1692 - pcie->prefetch.name = "PREFETCH"; 1542 + pcie->prefetch.name = "prefetchable"; 1693 1543 } else { 1694 1544 memcpy(&pcie->mem, &res, sizeof(res)); 1695 - pcie->mem.name = "MEM"; 1545 + pcie->mem.name = "non-prefetchable"; 1696 1546 } 1697 1547 break; 1698 1548 } 1549 + 1550 + if (res.start <= pcie->all.start) 1551 + pcie->all.start = res.start; 1552 + 1553 + if (res.end >= pcie->all.end) 1554 + pcie->all.end = res.end; 1699 1555 } 1556 + 1557 + err = devm_request_resource(pcie->dev, &iomem_resource, &pcie->all); 1558 + if (err < 0) 1559 + return err; 1700 1560 1701 1561 err = of_pci_parse_bus_range(np, &pcie->busn); 1702 1562 if (err < 0) { ··· 1817 1641 unsigned int retries = 3; 1818 1642 unsigned long value; 1819 1643 1644 + /* override presence detection */ 1645 + value = readl(port->base + RP_PRIV_MISC); 1646 + value &= ~RP_PRIV_MISC_PRSNT_MAP_EP_ABSNT; 1647 + value |= RP_PRIV_MISC_PRSNT_MAP_EP_PRSNT; 1648 + writel(value, port->base + RP_PRIV_MISC); 1649 + 1820 1650 do { 1821 1651 unsigned int timeout = TEGRA_PCIE_LINKUP_TIMEOUT; 1822 1652 ··· 1903 1721 .has_pex_bias_ctrl = false, 1904 1722 .has_intr_prsnt_sense = false, 1905 1723 .has_cml_clk = false, 1724 + .has_gen2 = false, 1906 1725 }; 1907 1726 1908 1727 static const struct tegra_pcie_soc_data tegra30_pcie_data = { ··· 1915 1732 .has_pex_bias_ctrl = true, 1916 1733 .has_intr_prsnt_sense = true, 1917 1734 .has_cml_clk = true, 
1735 + .has_gen2 = false, 1736 + }; 1737 + 1738 + static const struct tegra_pcie_soc_data tegra124_pcie_data = { 1739 + .num_ports = 2, 1740 + .msi_base_shift = 8, 1741 + .pads_pll_ctl = PADS_PLL_CTL_TEGRA30, 1742 + .tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN, 1743 + .has_pex_clkreq_en = true, 1744 + .has_pex_bias_ctrl = true, 1745 + .has_intr_prsnt_sense = true, 1746 + .has_cml_clk = true, 1747 + .has_gen2 = true, 1918 1748 }; 1919 1749 1920 1750 static const struct of_device_id tegra_pcie_of_match[] = { 1751 + { .compatible = "nvidia,tegra124-pcie", .data = &tegra124_pcie_data }, 1921 1752 { .compatible = "nvidia,tegra30-pcie", .data = &tegra30_pcie_data }, 1922 1753 { .compatible = "nvidia,tegra20-pcie", .data = &tegra20_pcie_data }, 1923 1754 { },
+659
drivers/pci/host/pci-xgene.c
··· 1 + /** 2 + * APM X-Gene PCIe Driver 3 + * 4 + * Copyright (c) 2014 Applied Micro Circuits Corporation. 5 + * 6 + * Author: Tanmay Inamdar <tinamdar@apm.com>. 7 + * 8 + * This program is free software; you can redistribute it and/or modify it 9 + * under the terms of the GNU General Public License as published by the 10 + * Free Software Foundation; either version 2 of the License, or (at your 11 + * option) any later version. 12 + * 13 + * This program is distributed in the hope that it will be useful, 14 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 + * GNU General Public License for more details. 17 + * 18 + */ 19 + #include <linux/clk-private.h> 20 + #include <linux/delay.h> 21 + #include <linux/io.h> 22 + #include <linux/jiffies.h> 23 + #include <linux/memblock.h> 24 + #include <linux/module.h> 25 + #include <linux/of.h> 26 + #include <linux/of_address.h> 27 + #include <linux/of_irq.h> 28 + #include <linux/of_pci.h> 29 + #include <linux/pci.h> 30 + #include <linux/platform_device.h> 31 + #include <linux/slab.h> 32 + 33 + #define PCIECORE_CTLANDSTATUS 0x50 34 + #define PIM1_1L 0x80 35 + #define IBAR2 0x98 36 + #define IR2MSK 0x9c 37 + #define PIM2_1L 0xa0 38 + #define IBAR3L 0xb4 39 + #define IR3MSKL 0xbc 40 + #define PIM3_1L 0xc4 41 + #define OMR1BARL 0x100 42 + #define OMR2BARL 0x118 43 + #define OMR3BARL 0x130 44 + #define CFGBARL 0x154 45 + #define CFGBARH 0x158 46 + #define CFGCTL 0x15c 47 + #define RTDID 0x160 48 + #define BRIDGE_CFG_0 0x2000 49 + #define BRIDGE_CFG_4 0x2010 50 + #define BRIDGE_STATUS_0 0x2600 51 + 52 + #define LINK_UP_MASK 0x00000100 53 + #define AXI_EP_CFG_ACCESS 0x10000 54 + #define EN_COHERENCY 0xF0000000 55 + #define EN_REG 0x00000001 56 + #define OB_LO_IO 0x00000002 57 + #define XGENE_PCIE_VENDORID 0x10E8 58 + #define XGENE_PCIE_DEVICEID 0xE004 59 + #define SZ_1T (SZ_1G*1024ULL) 60 + #define PIPE_PHY_RATE_RD(src) ((0xc000 & (u32)(src)) >> 
0xe) 61 + 62 + struct xgene_pcie_port { 63 + struct device_node *node; 64 + struct device *dev; 65 + struct clk *clk; 66 + void __iomem *csr_base; 67 + void __iomem *cfg_base; 68 + unsigned long cfg_addr; 69 + bool link_up; 70 + }; 71 + 72 + static inline u32 pcie_bar_low_val(u32 addr, u32 flags) 73 + { 74 + return (addr & PCI_BASE_ADDRESS_MEM_MASK) | flags; 75 + } 76 + 77 + /* PCIe Configuration Out/In */ 78 + static inline void xgene_pcie_cfg_out32(void __iomem *addr, int offset, u32 val) 79 + { 80 + writel(val, addr + offset); 81 + } 82 + 83 + static inline void xgene_pcie_cfg_out16(void __iomem *addr, int offset, u16 val) 84 + { 85 + u32 val32 = readl(addr + (offset & ~0x3)); 86 + 87 + switch (offset & 0x3) { 88 + case 2: 89 + val32 &= ~0xFFFF0000; 90 + val32 |= (u32)val << 16; 91 + break; 92 + case 0: 93 + default: 94 + val32 &= ~0xFFFF; 95 + val32 |= val; 96 + break; 97 + } 98 + writel(val32, addr + (offset & ~0x3)); 99 + } 100 + 101 + static inline void xgene_pcie_cfg_out8(void __iomem *addr, int offset, u8 val) 102 + { 103 + u32 val32 = readl(addr + (offset & ~0x3)); 104 + 105 + switch (offset & 0x3) { 106 + case 0: 107 + val32 &= ~0xFF; 108 + val32 |= val; 109 + break; 110 + case 1: 111 + val32 &= ~0xFF00; 112 + val32 |= (u32)val << 8; 113 + break; 114 + case 2: 115 + val32 &= ~0xFF0000; 116 + val32 |= (u32)val << 16; 117 + break; 118 + case 3: 119 + default: 120 + val32 &= ~0xFF000000; 121 + val32 |= (u32)val << 24; 122 + break; 123 + } 124 + writel(val32, addr + (offset & ~0x3)); 125 + } 126 + 127 + static inline void xgene_pcie_cfg_in32(void __iomem *addr, int offset, u32 *val) 128 + { 129 + *val = readl(addr + offset); 130 + } 131 + 132 + static inline void xgene_pcie_cfg_in16(void __iomem *addr, int offset, u32 *val) 133 + { 134 + *val = readl(addr + (offset & ~0x3)); 135 + 136 + switch (offset & 0x3) { 137 + case 2: 138 + *val >>= 16; 139 + break; 140 + } 141 + 142 + *val &= 0xFFFF; 143 + } 144 + 145 + static inline void xgene_pcie_cfg_in8(void 
__iomem *addr, int offset, u32 *val) 146 + { 147 + *val = readl(addr + (offset & ~0x3)); 148 + 149 + switch (offset & 0x3) { 150 + case 3: 151 + *val = *val >> 24; 152 + break; 153 + case 2: 154 + *val = *val >> 16; 155 + break; 156 + case 1: 157 + *val = *val >> 8; 158 + break; 159 + } 160 + *val &= 0xFF; 161 + } 162 + 163 + /* 164 + * When the address bit [17:16] is 2'b01, the Configuration access will be 165 + * treated as Type 1 and it will be forwarded to external PCIe device. 166 + */ 167 + static void __iomem *xgene_pcie_get_cfg_base(struct pci_bus *bus) 168 + { 169 + struct xgene_pcie_port *port = bus->sysdata; 170 + 171 + if (bus->number >= (bus->primary + 1)) 172 + return port->cfg_base + AXI_EP_CFG_ACCESS; 173 + 174 + return port->cfg_base; 175 + } 176 + 177 + /* 178 + * For Configuration request, RTDID register is used as Bus Number, 179 + * Device Number and Function number of the header fields. 180 + */ 181 + static void xgene_pcie_set_rtdid_reg(struct pci_bus *bus, uint devfn) 182 + { 183 + struct xgene_pcie_port *port = bus->sysdata; 184 + unsigned int b, d, f; 185 + u32 rtdid_val = 0; 186 + 187 + b = bus->number; 188 + d = PCI_SLOT(devfn); 189 + f = PCI_FUNC(devfn); 190 + 191 + if (!pci_is_root_bus(bus)) 192 + rtdid_val = (b << 8) | (d << 3) | f; 193 + 194 + writel(rtdid_val, port->csr_base + RTDID); 195 + /* read the register back to ensure flush */ 196 + readl(port->csr_base + RTDID); 197 + } 198 + 199 + /* 200 + * X-Gene PCIe port uses BAR0-BAR1 of RC's configuration space as 201 + * the translation from PCI bus to native BUS. Entire DDR region 202 + * is mapped into PCIe space using these registers, so it can be 203 + * reached by DMA from EP devices. The BAR0/1 of bridge should be 204 + * hidden during enumeration to avoid the sizing and resource allocation 205 + * by PCIe core. 
206 + */ 207 + static bool xgene_pcie_hide_rc_bars(struct pci_bus *bus, int offset) 208 + { 209 + if (pci_is_root_bus(bus) && ((offset == PCI_BASE_ADDRESS_0) || 210 + (offset == PCI_BASE_ADDRESS_1))) 211 + return true; 212 + 213 + return false; 214 + } 215 + 216 + static int xgene_pcie_read_config(struct pci_bus *bus, unsigned int devfn, 217 + int offset, int len, u32 *val) 218 + { 219 + struct xgene_pcie_port *port = bus->sysdata; 220 + void __iomem *addr; 221 + 222 + if ((pci_is_root_bus(bus) && devfn != 0) || !port->link_up) 223 + return PCIBIOS_DEVICE_NOT_FOUND; 224 + 225 + if (xgene_pcie_hide_rc_bars(bus, offset)) { 226 + *val = 0; 227 + return PCIBIOS_SUCCESSFUL; 228 + } 229 + 230 + xgene_pcie_set_rtdid_reg(bus, devfn); 231 + addr = xgene_pcie_get_cfg_base(bus); 232 + switch (len) { 233 + case 1: 234 + xgene_pcie_cfg_in8(addr, offset, val); 235 + break; 236 + case 2: 237 + xgene_pcie_cfg_in16(addr, offset, val); 238 + break; 239 + default: 240 + xgene_pcie_cfg_in32(addr, offset, val); 241 + break; 242 + } 243 + 244 + return PCIBIOS_SUCCESSFUL; 245 + } 246 + 247 + static int xgene_pcie_write_config(struct pci_bus *bus, unsigned int devfn, 248 + int offset, int len, u32 val) 249 + { 250 + struct xgene_pcie_port *port = bus->sysdata; 251 + void __iomem *addr; 252 + 253 + if ((pci_is_root_bus(bus) && devfn != 0) || !port->link_up) 254 + return PCIBIOS_DEVICE_NOT_FOUND; 255 + 256 + if (xgene_pcie_hide_rc_bars(bus, offset)) 257 + return PCIBIOS_SUCCESSFUL; 258 + 259 + xgene_pcie_set_rtdid_reg(bus, devfn); 260 + addr = xgene_pcie_get_cfg_base(bus); 261 + switch (len) { 262 + case 1: 263 + xgene_pcie_cfg_out8(addr, offset, (u8)val); 264 + break; 265 + case 2: 266 + xgene_pcie_cfg_out16(addr, offset, (u16)val); 267 + break; 268 + default: 269 + xgene_pcie_cfg_out32(addr, offset, val); 270 + break; 271 + } 272 + 273 + return PCIBIOS_SUCCESSFUL; 274 + } 275 + 276 + static struct pci_ops xgene_pcie_ops = { 277 + .read = xgene_pcie_read_config, 278 + .write = 
xgene_pcie_write_config 279 + }; 280 + 281 + static u64 xgene_pcie_set_ib_mask(void __iomem *csr_base, u32 addr, 282 + u32 flags, u64 size) 283 + { 284 + u64 mask = (~(size - 1) & PCI_BASE_ADDRESS_MEM_MASK) | flags; 285 + u32 val32 = 0; 286 + u32 val; 287 + 288 + val32 = readl(csr_base + addr); 289 + val = (val32 & 0x0000ffff) | (lower_32_bits(mask) << 16); 290 + writel(val, csr_base + addr); 291 + 292 + val32 = readl(csr_base + addr + 0x04); 293 + val = (val32 & 0xffff0000) | (lower_32_bits(mask) >> 16); 294 + writel(val, csr_base + addr + 0x04); 295 + 296 + val32 = readl(csr_base + addr + 0x04); 297 + val = (val32 & 0x0000ffff) | (upper_32_bits(mask) << 16); 298 + writel(val, csr_base + addr + 0x04); 299 + 300 + val32 = readl(csr_base + addr + 0x08); 301 + val = (val32 & 0xffff0000) | (upper_32_bits(mask) >> 16); 302 + writel(val, csr_base + addr + 0x08); 303 + 304 + return mask; 305 + } 306 + 307 + static void xgene_pcie_linkup(struct xgene_pcie_port *port, 308 + u32 *lanes, u32 *speed) 309 + { 310 + void __iomem *csr_base = port->csr_base; 311 + u32 val32; 312 + 313 + port->link_up = false; 314 + val32 = readl(csr_base + PCIECORE_CTLANDSTATUS); 315 + if (val32 & LINK_UP_MASK) { 316 + port->link_up = true; 317 + *speed = PIPE_PHY_RATE_RD(val32); 318 + val32 = readl(csr_base + BRIDGE_STATUS_0); 319 + *lanes = val32 >> 26; 320 + } 321 + } 322 + 323 + static int xgene_pcie_init_port(struct xgene_pcie_port *port) 324 + { 325 + int rc; 326 + 327 + port->clk = clk_get(port->dev, NULL); 328 + if (IS_ERR(port->clk)) { 329 + dev_err(port->dev, "clock not available\n"); 330 + return -ENODEV; 331 + } 332 + 333 + rc = clk_prepare_enable(port->clk); 334 + if (rc) { 335 + dev_err(port->dev, "clock enable failed\n"); 336 + return rc; 337 + } 338 + 339 + return 0; 340 + } 341 + 342 + static int xgene_pcie_map_reg(struct xgene_pcie_port *port, 343 + struct platform_device *pdev) 344 + { 345 + struct resource *res; 346 + 347 + res = platform_get_resource_byname(pdev, 
IORESOURCE_MEM, "csr"); 348 + port->csr_base = devm_ioremap_resource(port->dev, res); 349 + if (IS_ERR(port->csr_base)) 350 + return PTR_ERR(port->csr_base); 351 + 352 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg"); 353 + port->cfg_base = devm_ioremap_resource(port->dev, res); 354 + if (IS_ERR(port->cfg_base)) 355 + return PTR_ERR(port->cfg_base); 356 + port->cfg_addr = res->start; 357 + 358 + return 0; 359 + } 360 + 361 + static void xgene_pcie_setup_ob_reg(struct xgene_pcie_port *port, 362 + struct resource *res, u32 offset, 363 + u64 cpu_addr, u64 pci_addr) 364 + { 365 + void __iomem *base = port->csr_base + offset; 366 + resource_size_t size = resource_size(res); 367 + u64 restype = resource_type(res); 368 + u64 mask = 0; 369 + u32 min_size; 370 + u32 flag = EN_REG; 371 + 372 + if (restype == IORESOURCE_MEM) { 373 + min_size = SZ_128M; 374 + } else { 375 + min_size = 128; 376 + flag |= OB_LO_IO; 377 + } 378 + 379 + if (size >= min_size) 380 + mask = ~(size - 1) | flag; 381 + else 382 + dev_warn(port->dev, "res size 0x%llx less than minimum 0x%x\n", 383 + (u64)size, min_size); 384 + 385 + writel(lower_32_bits(cpu_addr), base); 386 + writel(upper_32_bits(cpu_addr), base + 0x04); 387 + writel(lower_32_bits(mask), base + 0x08); 388 + writel(upper_32_bits(mask), base + 0x0c); 389 + writel(lower_32_bits(pci_addr), base + 0x10); 390 + writel(upper_32_bits(pci_addr), base + 0x14); 391 + } 392 + 393 + static void xgene_pcie_setup_cfg_reg(void __iomem *csr_base, u64 addr) 394 + { 395 + writel(lower_32_bits(addr), csr_base + CFGBARL); 396 + writel(upper_32_bits(addr), csr_base + CFGBARH); 397 + writel(EN_REG, csr_base + CFGCTL); 398 + } 399 + 400 + static int xgene_pcie_map_ranges(struct xgene_pcie_port *port, 401 + struct list_head *res, 402 + resource_size_t io_base) 403 + { 404 + struct pci_host_bridge_window *window; 405 + struct device *dev = port->dev; 406 + int ret; 407 + 408 + list_for_each_entry(window, res, list) { 409 + struct resource *res = 
window->res; 410 + u64 restype = resource_type(res); 411 + 412 + dev_dbg(port->dev, "%pR\n", res); 413 + 414 + switch (restype) { 415 + case IORESOURCE_IO: 416 + xgene_pcie_setup_ob_reg(port, res, OMR3BARL, io_base, 417 + res->start - window->offset); 418 + ret = pci_remap_iospace(res, io_base); 419 + if (ret < 0) 420 + return ret; 421 + break; 422 + case IORESOURCE_MEM: 423 + xgene_pcie_setup_ob_reg(port, res, OMR1BARL, res->start, 424 + res->start - window->offset); 425 + break; 426 + case IORESOURCE_BUS: 427 + break; 428 + default: 429 + dev_err(dev, "invalid resource %pR\n", res); 430 + return -EINVAL; 431 + } 432 + } 433 + xgene_pcie_setup_cfg_reg(port->csr_base, port->cfg_addr); 434 + 435 + return 0; 436 + } 437 + 438 + static void xgene_pcie_setup_pims(void *addr, u64 pim, u64 size) 439 + { 440 + writel(lower_32_bits(pim), addr); 441 + writel(upper_32_bits(pim) | EN_COHERENCY, addr + 0x04); 442 + writel(lower_32_bits(size), addr + 0x10); 443 + writel(upper_32_bits(size), addr + 0x14); 444 + } 445 + 446 + /* 447 + * X-Gene PCIe support maximum 3 inbound memory regions 448 + * This function helps to select a region based on size of region 449 + */ 450 + static int xgene_pcie_select_ib_reg(u8 *ib_reg_mask, u64 size) 451 + { 452 + if ((size > 4) && (size < SZ_16M) && !(*ib_reg_mask & (1 << 1))) { 453 + *ib_reg_mask |= (1 << 1); 454 + return 1; 455 + } 456 + 457 + if ((size > SZ_1K) && (size < SZ_1T) && !(*ib_reg_mask & (1 << 0))) { 458 + *ib_reg_mask |= (1 << 0); 459 + return 0; 460 + } 461 + 462 + if ((size > SZ_1M) && (size < SZ_1T) && !(*ib_reg_mask & (1 << 2))) { 463 + *ib_reg_mask |= (1 << 2); 464 + return 2; 465 + } 466 + 467 + return -EINVAL; 468 + } 469 + 470 + static void xgene_pcie_setup_ib_reg(struct xgene_pcie_port *port, 471 + struct of_pci_range *range, u8 *ib_reg_mask) 472 + { 473 + void __iomem *csr_base = port->csr_base; 474 + void __iomem *cfg_base = port->cfg_base; 475 + void *bar_addr; 476 + void *pim_addr; 477 + u64 cpu_addr = 
range->cpu_addr; 478 + u64 pci_addr = range->pci_addr; 479 + u64 size = range->size; 480 + u64 mask = ~(size - 1) | EN_REG; 481 + u32 flags = PCI_BASE_ADDRESS_MEM_TYPE_64; 482 + u32 bar_low; 483 + int region; 484 + 485 + region = xgene_pcie_select_ib_reg(ib_reg_mask, range->size); 486 + if (region < 0) { 487 + dev_warn(port->dev, "invalid pcie dma-range config\n"); 488 + return; 489 + } 490 + 491 + if (range->flags & IORESOURCE_PREFETCH) 492 + flags |= PCI_BASE_ADDRESS_MEM_PREFETCH; 493 + 494 + bar_low = pcie_bar_low_val((u32)cpu_addr, flags); 495 + switch (region) { 496 + case 0: 497 + xgene_pcie_set_ib_mask(csr_base, BRIDGE_CFG_4, flags, size); 498 + bar_addr = cfg_base + PCI_BASE_ADDRESS_0; 499 + writel(bar_low, bar_addr); 500 + writel(upper_32_bits(cpu_addr), bar_addr + 0x4); 501 + pim_addr = csr_base + PIM1_1L; 502 + break; 503 + case 1: 504 + bar_addr = csr_base + IBAR2; 505 + writel(bar_low, bar_addr); 506 + writel(lower_32_bits(mask), csr_base + IR2MSK); 507 + pim_addr = csr_base + PIM2_1L; 508 + break; 509 + case 2: 510 + bar_addr = csr_base + IBAR3L; 511 + writel(bar_low, bar_addr); 512 + writel(upper_32_bits(cpu_addr), bar_addr + 0x4); 513 + writel(lower_32_bits(mask), csr_base + IR3MSKL); 514 + writel(upper_32_bits(mask), csr_base + IR3MSKL + 0x4); 515 + pim_addr = csr_base + PIM3_1L; 516 + break; 517 + } 518 + 519 + xgene_pcie_setup_pims(pim_addr, pci_addr, ~(size - 1)); 520 + } 521 + 522 + static int pci_dma_range_parser_init(struct of_pci_range_parser *parser, 523 + struct device_node *node) 524 + { 525 + const int na = 3, ns = 2; 526 + int rlen; 527 + 528 + parser->node = node; 529 + parser->pna = of_n_addr_cells(node); 530 + parser->np = parser->pna + na + ns; 531 + 532 + parser->range = of_get_property(node, "dma-ranges", &rlen); 533 + if (!parser->range) 534 + return -ENOENT; 535 + parser->end = parser->range + rlen / sizeof(__be32); 536 + 537 + return 0; 538 + } 539 + 540 + static int xgene_pcie_parse_map_dma_ranges(struct xgene_pcie_port *port) 
541 + { 542 + struct device_node *np = port->node; 543 + struct of_pci_range range; 544 + struct of_pci_range_parser parser; 545 + struct device *dev = port->dev; 546 + u8 ib_reg_mask = 0; 547 + 548 + if (pci_dma_range_parser_init(&parser, np)) { 549 + dev_err(dev, "missing dma-ranges property\n"); 550 + return -EINVAL; 551 + } 552 + 553 + /* Get the dma-ranges from DT */ 554 + for_each_of_pci_range(&parser, &range) { 555 + u64 end = range.cpu_addr + range.size - 1; 556 + 557 + dev_dbg(port->dev, "0x%08x 0x%016llx..0x%016llx -> 0x%016llx\n", 558 + range.flags, range.cpu_addr, end, range.pci_addr); 559 + xgene_pcie_setup_ib_reg(port, &range, &ib_reg_mask); 560 + } 561 + return 0; 562 + } 563 + 564 + /* clear BAR configuration which was done by firmware */ 565 + static void xgene_pcie_clear_config(struct xgene_pcie_port *port) 566 + { 567 + int i; 568 + 569 + for (i = PIM1_1L; i <= CFGCTL; i += 4) 570 + writel(0x0, port->csr_base + i); 571 + } 572 + 573 + static int xgene_pcie_setup(struct xgene_pcie_port *port, 574 + struct list_head *res, 575 + resource_size_t io_base) 576 + { 577 + u32 val, lanes = 0, speed = 0; 578 + int ret; 579 + 580 + xgene_pcie_clear_config(port); 581 + 582 + /* setup the vendor and device IDs correctly */ 583 + val = (XGENE_PCIE_DEVICEID << 16) | XGENE_PCIE_VENDORID; 584 + writel(val, port->csr_base + BRIDGE_CFG_0); 585 + 586 + ret = xgene_pcie_map_ranges(port, res, io_base); 587 + if (ret) 588 + return ret; 589 + 590 + ret = xgene_pcie_parse_map_dma_ranges(port); 591 + if (ret) 592 + return ret; 593 + 594 + xgene_pcie_linkup(port, &lanes, &speed); 595 + if (!port->link_up) 596 + dev_info(port->dev, "(rc) link down\n"); 597 + else 598 + dev_info(port->dev, "(rc) x%d gen-%d link up\n", 599 + lanes, speed + 1); 600 + return 0; 601 + } 602 + 603 + static int xgene_pcie_probe_bridge(struct platform_device *pdev) 604 + { 605 + struct device_node *dn = pdev->dev.of_node; 606 + struct xgene_pcie_port *port; 607 + resource_size_t iobase = 0; 608 + 
struct pci_bus *bus; 609 + int ret; 610 + LIST_HEAD(res); 611 + 612 + port = devm_kzalloc(&pdev->dev, sizeof(*port), GFP_KERNEL); 613 + if (!port) 614 + return -ENOMEM; 615 + port->node = of_node_get(pdev->dev.of_node); 616 + port->dev = &pdev->dev; 617 + 618 + ret = xgene_pcie_map_reg(port, pdev); 619 + if (ret) 620 + return ret; 621 + 622 + ret = xgene_pcie_init_port(port); 623 + if (ret) 624 + return ret; 625 + 626 + ret = of_pci_get_host_bridge_resources(dn, 0, 0xff, &res, &iobase); 627 + if (ret) 628 + return ret; 629 + 630 + ret = xgene_pcie_setup(port, &res, iobase); 631 + if (ret) 632 + return ret; 633 + 634 + bus = pci_scan_root_bus(&pdev->dev, 0, &xgene_pcie_ops, port, &res); 635 + if (!bus) 636 + return -ENOMEM; 637 + 638 + platform_set_drvdata(pdev, port); 639 + return 0; 640 + } 641 + 642 + static const struct of_device_id xgene_pcie_match_table[] = { 643 + {.compatible = "apm,xgene-pcie",}, 644 + {}, 645 + }; 646 + 647 + static struct platform_driver xgene_pcie_driver = { 648 + .driver = { 649 + .name = "xgene-pcie", 650 + .owner = THIS_MODULE, 651 + .of_match_table = of_match_ptr(xgene_pcie_match_table), 652 + }, 653 + .probe = xgene_pcie_probe_bridge, 654 + }; 655 + module_platform_driver(xgene_pcie_driver); 656 + 657 + MODULE_AUTHOR("Tanmay Inamdar <tinamdar@apm.com>"); 658 + MODULE_DESCRIPTION("APM X-Gene PCIe driver"); 659 + MODULE_LICENSE("GPL v2");
+100 -168
drivers/pci/host/pcie-designware.c
··· 73 73 74 74 static inline struct pcie_port *sys_to_pcie(struct pci_sys_data *sys) 75 75 { 76 + BUG_ON(!sys->private_data); 77 + 76 78 return sys->private_data; 77 79 } 78 80 ··· 196 194 dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_HI, 4, 0); 197 195 } 198 196 199 - static int find_valid_pos0(struct pcie_port *pp, int msgvec, int pos, int *pos0) 200 - { 201 - int flag = 1; 202 - 203 - do { 204 - pos = find_next_zero_bit(pp->msi_irq_in_use, 205 - MAX_MSI_IRQS, pos); 206 - /*if you have reached to the end then get out from here.*/ 207 - if (pos == MAX_MSI_IRQS) 208 - return -ENOSPC; 209 - /* 210 - * Check if this position is at correct offset.nvec is always a 211 - * power of two. pos0 must be nvec bit aligned. 212 - */ 213 - if (pos % msgvec) 214 - pos += msgvec - (pos % msgvec); 215 - else 216 - flag = 0; 217 - } while (flag); 218 - 219 - *pos0 = pos; 220 - return 0; 221 - } 222 - 223 197 static void dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq) 224 198 { 225 199 unsigned int res, bit, val; ··· 214 236 215 237 for (i = 0; i < nvec; i++) { 216 238 irq_set_msi_desc_off(irq_base, i, NULL); 217 - clear_bit(pos + i, pp->msi_irq_in_use); 218 239 /* Disable corresponding interrupt on MSI controller */ 219 240 if (pp->ops->msi_clear_irq) 220 241 pp->ops->msi_clear_irq(pp, pos + i); 221 242 else 222 243 dw_pcie_msi_clear_irq(pp, pos + i); 223 244 } 245 + 246 + bitmap_release_region(pp->msi_irq_in_use, pos, order_base_2(nvec)); 224 247 } 225 248 226 249 static void dw_pcie_msi_set_irq(struct pcie_port *pp, int irq) ··· 237 258 238 259 static int assign_irq(int no_irqs, struct msi_desc *desc, int *pos) 239 260 { 240 - int irq, pos0, pos1, i; 261 + int irq, pos0, i; 241 262 struct pcie_port *pp = sys_to_pcie(desc->dev->bus->sysdata); 242 263 243 - if (!pp) { 244 - BUG(); 245 - return -EINVAL; 246 - } 247 - 248 - pos0 = find_first_zero_bit(pp->msi_irq_in_use, 249 - MAX_MSI_IRQS); 250 - if (pos0 % no_irqs) { 251 - if (find_valid_pos0(pp, no_irqs, pos0, &pos0)) 252 - goto 
no_valid_irq; 253 - } 254 - if (no_irqs > 1) { 255 - pos1 = find_next_bit(pp->msi_irq_in_use, 256 - MAX_MSI_IRQS, pos0); 257 - /* there must be nvec number of consecutive free bits */ 258 - while ((pos1 - pos0) < no_irqs) { 259 - if (find_valid_pos0(pp, no_irqs, pos1, &pos0)) 260 - goto no_valid_irq; 261 - pos1 = find_next_bit(pp->msi_irq_in_use, 262 - MAX_MSI_IRQS, pos0); 263 - } 264 - } 264 + pos0 = bitmap_find_free_region(pp->msi_irq_in_use, MAX_MSI_IRQS, 265 + order_base_2(no_irqs)); 266 + if (pos0 < 0) 267 + goto no_valid_irq; 265 268 266 269 irq = irq_find_mapping(pp->irq_domain, pos0); 267 270 if (!irq) ··· 261 300 clear_irq_range(pp, irq, i, pos0); 262 301 goto no_valid_irq; 263 302 } 264 - set_bit(pos0 + i, pp->msi_irq_in_use); 265 303 /*Enable corresponding interrupt in MSI interrupt controller */ 266 304 if (pp->ops->msi_set_irq) 267 305 pp->ops->msi_set_irq(pp, pos0 + i); ··· 276 316 return -ENOSPC; 277 317 } 278 318 279 - static void clear_irq(unsigned int irq) 280 - { 281 - unsigned int pos, nvec; 282 - struct msi_desc *msi; 283 - struct pcie_port *pp; 284 - struct irq_data *data = irq_get_irq_data(irq); 285 - 286 - /* get the port structure */ 287 - msi = irq_data_get_msi(data); 288 - pp = sys_to_pcie(msi->dev->bus->sysdata); 289 - if (!pp) { 290 - BUG(); 291 - return; 292 - } 293 - 294 - /* undo what was done in assign_irq */ 295 - pos = data->hwirq; 296 - nvec = 1 << msi->msi_attrib.multiple; 297 - 298 - clear_irq_range(pp, irq, nvec, pos); 299 - 300 - /* all irqs cleared; reset attributes */ 301 - msi->irq = 0; 302 - msi->msi_attrib.multiple = 0; 303 - } 304 - 305 319 static int dw_msi_setup_irq(struct msi_chip *chip, struct pci_dev *pdev, 306 320 struct msi_desc *desc) 307 321 { 308 - int irq, pos, msgvec; 309 - u16 msg_ctr; 322 + int irq, pos; 310 323 struct msi_msg msg; 311 324 struct pcie_port *pp = sys_to_pcie(pdev->bus->sysdata); 312 325 313 - if (!pp) { 314 - BUG(); 315 - return -EINVAL; 316 - } 317 - 318 - pci_read_config_word(pdev, 
desc->msi_attrib.pos+PCI_MSI_FLAGS, 319 - &msg_ctr); 320 - msgvec = (msg_ctr&PCI_MSI_FLAGS_QSIZE) >> 4; 321 - if (msgvec == 0) 322 - msgvec = (msg_ctr & PCI_MSI_FLAGS_QMASK) >> 1; 323 - if (msgvec > 5) 324 - msgvec = 0; 325 - 326 - irq = assign_irq((1 << msgvec), desc, &pos); 326 + irq = assign_irq(1, desc, &pos); 327 327 if (irq < 0) 328 328 return irq; 329 329 330 - /* 331 - * write_msi_msg() will update PCI_MSI_FLAGS so there is 332 - * no need to explicitly call pci_write_config_word(). 333 - */ 334 - desc->msi_attrib.multiple = msgvec; 335 - 336 - if (pp->ops->get_msi_data) 337 - msg.address_lo = pp->ops->get_msi_data(pp); 330 + if (pp->ops->get_msi_addr) 331 + msg.address_lo = pp->ops->get_msi_addr(pp); 338 332 else 339 333 msg.address_lo = virt_to_phys((void *)pp->msi_data); 340 334 msg.address_hi = 0x0; 341 - msg.data = pos; 335 + 336 + if (pp->ops->get_msi_data) 337 + msg.data = pp->ops->get_msi_data(pp, pos); 338 + else 339 + msg.data = pos; 340 + 342 341 write_msi_msg(irq, &msg); 343 342 344 343 return 0; ··· 305 386 306 387 static void dw_msi_teardown_irq(struct msi_chip *chip, unsigned int irq) 307 388 { 308 - clear_irq(irq); 389 + struct irq_data *data = irq_get_irq_data(irq); 390 + struct msi_desc *msi = irq_data_get_msi(data); 391 + struct pcie_port *pp = sys_to_pcie(msi->dev->bus->sysdata); 392 + 393 + clear_irq_range(pp, irq, 1, data->hwirq); 309 394 } 310 395 311 396 static struct msi_chip dw_pcie_msi_chip = { ··· 348 425 struct resource *cfg_res; 349 426 u32 val, na, ns; 350 427 const __be32 *addrp; 351 - int i, index; 428 + int i, index, ret; 352 429 353 430 /* Find the address cell size and the number of cells in order to get 354 431 * the untranslated address. 
··· 358 435 359 436 cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config"); 360 437 if (cfg_res) { 361 - pp->config.cfg0_size = resource_size(cfg_res)/2; 362 - pp->config.cfg1_size = resource_size(cfg_res)/2; 438 + pp->cfg0_size = resource_size(cfg_res)/2; 439 + pp->cfg1_size = resource_size(cfg_res)/2; 363 440 pp->cfg0_base = cfg_res->start; 364 - pp->cfg1_base = cfg_res->start + pp->config.cfg0_size; 441 + pp->cfg1_base = cfg_res->start + pp->cfg0_size; 365 442 366 443 /* Find the untranslated configuration space address */ 367 444 index = of_property_match_string(np, "reg-names", "config"); 368 - addrp = of_get_address(np, index, false, false); 445 + addrp = of_get_address(np, index, NULL, NULL); 369 446 pp->cfg0_mod_base = of_read_number(addrp, ns); 370 - pp->cfg1_mod_base = pp->cfg0_mod_base + pp->config.cfg0_size; 447 + pp->cfg1_mod_base = pp->cfg0_mod_base + pp->cfg0_size; 371 448 } else { 372 449 dev_err(pp->dev, "missing *config* reg space\n"); 373 450 } ··· 389 466 pp->io.end = min_t(resource_size_t, 390 467 IO_SPACE_LIMIT, 391 468 range.pci_addr + range.size 392 - + global_io_offset); 393 - pp->config.io_size = resource_size(&pp->io); 394 - pp->config.io_bus_addr = range.pci_addr; 469 + + global_io_offset - 1); 470 + pp->io_size = resource_size(&pp->io); 471 + pp->io_bus_addr = range.pci_addr; 395 472 pp->io_base = range.cpu_addr; 396 473 397 474 /* Find the untranslated IO space address */ ··· 401 478 if (restype == IORESOURCE_MEM) { 402 479 of_pci_range_to_resource(&range, np, &pp->mem); 403 480 pp->mem.name = "MEM"; 404 - pp->config.mem_size = resource_size(&pp->mem); 405 - pp->config.mem_bus_addr = range.pci_addr; 481 + pp->mem_size = resource_size(&pp->mem); 482 + pp->mem_bus_addr = range.pci_addr; 406 483 407 484 /* Find the untranslated MEM space address */ 408 485 pp->mem_mod_base = of_read_number(parser.range - ··· 410 487 } 411 488 if (restype == 0) { 412 489 of_pci_range_to_resource(&range, np, &pp->cfg); 413 - 
pp->config.cfg0_size = resource_size(&pp->cfg)/2; 414 - pp->config.cfg1_size = resource_size(&pp->cfg)/2; 490 + pp->cfg0_size = resource_size(&pp->cfg)/2; 491 + pp->cfg1_size = resource_size(&pp->cfg)/2; 415 492 pp->cfg0_base = pp->cfg.start; 416 - pp->cfg1_base = pp->cfg.start + pp->config.cfg0_size; 493 + pp->cfg1_base = pp->cfg.start + pp->cfg0_size; 417 494 418 495 /* Find the untranslated configuration space address */ 419 496 pp->cfg0_mod_base = of_read_number(parser.range - 420 497 parser.np + na, ns); 421 498 pp->cfg1_mod_base = pp->cfg0_mod_base + 422 - pp->config.cfg0_size; 499 + pp->cfg0_size; 423 500 } 501 + } 502 + 503 + ret = of_pci_parse_bus_range(np, &pp->busn); 504 + if (ret < 0) { 505 + pp->busn.name = np->name; 506 + pp->busn.start = 0; 507 + pp->busn.end = 0xff; 508 + pp->busn.flags = IORESOURCE_BUS; 509 + dev_dbg(pp->dev, "failed to parse bus-range property: %d, using default %pR\n", 510 + ret, &pp->busn); 424 511 } 425 512 426 513 if (!pp->dbi_base) { ··· 444 511 445 512 pp->mem_base = pp->mem.start; 446 513 447 - pp->va_cfg0_base = devm_ioremap(pp->dev, pp->cfg0_base, 448 - pp->config.cfg0_size); 449 514 if (!pp->va_cfg0_base) { 450 - dev_err(pp->dev, "error with ioremap in function\n"); 451 - return -ENOMEM; 515 + pp->va_cfg0_base = devm_ioremap(pp->dev, pp->cfg0_base, 516 + pp->cfg0_size); 517 + if (!pp->va_cfg0_base) { 518 + dev_err(pp->dev, "error with ioremap in function\n"); 519 + return -ENOMEM; 520 + } 452 521 } 453 - pp->va_cfg1_base = devm_ioremap(pp->dev, pp->cfg1_base, 454 - pp->config.cfg1_size); 522 + 455 523 if (!pp->va_cfg1_base) { 456 - dev_err(pp->dev, "error with ioremap\n"); 457 - return -ENOMEM; 524 + pp->va_cfg1_base = devm_ioremap(pp->dev, pp->cfg1_base, 525 + pp->cfg1_size); 526 + if (!pp->va_cfg1_base) { 527 + dev_err(pp->dev, "error with ioremap\n"); 528 + return -ENOMEM; 529 + } 458 530 } 459 531 460 532 if (of_property_read_u32(np, "num-lanes", &pp->lanes)) { ··· 468 530 } 469 531 470 532 if 
(IS_ENABLED(CONFIG_PCI_MSI)) { 471 - pp->irq_domain = irq_domain_add_linear(pp->dev->of_node, 472 - MAX_MSI_IRQS, &msi_domain_ops, 473 - &dw_pcie_msi_chip); 474 - if (!pp->irq_domain) { 475 - dev_err(pp->dev, "irq domain init failed\n"); 476 - return -ENXIO; 477 - } 533 + if (!pp->ops->msi_host_init) { 534 + pp->irq_domain = irq_domain_add_linear(pp->dev->of_node, 535 + MAX_MSI_IRQS, &msi_domain_ops, 536 + &dw_pcie_msi_chip); 537 + if (!pp->irq_domain) { 538 + dev_err(pp->dev, "irq domain init failed\n"); 539 + return -ENXIO; 540 + } 478 541 479 - for (i = 0; i < MAX_MSI_IRQS; i++) 480 - irq_create_mapping(pp->irq_domain, i); 542 + for (i = 0; i < MAX_MSI_IRQS; i++) 543 + irq_create_mapping(pp->irq_domain, i); 544 + } else { 545 + ret = pp->ops->msi_host_init(pp, &dw_pcie_msi_chip); 546 + if (ret < 0) 547 + return ret; 548 + } 481 549 } 482 550 483 551 if (pp->ops->host_init) ··· 502 558 dw_pci.private_data = (void **)&pp; 503 559 504 560 pci_common_init_dev(pp->dev, &dw_pci); 505 - pci_assign_unassigned_resources(); 506 561 #ifdef CONFIG_PCI_DOMAINS 507 562 dw_pci.domain++; 508 563 #endif ··· 516 573 PCIE_ATU_VIEWPORT); 517 574 dw_pcie_writel_rc(pp, pp->cfg0_mod_base, PCIE_ATU_LOWER_BASE); 518 575 dw_pcie_writel_rc(pp, (pp->cfg0_mod_base >> 32), PCIE_ATU_UPPER_BASE); 519 - dw_pcie_writel_rc(pp, pp->cfg0_mod_base + pp->config.cfg0_size - 1, 576 + dw_pcie_writel_rc(pp, pp->cfg0_mod_base + pp->cfg0_size - 1, 520 577 PCIE_ATU_LIMIT); 521 578 dw_pcie_writel_rc(pp, busdev, PCIE_ATU_LOWER_TARGET); 522 579 dw_pcie_writel_rc(pp, 0, PCIE_ATU_UPPER_TARGET); ··· 532 589 dw_pcie_writel_rc(pp, PCIE_ATU_TYPE_CFG1, PCIE_ATU_CR1); 533 590 dw_pcie_writel_rc(pp, pp->cfg1_mod_base, PCIE_ATU_LOWER_BASE); 534 591 dw_pcie_writel_rc(pp, (pp->cfg1_mod_base >> 32), PCIE_ATU_UPPER_BASE); 535 - dw_pcie_writel_rc(pp, pp->cfg1_mod_base + pp->config.cfg1_size - 1, 592 + dw_pcie_writel_rc(pp, pp->cfg1_mod_base + pp->cfg1_size - 1, 536 593 PCIE_ATU_LIMIT); 537 594 dw_pcie_writel_rc(pp, busdev, 
PCIE_ATU_LOWER_TARGET); 538 595 dw_pcie_writel_rc(pp, 0, PCIE_ATU_UPPER_TARGET); ··· 547 604 dw_pcie_writel_rc(pp, PCIE_ATU_TYPE_MEM, PCIE_ATU_CR1); 548 605 dw_pcie_writel_rc(pp, pp->mem_mod_base, PCIE_ATU_LOWER_BASE); 549 606 dw_pcie_writel_rc(pp, (pp->mem_mod_base >> 32), PCIE_ATU_UPPER_BASE); 550 - dw_pcie_writel_rc(pp, pp->mem_mod_base + pp->config.mem_size - 1, 607 + dw_pcie_writel_rc(pp, pp->mem_mod_base + pp->mem_size - 1, 551 608 PCIE_ATU_LIMIT); 552 - dw_pcie_writel_rc(pp, pp->config.mem_bus_addr, PCIE_ATU_LOWER_TARGET); 553 - dw_pcie_writel_rc(pp, upper_32_bits(pp->config.mem_bus_addr), 609 + dw_pcie_writel_rc(pp, pp->mem_bus_addr, PCIE_ATU_LOWER_TARGET); 610 + dw_pcie_writel_rc(pp, upper_32_bits(pp->mem_bus_addr), 554 611 PCIE_ATU_UPPER_TARGET); 555 612 dw_pcie_writel_rc(pp, PCIE_ATU_ENABLE, PCIE_ATU_CR2); 556 613 } ··· 563 620 dw_pcie_writel_rc(pp, PCIE_ATU_TYPE_IO, PCIE_ATU_CR1); 564 621 dw_pcie_writel_rc(pp, pp->io_mod_base, PCIE_ATU_LOWER_BASE); 565 622 dw_pcie_writel_rc(pp, (pp->io_mod_base >> 32), PCIE_ATU_UPPER_BASE); 566 - dw_pcie_writel_rc(pp, pp->io_mod_base + pp->config.io_size - 1, 623 + dw_pcie_writel_rc(pp, pp->io_mod_base + pp->io_size - 1, 567 624 PCIE_ATU_LIMIT); 568 - dw_pcie_writel_rc(pp, pp->config.io_bus_addr, PCIE_ATU_LOWER_TARGET); 569 - dw_pcie_writel_rc(pp, upper_32_bits(pp->config.io_bus_addr), 625 + dw_pcie_writel_rc(pp, pp->io_bus_addr, PCIE_ATU_LOWER_TARGET); 626 + dw_pcie_writel_rc(pp, upper_32_bits(pp->io_bus_addr), 570 627 PCIE_ATU_UPPER_TARGET); 571 628 dw_pcie_writel_rc(pp, PCIE_ATU_ENABLE, PCIE_ATU_CR2); 572 629 } ··· 650 707 struct pcie_port *pp = sys_to_pcie(bus->sysdata); 651 708 int ret; 652 709 653 - if (!pp) { 654 - BUG(); 655 - return -EINVAL; 656 - } 657 - 658 710 if (dw_pcie_valid_config(pp, bus, PCI_SLOT(devfn)) == 0) { 659 711 *val = 0xffffffff; 660 712 return PCIBIOS_DEVICE_NOT_FOUND; ··· 673 735 { 674 736 struct pcie_port *pp = sys_to_pcie(bus->sysdata); 675 737 int ret; 676 - 677 - if (!pp) { 678 - BUG(); 
679 - return -EINVAL; 680 - } 681 738 682 739 if (dw_pcie_valid_config(pp, bus, PCI_SLOT(devfn)) == 0) 683 740 return PCIBIOS_DEVICE_NOT_FOUND; ··· 701 768 702 769 pp = sys_to_pcie(sys); 703 770 704 - if (!pp) 705 - return 0; 706 - 707 - if (global_io_offset < SZ_1M && pp->config.io_size > 0) { 708 - sys->io_offset = global_io_offset - pp->config.io_bus_addr; 771 + if (global_io_offset < SZ_1M && pp->io_size > 0) { 772 + sys->io_offset = global_io_offset - pp->io_bus_addr; 709 773 pci_ioremap_io(global_io_offset, pp->io_base); 710 774 global_io_offset += SZ_64K; 711 775 pci_add_resource_offset(&sys->resources, &pp->io, 712 776 sys->io_offset); 713 777 } 714 778 715 - sys->mem_offset = pp->mem.start - pp->config.mem_bus_addr; 779 + sys->mem_offset = pp->mem.start - pp->mem_bus_addr; 716 780 pci_add_resource_offset(&sys->resources, &pp->mem, sys->mem_offset); 781 + pci_add_resource(&sys->resources, &pp->busn); 717 782 718 783 return 1; 719 784 } ··· 721 790 struct pci_bus *bus; 722 791 struct pcie_port *pp = sys_to_pcie(sys); 723 792 724 - if (pp) { 725 - pp->root_bus_nr = sys->busnr; 726 - bus = pci_scan_root_bus(pp->dev, sys->busnr, &dw_pcie_ops, 727 - sys, &sys->resources); 728 - } else { 729 - bus = NULL; 730 - BUG(); 731 - } 793 + pp->root_bus_nr = sys->busnr; 794 + bus = pci_create_root_bus(pp->dev, sys->busnr, 795 + &dw_pcie_ops, sys, &sys->resources); 796 + if (!bus) 797 + return NULL; 798 + 799 + pci_scan_child_bus(bus); 800 + 801 + if (bus && pp->ops->scan_bus) 802 + pp->ops->scan_bus(pp); 732 803 733 804 return bus; 734 805 } ··· 766 833 767 834 void dw_pcie_setup_rc(struct pcie_port *pp) 768 835 { 769 - struct pcie_port_info *config = &pp->config; 770 836 u32 val; 771 837 u32 membase; 772 838 u32 memlimit; ··· 820 888 821 889 /* setup memory base, memory limit */ 822 890 membase = ((u32)pp->mem_base & 0xfff00000) >> 16; 823 - memlimit = (config->mem_size + (u32)pp->mem_base) & 0xfff00000; 891 + memlimit = (pp->mem_size + (u32)pp->mem_base) & 0xfff00000; 
824 892 val = memlimit | membase; 825 893 dw_pcie_writel_rc(pp, val, PCI_MEMORY_BASE); 826 894
+11 -11
drivers/pci/host/pcie-designware.h
··· 14 14 #ifndef _PCIE_DESIGNWARE_H
15 15 #define _PCIE_DESIGNWARE_H
16 16
17 - struct pcie_port_info {
18 - 	u32		cfg0_size;
19 - 	u32		cfg1_size;
20 - 	u32		io_size;
21 - 	u32		mem_size;
22 - 	phys_addr_t	io_bus_addr;
23 - 	phys_addr_t	mem_bus_addr;
24 - };
25 -
26 17 /*
27 18  * Maximum number of MSI IRQs can be 256 per controller. But keep
28 19  * it 32 as of now. Probably we will never need more than 32. If needed,
···
29 38 	u64			cfg0_base;
30 39 	u64			cfg0_mod_base;
31 40 	void __iomem		*va_cfg0_base;
41 + 	u32			cfg0_size;
32 42 	u64			cfg1_base;
33 43 	u64			cfg1_mod_base;
34 44 	void __iomem		*va_cfg1_base;
45 + 	u32			cfg1_size;
35 46 	u64			io_base;
36 47 	u64			io_mod_base;
48 + 	phys_addr_t		io_bus_addr;
49 + 	u32			io_size;
37 50 	u64			mem_base;
38 51 	u64			mem_mod_base;
52 + 	phys_addr_t		mem_bus_addr;
53 + 	u32			mem_size;
39 54 	struct resource		cfg;
40 55 	struct resource		io;
41 56 	struct resource		mem;
42 - 	struct pcie_port_info	config;
57 + 	struct resource		busn;
43 58 	int			irq;
44 59 	u32			lanes;
45 60 	struct pcie_host_ops	*ops;
···
70 73 	void (*host_init)(struct pcie_port *pp);
71 74 	void (*msi_set_irq)(struct pcie_port *pp, int irq);
72 75 	void (*msi_clear_irq)(struct pcie_port *pp, int irq);
73 - 	u32 (*get_msi_data)(struct pcie_port *pp);
76 + 	u32 (*get_msi_addr)(struct pcie_port *pp);
77 + 	u32 (*get_msi_data)(struct pcie_port *pp, int pos);
78 + 	void (*scan_bus)(struct pcie_port *pp);
79 + 	int (*msi_host_init)(struct pcie_port *pp, struct msi_chip *chip);
74 80 };
75 81
76 82 int dw_pcie_cfg_read(void __iomem *addr, int where, int size, u32 *val);
+15 -6
drivers/pci/host/pcie-rcar.c
··· 323 323 324 324 /* Setup PCIe address space mappings for each resource */ 325 325 resource_size_t size; 326 + resource_size_t res_start; 326 327 u32 mask; 327 328 328 329 rcar_pci_write_reg(pcie, 0x00000000, PCIEPTCTLR(win)); ··· 336 335 mask = (roundup_pow_of_two(size) / SZ_128) - 1; 337 336 rcar_pci_write_reg(pcie, mask << 7, PCIEPAMR(win)); 338 337 339 - rcar_pci_write_reg(pcie, upper_32_bits(res->start), PCIEPARH(win)); 340 - rcar_pci_write_reg(pcie, lower_32_bits(res->start), PCIEPARL(win)); 338 + if (res->flags & IORESOURCE_IO) 339 + res_start = pci_pio_to_address(res->start); 340 + else 341 + res_start = res->start; 342 + 343 + rcar_pci_write_reg(pcie, upper_32_bits(res_start), PCIEPARH(win)); 344 + rcar_pci_write_reg(pcie, lower_32_bits(res_start), PCIEPARL(win)); 341 345 342 346 /* First resource is for IO */ 343 347 mask = PAR_ENABLE; ··· 369 363 370 364 rcar_pcie_setup_window(i, pcie); 371 365 372 - if (res->flags & IORESOURCE_IO) 373 - pci_ioremap_io(nr * SZ_64K, res->start); 374 - else 366 + if (res->flags & IORESOURCE_IO) { 367 + phys_addr_t io_start = pci_pio_to_address(res->start); 368 + pci_ioremap_io(nr * SZ_64K, io_start); 369 + } else 375 370 pci_add_resource(&sys->resources, res); 376 371 } 377 372 pci_add_resource(&sys->resources, &pcie->busn); ··· 942 935 } 943 936 944 937 for_each_of_pci_range(&parser, &range) { 945 - of_pci_range_to_resource(&range, pdev->dev.of_node, 938 + err = of_pci_range_to_resource(&range, pdev->dev.of_node, 946 939 &pcie->res[win++]); 940 + if (err < 0) 941 + return err; 947 942 948 943 if (win > RCAR_PCI_MAX_RESOURCES) 949 944 break;
+1 -1
drivers/pci/host/pcie-spear13xx.c
··· 340 340
341 341 	pp->dev = dev;
342 342
343 - 	dbi_base = platform_get_resource(pdev, IORESOURCE_MEM, 0);
343 + 	dbi_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
344 344 	pp->dbi_base = devm_ioremap_resource(dev, dbi_base);
345 345 	if (IS_ERR(pp->dbi_base)) {
346 346 		dev_err(dev, "couldn't remap dbi base %p\n", dbi_base);
+970
drivers/pci/host/pcie-xilinx.c
··· 1 + /* 2 + * PCIe host controller driver for Xilinx AXI PCIe Bridge 3 + * 4 + * Copyright (c) 2012 - 2014 Xilinx, Inc. 5 + * 6 + * Based on the Tegra PCIe driver 7 + * 8 + * Bits taken from Synopsys Designware Host controller driver and 9 + * ARM PCI Host generic driver. 10 + * 11 + * This program is free software: you can redistribute it and/or modify 12 + * it under the terms of the GNU General Public License as published by 13 + * the Free Software Foundation, either version 2 of the License, or 14 + * (at your option) any later version. 15 + */ 16 + 17 + #include <linux/interrupt.h> 18 + #include <linux/irq.h> 19 + #include <linux/irqdomain.h> 20 + #include <linux/kernel.h> 21 + #include <linux/module.h> 22 + #include <linux/msi.h> 23 + #include <linux/of_address.h> 24 + #include <linux/of_pci.h> 25 + #include <linux/of_platform.h> 26 + #include <linux/of_irq.h> 27 + #include <linux/pci.h> 28 + #include <linux/platform_device.h> 29 + 30 + /* Register definitions */ 31 + #define XILINX_PCIE_REG_BIR 0x00000130 32 + #define XILINX_PCIE_REG_IDR 0x00000138 33 + #define XILINX_PCIE_REG_IMR 0x0000013c 34 + #define XILINX_PCIE_REG_PSCR 0x00000144 35 + #define XILINX_PCIE_REG_RPSC 0x00000148 36 + #define XILINX_PCIE_REG_MSIBASE1 0x0000014c 37 + #define XILINX_PCIE_REG_MSIBASE2 0x00000150 38 + #define XILINX_PCIE_REG_RPEFR 0x00000154 39 + #define XILINX_PCIE_REG_RPIFR1 0x00000158 40 + #define XILINX_PCIE_REG_RPIFR2 0x0000015c 41 + 42 + /* Interrupt registers definitions */ 43 + #define XILINX_PCIE_INTR_LINK_DOWN BIT(0) 44 + #define XILINX_PCIE_INTR_ECRC_ERR BIT(1) 45 + #define XILINX_PCIE_INTR_STR_ERR BIT(2) 46 + #define XILINX_PCIE_INTR_HOT_RESET BIT(3) 47 + #define XILINX_PCIE_INTR_CFG_TIMEOUT BIT(8) 48 + #define XILINX_PCIE_INTR_CORRECTABLE BIT(9) 49 + #define XILINX_PCIE_INTR_NONFATAL BIT(10) 50 + #define XILINX_PCIE_INTR_FATAL BIT(11) 51 + #define XILINX_PCIE_INTR_INTX BIT(16) 52 + #define XILINX_PCIE_INTR_MSI BIT(17) 53 + #define XILINX_PCIE_INTR_SLV_UNSUPP 
BIT(20) 54 + #define XILINX_PCIE_INTR_SLV_UNEXP BIT(21) 55 + #define XILINX_PCIE_INTR_SLV_COMPL BIT(22) 56 + #define XILINX_PCIE_INTR_SLV_ERRP BIT(23) 57 + #define XILINX_PCIE_INTR_SLV_CMPABT BIT(24) 58 + #define XILINX_PCIE_INTR_SLV_ILLBUR BIT(25) 59 + #define XILINX_PCIE_INTR_MST_DECERR BIT(26) 60 + #define XILINX_PCIE_INTR_MST_SLVERR BIT(27) 61 + #define XILINX_PCIE_INTR_MST_ERRP BIT(28) 62 + #define XILINX_PCIE_IMR_ALL_MASK 0x1FF30FED 63 + #define XILINX_PCIE_IDR_ALL_MASK 0xFFFFFFFF 64 + 65 + /* Root Port Error FIFO Read Register definitions */ 66 + #define XILINX_PCIE_RPEFR_ERR_VALID BIT(18) 67 + #define XILINX_PCIE_RPEFR_REQ_ID GENMASK(15, 0) 68 + #define XILINX_PCIE_RPEFR_ALL_MASK 0xFFFFFFFF 69 + 70 + /* Root Port Interrupt FIFO Read Register 1 definitions */ 71 + #define XILINX_PCIE_RPIFR1_INTR_VALID BIT(31) 72 + #define XILINX_PCIE_RPIFR1_MSI_INTR BIT(30) 73 + #define XILINX_PCIE_RPIFR1_INTR_MASK GENMASK(28, 27) 74 + #define XILINX_PCIE_RPIFR1_ALL_MASK 0xFFFFFFFF 75 + #define XILINX_PCIE_RPIFR1_INTR_SHIFT 27 76 + 77 + /* Bridge Info Register definitions */ 78 + #define XILINX_PCIE_BIR_ECAM_SZ_MASK GENMASK(18, 16) 79 + #define XILINX_PCIE_BIR_ECAM_SZ_SHIFT 16 80 + 81 + /* Root Port Interrupt FIFO Read Register 2 definitions */ 82 + #define XILINX_PCIE_RPIFR2_MSG_DATA GENMASK(15, 0) 83 + 84 + /* Root Port Status/control Register definitions */ 85 + #define XILINX_PCIE_REG_RPSC_BEN BIT(0) 86 + 87 + /* Phy Status/Control Register definitions */ 88 + #define XILINX_PCIE_REG_PSCR_LNKUP BIT(11) 89 + 90 + /* ECAM definitions */ 91 + #define ECAM_BUS_NUM_SHIFT 20 92 + #define ECAM_DEV_NUM_SHIFT 12 93 + 94 + /* Number of MSI IRQs */ 95 + #define XILINX_NUM_MSI_IRQS 128 96 + 97 + /* Number of Memory Resources */ 98 + #define XILINX_MAX_NUM_RESOURCES 3 99 + 100 + /** 101 + * struct xilinx_pcie_port - PCIe port information 102 + * @reg_base: IO Mapped Register Base 103 + * @irq: Interrupt number 104 + * @msi_pages: MSI pages 105 + * @root_busno: Root Bus number 106 + * 
+ * @dev: Device pointer
+ * @irq_domain: IRQ domain pointer
+ * @bus_range: Bus range
+ * @resources: Bus Resources
+ */
+struct xilinx_pcie_port {
+	void __iomem *reg_base;
+	u32 irq;
+	unsigned long msi_pages;
+	u8 root_busno;
+	struct device *dev;
+	struct irq_domain *irq_domain;
+	struct resource bus_range;
+	struct list_head resources;
+};
+
+static DECLARE_BITMAP(msi_irq_in_use, XILINX_NUM_MSI_IRQS);
+
+static inline struct xilinx_pcie_port *sys_to_pcie(struct pci_sys_data *sys)
+{
+	return sys->private_data;
+}
+
+static inline u32 pcie_read(struct xilinx_pcie_port *port, u32 reg)
+{
+	return readl(port->reg_base + reg);
+}
+
+static inline void pcie_write(struct xilinx_pcie_port *port, u32 val, u32 reg)
+{
+	writel(val, port->reg_base + reg);
+}
+
+static inline bool xilinx_pcie_link_is_up(struct xilinx_pcie_port *port)
+{
+	return (pcie_read(port, XILINX_PCIE_REG_PSCR) &
+		XILINX_PCIE_REG_PSCR_LNKUP) ? 1 : 0;
+}
+
+/**
+ * xilinx_pcie_clear_err_interrupts - Clear Error Interrupts
+ * @port: PCIe port information
+ */
+static void xilinx_pcie_clear_err_interrupts(struct xilinx_pcie_port *port)
+{
+	u32 val = pcie_read(port, XILINX_PCIE_REG_RPEFR);
+
+	if (val & XILINX_PCIE_RPEFR_ERR_VALID) {
+		dev_dbg(port->dev, "Requester ID %d\n",
+			val & XILINX_PCIE_RPEFR_REQ_ID);
+		pcie_write(port, XILINX_PCIE_RPEFR_ALL_MASK,
+			   XILINX_PCIE_REG_RPEFR);
+	}
+}
+
+/**
+ * xilinx_pcie_valid_device - Check if a valid device is present on bus
+ * @bus: PCI Bus structure
+ * @devfn: device/function
+ *
+ * Return: 'true' on success and 'false' if invalid device is found
+ */
+static bool xilinx_pcie_valid_device(struct pci_bus *bus, unsigned int devfn)
+{
+	struct xilinx_pcie_port *port = sys_to_pcie(bus->sysdata);
+
+	/* Check if link is up when trying to access downstream ports */
+	if (bus->number != port->root_busno)
+		if (!xilinx_pcie_link_is_up(port))
+			return false;
+
+	/* Only one device down on each root port */
+	if (bus->number == port->root_busno && devfn > 0)
+		return false;
+
+	/*
+	 * Do not read more than one device on the bus directly attached
+	 * to RC.
+	 */
+	if (bus->primary == port->root_busno && devfn > 0)
+		return false;
+
+	return true;
+}
+
+/**
+ * xilinx_pcie_config_base - Get configuration base
+ * @bus: PCI Bus structure
+ * @devfn: Device/function
+ * @where: Offset from base
+ *
+ * Return: Base address of the configuration space needed to be
+ *	   accessed.
+ */
+static void __iomem *xilinx_pcie_config_base(struct pci_bus *bus,
+					     unsigned int devfn, int where)
+{
+	struct xilinx_pcie_port *port = sys_to_pcie(bus->sysdata);
+	int relbus;
+
+	relbus = (bus->number << ECAM_BUS_NUM_SHIFT) |
+		 (devfn << ECAM_DEV_NUM_SHIFT);
+
+	return port->reg_base + relbus + where;
+}
+
+/**
+ * xilinx_pcie_read_config - Read configuration space
+ * @bus: PCI Bus structure
+ * @devfn: Device/function
+ * @where: Offset from base
+ * @size: Byte/word/dword
+ * @val: Value to be read
+ *
+ * Return: PCIBIOS_SUCCESSFUL on success
+ *	   PCIBIOS_DEVICE_NOT_FOUND on failure
+ */
+static int xilinx_pcie_read_config(struct pci_bus *bus, unsigned int devfn,
+				   int where, int size, u32 *val)
+{
+	void __iomem *addr;
+
+	if (!xilinx_pcie_valid_device(bus, devfn)) {
+		*val = 0xFFFFFFFF;
+		return PCIBIOS_DEVICE_NOT_FOUND;
+	}
+
+	addr = xilinx_pcie_config_base(bus, devfn, where);
+
+	switch (size) {
+	case 1:
+		*val = readb(addr);
+		break;
+	case 2:
+		*val = readw(addr);
+		break;
+	default:
+		*val = readl(addr);
+		break;
+	}
+
+	return PCIBIOS_SUCCESSFUL;
+}
+
+/**
+ * xilinx_pcie_write_config - Write configuration space
+ * @bus: PCI Bus structure
+ * @devfn: Device/function
+ * @where: Offset from base
+ * @size: Byte/word/dword
+ * @val: Value to be written to device
+ *
+ * Return: PCIBIOS_SUCCESSFUL on success
+ *	   PCIBIOS_DEVICE_NOT_FOUND on failure
+ */
+static int xilinx_pcie_write_config(struct pci_bus *bus, unsigned int devfn,
+				    int where, int size, u32 val)
+{
+	void __iomem *addr;
+
+	if (!xilinx_pcie_valid_device(bus, devfn))
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	addr = xilinx_pcie_config_base(bus, devfn, where);
+
+	switch (size) {
+	case 1:
+		writeb(val, addr);
+		break;
+	case 2:
+		writew(val, addr);
+		break;
+	default:
+		writel(val, addr);
+		break;
+	}
+
+	return PCIBIOS_SUCCESSFUL;
+}
+
+/* PCIe operations */
+static struct pci_ops xilinx_pcie_ops = {
+	.read  = xilinx_pcie_read_config,
+	.write = xilinx_pcie_write_config,
+};
+
+/* MSI functions */
+
+/**
+ * xilinx_pcie_destroy_msi - Free MSI number
+ * @irq: IRQ to be freed
+ */
+static void xilinx_pcie_destroy_msi(unsigned int irq)
+{
+	struct irq_desc *desc;
+	struct msi_desc *msi;
+	struct xilinx_pcie_port *port;
+
+	desc = irq_to_desc(irq);
+	msi = irq_desc_get_msi_desc(desc);
+	port = sys_to_pcie(msi->dev->bus->sysdata);
+
+	if (!test_bit(irq, msi_irq_in_use))
+		dev_err(port->dev, "Trying to free unused MSI#%d\n", irq);
+	else
+		clear_bit(irq, msi_irq_in_use);
+}
+
+/**
+ * xilinx_pcie_assign_msi - Allocate MSI number
+ * @port: PCIe port structure
+ *
+ * Return: A valid IRQ on success and error value on failure.
+ */
+static int xilinx_pcie_assign_msi(struct xilinx_pcie_port *port)
+{
+	int pos;
+
+	pos = find_first_zero_bit(msi_irq_in_use, XILINX_NUM_MSI_IRQS);
+	if (pos < XILINX_NUM_MSI_IRQS)
+		set_bit(pos, msi_irq_in_use);
+	else
+		return -ENOSPC;
+
+	return pos;
+}
+
+/**
+ * xilinx_msi_teardown_irq - Destroy the MSI
+ * @chip: MSI Chip descriptor
+ * @irq: MSI IRQ to destroy
+ */
+static void xilinx_msi_teardown_irq(struct msi_chip *chip, unsigned int irq)
+{
+	xilinx_pcie_destroy_msi(irq);
+}
+
+/**
+ * xilinx_pcie_msi_setup_irq - Setup MSI request
+ * @chip: MSI chip pointer
+ * @pdev: PCIe device pointer
+ * @desc: MSI descriptor pointer
+ *
+ * Return: '0' on success and error value on failure
+ */
+static int xilinx_pcie_msi_setup_irq(struct msi_chip *chip,
+				     struct pci_dev *pdev,
+				     struct msi_desc *desc)
+{
+	struct xilinx_pcie_port *port = sys_to_pcie(pdev->bus->sysdata);
+	unsigned int irq;
+	int hwirq;
+	struct msi_msg msg;
+	phys_addr_t msg_addr;
+
+	hwirq = xilinx_pcie_assign_msi(port);
+	if (hwirq < 0)
+		return hwirq;
+
+	irq = irq_create_mapping(port->irq_domain, hwirq);
+	if (!irq)
+		return -EINVAL;
+
+	irq_set_msi_desc(irq, desc);
+
+	msg_addr = virt_to_phys((void *)port->msi_pages);
+
+	msg.address_hi = 0;
+	msg.address_lo = msg_addr;
+	msg.data = irq;
+
+	write_msi_msg(irq, &msg);
+
+	return 0;
+}
+
+/* MSI Chip Descriptor */
+static struct msi_chip xilinx_pcie_msi_chip = {
+	.setup_irq = xilinx_pcie_msi_setup_irq,
+	.teardown_irq = xilinx_msi_teardown_irq,
+};
+
+/* HW Interrupt Chip Descriptor */
+static struct irq_chip xilinx_msi_irq_chip = {
+	.name = "Xilinx PCIe MSI",
+	.irq_enable = unmask_msi_irq,
+	.irq_disable = mask_msi_irq,
+	.irq_mask = mask_msi_irq,
+	.irq_unmask = unmask_msi_irq,
+};
+
+/**
+ * xilinx_pcie_msi_map - Set the handler for the MSI and mark IRQ as valid
+ * @domain: IRQ domain
+ * @irq: Virtual IRQ number
+ * @hwirq: HW interrupt number
+ *
+ * Return: Always returns 0.
+ */
+static int xilinx_pcie_msi_map(struct irq_domain *domain, unsigned int irq,
+			       irq_hw_number_t hwirq)
+{
+	irq_set_chip_and_handler(irq, &xilinx_msi_irq_chip, handle_simple_irq);
+	irq_set_chip_data(irq, domain->host_data);
+	set_irq_flags(irq, IRQF_VALID);
+
+	return 0;
+}
+
+/* IRQ Domain operations */
+static const struct irq_domain_ops msi_domain_ops = {
+	.map = xilinx_pcie_msi_map,
+};
+
+/**
+ * xilinx_pcie_enable_msi - Enable MSI support
+ * @port: PCIe port information
+ */
+static void xilinx_pcie_enable_msi(struct xilinx_pcie_port *port)
+{
+	phys_addr_t msg_addr;
+
+	port->msi_pages = __get_free_pages(GFP_KERNEL, 0);
+	msg_addr = virt_to_phys((void *)port->msi_pages);
+	pcie_write(port, 0x0, XILINX_PCIE_REG_MSIBASE1);
+	pcie_write(port, msg_addr, XILINX_PCIE_REG_MSIBASE2);
+}
+
+/**
+ * xilinx_pcie_add_bus - Add MSI chip info to PCIe bus
+ * @bus: PCIe bus
+ */
+static void xilinx_pcie_add_bus(struct pci_bus *bus)
+{
+	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+		struct xilinx_pcie_port *port = sys_to_pcie(bus->sysdata);
+
+		xilinx_pcie_msi_chip.dev = port->dev;
+		bus->msi = &xilinx_pcie_msi_chip;
+	}
+}
+
+/* INTx Functions */
+
+/**
+ * xilinx_pcie_intx_map - Set the handler for the INTx and mark IRQ as valid
+ * @domain: IRQ domain
+ * @irq: Virtual IRQ number
+ * @hwirq: HW interrupt number
+ *
+ * Return: Always returns 0.
+ */
+static int xilinx_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
+				irq_hw_number_t hwirq)
+{
+	irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_simple_irq);
+	irq_set_chip_data(irq, domain->host_data);
+	set_irq_flags(irq, IRQF_VALID);
+
+	return 0;
+}
+
+/* INTx IRQ Domain operations */
+static const struct irq_domain_ops intx_domain_ops = {
+	.map = xilinx_pcie_intx_map,
+};
+
+/* PCIe HW Functions */
+
+/**
+ * xilinx_pcie_intr_handler - Interrupt Service Handler
+ * @irq: IRQ number
+ * @data: PCIe port information
+ *
+ * Return: IRQ_HANDLED on success and IRQ_NONE on failure
+ */
+static irqreturn_t xilinx_pcie_intr_handler(int irq, void *data)
+{
+	struct xilinx_pcie_port *port = (struct xilinx_pcie_port *)data;
+	u32 val, mask, status, msi_data;
+
+	/* Read interrupt decode and mask registers */
+	val = pcie_read(port, XILINX_PCIE_REG_IDR);
+	mask = pcie_read(port, XILINX_PCIE_REG_IMR);
+
+	status = val & mask;
+	if (!status)
+		return IRQ_NONE;
+
+	if (status & XILINX_PCIE_INTR_LINK_DOWN)
+		dev_warn(port->dev, "Link Down\n");
+
+	if (status & XILINX_PCIE_INTR_ECRC_ERR)
+		dev_warn(port->dev, "ECRC failed\n");
+
+	if (status & XILINX_PCIE_INTR_STR_ERR)
+		dev_warn(port->dev, "Streaming error\n");
+
+	if (status & XILINX_PCIE_INTR_HOT_RESET)
+		dev_info(port->dev, "Hot reset\n");
+
+	if (status & XILINX_PCIE_INTR_CFG_TIMEOUT)
+		dev_warn(port->dev, "ECAM access timeout\n");
+
+	if (status & XILINX_PCIE_INTR_CORRECTABLE) {
+		dev_warn(port->dev, "Correctable error message\n");
+		xilinx_pcie_clear_err_interrupts(port);
+	}
+
+	if (status & XILINX_PCIE_INTR_NONFATAL) {
+		dev_warn(port->dev, "Non fatal error message\n");
+		xilinx_pcie_clear_err_interrupts(port);
+	}
+
+	if (status & XILINX_PCIE_INTR_FATAL) {
+		dev_warn(port->dev, "Fatal error message\n");
+		xilinx_pcie_clear_err_interrupts(port);
+	}
+
+	if (status & XILINX_PCIE_INTR_INTX) {
+		/* INTx interrupt received */
+		val = pcie_read(port, XILINX_PCIE_REG_RPIFR1);
+
+		/* Check whether interrupt valid */
+		if (!(val & XILINX_PCIE_RPIFR1_INTR_VALID)) {
+			dev_warn(port->dev, "RP Intr FIFO1 read error\n");
+			return IRQ_HANDLED;
+		}
+
+		/* Clear interrupt FIFO register 1 */
+		pcie_write(port, XILINX_PCIE_RPIFR1_ALL_MASK,
+			   XILINX_PCIE_REG_RPIFR1);
+
+		/* Handle INTx Interrupt */
+		val = ((val & XILINX_PCIE_RPIFR1_INTR_MASK) >>
+		       XILINX_PCIE_RPIFR1_INTR_SHIFT) + 1;
+		generic_handle_irq(irq_find_mapping(port->irq_domain, val));
+	}
+
+	if (status & XILINX_PCIE_INTR_MSI) {
+		/* MSI Interrupt */
+		val = pcie_read(port, XILINX_PCIE_REG_RPIFR1);
+
+		if (!(val & XILINX_PCIE_RPIFR1_INTR_VALID)) {
+			dev_warn(port->dev, "RP Intr FIFO1 read error\n");
+			return IRQ_HANDLED;
+		}
+
+		if (val & XILINX_PCIE_RPIFR1_MSI_INTR) {
+			msi_data = pcie_read(port, XILINX_PCIE_REG_RPIFR2) &
+				   XILINX_PCIE_RPIFR2_MSG_DATA;
+
+			/* Clear interrupt FIFO register 1 */
+			pcie_write(port, XILINX_PCIE_RPIFR1_ALL_MASK,
+				   XILINX_PCIE_REG_RPIFR1);
+
+			if (IS_ENABLED(CONFIG_PCI_MSI)) {
+				/* Handle MSI Interrupt */
+				generic_handle_irq(msi_data);
+			}
+		}
+	}
+
+	if (status & XILINX_PCIE_INTR_SLV_UNSUPP)
+		dev_warn(port->dev, "Slave unsupported request\n");
+
+	if (status & XILINX_PCIE_INTR_SLV_UNEXP)
+		dev_warn(port->dev, "Slave unexpected completion\n");
+
+	if (status & XILINX_PCIE_INTR_SLV_COMPL)
+		dev_warn(port->dev, "Slave completion timeout\n");
+
+	if (status & XILINX_PCIE_INTR_SLV_ERRP)
+		dev_warn(port->dev, "Slave Error Poison\n");
+
+	if (status & XILINX_PCIE_INTR_SLV_CMPABT)
+		dev_warn(port->dev, "Slave Completer Abort\n");
+
+	if (status & XILINX_PCIE_INTR_SLV_ILLBUR)
+		dev_warn(port->dev, "Slave Illegal Burst\n");
+
+	if (status & XILINX_PCIE_INTR_MST_DECERR)
+		dev_warn(port->dev, "Master decode error\n");
+
+	if (status & XILINX_PCIE_INTR_MST_SLVERR)
+		dev_warn(port->dev, "Master slave error\n");
+
+	if (status & XILINX_PCIE_INTR_MST_ERRP)
+		dev_warn(port->dev, "Master error poison\n");
+
+	/* Clear the Interrupt Decode register */
+	pcie_write(port, status, XILINX_PCIE_REG_IDR);
+
+	return IRQ_HANDLED;
+}
+
+/**
+ * xilinx_pcie_free_irq_domain - Free IRQ domain
+ * @port: PCIe port information
+ */
+static void xilinx_pcie_free_irq_domain(struct xilinx_pcie_port *port)
+{
+	int i;
+	u32 irq, num_irqs;
+
+	/* Free IRQ Domain */
+	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+
+		free_pages(port->msi_pages, 0);
+
+		num_irqs = XILINX_NUM_MSI_IRQS;
+	} else {
+		/* INTx */
+		num_irqs = 4;
+	}
+
+	for (i = 0; i < num_irqs; i++) {
+		irq = irq_find_mapping(port->irq_domain, i);
+		if (irq > 0)
+			irq_dispose_mapping(irq);
+	}
+
+	irq_domain_remove(port->irq_domain);
+}
+
+/**
+ * xilinx_pcie_init_irq_domain - Initialize IRQ domain
+ * @port: PCIe port information
+ *
+ * Return: '0' on success and error value on failure
+ */
+static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
+{
+	struct device *dev = port->dev;
+	struct device_node *node = dev->of_node;
+	struct device_node *pcie_intc_node;
+
+	/* Setup INTx */
+	pcie_intc_node = of_get_next_child(node, NULL);
+	if (!pcie_intc_node) {
+		dev_err(dev, "No PCIe Intc node found\n");
+		return PTR_ERR(pcie_intc_node);
+	}
+
+	port->irq_domain = irq_domain_add_linear(pcie_intc_node, 4,
+						 &intx_domain_ops,
+						 port);
+	if (!port->irq_domain) {
+		dev_err(dev, "Failed to get a INTx IRQ domain\n");
+		return PTR_ERR(port->irq_domain);
+	}
+
+	/* Setup MSI */
+	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+		port->irq_domain = irq_domain_add_linear(node,
+							 XILINX_NUM_MSI_IRQS,
+							 &msi_domain_ops,
+							 &xilinx_pcie_msi_chip);
+		if (!port->irq_domain) {
+			dev_err(dev, "Failed to get a MSI IRQ domain\n");
+			return PTR_ERR(port->irq_domain);
+		}
+
+		xilinx_pcie_enable_msi(port);
+	}
+
+	return 0;
+}
+
+/**
+ * xilinx_pcie_init_port - Initialize hardware
+ * @port: PCIe port information
+ */
+static void xilinx_pcie_init_port(struct xilinx_pcie_port *port)
+{
+	if (xilinx_pcie_link_is_up(port))
+		dev_info(port->dev, "PCIe Link is UP\n");
+	else
+		dev_info(port->dev, "PCIe Link is DOWN\n");
+
+	/* Disable all interrupts */
+	pcie_write(port, ~XILINX_PCIE_IDR_ALL_MASK,
+		   XILINX_PCIE_REG_IMR);
+
+	/* Clear pending interrupts */
+	pcie_write(port, pcie_read(port, XILINX_PCIE_REG_IDR) &
+			 XILINX_PCIE_IMR_ALL_MASK,
+		   XILINX_PCIE_REG_IDR);
+
+	/* Enable all interrupts */
+	pcie_write(port, XILINX_PCIE_IMR_ALL_MASK, XILINX_PCIE_REG_IMR);
+
+	/* Enable the Bridge enable bit */
+	pcie_write(port, pcie_read(port, XILINX_PCIE_REG_RPSC) |
+			 XILINX_PCIE_REG_RPSC_BEN,
+		   XILINX_PCIE_REG_RPSC);
+}
+
+/**
+ * xilinx_pcie_setup - Setup memory resources
+ * @nr: Bus number
+ * @sys: Per controller structure
+ *
+ * Return: '1' on success and error value on failure
+ */
+static int xilinx_pcie_setup(int nr, struct pci_sys_data *sys)
+{
+	struct xilinx_pcie_port *port = sys_to_pcie(sys);
+
+	list_splice_init(&port->resources, &sys->resources);
+
+	return 1;
+}
+
+/**
+ * xilinx_pcie_scan_bus - Scan PCIe bus for devices
+ * @nr: Bus number
+ * @sys: Per controller structure
+ *
+ * Return: Valid Bus pointer on success and NULL on failure
+ */
+static struct pci_bus *xilinx_pcie_scan_bus(int nr, struct pci_sys_data *sys)
+{
+	struct xilinx_pcie_port *port = sys_to_pcie(sys);
+	struct pci_bus *bus;
+
+	port->root_busno = sys->busnr;
+	bus = pci_scan_root_bus(port->dev, sys->busnr, &xilinx_pcie_ops,
+				sys, &sys->resources);
+
+	return bus;
+}
+
+/**
+ * xilinx_pcie_parse_and_add_res - Add resources by parsing ranges
+ * @port: PCIe port information
+ *
+ * Return: '0' on success and error value on failure
+ */
+static int xilinx_pcie_parse_and_add_res(struct xilinx_pcie_port *port)
+{
+	struct device *dev = port->dev;
+	struct device_node *node = dev->of_node;
+	struct resource *mem;
+	resource_size_t offset;
+	struct of_pci_range_parser parser;
+	struct of_pci_range range;
+	struct pci_host_bridge_window *win;
+	int err = 0, mem_resno = 0;
+
+	/* Get the ranges */
+	if (of_pci_range_parser_init(&parser, node)) {
+		dev_err(dev, "missing \"ranges\" property\n");
+		return -EINVAL;
+	}
+
+	/* Parse the ranges and add the resources found to the list */
+	for_each_of_pci_range(&parser, &range) {
+
+		if (mem_resno >= XILINX_MAX_NUM_RESOURCES) {
+			dev_err(dev, "Maximum memory resources exceeded\n");
+			return -EINVAL;
+		}
+
+		mem = devm_kmalloc(dev, sizeof(*mem), GFP_KERNEL);
+		if (!mem) {
+			err = -ENOMEM;
+			goto free_resources;
+		}
+
+		of_pci_range_to_resource(&range, node, mem);
+
+		switch (mem->flags & IORESOURCE_TYPE_BITS) {
+		case IORESOURCE_MEM:
+			offset = range.cpu_addr - range.pci_addr;
+			mem_resno++;
+			break;
+		default:
+			err = -EINVAL;
+			break;
+		}
+
+		if (err < 0) {
+			dev_warn(dev, "Invalid resource found %pR\n", mem);
+			continue;
+		}
+
+		err = request_resource(&iomem_resource, mem);
+		if (err)
+			goto free_resources;
+
+		pci_add_resource_offset(&port->resources, mem, offset);
+	}
+
+	/* Get the bus range */
+	if (of_pci_parse_bus_range(node, &port->bus_range)) {
+		u32 val = pcie_read(port, XILINX_PCIE_REG_BIR);
+		u8 last;
+
+		last = (val & XILINX_PCIE_BIR_ECAM_SZ_MASK) >>
+		       XILINX_PCIE_BIR_ECAM_SZ_SHIFT;
+
+		port->bus_range = (struct resource) {
+			.name  = node->name,
+			.start = 0,
+			.end   = last,
+			.flags = IORESOURCE_BUS,
+		};
+	}
+
+	/* Register bus resource */
+	pci_add_resource(&port->resources, &port->bus_range);
+
+	return 0;
+
+free_resources:
+	release_child_resources(&iomem_resource);
+	list_for_each_entry(win, &port->resources, list)
+		devm_kfree(dev, win->res);
+	pci_free_resource_list(&port->resources);
+
+	return err;
+}
+
+/**
+ * xilinx_pcie_parse_dt - Parse Device tree
+ * @port: PCIe port information
+ *
+ * Return: '0' on success and error value on failure
+ */
+static int xilinx_pcie_parse_dt(struct xilinx_pcie_port *port)
+{
+	struct device *dev = port->dev;
+	struct device_node *node = dev->of_node;
+	struct resource regs;
+	const char *type;
+	int err;
+
+	type = of_get_property(node, "device_type", NULL);
+	if (!type || strcmp(type, "pci")) {
+		dev_err(dev, "invalid \"device_type\" %s\n", type);
+		return -EINVAL;
+	}
+
+	err = of_address_to_resource(node, 0, &regs);
+	if (err) {
+		dev_err(dev, "missing \"reg\" property\n");
+		return err;
+	}
+
+	port->reg_base = devm_ioremap_resource(dev, &regs);
+	if (IS_ERR(port->reg_base))
+		return PTR_ERR(port->reg_base);
+
+	port->irq = irq_of_parse_and_map(node, 0);
+	err = devm_request_irq(dev, port->irq, xilinx_pcie_intr_handler,
+			       IRQF_SHARED, "xilinx-pcie", port);
+	if (err) {
+		dev_err(dev, "unable to request irq %d\n", port->irq);
+		return err;
+	}
+
+	return 0;
+}
+
+/**
+ * xilinx_pcie_probe - Probe function
+ * @pdev: Platform device pointer
+ *
+ * Return: '0' on success and error value on failure
+ */
+static int xilinx_pcie_probe(struct platform_device *pdev)
+{
+	struct xilinx_pcie_port *port;
+	struct hw_pci hw;
+	struct device *dev = &pdev->dev;
+	int err;
+
+	if (!dev->of_node)
+		return -ENODEV;
+
+	port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
+	if (!port)
+		return -ENOMEM;
+
+	port->dev = dev;
+
+	err = xilinx_pcie_parse_dt(port);
+	if (err) {
+		dev_err(dev, "Parsing DT failed\n");
+		return err;
+	}
+
+	xilinx_pcie_init_port(port);
+
+	err = xilinx_pcie_init_irq_domain(port);
+	if (err) {
+		dev_err(dev, "Failed creating IRQ Domain\n");
+		return err;
+	}
+
+	/*
+	 * Parse PCI ranges, configuration bus range and
+	 * request their resources
+	 */
+	INIT_LIST_HEAD(&port->resources);
+	err = xilinx_pcie_parse_and_add_res(port);
+	if (err) {
+		dev_err(dev, "Failed adding resources\n");
+		return err;
+	}
+
+	platform_set_drvdata(pdev, port);
+
+	/* Register the device */
+	memset(&hw, 0, sizeof(hw));
+	hw = (struct hw_pci) {
+		.nr_controllers	= 1,
+		.private_data	= (void **)&port,
+		.setup		= xilinx_pcie_setup,
+		.map_irq	= of_irq_parse_and_map_pci,
+		.add_bus	= xilinx_pcie_add_bus,
+		.scan		= xilinx_pcie_scan_bus,
+		.ops		= &xilinx_pcie_ops,
+	};
+	pci_common_init_dev(dev, &hw);
+
+	return 0;
+}
+
+/**
+ * xilinx_pcie_remove - Remove function
+ * @pdev: Platform device pointer
+ *
+ * Return: '0' always
+ */
+static int xilinx_pcie_remove(struct platform_device *pdev)
+{
+	struct xilinx_pcie_port *port = platform_get_drvdata(pdev);
+
+	xilinx_pcie_free_irq_domain(port);
+
+	return 0;
+}
+
+static struct of_device_id xilinx_pcie_of_match[] = {
+	{ .compatible = "xlnx,axi-pcie-host-1.00.a", },
+	{}
+};
+
+static struct platform_driver xilinx_pcie_driver = {
+	.driver = {
+		.name = "xilinx-pcie",
+		.owner = THIS_MODULE,
+		.of_match_table = xilinx_pcie_of_match,
+		.suppress_bind_attrs = true,
+	},
+	.probe = xilinx_pcie_probe,
+	.remove = xilinx_pcie_remove,
+};
+module_platform_driver(xilinx_pcie_driver);
+
+MODULE_AUTHOR("Xilinx Inc");
+MODULE_DESCRIPTION("Xilinx AXI PCIe driver");
+MODULE_LICENSE("GPL v2");
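For context, a device-tree node that would bind against this driver might look like the fragment below. Only the `xlnx,axi-pcie-host-1.00.a` compatible string, the `device_type = "pci"` check, the first `reg` entry, the `ranges` parsing, and the child interrupt-controller node are implied by the code above; the addresses, interrupt numbers, and window sizes are illustrative assumptions, not taken from the patch:

```dts
axi_pcie: axi-pcie@50000000 {
	/* Hypothetical example node for the driver above. */
	compatible = "xlnx,axi-pcie-host-1.00.a";
	device_type = "pci";
	#address-cells = <3>;
	#size-cells = <2>;
	reg = <0x50000000 0x1000000>;		/* ECAM region (assumed) */
	interrupts = <0 52 4>;			/* assumed routing */
	ranges = <0x02000000 0 0x60000000
		  0x60000000 0 0x10000000>;	/* 32-bit MEM window (assumed) */

	pcie_intc: interrupt-controller {
		/* First child node; consumed by of_get_next_child()
		 * in xilinx_pcie_init_irq_domain() for INTx. */
		interrupt-controller;
		#address-cells = <0>;
		#interrupt-cells = <1>;
	};
};
```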
+1 -1
drivers/pci/hotplug/Makefile
···
 
 obj-$(CONFIG_HOTPLUG_PCI_ACPI_IBM)	+= acpiphp_ibm.o
 
-pci_hotplug-objs	:=	pci_hotplug_core.o pcihp_slot.o
+pci_hotplug-objs	:=	pci_hotplug_core.o
 
 ifdef CONFIG_HOTPLUG_PCI_CPCI
 pci_hotplug-objs	+=	cpci_hotplug_core.o	\
+2 -252
drivers/pci/hotplug/acpi_pcihp.c
···
 
 static bool debug_acpi;
 
-static acpi_status
-decode_type0_hpx_record(union acpi_object *record, struct hotplug_params *hpx)
-{
-	int i;
-	union acpi_object *fields = record->package.elements;
-	u32 revision = fields[1].integer.value;
-
-	switch (revision) {
-	case 1:
-		if (record->package.count != 6)
-			return AE_ERROR;
-		for (i = 2; i < 6; i++)
-			if (fields[i].type != ACPI_TYPE_INTEGER)
-				return AE_ERROR;
-		hpx->t0 = &hpx->type0_data;
-		hpx->t0->revision = revision;
-		hpx->t0->cache_line_size = fields[2].integer.value;
-		hpx->t0->latency_timer = fields[3].integer.value;
-		hpx->t0->enable_serr = fields[4].integer.value;
-		hpx->t0->enable_perr = fields[5].integer.value;
-		break;
-	default:
-		printk(KERN_WARNING
-		       "%s: Type 0 Revision %d record not supported\n",
-		       __func__, revision);
-		return AE_ERROR;
-	}
-	return AE_OK;
-}
-
-static acpi_status
-decode_type1_hpx_record(union acpi_object *record, struct hotplug_params *hpx)
-{
-	int i;
-	union acpi_object *fields = record->package.elements;
-	u32 revision = fields[1].integer.value;
-
-	switch (revision) {
-	case 1:
-		if (record->package.count != 5)
-			return AE_ERROR;
-		for (i = 2; i < 5; i++)
-			if (fields[i].type != ACPI_TYPE_INTEGER)
-				return AE_ERROR;
-		hpx->t1 = &hpx->type1_data;
-		hpx->t1->revision = revision;
-		hpx->t1->max_mem_read = fields[2].integer.value;
-		hpx->t1->avg_max_split = fields[3].integer.value;
-		hpx->t1->tot_max_split = fields[4].integer.value;
-		break;
-	default:
-		printk(KERN_WARNING
-		       "%s: Type 1 Revision %d record not supported\n",
-		       __func__, revision);
-		return AE_ERROR;
-	}
-	return AE_OK;
-}
-
-static acpi_status
-decode_type2_hpx_record(union acpi_object *record, struct hotplug_params *hpx)
-{
-	int i;
-	union acpi_object *fields = record->package.elements;
-	u32 revision = fields[1].integer.value;
-
-	switch (revision) {
-	case 1:
-		if (record->package.count != 18)
-			return AE_ERROR;
-		for (i = 2; i < 18; i++)
-			if (fields[i].type != ACPI_TYPE_INTEGER)
-				return AE_ERROR;
-		hpx->t2 = &hpx->type2_data;
-		hpx->t2->revision = revision;
-		hpx->t2->unc_err_mask_and = fields[2].integer.value;
-		hpx->t2->unc_err_mask_or = fields[3].integer.value;
-		hpx->t2->unc_err_sever_and = fields[4].integer.value;
-		hpx->t2->unc_err_sever_or = fields[5].integer.value;
-		hpx->t2->cor_err_mask_and = fields[6].integer.value;
-		hpx->t2->cor_err_mask_or = fields[7].integer.value;
-		hpx->t2->adv_err_cap_and = fields[8].integer.value;
-		hpx->t2->adv_err_cap_or = fields[9].integer.value;
-		hpx->t2->pci_exp_devctl_and = fields[10].integer.value;
-		hpx->t2->pci_exp_devctl_or = fields[11].integer.value;
-		hpx->t2->pci_exp_lnkctl_and = fields[12].integer.value;
-		hpx->t2->pci_exp_lnkctl_or = fields[13].integer.value;
-		hpx->t2->sec_unc_err_sever_and = fields[14].integer.value;
-		hpx->t2->sec_unc_err_sever_or = fields[15].integer.value;
-		hpx->t2->sec_unc_err_mask_and = fields[16].integer.value;
-		hpx->t2->sec_unc_err_mask_or = fields[17].integer.value;
-		break;
-	default:
-		printk(KERN_WARNING
-		       "%s: Type 2 Revision %d record not supported\n",
-		       __func__, revision);
-		return AE_ERROR;
-	}
-	return AE_OK;
-}
-
-static acpi_status
-acpi_run_hpx(acpi_handle handle, struct hotplug_params *hpx)
-{
-	acpi_status status;
-	struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
-	union acpi_object *package, *record, *fields;
-	u32 type;
-	int i;
-
-	/* Clear the return buffer with zeros */
-	memset(hpx, 0, sizeof(struct hotplug_params));
-
-	status = acpi_evaluate_object(handle, "_HPX", NULL, &buffer);
-	if (ACPI_FAILURE(status))
-		return status;
-
-	package = (union acpi_object *)buffer.pointer;
-	if (package->type != ACPI_TYPE_PACKAGE) {
-		status = AE_ERROR;
-		goto exit;
-	}
-
-	for (i = 0; i < package->package.count; i++) {
-		record = &package->package.elements[i];
-		if (record->type != ACPI_TYPE_PACKAGE) {
-			status = AE_ERROR;
-			goto exit;
-		}
-
-		fields = record->package.elements;
-		if (fields[0].type != ACPI_TYPE_INTEGER ||
-		    fields[1].type != ACPI_TYPE_INTEGER) {
-			status = AE_ERROR;
-			goto exit;
-		}
-
-		type = fields[0].integer.value;
-		switch (type) {
-		case 0:
-			status = decode_type0_hpx_record(record, hpx);
-			if (ACPI_FAILURE(status))
-				goto exit;
-			break;
-		case 1:
-			status = decode_type1_hpx_record(record, hpx);
-			if (ACPI_FAILURE(status))
-				goto exit;
-			break;
-		case 2:
-			status = decode_type2_hpx_record(record, hpx);
-			if (ACPI_FAILURE(status))
-				goto exit;
-			break;
-		default:
-			printk(KERN_ERR "%s: Type %d record not supported\n",
-			       __func__, type);
-			status = AE_ERROR;
-			goto exit;
-		}
-	}
- exit:
-	kfree(buffer.pointer);
-	return status;
-}
-
-static acpi_status
-acpi_run_hpp(acpi_handle handle, struct hotplug_params *hpp)
-{
-	acpi_status status;
-	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
-	union acpi_object *package, *fields;
-	int i;
-
-	memset(hpp, 0, sizeof(struct hotplug_params));
-
-	status = acpi_evaluate_object(handle, "_HPP", NULL, &buffer);
-	if (ACPI_FAILURE(status))
-		return status;
-
-	package = (union acpi_object *) buffer.pointer;
-	if (package->type != ACPI_TYPE_PACKAGE ||
-	    package->package.count != 4) {
-		status = AE_ERROR;
-		goto exit;
-	}
-
-	fields = package->package.elements;
-	for (i = 0; i < 4; i++) {
-		if (fields[i].type != ACPI_TYPE_INTEGER) {
-			status = AE_ERROR;
-			goto exit;
-		}
-	}
-
-	hpp->t0 = &hpp->type0_data;
-	hpp->t0->revision = 1;
-	hpp->t0->cache_line_size = fields[0].integer.value;
-	hpp->t0->latency_timer = fields[1].integer.value;
-	hpp->t0->enable_serr = fields[2].integer.value;
-	hpp->t0->enable_perr = fields[3].integer.value;
-
-exit:
-	kfree(buffer.pointer);
-	return status;
-}
-
-
-
 /* acpi_run_oshp - get control of hotplug from the firmware
  *
  * @handle - the handle of the hotplug controller.
···
 	kfree(string.pointer);
 	return status;
 }
-
-/* pci_get_hp_params
- *
- * @dev - the pci_dev for which we want parameters
- * @hpp - allocated by the caller
- */
-int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp)
-{
-	acpi_status status;
-	acpi_handle handle, phandle;
-	struct pci_bus *pbus;
-
-	handle = NULL;
-	for (pbus = dev->bus; pbus; pbus = pbus->parent) {
-		handle = acpi_pci_get_bridge_handle(pbus);
-		if (handle)
-			break;
-	}
-
-	/*
-	 * _HPP settings apply to all child buses, until another _HPP is
-	 * encountered. If we don't find an _HPP for the input pci dev,
-	 * look for it in the parent device scope since that would apply to
-	 * this pci dev.
-	 */
-	while (handle) {
-		status = acpi_run_hpx(handle, hpp);
-		if (ACPI_SUCCESS(status))
-			return 0;
-		status = acpi_run_hpp(handle, hpp);
-		if (ACPI_SUCCESS(status))
-			return 0;
-		if (acpi_is_root_bridge(handle))
-			break;
-		status = acpi_get_parent(handle, &phandle);
-		if (ACPI_FAILURE(status))
-			break;
-		handle = phandle;
-	}
-	return -ENODEV;
-}
-EXPORT_SYMBOL_GPL(pci_get_hp_params);
 
 /**
  * acpi_get_hp_hw_control_from_firmware
···
 {
 	acpi_handle bridge_handle, parent_handle;
 
-	if (!(bridge_handle = acpi_pci_get_bridge_handle(pbus)))
+	bridge_handle = acpi_pci_get_bridge_handle(pbus);
+	if (!bridge_handle)
 		return 0;
 	if ((ACPI_FAILURE(acpi_get_parent(handle, &parent_handle))))
 		return 0;
+1 -10
drivers/pci/hotplug/acpiphp_glue.c
···
 static int acpiphp_hotplug_notify(struct acpi_device *adev, u32 type);
 static void acpiphp_post_dock_fixup(struct acpi_device *adev);
 static void acpiphp_sanitize_bus(struct pci_bus *bus);
-static void acpiphp_set_hpp_values(struct pci_bus *bus);
 static void hotplug_event(u32 type, struct acpiphp_context *context);
 static void free_bridge(struct kref *kref);
···
 		__pci_bus_assign_resources(bus, &add_list, NULL);
 
 	acpiphp_sanitize_bus(bus);
-	acpiphp_set_hpp_values(bus);
+	pcie_bus_configure_settings(bus);
 	acpiphp_set_acpi_region(slot);
 
 	list_for_each_entry(dev, &bus->devices, bus_list) {
···
 				disable_slot(slot);
 		}
 	}
-}
-
-static void acpiphp_set_hpp_values(struct pci_bus *bus)
-{
-	struct pci_dev *dev;
-
-	list_for_each_entry(dev, &bus->devices, bus_list)
-		pci_configure_slot(dev);
 }
 
 /*
+1 -1
drivers/pci/hotplug/acpiphp_ibm.c
···
 		goto read_table_done;
 	}
 
-	for(size = 0, i = 0; i < package->package.count; i++) {
+	for (size = 0, i = 0; i < package->package.count; i++) {
 		if (package->package.elements[i].type != ACPI_TYPE_BUFFER) {
 			pr_err("%s: Invalid APCI element %d\n", __func__, i);
 			goto read_table_done;
+8 -5
drivers/pci/hotplug/cpci_hotplug_core.c
···
 
 	/* Unconfigure device */
 	dbg("%s - unconfiguring slot %s", __func__, slot_name(slot));
-	if ((retval = cpci_unconfigure_slot(slot))) {
+	retval = cpci_unconfigure_slot(slot);
+	if (retval) {
 		err("%s - could not unconfigure slot %s",
 		    __func__, slot_name(slot));
 		goto disable_error;
···
 	}
 	cpci_led_on(slot);
 
-	if (controller->ops->set_power)
-		if ((retval = controller->ops->set_power(slot, 0)))
+	if (controller->ops->set_power) {
+		retval = controller->ops->set_power(slot, 0);
+		if (retval)
 			goto disable_error;
+	}
 
 	if (update_adapter_status(slot->hotplug_slot, 0))
 		warn("failure to update adapter file");
···
 			    __func__, slot_name(slot), hs_csr);
 
 		if (!slot->extracting) {
-			if (update_latch_status(slot->hotplug_slot, 0)) {
+			if (update_latch_status(slot->hotplug_slot, 0))
 				warn("failure to update latch file");
-			}
+
 			slot->extracting = 1;
 			atomic_inc(&extracting);
 		}
drivers/pci/hotplug/cpcihp_generic.c (+14 -14)

···
 		if (debug)					\
 			printk (KERN_DEBUG "%s: " format "\n",	\
 				MY_NAME , ## arg);		\
-	} while(0)
+	} while (0)
 #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg)
 #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg)
 #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME , ## arg)
···
 	char *p;
 	unsigned long tmp;
 
-	if(!bridge) {
+	if (!bridge) {
 		info("not configured, disabling.");
 		return -EINVAL;
 	}
 	str = bridge;
-	if(!*str)
+	if (!*str)
 		return -EINVAL;
 
 	tmp = simple_strtoul(str, &p, 16);
-	if(p == str || tmp > 0xff) {
+	if (p == str || tmp > 0xff) {
 		err("Invalid hotplug bus bridge device bus number");
 		return -EINVAL;
 	}
 	bridge_busnr = (u8) tmp;
 	dbg("bridge_busnr = 0x%02x", bridge_busnr);
-	if(*p != ':') {
+	if (*p != ':') {
 		err("Invalid hotplug bus bridge device");
 		return -EINVAL;
 	}
 	str = p + 1;
 	tmp = simple_strtoul(str, &p, 16);
-	if(p == str || tmp > 0x1f) {
+	if (p == str || tmp > 0x1f) {
 		err("Invalid hotplug bus bridge device slot number");
 		return -EINVAL;
 	}
···
 
 	dbg("first_slot = 0x%02x", first_slot);
 	dbg("last_slot = 0x%02x", last_slot);
-	if(!(first_slot && last_slot)) {
+	if (!(first_slot && last_slot)) {
 		err("Need to specify first_slot and last_slot");
 		return -EINVAL;
 	}
-	if(last_slot < first_slot) {
+	if (last_slot < first_slot) {
 		err("first_slot must be less than last_slot");
 		return -EINVAL;
 	}
 
 	dbg("port = 0x%04x", port);
 	dbg("enum_bit = 0x%02x", enum_bit);
-	if(enum_bit > 7) {
+	if (enum_bit > 7) {
 		err("Invalid #ENUM bit");
 		return -EINVAL;
 	}
···
 		return status;
 
 	r = request_region(port, 1, "#ENUM hotswap signal register");
-	if(!r)
+	if (!r)
 		return -EBUSY;
 
 	dev = pci_get_domain_bus_and_slot(0, bridge_busnr,
 					  PCI_DEVFN(bridge_slot, 0));
-	if(!dev || dev->hdr_type != PCI_HEADER_TYPE_BRIDGE) {
+	if (!dev || dev->hdr_type != PCI_HEADER_TYPE_BRIDGE) {
 		err("Invalid bridge device %s", bridge);
 		pci_dev_put(dev);
 		return -EINVAL;
···
 	generic_hpc.ops = &generic_hpc_ops;
 
 	status = cpci_hp_register_controller(&generic_hpc);
-	if(status != 0) {
+	if (status != 0) {
 		err("Could not register cPCI hotplug controller");
 		return -ENODEV;
 	}
 	dbg("registered controller");
 
 	status = cpci_hp_register_bus(bus, first_slot, last_slot);
-	if(status != 0) {
+	if (status != 0) {
 		err("Could not register cPCI hotplug bus");
 		goto init_bus_register_error;
 	}
 	dbg("registered bus");
 
 	status = cpci_hp_start();
-	if(status != 0) {
+	if (status != 0) {
 		err("Could not started cPCI hotplug system");
 		goto init_start_error;
 	}
drivers/pci/hotplug/cpcihp_zt5550.c (+22 -22)

···
 		if (debug)					\
 			printk (KERN_DEBUG "%s: " format "\n",	\
 				MY_NAME , ## arg);		\
-	} while(0)
+	} while (0)
 #define err(format, arg...) printk(KERN_ERR "%s: " format "\n", MY_NAME , ## arg)
 #define info(format, arg...) printk(KERN_INFO "%s: " format "\n", MY_NAME , ## arg)
 #define warn(format, arg...) printk(KERN_WARNING "%s: " format "\n", MY_NAME , ## arg)
···
 	int ret;
 
 	/* Since we know that no boards exist with two HC chips, treat it as an error */
-	if(hc_dev) {
+	if (hc_dev) {
 		err("too many host controller devices?");
 		return -EBUSY;
 	}
 
 	ret = pci_enable_device(pdev);
-	if(ret) {
+	if (ret) {
 		err("cannot enable %s\n", pci_name(pdev));
 		return ret;
 	}
···
 	dbg("pci resource start %llx", (unsigned long long)pci_resource_start(hc_dev, 1));
 	dbg("pci resource len %llx", (unsigned long long)pci_resource_len(hc_dev, 1));
 
-	if(!request_mem_region(pci_resource_start(hc_dev, 1),
+	if (!request_mem_region(pci_resource_start(hc_dev, 1),
 		pci_resource_len(hc_dev, 1), MY_NAME)) {
 		err("cannot reserve MMIO region");
 		ret = -ENOMEM;
···
 
 	hc_registers =
 	    ioremap(pci_resource_start(hc_dev, 1), pci_resource_len(hc_dev, 1));
-	if(!hc_registers) {
+	if (!hc_registers) {
 		err("cannot remap MMIO region %llx @ %llx",
 		    (unsigned long long)pci_resource_len(hc_dev, 1),
 		    (unsigned long long)pci_resource_start(hc_dev, 1));
···
 
 static int zt5550_hc_cleanup(void)
 {
-	if(!hc_dev)
+	if (!hc_dev)
 		return -ENODEV;
 
 	iounmap(hc_registers);
···
 	u8 reg;
 
 	ret = 0;
-	if(dev_id == zt5550_hpc.dev_id) {
+	if (dev_id == zt5550_hpc.dev_id) {
 		reg = readb(csr_int_status);
-		if(reg)
+		if (reg)
 			ret = 1;
 	}
 	return ret;
···
 {
 	u8 reg;
 
-	if(hc_dev == NULL) {
+	if (hc_dev == NULL)
 		return -ENODEV;
-	}
+
 	reg = readb(csr_int_mask);
 	reg = reg & ~ENUM_INT_MASK;
 	writeb(reg, csr_int_mask);
···
 {
 	u8 reg;
 
-	if(hc_dev == NULL) {
+	if (hc_dev == NULL)
 		return -ENODEV;
-	}
 
 	reg = readb(csr_int_mask);
 	reg = reg | ENUM_INT_MASK;
···
 	int status;
 
 	status = zt5550_hc_config(pdev);
-	if(status != 0) {
+	if (status != 0)
 		return status;
-	}
+
 	dbg("returned from zt5550_hc_config");
 
 	memset(&zt5550_hpc, 0, sizeof (struct cpci_hp_controller));
 	zt5550_hpc_ops.query_enum = zt5550_hc_query_enum;
 	zt5550_hpc.ops = &zt5550_hpc_ops;
-	if(!poll) {
+	if (!poll) {
 		zt5550_hpc.irq = hc_dev->irq;
 		zt5550_hpc.irq_flags = IRQF_SHARED;
 		zt5550_hpc.dev_id = hc_dev;
···
 	}
 
 	status = cpci_hp_register_controller(&zt5550_hpc);
-	if(status != 0) {
+	if (status != 0) {
 		err("could not register cPCI hotplug controller");
 		goto init_hc_error;
 	}
 	dbg("registered controller");
 
 	/* Look for first device matching cPCI bus's bridge vendor and device IDs */
-	if(!(bus0_dev = pci_get_device(PCI_VENDOR_ID_DEC,
-				       PCI_DEVICE_ID_DEC_21154, NULL))) {
+	bus0_dev = pci_get_device(PCI_VENDOR_ID_DEC,
+				  PCI_DEVICE_ID_DEC_21154, NULL);
+	if (!bus0_dev) {
 		status = -ENODEV;
 		goto init_register_error;
 	}
···
 	pci_dev_put(bus0_dev);
 
 	status = cpci_hp_register_bus(bus0, 0x0a, 0x0f);
-	if(status != 0) {
+	if (status != 0) {
 		err("could not register cPCI hotplug bus");
 		goto init_register_error;
 	}
 	dbg("registered bus");
 
 	status = cpci_hp_start();
-	if(status != 0) {
+	if (status != 0) {
 		err("could not started cPCI hotplug system");
 		cpci_hp_unregister_bus(bus0);
 		goto init_register_error;
···
 
 	info(DRIVER_DESC " version: " DRIVER_VERSION);
 	r = request_region(ENUM_PORT, 1, "#ENUM hotswap signal register");
-	if(!r)
+	if (!r)
 		return -EBUSY;
 
 	rc = pci_register_driver(&zt5550_hc_driver);
-	if(rc < 0)
+	if (rc < 0)
 		release_region(ENUM_PORT, 1);
 	return rc;
 }
drivers/pci/hotplug/cpqphp.h (+1 -1)

···
 
 	status = (readl(ctrl->hpc_reg + INT_INPUT_CLEAR) & (0x01L << hp_slot));
 
-	return(status == 0) ? 1 : 0;
+	return (status == 0) ? 1 : 0;
 
 
drivers/pci/hotplug/cpqphp_core.c (+1 -2)

···
 
 	/* initialize our threads if they haven't already been started up */
 	rc = one_time_init();
-	if (rc) {
+	if (rc)
 		goto err_free_bus;
-	}
 
 	dbg("pdev = %p\n", pdev);
 	dbg("pci resource start %llx\n", (unsigned long long)pci_resource_start(pdev, 0));
drivers/pci/hotplug/cpqphp_ctrl.c (+7 -12)

···
 	if (temp == max) {
 		*head = max->next;
 	} else {
-		while (temp && temp->next != max) {
+		while (temp && temp->next != max)
 			temp = temp->next;
-		}
 
 		if (temp)
 			temp->next = max->next;
···
 	/*
 	 * Check to see if it was our interrupt
 	 */
-	if (!(misc & 0x000C)) {
+	if (!(misc & 0x000C))
 		return IRQ_NONE;
-	}
 
 	if (misc & 0x0004) {
 		/*
···
 	/* We don't allow freq/mode changes if we find another adapter running
 	 * in another slot on this controller
 	 */
-	for(slot = ctrl->slot; slot; slot = slot->next) {
+	for (slot = ctrl->slot; slot; slot = slot->next) {
 		if (slot->device == (hp_slot + ctrl->slot_device_offset))
 			continue;
 		if (!slot->hotplug_slot || !slot->hotplug_slot->info)
···
 
 	reg16 = readw(ctrl->hpc_reg + NEXT_CURR_FREQ);
 	reg16 &= ~0x000F;
-	switch(adapter_speed) {
+	switch (adapter_speed) {
 	case(PCI_SPEED_133MHz_PCIX):
 		reg = 0x75;
 		reg16 |= 0xB;
···
 	/* Check to see if the interlock is closed */
 	tempdword = readl(ctrl->hpc_reg + INT_INPUT_CLEAR);
 
-	if (tempdword & (0x01 << hp_slot)) {
+	if (tempdword & (0x01 << hp_slot))
 		return 1;
-	}
 
 	if (func->is_a_board) {
 		rc = board_replaced(func, ctrl);
···
 		}
 	}
 
-	if (rc) {
+	if (rc)
 		dbg("%s: rc = %d\n", __func__, rc);
-	}
 
 	if (p_slot)
 		update_slot_info(ctrl, p_slot);
···
 	device = func->device;
 	func = cpqhp_slot_find(ctrl->bus, device, index++);
 	p_slot = cpqhp_find_slot(ctrl, device);
-	if (p_slot) {
+	if (p_slot)
 		physical_slot = p_slot->number;
-	}
 
 	/* Make sure there are no video controllers here */
 	while (func && !rc) {
drivers/pci/hotplug/cpqphp_nvram.c (+5 -8)

···
 	u8 temp_byte = 0xFF;
 	u32 rc;
 
-	if (!check_for_compaq_ROM(rom_start)) {
+	if (!check_for_compaq_ROM(rom_start))
 		return -ENODEV;
-	}
 
 	available = 1024;
 
···
 
 	available = 1024;
 
-	if (!check_for_compaq_ROM(rom_start)) {
+	if (!check_for_compaq_ROM(rom_start))
 		return(1);
-	}
 
 	buffer = (u32*) evbuffer;
 
···
 
 void compaq_nvram_init (void __iomem *rom_start)
 {
-	if (rom_start) {
+	if (rom_start)
 		compaq_int15_entry_point = (rom_start + ROM_INT15_PHY_ADDR - ROM_PHY_ADDR);
-	}
+
 	dbg("int15 entry  = %p\n", compaq_int15_entry_point);
 
 	/* initialize our int15 lock */
···
 
 	if (evbuffer_init) {
 		rc = store_HRT(rom_start);
-		if (rc) {
+		if (rc)
 			err(msg_unable_to_save);
-		}
 	}
 	return rc;
 }
drivers/pci/hotplug/ibmphp_core.c (+11 -8)

···
 	debug("ENABLING SLOT........\n");
 	slot_cur = hs->private;
 
-	if ((rc = validate(slot_cur, ENABLE))) {
+	rc = validate(slot_cur, ENABLE);
+	if (rc) {
 		err("validate function failed\n");
 		goto error_nopower;
 	}
···
 
 	debug("DISABLING SLOT...\n");
 
-	if ((slot_cur == NULL) || (slot_cur->ctrl == NULL)) {
+	if ((slot_cur == NULL) || (slot_cur->ctrl == NULL))
 		return -ENODEV;
-	}
 
 	flag = slot_cur->flag;
 	slot_cur->flag = 1;
···
 	for (i = 0; i < 16; i++)
 		irqs[i] = 0;
 
-	if ((rc = ibmphp_access_ebda()))
+	rc = ibmphp_access_ebda();
+	if (rc)
 		goto error;
 	debug("after ibmphp_access_ebda()\n");
 
-	if ((rc = ibmphp_rsrc_init()))
+	rc = ibmphp_rsrc_init();
+	if (rc)
 		goto error;
 	debug("AFTER Resource & EBDA INITIALIZATIONS\n");
 
 	max_slots = get_max_slots();
 
-	if ((rc = ibmphp_register_pci()))
+	rc = ibmphp_register_pci();
+	if (rc)
 		goto error;
 
 	if (init_ops()) {
···
 	}
 
 	ibmphp_print_test();
-	if ((rc = ibmphp_hpc_start_poll_thread())) {
+	rc = ibmphp_hpc_start_poll_thread();
+	if (rc)
 		goto error;
-	}
 
 exit:
 	return rc;
drivers/pci/hotplug/ibmphp_ebda.c (+1 -2)

···
 		debug ("%s - cap of the slot: %x\n", __func__, hpc_ptr->slots[index].slot_cap);
 	}
 
-	for (index = 0; index < hpc_ptr->bus_count; index++) {
+	for (index = 0; index < hpc_ptr->bus_count; index++)
 		debug ("%s - bus# of each bus controlled by this ctlr: %x\n", __func__, hpc_ptr->buses[index].bus_num);
-	}
 
 	debug ("%s - type of hpc: %x\n", __func__, hpc_ptr->ctlr_type);
 	switch (hpc_ptr->ctlr_type) {
drivers/pci/hotplug/ibmphp_hpc.c (+1 -2)

···
 		rc = ibmphp_do_disable_slot (pslot);
 	}
 
-	if (update || disable) {
+	if (update || disable)
 		ibmphp_update_slot_info (pslot);
-	}
 
 	debug ("%s - Exit rc[%d] disable[%x] update[%x]\n", __func__, rc, disable, update);
 
drivers/pci/hotplug/ibmphp_pci.c (+4 -2)

···
 	case PCI_HEADER_TYPE_NORMAL:
 		debug ("single device case.... vendor id = %x, hdr_type = %x, class = %x\n", vendor_id, hdr_type, class);
 		assign_alt_irq (cur_func, class_code);
-		if ((rc = configure_device (cur_func)) < 0) {
+		rc = configure_device(cur_func);
+		if (rc < 0) {
 			/* We need to do this in case some other BARs were properly inserted */
 			err ("was not able to configure devfunc %x on bus %x.\n",
 			     cur_func->device, cur_func->busno);
···
 		break;
 	case PCI_HEADER_TYPE_MULTIDEVICE:
 		assign_alt_irq (cur_func, class_code);
-		if ((rc = configure_device (cur_func)) < 0) {
+		rc = configure_device(cur_func);
+		if (rc < 0) {
 			/* We need to do this in case some other BARs were properly inserted */
 			err ("was not able to configure devfunc %x on bus %x...bailing out\n",
 			     cur_func->device, cur_func->busno);
drivers/pci/hotplug/ibmphp_res.c (+31 -14)

···
 	if ((curr->rsrc_type & RESTYPE) == MMASK) {
 		/* no bus structure exists in place yet */
 		if (list_empty (&gbuses)) {
-			if ((rc = alloc_bus_range (&newbus, &newrange, curr, MEM, 1)))
+			rc = alloc_bus_range(&newbus, &newrange, curr, MEM, 1);
+			if (rc)
 				return rc;
 			list_add_tail (&newbus->bus_list, &gbuses);
 			debug ("gbuses = NULL, Memory Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
···
 				return rc;
 			} else {
 				/* went through all the buses and didn't find ours, need to create a new bus node */
-				if ((rc = alloc_bus_range (&newbus, &newrange, curr, MEM, 1)))
+				rc = alloc_bus_range(&newbus, &newrange, curr, MEM, 1);
+				if (rc)
 					return rc;
 
 				list_add_tail (&newbus->bus_list, &gbuses);
···
 		/* prefetchable memory */
 		if (list_empty (&gbuses)) {
 			/* no bus structure exists in place yet */
-			if ((rc = alloc_bus_range (&newbus, &newrange, curr, PFMEM, 1)))
+			rc = alloc_bus_range(&newbus, &newrange, curr, PFMEM, 1);
+			if (rc)
 				return rc;
 			list_add_tail (&newbus->bus_list, &gbuses);
 			debug ("gbuses = NULL, PFMemory Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
···
 				return rc;
 			} else {
 				/* went through all the buses and didn't find ours, need to create a new bus node */
-				if ((rc = alloc_bus_range (&newbus, &newrange, curr, PFMEM, 1)))
+				rc = alloc_bus_range(&newbus, &newrange, curr, PFMEM, 1);
+				if (rc)
 					return rc;
 				list_add_tail (&newbus->bus_list, &gbuses);
 				debug ("1st Bus, PFMemory Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
···
 		/* IO */
 		if (list_empty (&gbuses)) {
 			/* no bus structure exists in place yet */
-			if ((rc = alloc_bus_range (&newbus, &newrange, curr, IO, 1)))
+			rc = alloc_bus_range(&newbus, &newrange, curr, IO, 1);
+			if (rc)
 				return rc;
 			list_add_tail (&newbus->bus_list, &gbuses);
 			debug ("gbuses = NULL, IO Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
···
 				return rc;
 			} else {
 				/* went through all the buses and didn't find ours, need to create a new bus node */
-				if ((rc = alloc_bus_range (&newbus, &newrange, curr, IO, 1)))
+				rc = alloc_bus_range(&newbus, &newrange, curr, IO, 1);
+				if (rc)
 					return rc;
 				list_add_tail (&newbus->bus_list, &gbuses);
 				debug ("1st Bus, IO Primary Bus %x [%x - %x]\n", newbus->busno, newrange->start, newrange->end);
···
 			/* found our range */
 			if (!res_prev) {
 				/* first time in the loop */
-				if ((res_cur->start != range->start) && ((len_tmp = res_cur->start - 1 - range->start) >= res->len)) {
+				len_tmp = res_cur->start - 1 - range->start;
+
+				if ((res_cur->start != range->start) && (len_tmp >= res->len)) {
 					debug ("len_tmp = %x\n", len_tmp);
 
 					if ((len_tmp < len_cur) || (len_cur == 0)) {
···
 				}
 				if (!res_cur->next) {
 					/* last device on the range */
-					if ((range->end != res_cur->end) && ((len_tmp = range->end - (res_cur->end + 1)) >= res->len)) {
+					len_tmp = range->end - (res_cur->end + 1);
+
+					if ((range->end != res_cur->end) && (len_tmp >= res->len)) {
 						debug ("len_tmp = %x\n", len_tmp);
 						if ((len_tmp < len_cur) || (len_cur == 0)) {
 
···
 				if (res_prev) {
 					if (res_prev->rangeno != res_cur->rangeno) {
 						/* 1st device on this range */
-						if ((res_cur->start != range->start) &&
-							((len_tmp = res_cur->start - 1 - range->start) >= res->len)) {
+						len_tmp = res_cur->start - 1 - range->start;
+
+						if ((res_cur->start != range->start) && (len_tmp >= res->len)) {
 							if ((len_tmp < len_cur) || (len_cur == 0)) {
 								if ((range->start % tmp_divide) == 0) {
 									/* just perfect, starting address is divisible by length */
···
 						}
 					} else {
 						/* in the same range */
-						if ((len_tmp = res_cur->start - 1 - res_prev->end - 1) >= res->len) {
+						len_tmp = res_cur->start - 1 - res_prev->end - 1;
+
+						if (len_tmp >= res->len) {
 							if ((len_tmp < len_cur) || (len_cur == 0)) {
 								if (((res_prev->end + 1) % tmp_divide) == 0) {
 									/* just perfect, starting address's divisible by length */
···
 				break;
 			}
 			while (range) {
-				if ((len_tmp = range->end - range->start) >= res->len) {
+				len_tmp = range->end - range->start;
+
+				if (len_tmp >= res->len) {
 					if ((len_tmp < len_cur) || (len_cur == 0)) {
 						if ((range->start % tmp_divide) == 0) {
 							/* just perfect, starting address's divisible by length */
···
 				break;
 			}
 			while (range) {
-				if ((len_tmp = range->end - range->start) >= res->len) {
+				len_tmp = range->end - range->start;
+
+				if (len_tmp >= res->len) {
 					if ((len_tmp < len_cur) || (len_cur == 0)) {
 						if ((range->start % tmp_divide) == 0) {
 							/* just perfect, starting address's divisible by length */
···
 					return -EINVAL;
 				}
 			}
-	}	/* end if(!res_cur) */
+	}	/* end if (!res_cur) */
 	return -EINVAL;
 }
 
drivers/pci/hotplug/pciehp.h (+1 -1)

···
 	struct slot *slot;
 	wait_queue_head_t queue;	/* sleep & wake process */
 	u32 slot_cap;
-	u32 slot_ctrl;
+	u16 slot_ctrl;
 	struct timer_list poll_timer;
 	unsigned long cmd_started;	/* jiffies */
 	unsigned int cmd_busy:1;
drivers/pci/hotplug/pciehp_core.c (+7)

···
 		goto err_out_none;
 	}
 
+	if (!dev->port->subordinate) {
+		/* Can happen if we run out of bus numbers during probe */
+		dev_err(&dev->device,
+			"Hotplug bridge without secondary bus, ignoring\n");
+		goto err_out_none;
+	}
+
 	ctrl = pcie_init(dev);
 	if (!ctrl) {
 		dev_err(&dev->device, "Controller initialization failed\n");
drivers/pci/hotplug/pciehp_hpc.c (+11 -6)

···
 	 * interrupts.
 	 */
 	if (!rc)
-		ctrl_info(ctrl, "Timeout on hotplug command %#010x (issued %u msec ago)\n",
+		ctrl_info(ctrl, "Timeout on hotplug command %#06x (issued %u msec ago)\n",
 			  ctrl->slot_ctrl,
-			  jiffies_to_msecs(now - ctrl->cmd_started));
+			  jiffies_to_msecs(jiffies - ctrl->cmd_started));
 }
 
 /**
···
 	default:
 		return;
 	}
+	pcie_write_cmd(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC);
 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd);
-	pcie_write_cmd(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC);
 }
 
 void pciehp_green_led_on(struct slot *slot)
···
 		PCI_EXP_SLTCTL_DLLSCE);
 
 	pcie_write_cmd(ctrl, cmd, mask);
+	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd);
 }
 
 static void pcie_disable_notification(struct controller *ctrl)
···
 		PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE |
 		PCI_EXP_SLTCTL_DLLSCE);
 	pcie_write_cmd(ctrl, 0, mask);
+	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 0);
 }
 
 /*
···
 		stat_mask |= PCI_EXP_SLTSTA_DLLSC;
 
 	pcie_write_cmd(ctrl, 0, ctrl_mask);
+	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 0);
 	if (pciehp_poll_mode)
 		del_timer_sync(&ctrl->poll_timer);
 
···
 
 	pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask);
 	pcie_write_cmd(ctrl, ctrl_mask, ctrl_mask);
+	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask);
 	if (pciehp_poll_mode)
 		int_poll_timeout(ctrl->poll_timer.data);
 
···
 		PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD |
 		PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_PDC |
 		PCI_EXP_SLTSTA_CC | PCI_EXP_SLTSTA_DLLSC);
-
-	/* Disable software notification */
-	pcie_disable_notification(ctrl);
 
 	ctrl_info(ctrl, "Slot #%d AttnBtn%c AttnInd%c PwrInd%c PwrCtrl%c MRL%c Interlock%c NoCompl%c LLActRep%c\n",
 		(slot_cap & PCI_EXP_SLTCAP_PSN) >> 19,
drivers/pci/hotplug/pciehp_pci.c (+1 -8)

···
 	pci_hp_add_bridge(dev);
 
 	pci_assign_unassigned_bridge_resources(bridge);
-
-	list_for_each_entry(dev, &parent->devices, bus_list) {
-		if ((dev->class >> 16) == PCI_BASE_CLASS_DISPLAY)
-			continue;
-
-		pci_configure_slot(dev);
-	}
-
+	pcie_bus_configure_settings(parent);
 	pci_bus_add_devices(parent);
 
 out:
drivers/pci/hotplug/pcihp_slot.c (-176, deleted file)

···
-/*
- * Copyright (C) 1995,2001 Compaq Computer Corporation
- * Copyright (C) 2001 Greg Kroah-Hartman (greg@kroah.com)
- * Copyright (C) 2001 IBM Corp.
- * Copyright (C) 2003-2004 Intel Corporation
- * (c) Copyright 2009 Hewlett-Packard Development Company, L.P.
- *
- * All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or (at
- * your option) any later version.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
- * NON INFRINGEMENT.  See the GNU General Public License for more
- * details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- */
-
-#include <linux/pci.h>
-#include <linux/export.h>
-#include <linux/pci_hotplug.h>
-
-static struct hpp_type0 pci_default_type0 = {
-	.revision = 1,
-	.cache_line_size = 8,
-	.latency_timer = 0x40,
-	.enable_serr = 0,
-	.enable_perr = 0,
-};
-
-static void program_hpp_type0(struct pci_dev *dev, struct hpp_type0 *hpp)
-{
-	u16 pci_cmd, pci_bctl;
-
-	if (!hpp) {
-		/*
-		 * Perhaps we *should* use default settings for PCIe, but
-		 * pciehp didn't, so we won't either.
-		 */
-		if (pci_is_pcie(dev))
-			return;
-		hpp = &pci_default_type0;
-	}
-
-	if (hpp->revision > 1) {
-		dev_warn(&dev->dev,
-			 "PCI settings rev %d not supported; using defaults\n",
-			 hpp->revision);
-		hpp = &pci_default_type0;
-	}
-
-	pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE, hpp->cache_line_size);
-	pci_write_config_byte(dev, PCI_LATENCY_TIMER, hpp->latency_timer);
-	pci_read_config_word(dev, PCI_COMMAND, &pci_cmd);
-	if (hpp->enable_serr)
-		pci_cmd |= PCI_COMMAND_SERR;
-	else
-		pci_cmd &= ~PCI_COMMAND_SERR;
-	if (hpp->enable_perr)
-		pci_cmd |= PCI_COMMAND_PARITY;
-	else
-		pci_cmd &= ~PCI_COMMAND_PARITY;
-	pci_write_config_word(dev, PCI_COMMAND, pci_cmd);
-
-	/* Program bridge control value */
-	if ((dev->class >> 8) == PCI_CLASS_BRIDGE_PCI) {
-		pci_write_config_byte(dev, PCI_SEC_LATENCY_TIMER,
-				      hpp->latency_timer);
-		pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &pci_bctl);
-		if (hpp->enable_serr)
-			pci_bctl |= PCI_BRIDGE_CTL_SERR;
-		else
-			pci_bctl &= ~PCI_BRIDGE_CTL_SERR;
-		if (hpp->enable_perr)
-			pci_bctl |= PCI_BRIDGE_CTL_PARITY;
-		else
-			pci_bctl &= ~PCI_BRIDGE_CTL_PARITY;
-		pci_write_config_word(dev, PCI_BRIDGE_CONTROL, pci_bctl);
-	}
-}
-
-static void program_hpp_type1(struct pci_dev *dev, struct hpp_type1 *hpp)
-{
-	if (hpp)
-		dev_warn(&dev->dev, "PCI-X settings not supported\n");
-}
-
-static void program_hpp_type2(struct pci_dev *dev, struct hpp_type2 *hpp)
-{
-	int pos;
-	u32 reg32;
-
-	if (!hpp)
-		return;
-
-	if (hpp->revision > 1) {
-		dev_warn(&dev->dev, "PCIe settings rev %d not supported\n",
-			 hpp->revision);
-		return;
-	}
-
-	/* Initialize Device Control Register */
-	pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL,
-			~hpp->pci_exp_devctl_and, hpp->pci_exp_devctl_or);
-
-	/* Initialize Link Control Register */
-	if (dev->subordinate)
-		pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL,
-			~hpp->pci_exp_lnkctl_and, hpp->pci_exp_lnkctl_or);
-
-	/* Find Advanced Error Reporting Enhanced Capability */
-	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
-	if (!pos)
-		return;
-
-	/* Initialize Uncorrectable Error Mask Register */
-	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, &reg32);
-	reg32 = (reg32 & hpp->unc_err_mask_and) | hpp->unc_err_mask_or;
-	pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, reg32);
-
-	/* Initialize Uncorrectable Error Severity Register */
-	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &reg32);
-	reg32 = (reg32 & hpp->unc_err_sever_and) | hpp->unc_err_sever_or;
-	pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, reg32);
-
-	/* Initialize Correctable Error Mask Register */
-	pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, &reg32);
-	reg32 = (reg32 & hpp->cor_err_mask_and) | hpp->cor_err_mask_or;
-	pci_write_config_dword(dev, pos + PCI_ERR_COR_MASK, reg32);
-
-	/* Initialize Advanced Error Capabilities and Control Register */
-	pci_read_config_dword(dev, pos + PCI_ERR_CAP, &reg32);
-	reg32 = (reg32 & hpp->adv_err_cap_and) | hpp->adv_err_cap_or;
-	pci_write_config_dword(dev, pos + PCI_ERR_CAP, reg32);
-
-	/*
-	 * FIXME: The following two registers are not supported yet.
-	 *
-	 *   o Secondary Uncorrectable Error Severity Register
-	 *   o Secondary Uncorrectable Error Mask Register
-	 */
-}
-
-void pci_configure_slot(struct pci_dev *dev)
-{
-	struct pci_dev *cdev;
-	struct hotplug_params hpp;
-
-	if (!(dev->hdr_type == PCI_HEADER_TYPE_NORMAL ||
-	      (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE &&
-	       (dev->class >> 8) == PCI_CLASS_BRIDGE_PCI)))
-		return;
-
-	pcie_bus_configure_settings(dev->bus);
-
-	memset(&hpp, 0, sizeof(hpp));
-	pci_get_hp_params(dev, &hpp);
-
-	program_hpp_type2(dev, hpp.t2);
-	program_hpp_type1(dev, hpp.t1);
-	program_hpp_type0(dev, hpp.t0);
-
-	if (dev->subordinate) {
-		list_for_each_entry(cdev, &dev->subordinate->devices,
-				    bus_list)
-			pci_configure_slot(cdev);
-	}
-}
-EXPORT_SYMBOL_GPL(pci_configure_slot);
drivers/pci/hotplug/shpchp_ctrl.c (+9 -5)

···
 	int rc = 0;
 
 	ctrl_dbg(ctrl, "Change speed to %d\n", speed);
-	if ((rc = p_slot->hpc_ops->set_bus_speed_mode(p_slot, speed))) {
+	rc = p_slot->hpc_ops->set_bus_speed_mode(p_slot, speed);
+	if (rc) {
 		ctrl_err(ctrl, "%s: Issue of set bus speed mode command failed\n",
 			 __func__);
 		return WRONG_BUS_FREQUENCY;
···
 	}
 
 	if ((ctrl->pci_dev->vendor == 0x8086) && (ctrl->pci_dev->device == 0x0332)) {
-		if ((rc = p_slot->hpc_ops->set_bus_speed_mode(p_slot, PCI_SPEED_33MHz))) {
+		rc = p_slot->hpc_ops->set_bus_speed_mode(p_slot, PCI_SPEED_33MHz);
+		if (rc) {
 			ctrl_err(ctrl, "%s: Issue of set bus speed mode command failed\n",
 				 __func__);
 			return WRONG_BUS_FREQUENCY;
 		}
 
 		/* turn on board, blink green LED, turn off Amber LED */
-		if ((rc = p_slot->hpc_ops->slot_enable(p_slot))) {
+		rc = p_slot->hpc_ops->slot_enable(p_slot);
+		if (rc) {
 			ctrl_err(ctrl, "Issue of Slot Enable command failed\n");
 			return rc;
 		}
···
 		return rc;
 
 	/* turn on board, blink green LED, turn off Amber LED */
-	if ((rc = p_slot->hpc_ops->slot_enable(p_slot))) {
+	rc = p_slot->hpc_ops->slot_enable(p_slot);
+	if (rc) {
 		ctrl_err(ctrl, "Issue of Slot Enable command failed\n");
 		return rc;
 	}
···
 	ctrl_dbg(ctrl, "%s: p_slot->pwr_save %x\n", __func__, p_slot->pwr_save);
 	p_slot->hpc_ops->get_latch_status(p_slot, &getstatus);
 
-	if(((p_slot->ctrl->pci_dev->vendor == PCI_VENDOR_ID_AMD) ||
+	if (((p_slot->ctrl->pci_dev->vendor == PCI_VENDOR_ID_AMD) ||
 	    (p_slot->ctrl->pci_dev->device == PCI_DEVICE_ID_AMD_POGO_7458))
 	     && p_slot->ctrl->num_slots == 1) {
 		/* handle amd pogo errata; this must be done before enable  */
drivers/pci/hotplug/shpchp_hpc.c (+3 -2)

···
 	u8 m66_cap  = !!(slot_reg & MHZ66_CAP);
 	u8 pi, pcix_cap;
 
-	if ((retval = hpc_get_prog_int(slot, &pi)))
+	retval = hpc_get_prog_int(slot, &pi);
+	if (retval)
 		return retval;
 
 	switch (pi) {
···
 
 	ctrl_dbg(ctrl, "%s: intr_loc = %x\n", __func__, intr_loc);
 
-	if(!shpchp_poll_mode) {
+	if (!shpchp_poll_mode) {
 		/*
 		 * Mask Global Interrupt Mask - see implementation
 		 * note on p. 139 of SHPC spec rev 1.0
+1 -7
drivers/pci/hotplug/shpchp_pci.c
··· 69 69 } 70 70 71 71 pci_assign_unassigned_bridge_resources(bridge); 72 - 73 - list_for_each_entry(dev, &parent->devices, bus_list) { 74 - if (PCI_SLOT(dev->devfn) != p_slot->device) 75 - continue; 76 - pci_configure_slot(dev); 77 - } 78 - 72 + pcie_bus_configure_settings(parent); 79 73 pci_bus_add_devices(parent); 80 74 81 75 out:
+1 -1
drivers/pci/iov.c
··· 633 633 * our dev as the physical function and the assigned bit is set 634 634 */ 635 635 if (vfdev->is_virtfn && (vfdev->physfn == dev) && 636 - (vfdev->dev_flags & PCI_DEV_FLAGS_ASSIGNED)) 636 + pci_is_dev_assigned(vfdev)) 637 637 vfs_assigned++; 638 638 639 639 vfdev = pci_get_device(dev->vendor, dev_id, vfdev);
+18 -57
drivers/pci/msi.c
··· 56 56 chip->teardown_irq(chip, irq); 57 57 } 58 58 59 - int __weak arch_msi_check_device(struct pci_dev *dev, int nvec, int type) 60 - { 61 - struct msi_chip *chip = dev->bus->msi; 62 - 63 - if (!chip || !chip->check_device) 64 - return 0; 65 - 66 - return chip->check_device(chip, dev, nvec, type); 67 - } 68 - 69 59 int __weak arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) 70 60 { 71 61 struct msi_desc *entry; ··· 120 130 } 121 131 122 132 if (entry) 123 - write_msi_msg(irq, &entry->msg); 133 + __write_msi_msg(entry, &entry->msg); 124 134 } 125 135 126 136 void __weak arch_restore_msi_irqs(struct pci_dev *dev) ··· 374 384 iounmap(entry->mask_base); 375 385 } 376 386 377 - /* 378 - * Its possible that we get into this path 379 - * When populate_msi_sysfs fails, which means the entries 380 - * were not registered with sysfs. In that case don't 381 - * unregister them. 382 - */ 383 - if (entry->kobj.parent) { 384 - kobject_del(&entry->kobj); 385 - kobject_put(&entry->kobj); 386 - } 387 - 388 387 list_del(&entry->list); 389 388 kfree(entry); 390 389 } ··· 574 595 entry->msi_attrib.entry_nr = 0; 575 596 entry->msi_attrib.maskbit = !!(control & PCI_MSI_FLAGS_MASKBIT); 576 597 entry->msi_attrib.default_irq = dev->irq; /* Save IOAPIC IRQ */ 577 - entry->msi_attrib.pos = dev->msi_cap; 578 598 entry->msi_attrib.multi_cap = (control & PCI_MSI_FLAGS_QMASK) >> 1; 579 599 580 600 if (control & PCI_MSI_FLAGS_64BIT) ··· 677 699 entry->msi_attrib.is_64 = 1; 678 700 entry->msi_attrib.entry_nr = entries[i].entry; 679 701 entry->msi_attrib.default_irq = dev->irq; 680 - entry->msi_attrib.pos = dev->msix_cap; 681 702 entry->mask_base = base; 682 703 683 704 list_add_tail(&entry->list, &dev->msi_list); ··· 783 806 } 784 807 785 808 /** 786 - * pci_msi_check_device - check whether MSI may be enabled on a device 809 + * pci_msi_supported - check whether MSI may be enabled on a device 787 810 * @dev: pointer to the pci_dev data structure of MSI device function 788 811 * 
@nvec: how many MSIs have been requested ? 789 - * @type: are we checking for MSI or MSI-X ? 790 812 * 791 813 * Look at global flags, the device itself, and its parent buses 792 814 * to determine if MSI/-X are supported for the device. If MSI/-X is 793 - * supported return 0, else return an error code. 815 + * supported return 1, else return 0. 794 816 **/ 795 - static int pci_msi_check_device(struct pci_dev *dev, int nvec, int type) 817 + static int pci_msi_supported(struct pci_dev *dev, int nvec) 796 818 { 797 819 struct pci_bus *bus; 798 - int ret; 799 820 800 821 /* MSI must be globally enabled and supported by the device */ 801 - if (!pci_msi_enable || !dev || dev->no_msi) 802 - return -EINVAL; 822 + if (!pci_msi_enable) 823 + return 0; 824 + 825 + if (!dev || dev->no_msi || dev->current_state != PCI_D0) 826 + return 0; 803 827 804 828 /* 805 829 * You can't ask to have 0 or less MSIs configured. ··· 808 830 * b) the list manipulation code assumes nvec >= 1. 809 831 */ 810 832 if (nvec < 1) 811 - return -ERANGE; 833 + return 0; 812 834 813 835 /* 814 836 * Any bridge which does NOT route MSI transactions from its ··· 819 841 */ 820 842 for (bus = dev->bus; bus; bus = bus->parent) 821 843 if (bus->bus_flags & PCI_BUS_FLAGS_NO_MSI) 822 - return -EINVAL; 844 + return 0; 823 845 824 - ret = arch_msi_check_device(dev, nvec, type); 825 - if (ret) 826 - return ret; 827 - 828 - return 0; 846 + return 1; 829 847 } 830 848 831 849 /** ··· 920 946 **/ 921 947 int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec) 922 948 { 923 - int status, nr_entries; 949 + int nr_entries; 924 950 int i, j; 925 951 926 - if (!entries || !dev->msix_cap || dev->current_state != PCI_D0) 952 + if (!pci_msi_supported(dev, nvec)) 927 953 return -EINVAL; 928 954 929 - status = pci_msi_check_device(dev, nvec, PCI_CAP_ID_MSIX); 930 - if (status) 931 - return status; 955 + if (!entries) 956 + return -EINVAL; 932 957 933 958 nr_entries = pci_msix_vec_count(dev); 934 959 
if (nr_entries < 0) ··· 951 978 dev_info(&dev->dev, "can't enable MSI-X (MSI IRQ already assigned)\n"); 952 979 return -EINVAL; 953 980 } 954 - status = msix_capability_init(dev, entries, nvec); 955 - return status; 981 + return msix_capability_init(dev, entries, nvec); 956 982 } 957 983 EXPORT_SYMBOL(pci_enable_msix); 958 984 ··· 1034 1062 int nvec; 1035 1063 int rc; 1036 1064 1037 - if (dev->current_state != PCI_D0) 1065 + if (!pci_msi_supported(dev, minvec)) 1038 1066 return -EINVAL; 1039 1067 1040 1068 WARN_ON(!!dev->msi_enabled); ··· 1056 1084 return -EINVAL; 1057 1085 else if (nvec > maxvec) 1058 1086 nvec = maxvec; 1059 - 1060 - do { 1061 - rc = pci_msi_check_device(dev, nvec, PCI_CAP_ID_MSI); 1062 - if (rc < 0) { 1063 - return rc; 1064 - } else if (rc > 0) { 1065 - if (rc < minvec) 1066 - return -ENOSPC; 1067 - nvec = rc; 1068 - } 1069 - } while (rc); 1070 1087 1071 1088 do { 1072 1089 rc = msi_capability_init(dev, nvec);
+262 -14
drivers/pci/pci-acpi.c
··· 10 10 #include <linux/delay.h> 11 11 #include <linux/init.h> 12 12 #include <linux/pci.h> 13 + #include <linux/pci_hotplug.h> 13 14 #include <linux/module.h> 14 15 #include <linux/pci-aspm.h> 15 16 #include <linux/pci-acpi.h> 16 17 #include <linux/pm_runtime.h> 17 18 #include <linux/pm_qos.h> 18 19 #include "pci.h" 20 + 21 + phys_addr_t acpi_pci_root_get_mcfg_addr(acpi_handle handle) 22 + { 23 + acpi_status status = AE_NOT_EXIST; 24 + unsigned long long mcfg_addr; 25 + 26 + if (handle) 27 + status = acpi_evaluate_integer(handle, METHOD_NAME__CBA, 28 + NULL, &mcfg_addr); 29 + if (ACPI_FAILURE(status)) 30 + return 0; 31 + 32 + return (phys_addr_t)mcfg_addr; 33 + } 34 + 35 + static acpi_status decode_type0_hpx_record(union acpi_object *record, 36 + struct hotplug_params *hpx) 37 + { 38 + int i; 39 + union acpi_object *fields = record->package.elements; 40 + u32 revision = fields[1].integer.value; 41 + 42 + switch (revision) { 43 + case 1: 44 + if (record->package.count != 6) 45 + return AE_ERROR; 46 + for (i = 2; i < 6; i++) 47 + if (fields[i].type != ACPI_TYPE_INTEGER) 48 + return AE_ERROR; 49 + hpx->t0 = &hpx->type0_data; 50 + hpx->t0->revision = revision; 51 + hpx->t0->cache_line_size = fields[2].integer.value; 52 + hpx->t0->latency_timer = fields[3].integer.value; 53 + hpx->t0->enable_serr = fields[4].integer.value; 54 + hpx->t0->enable_perr = fields[5].integer.value; 55 + break; 56 + default: 57 + printk(KERN_WARNING 58 + "%s: Type 0 Revision %d record not supported\n", 59 + __func__, revision); 60 + return AE_ERROR; 61 + } 62 + return AE_OK; 63 + } 64 + 65 + static acpi_status decode_type1_hpx_record(union acpi_object *record, 66 + struct hotplug_params *hpx) 67 + { 68 + int i; 69 + union acpi_object *fields = record->package.elements; 70 + u32 revision = fields[1].integer.value; 71 + 72 + switch (revision) { 73 + case 1: 74 + if (record->package.count != 5) 75 + return AE_ERROR; 76 + for (i = 2; i < 5; i++) 77 + if (fields[i].type != ACPI_TYPE_INTEGER) 78 + 
return AE_ERROR; 79 + hpx->t1 = &hpx->type1_data; 80 + hpx->t1->revision = revision; 81 + hpx->t1->max_mem_read = fields[2].integer.value; 82 + hpx->t1->avg_max_split = fields[3].integer.value; 83 + hpx->t1->tot_max_split = fields[4].integer.value; 84 + break; 85 + default: 86 + printk(KERN_WARNING 87 + "%s: Type 1 Revision %d record not supported\n", 88 + __func__, revision); 89 + return AE_ERROR; 90 + } 91 + return AE_OK; 92 + } 93 + 94 + static acpi_status decode_type2_hpx_record(union acpi_object *record, 95 + struct hotplug_params *hpx) 96 + { 97 + int i; 98 + union acpi_object *fields = record->package.elements; 99 + u32 revision = fields[1].integer.value; 100 + 101 + switch (revision) { 102 + case 1: 103 + if (record->package.count != 18) 104 + return AE_ERROR; 105 + for (i = 2; i < 18; i++) 106 + if (fields[i].type != ACPI_TYPE_INTEGER) 107 + return AE_ERROR; 108 + hpx->t2 = &hpx->type2_data; 109 + hpx->t2->revision = revision; 110 + hpx->t2->unc_err_mask_and = fields[2].integer.value; 111 + hpx->t2->unc_err_mask_or = fields[3].integer.value; 112 + hpx->t2->unc_err_sever_and = fields[4].integer.value; 113 + hpx->t2->unc_err_sever_or = fields[5].integer.value; 114 + hpx->t2->cor_err_mask_and = fields[6].integer.value; 115 + hpx->t2->cor_err_mask_or = fields[7].integer.value; 116 + hpx->t2->adv_err_cap_and = fields[8].integer.value; 117 + hpx->t2->adv_err_cap_or = fields[9].integer.value; 118 + hpx->t2->pci_exp_devctl_and = fields[10].integer.value; 119 + hpx->t2->pci_exp_devctl_or = fields[11].integer.value; 120 + hpx->t2->pci_exp_lnkctl_and = fields[12].integer.value; 121 + hpx->t2->pci_exp_lnkctl_or = fields[13].integer.value; 122 + hpx->t2->sec_unc_err_sever_and = fields[14].integer.value; 123 + hpx->t2->sec_unc_err_sever_or = fields[15].integer.value; 124 + hpx->t2->sec_unc_err_mask_and = fields[16].integer.value; 125 + hpx->t2->sec_unc_err_mask_or = fields[17].integer.value; 126 + break; 127 + default: 128 + printk(KERN_WARNING 129 + "%s: Type 2 Revision %d record not supported\n", 130 + __func__, revision); 131 + return AE_ERROR; 132 + } 133 + return AE_OK; 134 + } 135 +
136 + static acpi_status acpi_run_hpx(acpi_handle handle, struct hotplug_params *hpx) 137 + { 138 + acpi_status status; 139 + struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL}; 140 + union acpi_object *package, *record, *fields; 141 + u32 type; 142 + int i; 143 + 144 + /* Clear the return buffer with zeros */ 145 + memset(hpx, 0, sizeof(struct hotplug_params)); 146 + 147 + status = acpi_evaluate_object(handle, "_HPX", NULL, &buffer); 148 + if (ACPI_FAILURE(status)) 149 + return status; 150 + 151 + package = (union acpi_object *)buffer.pointer; 152 + if (package->type != ACPI_TYPE_PACKAGE) { 153 + status = AE_ERROR; 154 + goto exit; 155 + } 156 + 157 + for (i = 0; i < package->package.count; i++) { 158 + record = &package->package.elements[i]; 159 + if (record->type != ACPI_TYPE_PACKAGE) { 160 + status = AE_ERROR; 161 + goto exit; 162 + } 163 + 164 + fields = record->package.elements; 165 + if (fields[0].type != ACPI_TYPE_INTEGER || 166 + fields[1].type != ACPI_TYPE_INTEGER) { 167 + status = AE_ERROR; 168 + goto exit; 169 + } 170 + 171 + type = fields[0].integer.value; 172 + switch (type) { 173 + case 0: 174 + status = decode_type0_hpx_record(record, hpx); 175 + if (ACPI_FAILURE(status)) 176 + goto exit; 177 + break; 178 + case 1: 179 + status = decode_type1_hpx_record(record, hpx); 180 + if (ACPI_FAILURE(status)) 181 + goto exit; 182 + break; 183 + case 2: 184 + status = decode_type2_hpx_record(record, hpx); 185 + if (ACPI_FAILURE(status)) 186 + goto exit; 187 + break; 188 + default: 189 + printk(KERN_ERR "%s: Type %d record not supported\n", 190 + __func__, type); 191 + status = AE_ERROR; 192 + goto exit; 193 + } 194 + } 195 + exit: 196 + kfree(buffer.pointer); 197 + return status; 198 + } 199 +
200 + static acpi_status acpi_run_hpp(acpi_handle handle, struct hotplug_params *hpp) 201 + { 202 + acpi_status status; 203 + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 204 + union acpi_object *package, *fields; 205 + int i; 206 + 207 + memset(hpp, 0, sizeof(struct hotplug_params)); 208 + 209 + status = acpi_evaluate_object(handle, "_HPP", NULL, &buffer); 210 + if (ACPI_FAILURE(status)) 211 + return status; 212 + 213 + package = (union acpi_object *) buffer.pointer; 214 + if (package->type != ACPI_TYPE_PACKAGE || 215 + package->package.count != 4) { 216 + status = AE_ERROR; 217 + goto exit; 218 + } 219 + 220 + fields = package->package.elements; 221 + for (i = 0; i < 4; i++) { 222 + if (fields[i].type != ACPI_TYPE_INTEGER) { 223 + status = AE_ERROR; 224 + goto exit; 225 + } 226 + } 227 + 228 + hpp->t0 = &hpp->type0_data; 229 + hpp->t0->revision = 1; 230 + hpp->t0->cache_line_size = fields[0].integer.value; 231 + hpp->t0->latency_timer = fields[1].integer.value; 232 + hpp->t0->enable_serr = fields[2].integer.value; 233 + hpp->t0->enable_perr = fields[3].integer.value; 234 + 235 + exit: 236 + kfree(buffer.pointer); 237 + return status; 238 + } 239 + 240 + /* pci_get_hp_params 241 + * 242 + * @dev - the pci_dev for which we want parameters 243 + * @hpp - allocated by the caller 244 + */ 245 + int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp) 246 + { 247 + acpi_status status; 248 + acpi_handle handle, phandle; 249 + struct pci_bus *pbus; 250 + 251 + handle = NULL; 252 + for (pbus = dev->bus; pbus; pbus = pbus->parent) { 253 + handle = acpi_pci_get_bridge_handle(pbus); 254 + if (handle) 255 + break; 256 + } 257 + 258 + /* 259 + * _HPP settings apply to all child buses, until another _HPP is 260 + * encountered. If we don't find an _HPP for the input pci dev, 261 + * look for it in the parent device scope since that would apply to 262 + * this pci dev.
263 + */ 264 + while (handle) { 265 + status = acpi_run_hpx(handle, hpp); 266 + if (ACPI_SUCCESS(status)) 267 + return 0; 268 + status = acpi_run_hpp(handle, hpp); 269 + if (ACPI_SUCCESS(status)) 270 + return 0; 271 + if (acpi_is_root_bridge(handle)) 272 + break; 273 + status = acpi_get_parent(handle, &phandle); 274 + if (ACPI_FAILURE(status)) 275 + break; 276 + handle = phandle; 277 + } 278 + return -ENODEV; 279 + } 280 + EXPORT_SYMBOL_GPL(pci_get_hp_params); 19 281 20 282 /** 21 283 * pci_acpi_wake_bus - Root bus wakeup notification fork function. ··· 344 82 struct pci_dev *pci_dev) 345 83 { 346 84 return acpi_add_pm_notifier(dev, &pci_dev->dev, pci_acpi_wake_dev); 347 - } 348 - 349 - phys_addr_t acpi_pci_root_get_mcfg_addr(acpi_handle handle) 350 - { 351 - acpi_status status = AE_NOT_EXIST; 352 - unsigned long long mcfg_addr; 353 - 354 - if (handle) 355 - status = acpi_evaluate_integer(handle, METHOD_NAME__CBA, 356 - NULL, &mcfg_addr); 357 - if (ACPI_FAILURE(status)) 358 - return 0; 359 - 360 - return (phys_addr_t)mcfg_addr; 361 85 } 362 86 363 87 /*
+1 -4
drivers/pci/pci-driver.c
··· 55 55 unsigned long driver_data) 56 56 { 57 57 struct pci_dynid *dynid; 58 - int retval; 59 58 60 59 dynid = kzalloc(sizeof(*dynid), GFP_KERNEL); 61 60 if (!dynid) ··· 72 73 list_add_tail(&dynid->node, &drv->dynids.list); 73 74 spin_unlock(&drv->dynids.lock); 74 75 75 - retval = driver_attach(&drv->driver); 76 - 77 - return retval; 76 + return driver_attach(&drv->driver); 78 77 } 79 78 EXPORT_SYMBOL_GPL(pci_add_dynid); 80 79
+20 -21
drivers/pci/pci-sysfs.c
··· 177 177 { 178 178 struct pci_dev *pci_dev = to_pci_dev(dev); 179 179 180 - return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02x\n", 180 + return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02X\n", 181 181 pci_dev->vendor, pci_dev->device, 182 182 pci_dev->subsystem_vendor, pci_dev->subsystem_device, 183 183 (u8)(pci_dev->class >> 16), (u8)(pci_dev->class >> 8), ··· 250 250 char *buf) 251 251 { 252 252 struct pci_dev *pdev = to_pci_dev(dev); 253 + struct pci_bus *subordinate = pdev->subordinate; 253 254 254 - if (!pdev->subordinate) 255 - return 0; 256 - 257 - return sprintf(buf, "%u\n", 258 - !(pdev->subordinate->bus_flags & PCI_BUS_FLAGS_NO_MSI)); 255 + return sprintf(buf, "%u\n", subordinate ? 256 + !(subordinate->bus_flags & PCI_BUS_FLAGS_NO_MSI) 257 + : !pdev->no_msi); 259 258 } 260 259 261 260 static ssize_t msi_bus_store(struct device *dev, struct device_attribute *attr, 262 261 const char *buf, size_t count) 263 262 { 264 263 struct pci_dev *pdev = to_pci_dev(dev); 264 + struct pci_bus *subordinate = pdev->subordinate; 265 265 unsigned long val; 266 266 267 267 if (kstrtoul(buf, 0, &val) < 0) 268 268 return -EINVAL; 269 269 270 - /* 271 - * Bad things may happen if the no_msi flag is changed 272 - * while drivers are loaded. 273 - */ 274 270 if (!capable(CAP_SYS_ADMIN)) 275 271 return -EPERM; 276 272 277 273 /* 278 - * Maybe devices without subordinate buses shouldn't have this 279 - * attribute in the first place? 274 + * "no_msi" and "bus_flags" only affect what happens when a driver 275 + * requests MSI or MSI-X. They don't affect any drivers that have 276 + * already requested MSI or MSI-X. 280 277 */ 281 - if (!pdev->subordinate) 278 + if (!subordinate) { 279 + pdev->no_msi = !val; 280 + dev_info(&pdev->dev, "MSI/MSI-X %s for future drivers\n", 281 + val ? "allowed" : "disallowed"); 282 282 return count; 283 - 284 - /* Is the flag going to change, or keep the value it already had? 
*/ 285 - if (!(pdev->subordinate->bus_flags & PCI_BUS_FLAGS_NO_MSI) ^ 286 - !!val) { 287 - pdev->subordinate->bus_flags ^= PCI_BUS_FLAGS_NO_MSI; 288 - 289 - dev_warn(&pdev->dev, "forced subordinate bus to%s support MSI, bad things could happen\n", 290 - val ? "" : " not"); 291 283 } 292 284 285 + if (val) 286 + subordinate->bus_flags &= ~PCI_BUS_FLAGS_NO_MSI; 287 + else 288 + subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI; 289 + 290 + dev_info(&subordinate->dev, "MSI/MSI-X %s for future drivers of devices on this bus\n", 291 + val ? "allowed" : "disallowed"); 293 292 return count; 294 293 } 295 294 static DEVICE_ATTR_RW(msi_bus);
+50 -7
drivers/pci/pci.c
··· 1003 1003 for (i = 0; i < 16; i++) 1004 1004 pci_read_config_dword(dev, i * 4, &dev->saved_config_space[i]); 1005 1005 dev->state_saved = true; 1006 - if ((i = pci_save_pcie_state(dev)) != 0) 1006 + 1007 + i = pci_save_pcie_state(dev); 1008 + if (i != 0) 1007 1009 return i; 1008 - if ((i = pci_save_pcix_state(dev)) != 0) 1010 + 1011 + i = pci_save_pcix_state(dev); 1012 + if (i != 0) 1009 1013 return i; 1010 - if ((i = pci_save_vc_state(dev)) != 0) 1014 + 1015 + i = pci_save_vc_state(dev); 1016 + if (i != 0) 1011 1017 return i; 1018 + 1012 1019 return 0; 1013 1020 } 1014 1021 EXPORT_SYMBOL(pci_save_state); ··· 1914 1907 if (target_state == PCI_POWER_ERROR) 1915 1908 return -EIO; 1916 1909 1917 - /* D3cold during system suspend/hibernate is not supported */ 1918 - if (target_state > PCI_D3hot) 1919 - target_state = PCI_D3hot; 1920 - 1921 1910 pci_enable_wake(dev, target_state, device_may_wakeup(&dev->dev)); 1922 1911 1923 1912 error = pci_set_power_state(dev, target_state); ··· 2706 2703 ((1 << 6) - 1), res_name); 2707 2704 } 2708 2705 EXPORT_SYMBOL(pci_request_regions_exclusive); 2706 + 2707 + /** 2708 + * pci_remap_iospace - Remap the memory mapped I/O space 2709 + * @res: Resource describing the I/O space 2710 + * @phys_addr: physical address of range to be mapped 2711 + * 2712 + * Remap the memory mapped I/O space described by the @res 2713 + * and the CPU physical address @phys_addr into virtual address space. 2714 + * Only architectures that have memory mapped IO functions defined 2715 + * (and the PCI_IOBASE value defined) should call this function. 
2716 + */ 2717 + int __weak pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr) 2718 + { 2719 + #if defined(PCI_IOBASE) && defined(CONFIG_MMU) 2720 + unsigned long vaddr = (unsigned long)PCI_IOBASE + res->start; 2721 + 2722 + if (!(res->flags & IORESOURCE_IO)) 2723 + return -EINVAL; 2724 + 2725 + if (res->end > IO_SPACE_LIMIT) 2726 + return -EINVAL; 2727 + 2728 + return ioremap_page_range(vaddr, vaddr + resource_size(res), phys_addr, 2729 + pgprot_device(PAGE_KERNEL)); 2730 + #else 2731 + /* this architecture does not have memory mapped I/O space, 2732 + so this function should never be called */ 2733 + WARN_ONCE(1, "This architecture does not support memory mapped I/O\n"); 2734 + return -ENODEV; 2735 + #endif 2736 + } 2709 2737 2710 2738 static void __pci_set_master(struct pci_dev *dev, bool enable) 2711 2739 { ··· 4439 4405 pci_domains_supported = 0; 4440 4406 #endif 4441 4407 } 4408 + 4409 + #ifdef CONFIG_PCI_DOMAINS 4410 + static atomic_t __domain_nr = ATOMIC_INIT(-1); 4411 + 4412 + int pci_get_new_domain_nr(void) 4413 + { 4414 + return atomic_inc_return(&__domain_nr); 4415 + } 4416 + #endif 4442 4417 4443 4418 /** 4444 4419 * pci_ext_cfg_avail - can we access extended PCI config space?
+9 -2
drivers/pci/pcie/aer/aerdrv_errprint.c
··· 89 89 NULL, 90 90 "Replay Timer Timeout", /* Bit Position 12 */ 91 91 "Advisory Non-Fatal", /* Bit Position 13 */ 92 + "Corrected Internal Error", /* Bit Position 14 */ 93 + "Header Log Overflow", /* Bit Position 15 */ 92 94 }; 93 95 94 96 static const char *aer_uncorrectable_error_string[] = { 95 - NULL, 97 + "Undefined", /* Bit Position 0 */ 96 98 NULL, 97 99 NULL, 98 100 NULL, 99 101 "Data Link Protocol", /* Bit Position 4 */ 100 - NULL, 102 + "Surprise Down Error", /* Bit Position 5 */ 101 103 NULL, 102 104 NULL, 103 105 NULL, ··· 115 113 "Malformed TLP", /* Bit Position 18 */ 116 114 "ECRC", /* Bit Position 19 */ 117 115 "Unsupported Request", /* Bit Position 20 */ 116 + "ACS Violation", /* Bit Position 21 */ 117 + "Uncorrectable Internal Error", /* Bit Position 22 */ 118 + "MC Blocked TLP", /* Bit Position 23 */ 119 + "AtomicOp Egress Blocked", /* Bit Position 24 */ 120 + "TLP Prefix Blocked Error", /* Bit Position 25 */ 118 121 }; 119 122 120 123 static const char *aer_agent_string[] = {
-74
drivers/pci/pcie/portdrv_pci.c
··· 93 93 return 0; 94 94 } 95 95 96 - #ifdef CONFIG_PM_RUNTIME 97 - struct d3cold_info { 98 - bool no_d3cold; 99 - unsigned int d3cold_delay; 100 - }; 101 - 102 - static int pci_dev_d3cold_info(struct pci_dev *pdev, void *data) 103 - { 104 - struct d3cold_info *info = data; 105 - 106 - info->d3cold_delay = max_t(unsigned int, pdev->d3cold_delay, 107 - info->d3cold_delay); 108 - if (pdev->no_d3cold) 109 - info->no_d3cold = true; 110 - return 0; 111 - } 112 - 113 - static int pcie_port_runtime_suspend(struct device *dev) 114 - { 115 - struct pci_dev *pdev = to_pci_dev(dev); 116 - struct d3cold_info d3cold_info = { 117 - .no_d3cold = false, 118 - .d3cold_delay = PCI_PM_D3_WAIT, 119 - }; 120 - 121 - /* 122 - * If any subordinate device disable D3cold, we should not put 123 - * the port into D3cold. The D3cold delay of port should be 124 - * the max of that of all subordinate devices. 125 - */ 126 - pci_walk_bus(pdev->subordinate, pci_dev_d3cold_info, &d3cold_info); 127 - pdev->no_d3cold = d3cold_info.no_d3cold; 128 - pdev->d3cold_delay = d3cold_info.d3cold_delay; 129 - return 0; 130 - } 131 - 132 - static int pcie_port_runtime_resume(struct device *dev) 133 - { 134 - return 0; 135 - } 136 - 137 - static int pci_dev_pme_poll(struct pci_dev *pdev, void *data) 138 - { 139 - bool *pme_poll = data; 140 - 141 - if (pdev->pme_poll) 142 - *pme_poll = true; 143 - return 0; 144 - } 145 - 146 - static int pcie_port_runtime_idle(struct device *dev) 147 - { 148 - struct pci_dev *pdev = to_pci_dev(dev); 149 - bool pme_poll = false; 150 - 151 - /* 152 - * If any subordinate device needs pme poll, we should keep 153 - * the port in D0, because we need port in D0 to poll it. 
154 - */ 155 - pci_walk_bus(pdev->subordinate, pci_dev_pme_poll, &pme_poll); 156 - /* Delay for a short while to prevent too frequent suspend/resume */ 157 - if (!pme_poll) 158 - pm_schedule_suspend(dev, 10); 159 - return -EBUSY; 160 - } 161 - #else 162 - #define pcie_port_runtime_suspend NULL 163 - #define pcie_port_runtime_resume NULL 164 - #define pcie_port_runtime_idle NULL 165 - #endif 166 - 167 96 static const struct dev_pm_ops pcie_portdrv_pm_ops = { 168 97 .suspend = pcie_port_device_suspend, 169 98 .resume = pcie_port_device_resume, ··· 101 172 .poweroff = pcie_port_device_suspend, 102 173 .restore = pcie_port_device_resume, 103 174 .resume_noirq = pcie_port_resume_noirq, 104 - .runtime_suspend = pcie_port_runtime_suspend, 105 - .runtime_resume = pcie_port_runtime_resume, 106 - .runtime_idle = pcie_port_runtime_idle, 107 175 }; 108 176 109 177 #define PCIE_PORTDRV_PM_OPS (&pcie_portdrv_pm_ops)
+162 -5
drivers/pci/probe.c
··· 6 6 #include <linux/delay.h> 7 7 #include <linux/init.h> 8 8 #include <linux/pci.h> 9 + #include <linux/pci_hotplug.h> 9 10 #include <linux/slab.h> 10 11 #include <linux/module.h> 11 12 #include <linux/cpumask.h> ··· 486 485 } 487 486 } 488 487 489 - static struct pci_bus *pci_alloc_bus(void) 488 + static struct pci_bus *pci_alloc_bus(struct pci_bus *parent) 490 489 { 491 490 struct pci_bus *b; 492 491 ··· 501 500 INIT_LIST_HEAD(&b->resources); 502 501 b->max_bus_speed = PCI_SPEED_UNKNOWN; 503 502 b->cur_bus_speed = PCI_SPEED_UNKNOWN; 503 + #ifdef CONFIG_PCI_DOMAINS_GENERIC 504 + if (parent) 505 + b->domain_nr = parent->domain_nr; 506 + #endif 504 507 return b; 505 508 } 506 509 ··· 676 671 /* 677 672 * Allocate a new bus, and inherit stuff from the parent.. 678 673 */ 679 - child = pci_alloc_bus(); 674 + child = pci_alloc_bus(parent); 680 675 if (!child) 681 676 return NULL; 682 677 ··· 745 740 } 746 741 EXPORT_SYMBOL(pci_add_new_bus); 747 742 743 + static void pci_enable_crs(struct pci_dev *pdev) 744 + { 745 + u16 root_cap = 0; 746 + 747 + /* Enable CRS Software Visibility if supported */ 748 + pcie_capability_read_word(pdev, PCI_EXP_RTCAP, &root_cap); 749 + if (root_cap & PCI_EXP_RTCAP_CRSVIS) 750 + pcie_capability_set_word(pdev, PCI_EXP_RTCTL, 751 + PCI_EXP_RTCTL_CRSSVE); 752 + } 753 + 748 754 /* 749 755 * If it's a bridge, configure it and scan the bus behind it. 
750 756 * For CardBus bridges, we don't scan behind as the devices will ··· 802 786 pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &bctl); 803 787 pci_write_config_word(dev, PCI_BRIDGE_CONTROL, 804 788 bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT); 789 + 790 + pci_enable_crs(dev); 805 791 806 792 if ((secondary || subordinate) && !pcibios_assign_all_busses() && 807 793 !is_cardbus && !broken) { ··· 1244 1226 return 0; 1245 1227 } 1246 1228 1229 + static struct hpp_type0 pci_default_type0 = { 1230 + .revision = 1, 1231 + .cache_line_size = 8, 1232 + .latency_timer = 0x40, 1233 + .enable_serr = 0, 1234 + .enable_perr = 0, 1235 + }; 1236 + 1237 + static void program_hpp_type0(struct pci_dev *dev, struct hpp_type0 *hpp) 1238 + { 1239 + u16 pci_cmd, pci_bctl; 1240 + 1241 + if (!hpp) 1242 + hpp = &pci_default_type0; 1243 + 1244 + if (hpp->revision > 1) { 1245 + dev_warn(&dev->dev, 1246 + "PCI settings rev %d not supported; using defaults\n", 1247 + hpp->revision); 1248 + hpp = &pci_default_type0; 1249 + } 1250 + 1251 + pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE, hpp->cache_line_size); 1252 + pci_write_config_byte(dev, PCI_LATENCY_TIMER, hpp->latency_timer); 1253 + pci_read_config_word(dev, PCI_COMMAND, &pci_cmd); 1254 + if (hpp->enable_serr) 1255 + pci_cmd |= PCI_COMMAND_SERR; 1256 + if (hpp->enable_perr) 1257 + pci_cmd |= PCI_COMMAND_PARITY; 1258 + pci_write_config_word(dev, PCI_COMMAND, pci_cmd); 1259 + 1260 + /* Program bridge control value */ 1261 + if ((dev->class >> 8) == PCI_CLASS_BRIDGE_PCI) { 1262 + pci_write_config_byte(dev, PCI_SEC_LATENCY_TIMER, 1263 + hpp->latency_timer); 1264 + pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &pci_bctl); 1265 + if (hpp->enable_serr) 1266 + pci_bctl |= PCI_BRIDGE_CTL_SERR; 1267 + if (hpp->enable_perr) 1268 + pci_bctl |= PCI_BRIDGE_CTL_PARITY; 1269 + pci_write_config_word(dev, PCI_BRIDGE_CONTROL, pci_bctl); 1270 + } 1271 + } 1272 + 1273 + static void program_hpp_type1(struct pci_dev *dev, struct hpp_type1 *hpp) 1274 + { 1275 + if (hpp) 1276 + dev_warn(&dev->dev, "PCI-X settings not supported\n"); 1277 + } 1278 +
1279 + static void program_hpp_type2(struct pci_dev *dev, struct hpp_type2 *hpp) 1280 + { 1281 + int pos; 1282 + u32 reg32; 1283 + 1284 + if (!hpp) 1285 + return; 1286 + 1287 + if (hpp->revision > 1) { 1288 + dev_warn(&dev->dev, "PCIe settings rev %d not supported\n", 1289 + hpp->revision); 1290 + return; 1291 + } 1292 + 1293 + /* 1294 + * Don't allow _HPX to change MPS or MRRS settings. We manage 1295 + * those to make sure they're consistent with the rest of the 1296 + * platform. 1297 + */ 1298 + hpp->pci_exp_devctl_and |= PCI_EXP_DEVCTL_PAYLOAD | 1299 + PCI_EXP_DEVCTL_READRQ; 1300 + hpp->pci_exp_devctl_or &= ~(PCI_EXP_DEVCTL_PAYLOAD | 1301 + PCI_EXP_DEVCTL_READRQ); 1302 + 1303 + /* Initialize Device Control Register */ 1304 + pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL, 1305 + ~hpp->pci_exp_devctl_and, hpp->pci_exp_devctl_or); 1306 + 1307 + /* Initialize Link Control Register */ 1308 + if (dev->subordinate) 1309 + pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL, 1310 + ~hpp->pci_exp_lnkctl_and, hpp->pci_exp_lnkctl_or); 1311 + 1312 + /* Find Advanced Error Reporting Enhanced Capability */ 1313 + pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); 1314 + if (!pos) 1315 + return; 1316 + 1317 + /* Initialize Uncorrectable Error Mask Register */ 1318 + pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, &reg32); 1319 + reg32 = (reg32 & hpp->unc_err_mask_and) | hpp->unc_err_mask_or; 1320 + pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, reg32); 1321 + 1322 + /* Initialize Uncorrectable Error Severity Register */ 1323 + pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &reg32); 1324 + reg32 = (reg32 & hpp->unc_err_sever_and) | hpp->unc_err_sever_or; 1325 + pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, reg32); 1326 + 1327 + /* Initialize Correctable Error Mask Register */ 1328 + pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, &reg32); 1329 +
reg32 = (reg32 & hpp->cor_err_mask_and) | hpp->cor_err_mask_or; 1330 + pci_write_config_dword(dev, pos + PCI_ERR_COR_MASK, reg32); 1331 + 1332 + /* Initialize Advanced Error Capabilities and Control Register */ 1333 + pci_read_config_dword(dev, pos + PCI_ERR_CAP, &reg32); 1334 + reg32 = (reg32 & hpp->adv_err_cap_and) | hpp->adv_err_cap_or; 1335 + pci_write_config_dword(dev, pos + PCI_ERR_CAP, reg32); 1336 + 1337 + /* 1338 + * FIXME: The following two registers are not supported yet. 1339 + * 1340 + * o Secondary Uncorrectable Error Severity Register 1341 + * o Secondary Uncorrectable Error Mask Register 1342 + */ 1343 + } 1344 + 1345 + static void pci_configure_device(struct pci_dev *dev) 1346 + { 1347 + struct hotplug_params hpp; 1348 + int ret; 1349 + 1350 + memset(&hpp, 0, sizeof(hpp)); 1351 + ret = pci_get_hp_params(dev, &hpp); 1352 + if (ret) 1353 + return; 1354 + 1355 + program_hpp_type2(dev, hpp.t2); 1356 + program_hpp_type1(dev, hpp.t1); 1357 + program_hpp_type0(dev, hpp.t0); 1358 + } 1359 + 1247 1360 static void pci_release_capabilities(struct pci_dev *dev) 1248 1361 { 1249 1362 pci_vpd_release(dev); ··· 1431 1282 *l == 0x0000ffff || *l == 0xffff0000) 1432 1283 return false; 1433 1284 1434 - /* Configuration request Retry Status */ 1435 - while (*l == 0xffff0001) { 1285 + /* 1286 + * Configuration Request Retry Status. Some root ports return the 1287 + * actual device ID instead of the synthetic ID (0xFFFF) required 1288 + * by the PCIe spec. Ignore the device ID and only check for 1289 + * (vendor id == 1). 
1290 + */ 1291 + while ((*l & 0xffff) == 0x0001) { 1436 1292 if (!crs_timeout) 1437 1293 return false; 1438 1294 ··· 1516 1362 void pci_device_add(struct pci_dev *dev, struct pci_bus *bus) 1517 1363 { 1518 1364 int ret; 1365 + 1366 + pci_configure_device(dev); 1519 1367 1520 1368 device_initialize(&dev->dev); 1521 1369 dev->dev.release = pci_release_dev; ··· 1907 1751 char bus_addr[64]; 1908 1752 char *fmt; 1909 1753 1910 - b = pci_alloc_bus(); 1754 + b = pci_alloc_bus(NULL); 1911 1755 if (!b) 1912 1756 return NULL; 1913 1757 1914 1758 b->sysdata = sysdata; 1915 1759 b->ops = ops; 1916 1760 b->number = b->busn_res.start = bus; 1761 + pci_bus_assign_domain_nr(b, parent); 1917 1762 b2 = pci_find_bus(pci_domain_nr(b), bus); 1918 1763 if (b2) { 1919 1764 /* If we already got to this bus through a different bridge, ignore it */
+68 -51
drivers/pci/quirks.c
···
 #include <linux/ioport.h>
 #include <linux/sched.h>
 #include <linux/ktime.h>
+#include <linux/mm.h>
 #include <asm/dma.h>	/* isa_dma_bridge_buggy */
 #include "pci.h"
···
 	dev->cfg_size = 0xA0;
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE, quirk_citrine);
+
+/*  On IBM Crocodile ipr SAS adapters, expand BAR to system page size */
+static void quirk_extend_bar_to_page(struct pci_dev *dev)
+{
+	int i;
+
+	for (i = 0; i < PCI_STD_RESOURCE_END; i++) {
+		struct resource *r = &dev->resource[i];
+
+		if (r->flags & IORESOURCE_MEM && resource_size(r) < PAGE_SIZE) {
+			r->end = PAGE_SIZE - 1;
+			r->start = 0;
+			r->flags |= IORESOURCE_UNSET;
+			dev_info(&dev->dev, "expanded BAR %d to page size: %pR\n",
+				 i, r);
+		}
+	}
+}
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, 0x034a, quirk_extend_bar_to_page);

 /*
  * S3 868 and 968 chips report region size equal to 32M, but they decode 64M.
···
  */
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_REALTEK, 0x8169,
 			 quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID,
+			 quirk_broken_intx_masking);

 #ifdef CONFIG_ACPI
 /*
···
 /* Intel 82801, https://bugzilla.kernel.org/show_bug.cgi?id=44881#c49 */
 DECLARE_PCI_FIXUP_HEADER(0x8086, 0x244e, quirk_use_pcie_bridge_dma_alias);

-static struct pci_dev *pci_func_0_dma_source(struct pci_dev *dev)
-{
-	if (!PCI_FUNC(dev->devfn))
-		return pci_dev_get(dev);
-
-	return pci_get_slot(dev->bus, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));
-}
-
-static const struct pci_dev_dma_source {
-	u16 vendor;
-	u16 device;
-	struct pci_dev *(*dma_source)(struct pci_dev *dev);
-} pci_dev_dma_source[] = {
-	/*
-	 * https://bugzilla.redhat.com/show_bug.cgi?id=605888
-	 *
-	 * Some Ricoh devices use the function 0 source ID for DMA on
-	 * other functions of a multifunction device.  The DMA devices
-	 * is therefore function 0, which will have implications of the
-	 * iommu grouping of these devices.
-	 */
-	{ PCI_VENDOR_ID_RICOH, 0xe822, pci_func_0_dma_source },
-	{ PCI_VENDOR_ID_RICOH, 0xe230, pci_func_0_dma_source },
-	{ PCI_VENDOR_ID_RICOH, 0xe832, pci_func_0_dma_source },
-	{ PCI_VENDOR_ID_RICOH, 0xe476, pci_func_0_dma_source },
-	{ 0 }
-};
-
-/*
- * IOMMUs with isolation capabilities need to be programmed with the
- * correct source ID of a device.  In most cases, the source ID matches
- * the device doing the DMA, but sometimes hardware is broken and will
- * tag the DMA as being sourced from a different device.  This function
- * allows that translation.  Note that the reference count of the
- * returned device is incremented on all paths.
- */
-struct pci_dev *pci_get_dma_source(struct pci_dev *dev)
-{
-	const struct pci_dev_dma_source *i;
-
-	for (i = pci_dev_dma_source; i->dma_source; i++) {
-		if ((i->vendor == dev->vendor ||
-		     i->vendor == (u16)PCI_ANY_ID) &&
-		    (i->device == dev->device ||
-		     i->device == (u16)PCI_ANY_ID))
-			return i->dma_source(dev);
-	}
-
-	return pci_dev_get(dev);
-}
-
 /*
  * AMD has indicated that the devices below do not support peer-to-peer
  * in any system where they are found in the southbridge with an AMD
···
  * 1002:439d SB7x0/SB8x0/SB9x0 LPC host controller
  * 1002:4384 SBx00 PCI to PCI Bridge
  * 1002:4399 SB7x0/SB8x0/SB9x0 USB OHCI2 Controller
+ *
+ * https://bugzilla.kernel.org/show_bug.cgi?id=81841#c15
+ *
+ * 1022:780f [AMD] FCH PCI Bridge
+ * 1022:7809 [AMD] FCH USB OHCI Controller
  */
 static int pci_quirk_amd_sb_acs(struct pci_dev *dev, u16 acs_flags)
 {
···
 	return acs_flags & ~flags ? 0 : 1;
 }

+static int pci_quirk_mf_endpoint_acs(struct pci_dev *dev, u16 acs_flags)
+{
+	/*
+	 * SV, TB, and UF are not relevant to multifunction endpoints.
+	 *
+	 * Multifunction devices are only required to implement RR, CR, and DT
+	 * in their ACS capability if they support peer-to-peer transactions.
+	 * Devices matching this quirk have been verified by the vendor to not
+	 * perform peer-to-peer with other functions, allowing us to mask out
+	 * these bits as if they were unimplemented in the ACS capability.
+	 */
+	acs_flags &= ~(PCI_ACS_SV | PCI_ACS_TB | PCI_ACS_RR |
+		       PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_DT);
+
+	return acs_flags ? 0 : 1;
+}
+
 static const struct pci_dev_acs_enabled {
 	u16 vendor;
 	u16 device;
···
 	{ PCI_VENDOR_ID_ATI, 0x439d, pci_quirk_amd_sb_acs },
 	{ PCI_VENDOR_ID_ATI, 0x4384, pci_quirk_amd_sb_acs },
 	{ PCI_VENDOR_ID_ATI, 0x4399, pci_quirk_amd_sb_acs },
+	{ PCI_VENDOR_ID_AMD, 0x780f, pci_quirk_amd_sb_acs },
+	{ PCI_VENDOR_ID_AMD, 0x7809, pci_quirk_amd_sb_acs },
+	{ PCI_VENDOR_ID_SOLARFLARE, 0x0903, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_SOLARFLARE, 0x0923, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x10C6, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x10DB, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x10DD, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x10E1, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x10F1, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x10F7, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x10F8, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x10F9, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x10FA, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x10FB, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x10FC, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x1507, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x1514, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x151C, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x1529, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x152A, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x154D, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x154F, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x1551, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_INTEL, 0x1558, pci_quirk_mf_endpoint_acs },
 	{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_pch_acs },
 	{ 0 }
 };
-34
drivers/pci/search.c
···
 	return ret;
 }

-/*
- * find the upstream PCIe-to-PCI bridge of a PCI device
- * if the device is PCIE, return NULL
- * if the device isn't connected to a PCIe bridge (that is its parent is a
- * legacy PCI bridge and the bridge is directly connected to bus 0), return its
- * parent
- */
-struct pci_dev *pci_find_upstream_pcie_bridge(struct pci_dev *pdev)
-{
-	struct pci_dev *tmp = NULL;
-
-	if (pci_is_pcie(pdev))
-		return NULL;
-	while (1) {
-		if (pci_is_root_bus(pdev->bus))
-			break;
-		pdev = pdev->bus->self;
-		/* a p2p bridge */
-		if (!pci_is_pcie(pdev)) {
-			tmp = pdev;
-			continue;
-		}
-		/* PCI device should connect to a PCIe bridge */
-		if (pci_pcie_type(pdev) != PCI_EXP_TYPE_PCI_BRIDGE) {
-			/* Busted hardware? */
-			WARN_ON_ONCE(1);
-			return NULL;
-		}
-		return pdev;
-	}
-
-	return tmp;
-}
-
 static struct pci_bus *pci_do_find_bus(struct pci_bus *bus, unsigned char busnr)
 {
 	struct pci_bus *child;
+1 -1
drivers/pci/setup-bus.c
···
 	struct pci_dev_resource *fail_res;
 	int retval;
 	unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM |
-				  IORESOURCE_PREFETCH;
+				  IORESOURCE_PREFETCH | IORESOURCE_MEM_64;

again:
 	__pci_bus_size_bridges(parent, &add_list);
-1
drivers/scsi/vmw_pvscsi.h
···

 #define MASK(n)	((1 << (n)) - 1)	/* make an n-bit mask */

-#define PCI_VENDOR_ID_VMWARE		0x15AD
 #define PCI_DEVICE_ID_VMWARE_PVSCSI	0x07C0

 /*
+1 -1
drivers/vfio/pci/vfio_pci_config.c
···
 	p_setd(perm, 0, ALL_VIRT, NO_WRITE);

 	/* Writable bits mask */
-	mask =	PCI_ERR_UNC_TRAIN |		/* Training */
+	mask =	PCI_ERR_UNC_UND |		/* Undefined */
 		PCI_ERR_UNC_DLP |		/* Data Link Protocol */
 		PCI_ERR_UNC_SURPDN |		/* Surprise Down */
 		PCI_ERR_UNC_POISON_TLP |	/* Poisoned TLP */
+2 -2
drivers/xen/xen-pciback/pci_stub.c
···
 	xen_pcibk_config_free_dyn_fields(dev);
 	xen_pcibk_config_free_dev(dev);

-	dev->dev_flags &= ~PCI_DEV_FLAGS_ASSIGNED;
+	pci_clear_dev_assigned(dev);
 	pci_dev_put(dev);

 	kfree(psdev);
···
 	dev_dbg(&dev->dev, "reset device\n");
 	xen_pcibk_reset_device(dev);

-	dev->dev_flags |= PCI_DEV_FLAGS_ASSIGNED;
+	pci_set_dev_assigned(dev);
 	return 0;

config_release:
+1 -1
include/asm-generic/io.h
···
 #ifndef CONFIG_GENERIC_IOMAP
 static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
 {
-	return (void __iomem *) port;
+	return PCI_IOBASE + (port & IO_SPACE_LIMIT);
 }

 static inline void ioport_unmap(void __iomem *p)
+4
include/asm-generic/pgtable.h
···
 #define pgprot_writecombine pgprot_noncached
 #endif

+#ifndef pgprot_device
+#define pgprot_device pgprot_noncached
+#endif
+
 /*
  * When walking page tables, get the address of the next boundary,
  * or the end address of the range if that comes earlier.  Although no
+2
include/linux/aer.h
···
 #ifndef _AER_H_
 #define _AER_H_

+#include <linux/types.h>
+
 #define AER_NONFATAL		0
 #define AER_FATAL		1
 #define AER_CORRECTABLE		2
+5
include/linux/ioport.h
···

 /* Wrappers for managed devices */
 struct device;
+
+extern int devm_request_resource(struct device *dev, struct resource *root,
+				 struct resource *new);
+extern void devm_release_resource(struct device *dev, struct resource *new);
+
 #define devm_request_region(dev,start,n,name) \
 	__devm_request_region(dev, &ioport_resource, (start), (n), (name))
 #define devm_request_mem_region(dev,start,n,name) \
-6
include/linux/msi.h
···
 		__u8	multi_cap	: 3;	/* log2 num of messages supported */
 		__u8	maskbit		: 1;	/* mask-pending bit supported ? */
 		__u8	is_64		: 1;	/* Address size: 0=32bit 1=64bit */
-		__u8	pos;			/* Location of the msi capability */
 		__u16	entry_nr;		/* specific enabled entry */
 		unsigned default_irq;		/* default pre-assigned irq */
 	} msi_attrib;
···

 	/* Last set MSI message */
 	struct msi_msg msg;
-
-	struct kobject kobj;
 };

 /*
···
 void arch_teardown_msi_irq(unsigned int irq);
 int arch_setup_msi_irqs(struct pci_dev *dev, int nvec, int type);
 void arch_teardown_msi_irqs(struct pci_dev *dev);
-int arch_msi_check_device(struct pci_dev* dev, int nvec, int type);
 void arch_restore_msi_irqs(struct pci_dev *dev);

 void default_teardown_msi_irqs(struct pci_dev *dev);
···
 	int (*setup_irq)(struct msi_chip *chip, struct pci_dev *dev,
 			 struct msi_desc *desc);
 	void (*teardown_irq)(struct msi_chip *chip, unsigned int irq);
-	int (*check_device)(struct msi_chip *chip, struct pci_dev *dev,
-			    int nvec, int type);
 };

 #endif /* LINUX_MSI_H */
+16 -11
include/linux/of_address.h
···
 #define for_each_of_pci_range(parser, range) \
 	for (; of_pci_range_parser_one(parser, range);)

-static inline void of_pci_range_to_resource(struct of_pci_range *range,
-					    struct device_node *np,
-					    struct resource *res)
-{
-	res->flags = range->flags;
-	res->start = range->cpu_addr;
-	res->end = range->cpu_addr + range->size - 1;
-	res->parent = res->child = res->sibling = NULL;
-	res->name = np->full_name;
-}
-
 /* Translate a DMA address from device space to CPU space */
 extern u64 of_translate_dma_address(struct device_node *dev,
 				    const __be32 *in_addr);
···
 extern const __be32 *of_get_address(struct device_node *dev, int index,
 				    u64 *size, unsigned int *flags);

+extern int pci_register_io_range(phys_addr_t addr, resource_size_t size);
 extern unsigned long pci_address_to_pio(phys_addr_t addr);
+extern phys_addr_t pci_pio_to_address(unsigned long pio);

 extern int of_pci_range_parser_init(struct of_pci_range_parser *parser,
 				    struct device_node *node);
···
 			u64 *size, unsigned int *flags)
 {
 	return NULL;
+}
+
+static inline phys_addr_t pci_pio_to_address(unsigned long pio)
+{
+	return 0;
 }

 static inline int of_pci_range_parser_init(struct of_pci_range_parser *parser,
···
 		       u64 *size, unsigned int *flags);
 extern int of_pci_address_to_resource(struct device_node *dev, int bar,
 				      struct resource *r);
+extern int of_pci_range_to_resource(struct of_pci_range *range,
+				    struct device_node *np,
+				    struct resource *res);
 #else /* CONFIG_OF_ADDRESS && CONFIG_PCI */
 static inline int of_pci_address_to_resource(struct device_node *dev, int bar,
 					     struct resource *r)
···
 				 int bar_no, u64 *size, unsigned int *flags)
 {
 	return NULL;
 }
+static inline int of_pci_range_to_resource(struct of_pci_range *range,
+					   struct device_node *np,
+					   struct resource *res)
+{
+	return -ENOSYS;
+}
 #endif /* CONFIG_OF_ADDRESS && CONFIG_PCI */

+13
include/linux/of_pci.h
···
 int of_pci_get_devfn(struct device_node *np);
 int of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin);
 int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
+int of_get_pci_domain_nr(struct device_node *node);
 #else
 static inline int of_irq_parse_pci(const struct pci_dev *pdev, struct of_phandle_args *out_irq)
 {
···
 {
 	return -EINVAL;
 }
+
+static inline int
+of_get_pci_domain_nr(struct device_node *node)
+{
+	return -1;
+}
+#endif
+
+#if defined(CONFIG_OF_ADDRESS)
+int of_pci_get_host_bridge_resources(struct device_node *dev,
+			unsigned char busno, unsigned char bus_max,
+			struct list_head *resources, resource_size_t *io_base);
 #endif

 #if defined(CONFIG_OF) && defined(CONFIG_PCI_MSI)
+42 -18
include/linux/pci.h
···
  * In the interest of not exposing interfaces to user-space unnecessarily,
  * the following kernel-only defines are being added here.
  */
-#define PCI_DEVID(bus, devfn)	((((u16)bus) << 8) | devfn)
+#define PCI_DEVID(bus, devfn)	((((u16)(bus)) << 8) | (devfn))
 /* return bus from PCI devid = ((u16)bus_number) << 8) | devfn */
 #define PCI_BUS_NUM(x) (((x) >> 8) & 0xff)
···
 	unsigned char	primary;	/* number of primary bridge */
 	unsigned char	max_bus_speed;	/* enum pci_bus_speed */
 	unsigned char	cur_bus_speed;	/* enum pci_bus_speed */
+#ifdef CONFIG_PCI_DOMAINS_GENERIC
+	int		domain_nr;
+#endif

 	char		name[48];
···
 			resource_size_t),
 		void *alignf_data);

+
+int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr);
+
 static inline dma_addr_t pci_bus_address(struct pci_dev *pdev, int bar)
 {
 	struct pci_bus_region region;
···
  */
 #ifdef CONFIG_PCI_DOMAINS
 extern int pci_domains_supported;
+int pci_get_new_domain_nr(void);
 #else
 enum { pci_domains_supported = 0 };
 static inline int pci_domain_nr(struct pci_bus *bus) { return 0; }
 static inline int pci_proc_domain(struct pci_bus *bus) { return 0; }
+static inline int pci_get_new_domain_nr(void) { return -ENOSYS; }
 #endif /* CONFIG_PCI_DOMAINS */
+
+/*
+ * Generic implementation for PCI domain support. If your
+ * architecture does not need custom management of PCI
+ * domains then this implementation will be used
+ */
+#ifdef CONFIG_PCI_DOMAINS_GENERIC
+static inline int pci_domain_nr(struct pci_bus *bus)
+{
+	return bus->domain_nr;
+}
+void pci_bus_assign_domain_nr(struct pci_bus *bus, struct device *parent);
+#else
+static inline void pci_bus_assign_domain_nr(struct pci_bus *bus,
+					    struct device *parent)
+{
+}
+#endif

 /* some architectures require additional setup to direct VGA traffic */
 typedef int (*arch_set_vga_state_t)(struct pci_dev *pdev, bool decode,
···

 static inline int pci_domain_nr(struct pci_bus *bus) { return 0; }
 static inline struct pci_dev *pci_dev_get(struct pci_dev *dev) { return NULL; }
+static inline int pci_get_new_domain_nr(void) { return -ENOSYS; }

 #define dev_is_pci(d) (false)
 #define dev_is_pf(d) (false)
···

 #ifdef CONFIG_PCI_QUIRKS
 void pci_fixup_device(enum pci_fixup_pass pass, struct pci_dev *dev);
-struct pci_dev *pci_get_dma_source(struct pci_dev *dev);
 int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags);
 void pci_dev_specific_enable_acs(struct pci_dev *dev);
 #else
 static inline void pci_fixup_device(enum pci_fixup_pass pass,
 				    struct pci_dev *dev) { }
-static inline struct pci_dev *pci_get_dma_source(struct pci_dev *dev)
-{
-	return pci_dev_get(dev);
-}
 static inline int pci_dev_specific_acs_enabled(struct pci_dev *dev,
 					       u16 acs_flags)
 {
···
 			  struct pci_dev *end, u16 acs_flags);

 #define PCI_VPD_LRDT			0x80	/* Large Resource Data Type */
-#define PCI_VPD_LRDT_ID(x)		(x | PCI_VPD_LRDT)
+#define PCI_VPD_LRDT_ID(x)		((x) | PCI_VPD_LRDT)

 /* Large Resource Data Type Tag Item Names */
 #define PCI_VPD_LTIN_ID_STRING		0x02	/* Identifier String */
···
 		      int (*fn)(struct pci_dev *pdev,
 				u16 alias, void *data), void *data);

-/**
- * pci_find_upstream_pcie_bridge - find upstream PCIe-to-PCI bridge of a device
- * @pdev: the PCI device
- *
- * if the device is PCIE, return NULL
- * if the device isn't connected to a PCIe bridge (that is its parent is a
- * legacy PCI bridge and the bridge is directly connected to bus 0), return its
- * parent
- */
-struct pci_dev *pci_find_upstream_pcie_bridge(struct pci_dev *pdev);
-
+/* helper functions for operation of device flag */
+static inline void pci_set_dev_assigned(struct pci_dev *pdev)
+{
+	pdev->dev_flags |= PCI_DEV_FLAGS_ASSIGNED;
+}
+static inline void pci_clear_dev_assigned(struct pci_dev *pdev)
+{
+	pdev->dev_flags &= ~PCI_DEV_FLAGS_ASSIGNED;
+}
+static inline bool pci_is_dev_assigned(struct pci_dev *pdev)
+{
+	return (pdev->dev_flags & PCI_DEV_FLAGS_ASSIGNED) == PCI_DEV_FLAGS_ASSIGNED;
+}
 #endif /* LINUX_PCI_H */
-2
include/linux/pci_hotplug.h
···
 	return -ENODEV;
 }
 #endif
-
-void pci_configure_slot(struct pci_dev *dev);
 #endif
+2
include/linux/pci_ids.h
···
 #define PCI_VENDOR_ID_MORETON		0x15aa
 #define PCI_DEVICE_ID_RASTEL_2PORT	0x2000

+#define PCI_VENDOR_ID_VMWARE		0x15ad
+
 #define PCI_VENDOR_ID_ZOLTRIX		0x15b0
 #define PCI_DEVICE_ID_ZOLTRIX_2BD0	0x2bd0
+28 -18
include/ras/ras_event.h
···
 #include <linux/tracepoint.h>
 #include <linux/edac.h>
 #include <linux/ktime.h>
+#include <linux/pci.h>
 #include <linux/aer.h>
 #include <linux/cper.h>
···
  * u8 severity - error severity 0:NONFATAL 1:FATAL 2:CORRECTED
  */

-#define aer_correctable_errors			\
-	{BIT(0),	"Receiver Error"},	\
-	{BIT(6),	"Bad TLP"},		\
-	{BIT(7),	"Bad DLLP"},		\
-	{BIT(8),	"RELAY_NUM Rollover"},	\
-	{BIT(12),	"Replay Timer Timeout"},\
-	{BIT(13),	"Advisory Non-Fatal"}
+#define aer_correctable_errors					\
+	{PCI_ERR_COR_RCVR,	"Receiver Error"},		\
+	{PCI_ERR_COR_BAD_TLP,	"Bad TLP"},			\
+	{PCI_ERR_COR_BAD_DLLP,	"Bad DLLP"},			\
+	{PCI_ERR_COR_REP_ROLL,	"RELAY_NUM Rollover"},		\
+	{PCI_ERR_COR_REP_TIMER,	"Replay Timer Timeout"},	\
+	{PCI_ERR_COR_ADV_NFAT,	"Advisory Non-Fatal Error"},	\
+	{PCI_ERR_COR_INTERNAL,	"Corrected Internal Error"},	\
+	{PCI_ERR_COR_LOG_OVER,	"Header Log Overflow"}

-#define aer_uncorrectable_errors		\
-	{BIT(4),	"Data Link Protocol"},	\
-	{BIT(12),	"Poisoned TLP"},	\
-	{BIT(13),	"Flow Control Protocol"},\
-	{BIT(14),	"Completion Timeout"},	\
-	{BIT(15),	"Completer Abort"},	\
-	{BIT(16),	"Unexpected Completion"},\
-	{BIT(17),	"Receiver Overflow"},	\
-	{BIT(18),	"Malformed TLP"},	\
-	{BIT(19),	"ECRC"},		\
-	{BIT(20),	"Unsupported Request"}
+#define aer_uncorrectable_errors				\
+	{PCI_ERR_UNC_UND,	"Undefined"},			\
+	{PCI_ERR_UNC_DLP,	"Data Link Protocol Error"},	\
+	{PCI_ERR_UNC_SURPDN,	"Surprise Down Error"},		\
+	{PCI_ERR_UNC_POISON_TLP,"Poisoned TLP"},		\
+	{PCI_ERR_UNC_FCP,	"Flow Control Protocol Error"},	\
+	{PCI_ERR_UNC_COMP_TIME,	"Completion Timeout"},		\
+	{PCI_ERR_UNC_COMP_ABORT,"Completer Abort"},		\
+	{PCI_ERR_UNC_UNX_COMP,	"Unexpected Completion"},	\
+	{PCI_ERR_UNC_RX_OVER,	"Receiver Overflow"},		\
+	{PCI_ERR_UNC_MALF_TLP,	"Malformed TLP"},		\
+	{PCI_ERR_UNC_ECRC,	"ECRC Error"},			\
+	{PCI_ERR_UNC_UNSUP,	"Unsupported Request Error"},	\
+	{PCI_ERR_UNC_ACSV,	"ACS Violation"},		\
+	{PCI_ERR_UNC_INTN,	"Uncorrectable Internal Error"},\
+	{PCI_ERR_UNC_MCBTLP,	"MC Blocked TLP"},		\
+	{PCI_ERR_UNC_ATOMEG,	"AtomicOp Egress Blocked"},	\
+	{PCI_ERR_UNC_TLPPRE,	"TLP Prefix Blocked Error"}

 TRACE_EVENT(aer_event,
 	TP_PROTO(const char *dev_name,
+2 -1
include/uapi/linux/pci_regs.h
···
 #define  PCI_EXP_RTCTL_PMEIE	0x0008	/* PME Interrupt Enable */
 #define  PCI_EXP_RTCTL_CRSSVE	0x0010	/* CRS Software Visibility Enable */
 #define PCI_EXP_RTCAP		30	/* Root Capabilities */
+#define  PCI_EXP_RTCAP_CRSVIS	0x0001	/* CRS Software Visibility capability */
 #define PCI_EXP_RTSTA		32	/* Root Status */
 #define  PCI_EXP_RTSTA_PME	0x00010000 /* PME status */
 #define  PCI_EXP_RTSTA_PENDING	0x00020000 /* PME pending */
···

 /* Advanced Error Reporting */
 #define PCI_ERR_UNCOR_STATUS	4	/* Uncorrectable Error Status */
-#define  PCI_ERR_UNC_TRAIN	0x00000001	/* Training */
+#define  PCI_ERR_UNC_UND	0x00000001	/* Undefined */
 #define  PCI_ERR_UNC_DLP	0x00000010	/* Data Link Protocol */
 #define  PCI_ERR_UNC_SURPDN	0x00000020	/* Surprise Down */
 #define  PCI_ERR_UNC_POISON_TLP	0x00001000	/* Poisoned TLP */
+70
kernel/resource.c
···
 /*
  * Managed region resource
  */
+static void devm_resource_release(struct device *dev, void *ptr)
+{
+	struct resource **r = ptr;
+
+	release_resource(*r);
+}
+
+/**
+ * devm_request_resource() - request and reserve an I/O or memory resource
+ * @dev: device for which to request the resource
+ * @root: root of the resource tree from which to request the resource
+ * @new: descriptor of the resource to request
+ *
+ * This is a device-managed version of request_resource(). There is usually
+ * no need to release resources requested by this function explicitly since
+ * that will be taken care of when the device is unbound from its driver.
+ * If for some reason the resource needs to be released explicitly, because
+ * of ordering issues for example, drivers must call devm_release_resource()
+ * rather than the regular release_resource().
+ *
+ * When a conflict is detected between any existing resources and the newly
+ * requested resource, an error message will be printed.
+ *
+ * Returns 0 on success or a negative error code on failure.
+ */
+int devm_request_resource(struct device *dev, struct resource *root,
+			  struct resource *new)
+{
+	struct resource *conflict, **ptr;
+
+	ptr = devres_alloc(devm_resource_release, sizeof(*ptr), GFP_KERNEL);
+	if (!ptr)
+		return -ENOMEM;
+
+	*ptr = new;
+
+	conflict = request_resource_conflict(root, new);
+	if (conflict) {
+		dev_err(dev, "resource collision: %pR conflicts with %s %pR\n",
+			new, conflict->name, conflict);
+		devres_free(ptr);
+		return -EBUSY;
+	}
+
+	devres_add(dev, ptr);
+	return 0;
+}
+EXPORT_SYMBOL(devm_request_resource);
+
+static int devm_resource_match(struct device *dev, void *res, void *data)
+{
+	struct resource **ptr = res;
+
+	return *ptr == data;
+}
+
+/**
+ * devm_release_resource() - release a previously requested resource
+ * @dev: device for which to release the resource
+ * @new: descriptor of the resource to release
+ *
+ * Releases a resource previously requested using devm_request_resource().
+ */
+void devm_release_resource(struct device *dev, struct resource *new)
+{
+	WARN_ON(devres_release(dev, devm_resource_release, devm_resource_match,
+			       new));
+}
+EXPORT_SYMBOL(devm_release_resource);
+
 struct region_devres {
 	struct resource *parent;
 	resource_size_t start;
+1 -1
virt/kvm/assigned-dev.c
···
 	else
 		pci_restore_state(assigned_dev->dev);

-	assigned_dev->dev->dev_flags &= ~PCI_DEV_FLAGS_ASSIGNED;
+	pci_clear_dev_assigned(assigned_dev->dev);

 	pci_release_regions(assigned_dev->dev);
 	pci_disable_device(assigned_dev->dev);
+2 -2
virt/kvm/iommu.c
···
 		goto out_unmap;
 	}

-	pdev->dev_flags |= PCI_DEV_FLAGS_ASSIGNED;
+	pci_set_dev_assigned(pdev);

 	dev_info(&pdev->dev, "kvm assign device\n");
···

 	iommu_detach_device(domain, &pdev->dev);

-	pdev->dev_flags &= ~PCI_DEV_FLAGS_ASSIGNED;
+	pci_clear_dev_assigned(pdev);

 	dev_info(&pdev->dev, "kvm deassign device\n");