
Merge tag 'pci-v4.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"PCI changes:

- add support for PCI on ARM64 boxes with ACPI. We already had this
for theoretical spec-compliant hardware; now we're adding quirks
for the actual hardware (Cavium, HiSilicon, Qualcomm, X-Gene)

- add runtime PM support for hotplug ports

- enable runtime suspend for Intel UHCI controllers that use
platform-specific wakeup signaling

- add yet another host bridge registration interface. We hope this is
extensible enough to subsume the others

- expose device revision in sysfs for DRM

- to avoid device conflicts, make sure any VF BAR updates are done
before enabling the VF

- avoid unnecessary link retrains for ASPM

- allow INTx masking on Mellanox devices that support it

- allow access to non-standard VPD for Chelsio devices

- update Broadcom iProc support for PAXB v2, PAXC v2, inbound DMA,
etc

- update Rockchip support for max-link-speed

- add NVIDIA Tegra210 support

- add Layerscape LS1046a support

- update R-Car compatibility strings

- add Qualcomm MSM8996 support

- remove some uninformative bootup messages"

* tag 'pci-v4.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (115 commits)
PCI: Enable access to non-standard VPD for Chelsio devices (cxgb3)
PCI: Expand "VPD access disabled" quirk message
PCI: pciehp: Remove loading message
PCI: hotplug: Remove hotplug core message
PCI: Remove service driver load/unload messages
PCI/AER: Log AER IRQ when claiming Root Port
PCI/AER: Log errors with PCI device, not PCIe service device
PCI/AER: Remove unused version macros
PCI/PME: Log PME IRQ when claiming Root Port
PCI/PME: Drop unused support for PMEs from Root Complex Event Collectors
PCI: Move config space size macros to pci_regs.h
x86/platform/intel-mid: Constify mid_pci_platform_pm
PCI/ASPM: Don't retrain link if ASPM not possible
PCI: iproc: Skip check for legacy IRQ on PAXC buses
PCI: pciehp: Leave power indicator on when enabling already-enabled slot
PCI: pciehp: Prioritize data-link event over presence detect
PCI: rcar: Add gen3 fallback compatibility string for pcie-rcar
PCI: rcar: Use gen2 fallback compatibility last
PCI: rcar-gen2: Use gen2 fallback compatibility last
PCI: rockchip: Move the deassert of pm/aclk/pclk after phy_init()
...

+3101 -884
+7
Documentation/ABI/testing/sysfs-bus-pci
···
 		a firmware bug to the system vendor.  Writing to this file
 		taints the kernel with TAINT_FIRMWARE_WORKAROUND, which
 		reduces the supportability of your system.
+
+What:		/sys/bus/pci/devices/.../revision
+Date:		November 2016
+Contact:	Emil Velikov <emil.l.velikov@gmail.com>
+Description:
+		This file contains the revision field of the PCI device.
+		The value comes from device config space. The file is read only.
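Userspace (e.g. a DRM client) reads this attribute as a short ASCII string and parses it; a minimal sketch of the consuming side, where `parse_pci_revision()` and the sample value are illustrative and not part of the patch:

```c
#include <assert.h>
#include <stdlib.h>

/* Parse the ASCII contents of the sysfs "revision" attribute,
 * e.g. "0x02\n" as read from /sys/bus/pci/devices/<bdf>/revision. */
static int parse_pci_revision(const char *buf)
{
	/* base 0 accepts the "0x" prefix the kernel emits */
	return (int)strtol(buf, NULL, 0);
}
```

A caller would open() the attribute, read() the string, and hand it to a helper like this instead of re-reading config space itself.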
+28 -15
Documentation/devicetree/bindings/pci/brcm,iproc-pcie.txt
···
 * Broadcom iProc PCIe controller with the platform bus interface

 Required properties:
-- compatible: Must be "brcm,iproc-pcie" for PAXB, or "brcm,iproc-pcie-paxc"
-  for PAXC.  PAXB-based root complex is used for external endpoint devices.
-  PAXC-based root complex is connected to emulated endpoint devices
-  internal to the ASIC
+- compatible:
+    "brcm,iproc-pcie" for the first generation of PAXB based controller,
+  used in SoCs including NSP, Cygnus, NS2, and Pegasus
+    "brcm,iproc-pcie-paxb-v2" for the second generation of PAXB-based
+  controllers, used in Stingray
+    "brcm,iproc-pcie-paxc" for the first generation of PAXC based
+  controller, used in NS2
+    "brcm,iproc-pcie-paxc-v2" for the second generation of PAXC based
+  controller, used in Stingray
+  PAXB-based root complex is used for external endpoint devices.  PAXC-based
+  root complex is connected to emulated endpoint devices internal to the ASIC
 - reg: base address and length of the PCIe controller I/O register space
 - #interrupt-cells: set to <1>
 - interrupt-map-mask and interrupt-map, standard PCI properties to define the
···
 Optional properties:
 - phys: phandle of the PCIe PHY device
 - phy-names: must be "pcie-phy"
+- dma-coherent: present if DMA operations are coherent
+- dma-ranges: Some PAXB-based root complexes do not have inbound mapping done
+  by the ASIC after power on reset. In this case, SW is required to configure
+  the mapping, based on inbound memory regions specified by this property.

 - brcm,pcie-ob: Some iProc SoCs do not have the outbound address mapping done
   by the ASIC after power on reset. In this case, SW needs to configure it
···
 Required:
 - brcm,pcie-ob-axi-offset: The offset from the AXI address to the internal
   address used by the iProc PCIe core (not the PCIe address)
-- brcm,pcie-ob-window-size: The outbound address mapping window size (in MB)
-
-Optional:
-- brcm,pcie-ob-oarr-size: Some iProc SoCs need the OARR size bit to be set to
-  increase the outbound window size

 MSI support (optional):
···
 an event queue based MSI support. The iProc MSI uses host memories to store
 MSI posted writes in the event queues

-- msi-parent: Link to the device node of the MSI controller. On newer iProc
-  platforms, the MSI controller may be gicv2m or gicv3-its. On older iProc
-  platforms without MSI support in its interrupt controller, one may use the
-  event queue based MSI support integrated within the iProc PCIe core.
+On newer iProc platforms, gicv2m or gicv3-its based MSI support should be used
+
+- msi-map: Maps a Requester ID to an MSI controller and associated MSI
+  sideband data
+
+- msi-parent: Link to the device node of the MSI controller, used when no MSI
+  sideband data is passed between the iProc PCIe controller and the MSI
+  controller
+
+Refer to the following binding documents for more detailed description on
+the use of 'msi-map' and 'msi-parent':
+  Documentation/devicetree/bindings/pci/pci-msi.txt
+  Documentation/devicetree/bindings/interrupt-controller/msi.txt

 When the iProc event queue based MSI is used, one needs to define the
 following properties in the MSI device node:
···
 	phy-names = "pcie-phy";

 	brcm,pcie-ob;
-	brcm,pcie-ob-oarr-size;
 	brcm,pcie-ob-axi-offset = <0x00000000>;
-	brcm,pcie-ob-window-size = <256>;

 	msi-parent = <&msi0>;
+1
Documentation/devicetree/bindings/pci/layerscape-pci.txt
···
 - compatible: should contain the platform identifier such as:
         "fsl,ls1021a-pcie", "snps,dw-pcie"
         "fsl,ls2080a-pcie", "fsl,ls2085a-pcie", "snps,dw-pcie"
+        "fsl,ls1046a-pcie"
 - reg: base addresses and lengths of the PCIe controller
 - interrupts: A list of interrupt outputs of the controller. Must contain an
   entry for each entry in the interrupt-names property.
+110
Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
···
 - avdd-pll-erefe-supply: Power supply for PLLE (shared with USB3). Must
   supply 1.05 V.

+Power supplies for Tegra210:
+- Required:
+  - avdd-pll-uerefe-supply: Power supply for PLLE (shared with USB3). Must
+    supply 1.05 V.
+  - hvddio-pex-supply: High-voltage supply for PCIe I/O and PCIe output
+    clocks. Must supply 1.8 V.
+  - dvddio-pex-supply: Power supply for digital PCIe I/O. Must supply 1.05 V.
+  - dvdd-pex-pll-supply: Power supply for dedicated (internal) PCIe PLL. Must
+    supply 1.05 V.
+  - hvdd-pex-pll-e-supply: High-voltage supply for PLLE (shared with USB3).
+    Must supply 3.3 V.
+  - vddio-pex-ctl-supply: Power supply for PCIe control I/O partition. Must
+    supply 1.8 V.
+
 Root ports are defined as subnodes of the PCIe controller node.

 Required properties:
···
 		/* Gigabit Ethernet */
 		pci@2,0 {
 			phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-2}>;
+			phy-names = "pcie-0";
+			status = "okay";
+		};
+	};
+
+Tegra210:
+---------
+
+SoC DTSI:
+
+	pcie-controller@01003000 {
+		compatible = "nvidia,tegra210-pcie";
+		device_type = "pci";
+		reg = <0x0 0x01003000 0x0 0x00000800   /* PADS registers */
+		       0x0 0x01003800 0x0 0x00000800   /* AFI registers */
+		       0x0 0x02000000 0x0 0x10000000>; /* configuration space */
+		reg-names = "pads", "afi", "cs";
+		interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */
+			     <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */
+		interrupt-names = "intr", "msi";
+
+		#interrupt-cells = <1>;
+		interrupt-map-mask = <0 0 0 0>;
+		interrupt-map = <0 0 0 0 &gic GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
+
+		bus-range = <0x00 0xff>;
+		#address-cells = <3>;
+		#size-cells = <2>;
+
+		ranges = <0x82000000 0 0x01000000 0x0 0x01000000 0 0x00001000   /* port 0 configuration space */
+			  0x82000000 0 0x01001000 0x0 0x01001000 0 0x00001000   /* port 1 configuration space */
+			  0x81000000 0 0x0        0x0 0x12000000 0 0x00010000   /* downstream I/O (64 KiB) */
+			  0x82000000 0 0x13000000 0x0 0x13000000 0 0x0d000000   /* non-prefetchable memory (208 MiB) */
+			  0xc2000000 0 0x20000000 0x0 0x20000000 0 0x20000000>; /* prefetchable memory (512 MiB) */
+
+		clocks = <&tegra_car TEGRA210_CLK_PCIE>,
+			 <&tegra_car TEGRA210_CLK_AFI>,
+			 <&tegra_car TEGRA210_CLK_PLL_E>,
+			 <&tegra_car TEGRA210_CLK_CML0>;
+		clock-names = "pex", "afi", "pll_e", "cml";
+		resets = <&tegra_car 70>,
+			 <&tegra_car 72>,
+			 <&tegra_car 74>;
+		reset-names = "pex", "afi", "pcie_x";
+		status = "disabled";
+
+		pci@1,0 {
+			device_type = "pci";
+			assigned-addresses = <0x82000800 0 0x01000000 0 0x1000>;
+			reg = <0x000800 0 0 0 0>;
+			status = "disabled";
+
+			#address-cells = <3>;
+			#size-cells = <2>;
+			ranges;
+
+			nvidia,num-lanes = <4>;
+		};
+
+		pci@2,0 {
+			device_type = "pci";
+			assigned-addresses = <0x82001000 0 0x01001000 0 0x1000>;
+			reg = <0x001000 0 0 0 0>;
+			status = "disabled";
+
+			#address-cells = <3>;
+			#size-cells = <2>;
+			ranges;
+
+			nvidia,num-lanes = <1>;
+		};
+	};
+
+Board DTS:
+
+	pcie-controller@01003000 {
+		status = "okay";
+
+		avdd-pll-uerefe-supply = <&avdd_1v05_pll>;
+		hvddio-pex-supply = <&vdd_1v8>;
+		dvddio-pex-supply = <&vdd_pex_1v05>;
+		dvdd-pex-pll-supply = <&vdd_pex_1v05>;
+		hvdd-pex-pll-e-supply = <&vdd_1v8>;
+		vddio-pex-ctl-supply = <&vdd_1v8>;
+
+		pci@1,0 {
+			phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-0}>,
+			       <&{/padctl@7009f000/pads/pcie/lanes/pcie-1}>,
+			       <&{/padctl@7009f000/pads/pcie/lanes/pcie-2}>,
+			       <&{/padctl@7009f000/pads/pcie/lanes/pcie-3}>;
+			phy-names = "pcie-0", "pcie-1", "pcie-2", "pcie-3";
+			status = "okay";
+		};
+
+		pci@2,0 {
+			phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-4}>;
 			phy-names = "pcie-0";
 			status = "okay";
 		};
+6
Documentation/devicetree/bindings/pci/pci.txt
···
 host bridges in the system, otherwise potentially conflicting domain numbers
 may be assigned to root buses behind different host bridges.  The domain
 number for each host bridge in the system must be unique.
+- max-link-speed:
+   If present this property specifies PCI gen for link capability.  Host
+   drivers could add this as a strategy to avoid unnecessary operation for
+   unsupported link speed, for instance, trying to do training for
+   unsupported link speed, etc.  Must be '4' for gen4, '3' for gen3, '2'
+   for gen2, and '1' for gen1. Any other values are invalid.
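A host driver consuming this binding only has to range-check the value before using it; a hypothetical helper sketching the rule above (`max_link_speed_valid` is illustrative, not a kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Per the binding: '1'..'4' select gen1..gen4; anything else is
 * invalid and a driver should fall back to the hardware default. */
static bool max_link_speed_valid(int gen)
{
	return gen >= 1 && gen <= 4;
}
```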
+13 -1
Documentation/devicetree/bindings/pci/qcom,pcie.txt
···
 			- "qcom,pcie-ipq8064" for ipq8064
 			- "qcom,pcie-apq8064" for apq8064
 			- "qcom,pcie-apq8084" for apq8084
+			- "qcom,pcie-msm8996" for msm8996 or apq8096

 - reg:
 	Usage: required
···
 			- "aux"		Auxiliary (AUX) clock
 			- "bus_master"	Master AXI clock
 			- "bus_slave"	Slave AXI clock
+
+- clock-names:
+	Usage: required for msm8996/apq8096
+	Value type: <stringlist>
+	Definition: Should contain the following entries
+			- "pipe"	Pipe Clock driving internal logic
+			- "aux"		Auxiliary (AUX) clock
+			- "cfg"		Configuration clock
+			- "bus_master"	Master AXI clock
+			- "bus_slave"	Slave AXI clock
+
 - resets:
 	Usage: required
 	Value type: <prop-encoded-array>
···
 			- "core"	Core reset

 - power-domains:
-	Usage: required for apq8084
+	Usage: required for apq8084 and msm8996/apq8096
 	Value type: <prop-encoded-array>
 	Definition: A phandle and power domain specifier pair to the
 		    power domain which is responsible for collapsing
+1
Documentation/devicetree/bindings/pci/rcar-pci.txt
···
 	    "renesas,pcie-r8a7793" for the R8A7793 SoC;
 	    "renesas,pcie-r8a7795" for the R8A7795 SoC;
 	    "renesas,pcie-rcar-gen2" for a generic R-Car Gen2 compatible device.
+	    "renesas,pcie-rcar-gen3" for a generic R-Car Gen3 compatible device.

 	When compatible with the generic version, nodes must list the
 	SoC-specific version corresponding to the platform first
+2
Documentation/filesystems/sysfs-pci.txt
···
 	    |   |-- resource0
 	    |   |-- resource1
 	    |   |-- resource2
+	    |   |-- revision
 	    |   |-- rom
 	    |   |-- subsystem_device
 	    |   |-- subsystem_vendor
···
        resource		   PCI resource host addresses (ascii, ro)
        resource0..N	   PCI resource N, if present (binary, mmap, rw[1])
        resource0_wc..N_wc PCI WC map resource N, if prefetchable (binary, mmap)
+       revision		   PCI revision (ascii, ro)
        rom		   PCI ROM resource, if present (binary, ro)
        subsystem_device   PCI subsystem device (ascii, ro)
        subsystem_vendor   PCI subsystem vendor (ascii, ro)
+26
arch/arm64/boot/dts/nvidia/tegra210-p2371-2180.dts
···
 	model = "NVIDIA Jetson TX1 Developer Kit";
 	compatible = "nvidia,p2371-2180", "nvidia,tegra210";

+	pcie-controller@01003000 {
+		status = "okay";
+
+		avdd-pll-uerefe-supply = <&avdd_1v05_pll>;
+		hvddio-pex-supply = <&vdd_1v8>;
+		dvddio-pex-supply = <&vdd_pex_1v05>;
+		dvdd-pex-pll-supply = <&vdd_pex_1v05>;
+		hvdd-pex-pll-e-supply = <&vdd_1v8>;
+		vddio-pex-ctl-supply = <&vdd_1v8>;
+
+		pci@1,0 {
+			phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-0}>,
+			       <&{/padctl@7009f000/pads/pcie/lanes/pcie-1}>,
+			       <&{/padctl@7009f000/pads/pcie/lanes/pcie-2}>,
+			       <&{/padctl@7009f000/pads/pcie/lanes/pcie-3}>;
+			phy-names = "pcie-0", "pcie-1", "pcie-2", "pcie-3";
+			status = "okay";
+		};
+
+		pci@2,0 {
+			phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-4}>;
+			phy-names = "pcie-0";
+			status = "okay";
+		};
+	};
+
 	host1x@50000000 {
 		dsi@54300000 {
 			status = "okay";
+63
arch/arm64/boot/dts/nvidia/tegra210.dtsi
···
 	#address-cells = <2>;
 	#size-cells = <2>;

+	pcie-controller@01003000 {
+		compatible = "nvidia,tegra210-pcie";
+		device_type = "pci";
+		reg = <0x0 0x01003000 0x0 0x00000800   /* PADS registers */
+		       0x0 0x01003800 0x0 0x00000800   /* AFI registers */
+		       0x0 0x02000000 0x0 0x10000000>; /* configuration space */
+		reg-names = "pads", "afi", "cs";
+		interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */
+			     <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */
+		interrupt-names = "intr", "msi";
+
+		#interrupt-cells = <1>;
+		interrupt-map-mask = <0 0 0 0>;
+		interrupt-map = <0 0 0 0 &gic GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
+
+		bus-range = <0x00 0xff>;
+		#address-cells = <3>;
+		#size-cells = <2>;
+
+		ranges = <0x82000000 0 0x01000000 0x0 0x01000000 0 0x00001000   /* port 0 configuration space */
+			  0x82000000 0 0x01001000 0x0 0x01001000 0 0x00001000   /* port 1 configuration space */
+			  0x81000000 0 0x0        0x0 0x12000000 0 0x00010000   /* downstream I/O (64 KiB) */
+			  0x82000000 0 0x13000000 0x0 0x13000000 0 0x0d000000   /* non-prefetchable memory (208 MiB) */
+			  0xc2000000 0 0x20000000 0x0 0x20000000 0 0x20000000>; /* prefetchable memory (512 MiB) */
+
+		clocks = <&tegra_car TEGRA210_CLK_PCIE>,
+			 <&tegra_car TEGRA210_CLK_AFI>,
+			 <&tegra_car TEGRA210_CLK_PLL_E>,
+			 <&tegra_car TEGRA210_CLK_CML0>;
+		clock-names = "pex", "afi", "pll_e", "cml";
+		resets = <&tegra_car 70>,
+			 <&tegra_car 72>,
+			 <&tegra_car 74>;
+		reset-names = "pex", "afi", "pcie_x";
+		status = "disabled";
+
+		pci@1,0 {
+			device_type = "pci";
+			assigned-addresses = <0x82000800 0 0x01000000 0 0x1000>;
+			reg = <0x000800 0 0 0 0>;
+			status = "disabled";
+
+			#address-cells = <3>;
+			#size-cells = <2>;
+			ranges;
+
+			nvidia,num-lanes = <4>;
+		};
+
+		pci@2,0 {
+			device_type = "pci";
+			assigned-addresses = <0x82001000 0 0x01001000 0 0x1000>;
+			reg = <0x001000 0 0 0 0>;
+			status = "disabled";
+
+			#address-cells = <3>;
+			#size-cells = <2>;
+			ranges;
+
+			nvidia,num-lanes = <1>;
+		};
+	};
+
 	host1x@50000000 {
 		compatible = "nvidia,tegra210-host1x", "simple-bus";
 		reg = <0x0 0x50000000 0x0 0x00034000>;
+43 -24
arch/arm64/kernel/pci.c
···
 	return 0;
 }

+static int pci_acpi_root_prepare_resources(struct acpi_pci_root_info *ci)
+{
+	struct resource_entry *entry, *tmp;
+	int status;
+
+	status = acpi_pci_probe_root_resources(ci);
+	resource_list_for_each_entry_safe(entry, tmp, &ci->resources) {
+		if (!(entry->res->flags & IORESOURCE_WINDOW))
+			resource_list_destroy_entry(entry);
+	}
+	return status;
+}
+
 /*
  * Lookup the bus range for the domain in MCFG, and set up config space
  * mapping.
···
 static struct pci_config_window *
 pci_acpi_setup_ecam_mapping(struct acpi_pci_root *root)
 {
+	struct device *dev = &root->device->dev;
 	struct resource *bus_res = &root->secondary;
 	u16 seg = root->segment;
-	struct pci_config_window *cfg;
+	struct pci_ecam_ops *ecam_ops;
 	struct resource cfgres;
-	unsigned int bsz;
+	struct acpi_device *adev;
+	struct pci_config_window *cfg;
+	int ret;

-	/* Use address from _CBA if present, otherwise lookup MCFG */
-	if (!root->mcfg_addr)
-		root->mcfg_addr = pci_mcfg_lookup(seg, bus_res);
-
-	if (!root->mcfg_addr) {
-		dev_err(&root->device->dev, "%04x:%pR ECAM region not found\n",
-			seg, bus_res);
+	ret = pci_mcfg_lookup(root, &cfgres, &ecam_ops);
+	if (ret) {
+		dev_err(dev, "%04x:%pR ECAM region not found\n", seg, bus_res);
 		return NULL;
 	}

-	bsz = 1 << pci_generic_ecam_ops.bus_shift;
-	cfgres.start = root->mcfg_addr + bus_res->start * bsz;
-	cfgres.end = cfgres.start + resource_size(bus_res) * bsz - 1;
-	cfgres.flags = IORESOURCE_MEM;
-	cfg = pci_ecam_create(&root->device->dev, &cfgres, bus_res,
-			      &pci_generic_ecam_ops);
+	adev = acpi_resource_consumer(&cfgres);
+	if (adev)
+		dev_info(dev, "ECAM area %pR reserved by %s\n", &cfgres,
+			 dev_name(&adev->dev));
+	else
+		dev_warn(dev, FW_BUG "ECAM area %pR not reserved in ACPI namespace\n",
+			 &cfgres);
+
+	cfg = pci_ecam_create(dev, &cfgres, bus_res, ecam_ops);
 	if (IS_ERR(cfg)) {
-		dev_err(&root->device->dev, "%04x:%pR error %ld mapping ECAM\n",
-			seg, bus_res, PTR_ERR(cfg));
+		dev_err(dev, "%04x:%pR error %ld mapping ECAM\n", seg, bus_res,
+			PTR_ERR(cfg));
 		return NULL;
 	}

···
 	ri = container_of(ci, struct acpi_pci_generic_root_info, common);
 	pci_ecam_free(ri->cfg);
+	kfree(ci->ops);
 	kfree(ri);
 }
-
-static struct acpi_pci_root_ops acpi_pci_root_ops = {
-	.release_info = pci_acpi_generic_release_info,
-};

 /* Interface called from ACPI code to setup PCI host controller */
 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
···
 	int node = acpi_get_node(root->device->handle);
 	struct acpi_pci_generic_root_info *ri;
 	struct pci_bus *bus, *child;
+	struct acpi_pci_root_ops *root_ops;

 	ri = kzalloc_node(sizeof(*ri), GFP_KERNEL, node);
 	if (!ri)
 		return NULL;

+	root_ops = kzalloc_node(sizeof(*root_ops), GFP_KERNEL, node);
+	if (!root_ops)
+		return NULL;
+
 	ri->cfg = pci_acpi_setup_ecam_mapping(root);
 	if (!ri->cfg) {
 		kfree(ri);
+		kfree(root_ops);
 		return NULL;
 	}

-	acpi_pci_root_ops.pci_ops = &ri->cfg->ops->pci_ops;
-	bus = acpi_pci_root_create(root, &acpi_pci_root_ops, &ri->common,
-				   ri->cfg);
+	root_ops->release_info = pci_acpi_generic_release_info;
+	root_ops->prepare_resources = pci_acpi_root_prepare_resources;
+	root_ops->pci_ops = &ri->cfg->ops->pci_ops;
+	bus = acpi_pci_root_create(root, root_ops, &ri->common, ri->cfg);
 	if (!bus)
 		return NULL;

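The ECAM window arithmetic this file used to open-code (and that pci_mcfg_lookup() now performs) is just a shift by the per-bus size; a standalone sketch, where a bus_shift of 20 matches the generic ECAM ops (256 devfn times 4 KiB of config space per bus). The helper names are illustrative, not kernel APIs:

```c
#include <assert.h>
#include <stdint.h>

/* Start of the config window for the first bus of the range. */
static uint64_t ecam_window_start(uint64_t mcfg_addr, unsigned int bus_start,
				  unsigned int bus_shift)
{
	return mcfg_addr + ((uint64_t)bus_start << bus_shift);
}

/* Size of the window covering bus_start..bus_end inclusive. */
static uint64_t ecam_window_size(unsigned int bus_start, unsigned int bus_end,
				 unsigned int bus_shift)
{
	return (uint64_t)(bus_end - bus_start + 1) << bus_shift;
}
```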
+187 -3
drivers/acpi/pci_mcfg.c
···
 #include <linux/kernel.h>
 #include <linux/pci.h>
 #include <linux/pci-acpi.h>
+#include <linux/pci-ecam.h>

 /* Structure to hold entries from the MCFG table */
 struct mcfg_entry {
···
 	u8 bus_end;
 };

+#ifdef CONFIG_PCI_QUIRKS
+struct mcfg_fixup {
+	char oem_id[ACPI_OEM_ID_SIZE + 1];
+	char oem_table_id[ACPI_OEM_TABLE_ID_SIZE + 1];
+	u32 oem_revision;
+	u16 segment;
+	struct resource bus_range;
+	struct pci_ecam_ops *ops;
+	struct resource cfgres;
+};
+
+#define MCFG_BUS_RANGE(start, end)	DEFINE_RES_NAMED((start),	\
+						((end) - (start) + 1),	\
+						NULL, IORESOURCE_BUS)
+#define MCFG_BUS_ANY		MCFG_BUS_RANGE(0x0, 0xff)
+
+static struct mcfg_fixup mcfg_quirks[] = {
+/*	{ OEM_ID, OEM_TABLE_ID, REV, SEGMENT, BUS_RANGE, ops, cfgres }, */
+
+#define QCOM_ECAM32(seg) \
+	{ "QCOM  ", "QDF2432 ", 1, seg, MCFG_BUS_ANY, &pci_32b_ops }
+	QCOM_ECAM32(0),
+	QCOM_ECAM32(1),
+	QCOM_ECAM32(2),
+	QCOM_ECAM32(3),
+	QCOM_ECAM32(4),
+	QCOM_ECAM32(5),
+	QCOM_ECAM32(6),
+	QCOM_ECAM32(7),
+
+#define HISI_QUAD_DOM(table_id, seg, ops) \
+	{ "HISI  ", table_id, 0, (seg) + 0, MCFG_BUS_ANY, ops }, \
+	{ "HISI  ", table_id, 0, (seg) + 1, MCFG_BUS_ANY, ops }, \
+	{ "HISI  ", table_id, 0, (seg) + 2, MCFG_BUS_ANY, ops }, \
+	{ "HISI  ", table_id, 0, (seg) + 3, MCFG_BUS_ANY, ops }
+	HISI_QUAD_DOM("HIP05   ",  0, &hisi_pcie_ops),
+	HISI_QUAD_DOM("HIP06   ",  0, &hisi_pcie_ops),
+	HISI_QUAD_DOM("HIP07   ",  0, &hisi_pcie_ops),
+	HISI_QUAD_DOM("HIP07   ",  4, &hisi_pcie_ops),
+	HISI_QUAD_DOM("HIP07   ",  8, &hisi_pcie_ops),
+	HISI_QUAD_DOM("HIP07   ", 12, &hisi_pcie_ops),
+
+#define THUNDER_PEM_RES(addr, node) \
+	DEFINE_RES_MEM((addr) + ((u64) (node) << 44), 0x39 * SZ_16M)
+#define THUNDER_PEM_QUIRK(rev, node) \
+	{ "CAVIUM", "THUNDERX", rev, 4 + (10 * (node)), MCFG_BUS_ANY,	    \
+	  &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x88001f000000UL, node) }, \
+	{ "CAVIUM", "THUNDERX", rev, 5 + (10 * (node)), MCFG_BUS_ANY,	    \
+	  &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x884057000000UL, node) }, \
+	{ "CAVIUM", "THUNDERX", rev, 6 + (10 * (node)), MCFG_BUS_ANY,	    \
+	  &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x88808f000000UL, node) }, \
+	{ "CAVIUM", "THUNDERX", rev, 7 + (10 * (node)), MCFG_BUS_ANY,	    \
+	  &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x89001f000000UL, node) }, \
+	{ "CAVIUM", "THUNDERX", rev, 8 + (10 * (node)), MCFG_BUS_ANY,	    \
+	  &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x894057000000UL, node) }, \
+	{ "CAVIUM", "THUNDERX", rev, 9 + (10 * (node)), MCFG_BUS_ANY,	    \
+	  &thunder_pem_ecam_ops, THUNDER_PEM_RES(0x89808f000000UL, node) }
+	/* SoC pass2.x */
+	THUNDER_PEM_QUIRK(1, 0),
+	THUNDER_PEM_QUIRK(1, 1),
+
+#define THUNDER_ECAM_QUIRK(rev, seg) \
+	{ "CAVIUM", "THUNDERX", rev, seg, MCFG_BUS_ANY, \
+	  &pci_thunder_ecam_ops }
+	/* SoC pass1.x */
+	THUNDER_PEM_QUIRK(2, 0),	/* off-chip devices */
+	THUNDER_PEM_QUIRK(2, 1),	/* off-chip devices */
+	THUNDER_ECAM_QUIRK(2,  0),
+	THUNDER_ECAM_QUIRK(2,  1),
+	THUNDER_ECAM_QUIRK(2,  2),
+	THUNDER_ECAM_QUIRK(2,  3),
+	THUNDER_ECAM_QUIRK(2, 10),
+	THUNDER_ECAM_QUIRK(2, 11),
+	THUNDER_ECAM_QUIRK(2, 12),
+	THUNDER_ECAM_QUIRK(2, 13),
+
+#define XGENE_V1_ECAM_MCFG(rev, seg) \
+	{ "APM   ", "XGENE   ", rev, seg, MCFG_BUS_ANY, \
+	  &xgene_v1_pcie_ecam_ops }
+#define XGENE_V2_ECAM_MCFG(rev, seg) \
+	{ "APM   ", "XGENE   ", rev, seg, MCFG_BUS_ANY, \
+	  &xgene_v2_pcie_ecam_ops }
+	/* X-Gene SoC with v1 PCIe controller */
+	XGENE_V1_ECAM_MCFG(1, 0),
+	XGENE_V1_ECAM_MCFG(1, 1),
+	XGENE_V1_ECAM_MCFG(1, 2),
+	XGENE_V1_ECAM_MCFG(1, 3),
+	XGENE_V1_ECAM_MCFG(1, 4),
+	XGENE_V1_ECAM_MCFG(2, 0),
+	XGENE_V1_ECAM_MCFG(2, 1),
+	XGENE_V1_ECAM_MCFG(2, 2),
+	XGENE_V1_ECAM_MCFG(2, 3),
+	XGENE_V1_ECAM_MCFG(2, 4),
+	/* X-Gene SoC with v2.1 PCIe controller */
+	XGENE_V2_ECAM_MCFG(3, 0),
+	XGENE_V2_ECAM_MCFG(3, 1),
+	/* X-Gene SoC with v2.2 PCIe controller */
+	XGENE_V2_ECAM_MCFG(4, 0),
+	XGENE_V2_ECAM_MCFG(4, 1),
+	XGENE_V2_ECAM_MCFG(4, 2),
+};
+
+static char mcfg_oem_id[ACPI_OEM_ID_SIZE];
+static char mcfg_oem_table_id[ACPI_OEM_TABLE_ID_SIZE];
+static u32 mcfg_oem_revision;
+
+static int pci_mcfg_quirk_matches(struct mcfg_fixup *f, u16 segment,
+				  struct resource *bus_range)
+{
+	if (!memcmp(f->oem_id, mcfg_oem_id, ACPI_OEM_ID_SIZE) &&
+	    !memcmp(f->oem_table_id, mcfg_oem_table_id,
+		    ACPI_OEM_TABLE_ID_SIZE) &&
+	    f->oem_revision == mcfg_oem_revision &&
+	    f->segment == segment &&
+	    resource_contains(&f->bus_range, bus_range))
+		return 1;
+
+	return 0;
+}
+#endif
+
+static void pci_mcfg_apply_quirks(struct acpi_pci_root *root,
+				  struct resource *cfgres,
+				  struct pci_ecam_ops **ecam_ops)
+{
+#ifdef CONFIG_PCI_QUIRKS
+	u16 segment = root->segment;
+	struct resource *bus_range = &root->secondary;
+	struct mcfg_fixup *f;
+	int i;
+
+	for (i = 0, f = mcfg_quirks; i < ARRAY_SIZE(mcfg_quirks); i++, f++) {
+		if (pci_mcfg_quirk_matches(f, segment, bus_range)) {
+			if (f->cfgres.start)
+				*cfgres = f->cfgres;
+			if (f->ops)
+				*ecam_ops = f->ops;
+			dev_info(&root->device->dev, "MCFG quirk: ECAM at %pR for %pR with %ps\n",
+				 cfgres, bus_range, *ecam_ops);
+			return;
+		}
+	}
+#endif
+}
+
 /* List to save MCFG entries */
 static LIST_HEAD(pci_mcfg_list);

-phys_addr_t pci_mcfg_lookup(u16 seg, struct resource *bus_res)
+int pci_mcfg_lookup(struct acpi_pci_root *root, struct resource *cfgres,
+		    struct pci_ecam_ops **ecam_ops)
 {
+	struct pci_ecam_ops *ops = &pci_generic_ecam_ops;
+	struct resource *bus_res = &root->secondary;
+	u16 seg = root->segment;
 	struct mcfg_entry *e;
+	struct resource res;
+
+	/* Use address from _CBA if present, otherwise lookup MCFG */
+	if (root->mcfg_addr)
+		goto skip_lookup;

 	/*
 	 * We expect exact match, unless MCFG entry end bus covers more than
···
 	 */
 	list_for_each_entry(e, &pci_mcfg_list, list) {
 		if (e->segment == seg && e->bus_start == bus_res->start &&
-		    e->bus_end >= bus_res->end)
-			return e->addr;
+		    e->bus_end >= bus_res->end) {
+			root->mcfg_addr = e->addr;
+		}
+
 	}

+skip_lookup:
+	memset(&res, 0, sizeof(res));
+	if (root->mcfg_addr) {
+		res.start = root->mcfg_addr + (bus_res->start << 20);
+		res.end = res.start + (resource_size(bus_res) << 20) - 1;
+		res.flags = IORESOURCE_MEM;
+	}
+
+	/*
+	 * Allow quirks to override default ECAM ops and CFG resource
+	 * range.  This may even fabricate a CFG resource range in case
+	 * MCFG does not have it.  Invalid CFG start address means MCFG
+	 * firmware bug or we need another quirk in array.
+	 */
+	pci_mcfg_apply_quirks(root, &res, &ops);
+	if (!res.start)
+		return -ENXIO;
+
+	*cfgres = res;
+	*ecam_ops = ops;
 	return 0;
 }
···
 		e->bus_end = mptr->end_bus_number;
 		list_add(&e->list, &pci_mcfg_list);
 	}
+
+#ifdef CONFIG_PCI_QUIRKS
+	/* Save MCFG IDs and revision for quirks matching */
+	memcpy(mcfg_oem_id, header->oem_id, ACPI_OEM_ID_SIZE);
+	memcpy(mcfg_oem_table_id, header->oem_table_id, ACPI_OEM_TABLE_ID_SIZE);
+	mcfg_oem_revision = header->oem_revision;
+#endif

 	pr_info("MCFG table detected, %d entries\n", n);
 	return 0;
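The matching rule is easiest to see in isolation. Below is a reduced model of pci_mcfg_quirk_matches(), using NUL-terminated strings and plain integer bus bounds instead of the kernel's fixed-width ID fields and struct resource; the `struct fixup` and `quirk_matches` names are illustrative only:

```c
#include <assert.h>
#include <string.h>

struct fixup {
	const char *oem_id;
	const char *oem_table_id;
	unsigned int oem_revision;
	unsigned int segment;
	int bus_start, bus_end;
};

/* A quirk applies only when the MCFG OEM ID, table ID and revision all
 * match, the segment matches, and the quirk's bus range contains the
 * host bridge's bus range. */
static int quirk_matches(const struct fixup *f, const char *oem_id,
			 const char *oem_table_id, unsigned int rev,
			 unsigned int seg, int bus_start, int bus_end)
{
	return !strcmp(f->oem_id, oem_id) &&
	       !strcmp(f->oem_table_id, oem_table_id) &&
	       f->oem_revision == rev && f->segment == seg &&
	       f->bus_start <= bus_start && bus_end <= f->bus_end;
}
```

The kernel's version compares the raw fixed-width table fields with memcmp, so trailing-space padding in the quirk table entries (e.g. "QCOM  ") is significant.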
+57
drivers/acpi/resource.c
···
 	return (type & types) ? 0 : 1;
 }
 EXPORT_SYMBOL_GPL(acpi_dev_filter_resource_type);
+
+static int acpi_dev_consumes_res(struct acpi_device *adev, struct resource *res)
+{
+	struct list_head resource_list;
+	struct resource_entry *rentry;
+	int ret, found = 0;
+
+	INIT_LIST_HEAD(&resource_list);
+	ret = acpi_dev_get_resources(adev, &resource_list, NULL, NULL);
+	if (ret < 0)
+		return 0;
+
+	list_for_each_entry(rentry, &resource_list, node) {
+		if (resource_contains(rentry->res, res)) {
+			found = 1;
+			break;
+		}
+	}
+
+	acpi_dev_free_resource_list(&resource_list);
+	return found;
+}
+
+static acpi_status acpi_res_consumer_cb(acpi_handle handle, u32 depth,
+					void *context, void **ret)
+{
+	struct resource *res = context;
+	struct acpi_device **consumer = (struct acpi_device **) ret;
+	struct acpi_device *adev;
+
+	if (acpi_bus_get_device(handle, &adev))
+		return AE_OK;
+
+	if (acpi_dev_consumes_res(adev, res)) {
+		*consumer = adev;
+		return AE_CTRL_TERMINATE;
+	}
+
+	return AE_OK;
+}
+
+/**
+ * acpi_resource_consumer - Find the ACPI device that consumes @res.
+ * @res: Resource to search for.
+ *
+ * Search the current resource settings (_CRS) of every ACPI device node
+ * for @res.  If we find an ACPI device whose _CRS includes @res, return
+ * it.  Otherwise, return NULL.
+ */
+struct acpi_device *acpi_resource_consumer(struct resource *res)
+{
+	struct acpi_device *consumer = NULL;
+
+	acpi_get_devices(NULL, acpi_res_consumer_cb, res, (void **) &consumer);
+	return consumer;
+}
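acpi_resource_consumer() reports a consumer only when some _CRS entry of a device fully contains the searched-for area. The containment test mirrors the kernel's resource_contains(); sketched standalone on raw start/end values (the helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* r1 contains r2 iff r2 lies entirely within r1 (both ends inclusive). */
static int res_contains(uint64_t r1_start, uint64_t r1_end,
			uint64_t r2_start, uint64_t r2_end)
{
	return r1_start <= r2_start && r2_end <= r1_end;
}
```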
+43 -41
drivers/net/ethernet/mellanox/mlx4/main.c
···
 		return err;
 }
 
+#define MLX_SP(id) { PCI_VDEVICE(MELLANOX, id), MLX4_PCI_DEV_FORCE_SENSE_PORT }
+#define MLX_VF(id) { PCI_VDEVICE(MELLANOX, id), MLX4_PCI_DEV_IS_VF }
+#define MLX_GN(id) { PCI_VDEVICE(MELLANOX, id), 0 }
+
 static const struct pci_device_id mlx4_pci_table[] = {
-	/* MT25408 "Hermon" SDR */
-	{ PCI_VDEVICE(MELLANOX, 0x6340), MLX4_PCI_DEV_FORCE_SENSE_PORT },
-	/* MT25408 "Hermon" DDR */
-	{ PCI_VDEVICE(MELLANOX, 0x634a), MLX4_PCI_DEV_FORCE_SENSE_PORT },
-	/* MT25408 "Hermon" QDR */
-	{ PCI_VDEVICE(MELLANOX, 0x6354), MLX4_PCI_DEV_FORCE_SENSE_PORT },
-	/* MT25408 "Hermon" DDR PCIe gen2 */
-	{ PCI_VDEVICE(MELLANOX, 0x6732), MLX4_PCI_DEV_FORCE_SENSE_PORT },
-	/* MT25408 "Hermon" QDR PCIe gen2 */
-	{ PCI_VDEVICE(MELLANOX, 0x673c), MLX4_PCI_DEV_FORCE_SENSE_PORT },
-	/* MT25408 "Hermon" EN 10GigE */
-	{ PCI_VDEVICE(MELLANOX, 0x6368), MLX4_PCI_DEV_FORCE_SENSE_PORT },
-	/* MT25408 "Hermon" EN 10GigE PCIe gen2 */
-	{ PCI_VDEVICE(MELLANOX, 0x6750), MLX4_PCI_DEV_FORCE_SENSE_PORT },
-	/* MT25458 ConnectX EN 10GBASE-T 10GigE */
-	{ PCI_VDEVICE(MELLANOX, 0x6372), MLX4_PCI_DEV_FORCE_SENSE_PORT },
-	/* MT25458 ConnectX EN 10GBASE-T+Gen2 10GigE */
-	{ PCI_VDEVICE(MELLANOX, 0x675a), MLX4_PCI_DEV_FORCE_SENSE_PORT },
-	/* MT26468 ConnectX EN 10GigE PCIe gen2*/
-	{ PCI_VDEVICE(MELLANOX, 0x6764), MLX4_PCI_DEV_FORCE_SENSE_PORT },
-	/* MT26438 ConnectX EN 40GigE PCIe gen2 5GT/s */
-	{ PCI_VDEVICE(MELLANOX, 0x6746), MLX4_PCI_DEV_FORCE_SENSE_PORT },
-	/* MT26478 ConnectX2 40GigE PCIe gen2 */
-	{ PCI_VDEVICE(MELLANOX, 0x676e), MLX4_PCI_DEV_FORCE_SENSE_PORT },
-	/* MT25400 Family [ConnectX-2 Virtual Function] */
-	{ PCI_VDEVICE(MELLANOX, 0x1002), MLX4_PCI_DEV_IS_VF },
+	/* MT25408 "Hermon" */
+	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_SDR),	/* SDR */
+	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_DDR),	/* DDR */
+	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_QDR),	/* QDR */
+	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_DDR_GEN2),	/* DDR Gen2 */
+	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_QDR_GEN2),	/* QDR Gen2 */
+	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_EN),	/* EN 10GigE */
+	MLX_SP(PCI_DEVICE_ID_MELLANOX_HERMON_EN_GEN2),	/* EN 10GigE Gen2 */
+	/* MT25458 ConnectX EN 10GBASE-T */
+	MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX_EN),
+	MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_T_GEN2),	/* Gen2 */
+	/* MT26468 ConnectX EN 10GigE PCIe Gen2*/
+	MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_GEN2),
+	/* MT26438 ConnectX EN 40GigE PCIe Gen2 5GT/s */
+	MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_5_GEN2),
+	/* MT26478 ConnectX2 40GigE PCIe Gen2 */
+	MLX_SP(PCI_DEVICE_ID_MELLANOX_CONNECTX2),
+	/* MT25400 Family [ConnectX-2] */
+	MLX_VF(0x1002),			/* Virtual Function */
 	/* MT27500 Family [ConnectX-3] */
-	{ PCI_VDEVICE(MELLANOX, 0x1003), 0 },
-	/* MT27500 Family [ConnectX-3 Virtual Function] */
-	{ PCI_VDEVICE(MELLANOX, 0x1004), MLX4_PCI_DEV_IS_VF },
-	{ PCI_VDEVICE(MELLANOX, 0x1005), 0 }, /* MT27510 Family */
-	{ PCI_VDEVICE(MELLANOX, 0x1006), 0 }, /* MT27511 Family */
-	{ PCI_VDEVICE(MELLANOX, 0x1007), 0 }, /* MT27520 Family */
-	{ PCI_VDEVICE(MELLANOX, 0x1008), 0 }, /* MT27521 Family */
-	{ PCI_VDEVICE(MELLANOX, 0x1009), 0 }, /* MT27530 Family */
-	{ PCI_VDEVICE(MELLANOX, 0x100a), 0 }, /* MT27531 Family */
-	{ PCI_VDEVICE(MELLANOX, 0x100b), 0 }, /* MT27540 Family */
-	{ PCI_VDEVICE(MELLANOX, 0x100c), 0 }, /* MT27541 Family */
-	{ PCI_VDEVICE(MELLANOX, 0x100d), 0 }, /* MT27550 Family */
-	{ PCI_VDEVICE(MELLANOX, 0x100e), 0 }, /* MT27551 Family */
-	{ PCI_VDEVICE(MELLANOX, 0x100f), 0 }, /* MT27560 Family */
-	{ PCI_VDEVICE(MELLANOX, 0x1010), 0 }, /* MT27561 Family */
+	MLX_GN(PCI_DEVICE_ID_MELLANOX_CONNECTX3),
+	MLX_VF(0x1004),			/* Virtual Function */
+	MLX_GN(0x1005),			/* MT27510 Family */
+	MLX_GN(0x1006),			/* MT27511 Family */
+	MLX_GN(PCI_DEVICE_ID_MELLANOX_CONNECTX3_PRO),	/* MT27520 Family */
+	MLX_GN(0x1008),			/* MT27521 Family */
+	MLX_GN(0x1009),			/* MT27530 Family */
+	MLX_GN(0x100a),			/* MT27531 Family */
+	MLX_GN(0x100b),			/* MT27540 Family */
+	MLX_GN(0x100c),			/* MT27541 Family */
+	MLX_GN(0x100d),			/* MT27550 Family */
+	MLX_GN(0x100e),			/* MT27551 Family */
+	MLX_GN(0x100f),			/* MT27560 Family */
+	MLX_GN(0x1010),			/* MT27561 Family */
+
+	/*
+	 * See the mellanox_check_broken_intx_masking() quirk when
+	 * adding devices
+	 */
+
 	{ 0, }
 };
+21
drivers/of/of_pci.c
···
 EXPORT_SYMBOL_GPL(of_get_pci_domain_nr);
 
 /**
+ * This function will try to find the limitation of link speed by finding
+ * a property called "max-link-speed" of the given device node.
+ *
+ * @node: device tree node with the max link speed information
+ *
+ * Returns the associated max link speed from DT, or a negative value if the
+ * required property is not found or is invalid.
+ */
+int of_pci_get_max_link_speed(struct device_node *node)
+{
+	u32 max_link_speed;
+
+	if (of_property_read_u32(node, "max-link-speed", &max_link_speed) ||
+	    max_link_speed > 4)
+		return -EINVAL;
+
+	return max_link_speed;
+}
+EXPORT_SYMBOL_GPL(of_pci_get_max_link_speed);
+
+/**
  * of_pci_check_probe_only - Setup probe only mode if linux,pci-probe-only
  *			     is present and valid
  */
+14 -2
drivers/pci/access.c
···
 	if (size == 4) {
 		writel(val, addr);
 		return PCIBIOS_SUCCESSFUL;
-	} else {
-		mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8));
 	}
 
+	/*
+	 * In general, hardware that supports only 32-bit writes on PCI is
+	 * not spec-compliant.  For example, software may perform a 16-bit
+	 * write.  If the hardware only supports 32-bit accesses, we must
+	 * do a 32-bit read, merge in the 16 bits we intend to write,
+	 * followed by a 32-bit write.  If the 16 bits we *don't* intend to
+	 * write happen to have any RW1C (write-one-to-clear) bits set, we
+	 * just inadvertently cleared something we shouldn't have.
+	 */
+	dev_warn_ratelimited(&bus->dev, "%d-byte config write to %04x:%02x:%02x.%d offset %#x may corrupt adjacent RW1C bits\n",
+			     size, pci_domain_nr(bus), bus->number,
+			     PCI_SLOT(devfn), PCI_FUNC(devfn), where);
+
+	mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8));
 	tmp = readl(addr) & mask;
 	tmp |= val << ((where & 0x3) * 8);
 	writel(tmp, addr);
+1 -1
drivers/pci/bus.c
···
 	pci_fixup_device(pci_fixup_final, dev);
 	pci_create_sysfs_dev_files(dev);
 	pci_proc_attach_device(dev);
-	pci_bridge_d3_device_changed(dev);
+	pci_bridge_d3_update(dev);
 
 	dev->match_driver = true;
 	retval = device_attach(&dev->dev);
+12
drivers/pci/ecam.c
···
 		.write		= pci_generic_config_write,
 	}
 };
+
+#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
+/* ECAM ops for 32-bit read/write access only (non-compliant) */
+struct pci_ecam_ops pci_32b_ops = {
+	.bus_shift	= 20,
+	.pci_ops	= {
+		.map_bus	= pci_ecam_map_bus,
+		.read		= pci_generic_config_read32,
+		.write		= pci_generic_config_write32,
+	}
+};
+#endif
+9 -7
drivers/pci/host/Kconfig
···
 
 config PCI_TEGRA
 	bool "NVIDIA Tegra PCIe controller"
-	depends on ARCH_TEGRA && !ARM64
+	depends on ARCH_TEGRA
 	help
 	  Say Y here if you want support for the PCIe host controller found
 	  on NVIDIA Tegra SoCs.
···
 
 config PCI_XGENE
 	bool "X-Gene PCIe controller"
-	depends on ARCH_XGENE
-	depends on OF
+	depends on ARM64
+	depends on OF || (ACPI && PCI_QUIRKS)
 	select PCIEPORTBUS
 	help
 	  Say Y here if you want internal PCI support on APM X-Gene SoC.
···
 
 config PCI_HOST_THUNDER_PEM
 	bool "Cavium Thunder PCIe controller to off-chip devices"
-	depends on OF && ARM64
+	depends on ARM64
+	depends on OF || (ACPI && PCI_QUIRKS)
 	select PCI_HOST_COMMON
 	help
 	  Say Y here if you want PCIe support for CN88XX Cavium Thunder SoCs.
 
 config PCI_HOST_THUNDER_ECAM
 	bool "Cavium Thunder ECAM controller to on-chip devices on pass-1.x silicon"
-	depends on OF && ARM64
+	depends on ARM64
+	depends on OF || (ACPI && PCI_QUIRKS)
 	select PCI_HOST_COMMON
 	help
 	  Say Y here if you want ECAM support for CN88XX-Pass-1.x Cavium Thunder SoCs.
···
 
 config PCIE_ROCKCHIP
 	bool "Rockchip PCIe controller"
-	depends on ARCH_ROCKCHIP
+	depends on ARCH_ROCKCHIP || COMPILE_TEST
 	depends on OF
 	depends on PCI_MSI_IRQ_DOMAIN
 	select MFD_SYSCON
···
 	  4 slots.
 
 config VMD
-	depends on PCI_MSI && X86_64
+	depends on PCI_MSI && X86_64 && SRCU
 	tristate "Intel Volume Management Device Driver"
 	default N
 	---help---
+15 -4
drivers/pci/host/Makefile
···
 obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o
 obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o
 obj-$(CONFIG_PCIE_XILINX_NWL) += pcie-xilinx-nwl.o
-obj-$(CONFIG_PCI_XGENE) += pci-xgene.o
 obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o
 obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o
 obj-$(CONFIG_PCI_VERSATILE) += pci-versatile.o
···
 obj-$(CONFIG_PCIE_IPROC_BCMA) += pcie-iproc-bcma.o
 obj-$(CONFIG_PCIE_ALTERA) += pcie-altera.o
 obj-$(CONFIG_PCIE_ALTERA_MSI) += pcie-altera-msi.o
-obj-$(CONFIG_PCI_HISI) += pcie-hisi.o
 obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o
-obj-$(CONFIG_PCI_HOST_THUNDER_ECAM) += pci-thunder-ecam.o
-obj-$(CONFIG_PCI_HOST_THUNDER_PEM) += pci-thunder-pem.o
 obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
 obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o
 obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
 obj-$(CONFIG_VMD) += vmd.o
+
+# The following drivers are for devices that use the generic ACPI
+# pci_root.c driver but don't support standard ECAM config access.
+# They contain MCFG quirks to replace the generic ECAM accessors with
+# device-specific ones that are shared with the DT driver.
+
+# The ACPI driver is generic and should not require driver-specific
+# config options to be enabled, so we always build these drivers on
+# ARM64 and use internal ifdefs to only build the pieces we need
+# depending on whether ACPI, the DT driver, or both are enabled.
+
+obj-$(CONFIG_ARM64) += pcie-hisi.o
+obj-$(CONFIG_ARM64) += pci-thunder-ecam.o
+obj-$(CONFIG_ARM64) += pci-thunder-pem.o
+obj-$(CONFIG_ARM64) += pci-xgene.o
+61 -39
drivers/pci/host/pci-hyperv.c
···
 	struct msi_domain_info msi_info;
 	struct msi_controller msi_chip;
 	struct irq_domain *irq_domain;
+	struct retarget_msi_interrupt retarget_msi_interrupt_params;
+	spinlock_t retarget_msi_interrupt_lock;
 };
 
 /*
···
 	return parent->chip->irq_set_affinity(parent, dest, force);
 }
 
-void hv_irq_mask(struct irq_data *data)
+static void hv_irq_mask(struct irq_data *data)
 {
 	pci_msi_mask_irq(data);
 }
···
  * is built out of this PCI bus's instance GUID and the function
  * number of the device.
  */
-void hv_irq_unmask(struct irq_data *data)
+static void hv_irq_unmask(struct irq_data *data)
 {
 	struct msi_desc *msi_desc = irq_data_get_msi_desc(data);
 	struct irq_cfg *cfg = irqd_cfg(data);
-	struct retarget_msi_interrupt params;
+	struct retarget_msi_interrupt *params;
 	struct hv_pcibus_device *hbus;
 	struct cpumask *dest;
 	struct pci_bus *pbus;
 	struct pci_dev *pdev;
 	int cpu;
+	unsigned long flags;
 
 	dest = irq_data_get_affinity_mask(data);
 	pdev = msi_desc_to_pci_dev(msi_desc);
 	pbus = pdev->bus;
 	hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata);
 
-	memset(&params, 0, sizeof(params));
-	params.partition_id = HV_PARTITION_ID_SELF;
-	params.source = 1; /* MSI(-X) */
-	params.address = msi_desc->msg.address_lo;
-	params.data = msi_desc->msg.data;
-	params.device_id = (hbus->hdev->dev_instance.b[5] << 24) |
+	spin_lock_irqsave(&hbus->retarget_msi_interrupt_lock, flags);
+
+	params = &hbus->retarget_msi_interrupt_params;
+	memset(params, 0, sizeof(*params));
+	params->partition_id = HV_PARTITION_ID_SELF;
+	params->source = 1; /* MSI(-X) */
+	params->address = msi_desc->msg.address_lo;
+	params->data = msi_desc->msg.data;
+	params->device_id = (hbus->hdev->dev_instance.b[5] << 24) |
 			   (hbus->hdev->dev_instance.b[4] << 16) |
 			   (hbus->hdev->dev_instance.b[7] << 8) |
 			   (hbus->hdev->dev_instance.b[6] & 0xf8) |
 			   PCI_FUNC(pdev->devfn);
-	params.vector = cfg->vector;
+	params->vector = cfg->vector;
 
 	for_each_cpu_and(cpu, dest, cpu_online_mask)
-		params.vp_mask |= (1ULL << vmbus_cpu_number_to_vp_number(cpu));
+		params->vp_mask |= (1ULL << vmbus_cpu_number_to_vp_number(cpu));
 
-	hv_do_hypercall(HVCALL_RETARGET_INTERRUPT, &params, NULL);
+	hv_do_hypercall(HVCALL_RETARGET_INTERRUPT, params, NULL);
+
+	spin_unlock_irqrestore(&hbus->retarget_msi_interrupt_lock, flags);
 
 	pci_msi_unmask_irq(data);
 }
···
 	struct hv_pci_dev *hpdev;
 	struct pci_child_message *res_req;
 	struct q_res_req_compl comp_pkt;
-	union {
-		struct pci_packet init_packet;
-		u8 buffer[0x100];
+	struct {
+		struct pci_packet init_packet;
+		u8 buffer[sizeof(struct pci_child_message)];
 	} pkt;
 	unsigned long flags;
 	int ret;
···
 		pci_dev_put(pdev);
 	}
 
+	spin_lock_irqsave(&hpdev->hbus->device_list_lock, flags);
+	list_del(&hpdev->list_entry);
+	spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags);
+
 	memset(&ctxt, 0, sizeof(ctxt));
 	ejct_pkt = (struct pci_eject_response *)&ctxt.pkt.message;
 	ejct_pkt->message_type.type = PCI_EJECTION_COMPLETE;
···
 	vmbus_sendpacket(hpdev->hbus->hdev->channel, ejct_pkt,
 			 sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt,
 			 VM_PKT_DATA_INBAND, 0);
-
-	spin_lock_irqsave(&hpdev->hbus->device_list_lock, flags);
-	list_del(&hpdev->list_entry);
-	spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags);
 
 	put_pcichild(hpdev, hv_pcidev_ref_childlist);
 	put_pcichild(hpdev, hv_pcidev_ref_pnp);
···
 	INIT_LIST_HEAD(&hbus->resources_for_children);
 	spin_lock_init(&hbus->config_lock);
 	spin_lock_init(&hbus->device_list_lock);
+	spin_lock_init(&hbus->retarget_msi_interrupt_lock);
 	sema_init(&hbus->enum_sem, 1);
 	init_completion(&hbus->remove_event);
···
 	return ret;
 }
 
-/**
- * hv_pci_remove() - Remove routine for this VMBus channel
- * @hdev: VMBus's tracking struct for this root PCI bus
- *
- * Return: 0 on success, -errno on failure
- */
-static int hv_pci_remove(struct hv_device *hdev)
+static void hv_pci_bus_exit(struct hv_device *hdev)
 {
-	int ret;
-	struct hv_pcibus_device *hbus;
-	union {
+	struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);
+	struct {
 		struct pci_packet teardown_packet;
-		u8 buffer[0x100];
+		u8 buffer[sizeof(struct pci_message)];
 	} pkt;
 	struct pci_bus_relations relations;
 	struct hv_pci_compl comp_pkt;
+	int ret;
 
-	hbus = hv_get_drvdata(hdev);
+	/*
+	 * After the host sends the RESCIND_CHANNEL message, it doesn't
+	 * access the per-channel ringbuffer any longer.
+	 */
+	if (hdev->channel->rescind)
+		return;
+
+	/* Delete any children which might still exist. */
+	memset(&relations, 0, sizeof(relations));
+	hv_pci_devices_present(hbus, &relations);
+
+	ret = hv_send_resources_released(hdev);
+	if (ret)
+		dev_err(&hdev->device,
+			"Couldn't send resources released packet(s)\n");
 
 	memset(&pkt.teardown_packet, 0, sizeof(pkt.teardown_packet));
 	init_completion(&comp_pkt.host_event);
···
 			 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
 	if (!ret)
 		wait_for_completion_timeout(&comp_pkt.host_event, 10 * HZ);
+}
 
+/**
+ * hv_pci_remove() - Remove routine for this VMBus channel
+ * @hdev: VMBus's tracking struct for this root PCI bus
+ *
+ * Return: 0 on success, -errno on failure
+ */
+static int hv_pci_remove(struct hv_device *hdev)
+{
+	struct hv_pcibus_device *hbus;
+
+	hbus = hv_get_drvdata(hdev);
 	if (hbus->state == hv_pcibus_installed) {
 		/* Remove the bus from PCI's point of view. */
 		pci_lock_rescan_remove();
···
 		pci_unlock_rescan_remove();
 	}
 
-	ret = hv_send_resources_released(hdev);
-	if (ret)
-		dev_err(&hdev->device,
-			"Couldn't send resources released packet(s)\n");
+	hv_pci_bus_exit(hdev);
 
 	vmbus_close(hdev->channel);
-
-	/* Delete any children which might still exist. */
-	memset(&relations, 0, sizeof(relations));
-	hv_pci_devices_present(hbus, &relations);
 
 	iounmap(hbus->cfg_addr);
 	hv_free_config_window(hbus);
+13 -7
drivers/pci/host/pci-layerscape.c
···
 #define PCIE_STRFMR1		0x71c /* Symbol Timer & Filter Mask Register1 */
 #define PCIE_DBI_RO_WR_EN	0x8bc /* DBI Read-Only Write Enable Register */
 
-/* PEX LUT registers */
-#define PCIE_LUT_DBG		0x7FC /* PEX LUT Debug Register */
-
 struct ls_pcie_drvdata {
 	u32 lut_offset;
 	u32 ltssm_shift;
+	u32 lut_dbg;
 	struct pcie_host_ops *ops;
 };
···
 	struct ls_pcie *pcie = to_ls_pcie(pp);
 	u32 state;
 
-	state = (ioread32(pcie->lut + PCIE_LUT_DBG) >>
+	state = (ioread32(pcie->lut + pcie->drvdata->lut_dbg) >>
 		 pcie->drvdata->ltssm_shift) &
 		 LTSSM_STATE_MASK;
···
 static struct ls_pcie_drvdata ls1043_drvdata = {
 	.lut_offset = 0x10000,
 	.ltssm_shift = 24,
+	.lut_dbg = 0x7fc,
+	.ops = &ls_pcie_host_ops,
+};
+
+static struct ls_pcie_drvdata ls1046_drvdata = {
+	.lut_offset = 0x80000,
+	.ltssm_shift = 24,
+	.lut_dbg = 0x407fc,
 	.ops = &ls_pcie_host_ops,
 };
 
 static struct ls_pcie_drvdata ls2080_drvdata = {
 	.lut_offset = 0x80000,
 	.ltssm_shift = 0,
+	.lut_dbg = 0x7fc,
 	.ops = &ls_pcie_host_ops,
 };
 
 static const struct of_device_id ls_pcie_of_match[] = {
 	{ .compatible = "fsl,ls1021a-pcie", .data = &ls1021_drvdata },
 	{ .compatible = "fsl,ls1043a-pcie", .data = &ls1043_drvdata },
+	{ .compatible = "fsl,ls1046a-pcie", .data = &ls1046_drvdata },
 	{ .compatible = "fsl,ls2080a-pcie", .data = &ls2080_drvdata },
 	{ .compatible = "fsl,ls2085a-pcie", .data = &ls2080_drvdata },
 	{ },
···
 
 	dbi_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
 	pcie->pp.dbi_base = devm_ioremap_resource(dev, dbi_base);
-	if (IS_ERR(pcie->pp.dbi_base)) {
-		dev_err(dev, "missing *regs* space\n");
+	if (IS_ERR(pcie->pp.dbi_base))
 		return PTR_ERR(pcie->pp.dbi_base);
-	}
 
 	pcie->lut = pcie->pp.dbi_base + pcie->drvdata->lut_offset;
 
+1 -1
drivers/pci/host/pci-rcar-gen2.c
···
 }
 
 static struct of_device_id rcar_pci_of_match[] = {
-	{ .compatible = "renesas,pci-rcar-gen2", },
 	{ .compatible = "renesas,pci-r8a7790", },
 	{ .compatible = "renesas,pci-r8a7791", },
 	{ .compatible = "renesas,pci-r8a7794", },
+	{ .compatible = "renesas,pci-rcar-gen2", },
 	{ },
 };
+101 -59
drivers/pci/host/pci-tegra.c
···
 #include <soc/tegra/cpuidle.h>
 #include <soc/tegra/pmc.h>
 
-#include <asm/mach/irq.h>
-#include <asm/mach/map.h>
-#include <asm/mach/pci.h>
-
 #define INT_PCI_MSI_NR (8 * 32)
 
 /* register definitions */
···
 #define RP_VEND_XP	0x00000f00
 #define RP_VEND_XP_DL_UP	(1 << 30)
 
+#define RP_VEND_CTL2 0x00000fa8
+#define RP_VEND_CTL2_PCA_ENABLE (1 << 7)
+
 #define RP_PRIV_MISC	0x00000fe0
 #define RP_PRIV_MISC_PRSNT_MAP_EP_PRSNT (0xe << 0)
 #define RP_PRIV_MISC_PRSNT_MAP_EP_ABSNT (0xf << 0)
···
 	bool has_intr_prsnt_sense;
 	bool has_cml_clk;
 	bool has_gen2;
+	bool force_pca_enable;
 };
 
 static inline struct tegra_msi *to_tegra_msi(struct msi_controller *chip)
···
 	unsigned int nr;
 };
 
-static inline struct tegra_pcie *sys_to_pcie(struct pci_sys_data *sys)
-{
-	return sys->private_data;
-}
-
 static inline void afi_writel(struct tegra_pcie *pcie, u32 value,
 			      unsigned long offset)
 {
···
 			unsigned int busnr)
 {
 	struct device *dev = pcie->dev;
-	pgprot_t prot = __pgprot(L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
-				 L_PTE_XN | L_PTE_MT_DEV_SHARED | L_PTE_SHARED);
+	pgprot_t prot = pgprot_device(PAGE_KERNEL);
 	phys_addr_t cs = pcie->cs->start;
 	struct tegra_pcie_bus *bus;
 	unsigned int i;
···
 
 static int tegra_pcie_add_bus(struct pci_bus *bus)
 {
-	struct tegra_pcie *pcie = sys_to_pcie(bus->sysdata);
+	struct pci_host_bridge *host = pci_find_host_bridge(bus);
+	struct tegra_pcie *pcie = pci_host_bridge_priv(host);
 	struct tegra_pcie_bus *b;
 
 	b = tegra_pcie_bus_alloc(pcie, bus->number);
···
 
 static void tegra_pcie_remove_bus(struct pci_bus *child)
 {
-	struct tegra_pcie *pcie = sys_to_pcie(child->sysdata);
+	struct pci_host_bridge *host = pci_find_host_bridge(child);
+	struct tegra_pcie *pcie = pci_host_bridge_priv(host);
 	struct tegra_pcie_bus *bus, *tmp;
 
 	list_for_each_entry_safe(bus, tmp, &pcie->buses, list) {
···
 					unsigned int devfn,
 					int where)
 {
-	struct tegra_pcie *pcie = sys_to_pcie(bus->sysdata);
+	struct pci_host_bridge *host = pci_find_host_bridge(bus);
+	struct tegra_pcie *pcie = pci_host_bridge_priv(host);
 	struct device *dev = pcie->dev;
 	void __iomem *addr = NULL;
···
 	afi_writel(port->pcie, value, ctrl);
 
 	tegra_pcie_port_reset(port);
+
+	if (soc->force_pca_enable) {
+		value = readl(port->base + RP_VEND_CTL2);
+		value |= RP_VEND_CTL2_PCA_ENABLE;
+		writel(value, port->base + RP_VEND_CTL2);
+	}
 }
 
 static void tegra_pcie_port_disable(struct tegra_pcie_port *port)
···
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_ANY_ID, PCI_ANY_ID, tegra_pcie_relax_enable);
 
-static int tegra_pcie_setup(int nr, struct pci_sys_data *sys)
+static int tegra_pcie_request_resources(struct tegra_pcie *pcie)
 {
-	struct tegra_pcie *pcie = sys_to_pcie(sys);
+	struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
+	struct list_head *windows = &host->windows;
 	struct device *dev = pcie->dev;
 	int err;
 
-	sys->mem_offset = pcie->offset.mem;
-	sys->io_offset = pcie->offset.io;
+	pci_add_resource_offset(windows, &pcie->pio, pcie->offset.io);
+	pci_add_resource_offset(windows, &pcie->mem, pcie->offset.mem);
+	pci_add_resource_offset(windows, &pcie->prefetch, pcie->offset.mem);
+	pci_add_resource(windows, &pcie->busn);
 
-	err = devm_request_resource(dev, &iomem_resource, &pcie->io);
+	err = devm_request_pci_bus_resources(dev, windows);
 	if (err < 0)
 		return err;
 
-	err = pci_remap_iospace(&pcie->pio, pcie->io.start);
-	if (!err)
-		pci_add_resource_offset(&sys->resources, &pcie->pio,
-					sys->io_offset);
+	pci_remap_iospace(&pcie->pio, pcie->io.start);
 
-	pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset);
-	pci_add_resource_offset(&sys->resources, &pcie->prefetch,
-				sys->mem_offset);
-	pci_add_resource(&sys->resources, &pcie->busn);
-
-	err = devm_request_pci_bus_resources(dev, &sys->resources);
-	if (err < 0)
-		return err;
-
-	return 1;
+	return 0;
 }
 
 static int tegra_pcie_map_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
 {
-	struct tegra_pcie *pcie = sys_to_pcie(pdev->bus->sysdata);
+	struct pci_host_bridge *host = pci_find_host_bridge(pdev->bus);
+	struct tegra_pcie *pcie = pci_host_bridge_priv(host);
 	int irq;
 
 	tegra_cpuidle_pcie_irqs_in_use();
···
 
 static int tegra_pcie_enable_msi(struct tegra_pcie *pcie)
 {
-	struct device *dev = pcie->dev;
-	struct platform_device *pdev = to_platform_device(dev);
+	struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
+	struct platform_device *pdev = to_platform_device(pcie->dev);
 	const struct tegra_pcie_soc *soc = pcie->soc;
 	struct tegra_msi *msi = &pcie->msi;
+	struct device *dev = pcie->dev;
 	unsigned long base;
 	int err;
 	u32 reg;
···
 	reg |= AFI_INTR_MASK_MSI_MASK;
 	afi_writel(pcie, reg, AFI_INTR_MASK);
 
+	host->msi = &msi->chip;
+
 	return 0;
 
 err:
···
 	struct device *dev = pcie->dev;
 	struct device_node *np = dev->of_node;
 
-	if (of_device_is_compatible(np, "nvidia,tegra124-pcie")) {
+	if (of_device_is_compatible(np, "nvidia,tegra124-pcie") ||
+	    of_device_is_compatible(np, "nvidia,tegra210-pcie")) {
 		switch (lanes) {
 		case 0x0000104:
 			dev_info(dev, "4x1, 1x1 configuration\n");
···
 	struct device_node *np = dev->of_node;
 	unsigned int i = 0;
 
-	if (of_device_is_compatible(np, "nvidia,tegra124-pcie")) {
+	if (of_device_is_compatible(np, "nvidia,tegra210-pcie")) {
+		pcie->num_supplies = 6;
+
+		pcie->supplies = devm_kcalloc(pcie->dev, pcie->num_supplies,
+					      sizeof(*pcie->supplies),
+					      GFP_KERNEL);
+		if (!pcie->supplies)
+			return -ENOMEM;
+
+		pcie->supplies[i++].supply = "avdd-pll-uerefe";
+		pcie->supplies[i++].supply = "hvddio-pex";
+		pcie->supplies[i++].supply = "dvddio-pex";
+		pcie->supplies[i++].supply = "dvdd-pex-pll";
+		pcie->supplies[i++].supply = "hvdd-pex-pll-e";
+		pcie->supplies[i++].supply = "vddio-pex-ctl";
+	} else if (of_device_is_compatible(np, "nvidia,tegra124-pcie")) {
 		pcie->num_supplies = 7;
 
 		pcie->supplies = devm_kcalloc(dev, pcie->num_supplies,
···
 	return false;
 }
 
-static int tegra_pcie_enable(struct tegra_pcie *pcie)
+static void tegra_pcie_enable_ports(struct tegra_pcie *pcie)
 {
 	struct device *dev = pcie->dev;
 	struct tegra_pcie_port *port, *tmp;
-	struct hw_pci hw;
 
 	list_for_each_entry_safe(port, tmp, &pcie->ports, list) {
 		dev_info(dev, "probing port %u, using %u lanes\n",
···
 		tegra_pcie_port_disable(port);
 		tegra_pcie_port_free(port);
 	}
-
-	memset(&hw, 0, sizeof(hw));
-
-#ifdef CONFIG_PCI_MSI
-	hw.msi_ctrl = &pcie->msi.chip;
-#endif
-
-	hw.nr_controllers = 1;
-	hw.private_data = (void **)&pcie;
-	hw.setup = tegra_pcie_setup;
-	hw.map_irq = tegra_pcie_map_irq;
-	hw.ops = &tegra_pcie_ops;
-
-	pci_common_init_dev(dev, &hw);
-	return 0;
 }
 
 static const struct tegra_pcie_soc tegra20_pcie = {
···
 	.has_intr_prsnt_sense = false,
 	.has_cml_clk = false,
 	.has_gen2 = false,
+	.force_pca_enable = false,
 };
 
 static const struct tegra_pcie_soc tegra30_pcie = {
···
 	.has_intr_prsnt_sense = true,
 	.has_cml_clk = true,
 	.has_gen2 = false,
+	.force_pca_enable = false,
 };
 
 static const struct tegra_pcie_soc tegra124_pcie = {
···
 	.has_intr_prsnt_sense = true,
 	.has_cml_clk = true,
 	.has_gen2 = true,
+	.force_pca_enable = false,
+};
+
+static const struct tegra_pcie_soc tegra210_pcie = {
+	.num_ports = 2,
+	.msi_base_shift = 8,
+	.pads_pll_ctl = PADS_PLL_CTL_TEGRA30,
+	.tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN,
+	.pads_refclk_cfg0 = 0x90b890b8,
+	.has_pex_clkreq_en = true,
+	.has_pex_bias_ctrl = true,
+	.has_intr_prsnt_sense = true,
+	.has_cml_clk = true,
+	.has_gen2 = true,
+	.force_pca_enable = true,
 };
 
 static const struct of_device_id tegra_pcie_of_match[] = {
+	{ .compatible = "nvidia,tegra210-pcie", .data = &tegra210_pcie },
 	{ .compatible = "nvidia,tegra124-pcie", .data = &tegra124_pcie },
 	{ .compatible = "nvidia,tegra30-pcie", .data = &tegra30_pcie },
 	{ .compatible = "nvidia,tegra20-pcie", .data = &tegra20_pcie },
···
 static int tegra_pcie_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
+	struct pci_host_bridge *host;
 	struct tegra_pcie *pcie;
+	struct pci_bus *child;
 	int err;
 
-	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
-	if (!pcie)
+	host = pci_alloc_host_bridge(sizeof(*pcie));
+	if (!host)
 		return -ENOMEM;
+
+	pcie = pci_host_bridge_priv(host);
 
 	pcie->soc = of_device_get_match_data(dev);
 	INIT_LIST_HEAD(&pcie->buses);
···
 	if (err)
 		goto put_resources;
 
+	err = tegra_pcie_request_resources(pcie);
+	if (err)
+		goto put_resources;
+
 	/* setup the AFI address translations */
 	tegra_pcie_setup_translations(pcie);
···
 		}
 	}
 
-	err = tegra_pcie_enable(pcie);
+	tegra_pcie_enable_ports(pcie);
+
+	pci_add_flags(PCI_REASSIGN_ALL_RSRC | PCI_REASSIGN_ALL_BUS);
+	host->busnr = pcie->busn.start;
+	host->dev.parent = &pdev->dev;
+	host->ops = &tegra_pcie_ops;
+
+	err = pci_register_host_bridge(host);
 	if (err < 0) {
-		dev_err(dev, "failed to enable PCIe ports: %d\n", err);
+		dev_err(dev, "failed to register host: %d\n", err);
 		goto disable_msi;
 	}
+
+	pci_scan_child_bus(host->bus);
+
+	pci_fixup_irqs(pci_common_swizzle, tegra_pcie_map_irq);
+	pci_bus_size_bridges(host->bus);
+	pci_bus_assign_resources(host->bus);
+
+	list_for_each_entry(child, &host->bus->children, node)
+		pcie_bus_configure_settings(child);
+
+	pci_bus_add_devices(host->bus);
 
 	if (IS_ENABLED(CONFIG_DEBUG_FS)) {
 		err = tegra_pcie_debugfs_init(pcie);
+8 -1
drivers/pci/host/pci-thunder-ecam.c
···
 #include <linux/pci-ecam.h>
 #include <linux/platform_device.h>
 
+#if defined(CONFIG_PCI_HOST_THUNDER_ECAM) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS))
+
 static void set_val(u32 v, int where, int size, u32 *val)
 {
 	int shift = (where & 3) * 8;
···
 	return pci_generic_config_write(bus, devfn, where, size, val);
 }
 
-static struct pci_ecam_ops pci_thunder_ecam_ops = {
+struct pci_ecam_ops pci_thunder_ecam_ops = {
 	.bus_shift	= 20,
 	.pci_ops	= {
 		.map_bus	= pci_ecam_map_bus,
···
 		.write		= thunder_ecam_config_write,
 	}
 };
+
+#ifdef CONFIG_PCI_HOST_THUNDER_ECAM
 
 static const struct of_device_id thunder_ecam_of_match[] = {
 	{ .compatible = "cavium,pci-host-thunder-ecam" },
···
 	.probe = thunder_ecam_probe,
 };
 builtin_platform_driver(thunder_ecam_driver);
+
+#endif
+#endif
+71 -23
drivers/pci/host/pci-thunder-pem.c
··· 18 18 #include <linux/init.h> 19 19 #include <linux/of_address.h> 20 20 #include <linux/of_pci.h> 21 + #include <linux/pci-acpi.h> 21 22 #include <linux/pci-ecam.h> 22 23 #include <linux/platform_device.h> 24 + #include "../pci.h" 25 + 26 + #if defined(CONFIG_PCI_HOST_THUNDER_PEM) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)) 23 27 24 28 #define PEM_CFG_WR 0x28 25 29 #define PEM_CFG_RD 0x30 ··· 288 284 return pci_generic_config_write(bus, devfn, where, size, val); 289 285 } 290 286 291 - static int thunder_pem_init(struct pci_config_window *cfg) 287 + static int thunder_pem_init(struct device *dev, struct pci_config_window *cfg, 288 + struct resource *res_pem) 292 289 { 293 - struct device *dev = cfg->parent; 294 - resource_size_t bar4_start; 295 - struct resource *res_pem; 296 290 struct thunder_pem_pci *pem_pci; 297 - struct platform_device *pdev; 298 - 299 - /* Only OF support for now */ 300 - if (!dev->of_node) 301 - return -EINVAL; 291 + resource_size_t bar4_start; 302 292 303 293 pem_pci = devm_kzalloc(dev, sizeof(*pem_pci), GFP_KERNEL); 304 294 if (!pem_pci) 305 295 return -ENOMEM; 306 - 307 - pdev = to_platform_device(dev); 308 - 309 - /* 310 - * The second register range is the PEM bridge to the PCIe 311 - * bus. It has a different config access method than those 312 - * devices behind the bridge. 
313 - */ 314 - res_pem = platform_get_resource(pdev, IORESOURCE_MEM, 1); 315 - if (!res_pem) { 316 - dev_err(dev, "missing \"reg[1]\"property\n"); 317 - return -EINVAL; 318 - } 319 296 320 297 pem_pci->pem_reg_base = devm_ioremap(dev, res_pem->start, 0x10000); 321 298 if (!pem_pci->pem_reg_base) ··· 317 332 return 0; 318 333 } 319 334 335 + #if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) 336 + 337 + static int thunder_pem_acpi_init(struct pci_config_window *cfg) 338 + { 339 + struct device *dev = cfg->parent; 340 + struct acpi_device *adev = to_acpi_device(dev); 341 + struct acpi_pci_root *root = acpi_driver_data(adev); 342 + struct resource *res_pem; 343 + int ret; 344 + 345 + res_pem = devm_kzalloc(&adev->dev, sizeof(*res_pem), GFP_KERNEL); 346 + if (!res_pem) 347 + return -ENOMEM; 348 + 349 + ret = acpi_get_rc_resources(dev, "THRX0002", root->segment, res_pem); 350 + if (ret) { 351 + dev_err(dev, "can't get rc base address\n"); 352 + return ret; 353 + } 354 + 355 + return thunder_pem_init(dev, cfg, res_pem); 356 + } 357 + 358 + struct pci_ecam_ops thunder_pem_ecam_ops = { 359 + .bus_shift = 24, 360 + .init = thunder_pem_acpi_init, 361 + .pci_ops = { 362 + .map_bus = pci_ecam_map_bus, 363 + .read = thunder_pem_config_read, 364 + .write = thunder_pem_config_write, 365 + } 366 + }; 367 + 368 + #endif 369 + 370 + #ifdef CONFIG_PCI_HOST_THUNDER_PEM 371 + 372 + static int thunder_pem_platform_init(struct pci_config_window *cfg) 373 + { 374 + struct device *dev = cfg->parent; 375 + struct platform_device *pdev = to_platform_device(dev); 376 + struct resource *res_pem; 377 + 378 + if (!dev->of_node) 379 + return -EINVAL; 380 + 381 + /* 382 + * The second register range is the PEM bridge to the PCIe 383 + * bus. It has a different config access method than those 384 + * devices behind the bridge. 
385 + */ 386 + res_pem = platform_get_resource(pdev, IORESOURCE_MEM, 1); 387 + if (!res_pem) { 388 + dev_err(dev, "missing \"reg[1]\" property\n"); 389 + return -EINVAL; 390 + } 391 + 392 + return thunder_pem_init(dev, cfg, res_pem); 393 + } 394 + 320 395 static struct pci_ecam_ops pci_thunder_pem_ops = { 321 396 .bus_shift = 24, 322 - .init = thunder_pem_init, 397 + .init = thunder_pem_platform_init, 323 398 .pci_ops = { 324 399 .map_bus = pci_ecam_map_bus, 325 400 .read = thunder_pem_config_read, ··· 405 360 .probe = thunder_pem_probe, 406 361 }; 407 362 builtin_platform_driver(thunder_pem_driver); 363 + 364 + #endif 365 + #endif
+119 -7
drivers/pci/host/pci-xgene.c
··· 27 27 #include <linux/of_irq.h> 28 28 #include <linux/of_pci.h> 29 29 #include <linux/pci.h> 30 + #include <linux/pci-acpi.h> 31 + #include <linux/pci-ecam.h> 30 32 #include <linux/platform_device.h> 31 33 #include <linux/slab.h> 32 34 ··· 66 64 /* PCIe IP version */ 67 65 #define XGENE_PCIE_IP_VER_UNKN 0 68 66 #define XGENE_PCIE_IP_VER_1 1 67 + #define XGENE_PCIE_IP_VER_2 2 69 68 69 + #if defined(CONFIG_PCI_XGENE) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)) 70 70 struct xgene_pcie_port { 71 71 struct device_node *node; 72 72 struct device *dev; ··· 95 91 return (addr & PCI_BASE_ADDRESS_MEM_MASK) | flags; 96 92 } 97 93 94 + static inline struct xgene_pcie_port *pcie_bus_to_port(struct pci_bus *bus) 95 + { 96 + struct pci_config_window *cfg; 97 + 98 + if (acpi_disabled) 99 + return (struct xgene_pcie_port *)(bus->sysdata); 100 + 101 + cfg = bus->sysdata; 102 + return (struct xgene_pcie_port *)(cfg->priv); 103 + } 104 + 98 105 /* 99 106 * When the address bit [17:16] is 2'b01, the Configuration access will be 100 107 * treated as Type 1 and it will be forwarded to external PCIe device. 
101 108 */ 102 109 static void __iomem *xgene_pcie_get_cfg_base(struct pci_bus *bus) 103 110 { 104 - struct xgene_pcie_port *port = bus->sysdata; 111 + struct xgene_pcie_port *port = pcie_bus_to_port(bus); 105 112 106 113 if (bus->number >= (bus->primary + 1)) 107 114 return port->cfg_base + AXI_EP_CFG_ACCESS; ··· 126 111 */ 127 112 static void xgene_pcie_set_rtdid_reg(struct pci_bus *bus, uint devfn) 128 113 { 129 - struct xgene_pcie_port *port = bus->sysdata; 114 + struct xgene_pcie_port *port = pcie_bus_to_port(bus); 130 115 unsigned int b, d, f; 131 116 u32 rtdid_val = 0; 132 117 ··· 173 158 static int xgene_pcie_config_read32(struct pci_bus *bus, unsigned int devfn, 174 159 int where, int size, u32 *val) 175 160 { 176 - struct xgene_pcie_port *port = bus->sysdata; 161 + struct xgene_pcie_port *port = pcie_bus_to_port(bus); 177 162 178 163 if (pci_generic_config_read32(bus, devfn, where & ~0x3, 4, val) != 179 164 PCIBIOS_SUCCESSFUL) ··· 197 182 198 183 return PCIBIOS_SUCCESSFUL; 199 184 } 185 + #endif 200 186 201 - static struct pci_ops xgene_pcie_ops = { 202 - .map_bus = xgene_pcie_map_bus, 203 - .read = xgene_pcie_config_read32, 204 - .write = pci_generic_config_write32, 187 + #if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) 188 + static int xgene_get_csr_resource(struct acpi_device *adev, 189 + struct resource *res) 190 + { 191 + struct device *dev = &adev->dev; 192 + struct resource_entry *entry; 193 + struct list_head list; 194 + unsigned long flags; 195 + int ret; 196 + 197 + INIT_LIST_HEAD(&list); 198 + flags = IORESOURCE_MEM; 199 + ret = acpi_dev_get_resources(adev, &list, 200 + acpi_dev_filter_resource_type_cb, 201 + (void *) flags); 202 + if (ret < 0) { 203 + dev_err(dev, "failed to parse _CRS method, error code %d\n", 204 + ret); 205 + return ret; 206 + } 207 + 208 + if (ret == 0) { 209 + dev_err(dev, "no IO and memory resources present in _CRS\n"); 210 + return -EINVAL; 211 + } 212 + 213 + entry = list_first_entry(&list, struct resource_entry, 
node); 214 + *res = *entry->res; 215 + acpi_dev_free_resource_list(&list); 216 + return 0; 217 + } 218 + 219 + static int xgene_pcie_ecam_init(struct pci_config_window *cfg, u32 ipversion) 220 + { 221 + struct device *dev = cfg->parent; 222 + struct acpi_device *adev = to_acpi_device(dev); 223 + struct xgene_pcie_port *port; 224 + struct resource csr; 225 + int ret; 226 + 227 + port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL); 228 + if (!port) 229 + return -ENOMEM; 230 + 231 + ret = xgene_get_csr_resource(adev, &csr); 232 + if (ret) { 233 + dev_err(dev, "can't get CSR resource\n"); 234 + kfree(port); 235 + return ret; 236 + } 237 + port->csr_base = devm_ioremap_resource(dev, &csr); 238 + if (IS_ERR(port->csr_base)) { 239 + kfree(port); 240 + return -ENOMEM; 241 + } 242 + 243 + port->cfg_base = cfg->win; 244 + port->version = ipversion; 245 + 246 + cfg->priv = port; 247 + return 0; 248 + } 249 + 250 + static int xgene_v1_pcie_ecam_init(struct pci_config_window *cfg) 251 + { 252 + return xgene_pcie_ecam_init(cfg, XGENE_PCIE_IP_VER_1); 253 + } 254 + 255 + struct pci_ecam_ops xgene_v1_pcie_ecam_ops = { 256 + .bus_shift = 16, 257 + .init = xgene_v1_pcie_ecam_init, 258 + .pci_ops = { 259 + .map_bus = xgene_pcie_map_bus, 260 + .read = xgene_pcie_config_read32, 261 + .write = pci_generic_config_write, 262 + } 205 263 }; 206 264 265 + static int xgene_v2_pcie_ecam_init(struct pci_config_window *cfg) 266 + { 267 + return xgene_pcie_ecam_init(cfg, XGENE_PCIE_IP_VER_2); 268 + } 269 + 270 + struct pci_ecam_ops xgene_v2_pcie_ecam_ops = { 271 + .bus_shift = 16, 272 + .init = xgene_v2_pcie_ecam_init, 273 + .pci_ops = { 274 + .map_bus = xgene_pcie_map_bus, 275 + .read = xgene_pcie_config_read32, 276 + .write = pci_generic_config_write, 277 + } 278 + }; 279 + #endif 280 + 281 + #if defined(CONFIG_PCI_XGENE) 207 282 static u64 xgene_pcie_set_ib_mask(struct xgene_pcie_port *port, u32 addr, 208 283 u32 flags, u64 size) 209 284 { ··· 626 521 return 0; 627 522 } 628 523 524 + static 
struct pci_ops xgene_pcie_ops = { 525 + .map_bus = xgene_pcie_map_bus, 526 + .read = xgene_pcie_config_read32, 527 + .write = pci_generic_config_write32, 528 + }; 529 + 629 530 static int xgene_pcie_probe_bridge(struct platform_device *pdev) 630 531 { 631 532 struct device *dev = &pdev->dev; ··· 702 591 .probe = xgene_pcie_probe_bridge, 703 592 }; 704 593 builtin_platform_driver(xgene_pcie_driver); 594 + #endif
+2 -8
drivers/pci/host/pcie-altera.c
··· 550 550 551 551 cra = platform_get_resource_byname(pdev, IORESOURCE_MEM, "Cra"); 552 552 pcie->cra_base = devm_ioremap_resource(dev, cra); 553 - if (IS_ERR(pcie->cra_base)) { 554 - dev_err(dev, "failed to map cra memory\n"); 553 + if (IS_ERR(pcie->cra_base)) 555 554 return PTR_ERR(pcie->cra_base); 556 - } 557 555 558 556 /* setup IRQ */ 559 557 pcie->irq = platform_get_irq(pdev, 0); ··· 639 641 }, 640 642 }; 641 643 642 - static int altera_pcie_init(void) 643 - { 644 - return platform_driver_register(&altera_pcie_driver); 645 - } 646 - device_initcall(altera_pcie_init); 644 + builtin_platform_driver(altera_pcie_driver);
+102 -5
drivers/pci/host/pcie-hisi.c
··· 18 18 #include <linux/of_pci.h> 19 19 #include <linux/platform_device.h> 20 20 #include <linux/of_device.h> 21 + #include <linux/pci.h> 22 + #include <linux/pci-acpi.h> 23 + #include <linux/pci-ecam.h> 21 24 #include <linux/regmap.h> 25 + #include "../pci.h" 26 + 27 + #if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) 28 + 29 + static int hisi_pcie_acpi_rd_conf(struct pci_bus *bus, u32 devfn, int where, 30 + int size, u32 *val) 31 + { 32 + struct pci_config_window *cfg = bus->sysdata; 33 + int dev = PCI_SLOT(devfn); 34 + 35 + if (bus->number == cfg->busr.start) { 36 + /* access only one slot on each root port */ 37 + if (dev > 0) 38 + return PCIBIOS_DEVICE_NOT_FOUND; 39 + else 40 + return pci_generic_config_read32(bus, devfn, where, 41 + size, val); 42 + } 43 + 44 + return pci_generic_config_read(bus, devfn, where, size, val); 45 + } 46 + 47 + static int hisi_pcie_acpi_wr_conf(struct pci_bus *bus, u32 devfn, 48 + int where, int size, u32 val) 49 + { 50 + struct pci_config_window *cfg = bus->sysdata; 51 + int dev = PCI_SLOT(devfn); 52 + 53 + if (bus->number == cfg->busr.start) { 54 + /* access only one slot on each root port */ 55 + if (dev > 0) 56 + return PCIBIOS_DEVICE_NOT_FOUND; 57 + else 58 + return pci_generic_config_write32(bus, devfn, where, 59 + size, val); 60 + } 61 + 62 + return pci_generic_config_write(bus, devfn, where, size, val); 63 + } 64 + 65 + static void __iomem *hisi_pcie_map_bus(struct pci_bus *bus, unsigned int devfn, 66 + int where) 67 + { 68 + struct pci_config_window *cfg = bus->sysdata; 69 + void __iomem *reg_base = cfg->priv; 70 + 71 + if (bus->number == cfg->busr.start) 72 + return reg_base + where; 73 + else 74 + return pci_ecam_map_bus(bus, devfn, where); 75 + } 76 + 77 + static int hisi_pcie_init(struct pci_config_window *cfg) 78 + { 79 + struct device *dev = cfg->parent; 80 + struct acpi_device *adev = to_acpi_device(dev); 81 + struct acpi_pci_root *root = acpi_driver_data(adev); 82 + struct resource *res; 83 + void __iomem 
*reg_base; 84 + int ret; 85 + 86 + /* 87 + * Retrieve RC base and size from a HISI0081 device with _UID 88 + * matching our segment. 89 + */ 90 + res = devm_kzalloc(dev, sizeof(*res), GFP_KERNEL); 91 + if (!res) 92 + return -ENOMEM; 93 + 94 + ret = acpi_get_rc_resources(dev, "HISI0081", root->segment, res); 95 + if (ret) { 96 + dev_err(dev, "can't get rc base address\n"); 97 + return -ENOMEM; 98 + } 99 + 100 + reg_base = devm_ioremap(dev, res->start, resource_size(res)); 101 + if (!reg_base) 102 + return -ENOMEM; 103 + 104 + cfg->priv = reg_base; 105 + return 0; 106 + } 107 + 108 + struct pci_ecam_ops hisi_pcie_ops = { 109 + .bus_shift = 20, 110 + .init = hisi_pcie_init, 111 + .pci_ops = { 112 + .map_bus = hisi_pcie_map_bus, 113 + .read = hisi_pcie_acpi_rd_conf, 114 + .write = hisi_pcie_acpi_wr_conf, 115 + } 116 + }; 117 + 118 + #endif 119 + 120 + #ifdef CONFIG_PCI_HISI 22 121 23 122 #include "pcie-designware.h" 24 123 ··· 284 185 285 186 reg = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rc_dbi"); 286 187 pp->dbi_base = devm_ioremap_resource(dev, reg); 287 - if (IS_ERR(pp->dbi_base)) { 288 - dev_err(dev, "cannot get rc_dbi base\n"); 188 + if (IS_ERR(pp->dbi_base)) 289 189 return PTR_ERR(pp->dbi_base); 290 - } 291 190 292 191 ret = hisi_add_pcie_port(hisi_pcie, pdev); 293 192 if (ret) 294 193 return ret; 295 - 296 - dev_warn(dev, "only 32-bit config accesses supported; smaller writes may corrupt adjacent RW1C fields\n"); 297 194 298 195 return 0; 299 196 } ··· 322 227 }, 323 228 }; 324 229 builtin_platform_driver(hisi_pcie_driver); 230 + 231 + #endif
+1
drivers/pci/host/pcie-iproc-bcma.c
··· 54 54 55 55 pcie->dev = dev; 56 56 57 + pcie->type = IPROC_PCIE_PAXB_BCMA; 57 58 pcie->base = bdev->io_addr; 58 59 if (!pcie->base) { 59 60 dev_err(dev, "no controller registers\n");
+1
drivers/pci/host/pcie-iproc-msi.c
··· 563 563 } 564 564 565 565 switch (pcie->type) { 566 + case IPROC_PCIE_PAXB_BCMA: 566 567 case IPROC_PCIE_PAXB: 567 568 msi->reg_offsets = iproc_msi_reg_paxb; 568 569 msi->nr_eq_region = 1;
+14 -14
drivers/pci/host/pcie-iproc-platform.c
··· 31 31 .compatible = "brcm,iproc-pcie", 32 32 .data = (int *)IPROC_PCIE_PAXB, 33 33 }, { 34 + .compatible = "brcm,iproc-pcie-paxb-v2", 35 + .data = (int *)IPROC_PCIE_PAXB_V2, 36 + }, { 34 37 .compatible = "brcm,iproc-pcie-paxc", 35 38 .data = (int *)IPROC_PCIE_PAXC, 39 + }, { 40 + .compatible = "brcm,iproc-pcie-paxc-v2", 41 + .data = (int *)IPROC_PCIE_PAXC_V2, 36 42 }, 37 43 { /* sentinel */ } 38 44 }; ··· 90 84 return ret; 91 85 } 92 86 pcie->ob.axi_offset = val; 93 - 94 - ret = of_property_read_u32(np, "brcm,pcie-ob-window-size", 95 - &val); 96 - if (ret) { 97 - dev_err(dev, 98 - "missing brcm,pcie-ob-window-size property\n"); 99 - return ret; 100 - } 101 - pcie->ob.window_size = (resource_size_t)val * SZ_1M; 102 - 103 - if (of_property_read_bool(np, "brcm,pcie-ob-oarr-size")) 104 - pcie->ob.set_oarr_size = true; 105 - 106 87 pcie->need_ob_cfg = true; 107 88 } 108 89 ··· 108 115 return ret; 109 116 } 110 117 111 - pcie->map_irq = of_irq_parse_and_map_pci; 118 + /* PAXC doesn't support legacy IRQs, skip mapping */ 119 + switch (pcie->type) { 120 + case IPROC_PCIE_PAXC: 121 + case IPROC_PCIE_PAXC_V2: 122 + break; 123 + default: 124 + pcie->map_irq = of_irq_parse_and_map_pci; 125 + } 112 126 113 127 ret = iproc_pcie_setup(pcie, &res); 114 128 if (ret)
+847 -100
drivers/pci/host/pcie-iproc.c
··· 21 21 #include <linux/slab.h> 22 22 #include <linux/delay.h> 23 23 #include <linux/interrupt.h> 24 + #include <linux/irqchip/arm-gic-v3.h> 24 25 #include <linux/platform_device.h> 25 26 #include <linux/of_address.h> 26 27 #include <linux/of_pci.h> ··· 38 37 #define RC_PCIE_RST_OUTPUT_SHIFT 0 39 38 #define RC_PCIE_RST_OUTPUT BIT(RC_PCIE_RST_OUTPUT_SHIFT) 40 39 #define PAXC_RESET_MASK 0x7f 40 + 41 + #define GIC_V3_CFG_SHIFT 0 42 + #define GIC_V3_CFG BIT(GIC_V3_CFG_SHIFT) 43 + 44 + #define MSI_ENABLE_CFG_SHIFT 0 45 + #define MSI_ENABLE_CFG BIT(MSI_ENABLE_CFG_SHIFT) 41 46 42 47 #define CFG_IND_ADDR_MASK 0x00001ffc 43 48 ··· 65 58 #define PCIE_DL_ACTIVE_SHIFT 2 66 59 #define PCIE_DL_ACTIVE BIT(PCIE_DL_ACTIVE_SHIFT) 67 60 61 + #define APB_ERR_EN_SHIFT 0 62 + #define APB_ERR_EN BIT(APB_ERR_EN_SHIFT) 63 + 64 + /* derive the enum index of the outbound/inbound mapping registers */ 65 + #define MAP_REG(base_reg, index) ((base_reg) + (index) * 2) 66 + 67 + /* 68 + * Maximum number of outbound mapping window sizes that can be supported by any 69 + * OARR/OMAP mapping pair 70 + */ 71 + #define MAX_NUM_OB_WINDOW_SIZES 4 72 + 68 73 #define OARR_VALID_SHIFT 0 69 74 #define OARR_VALID BIT(OARR_VALID_SHIFT) 70 75 #define OARR_SIZE_CFG_SHIFT 1 71 - #define OARR_SIZE_CFG BIT(OARR_SIZE_CFG_SHIFT) 76 + 77 + /* 78 + * Maximum number of inbound mapping region sizes that can be supported by an 79 + * IARR 80 + */ 81 + #define MAX_NUM_IB_REGION_SIZES 9 82 + 83 + #define IMAP_VALID_SHIFT 0 84 + #define IMAP_VALID BIT(IMAP_VALID_SHIFT) 72 85 73 86 #define PCI_EXP_CAP 0xac 74 87 75 - #define MAX_NUM_OB_WINDOWS 2 76 - 77 88 #define IPROC_PCIE_REG_INVALID 0xffff 78 89 90 + /** 91 + * iProc PCIe outbound mapping controller specific parameters 92 + * 93 + * @window_sizes: list of supported outbound mapping window sizes in MB 94 + * @nr_sizes: number of supported outbound mapping window sizes 95 + */ 96 + struct iproc_pcie_ob_map { 97 + resource_size_t window_sizes[MAX_NUM_OB_WINDOW_SIZES]; 98 + 
unsigned int nr_sizes; 99 + }; 100 + 101 + static const struct iproc_pcie_ob_map paxb_ob_map[] = { 102 + { 103 + /* OARR0/OMAP0 */ 104 + .window_sizes = { 128, 256 }, 105 + .nr_sizes = 2, 106 + }, 107 + { 108 + /* OARR1/OMAP1 */ 109 + .window_sizes = { 128, 256 }, 110 + .nr_sizes = 2, 111 + }, 112 + }; 113 + 114 + static const struct iproc_pcie_ob_map paxb_v2_ob_map[] = { 115 + { 116 + /* OARR0/OMAP0 */ 117 + .window_sizes = { 128, 256 }, 118 + .nr_sizes = 2, 119 + }, 120 + { 121 + /* OARR1/OMAP1 */ 122 + .window_sizes = { 128, 256 }, 123 + .nr_sizes = 2, 124 + }, 125 + { 126 + /* OARR2/OMAP2 */ 127 + .window_sizes = { 128, 256, 512, 1024 }, 128 + .nr_sizes = 4, 129 + }, 130 + { 131 + /* OARR3/OMAP3 */ 132 + .window_sizes = { 128, 256, 512, 1024 }, 133 + .nr_sizes = 4, 134 + }, 135 + }; 136 + 137 + /** 138 + * iProc PCIe inbound mapping type 139 + */ 140 + enum iproc_pcie_ib_map_type { 141 + /* for DDR memory */ 142 + IPROC_PCIE_IB_MAP_MEM = 0, 143 + 144 + /* for device I/O memory */ 145 + IPROC_PCIE_IB_MAP_IO, 146 + 147 + /* invalid or unused */ 148 + IPROC_PCIE_IB_MAP_INVALID 149 + }; 150 + 151 + /** 152 + * iProc PCIe inbound mapping controller specific parameters 153 + * 154 + * @type: inbound mapping region type 155 + * @size_unit: inbound mapping region size unit, could be SZ_1K, SZ_1M, or 156 + * SZ_1G 157 + * @region_sizes: list of supported inbound mapping region sizes in KB, MB, or 158 + * GB, depending on the size unit 159 + * @nr_sizes: number of supported inbound mapping region sizes 160 + * @nr_windows: number of supported inbound mapping windows for the region 161 + * @imap_addr_offset: register offset between the upper and lower 32-bit 162 + * IMAP address registers 163 + * @imap_window_offset: register offset between each IMAP window 164 + */ 165 + struct iproc_pcie_ib_map { 166 + enum iproc_pcie_ib_map_type type; 167 + unsigned int size_unit; 168 + resource_size_t region_sizes[MAX_NUM_IB_REGION_SIZES]; 169 + unsigned int nr_sizes; 170 + unsigned 
int nr_windows; 171 + u16 imap_addr_offset; 172 + u16 imap_window_offset; 173 + }; 174 + 175 + static const struct iproc_pcie_ib_map paxb_v2_ib_map[] = { 176 + { 177 + /* IARR0/IMAP0 */ 178 + .type = IPROC_PCIE_IB_MAP_IO, 179 + .size_unit = SZ_1K, 180 + .region_sizes = { 32 }, 181 + .nr_sizes = 1, 182 + .nr_windows = 8, 183 + .imap_addr_offset = 0x40, 184 + .imap_window_offset = 0x4, 185 + }, 186 + { 187 + /* IARR1/IMAP1 (currently unused) */ 188 + .type = IPROC_PCIE_IB_MAP_INVALID, 189 + }, 190 + { 191 + /* IARR2/IMAP2 */ 192 + .type = IPROC_PCIE_IB_MAP_MEM, 193 + .size_unit = SZ_1M, 194 + .region_sizes = { 64, 128, 256, 512, 1024, 2048, 4096, 8192, 195 + 16384 }, 196 + .nr_sizes = 9, 197 + .nr_windows = 1, 198 + .imap_addr_offset = 0x4, 199 + .imap_window_offset = 0x8, 200 + }, 201 + { 202 + /* IARR3/IMAP3 */ 203 + .type = IPROC_PCIE_IB_MAP_MEM, 204 + .size_unit = SZ_1G, 205 + .region_sizes = { 1, 2, 4, 8, 16, 32 }, 206 + .nr_sizes = 6, 207 + .nr_windows = 8, 208 + .imap_addr_offset = 0x4, 209 + .imap_window_offset = 0x8, 210 + }, 211 + { 212 + /* IARR4/IMAP4 */ 213 + .type = IPROC_PCIE_IB_MAP_MEM, 214 + .size_unit = SZ_1G, 215 + .region_sizes = { 32, 64, 128, 256, 512 }, 216 + .nr_sizes = 5, 217 + .nr_windows = 8, 218 + .imap_addr_offset = 0x4, 219 + .imap_window_offset = 0x8, 220 + }, 221 + }; 222 + 223 + /* 224 + * iProc PCIe host registers 225 + */ 79 226 enum iproc_pcie_reg { 227 + /* clock/reset signal control */ 80 228 IPROC_PCIE_CLK_CTRL = 0, 229 + 230 + /* 231 + * To allow MSI to be steered to an external MSI controller (e.g., ARM 232 + * GICv3 ITS) 233 + */ 234 + IPROC_PCIE_MSI_GIC_MODE, 235 + 236 + /* 237 + * IPROC_PCIE_MSI_BASE_ADDR and IPROC_PCIE_MSI_WINDOW_SIZE define the 238 + * window where the MSI posted writes are written, for the writes to be 239 + * interpreted as MSI writes. 
240 + */ 241 + IPROC_PCIE_MSI_BASE_ADDR, 242 + IPROC_PCIE_MSI_WINDOW_SIZE, 243 + 244 + /* 245 + * To hold the address of the register where the MSI writes are 246 + * programed. When ARM GICv3 ITS is used, this should be programmed 247 + * with the address of the GITS_TRANSLATER register. 248 + */ 249 + IPROC_PCIE_MSI_ADDR_LO, 250 + IPROC_PCIE_MSI_ADDR_HI, 251 + 252 + /* enable MSI */ 253 + IPROC_PCIE_MSI_EN_CFG, 254 + 255 + /* allow access to root complex configuration space */ 81 256 IPROC_PCIE_CFG_IND_ADDR, 82 257 IPROC_PCIE_CFG_IND_DATA, 258 + 259 + /* allow access to device configuration space */ 83 260 IPROC_PCIE_CFG_ADDR, 84 261 IPROC_PCIE_CFG_DATA, 262 + 263 + /* enable INTx */ 85 264 IPROC_PCIE_INTX_EN, 86 - IPROC_PCIE_OARR_LO, 87 - IPROC_PCIE_OARR_HI, 88 - IPROC_PCIE_OMAP_LO, 89 - IPROC_PCIE_OMAP_HI, 265 + 266 + /* outbound address mapping */ 267 + IPROC_PCIE_OARR0, 268 + IPROC_PCIE_OMAP0, 269 + IPROC_PCIE_OARR1, 270 + IPROC_PCIE_OMAP1, 271 + IPROC_PCIE_OARR2, 272 + IPROC_PCIE_OMAP2, 273 + IPROC_PCIE_OARR3, 274 + IPROC_PCIE_OMAP3, 275 + 276 + /* inbound address mapping */ 277 + IPROC_PCIE_IARR0, 278 + IPROC_PCIE_IMAP0, 279 + IPROC_PCIE_IARR1, 280 + IPROC_PCIE_IMAP1, 281 + IPROC_PCIE_IARR2, 282 + IPROC_PCIE_IMAP2, 283 + IPROC_PCIE_IARR3, 284 + IPROC_PCIE_IMAP3, 285 + IPROC_PCIE_IARR4, 286 + IPROC_PCIE_IMAP4, 287 + 288 + /* link status */ 90 289 IPROC_PCIE_LINK_STATUS, 290 + 291 + /* enable APB error for unsupported requests */ 292 + IPROC_PCIE_APB_ERR_EN, 293 + 294 + /* total number of core registers */ 295 + IPROC_PCIE_MAX_NUM_REG, 296 + }; 297 + 298 + /* iProc PCIe PAXB BCMA registers */ 299 + static const u16 iproc_pcie_reg_paxb_bcma[] = { 300 + [IPROC_PCIE_CLK_CTRL] = 0x000, 301 + [IPROC_PCIE_CFG_IND_ADDR] = 0x120, 302 + [IPROC_PCIE_CFG_IND_DATA] = 0x124, 303 + [IPROC_PCIE_CFG_ADDR] = 0x1f8, 304 + [IPROC_PCIE_CFG_DATA] = 0x1fc, 305 + [IPROC_PCIE_INTX_EN] = 0x330, 306 + [IPROC_PCIE_LINK_STATUS] = 0xf0c, 91 307 }; 92 308 93 309 /* iProc PCIe PAXB 
registers */ 94 310 static const u16 iproc_pcie_reg_paxb[] = { 95 - [IPROC_PCIE_CLK_CTRL] = 0x000, 96 - [IPROC_PCIE_CFG_IND_ADDR] = 0x120, 97 - [IPROC_PCIE_CFG_IND_DATA] = 0x124, 98 - [IPROC_PCIE_CFG_ADDR] = 0x1f8, 99 - [IPROC_PCIE_CFG_DATA] = 0x1fc, 100 - [IPROC_PCIE_INTX_EN] = 0x330, 101 - [IPROC_PCIE_OARR_LO] = 0xd20, 102 - [IPROC_PCIE_OARR_HI] = 0xd24, 103 - [IPROC_PCIE_OMAP_LO] = 0xd40, 104 - [IPROC_PCIE_OMAP_HI] = 0xd44, 105 - [IPROC_PCIE_LINK_STATUS] = 0xf0c, 311 + [IPROC_PCIE_CLK_CTRL] = 0x000, 312 + [IPROC_PCIE_CFG_IND_ADDR] = 0x120, 313 + [IPROC_PCIE_CFG_IND_DATA] = 0x124, 314 + [IPROC_PCIE_CFG_ADDR] = 0x1f8, 315 + [IPROC_PCIE_CFG_DATA] = 0x1fc, 316 + [IPROC_PCIE_INTX_EN] = 0x330, 317 + [IPROC_PCIE_OARR0] = 0xd20, 318 + [IPROC_PCIE_OMAP0] = 0xd40, 319 + [IPROC_PCIE_OARR1] = 0xd28, 320 + [IPROC_PCIE_OMAP1] = 0xd48, 321 + [IPROC_PCIE_LINK_STATUS] = 0xf0c, 322 + [IPROC_PCIE_APB_ERR_EN] = 0xf40, 323 + }; 324 + 325 + /* iProc PCIe PAXB v2 registers */ 326 + static const u16 iproc_pcie_reg_paxb_v2[] = { 327 + [IPROC_PCIE_CLK_CTRL] = 0x000, 328 + [IPROC_PCIE_CFG_IND_ADDR] = 0x120, 329 + [IPROC_PCIE_CFG_IND_DATA] = 0x124, 330 + [IPROC_PCIE_CFG_ADDR] = 0x1f8, 331 + [IPROC_PCIE_CFG_DATA] = 0x1fc, 332 + [IPROC_PCIE_INTX_EN] = 0x330, 333 + [IPROC_PCIE_OARR0] = 0xd20, 334 + [IPROC_PCIE_OMAP0] = 0xd40, 335 + [IPROC_PCIE_OARR1] = 0xd28, 336 + [IPROC_PCIE_OMAP1] = 0xd48, 337 + [IPROC_PCIE_OARR2] = 0xd60, 338 + [IPROC_PCIE_OMAP2] = 0xd68, 339 + [IPROC_PCIE_OARR3] = 0xdf0, 340 + [IPROC_PCIE_OMAP3] = 0xdf8, 341 + [IPROC_PCIE_IARR0] = 0xd00, 342 + [IPROC_PCIE_IMAP0] = 0xc00, 343 + [IPROC_PCIE_IARR2] = 0xd10, 344 + [IPROC_PCIE_IMAP2] = 0xcc0, 345 + [IPROC_PCIE_IARR3] = 0xe00, 346 + [IPROC_PCIE_IMAP3] = 0xe08, 347 + [IPROC_PCIE_IARR4] = 0xe68, 348 + [IPROC_PCIE_IMAP4] = 0xe70, 349 + [IPROC_PCIE_LINK_STATUS] = 0xf0c, 350 + [IPROC_PCIE_APB_ERR_EN] = 0xf40, 106 351 }; 107 352 108 353 /* iProc PCIe PAXC v1 registers */ 109 354 static const u16 iproc_pcie_reg_paxc[] = { 110 - 
[IPROC_PCIE_CLK_CTRL] = 0x000, 111 - [IPROC_PCIE_CFG_IND_ADDR] = 0x1f0, 112 - [IPROC_PCIE_CFG_IND_DATA] = 0x1f4, 113 - [IPROC_PCIE_CFG_ADDR] = 0x1f8, 114 - [IPROC_PCIE_CFG_DATA] = 0x1fc, 115 - [IPROC_PCIE_INTX_EN] = IPROC_PCIE_REG_INVALID, 116 - [IPROC_PCIE_OARR_LO] = IPROC_PCIE_REG_INVALID, 117 - [IPROC_PCIE_OARR_HI] = IPROC_PCIE_REG_INVALID, 118 - [IPROC_PCIE_OMAP_LO] = IPROC_PCIE_REG_INVALID, 119 - [IPROC_PCIE_OMAP_HI] = IPROC_PCIE_REG_INVALID, 120 - [IPROC_PCIE_LINK_STATUS] = IPROC_PCIE_REG_INVALID, 355 + [IPROC_PCIE_CLK_CTRL] = 0x000, 356 + [IPROC_PCIE_CFG_IND_ADDR] = 0x1f0, 357 + [IPROC_PCIE_CFG_IND_DATA] = 0x1f4, 358 + [IPROC_PCIE_CFG_ADDR] = 0x1f8, 359 + [IPROC_PCIE_CFG_DATA] = 0x1fc, 360 + }; 361 + 362 + /* iProc PCIe PAXC v2 registers */ 363 + static const u16 iproc_pcie_reg_paxc_v2[] = { 364 + [IPROC_PCIE_MSI_GIC_MODE] = 0x050, 365 + [IPROC_PCIE_MSI_BASE_ADDR] = 0x074, 366 + [IPROC_PCIE_MSI_WINDOW_SIZE] = 0x078, 367 + [IPROC_PCIE_MSI_ADDR_LO] = 0x07c, 368 + [IPROC_PCIE_MSI_ADDR_HI] = 0x080, 369 + [IPROC_PCIE_MSI_EN_CFG] = 0x09c, 370 + [IPROC_PCIE_CFG_IND_ADDR] = 0x1f0, 371 + [IPROC_PCIE_CFG_IND_DATA] = 0x1f4, 372 + [IPROC_PCIE_CFG_ADDR] = 0x1f8, 373 + [IPROC_PCIE_CFG_DATA] = 0x1fc, 121 374 }; 122 375 123 376 static inline struct iproc_pcie *iproc_data(struct pci_bus *bus) ··· 426 159 writel(val, pcie->base + offset); 427 160 } 428 161 429 - static inline void iproc_pcie_ob_write(struct iproc_pcie *pcie, 430 - enum iproc_pcie_reg reg, 431 - unsigned window, u32 val) 162 + /** 163 + * APB error forwarding can be disabled during access of configuration 164 + * registers of the endpoint device, to prevent unsupported requests 165 + * (typically seen during enumeration with multi-function devices) from 166 + * triggering a system exception. 
167 + */ 168 + static inline void iproc_pcie_apb_err_disable(struct pci_bus *bus, 169 + bool disable) 432 170 { 433 - u16 offset = iproc_pcie_reg_offset(pcie, reg); 171 + struct iproc_pcie *pcie = iproc_data(bus); 172 + u32 val; 434 173 435 - if (iproc_pcie_reg_is_invalid(offset)) 436 - return; 437 - 438 - writel(val, pcie->base + offset + (window * 8)); 174 + if (bus->number && pcie->has_apb_err_disable) { 175 + val = iproc_pcie_read_reg(pcie, IPROC_PCIE_APB_ERR_EN); 176 + if (disable) 177 + val &= ~APB_ERR_EN; 178 + else 179 + val |= APB_ERR_EN; 180 + iproc_pcie_write_reg(pcie, IPROC_PCIE_APB_ERR_EN, val); 181 + } 439 182 } 440 183 441 184 /** ··· 481 204 * PAXC is connected to an internally emulated EP within the SoC. It 482 205 * allows only one device. 483 206 */ 484 - if (pcie->type == IPROC_PCIE_PAXC) 207 + if (pcie->ep_is_internal) 485 208 if (slot > 0) 486 209 return NULL; 487 210 ··· 499 222 return (pcie->base + offset); 500 223 } 501 224 225 + static int iproc_pcie_config_read32(struct pci_bus *bus, unsigned int devfn, 226 + int where, int size, u32 *val) 227 + { 228 + int ret; 229 + 230 + iproc_pcie_apb_err_disable(bus, true); 231 + ret = pci_generic_config_read32(bus, devfn, where, size, val); 232 + iproc_pcie_apb_err_disable(bus, false); 233 + 234 + return ret; 235 + } 236 + 237 + static int iproc_pcie_config_write32(struct pci_bus *bus, unsigned int devfn, 238 + int where, int size, u32 val) 239 + { 240 + int ret; 241 + 242 + iproc_pcie_apb_err_disable(bus, true); 243 + ret = pci_generic_config_write32(bus, devfn, where, size, val); 244 + iproc_pcie_apb_err_disable(bus, false); 245 + 246 + return ret; 247 + } 248 + 502 249 static struct pci_ops iproc_pcie_ops = { 503 250 .map_bus = iproc_pcie_map_cfg_bus, 504 - .read = pci_generic_config_read32, 505 - .write = pci_generic_config_write32, 251 + .read = iproc_pcie_config_read32, 252 + .write = iproc_pcie_config_write32, 506 253 }; 507 254 508 255 static void iproc_pcie_reset(struct iproc_pcie *pcie) 
509 256 { 510 257 u32 val; 511 258 512 - if (pcie->type == IPROC_PCIE_PAXC) { 513 - val = iproc_pcie_read_reg(pcie, IPROC_PCIE_CLK_CTRL); 514 - val &= ~PAXC_RESET_MASK; 515 - iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); 516 - udelay(100); 517 - val |= PAXC_RESET_MASK; 518 - iproc_pcie_write_reg(pcie, IPROC_PCIE_CLK_CTRL, val); 519 - udelay(100); 259 + /* 260 + * PAXC and the internal emulated endpoint device downstream should not 261 + * be reset. If firmware has been loaded on the endpoint device at an 262 + * earlier boot stage, reset here causes issues. 263 + */ 264 + if (pcie->ep_is_internal) 520 265 return; 521 - } 522 266 523 267 /* 524 268 * Select perst_b signal as reset source. Put the device into reset, ··· 568 270 * PAXC connects to emulated endpoint devices directly and does not 569 271 * have a Serdes. Therefore skip the link detection logic here. 570 272 */ 571 - if (pcie->type == IPROC_PCIE_PAXC) 273 + if (pcie->ep_is_internal) 572 274 return 0; 573 275 574 276 val = iproc_pcie_read_reg(pcie, IPROC_PCIE_LINK_STATUS); ··· 632 334 iproc_pcie_write_reg(pcie, IPROC_PCIE_INTX_EN, SYS_RC_INTX_MASK); 633 335 } 634 336 337 + static inline bool iproc_pcie_ob_is_valid(struct iproc_pcie *pcie, 338 + int window_idx) 339 + { 340 + u32 val; 341 + 342 + val = iproc_pcie_read_reg(pcie, MAP_REG(IPROC_PCIE_OARR0, window_idx)); 343 + 344 + return !!(val & OARR_VALID); 345 + } 346 + 347 + static inline int iproc_pcie_ob_write(struct iproc_pcie *pcie, int window_idx, 348 + int size_idx, u64 axi_addr, u64 pci_addr) 349 + { 350 + struct device *dev = pcie->dev; 351 + u16 oarr_offset, omap_offset; 352 + 353 + /* 354 + * Derive the OARR/OMAP offset from the first pair (OARR0/OMAP0) based 355 + * on window index. 
356 + */ 357 + oarr_offset = iproc_pcie_reg_offset(pcie, MAP_REG(IPROC_PCIE_OARR0, 358 + window_idx)); 359 + omap_offset = iproc_pcie_reg_offset(pcie, MAP_REG(IPROC_PCIE_OMAP0, 360 + window_idx)); 361 + if (iproc_pcie_reg_is_invalid(oarr_offset) || 362 + iproc_pcie_reg_is_invalid(omap_offset)) 363 + return -EINVAL; 364 + 365 + /* 366 + * Program the OARR registers. The upper 32-bit OARR register is 367 + * always right after the lower 32-bit OARR register. 368 + */ 369 + writel(lower_32_bits(axi_addr) | (size_idx << OARR_SIZE_CFG_SHIFT) | 370 + OARR_VALID, pcie->base + oarr_offset); 371 + writel(upper_32_bits(axi_addr), pcie->base + oarr_offset + 4); 372 + 373 + /* now program the OMAP registers */ 374 + writel(lower_32_bits(pci_addr), pcie->base + omap_offset); 375 + writel(upper_32_bits(pci_addr), pcie->base + omap_offset + 4); 376 + 377 + dev_info(dev, "ob window [%d]: offset 0x%x axi %pap pci %pap\n", 378 + window_idx, oarr_offset, &axi_addr, &pci_addr); 379 + dev_info(dev, "oarr lo 0x%x oarr hi 0x%x\n", 380 + readl(pcie->base + oarr_offset), 381 + readl(pcie->base + oarr_offset + 4)); 382 + dev_info(dev, "omap lo 0x%x omap hi 0x%x\n", 383 + readl(pcie->base + omap_offset), 384 + readl(pcie->base + omap_offset + 4)); 385 + 386 + return 0; 387 + } 388 + 635 389 /** 636 390 * Some iProc SoCs require the SW to configure the outbound address mapping 637 391 * ··· 700 350 { 701 351 struct iproc_pcie_ob *ob = &pcie->ob; 702 352 struct device *dev = pcie->dev; 703 - unsigned i; 704 - u64 max_size = (u64)ob->window_size * MAX_NUM_OB_WINDOWS; 705 - u64 remainder; 706 - 707 - if (size > max_size) { 708 - dev_err(dev, 709 - "res size %pap exceeds max supported size 0x%llx\n", 710 - &size, max_size); 711 - return -EINVAL; 712 - } 713 - 714 - div64_u64_rem(size, ob->window_size, &remainder); 715 - if (remainder) { 716 - dev_err(dev, 717 - "res size %pap needs to be multiple of window size %pap\n", 718 - &size, &ob->window_size); 719 - return -EINVAL; 720 - } 353 + int ret = 
-EINVAL, window_idx, size_idx; 721 354 722 355 if (axi_addr < ob->axi_offset) { 723 356 dev_err(dev, "axi address %pap less than offset %pap\n", ··· 714 381 */ 715 382 axi_addr -= ob->axi_offset; 716 383 717 - for (i = 0; i < MAX_NUM_OB_WINDOWS; i++) { 718 - iproc_pcie_ob_write(pcie, IPROC_PCIE_OARR_LO, i, 719 - lower_32_bits(axi_addr) | OARR_VALID | 720 - (ob->set_oarr_size ? 1 : 0)); 721 - iproc_pcie_ob_write(pcie, IPROC_PCIE_OARR_HI, i, 722 - upper_32_bits(axi_addr)); 723 - iproc_pcie_ob_write(pcie, IPROC_PCIE_OMAP_LO, i, 724 - lower_32_bits(pci_addr)); 725 - iproc_pcie_ob_write(pcie, IPROC_PCIE_OMAP_HI, i, 726 - upper_32_bits(pci_addr)); 384 + /* iterate through all OARR/OMAP mapping windows */ 385 + for (window_idx = ob->nr_windows - 1; window_idx >= 0; window_idx--) { 386 + const struct iproc_pcie_ob_map *ob_map = 387 + &pcie->ob_map[window_idx]; 727 388 728 - size -= ob->window_size; 729 - if (size == 0) 389 + /* 390 + * If current outbound window is already in use, move on to the 391 + * next one. 392 + */ 393 + if (iproc_pcie_ob_is_valid(pcie, window_idx)) 394 + continue; 395 + 396 + /* 397 + * Iterate through all supported window sizes within the 398 + * OARR/OMAP pair to find a match. Go through the window sizes 399 + * in a descending order. 400 + */ 401 + for (size_idx = ob_map->nr_sizes - 1; size_idx >= 0; 402 + size_idx--) { 403 + resource_size_t window_size = 404 + ob_map->window_sizes[size_idx] * SZ_1M; 405 + 406 + if (size < window_size) 407 + continue; 408 + 409 + if (!IS_ALIGNED(axi_addr, window_size) || 410 + !IS_ALIGNED(pci_addr, window_size)) { 411 + dev_err(dev, 412 + "axi %pap or pci %pap not aligned\n", 413 + &axi_addr, &pci_addr); 414 + return -EINVAL; 415 + } 416 + 417 + /* 418 + * Match found! Program both OARR and OMAP and mark 419 + * them as a valid entry. 
420 + */ 421 + ret = iproc_pcie_ob_write(pcie, window_idx, size_idx, 422 + axi_addr, pci_addr); 423 + if (ret) 424 + goto err_ob; 425 + 426 + size -= window_size; 427 + if (size == 0) 428 + return 0; 429 + 430 + /* 431 + * If we are here, we are done with the current window, 432 + * but not yet finished all mappings. Need to move on 433 + * to the next window. 434 + */ 435 + axi_addr += window_size; 436 + pci_addr += window_size; 730 437 break; 731 - 732 - axi_addr += ob->window_size; 733 - pci_addr += ob->window_size; 438 + } 734 439 } 735 440 736 - return 0; 441 + err_ob: 442 + dev_err(dev, "unable to configure outbound mapping\n"); 443 + dev_err(dev, 444 + "axi %pap, axi offset %pap, pci %pap, res size %pap\n", 445 + &axi_addr, &ob->axi_offset, &pci_addr, &size); 446 + 447 + return ret; 737 448 } 738 449 739 450 static int iproc_pcie_map_ranges(struct iproc_pcie *pcie, ··· 811 434 return 0; 812 435 } 813 436 437 + static inline bool iproc_pcie_ib_is_in_use(struct iproc_pcie *pcie, 438 + int region_idx) 439 + { 440 + const struct iproc_pcie_ib_map *ib_map = &pcie->ib_map[region_idx]; 441 + u32 val; 442 + 443 + val = iproc_pcie_read_reg(pcie, MAP_REG(IPROC_PCIE_IARR0, region_idx)); 444 + 445 + return !!(val & (BIT(ib_map->nr_sizes) - 1)); 446 + } 447 + 448 + static inline bool iproc_pcie_ib_check_type(const struct iproc_pcie_ib_map *ib_map, 449 + enum iproc_pcie_ib_map_type type) 450 + { 451 + return !!(ib_map->type == type); 452 + } 453 + 454 + static int iproc_pcie_ib_write(struct iproc_pcie *pcie, int region_idx, 455 + int size_idx, int nr_windows, u64 axi_addr, 456 + u64 pci_addr, resource_size_t size) 457 + { 458 + struct device *dev = pcie->dev; 459 + const struct iproc_pcie_ib_map *ib_map = &pcie->ib_map[region_idx]; 460 + u16 iarr_offset, imap_offset; 461 + u32 val; 462 + int window_idx; 463 + 464 + iarr_offset = iproc_pcie_reg_offset(pcie, 465 + MAP_REG(IPROC_PCIE_IARR0, region_idx)); 466 + imap_offset = iproc_pcie_reg_offset(pcie, 467 + 
MAP_REG(IPROC_PCIE_IMAP0, region_idx)); 468 + if (iproc_pcie_reg_is_invalid(iarr_offset) || 469 + iproc_pcie_reg_is_invalid(imap_offset)) 470 + return -EINVAL; 471 + 472 + dev_info(dev, "ib region [%d]: offset 0x%x axi %pap pci %pap\n", 473 + region_idx, iarr_offset, &axi_addr, &pci_addr); 474 + 475 + /* 476 + * Program the IARR registers. The upper 32-bit IARR register is 477 + * always right after the lower 32-bit IARR register. 478 + */ 479 + writel(lower_32_bits(pci_addr) | BIT(size_idx), 480 + pcie->base + iarr_offset); 481 + writel(upper_32_bits(pci_addr), pcie->base + iarr_offset + 4); 482 + 483 + dev_info(dev, "iarr lo 0x%x iarr hi 0x%x\n", 484 + readl(pcie->base + iarr_offset), 485 + readl(pcie->base + iarr_offset + 4)); 486 + 487 + /* 488 + * Now program the IMAP registers. Each IARR region may have one or 489 + * more IMAP windows. 490 + */ 491 + size >>= ilog2(nr_windows); 492 + for (window_idx = 0; window_idx < nr_windows; window_idx++) { 493 + val = readl(pcie->base + imap_offset); 494 + val |= lower_32_bits(axi_addr) | IMAP_VALID; 495 + writel(val, pcie->base + imap_offset); 496 + writel(upper_32_bits(axi_addr), 497 + pcie->base + imap_offset + ib_map->imap_addr_offset); 498 + 499 + dev_info(dev, "imap window [%d] lo 0x%x hi 0x%x\n", 500 + window_idx, readl(pcie->base + imap_offset), 501 + readl(pcie->base + imap_offset + 502 + ib_map->imap_addr_offset)); 503 + 504 + imap_offset += ib_map->imap_window_offset; 505 + axi_addr += size; 506 + } 507 + 508 + return 0; 509 + } 510 + 511 + static int iproc_pcie_setup_ib(struct iproc_pcie *pcie, 512 + struct of_pci_range *range, 513 + enum iproc_pcie_ib_map_type type) 514 + { 515 + struct device *dev = pcie->dev; 516 + struct iproc_pcie_ib *ib = &pcie->ib; 517 + int ret; 518 + unsigned int region_idx, size_idx; 519 + u64 axi_addr = range->cpu_addr, pci_addr = range->pci_addr; 520 + resource_size_t size = range->size; 521 + 522 + /* iterate through all IARR mapping regions */ 523 + for (region_idx = 0; 
region_idx < ib->nr_regions; region_idx++) { 524 + const struct iproc_pcie_ib_map *ib_map = 525 + &pcie->ib_map[region_idx]; 526 + 527 + /* 528 + * If current inbound region is already in use or not a 529 + * compatible type, move on to the next. 530 + */ 531 + if (iproc_pcie_ib_is_in_use(pcie, region_idx) || 532 + !iproc_pcie_ib_check_type(ib_map, type)) 533 + continue; 534 + 535 + /* iterate through all supported region sizes to find a match */ 536 + for (size_idx = 0; size_idx < ib_map->nr_sizes; size_idx++) { 537 + resource_size_t region_size = 538 + ib_map->region_sizes[size_idx] * ib_map->size_unit; 539 + 540 + if (size != region_size) 541 + continue; 542 + 543 + if (!IS_ALIGNED(axi_addr, region_size) || 544 + !IS_ALIGNED(pci_addr, region_size)) { 545 + dev_err(dev, 546 + "axi %pap or pci %pap not aligned\n", 547 + &axi_addr, &pci_addr); 548 + return -EINVAL; 549 + } 550 + 551 + /* Match found! Program IARR and all IMAP windows. */ 552 + ret = iproc_pcie_ib_write(pcie, region_idx, size_idx, 553 + ib_map->nr_windows, axi_addr, 554 + pci_addr, size); 555 + if (ret) 556 + goto err_ib; 557 + else 558 + return 0; 559 + 560 + } 561 + } 562 + ret = -EINVAL; 563 + 564 + err_ib: 565 + dev_err(dev, "unable to configure inbound mapping\n"); 566 + dev_err(dev, "axi %pap, pci %pap, res size %pap\n", 567 + &axi_addr, &pci_addr, &size); 568 + 569 + return ret; 570 + } 571 + 572 + static int pci_dma_range_parser_init(struct of_pci_range_parser *parser, 573 + struct device_node *node) 574 + { 575 + const int na = 3, ns = 2; 576 + int rlen; 577 + 578 + parser->node = node; 579 + parser->pna = of_n_addr_cells(node); 580 + parser->np = parser->pna + na + ns; 581 + 582 + parser->range = of_get_property(node, "dma-ranges", &rlen); 583 + if (!parser->range) 584 + return -ENOENT; 585 + 586 + parser->end = parser->range + rlen / sizeof(__be32); 587 + return 0; 588 + } 589 + 590 + static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie) 591 + { 592 + struct of_pci_range range; 
593 + struct of_pci_range_parser parser; 594 + int ret; 595 + 596 + /* Get the dma-ranges from DT */ 597 + ret = pci_dma_range_parser_init(&parser, pcie->dev->of_node); 598 + if (ret) 599 + return ret; 600 + 601 + for_each_of_pci_range(&parser, &range) { 602 + /* Each range entry corresponds to an inbound mapping region */ 603 + ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_MEM); 604 + if (ret) 605 + return ret; 606 + } 607 + 608 + return 0; 609 + } 610 + 611 + static int iproce_pcie_get_msi(struct iproc_pcie *pcie, 612 + struct device_node *msi_node, 613 + u64 *msi_addr) 614 + { 615 + struct device *dev = pcie->dev; 616 + int ret; 617 + struct resource res; 618 + 619 + /* 620 + * Check if 'msi-map' points to ARM GICv3 ITS, which is the only 621 + * supported external MSI controller that requires steering. 622 + */ 623 + if (!of_device_is_compatible(msi_node, "arm,gic-v3-its")) { 624 + dev_err(dev, "unable to find compatible MSI controller\n"); 625 + return -ENODEV; 626 + } 627 + 628 + /* derive GITS_TRANSLATER address from GICv3 */ 629 + ret = of_address_to_resource(msi_node, 0, &res); 630 + if (ret < 0) { 631 + dev_err(dev, "unable to obtain MSI controller resources\n"); 632 + return ret; 633 + } 634 + 635 + *msi_addr = res.start + GITS_TRANSLATER; 636 + return 0; 637 + } 638 + 639 + static int iproc_pcie_paxb_v2_msi_steer(struct iproc_pcie *pcie, u64 msi_addr) 640 + { 641 + int ret; 642 + struct of_pci_range range; 643 + 644 + memset(&range, 0, sizeof(range)); 645 + range.size = SZ_32K; 646 + range.pci_addr = range.cpu_addr = msi_addr & ~(range.size - 1); 647 + 648 + ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_IO); 649 + return ret; 650 + } 651 + 652 + static void iproc_pcie_paxc_v2_msi_steer(struct iproc_pcie *pcie, u64 msi_addr) 653 + { 654 + u32 val; 655 + 656 + /* 657 + * Program bits [43:13] of address of GITS_TRANSLATER register into 658 + * bits [30:0] of the MSI base address register. 
In fact, in all iProc 659 + * based SoCs, all I/O register bases are well below the 32-bit 660 + * boundary, so we can safely assume bits [43:32] are always zeros. 661 + */ 662 + iproc_pcie_write_reg(pcie, IPROC_PCIE_MSI_BASE_ADDR, 663 + (u32)(msi_addr >> 13)); 664 + 665 + /* use a default 8K window size */ 666 + iproc_pcie_write_reg(pcie, IPROC_PCIE_MSI_WINDOW_SIZE, 0); 667 + 668 + /* steering MSI to GICv3 ITS */ 669 + val = iproc_pcie_read_reg(pcie, IPROC_PCIE_MSI_GIC_MODE); 670 + val |= GIC_V3_CFG; 671 + iproc_pcie_write_reg(pcie, IPROC_PCIE_MSI_GIC_MODE, val); 672 + 673 + /* 674 + * Program bits [43:2] of address of GITS_TRANSLATER register into the 675 + * iProc MSI address registers. 676 + */ 677 + msi_addr >>= 2; 678 + iproc_pcie_write_reg(pcie, IPROC_PCIE_MSI_ADDR_HI, 679 + upper_32_bits(msi_addr)); 680 + iproc_pcie_write_reg(pcie, IPROC_PCIE_MSI_ADDR_LO, 681 + lower_32_bits(msi_addr)); 682 + 683 + /* enable MSI */ 684 + val = iproc_pcie_read_reg(pcie, IPROC_PCIE_MSI_EN_CFG); 685 + val |= MSI_ENABLE_CFG; 686 + iproc_pcie_write_reg(pcie, IPROC_PCIE_MSI_EN_CFG, val); 687 + } 688 + 689 + static int iproc_pcie_msi_steer(struct iproc_pcie *pcie, 690 + struct device_node *msi_node) 691 + { 692 + struct device *dev = pcie->dev; 693 + int ret; 694 + u64 msi_addr; 695 + 696 + ret = iproce_pcie_get_msi(pcie, msi_node, &msi_addr); 697 + if (ret < 0) { 698 + dev_err(dev, "msi steering failed\n"); 699 + return ret; 700 + } 701 + 702 + switch (pcie->type) { 703 + case IPROC_PCIE_PAXB_V2: 704 + ret = iproc_pcie_paxb_v2_msi_steer(pcie, msi_addr); 705 + if (ret) 706 + return ret; 707 + break; 708 + case IPROC_PCIE_PAXC_V2: 709 + iproc_pcie_paxc_v2_msi_steer(pcie, msi_addr); 710 + break; 711 + default: 712 + return -EINVAL; 713 + } 714 + 715 + return 0; 716 + } 717 + 814 718 static int iproc_pcie_msi_enable(struct iproc_pcie *pcie) 815 719 { 816 720 struct device_node *msi_node; 721 + int ret; 722 + 723 + /* 724 + * Either the "msi-parent" or the "msi-map" phandle needs to 
exist 725 + * for us to obtain the MSI node. 726 + */ 817 727 818 728 msi_node = of_parse_phandle(pcie->dev->of_node, "msi-parent", 0); 819 - if (!msi_node) 820 - return -ENODEV; 729 + if (!msi_node) { 730 + const __be32 *msi_map = NULL; 731 + int len; 732 + u32 phandle; 733 + 734 + msi_map = of_get_property(pcie->dev->of_node, "msi-map", &len); 735 + if (!msi_map) 736 + return -ENODEV; 737 + 738 + phandle = be32_to_cpup(msi_map + 1); 739 + msi_node = of_find_node_by_phandle(phandle); 740 + if (!msi_node) 741 + return -ENODEV; 742 + } 743 + 744 + /* 745 + * Certain revisions of the iProc PCIe controller require additional 746 + * configurations to steer the MSI writes towards an external MSI 747 + * controller. 748 + */ 749 + if (pcie->need_msi_steer) { 750 + ret = iproc_pcie_msi_steer(pcie, msi_node); 751 + if (ret) 752 + return ret; 753 + } 821 754 822 755 /* 823 756 * If another MSI controller is being used, the call below should fail ··· 1141 454 iproc_msi_exit(pcie); 1142 455 } 1143 456 457 + static int iproc_pcie_rev_init(struct iproc_pcie *pcie) 458 + { 459 + struct device *dev = pcie->dev; 460 + unsigned int reg_idx; 461 + const u16 *regs; 462 + 463 + switch (pcie->type) { 464 + case IPROC_PCIE_PAXB_BCMA: 465 + regs = iproc_pcie_reg_paxb_bcma; 466 + break; 467 + case IPROC_PCIE_PAXB: 468 + regs = iproc_pcie_reg_paxb; 469 + pcie->has_apb_err_disable = true; 470 + if (pcie->need_ob_cfg) { 471 + pcie->ob_map = paxb_ob_map; 472 + pcie->ob.nr_windows = ARRAY_SIZE(paxb_ob_map); 473 + } 474 + break; 475 + case IPROC_PCIE_PAXB_V2: 476 + regs = iproc_pcie_reg_paxb_v2; 477 + pcie->has_apb_err_disable = true; 478 + if (pcie->need_ob_cfg) { 479 + pcie->ob_map = paxb_v2_ob_map; 480 + pcie->ob.nr_windows = ARRAY_SIZE(paxb_v2_ob_map); 481 + } 482 + pcie->ib.nr_regions = ARRAY_SIZE(paxb_v2_ib_map); 483 + pcie->ib_map = paxb_v2_ib_map; 484 + pcie->need_msi_steer = true; 485 + break; 486 + case IPROC_PCIE_PAXC: 487 + regs = iproc_pcie_reg_paxc; 488 + pcie->ep_is_internal = 
true; 489 + break; 490 + case IPROC_PCIE_PAXC_V2: 491 + regs = iproc_pcie_reg_paxc_v2; 492 + pcie->ep_is_internal = true; 493 + pcie->need_msi_steer = true; 494 + break; 495 + default: 496 + dev_err(dev, "incompatible iProc PCIe interface\n"); 497 + return -EINVAL; 498 + } 499 + 500 + pcie->reg_offsets = devm_kcalloc(dev, IPROC_PCIE_MAX_NUM_REG, 501 + sizeof(*pcie->reg_offsets), 502 + GFP_KERNEL); 503 + if (!pcie->reg_offsets) 504 + return -ENOMEM; 505 + 506 + /* go through the register table and populate all valid registers */ 507 + pcie->reg_offsets[0] = (pcie->type == IPROC_PCIE_PAXC_V2) ? 508 + IPROC_PCIE_REG_INVALID : regs[0]; 509 + for (reg_idx = 1; reg_idx < IPROC_PCIE_MAX_NUM_REG; reg_idx++) 510 + pcie->reg_offsets[reg_idx] = regs[reg_idx] ? 511 + regs[reg_idx] : IPROC_PCIE_REG_INVALID; 512 + 513 + return 0; 514 + } 515 + 1144 516 int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res) 1145 517 { 1146 518 struct device *dev; ··· 1208 462 struct pci_bus *bus; 1209 463 1210 464 dev = pcie->dev; 465 + 466 + ret = iproc_pcie_rev_init(pcie); 467 + if (ret) { 468 + dev_err(dev, "unable to initialize controller parameters\n"); 469 + return ret; 470 + } 471 + 1211 472 ret = devm_request_pci_bus_resources(dev, res); 1212 473 if (ret) 1213 474 return ret; ··· 1231 478 goto err_exit_phy; 1232 479 } 1233 480 1234 - switch (pcie->type) { 1235 - case IPROC_PCIE_PAXB: 1236 - pcie->reg_offsets = iproc_pcie_reg_paxb; 1237 - break; 1238 - case IPROC_PCIE_PAXC: 1239 - pcie->reg_offsets = iproc_pcie_reg_paxc; 1240 - break; 1241 - default: 1242 - dev_err(dev, "incompatible iProc PCIe interface\n"); 1243 - ret = -EINVAL; 1244 - goto err_power_off_phy; 1245 - } 1246 - 1247 481 iproc_pcie_reset(pcie); 1248 482 1249 483 if (pcie->need_ob_cfg) { ··· 1240 500 goto err_power_off_phy; 1241 501 } 1242 502 } 503 + 504 + ret = iproc_pcie_map_dma_ranges(pcie); 505 + if (ret && ret != -ENOENT) 506 + goto err_power_off_phy; 1243 507 1244 508 #ifdef CONFIG_ARM 1245 509 
pcie->sysdata.private_data = pcie; ··· 1274 530 1275 531 pci_scan_child_bus(bus); 1276 532 pci_assign_unassigned_bus_resources(bus); 1277 - pci_fixup_irqs(pci_common_swizzle, pcie->map_irq); 533 + 534 + if (pcie->map_irq) 535 + pci_fixup_irqs(pci_common_swizzle, pcie->map_irq); 536 + 1278 537 pci_bus_add_devices(bus); 1279 538 1280 539 return 0;
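The descending window-size search in `iproc_pcie_setup_ob()` above reduces to a small greedy matcher: for each free OARR/OMAP pair, pick the largest supported window that still fits the remaining region and keeps both addresses naturally aligned. A standalone sketch of just that decision, with illustrative sizes and a hypothetical helper name (the driver returns `-EINVAL` on misalignment; here both "misaligned" and "nothing fits" collapse to 0):

```c
#include <assert.h>
#include <stdint.h>

#define SZ_1M (1ULL << 20)

/*
 * Hypothetical reduction of the PAXB v2 outbound-mapping inner loop:
 * given the window sizes (in MB) one OARR/OMAP pair supports, walk them
 * in descending order and pick the largest size that fits the remaining
 * region while keeping both the AXI and PCI addresses naturally
 * aligned. Returns the chosen window size in bytes, or 0 if no size
 * fits or the addresses are misaligned for the size that would fit.
 */
static uint64_t pick_ob_window(const unsigned int *sizes_mb, int nr_sizes,
			       uint64_t axi_addr, uint64_t pci_addr,
			       uint64_t remaining)
{
	for (int i = nr_sizes - 1; i >= 0; i--) {
		uint64_t win = (uint64_t)sizes_mb[i] * SZ_1M;

		if (remaining < win)
			continue;
		/* alignment check mirrors the IS_ALIGNED() tests above */
		if ((axi_addr & (win - 1)) || (pci_addr & (win - 1)))
			return 0;
		return win;
	}
	return 0;
}
```

The caller then subtracts the returned size, advances both addresses, and moves on to the next free window until the whole resource is mapped, which is exactly the outer-loop shape in the hunk above.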
+38 -7
drivers/pci/host/pcie-iproc.h
··· 24 24 * endpoint devices. 25 25 */ 26 26 enum iproc_pcie_type { 27 - IPROC_PCIE_PAXB = 0, 27 + IPROC_PCIE_PAXB_BCMA = 0, 28 + IPROC_PCIE_PAXB, 29 + IPROC_PCIE_PAXB_V2, 28 30 IPROC_PCIE_PAXC, 31 + IPROC_PCIE_PAXC_V2, 29 32 }; 30 33 31 34 /** 32 35 * iProc PCIe outbound mapping 33 - * @set_oarr_size: indicates the OARR size bit needs to be set 34 36 * @axi_offset: offset from the AXI address to the internal address used by 35 37 * the iProc PCIe core 36 - * @window_size: outbound window size 38 + * @nr_windows: total number of supported outbound mapping windows 37 39 */ 38 40 struct iproc_pcie_ob { 39 - bool set_oarr_size; 40 41 resource_size_t axi_offset; 41 - resource_size_t window_size; 42 + unsigned int nr_windows; 42 43 }; 43 44 45 + /** 46 + * iProc PCIe inbound mapping 47 + * @nr_regions: total number of supported inbound mapping regions 48 + */ 49 + struct iproc_pcie_ib { 50 + unsigned int nr_regions; 51 + }; 52 + 53 + struct iproc_pcie_ob_map; 54 + struct iproc_pcie_ib_map; 44 55 struct iproc_msi; 45 56 46 57 /** ··· 66 55 * @root_bus: pointer to root bus 67 56 * @phy: optional PHY device that controls the Serdes 68 57 * @map_irq: function callback to map interrupts 58 + * @ep_is_internal: indicates an internal emulated endpoint device is connected 59 + * @has_apb_err_disable: indicates the controller can be configured to prevent 60 + * unsupported request from being forwarded as an APB bus error 61 + * 69 62 * @need_ob_cfg: indicates SW needs to configure the outbound mapping window 70 - * @ob: outbound mapping parameters 63 + * @ob: outbound mapping related parameters 64 + * @ob_map: outbound mapping related parameters specific to the controller 65 + * 66 + * @ib: inbound mapping related parameters 67 + * @ib_map: outbound mapping region related parameters 68 + * 69 + * @need_msi_steer: indicates additional configuration of the iProc PCIe 70 + * controller is required to steer MSI writes to external interrupt controller 71 71 * @msi: MSI data 72 72 */ 
73 73 struct iproc_pcie { 74 74 struct device *dev; 75 75 enum iproc_pcie_type type; 76 - const u16 *reg_offsets; 76 + u16 *reg_offsets; 77 77 void __iomem *base; 78 78 phys_addr_t base_addr; 79 79 #ifdef CONFIG_ARM ··· 93 71 struct pci_bus *root_bus; 94 72 struct phy *phy; 95 73 int (*map_irq)(const struct pci_dev *, u8, u8); 74 + bool ep_is_internal; 75 + bool has_apb_err_disable; 76 + 96 77 bool need_ob_cfg; 97 78 struct iproc_pcie_ob ob; 79 + const struct iproc_pcie_ob_map *ob_map; 80 + 81 + struct iproc_pcie_ib ib; 82 + const struct iproc_pcie_ib_map *ib_map; 83 + 84 + bool need_msi_steer; 98 85 struct iproc_msi *msi; 99 86 }; 100 87
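The `reg_offsets` change above (from a `const` table pointer to a populated array) pairs with `iproc_pcie_rev_init()`, which copies a per-revision table and turns absent entries into `IPROC_PCIE_REG_INVALID`. A minimal sketch of that data-driven scheme, with made-up register names and a simplified rule for entry 0 (the real code special-cases PAXC v2 there):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical miniature of the per-revision register table: a dense
 * array indexed by a logical register enum, where 0xffff marks
 * "not implemented on this revision" (the driver's
 * IPROC_PCIE_REG_INVALID plays the same role).
 */
enum mini_reg { REG_CLK_CTRL, REG_MSI_ADDR_LO, REG_MSI_ADDR_HI, MAX_NUM_REG };

#define REG_INVALID 0xffff

/* revision without MSI registers: entries default to 0 */
static const uint16_t regs_variant_a[MAX_NUM_REG] = {
	[REG_CLK_CTRL] = 0x000,
};

/* revision with MSI registers at illustrative offsets */
static const uint16_t regs_variant_b[MAX_NUM_REG] = {
	[REG_CLK_CTRL]    = 0x000,
	[REG_MSI_ADDR_LO] = 0x250,
	[REG_MSI_ADDR_HI] = 0x254,
};

/*
 * Populate the runtime offset table: entry 0 is taken as-is (0 is a
 * legal offset there), any other zero entry becomes the invalid
 * sentinel, as iproc_pcie_rev_init() does for registers a revision
 * does not implement.
 */
static void fill_offsets(uint16_t *out, const uint16_t *regs)
{
	out[0] = regs[0];
	for (int i = 1; i < MAX_NUM_REG; i++)
		out[i] = regs[i] ? regs[i] : REG_INVALID;
}
```

Accessors can then refuse reads and writes of invalid offsets, which is what `iproc_pcie_reg_is_invalid()` checks in the .c hunks above.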
+172 -5
drivers/pci/host/pcie-qcom.c
··· 36 36 37 37 #include "pcie-designware.h" 38 38 39 + #define PCIE20_PARF_SYS_CTRL 0x00 39 40 #define PCIE20_PARF_PHY_CTRL 0x40 40 41 #define PCIE20_PARF_PHY_REFCLK 0x4C 41 42 #define PCIE20_PARF_DBI_BASE_ADDR 0x168 42 - #define PCIE20_PARF_SLV_ADDR_SPACE_SIZE 0x16c 43 + #define PCIE20_PARF_SLV_ADDR_SPACE_SIZE 0x16C 44 + #define PCIE20_PARF_MHI_CLOCK_RESET_CTRL 0x174 43 45 #define PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT 0x178 46 + #define PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2 0x1A8 47 + #define PCIE20_PARF_LTSSM 0x1B0 48 + #define PCIE20_PARF_SID_OFFSET 0x234 49 + #define PCIE20_PARF_BDF_TRANSLATE_CFG 0x24C 44 50 45 51 #define PCIE20_ELBI_SYS_CTRL 0x04 46 52 #define PCIE20_ELBI_SYS_CTRL_LT_ENABLE BIT(0) ··· 78 72 struct regulator *vdda; 79 73 }; 80 74 75 + struct qcom_pcie_resources_v2 { 76 + struct clk *aux_clk; 77 + struct clk *master_clk; 78 + struct clk *slave_clk; 79 + struct clk *cfg_clk; 80 + struct clk *pipe_clk; 81 + }; 82 + 81 83 union qcom_pcie_resources { 82 84 struct qcom_pcie_resources_v0 v0; 83 85 struct qcom_pcie_resources_v1 v1; 86 + struct qcom_pcie_resources_v2 v2; 84 87 }; 85 88 86 89 struct qcom_pcie; ··· 97 82 struct qcom_pcie_ops { 98 83 int (*get_resources)(struct qcom_pcie *pcie); 99 84 int (*init)(struct qcom_pcie *pcie); 85 + int (*post_init)(struct qcom_pcie *pcie); 100 86 void (*deinit)(struct qcom_pcie *pcie); 87 + void (*ltssm_enable)(struct qcom_pcie *pcie); 101 88 }; 102 89 103 90 struct qcom_pcie { ··· 133 116 return dw_handle_msi_irq(pp); 134 117 } 135 118 136 - static int qcom_pcie_establish_link(struct qcom_pcie *pcie) 119 + static void qcom_pcie_v0_v1_ltssm_enable(struct qcom_pcie *pcie) 137 120 { 138 121 u32 val; 139 - 140 - if (dw_pcie_link_up(&pcie->pp)) 141 - return 0; 142 122 143 123 /* enable link training */ 144 124 val = readl(pcie->elbi + PCIE20_ELBI_SYS_CTRL); 145 125 val |= PCIE20_ELBI_SYS_CTRL_LT_ENABLE; 146 126 writel(val, pcie->elbi + PCIE20_ELBI_SYS_CTRL); 127 + } 128 + 129 + static void 
qcom_pcie_v2_ltssm_enable(struct qcom_pcie *pcie) 130 + { 131 + u32 val; 132 + 133 + /* enable link training */ 134 + val = readl(pcie->parf + PCIE20_PARF_LTSSM); 135 + val |= BIT(8); 136 + writel(val, pcie->parf + PCIE20_PARF_LTSSM); 137 + } 138 + 139 + static int qcom_pcie_establish_link(struct qcom_pcie *pcie) 140 + { 141 + 142 + if (dw_pcie_link_up(&pcie->pp)) 143 + return 0; 144 + 145 + /* Enable Link Training state machine */ 146 + if (pcie->ops->ltssm_enable) 147 + pcie->ops->ltssm_enable(pcie); 147 148 148 149 return dw_pcie_wait_for_link(&pcie->pp); 149 150 } ··· 456 421 return ret; 457 422 } 458 423 424 + static int qcom_pcie_get_resources_v2(struct qcom_pcie *pcie) 425 + { 426 + struct qcom_pcie_resources_v2 *res = &pcie->res.v2; 427 + struct device *dev = pcie->pp.dev; 428 + 429 + res->aux_clk = devm_clk_get(dev, "aux"); 430 + if (IS_ERR(res->aux_clk)) 431 + return PTR_ERR(res->aux_clk); 432 + 433 + res->cfg_clk = devm_clk_get(dev, "cfg"); 434 + if (IS_ERR(res->cfg_clk)) 435 + return PTR_ERR(res->cfg_clk); 436 + 437 + res->master_clk = devm_clk_get(dev, "bus_master"); 438 + if (IS_ERR(res->master_clk)) 439 + return PTR_ERR(res->master_clk); 440 + 441 + res->slave_clk = devm_clk_get(dev, "bus_slave"); 442 + if (IS_ERR(res->slave_clk)) 443 + return PTR_ERR(res->slave_clk); 444 + 445 + res->pipe_clk = devm_clk_get(dev, "pipe"); 446 + if (IS_ERR(res->pipe_clk)) 447 + return PTR_ERR(res->pipe_clk); 448 + 449 + return 0; 450 + } 451 + 452 + static int qcom_pcie_init_v2(struct qcom_pcie *pcie) 453 + { 454 + struct qcom_pcie_resources_v2 *res = &pcie->res.v2; 455 + struct device *dev = pcie->pp.dev; 456 + u32 val; 457 + int ret; 458 + 459 + ret = clk_prepare_enable(res->aux_clk); 460 + if (ret) { 461 + dev_err(dev, "cannot prepare/enable aux clock\n"); 462 + return ret; 463 + } 464 + 465 + ret = clk_prepare_enable(res->cfg_clk); 466 + if (ret) { 467 + dev_err(dev, "cannot prepare/enable cfg clock\n"); 468 + goto err_cfg_clk; 469 + } 470 + 471 + ret = 
clk_prepare_enable(res->master_clk); 472 + if (ret) { 473 + dev_err(dev, "cannot prepare/enable master clock\n"); 474 + goto err_master_clk; 475 + } 476 + 477 + ret = clk_prepare_enable(res->slave_clk); 478 + if (ret) { 479 + dev_err(dev, "cannot prepare/enable slave clock\n"); 480 + goto err_slave_clk; 481 + } 482 + 483 + /* enable PCIe clocks and resets */ 484 + val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL); 485 + val &= ~BIT(0); 486 + writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL); 487 + 488 + /* change DBI base address */ 489 + writel(0, pcie->parf + PCIE20_PARF_DBI_BASE_ADDR); 490 + 491 + /* MAC PHY_POWERDOWN MUX DISABLE */ 492 + val = readl(pcie->parf + PCIE20_PARF_SYS_CTRL); 493 + val &= ~BIT(29); 494 + writel(val, pcie->parf + PCIE20_PARF_SYS_CTRL); 495 + 496 + val = readl(pcie->parf + PCIE20_PARF_MHI_CLOCK_RESET_CTRL); 497 + val |= BIT(4); 498 + writel(val, pcie->parf + PCIE20_PARF_MHI_CLOCK_RESET_CTRL); 499 + 500 + val = readl(pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2); 501 + val |= BIT(31); 502 + writel(val, pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2); 503 + 504 + return 0; 505 + 506 + err_slave_clk: 507 + clk_disable_unprepare(res->master_clk); 508 + err_master_clk: 509 + clk_disable_unprepare(res->cfg_clk); 510 + err_cfg_clk: 511 + clk_disable_unprepare(res->aux_clk); 512 + 513 + return ret; 514 + } 515 + 516 + static int qcom_pcie_post_init_v2(struct qcom_pcie *pcie) 517 + { 518 + struct qcom_pcie_resources_v2 *res = &pcie->res.v2; 519 + struct device *dev = pcie->pp.dev; 520 + int ret; 521 + 522 + ret = clk_prepare_enable(res->pipe_clk); 523 + if (ret) { 524 + dev_err(dev, "cannot prepare/enable pipe clock\n"); 525 + return ret; 526 + } 527 + 528 + return 0; 529 + } 530 + 459 531 static int qcom_pcie_link_up(struct pcie_port *pp) 460 532 { 461 533 struct qcom_pcie *pcie = to_qcom_pcie(pp); 462 534 u16 val = readw(pcie->pp.dbi_base + PCIE20_CAP + PCI_EXP_LNKSTA); 463 535 464 536 return !!(val & PCI_EXP_LNKSTA_DLLLA); 537 + } 538 + 539 
+ static void qcom_pcie_deinit_v2(struct qcom_pcie *pcie) 540 + { 541 + struct qcom_pcie_resources_v2 *res = &pcie->res.v2; 542 + 543 + clk_disable_unprepare(res->pipe_clk); 544 + clk_disable_unprepare(res->slave_clk); 545 + clk_disable_unprepare(res->master_clk); 546 + clk_disable_unprepare(res->cfg_clk); 547 + clk_disable_unprepare(res->aux_clk); 465 548 } 466 549 467 550 static void qcom_pcie_host_init(struct pcie_port *pp) ··· 596 443 ret = phy_power_on(pcie->phy); 597 444 if (ret) 598 445 goto err_deinit; 446 + 447 + if (pcie->ops->post_init) 448 + pcie->ops->post_init(pcie); 599 449 600 450 dw_pcie_setup_rc(pp); 601 451 ··· 643 487 .get_resources = qcom_pcie_get_resources_v0, 644 488 .init = qcom_pcie_init_v0, 645 489 .deinit = qcom_pcie_deinit_v0, 490 + .ltssm_enable = qcom_pcie_v0_v1_ltssm_enable, 646 491 }; 647 492 648 493 static const struct qcom_pcie_ops ops_v1 = { 649 494 .get_resources = qcom_pcie_get_resources_v1, 650 495 .init = qcom_pcie_init_v1, 651 496 .deinit = qcom_pcie_deinit_v1, 497 + .ltssm_enable = qcom_pcie_v0_v1_ltssm_enable, 498 + }; 499 + 500 + static const struct qcom_pcie_ops ops_v2 = { 501 + .get_resources = qcom_pcie_get_resources_v2, 502 + .init = qcom_pcie_init_v2, 503 + .post_init = qcom_pcie_post_init_v2, 504 + .deinit = qcom_pcie_deinit_v2, 505 + .ltssm_enable = qcom_pcie_v2_ltssm_enable, 652 506 }; 653 507 654 508 static int qcom_pcie_probe(struct platform_device *pdev) ··· 738 572 { .compatible = "qcom,pcie-ipq8064", .data = &ops_v0 }, 739 573 { .compatible = "qcom,pcie-apq8064", .data = &ops_v0 }, 740 574 { .compatible = "qcom,pcie-apq8084", .data = &ops_v1 }, 575 + { .compatible = "qcom,pcie-msm8996", .data = &ops_v2 }, 741 576 { } 742 577 }; 743 578
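The qcom changes above extend the per-version `qcom_pcie_ops` table with optional `post_init` and `ltssm_enable` hooks that callers probe for NULL, so older variants simply leave them unset. A self-contained skeleton of that pattern, with toy state instead of real hardware accesses (all names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Mandatory init plus optional hooks, as in struct qcom_pcie_ops. */
struct pcie_demo_ops {
	int (*init)(int *state);
	int (*post_init)(int *state);		/* optional */
	void (*ltssm_enable)(int *state);	/* optional */
};

static int demo_init(int *state)      { *state = 1; return 0; }
static int demo_post_init(int *state) { *state |= 2; return 0; }
static void demo_ltssm(int *state)    { *state |= 4; }

/* older variant: unset members are NULL by static initialization */
static const struct pcie_demo_ops ops_old = { .init = demo_init };

/* newer variant provides every hook */
static const struct pcie_demo_ops ops_new = {
	.init		= demo_init,
	.post_init	= demo_post_init,
	.ltssm_enable	= demo_ltssm,
};

/* Bring-up path checks each optional hook before calling it. */
static int bring_up(const struct pcie_demo_ops *ops, int *state)
{
	int ret = ops->init(state);

	if (ret)
		return ret;
	if (ops->post_init)
		ops->post_init(state);
	if (ops->ltssm_enable)
		ops->ltssm_enable(state);
	return 0;
}
```

This is why the hunk guards the call with `if (pcie->ops->ltssm_enable)`: v0/v1 and v2 program link training through different registers, and the table keeps that divergence out of the common path.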
+3 -2
drivers/pci/host/pcie-rcar.c
··· 1071 1071 1072 1072 static const struct of_device_id rcar_pcie_of_match[] = { 1073 1073 { .compatible = "renesas,pcie-r8a7779", .data = rcar_pcie_hw_init_h1 }, 1074 - { .compatible = "renesas,pcie-rcar-gen2", 1075 - .data = rcar_pcie_hw_init_gen2 }, 1076 1074 { .compatible = "renesas,pcie-r8a7790", 1077 1075 .data = rcar_pcie_hw_init_gen2 }, 1078 1076 { .compatible = "renesas,pcie-r8a7791", 1079 1077 .data = rcar_pcie_hw_init_gen2 }, 1078 + { .compatible = "renesas,pcie-rcar-gen2", 1079 + .data = rcar_pcie_hw_init_gen2 }, 1080 1080 { .compatible = "renesas,pcie-r8a7795", .data = rcar_pcie_hw_init }, 1081 + { .compatible = "renesas,pcie-rcar-gen3", .data = rcar_pcie_hw_init }, 1081 1082 {}, 1082 1083 }; 1083 1084
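The R-Car hunk above leans on device-tree fallback compatibles: a node lists its most specific string first (e.g. `"renesas,pcie-r8a7795"`) followed by a family fallback (`"renesas,pcie-rcar-gen2"` / `"renesas,pcie-rcar-gen3"`), and matching prefers the earliest string in the node's list, so a known SoC gets its specific `.data` while unknown SoCs still bind via the fallback. A simplified first-match sketch of that preference (this approximates `of_match_node()`'s scoring, which is more involved):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct demo_match {
	const char *compatible;
	int (*hw_init)(void);
};

static int init_h1(void)   { return 1; }
static int init_gen2(void) { return 2; }
static int init_gen3(void) { return 3; }

static const struct demo_match table[] = {
	{ "renesas,pcie-r8a7779",   init_h1 },
	{ "renesas,pcie-r8a7790",   init_gen2 },
	{ "renesas,pcie-rcar-gen2", init_gen2 },
	{ "renesas,pcie-rcar-gen3", init_gen3 },
	{ NULL, NULL },
};

/*
 * Return the handler for the first of the node's compatibles (a
 * NULL-terminated list, most specific first) that the table knows,
 * so a family fallback only fires when no specific entry matches.
 */
static int (*match_node(const char *const *compatibles))(void)
{
	for (; *compatibles; compatibles++)
		for (const struct demo_match *m = table; m->compatible; m++)
			if (!strcmp(*compatibles, m->compatible))
				return m->hw_init;
	return NULL;
}
```

Adding `"renesas,pcie-rcar-gen3"` thus lets future Gen3 SoC device trees bind without a driver change, the same way the existing `rcar-gen2` fallback covers the r8a779x family.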
+171 -115
drivers/pci/host/pcie-rockchip.c
··· 53 53 #define PCIE_CLIENT_ARI_ENABLE HIWORD_UPDATE_BIT(0x0008) 54 54 #define PCIE_CLIENT_CONF_LANE_NUM(x) HIWORD_UPDATE(0x0030, ENCODE_LANES(x)) 55 55 #define PCIE_CLIENT_MODE_RC HIWORD_UPDATE_BIT(0x0040) 56 + #define PCIE_CLIENT_GEN_SEL_1 HIWORD_UPDATE(0x0080, 0) 56 57 #define PCIE_CLIENT_GEN_SEL_2 HIWORD_UPDATE_BIT(0x0080) 57 58 #define PCIE_CLIENT_BASIC_STATUS1 (PCIE_CLIENT_BASE + 0x48) 58 59 #define PCIE_CLIENT_LINK_STATUS_UP 0x00300000 ··· 136 135 #define PCIE_RC_CONFIG_VENDOR (PCIE_RC_CONFIG_BASE + 0x00) 137 136 #define PCIE_RC_CONFIG_RID_CCR (PCIE_RC_CONFIG_BASE + 0x08) 138 137 #define PCIE_RC_CONFIG_SCC_SHIFT 16 138 + #define PCIE_RC_CONFIG_DCR (PCIE_RC_CONFIG_BASE + 0xc4) 139 + #define PCIE_RC_CONFIG_DCR_CSPL_SHIFT 18 140 + #define PCIE_RC_CONFIG_DCR_CSPL_LIMIT 0xff 141 + #define PCIE_RC_CONFIG_DCR_CPLS_SHIFT 26 139 142 #define PCIE_RC_CONFIG_LCS (PCIE_RC_CONFIG_BASE + 0xd0) 140 - #define PCIE_RC_CONFIG_LCS_RETRAIN_LINK BIT(5) 141 - #define PCIE_RC_CONFIG_LCS_LBMIE BIT(10) 142 - #define PCIE_RC_CONFIG_LCS_LABIE BIT(11) 143 - #define PCIE_RC_CONFIG_LCS_LBMS BIT(30) 144 - #define PCIE_RC_CONFIG_LCS_LAMS BIT(31) 145 143 #define PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2 (PCIE_RC_CONFIG_BASE + 0x90c) 144 + #define PCIE_RC_CONFIG_THP_CAP (PCIE_RC_CONFIG_BASE + 0x274) 145 + #define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK GENMASK(31, 20) 146 146 147 147 #define PCIE_CORE_AXI_CONF_BASE 0xc00000 148 148 #define PCIE_CORE_OB_REGION_ADDR0 (PCIE_CORE_AXI_CONF_BASE + 0x0) ··· 205 203 struct gpio_desc *ep_gpio; 206 204 u32 lanes; 207 205 u8 root_bus_nr; 206 + int link_gen; 208 207 struct device *dev; 209 208 struct irq_domain *irq_domain; 209 + u32 io_size; 210 + int offset; 211 + phys_addr_t io_bus_addr; 212 + u32 mem_size; 213 + phys_addr_t mem_bus_addr; 210 214 }; 211 215 212 216 static u32 rockchip_pcie_read(struct rockchip_pcie *rockchip, u32 reg) ··· 231 223 u32 status; 232 224 233 225 status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS); 234 - status |= 
(PCIE_RC_CONFIG_LCS_LBMIE | PCIE_RC_CONFIG_LCS_LABIE); 226 + status |= (PCI_EXP_LNKCTL_LBMIE | PCI_EXP_LNKCTL_LABIE); 235 227 rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS); 236 228 } 237 229 ··· 240 232 u32 status; 241 233 242 234 status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS); 243 - status |= (PCIE_RC_CONFIG_LCS_LBMS | PCIE_RC_CONFIG_LCS_LAMS); 235 + status |= (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_LABS) << 16; 244 236 rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS); 245 237 } 246 238 ··· 406 398 .write = rockchip_pcie_wr_conf, 407 399 }; 408 400 401 + static void rockchip_pcie_set_power_limit(struct rockchip_pcie *rockchip) 402 + { 403 + u32 status, curr, scale, power; 404 + 405 + if (IS_ERR(rockchip->vpcie3v3)) 406 + return; 407 + 408 + /* 409 + * Set RC's captured slot power limit and scale if 410 + * vpcie3v3 available. The default values are both zero 411 + * which means the software should set these two according 412 + * to the actual power supply. 
+  */
+ curr = regulator_get_current_limit(rockchip->vpcie3v3);
+ if (curr > 0) {
+     scale = 3; /* 0.001x */
+     curr = curr / 1000; /* convert to mA */
+     power = (curr * 3300) / 1000; /* milliwatt */
+     while (power > PCIE_RC_CONFIG_DCR_CSPL_LIMIT) {
+         if (!scale) {
+             dev_warn(rockchip->dev, "invalid power supply\n");
+             return;
+         }
+         scale--;
+         power = power / 10;
+     }
+
+     status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_DCR);
+     status |= (power << PCIE_RC_CONFIG_DCR_CSPL_SHIFT) |
+               (scale << PCIE_RC_CONFIG_DCR_CPLS_SHIFT);
+     rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCR);
+ }
+ }
+
  /**
   * rockchip_pcie_init_port - Initialize hardware
   * @rockchip: PCIe port information
···
  err = reset_control_assert(rockchip->pm_rst);
  if (err) {
      dev_err(dev, "assert pm_rst err %d\n", err);
-     return err;
- }
-
- udelay(10);
-
- err = reset_control_deassert(rockchip->pm_rst);
- if (err) {
-     dev_err(dev, "deassert pm_rst err %d\n", err);
-     return err;
- }
-
- err = reset_control_deassert(rockchip->aclk_rst);
- if (err) {
-     dev_err(dev, "deassert mgmt_sticky_rst err %d\n", err);
-     return err;
- }
-
- err = reset_control_deassert(rockchip->pclk_rst);
- if (err) {
-     dev_err(dev, "deassert mgmt_sticky_rst err %d\n", err);
      return err;
  }
···
      return err;
  }

+ udelay(10);
+
+ err = reset_control_deassert(rockchip->pm_rst);
+ if (err) {
+     dev_err(dev, "deassert pm_rst err %d\n", err);
+     return err;
+ }
+
+ err = reset_control_deassert(rockchip->aclk_rst);
+ if (err) {
+     dev_err(dev, "deassert aclk_rst err %d\n", err);
+     return err;
+ }
+
+ err = reset_control_deassert(rockchip->pclk_rst);
+ if (err) {
+     dev_err(dev, "deassert pclk_rst err %d\n", err);
+     return err;
+ }
+
+ if (rockchip->link_gen == 2)
+     rockchip_pcie_write(rockchip, PCIE_CLIENT_GEN_SEL_2,
+                         PCIE_CLIENT_CONFIG);
+ else
+     rockchip_pcie_write(rockchip, PCIE_CLIENT_GEN_SEL_1,
+                         PCIE_CLIENT_CONFIG);
+
  rockchip_pcie_write(rockchip,
                      PCIE_CLIENT_CONF_ENABLE |
                      PCIE_CLIENT_LINK_TRAIN_ENABLE |
                      PCIE_CLIENT_ARI_ENABLE |
                      PCIE_CLIENT_CONF_LANE_NUM(rockchip->lanes) |
-                     PCIE_CLIENT_MODE_RC |
-                     PCIE_CLIENT_GEN_SEL_2,
-                     PCIE_CLIENT_CONFIG);
+                     PCIE_CLIENT_MODE_RC,
+                     PCIE_CLIENT_CONFIG);

  err = phy_power_on(rockchip->phy);
  if (err) {
···
      return err;
  }

- /*
-  * We need to read/write PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2 before
-  * enabling ASPM.  Otherwise L1PwrOnSc and L1PwrOnVal isn't
-  * reliable and enabling ASPM doesn't work.  This is a controller
-  * bug we need to work around.
-  */
- status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2);
- rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2);
-
  /* Fix the transmitted FTS count desired to exit from L0s. */
  status = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL_PLC1);
- status = (status & PCIE_CORE_CTRL_PLC1_FTS_MASK) |
+ status = (status & ~PCIE_CORE_CTRL_PLC1_FTS_MASK) |
           (PCIE_CORE_CTRL_PLC1_FTS_CNT << PCIE_CORE_CTRL_PLC1_FTS_SHIFT);
  rockchip_pcie_write(rockchip, status, PCIE_CORE_CTRL_PLC1);
+
+ rockchip_pcie_set_power_limit(rockchip);
+
+ /* Set RC's clock architecture as common clock */
+ status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
+ status |= PCI_EXP_LNKCTL_CCC;
+ rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);

  /* Enable Gen1 training */
  rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE,
···
      msleep(20);
  }

- /*
-  * Enable retrain for gen2. This should be configured only after
-  * gen1 finished.
-  */
- status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
- status |= PCIE_RC_CONFIG_LCS_RETRAIN_LINK;
- rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);
+ if (rockchip->link_gen == 2) {
+     /*
+      * Enable retrain for gen2. This should be configured only after
+      * gen1 finished.
+      */
+     status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_LCS);
+     status |= PCI_EXP_LNKCTL_RL;
+     rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LCS);

-     timeout = jiffies + msecs_to_jiffies(500);
-     for (;;) {
-         status = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL);
-         if ((status & PCIE_CORE_PL_CONF_SPEED_MASK) ==
-             PCIE_CORE_PL_CONF_SPEED_5G) {
-             dev_dbg(dev, "PCIe link training gen2 pass!\n");
-             break;
-         }
-
-         if (time_after(jiffies, timeout)) {
-             dev_dbg(dev, "PCIe link training gen2 timeout, fall back to gen1!\n");
-             break;
-         }
-
-         msleep(20);
-     }
+     timeout = jiffies + msecs_to_jiffies(500);
+     for (;;) {
+         status = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL);
+         if ((status & PCIE_CORE_PL_CONF_SPEED_MASK) ==
+             PCIE_CORE_PL_CONF_SPEED_5G) {
+             dev_dbg(dev, "PCIe link training gen2 pass!\n");
+             break;
+         }
+
+         if (time_after(jiffies, timeout)) {
+             dev_dbg(dev, "PCIe link training gen2 timeout, fall back to gen1!\n");
+             break;
+         }
+
+         msleep(20);
+     }
+ }

  /* Check the final link width from negotiated lane counter from MGMT */
  status = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL);
- status = 0x1 << ((status & PCIE_CORE_PL_CONF_LANE_MASK) >>
-                  PCIE_CORE_PL_CONF_LANE_MASK);
+ status = 0x1 << ((status & PCIE_CORE_PL_CONF_LANE_MASK) >>
+                  PCIE_CORE_PL_CONF_LANE_SHIFT);
  dev_dbg(dev, "current link width is x%d\n", status);

  rockchip_pcie_write(rockchip, ROCKCHIP_VENDOR_ID,
···
  rockchip_pcie_write(rockchip,
                      PCI_CLASS_BRIDGE_PCI << PCIE_RC_CONFIG_SCC_SHIFT,
                      PCIE_RC_CONFIG_RID_CCR);
+
+ /* Clear THP cap's next cap pointer to remove L1 substate cap */
+ status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_THP_CAP);
+ status &= ~PCIE_RC_CONFIG_THP_CAP_NEXT_MASK;
+ rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_THP_CAP);
+
  rockchip_pcie_write(rockchip, 0x0, PCIE_RC_BAR_CONF);

  rockchip_pcie_write(rockchip,
···
      dev_warn(dev, "invalid num-lanes, default to use one lane\n");
      rockchip->lanes = 1;
  }
+
+ rockchip->link_gen = of_pci_get_max_link_speed(node);
+ if (rockchip->link_gen < 0 || rockchip->link_gen > 2)
+     rockchip->link_gen = 2;

  rockchip->core_rst = devm_reset_control_get(dev, "core");
  if (IS_ERR(rockchip->core_rst)) {
···
      return 0;
  }

+ static int rockchip_cfg_atu(struct rockchip_pcie *rockchip)
+ {
+     struct device *dev = rockchip->dev;
+     int offset;
+     int err;
+     int reg_no;
+
+     for (reg_no = 0; reg_no < (rockchip->mem_size >> 20); reg_no++) {
+         err = rockchip_pcie_prog_ob_atu(rockchip, reg_no + 1,
+                                         AXI_WRAPPER_MEM_WRITE,
+                                         20 - 1,
+                                         rockchip->mem_bus_addr +
+                                         (reg_no << 20),
+                                         0);
+         if (err) {
+             dev_err(dev, "program RC mem outbound ATU failed\n");
+             return err;
+         }
+     }
+
+     err = rockchip_pcie_prog_ib_atu(rockchip, 2, 32 - 1, 0x0, 0);
+     if (err) {
+         dev_err(dev, "program RC mem inbound ATU failed\n");
+         return err;
+     }
+
+     offset = rockchip->mem_size >> 20;
+     for (reg_no = 0; reg_no < (rockchip->io_size >> 20); reg_no++) {
+         err = rockchip_pcie_prog_ob_atu(rockchip,
+                                         reg_no + 1 + offset,
+                                         AXI_WRAPPER_IO_WRITE,
+                                         20 - 1,
+                                         rockchip->io_bus_addr +
+                                         (reg_no << 20),
+                                         0);
+         if (err) {
+             dev_err(dev, "program RC io outbound ATU failed\n");
+             return err;
+         }
+     }
+
+     return 0;
+ }
+
  static int rockchip_pcie_probe(struct platform_device *pdev)
  {
      struct rockchip_pcie *rockchip;
···
      resource_size_t io_base;
      struct resource *mem;
      struct resource *io;
-     phys_addr_t io_bus_addr = 0;
-     u32 io_size;
-     phys_addr_t mem_bus_addr = 0;
-     u32 mem_size = 0;
-     int reg_no;
      int err;
-     int offset;

      LIST_HEAD(res);
···
          goto err_vpcie;

      /* Get the I/O and memory ranges from DT */
-     io_size = 0;
      resource_list_for_each_entry(win, &res) {
          switch (resource_type(win->res)) {
          case IORESOURCE_IO:
              io = win->res;
              io->name = "I/O";
-             io_size = resource_size(io);
-             io_bus_addr = io->start - win->offset;
+             rockchip->io_size = resource_size(io);
+             rockchip->io_bus_addr = io->start - win->offset;
              err = pci_remap_iospace(io, io_base);
              if (err) {
                  dev_warn(dev, "error %d: failed to map resource %pR\n",
···
          case IORESOURCE_MEM:
              mem = win->res;
              mem->name = "MEM";
-             mem_size = resource_size(mem);
-             mem_bus_addr = mem->start - win->offset;
+             rockchip->mem_size = resource_size(mem);
+             rockchip->mem_bus_addr = mem->start - win->offset;
              break;
          case IORESOURCE_BUS:
              rockchip->root_bus_nr = win->res->start;
···
          }
      }

-     if (mem_size) {
-         for (reg_no = 0; reg_no < (mem_size >> 20); reg_no++) {
-             err = rockchip_pcie_prog_ob_atu(rockchip, reg_no + 1,
-                                             AXI_WRAPPER_MEM_WRITE,
-                                             20 - 1,
-                                             mem_bus_addr +
-                                             (reg_no << 20),
-                                             0);
-             if (err) {
-                 dev_err(dev, "program RC mem outbound ATU failed\n");
-                 goto err_vpcie;
-             }
-         }
-     }
-
-     err = rockchip_pcie_prog_ib_atu(rockchip, 2, 32 - 1, 0x0, 0);
-     if (err) {
-         dev_err(dev, "program RC mem inbound ATU failed\n");
+     err = rockchip_cfg_atu(rockchip);
+     if (err)
          goto err_vpcie;
-     }
-
-     offset = mem_size >> 20;
-
-     if (io_size) {
-         for (reg_no = 0; reg_no < (io_size >> 20); reg_no++) {
-             err = rockchip_pcie_prog_ob_atu(rockchip,
-                                             reg_no + 1 + offset,
-                                             AXI_WRAPPER_IO_WRITE,
-                                             20 - 1,
-                                             io_bus_addr +
-                                             (reg_no << 20),
-                                             0);
-             if (err) {
-                 dev_err(dev, "program RC io outbound ATU failed\n");
-                 goto err_vpcie;
-             }
-         }
-     }
-
      bus = pci_scan_root_bus(&pdev->dev, 0, &rockchip_pcie_ops, rockchip, &res);
      if (!bus) {
          err = -ENOMEM;
···
          pcie_bus_configure_settings(child);

      pci_bus_add_devices(bus);
-
-     dev_warn(dev, "only 32-bit config accesses supported; smaller writes may corrupt adjacent RW1C fields\n");
-
      return err;

  err_vpcie:
+1 -5
drivers/pci/host/pcie-spear13xx.c
···
      },
  };

- static int __init spear13xx_pcie_init(void)
- {
-     return platform_driver_register(&spear13xx_pcie_driver);
- }
- device_initcall(spear13xx_pcie_init);
+ builtin_platform_driver(spear13xx_pcie_driver);
+22 -8
drivers/pci/host/vmd.c
···
  #include <linux/module.h>
  #include <linux/msi.h>
  #include <linux/pci.h>
+ #include <linux/srcu.h>
  #include <linux/rculist.h>
  #include <linux/rcupdate.h>
···
  /**
   * struct vmd_irq - private data to map driver IRQ to the VMD shared vector
   * @node: list item for parent traversal.
-  * @rcu: RCU callback item for freeing.
   * @irq: back pointer to parent.
   * @enabled: true if driver enabled IRQ
   * @virq: the virtual IRQ value provided to the requesting driver.
···
   */
  struct vmd_irq {
      struct list_head node;
-     struct rcu_head rcu;
      struct vmd_irq_list *irq;
      bool enabled;
      unsigned int virq;
···
  /**
   * struct vmd_irq_list - list of driver requested IRQs mapping to a VMD vector
   * @irq_list: the list of irq's the VMD one demuxes to.
+  * @srcu: SRCU struct for local synchronization.
   * @count: number of child IRQs assigned to this vector; used to track
   *     sharing.
   */
  struct vmd_irq_list {
      struct list_head irq_list;
+     struct srcu_struct srcu;
      unsigned int count;
  };
···
      struct vmd_irq *vmdirq = irq_get_chip_data(virq);
      unsigned long flags;

-     synchronize_rcu();
+     synchronize_srcu(&vmdirq->irq->srcu);

      /* XXX: Potential optimization to rebalance */
      raw_spin_lock_irqsave(&list_lock, flags);
      vmdirq->irq->count--;
      raw_spin_unlock_irqrestore(&list_lock, flags);

-     kfree_rcu(vmdirq, rcu);
+     kfree(vmdirq);
  }

  static int vmd_msi_prepare(struct irq_domain *domain, struct device *dev,
···
  {
      struct vmd_irq_list *irqs = data;
      struct vmd_irq *vmdirq;
+     int idx;

-     rcu_read_lock();
+     idx = srcu_read_lock(&irqs->srcu);
      list_for_each_entry_rcu(vmdirq, &irqs->irq_list, node)
          generic_handle_irq(vmdirq->virq);
-     rcu_read_unlock();
+     srcu_read_unlock(&irqs->srcu, idx);

      return IRQ_HANDLED;
  }
···
          return -ENOMEM;

      for (i = 0; i < vmd->msix_count; i++) {
+         err = init_srcu_struct(&vmd->irqs[i].srcu);
+         if (err)
+             return err;
+
          INIT_LIST_HEAD(&vmd->irqs[i].irq_list);
          err = devm_request_irq(&dev->dev, pci_irq_vector(dev, i),
                                 vmd_irq, 0, "vmd", &vmd->irqs[i]);
···
      return 0;
  }

+ static void vmd_cleanup_srcu(struct vmd_dev *vmd)
+ {
+     int i;
+
+     for (i = 0; i < vmd->msix_count; i++)
+         cleanup_srcu_struct(&vmd->irqs[i].srcu);
+ }
+
  static void vmd_remove(struct pci_dev *dev)
  {
      struct vmd_dev *vmd = pci_get_drvdata(dev);

      vmd_detach_resources(vmd);
-     pci_set_drvdata(dev, NULL);
+     vmd_cleanup_srcu(vmd);
      sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
      pci_stop_root_bus(vmd->bus);
      pci_remove_root_bus(vmd->bus);
···
      irq_domain_remove(vmd->irq_domain);
  }

- #ifdef CONFIG_PM
+ #ifdef CONFIG_PM_SLEEP
  static int vmd_suspend(struct device *dev)
  {
      struct pci_dev *pdev = to_pci_dev(dev);
+1 -30
drivers/pci/hotplug/acpiphp_glue.c
···
      acpiphp_let_context_go(context);
  }

- /* Check whether the PCI device is managed by native PCIe hotplug driver */
- static bool device_is_managed_by_native_pciehp(struct pci_dev *pdev)
- {
-     u32 reg32;
-     acpi_handle tmp;
-     struct acpi_pci_root *root;
-
-     /* Check whether the PCIe port supports native PCIe hotplug */
-     if (pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &reg32))
-         return false;
-     if (!(reg32 & PCI_EXP_SLTCAP_HPC))
-         return false;
-
-     /*
-      * Check whether native PCIe hotplug has been enabled for
-      * this PCIe hierarchy.
-      */
-     tmp = acpi_find_root_bridge_handle(pdev);
-     if (!tmp)
-         return false;
-     root = acpi_pci_find_root(tmp);
-     if (!root)
-         return false;
-     if (!(root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL))
-         return false;
-
-     return true;
- }
-
  /**
   * acpiphp_add_context - Add ACPIPHP context to an ACPI device object.
   * @handle: ACPI handle of the object to add a context to.
···
       * expose slots to user space in those cases.
       */
      if ((acpi_pci_check_ejectable(pbus, handle) || is_dock_device(adev))
-         && !(pdev && device_is_managed_by_native_pciehp(pdev))) {
+         && !(pdev && pdev->is_hotplug_bridge && pciehp_is_native(pdev))) {
          unsigned long long sun;
          int retval;
+2 -1
drivers/pci/hotplug/cpqphp_core.c
···
       */
      if ((pdev->revision <= 2) && (vendor_id != PCI_VENDOR_ID_INTEL)) {
          err(msg_HPC_not_supported);
-         return -ENODEV;
+         rc = -ENODEV;
+         goto err_disable_device;
      }

      /* TODO: This code can be made to support non-Compaq or Intel
+3 -7
drivers/pci/hotplug/pci_hotplug_core.c
···
   *
   * Send feedback to <kristen.c.accardi@intel.com>
   *
+  * Authors:
+  *     Greg Kroah-Hartman <greg@kroah.com>
+  *     Scott Murray <scottm@somanetworks.com>
   */

  #include <linux/module.h>    /* try_module_get & module_put */
···
  #define info(format, arg...) printk(KERN_INFO "%s: " format, MY_NAME, ## arg)
  #define warn(format, arg...) printk(KERN_WARNING "%s: " format, MY_NAME, ## arg)

-
  /* local variables */
  static bool debug;
-
- #define DRIVER_VERSION  "0.5"
- #define DRIVER_AUTHOR   "Greg Kroah-Hartman <greg@kroah.com>, Scott Murray <scottm@somanetworks.com>"
- #define DRIVER_DESC     "PCI Hot Plug PCI Core"
-

  static LIST_HEAD(pci_hotplug_slot_list);
  static DEFINE_MUTEX(pci_hp_mutex);
···
          return result;
      }

-     info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
      return result;
  }
  device_initcall(pci_hotplug_init);
+4 -5
drivers/pci/hotplug/pciehp_core.c
···
   *
   * Send feedback to <greg@kroah.com>, <kristen.c.accardi@intel.com>
   *
+  * Authors:
+  *     Dan Zink <dan.zink@compaq.com>
+  *     Greg Kroah-Hartman <greg@kroah.com>
+  *     Dely Sy <dely.l.sy@intel.com>"
   */

  #include <linux/moduleparam.h>
···
  bool pciehp_poll_mode;
  int pciehp_poll_time;
  static bool pciehp_force;
-
- #define DRIVER_VERSION  "0.4"
- #define DRIVER_AUTHOR   "Dan Zink <dan.zink@compaq.com>, Greg Kroah-Hartman <greg@kroah.com>, Dely Sy <dely.l.sy@intel.com>"
- #define DRIVER_DESC     "PCI Express Hot Plug Controller Driver"

  /*
   * not really modular, but the easiest way to keep compat with existing
···
      retval = pcie_port_service_register(&hpdriver_portdrv);
      dbg("pcie_port_service_register = %d\n", retval);
-     info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
      if (retval)
          dbg("Failure to register service\n");
+7 -1
drivers/pci/hotplug/pciehp_ctrl.c
···
  #include <linux/kernel.h>
  #include <linux/types.h>
  #include <linux/slab.h>
+ #include <linux/pm_runtime.h>
  #include <linux/pci.h>
  #include "../pci.h"
  #include "pciehp.h"
···
      pciehp_green_led_blink(p_slot);

      /* Check link training status */
+     pm_runtime_get_sync(&ctrl->pcie->port->dev);
      retval = pciehp_check_link_status(ctrl);
      if (retval) {
          ctrl_err(ctrl, "Failed to check link status\n");
···
          if (retval != -EEXIST)
              goto err_exit;
      }
+     pm_runtime_put(&ctrl->pcie->port->dev);

      pciehp_green_led_on(p_slot);
      pciehp_set_attention_status(p_slot, 0);
      return 0;

  err_exit:
+     pm_runtime_put(&ctrl->pcie->port->dev);
      set_slot_off(ctrl, p_slot);
      return retval;
  }
···
      int retval;
      struct controller *ctrl = p_slot->ctrl;

+     pm_runtime_get_sync(&ctrl->pcie->port->dev);
      retval = pciehp_unconfigure_device(p_slot);
+     pm_runtime_put(&ctrl->pcie->port->dev);
      if (retval)
          return retval;
···
          if (getstatus) {
              ctrl_info(ctrl, "Slot(%s): Already enabled\n",
                        slot_name(p_slot));
-             return -EINVAL;
+             return 0;
          }
      }
+12 -9
drivers/pci/hotplug/pciehp_hpc.c
···
          pciehp_queue_interrupt_event(slot, INT_BUTTON_PRESS);
      }

-     /* Check Presence Detect Changed */
-     if (events & PCI_EXP_SLTSTA_PDC) {
+     /*
+      * Check Link Status Changed at higher precedence than Presence
+      * Detect Changed.  The PDS value may be set to "card present" from
+      * out-of-band detection, which may be in conflict with a Link Down
+      * and cause the wrong event to queue.
+      */
+     if (events & PCI_EXP_SLTSTA_DLLSC) {
+         ctrl_info(ctrl, "Slot(%s): Link %s\n", slot_name(slot),
+                   link ? "Up" : "Down");
+         pciehp_queue_interrupt_event(slot, link ? INT_LINK_UP :
+                                      INT_LINK_DOWN);
+     } else if (events & PCI_EXP_SLTSTA_PDC) {
          present = !!(status & PCI_EXP_SLTSTA_PDS);
          ctrl_info(ctrl, "Slot(%s): Card %spresent\n", slot_name(slot),
                    present ? "" : "not ");
···
          ctrl->power_fault_detected = 1;
          ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(slot));
          pciehp_queue_interrupt_event(slot, INT_POWER_FAULT);
-     }
-
-     if (events & PCI_EXP_SLTSTA_DLLSC) {
-         ctrl_info(ctrl, "Slot(%s): Link %s\n", slot_name(slot),
-                   link ? "Up" : "Down");
-         pciehp_queue_interrupt_event(slot, link ? INT_LINK_UP :
-                                      INT_LINK_DOWN);
      }

      return IRQ_HANDLED;
+55 -15
drivers/pci/iov.c
···
          return rc;
      }

-     pci_iov_set_numvfs(dev, nr_virtfn);
-     iov->ctrl |= PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE;
-     pci_cfg_access_lock(dev);
-     pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl);
-     msleep(100);
-     pci_cfg_access_unlock(dev);
-
      iov->initial_VFs = initial;
      if (nr_virtfn < initial)
          initial = nr_virtfn;
···
          dev_err(&dev->dev, "failure %d from pcibios_sriov_enable()\n", rc);
          goto err_pcibios;
      }
+
+     pci_iov_set_numvfs(dev, nr_virtfn);
+     iov->ctrl |= PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE;
+     pci_cfg_access_lock(dev);
+     pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, iov->ctrl);
+     msleep(100);
+     pci_cfg_access_unlock(dev);

      for (i = 0; i < initial; i++) {
          rc = pci_iov_add_virtfn(dev, i, 0);
···
  }

  /**
-  * pci_iov_resource_bar - get position of the SR-IOV BAR
+  * pci_iov_update_resource - update a VF BAR
   * @dev: the PCI device
   * @resno: the resource number
   *
-  * Returns position of the BAR encapsulated in the SR-IOV capability.
+  * Update a VF BAR in the SR-IOV capability of a PF.
   */
- int pci_iov_resource_bar(struct pci_dev *dev, int resno)
+ void pci_iov_update_resource(struct pci_dev *dev, int resno)
  {
-     if (resno < PCI_IOV_RESOURCES || resno > PCI_IOV_RESOURCE_END)
-         return 0;
-
-     BUG_ON(!dev->is_physfn);
-
-     return dev->sriov->pos + PCI_SRIOV_BAR +
-         4 * (resno - PCI_IOV_RESOURCES);
+     struct pci_sriov *iov = dev->is_physfn ? dev->sriov : NULL;
+     struct resource *res = dev->resource + resno;
+     int vf_bar = resno - PCI_IOV_RESOURCES;
+     struct pci_bus_region region;
+     u16 cmd;
+     u32 new;
+     int reg;
+
+     /*
+      * The generic pci_restore_bars() path calls this for all devices,
+      * including VFs and non-SR-IOV devices.  If this is not a PF, we
+      * have nothing to do.
+      */
+     if (!iov)
+         return;
+
+     pci_read_config_word(dev, iov->pos + PCI_SRIOV_CTRL, &cmd);
+     if ((cmd & PCI_SRIOV_CTRL_VFE) && (cmd & PCI_SRIOV_CTRL_MSE)) {
+         dev_WARN(&dev->dev, "can't update enabled VF BAR%d %pR\n",
+                  vf_bar, res);
+         return;
+     }
+
+     /*
+      * Ignore unimplemented BARs, unused resource slots for 64-bit
+      * BARs, and non-movable resources, e.g., those described via
+      * Enhanced Allocation.
+      */
+     if (!res->flags)
+         return;
+
+     if (res->flags & IORESOURCE_UNSET)
+         return;
+
+     if (res->flags & IORESOURCE_PCI_FIXED)
+         return;
+
+     pcibios_resource_to_bus(dev->bus, &region, res);
+     new = region.start;
+     new |= res->flags & ~PCI_BASE_ADDRESS_MEM_MASK;
+
+     reg = iov->pos + PCI_SRIOV_BAR + 4 * vf_bar;
+     pci_write_config_dword(dev, reg, new);
+     if (res->flags & IORESOURCE_MEM_64) {
+         new = region.start >> 16 >> 16;
+         pci_write_config_dword(dev, reg + 4, new);
+     }
  }

  resource_size_t __weak pcibios_iov_resource_alignment(struct pci_dev *dev,
+2 -1
drivers/pci/msi.c
···
      } else if (dev->msi_enabled) {
          struct msi_desc *entry = first_pci_msi_entry(dev);

-         if (WARN_ON_ONCE(!entry || nr >= entry->nvec_used))
+         if (WARN_ON_ONCE(!entry || !entry->affinity ||
+                          nr >= entry->nvec_used))
              return NULL;

          return &entry->affinity[nr];
+100
drivers/pci/pci-acpi.c
···
      0x91, 0x17, 0xea, 0x4d, 0x19, 0xc3, 0x43, 0x4d
  };

+ #if defined(CONFIG_PCI_QUIRKS) && defined(CONFIG_ARM64)
+ static int acpi_get_rc_addr(struct acpi_device *adev, struct resource *res)
+ {
+     struct device *dev = &adev->dev;
+     struct resource_entry *entry;
+     struct list_head list;
+     unsigned long flags;
+     int ret;
+
+     INIT_LIST_HEAD(&list);
+     flags = IORESOURCE_MEM;
+     ret = acpi_dev_get_resources(adev, &list,
+                                  acpi_dev_filter_resource_type_cb,
+                                  (void *) flags);
+     if (ret < 0) {
+         dev_err(dev, "failed to parse _CRS method, error code %d\n",
+                 ret);
+         return ret;
+     }
+
+     if (ret == 0) {
+         dev_err(dev, "no IO and memory resources present in _CRS\n");
+         return -EINVAL;
+     }
+
+     entry = list_first_entry(&list, struct resource_entry, node);
+     *res = *entry->res;
+     acpi_dev_free_resource_list(&list);
+     return 0;
+ }
+
+ static acpi_status acpi_match_rc(acpi_handle handle, u32 lvl, void *context,
+                                  void **retval)
+ {
+     u16 *segment = context;
+     unsigned long long uid;
+     acpi_status status;
+
+     status = acpi_evaluate_integer(handle, "_UID", NULL, &uid);
+     if (ACPI_FAILURE(status) || uid != *segment)
+         return AE_CTRL_DEPTH;
+
+     *(acpi_handle *)retval = handle;
+     return AE_CTRL_TERMINATE;
+ }
+
+ int acpi_get_rc_resources(struct device *dev, const char *hid, u16 segment,
+                           struct resource *res)
+ {
+     struct acpi_device *adev;
+     acpi_status status;
+     acpi_handle handle;
+     int ret;
+
+     status = acpi_get_devices(hid, acpi_match_rc, &segment, &handle);
+     if (ACPI_FAILURE(status)) {
+         dev_err(dev, "can't find _HID %s device to locate resources\n",
+                 hid);
+         return -ENODEV;
+     }
+
+     ret = acpi_bus_get_device(handle, &adev);
+     if (ret)
+         return ret;
+
+     ret = acpi_get_rc_addr(adev, res);
+     if (ret) {
+         dev_err(dev, "can't get resource from %s\n",
+                 dev_name(&adev->dev));
+         return ret;
+     }
+
+     return 0;
+ }
+ #endif
+
  phys_addr_t acpi_pci_root_get_mcfg_addr(acpi_handle handle)
  {
      acpi_status status = AE_NOT_EXIST;
···
      return -ENODEV;
  }
  EXPORT_SYMBOL_GPL(pci_get_hp_params);
+
+ /**
+  * pciehp_is_native - Check whether a hotplug port is handled by the OS
+  * @pdev: Hotplug port to check
+  *
+  * Walk up from @pdev to the host bridge, obtain its cached _OSC Control Field
+  * and return the value of the "PCI Express Native Hot Plug control" bit.
+  * On failure to obtain the _OSC Control Field return %false.
+  */
+ bool pciehp_is_native(struct pci_dev *pdev)
+ {
+     struct acpi_pci_root *root;
+     acpi_handle handle;
+
+     handle = acpi_find_root_bridge_handle(pdev);
+     if (!handle)
+         return false;
+
+     root = acpi_pci_find_root(handle);
+     if (!root)
+         return false;
+
+     return root->osc_control_set & OSC_PCI_EXPRESS_NATIVE_HP_CONTROL;
+ }

  /**
   * pci_acpi_wake_bus - Root bus wakeup notification fork function.
+1 -1
drivers/pci/pci-mid.c
···
      return false;
  }

- static struct pci_platform_pm_ops mid_pci_platform_pm = {
+ static const struct pci_platform_pm_ops mid_pci_platform_pm = {
      .is_manageable = mid_pci_power_manageable,
      .set_state = mid_pci_set_power_state,
      .get_state = mid_pci_get_power_state,
+2
drivers/pci/pci-sysfs.c
···
  pci_config_attr(device, "0x%04x\n");
  pci_config_attr(subsystem_vendor, "0x%04x\n");
  pci_config_attr(subsystem_device, "0x%04x\n");
+ pci_config_attr(revision, "0x%02x\n");
  pci_config_attr(class, "0x%06x\n");
  pci_config_attr(irq, "%u\n");
···
      &dev_attr_device.attr,
      &dev_attr_subsystem_vendor.attr,
      &dev_attr_subsystem_device.attr,
+     &dev_attr_revision.attr,
      &dev_attr_class.attr,
      &dev_attr_irq.attr,
      &dev_attr_local_cpus.attr,
+52 -86
drivers/pci/pci.c
···
  {
      int i;

-     /* Per SR-IOV spec 3.4.1.11, VF BARs are RO zero */
-     if (dev->is_virtfn)
-         return;
-
      for (i = 0; i < PCI_BRIDGE_RESOURCES; i++)
          pci_update_resource(dev, i);
  }
···
      if (!dev->pme_support)
          return false;

+     /* PME-capable in principle, but not from the intended sleep state */
+     if (!pci_pme_capable(dev, pci_target_state(dev)))
+         return false;
+
      while (bus->parent) {
          struct pci_dev *bridge = bus->self;
···
   * This function checks if it is possible to move the bridge to D3.
   * Currently we only allow D3 for recent enough PCIe ports.
   */
- static bool pci_bridge_d3_possible(struct pci_dev *bridge)
+ bool pci_bridge_d3_possible(struct pci_dev *bridge)
  {
      unsigned int year;
···
      case PCI_EXP_TYPE_DOWNSTREAM:
          if (pci_bridge_d3_disable)
              return false;
+
+         /*
+          * Hotplug ports handled by firmware in System Management Mode
+          * may not be put into D3 by the OS (Thunderbolt on non-Macs).
+          */
+         if (bridge->is_hotplug_bridge && !pciehp_is_native(bridge))
+             return false;
+
          if (pci_bridge_d3_force)
              return true;
···
  static int pci_dev_check_d3cold(struct pci_dev *dev, void *data)
  {
      bool *d3cold_ok = data;
-     bool no_d3cold;

-     /*
-      * The device needs to be allowed to go D3cold and if it is wake
-      * capable to do so from D3cold.
-      */
-     no_d3cold = dev->no_d3cold || !dev->d3cold_allowed ||
-         (device_may_wakeup(&dev->dev) && !pci_pme_capable(dev, PCI_D3cold)) ||
-         !pci_power_manageable(dev);
-
-     *d3cold_ok = !no_d3cold;
-
-     return no_d3cold;
+     if (/* The device needs to be allowed to go D3cold ... */
+         dev->no_d3cold || !dev->d3cold_allowed ||
+
+         /* ... and if it is wakeup capable to do so from D3cold. */
+         (device_may_wakeup(&dev->dev) &&
+          !pci_pme_capable(dev, PCI_D3cold)) ||
+
+         /* If it is a bridge it must be allowed to go to D3. */
+         !pci_power_manageable(dev) ||
+
+         /* Hotplug interrupts cannot be delivered if the link is down. */
+         dev->is_hotplug_bridge)
+
+         *d3cold_ok = false;
+
+     return !*d3cold_ok;
  }

  /*
   * pci_bridge_d3_update - Update bridge D3 capabilities
   * @dev: PCI device which is changed
-  * @remove: Is the device being removed
   *
   * Update upstream bridge PM capabilities accordingly depending on if the
   * device PM configuration was changed or the device is being removed.  The
   * change is also propagated upstream.
   */
- static void pci_bridge_d3_update(struct pci_dev *dev, bool remove)
+ void pci_bridge_d3_update(struct pci_dev *dev)
  {
+     bool remove = !device_is_registered(&dev->dev);
      struct pci_dev *bridge;
      bool d3cold_ok = true;
···
      if (!bridge || !pci_bridge_d3_possible(bridge))
          return;

-     pci_dev_get(bridge);
      /*
-      * If the device is removed we do not care about its D3cold
-      * capabilities.
+      * If D3 is currently allowed for the bridge, removing one of its
+      * children won't change that.
+      */
+     if (remove && bridge->bridge_d3)
+         return;
+
+     /*
+      * If D3 is currently allowed for the bridge and a child is added or
+      * changed, disallowance of D3 can only be caused by that child, so
+      * we only need to check that single device, not any of its siblings.
+      *
+      * If D3 is currently not allowed for the bridge, checking the device
+      * first may allow us to skip checking its siblings.
       */
      if (!remove)
          pci_dev_check_d3cold(dev, &d3cold_ok);

-     if (d3cold_ok) {
-         /*
-          * We need to go through all children to find out if all of
-          * them can still go to D3cold.
-          */
+     /*
+      * If D3 is currently not allowed for the bridge, this may be caused
+      * either by the device being changed/removed or any of its siblings,
+      * so we need to go through all children to find out if one of them
+      * continues to block D3.
+      */
+     if (d3cold_ok && !bridge->bridge_d3)
          pci_walk_bus(bridge->subordinate, pci_dev_check_d3cold,
                       &d3cold_ok);
-     }

      if (bridge->bridge_d3 != d3cold_ok) {
          bridge->bridge_d3 = d3cold_ok;
          /* Propagate change to upstream bridges */
-         pci_bridge_d3_update(bridge, false);
+         pci_bridge_d3_update(bridge);
      }
-
-     pci_dev_put(bridge);
- }
-
- /**
-  * pci_bridge_d3_device_changed - Update bridge D3 capabilities on change
-  * @dev: PCI device that was changed
-  *
-  * If a device is added or its PM configuration, such as is it allowed to
-  * enter D3cold, is changed this function updates upstream bridge PM
-  * capabilities accordingly.
-  */
- void pci_bridge_d3_device_changed(struct pci_dev *dev)
- {
-     pci_bridge_d3_update(dev, false);
- }
-
- /**
-  * pci_bridge_d3_device_removed - Update bridge D3 capabilities on remove
-  * @dev: PCI device being removed
-  *
-  * Function updates upstream bridge PM capabilities based on other devices
-  * still left on the bus.
-  */
- void pci_bridge_d3_device_removed(struct pci_dev *dev)
- {
-     pci_bridge_d3_update(dev, true);
  }

  /**
···
  {
      if (dev->no_d3cold) {
          dev->no_d3cold = false;
-         pci_bridge_d3_device_changed(dev);
+         pci_bridge_d3_update(dev);
      }
  }
  EXPORT_SYMBOL_GPL(pci_d3cold_enable);
···
  {
      if (!dev->no_d3cold) {
          dev->no_d3cold = true;
-         pci_bridge_d3_device_changed(dev);
+         pci_bridge_d3_update(dev);
      }
  }
  EXPORT_SYMBOL_GPL(pci_d3cold_disable);
···
      return bars;
  }
  EXPORT_SYMBOL(pci_select_bars);
-
- /**
-  * pci_resource_bar - get position of the BAR associated with a resource
-  * @dev: the PCI device
-  * @resno: the resource number
-  * @type: the BAR type to be filled in
-  *
-  * Returns BAR position in config space, or 0 if the BAR is invalid.
-  */
- int pci_resource_bar(struct pci_dev *dev, int resno, enum pci_bar_type *type)
- {
-     int reg;
-
-     if (resno < PCI_ROM_RESOURCE) {
-         *type = pci_bar_unknown;
-         return PCI_BASE_ADDRESS_0 + 4 * resno;
-     } else if (resno == PCI_ROM_RESOURCE) {
-         *type = pci_bar_mem32;
-         return dev->rom_base_reg;
-     } else if (resno < PCI_BRIDGE_RESOURCES) {
-         /* device specific resource */
-         *type = pci_bar_unknown;
-         reg = pci_iov_resource_bar(dev, resno);
-         if (reg)
-             return reg;
-     }
-
-     dev_err(&dev->dev, "BAR %d: invalid resource\n", resno);
-     return 0;
- }

  /* Some architectures require additional programming to enable VGA */
  static arch_set_vga_state_t arch_set_vga_state;
+8 -11
drivers/pci/pci.h
···
 #ifndef DRIVERS_PCI_H
 #define DRIVERS_PCI_H
 
-#define PCI_CFG_SPACE_SIZE	256
-#define PCI_CFG_SPACE_EXP_SIZE	4096
-
 #define PCI_FIND_CAP_TTL	48
 
 extern const unsigned char pcie_link_speed[];
···
 void pci_ea_init(struct pci_dev *dev);
 void pci_allocate_cap_save_buffers(struct pci_dev *dev);
 void pci_free_cap_save_buffers(struct pci_dev *dev);
-void pci_bridge_d3_device_changed(struct pci_dev *dev);
-void pci_bridge_d3_device_removed(struct pci_dev *dev);
+bool pci_bridge_d3_possible(struct pci_dev *dev);
+void pci_bridge_d3_update(struct pci_dev *dev);
 
 static inline void pci_wakeup_event(struct pci_dev *dev)
 {
···
 int pci_setup_device(struct pci_dev *dev);
 int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
 		    struct resource *res, unsigned int reg);
-int pci_resource_bar(struct pci_dev *dev, int resno, enum pci_bar_type *type);
 void pci_configure_ari(struct pci_dev *dev);
 void __pci_bus_size_bridges(struct pci_bus *bus,
 			    struct list_head *realloc_head);
···
 #ifdef CONFIG_PCI_IOV
 int pci_iov_init(struct pci_dev *dev);
 void pci_iov_release(struct pci_dev *dev);
-int pci_iov_resource_bar(struct pci_dev *dev, int resno);
+void pci_iov_update_resource(struct pci_dev *dev, int resno);
 resource_size_t pci_sriov_resource_alignment(struct pci_dev *dev, int resno);
 void pci_restore_iov_state(struct pci_dev *dev);
 int pci_iov_bus_range(struct pci_bus *bus);
···
 static inline void pci_iov_release(struct pci_dev *dev)
 
 {
-}
-static inline int pci_iov_resource_bar(struct pci_dev *dev, int resno)
-{
-	return 0;
 }
 static inline void pci_restore_iov_state(struct pci_dev *dev)
 {
···
 {
 	return -ENOTTY;
 }
+#endif
+
+#if defined(CONFIG_PCI_QUIRKS) && defined(CONFIG_ARM64)
+int acpi_get_rc_resources(struct device *dev, const char *hid, u16 segment,
+			  struct resource *res);
 #endif
 
 #endif /* DRIVERS_PCI_H */
+6 -12
drivers/pci/pcie/aer/aerdrv.c
···
 #include "aerdrv.h"
 #include "../../pci.h"
 
-/*
- * Version Information
- */
-#define DRIVER_VERSION "v1.0"
-#define DRIVER_AUTHOR "tom.l.nguyen@intel.com"
-#define DRIVER_DESC "Root Port Advanced Error Reporting Driver"
-
 static int aer_probe(struct pcie_device *dev);
 static void aer_remove(struct pcie_device *dev);
 static pci_ers_result_t aer_error_detected(struct pci_dev *dev,
···
 {
 	int status;
 	struct aer_rpc *rpc;
-	struct device *device = &dev->device;
+	struct device *device = &dev->port->dev;
 
 	/* Alloc rpc data structure */
 	rpc = aer_alloc_rpc(dev);
 	if (!rpc) {
-		dev_printk(KERN_DEBUG, device, "alloc rpc failed\n");
+		dev_printk(KERN_DEBUG, device, "alloc AER rpc failed\n");
 		aer_remove(dev);
 		return -ENOMEM;
 	}
···
 	/* Request IRQ ISR */
 	status = request_irq(dev->irq, aer_irq, IRQF_SHARED, "aerdrv", dev);
 	if (status) {
-		dev_printk(KERN_DEBUG, device, "request IRQ failed\n");
+		dev_printk(KERN_DEBUG, device, "request AER IRQ %d failed\n",
+			   dev->irq);
 		aer_remove(dev);
 		return status;
 	}
···
 	rpc->isr = 1;
 
 	aer_enable_rootport(rpc);
-
-	return status;
+	dev_info(device, "AER enabled with IRQ %d\n", dev->irq);
+	return 0;
 }
 
 /**
+19 -5
drivers/pci/pcie/aspm.c
···
 		return;
 	}
 
-	/* Configure common clock before checking latencies */
-	pcie_aspm_configure_common_clock(link);
-
 	/* Get upstream/downstream components' register state */
 	pcie_get_aspm_reg(parent, &upreg);
 	child = list_entry(linkbus->devices.next, struct pci_dev, bus_list);
+	pcie_get_aspm_reg(child, &dwreg);
+
+	/*
+	 * If ASPM not supported, don't mess with the clocks and link,
+	 * bail out now.
+	 */
+	if (!(upreg.support & dwreg.support))
+		return;
+
+	/* Configure common clock before checking latencies */
+	pcie_aspm_configure_common_clock(link);
+
+	/*
+	 * Re-read upstream/downstream components' register state
+	 * after clock configuration
+	 */
+	pcie_get_aspm_reg(parent, &upreg);
 	pcie_get_aspm_reg(child, &dwreg);
 
 	/*
···
 	return n;
 }
 
-static DEVICE_ATTR(link_state, 0644, link_state_show, link_state_store);
-static DEVICE_ATTR(clk_ctl, 0644, clk_ctl_show, clk_ctl_store);
+static DEVICE_ATTR_RW(link_state);
+static DEVICE_ATTR_RW(clk_ctl);
 
 static char power_group[] = "power";
 void pcie_aspm_create_sysfs_dev_files(struct pci_dev *pdev)
+7 -22
drivers/pci/pcie/pme.c
···
  */
 static int pcie_pme_set_native(struct pci_dev *dev, void *ign)
 {
-	dev_info(&dev->dev, "Signaling PME through PCIe PME interrupt\n");
-
 	device_set_run_wake(&dev->dev, true);
 	dev->pme_interrupt = true;
 	return 0;
···
 static void pcie_pme_mark_devices(struct pci_dev *port)
 {
 	pcie_pme_set_native(port, NULL);
-	if (port->subordinate) {
+	if (port->subordinate)
 		pci_walk_bus(port->subordinate, pcie_pme_set_native, NULL);
-	} else {
-		struct pci_bus *bus = port->bus;
-		struct pci_dev *dev;
-
-		/* Check if this is a root port event collector. */
-		if (pci_pcie_type(port) != PCI_EXP_TYPE_RC_EC || !bus)
-			return;
-
-		down_read(&pci_bus_sem);
-		list_for_each_entry(dev, &bus->devices, bus_list)
-			if (pci_is_pcie(dev)
-			    && pci_pcie_type(dev) == PCI_EXP_TYPE_RC_END)
-				pcie_pme_set_native(dev, NULL);
-		up_read(&pci_bus_sem);
-	}
 }
 
 /**
···
 	ret = request_irq(srv->irq, pcie_pme_irq, IRQF_SHARED, "PCIe PME", srv);
 	if (ret) {
 		kfree(data);
-	} else {
-		pcie_pme_mark_devices(port);
-		pcie_pme_interrupt_enable(port, true);
+		return ret;
 	}
 
-	return ret;
+	dev_info(&port->dev, "Signaling PME with IRQ %d\n", srv->irq);
+
+	pcie_pme_mark_devices(port);
+	pcie_pme_interrupt_enable(port, true);
+	return 0;
 }
 
 static bool pcie_pme_check_wakeup(struct pci_bus *bus)
-3
drivers/pci/pcie/portdrv_core.c
···
 	if (status)
 		return status;
 
-	dev_printk(KERN_DEBUG, dev, "service driver %s loaded\n", driver->name);
 	get_device(dev);
 	return 0;
 }
···
 	pciedev = to_pcie_device(dev);
 	driver = to_service_driver(dev->driver);
 	if (driver && driver->remove) {
-		dev_printk(KERN_DEBUG, dev, "unloading service driver %s\n",
-			   driver->name);
 		driver->remove(pciedev);
 		put_device(dev);
 	}
+3 -10
drivers/pci/pcie/portdrv_pci.c
···
 #include <linux/dmi.h>
 #include <linux/pci-aspm.h>
 
+#include "../pci.h"
 #include "portdrv.h"
 #include "aer/aerdrv.h"
···
 
 	pci_save_state(dev);
 
-	/*
-	 * Prevent runtime PM if the port is advertising support for PCIe
-	 * hotplug. Otherwise the BIOS hotplug SMI code might not be able
-	 * to enumerate devices behind this port properly (the port is
-	 * powered down preventing all config space accesses to the
-	 * subordinate devices). We can't be sure for native PCIe hotplug
-	 * either so prevent that as well.
-	 */
-	if (!dev->is_hotplug_bridge) {
+	if (pci_bridge_d3_possible(dev)) {
 		/*
 		 * Keep the port resumed 100ms to make sure things like
 		 * config space accesses from userspace (lspci) will not
···
 
 static void pcie_portdrv_remove(struct pci_dev *dev)
 {
-	if (!dev->is_hotplug_bridge) {
+	if (pci_bridge_d3_possible(dev)) {
 		pm_runtime_forbid(&dev->dev);
 		pm_runtime_get_noresume(&dev->dev);
 		pm_runtime_dont_use_autosuspend(&dev->dev);
+147 -100
drivers/pci/probe.c
···
 			mask64 = (u32)PCI_BASE_ADDRESS_MEM_MASK;
 		}
 	} else {
-		res->flags |= (l & IORESOURCE_ROM_ENABLE);
+		if (l & PCI_ROM_ADDRESS_ENABLE)
+			res->flags |= IORESOURCE_ROM_ENABLE;
 		l64 = l & PCI_ROM_ADDRESS_MASK;
 		sz64 = sz & PCI_ROM_ADDRESS_MASK;
 		mask64 = (u32)PCI_ROM_ADDRESS_MASK;
···
 	kfree(bridge);
 }
 
-static struct pci_host_bridge *pci_alloc_host_bridge(struct pci_bus *b)
+struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
 {
 	struct pci_host_bridge *bridge;
 
-	bridge = kzalloc(sizeof(*bridge), GFP_KERNEL);
+	bridge = kzalloc(sizeof(*bridge) + priv, GFP_KERNEL);
 	if (!bridge)
 		return NULL;
 
 	INIT_LIST_HEAD(&bridge->windows);
-	bridge->bus = b;
+
 	return bridge;
 }
+EXPORT_SYMBOL(pci_alloc_host_bridge);
 
 static const unsigned char pcix_bus_speed[] = {
 	PCI_SPEED_UNKNOWN,	/* 0 */
···
 
 	dev_set_msi_domain(&bus->dev, d);
 }
+
+int pci_register_host_bridge(struct pci_host_bridge *bridge)
+{
+	struct device *parent = bridge->dev.parent;
+	struct resource_entry *window, *n;
+	struct pci_bus *bus, *b;
+	resource_size_t offset;
+	LIST_HEAD(resources);
+	struct resource *res;
+	char addr[64], *fmt;
+	const char *name;
+	int err;
+
+	bus = pci_alloc_bus(NULL);
+	if (!bus)
+		return -ENOMEM;
+
+	bridge->bus = bus;
+
+	/* temporarily move resources off the list */
+	list_splice_init(&bridge->windows, &resources);
+	bus->sysdata = bridge->sysdata;
+	bus->msi = bridge->msi;
+	bus->ops = bridge->ops;
+	bus->number = bus->busn_res.start = bridge->busnr;
+#ifdef CONFIG_PCI_DOMAINS_GENERIC
+	bus->domain_nr = pci_bus_find_domain_nr(bus, parent);
+#endif
+
+	b = pci_find_bus(pci_domain_nr(bus), bridge->busnr);
+	if (b) {
+		/* If we already got to this bus through a different bridge, ignore it */
+		dev_dbg(&b->dev, "bus already known\n");
+		err = -EEXIST;
+		goto free;
+	}
+
+	dev_set_name(&bridge->dev, "pci%04x:%02x", pci_domain_nr(bus),
+		     bridge->busnr);
+
+	err = pcibios_root_bridge_prepare(bridge);
+	if (err)
+		goto free;
+
+	err = device_register(&bridge->dev);
+	if (err)
+		put_device(&bridge->dev);
+
+	bus->bridge = get_device(&bridge->dev);
+	device_enable_async_suspend(bus->bridge);
+	pci_set_bus_of_node(bus);
+	pci_set_bus_msi_domain(bus);
+
+	if (!parent)
+		set_dev_node(bus->bridge, pcibus_to_node(bus));
+
+	bus->dev.class = &pcibus_class;
+	bus->dev.parent = bus->bridge;
+
+	dev_set_name(&bus->dev, "%04x:%02x", pci_domain_nr(bus), bus->number);
+	name = dev_name(&bus->dev);
+
+	err = device_register(&bus->dev);
+	if (err)
+		goto unregister;
+
+	pcibios_add_bus(bus);
+
+	/* Create legacy_io and legacy_mem files for this bus */
+	pci_create_legacy_files(bus);
+
+	if (parent)
+		dev_info(parent, "PCI host bridge to bus %s\n", name);
+	else
+		pr_info("PCI host bridge to bus %s\n", name);
+
+	/* Add initial resources to the bus */
+	resource_list_for_each_entry_safe(window, n, &resources) {
+		list_move_tail(&window->node, &bridge->windows);
+		offset = window->offset;
+		res = window->res;
+
+		if (res->flags & IORESOURCE_BUS)
+			pci_bus_insert_busn_res(bus, bus->number, res->end);
+		else
+			pci_bus_add_resource(bus, res, 0);
+
+		if (offset) {
+			if (resource_type(res) == IORESOURCE_IO)
+				fmt = " (bus address [%#06llx-%#06llx])";
+			else
+				fmt = " (bus address [%#010llx-%#010llx])";
+
+			snprintf(addr, sizeof(addr), fmt,
+				 (unsigned long long)(res->start - offset),
+				 (unsigned long long)(res->end - offset));
+		} else
+			addr[0] = '\0';
+
+		dev_info(&bus->dev, "root bus resource %pR%s\n", res, addr);
+	}
+
+	down_write(&pci_bus_sem);
+	list_add_tail(&bus->node, &pci_root_buses);
+	up_write(&pci_bus_sem);
+
+	return 0;
+
+unregister:
+	put_device(&bridge->dev);
+	device_unregister(&bridge->dev);
+
+free:
+	kfree(bus);
+	return err;
+}
+EXPORT_SYMBOL(pci_register_host_bridge);
 
 static struct pci_bus *pci_alloc_child_bus(struct pci_bus *parent,
 					   struct pci_dev *bridge, int busnr)
···
 {
 }
 
-struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
-		struct pci_ops *ops, void *sysdata, struct list_head *resources)
+static struct pci_bus *pci_create_root_bus_msi(struct device *parent,
+		int bus, struct pci_ops *ops, void *sysdata,
+		struct list_head *resources, struct msi_controller *msi)
 {
 	int error;
 	struct pci_host_bridge *bridge;
-	struct pci_bus *b, *b2;
-	struct resource_entry *window, *n;
-	struct resource *res;
-	resource_size_t offset;
-	char bus_addr[64];
-	char *fmt;
 
-	b = pci_alloc_bus(NULL);
-	if (!b)
-		return NULL;
-
-	b->sysdata = sysdata;
-	b->ops = ops;
-	b->number = b->busn_res.start = bus;
-#ifdef CONFIG_PCI_DOMAINS_GENERIC
-	b->domain_nr = pci_bus_find_domain_nr(b, parent);
-#endif
-	b2 = pci_find_bus(pci_domain_nr(b), bus);
-	if (b2) {
-		/* If we already got to this bus through a different bridge, ignore it */
-		dev_dbg(&b2->dev, "bus already known\n");
-		goto err_out;
-	}
-
-	bridge = pci_alloc_host_bridge(b);
+	bridge = pci_alloc_host_bridge(0);
 	if (!bridge)
-		goto err_out;
+		return NULL;
 
 	bridge->dev.parent = parent;
 	bridge->dev.release = pci_release_host_bridge_dev;
-	dev_set_name(&bridge->dev, "pci%04x:%02x", pci_domain_nr(b), bus);
-	error = pcibios_root_bridge_prepare(bridge);
-	if (error) {
-		kfree(bridge);
+
+	list_splice_init(resources, &bridge->windows);
+	bridge->sysdata = sysdata;
+	bridge->busnr = bus;
+	bridge->ops = ops;
+	bridge->msi = msi;
+
+	error = pci_register_host_bridge(bridge);
+	if (error < 0)
 		goto err_out;
-	}
 
-	error = device_register(&bridge->dev);
-	if (error) {
-		put_device(&bridge->dev);
-		goto err_out;
-	}
-	b->bridge = get_device(&bridge->dev);
-	device_enable_async_suspend(b->bridge);
-	pci_set_bus_of_node(b);
-	pci_set_bus_msi_domain(b);
+	return bridge->bus;
 
-	if (!parent)
-		set_dev_node(b->bridge, pcibus_to_node(b));
-
-	b->dev.class = &pcibus_class;
-	b->dev.parent = b->bridge;
-	dev_set_name(&b->dev, "%04x:%02x", pci_domain_nr(b), bus);
-	error = device_register(&b->dev);
-	if (error)
-		goto class_dev_reg_err;
-
-	pcibios_add_bus(b);
-
-	/* Create legacy_io and legacy_mem files for this bus */
-	pci_create_legacy_files(b);
-
-	if (parent)
-		dev_info(parent, "PCI host bridge to bus %s\n", dev_name(&b->dev));
-	else
-		printk(KERN_INFO "PCI host bridge to bus %s\n", dev_name(&b->dev));
-
-	/* Add initial resources to the bus */
-	resource_list_for_each_entry_safe(window, n, resources) {
-		list_move_tail(&window->node, &bridge->windows);
-		res = window->res;
-		offset = window->offset;
-		if (res->flags & IORESOURCE_BUS)
-			pci_bus_insert_busn_res(b, bus, res->end);
-		else
-			pci_bus_add_resource(b, res, 0);
-		if (offset) {
-			if (resource_type(res) == IORESOURCE_IO)
-				fmt = " (bus address [%#06llx-%#06llx])";
-			else
-				fmt = " (bus address [%#010llx-%#010llx])";
-			snprintf(bus_addr, sizeof(bus_addr), fmt,
-				 (unsigned long long)(res->start - offset),
-				 (unsigned long long)(res->end - offset));
-		} else
-			bus_addr[0] = '\0';
-		dev_info(&b->dev, "root bus resource %pR%s\n", res, bus_addr);
-	}
-
-	down_write(&pci_bus_sem);
-	list_add_tail(&b->node, &pci_root_buses);
-	up_write(&pci_bus_sem);
-
-	return b;
-
-class_dev_reg_err:
-	put_device(&bridge->dev);
-	device_unregister(&bridge->dev);
 err_out:
-	kfree(b);
+	kfree(bridge);
 	return NULL;
+}
+
+struct pci_bus *pci_create_root_bus(struct device *parent, int bus,
+		struct pci_ops *ops, void *sysdata, struct list_head *resources)
+{
+	return pci_create_root_bus_msi(parent, bus, ops, sysdata, resources,
+				       NULL);
 }
 EXPORT_SYMBOL_GPL(pci_create_root_bus);
 
···
 		break;
 	}
 
-	b = pci_create_root_bus(parent, bus, ops, sysdata, resources);
+	b = pci_create_root_bus_msi(parent, bus, ops, sysdata, resources, msi);
 	if (!b)
 		return NULL;
-
-	b->msi = msi;
 
 	if (!found) {
 		dev_info(&b->dev,
+144 -38
drivers/pci/quirks.c
···
 {
 	if (dev->vpd) {
 		dev->vpd->len = 0;
-		dev_warn(&dev->dev, FW_BUG "VPD access disabled\n");
+		dev_warn(&dev->dev, FW_BUG "disabling VPD access (can't determine size of non-standard VPD format)\n");
 	}
 }
···
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b7, quirk_remove_d3_delay);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2298, quirk_remove_d3_delay);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x229c, quirk_remove_d3_delay);
+
 /*
- * Some devices may pass our check in pci_intx_mask_supported if
+ * Some devices may pass our check in pci_intx_mask_supported() if
  * PCI_COMMAND_INTX_DISABLE works though they actually do not properly
  * support this feature.
  */
···
 {
 	dev->broken_intx_masking = 1;
 }
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_CHELSIO, 0x0030,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(0x1814, 0x0601, /* Ralink RT2800 802.11n PCI */
-			 quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x0030,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(0x1814, 0x0601, /* Ralink RT2800 802.11n PCI */
+			quirk_broken_intx_masking);
+
 /*
  * Realtek RTL8169 PCI Gigabit Ethernet Controller (rev 10)
  * Subsystem: Realtek RTL8169/8110 Family PCI Gigabit Ethernet NIC
  *
  * RTL8110SC - Fails under PCI device assignment using DisINTx masking.
  */
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_REALTEK, 0x8169,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID,
-			 quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REALTEK, 0x8169,
+			quirk_broken_intx_masking);
 
 /*
  * Intel i40e (XL710/X710) 10/20/40GbE NICs all have broken INTx masking,
  * DisINTx can be set but the interrupt status bit is non-functional.
  */
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1572,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1574,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1580,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1581,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1583,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1584,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1585,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1586,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1587,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1588,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x1589,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x37d0,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x37d1,
-			 quirk_broken_intx_masking);
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x37d2,
-			 quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1572,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1574,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1580,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1581,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1583,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1584,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1585,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1586,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1587,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1588,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1589,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x37d0,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x37d1,
+			quirk_broken_intx_masking);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x37d2,
+			quirk_broken_intx_masking);
+
+static u16 mellanox_broken_intx_devs[] = {
+	PCI_DEVICE_ID_MELLANOX_HERMON_SDR,
+	PCI_DEVICE_ID_MELLANOX_HERMON_DDR,
+	PCI_DEVICE_ID_MELLANOX_HERMON_QDR,
+	PCI_DEVICE_ID_MELLANOX_HERMON_DDR_GEN2,
+	PCI_DEVICE_ID_MELLANOX_HERMON_QDR_GEN2,
+	PCI_DEVICE_ID_MELLANOX_HERMON_EN,
+	PCI_DEVICE_ID_MELLANOX_HERMON_EN_GEN2,
+	PCI_DEVICE_ID_MELLANOX_CONNECTX_EN,
+	PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_T_GEN2,
+	PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_GEN2,
+	PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_5_GEN2,
+	PCI_DEVICE_ID_MELLANOX_CONNECTX2,
+	PCI_DEVICE_ID_MELLANOX_CONNECTX3,
+	PCI_DEVICE_ID_MELLANOX_CONNECTX3_PRO,
+};
+
+#define CONNECTX_4_CURR_MAX_MINOR 99
+#define CONNECTX_4_INTX_SUPPORT_MINOR 14
+
+/*
+ * Check ConnectX-4/LX FW version to see if it supports legacy interrupts.
+ * If so, don't mark it as broken.
+ * FW minor > 99 means older FW version format and no INTx masking support.
+ * FW minor < 14 means new FW version format and no INTx masking support.
+ */
+static void mellanox_check_broken_intx_masking(struct pci_dev *pdev)
+{
+	__be32 __iomem *fw_ver;
+	u16 fw_major;
+	u16 fw_minor;
+	u16 fw_subminor;
+	u32 fw_maj_min;
+	u32 fw_sub_min;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(mellanox_broken_intx_devs); i++) {
+		if (pdev->device == mellanox_broken_intx_devs[i]) {
+			pdev->broken_intx_masking = 1;
+			return;
+		}
+	}
+
+	/* Getting here means Connect-IB cards and up. Connect-IB has no INTx
+	 * support so shouldn't be checked further
+	 */
+	if (pdev->device == PCI_DEVICE_ID_MELLANOX_CONNECTIB)
+		return;
+
+	if (pdev->device != PCI_DEVICE_ID_MELLANOX_CONNECTX4 &&
+	    pdev->device != PCI_DEVICE_ID_MELLANOX_CONNECTX4_LX)
+		return;
+
+	/* For ConnectX-4 and ConnectX-4LX, need to check FW support */
+	if (pci_enable_device_mem(pdev)) {
+		dev_warn(&pdev->dev, "Can't enable device memory\n");
+		return;
+	}
+
+	fw_ver = ioremap(pci_resource_start(pdev, 0), 4);
+	if (!fw_ver) {
+		dev_warn(&pdev->dev, "Can't map ConnectX-4 initialization segment\n");
+		goto out;
+	}
+
+	/* Reading from resource space should be 32b aligned */
+	fw_maj_min = ioread32be(fw_ver);
+	fw_sub_min = ioread32be(fw_ver + 1);
+	fw_major = fw_maj_min & 0xffff;
+	fw_minor = fw_maj_min >> 16;
+	fw_subminor = fw_sub_min & 0xffff;
+	if (fw_minor > CONNECTX_4_CURR_MAX_MINOR ||
+	    fw_minor < CONNECTX_4_INTX_SUPPORT_MINOR) {
+		dev_warn(&pdev->dev, "ConnectX-4: FW %u.%u.%u doesn't support INTx masking, disabling. Please upgrade FW to %d.14.1100 and up for INTx support\n",
+			 fw_major, fw_minor, fw_subminor, pdev->device ==
+			 PCI_DEVICE_ID_MELLANOX_CONNECTX4 ? 12 : 14);
+		pdev->broken_intx_masking = 1;
+	}
+
+	iounmap(fw_ver);
+
+out:
+	pci_disable_device(pdev);
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID,
+			mellanox_check_broken_intx_masking);
 
 static void quirk_no_bus_reset(struct pci_dev *dev)
 {
···
 			quirk_thunderbolt_hotplug_msi);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PORT_RIDGE,
 			quirk_thunderbolt_hotplug_msi);
+
+static void quirk_chelsio_extend_vpd(struct pci_dev *dev)
+{
+	pci_set_vpd_size(dev, 8192);
+}
+
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x20, quirk_chelsio_extend_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x21, quirk_chelsio_extend_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x22, quirk_chelsio_extend_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x23, quirk_chelsio_extend_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x24, quirk_chelsio_extend_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x25, quirk_chelsio_extend_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x26, quirk_chelsio_extend_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x30, quirk_chelsio_extend_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x31, quirk_chelsio_extend_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x32, quirk_chelsio_extend_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x35, quirk_chelsio_extend_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x36, quirk_chelsio_extend_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x37, quirk_chelsio_extend_vpd);
 
 #ifdef CONFIG_ACPI
 /*
+1 -1
drivers/pci/remove.c
···
 	list_del(&dev->bus_list);
 	up_write(&pci_bus_sem);
 
-	pci_bridge_d3_device_removed(dev);
+	pci_bridge_d3_update(dev);
 	pci_free_resources(dev);
 	put_device(&dev->dev);
 }
+5
drivers/pci/rom.c
···
 	if (res->flags & IORESOURCE_ROM_SHADOW)
 		return 0;
 
+	/*
+	 * Ideally pci_update_resource() would update the ROM BAR address,
+	 * and we would only set the enable bit here.  But apparently some
+	 * devices have buggy ROM BARs that read as zero when disabled.
+	 */
 	pcibios_resource_to_bus(pdev->bus, &region, res);
 	pci_read_config_dword(pdev, pdev->rom_base_reg, &rom_addr);
 	rom_addr &= ~PCI_ROM_ADDRESS_MASK;
+34 -14
drivers/pci/setup-res.c
···
 #include <linux/slab.h>
 #include "pci.h"
 
-
-void pci_update_resource(struct pci_dev *dev, int resno)
+static void pci_std_update_resource(struct pci_dev *dev, int resno)
 {
 	struct pci_bus_region region;
 	bool disable;
 	u16 cmd;
 	u32 new, check, mask;
 	int reg;
-	enum pci_bar_type type;
 	struct resource *res = dev->resource + resno;
 
-	if (dev->is_virtfn) {
-		dev_warn(&dev->dev, "can't update VF BAR%d\n", resno);
+	/* Per SR-IOV spec 3.4.1.11, VF BARs are RO zero */
+	if (dev->is_virtfn)
 		return;
-	}
 
 	/*
 	 * Ignore resources for unimplemented BARs and unused resource slots
···
 		return;
 
 	pcibios_resource_to_bus(dev->bus, &region, res);
+	new = region.start;
 
-	new = region.start | (res->flags & PCI_REGION_FLAG_MASK);
-	if (res->flags & IORESOURCE_IO)
+	if (res->flags & IORESOURCE_IO) {
 		mask = (u32)PCI_BASE_ADDRESS_IO_MASK;
-	else
+		new |= res->flags & ~PCI_BASE_ADDRESS_IO_MASK;
+	} else if (resno == PCI_ROM_RESOURCE) {
+		mask = (u32)PCI_ROM_ADDRESS_MASK;
+	} else {
 		mask = (u32)PCI_BASE_ADDRESS_MEM_MASK;
+		new |= res->flags & ~PCI_BASE_ADDRESS_MEM_MASK;
+	}
 
-	reg = pci_resource_bar(dev, resno, &type);
-	if (!reg)
-		return;
-	if (type != pci_bar_unknown) {
+	if (resno < PCI_ROM_RESOURCE) {
+		reg = PCI_BASE_ADDRESS_0 + 4 * resno;
+	} else if (resno == PCI_ROM_RESOURCE) {
+
+		/*
+		 * Apparently some Matrox devices have ROM BARs that read
+		 * as zero when disabled, so don't update ROM BARs unless
+		 * they're enabled.  See https://lkml.org/lkml/2005/8/30/138.
+		 */
 		if (!(res->flags & IORESOURCE_ROM_ENABLE))
 			return;
+
+		reg = dev->rom_base_reg;
 		new |= PCI_ROM_ADDRESS_ENABLE;
-	}
+	} else
+		return;
 
 	/*
 	 * We can't update a 64-bit BAR atomically, so when possible,
···
 
 	if (disable)
 		pci_write_config_word(dev, PCI_COMMAND, cmd);
+}
+
+void pci_update_resource(struct pci_dev *dev, int resno)
+{
+	if (resno <= PCI_ROM_RESOURCE)
+		pci_std_update_resource(dev, resno);
+#ifdef CONFIG_PCI_IOV
+	else if (resno >= PCI_IOV_RESOURCES && resno <= PCI_IOV_RESOURCE_END)
+		pci_iov_update_resource(dev, resno);
+#endif
 }
 
 int pci_claim_resource(struct pci_dev *dev, int resource)
+4
drivers/usb/host/uhci-pci.c
···
 	if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_HP)
 		uhci->wait_for_hp = 1;
 
+	/* Intel controllers use non-PME wakeup signalling */
+	if (to_pci_dev(uhci_dev(uhci))->vendor == PCI_VENDOR_ID_INTEL)
+		device_set_run_wake(uhci_dev(uhci), 1);
+
 	/* Set up pointers to PCI-specific functions */
 	uhci->reset_hc = uhci_pci_reset_hc;
 	uhci->check_and_reset_hc = uhci_pci_check_and_reset_hc;
-2
drivers/vfio/pci/vfio_pci_config.c
···
 
 #include "vfio_pci_private.h"
 
-#define PCI_CFG_SPACE_SIZE	256
-
 /* Fake capability ID for standard config space */
 #define PCI_CAP_ID_BASIC	0
 
+7
include/linux/acpi.h
···
 	return acpi_dev_filter_resource_type(ares, (unsigned long)arg);
 }
 
+struct acpi_device *acpi_resource_consumer(struct resource *res);
+
 int acpi_check_resource_conflict(const struct resource *res);
 
 int acpi_check_region(resource_size_t start, resource_size_t n,
···
 static inline int acpi_reconfig_notifier_unregister(struct notifier_block *nb)
 {
 	return -EINVAL;
+}
+
+static inline struct acpi_device *acpi_resource_consumer(struct resource *res)
+{
+	return NULL;
 }
 
 #endif	/* !CONFIG_ACPI */
+7
include/linux/of_pci.h
···
 int of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin);
 int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
 int of_get_pci_domain_nr(struct device_node *node);
+int of_pci_get_max_link_speed(struct device_node *node);
 void of_pci_check_probe_only(void);
 int of_pci_map_rid(struct device_node *np, u32 rid,
 		   const char *map_name, const char *map_mask_name,
···
 static inline int of_pci_map_rid(struct device_node *np, u32 rid,
 			const char *map_name, const char *map_mask_name,
 			struct device_node **target, u32 *id_out)
+{
+	return -EINVAL;
+}
+
+static inline int
+of_pci_get_max_link_speed(struct device_node *node)
 {
 	return -EINVAL;
 }
include/linux/pci-acpi.h (+3 -1)
···
 }
 extern phys_addr_t acpi_pci_root_get_mcfg_addr(acpi_handle handle);
 
-extern phys_addr_t pci_mcfg_lookup(u16 domain, struct resource *bus_res);
+struct pci_ecam_ops;
+extern int pci_mcfg_lookup(struct acpi_pci_root *root, struct resource *cfgres,
+			   struct pci_ecam_ops **ecam_ops);
 
 static inline acpi_handle acpi_find_root_bridge_handle(struct pci_dev *pdev)
 {
include/linux/pci-ecam.h (+9)
···
 /* default ECAM ops */
 extern struct pci_ecam_ops pci_generic_ecam_ops;
 
+#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
+extern struct pci_ecam_ops pci_32b_ops;			/* 32-bit accesses only */
+extern struct pci_ecam_ops hisi_pcie_ops;		/* HiSilicon */
+extern struct pci_ecam_ops thunder_pem_ecam_ops;	/* Cavium ThunderX 1.x & 2.x */
+extern struct pci_ecam_ops pci_thunder_ecam_ops;	/* Cavium ThunderX 1.x */
+extern struct pci_ecam_ops xgene_v1_pcie_ecam_ops;	/* APM X-Gene PCIe v1 */
+extern struct pci_ecam_ops xgene_v2_pcie_ecam_ops;	/* APM X-Gene PCIe v2.x */
+#endif
+
 #ifdef CONFIG_PCI_HOST_GENERIC
 /* for DT-based PCI controllers that support ECAM */
 int pci_host_common_probe(struct platform_device *pdev,
include/linux/pci.h (+17)
···
 struct pci_host_bridge {
 	struct device dev;
 	struct pci_bus *bus;		/* root bus */
+	struct pci_ops *ops;
+	void *sysdata;
+	int busnr;
 	struct list_head windows;	/* resource_entry */
 	void (*release_fn)(struct pci_host_bridge *);
 	void *release_data;
+	struct msi_controller *msi;
 	unsigned int ignore_reset_delay:1;	/* for entire hierarchy */
 	/* Resource alignment requirements */
 	resource_size_t (*align_resource)(struct pci_dev *dev,
···
 			resource_size_t start,
 			resource_size_t size,
 			resource_size_t align);
+	unsigned long private[0] ____cacheline_aligned;
 };
 
 #define to_pci_host_bridge(n) container_of(n, struct pci_host_bridge, dev)
 
+static inline void *pci_host_bridge_priv(struct pci_host_bridge *bridge)
+{
+	return (void *)bridge->private;
+}
+
+static inline struct pci_host_bridge *pci_host_bridge_from_priv(void *priv)
+{
+	return container_of(priv, struct pci_host_bridge, private);
+}
+
+struct pci_host_bridge *pci_alloc_host_bridge(size_t priv);
+int pci_register_host_bridge(struct pci_host_bridge *bridge);
 struct pci_host_bridge *pci_find_host_bridge(struct pci_bus *bus);
 
 void pci_set_host_bridge_release(struct pci_host_bridge *bridge,
include/linux/pci_hotplug.h (+2)
···
 #ifdef CONFIG_ACPI
 #include <linux/acpi.h>
 int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp);
+bool pciehp_is_native(struct pci_dev *pdev);
 int acpi_get_hp_hw_control_from_firmware(struct pci_dev *dev, u32 flags);
 int acpi_pci_check_ejectable(struct pci_bus *pbus, acpi_handle handle);
 int acpi_pci_detect_ejectable(acpi_handle handle);
···
 {
 	return -ENODEV;
 }
+static inline bool pciehp_is_native(struct pci_dev *pdev) { return true; }
 #endif
 #endif
include/linux/pci_ids.h (+22 -5)
···
 #define PCI_DEVICE_ID_ZOLTRIX_2BD0	0x2bd0
 
 #define PCI_VENDOR_ID_MELLANOX		0x15b3
-#define PCI_DEVICE_ID_MELLANOX_TAVOR	0x5a44
+#define PCI_DEVICE_ID_MELLANOX_CONNECTX3	0x1003
+#define PCI_DEVICE_ID_MELLANOX_CONNECTX3_PRO	0x1007
+#define PCI_DEVICE_ID_MELLANOX_CONNECTIB	0x1011
+#define PCI_DEVICE_ID_MELLANOX_CONNECTX4	0x1013
+#define PCI_DEVICE_ID_MELLANOX_CONNECTX4_LX	0x1015
+#define PCI_DEVICE_ID_MELLANOX_TAVOR		0x5a44
 #define PCI_DEVICE_ID_MELLANOX_TAVOR_BRIDGE	0x5a46
-#define PCI_DEVICE_ID_MELLANOX_ARBEL_COMPAT	0x6278
-#define PCI_DEVICE_ID_MELLANOX_ARBEL	0x6282
-#define PCI_DEVICE_ID_MELLANOX_SINAI_OLD 0x5e8c
-#define PCI_DEVICE_ID_MELLANOX_SINAI	0x6274
+#define PCI_DEVICE_ID_MELLANOX_SINAI_OLD	0x5e8c
+#define PCI_DEVICE_ID_MELLANOX_SINAI		0x6274
+#define PCI_DEVICE_ID_MELLANOX_ARBEL_COMPAT	0x6278
+#define PCI_DEVICE_ID_MELLANOX_ARBEL		0x6282
+#define PCI_DEVICE_ID_MELLANOX_HERMON_SDR	0x6340
+#define PCI_DEVICE_ID_MELLANOX_HERMON_DDR	0x634a
+#define PCI_DEVICE_ID_MELLANOX_HERMON_QDR	0x6354
+#define PCI_DEVICE_ID_MELLANOX_HERMON_EN	0x6368
+#define PCI_DEVICE_ID_MELLANOX_CONNECTX_EN	0x6372
+#define PCI_DEVICE_ID_MELLANOX_HERMON_DDR_GEN2	0x6732
+#define PCI_DEVICE_ID_MELLANOX_HERMON_QDR_GEN2	0x673c
+#define PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_5_GEN2 0x6746
+#define PCI_DEVICE_ID_MELLANOX_HERMON_EN_GEN2	0x6750
+#define PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_T_GEN2 0x675a
+#define PCI_DEVICE_ID_MELLANOX_CONNECTX_EN_GEN2	0x6764
+#define PCI_DEVICE_ID_MELLANOX_CONNECTX2	0x676e
 
 #define PCI_VENDOR_ID_DFI		0x15bd
 
include/uapi/linux/pci_regs.h (+8)
···
 #define LINUX_PCI_REGS_H
 
 /*
+ * Conventional PCI and PCI-X Mode 1 devices have 256 bytes of
+ * configuration space.  PCI-X Mode 2 and PCIe devices have 4096 bytes of
+ * configuration space.
+ */
+#define PCI_CFG_SPACE_SIZE	256
+#define PCI_CFG_SPACE_EXP_SIZE	4096
+
+/*
  * Under PCI, each device has 256 bytes of configuration address space,
  * of which the first 64 bytes are standardized as follows:
  */