Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pci-v4.15-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:

- detach driver before tearing down procfs/sysfs (Alex Williamson)

- disable PCIe services during shutdown (Sinan Kaya)

- fix ASPM oops on systems with no Root Ports (Ard Biesheuvel)

- fix ASPM LTR_L1.2_THRESHOLD programming (Bjorn Helgaas)

- fix ASPM Common_Mode_Restore_Time computation (Bjorn Helgaas)

- fix portdrv MSI/MSI-X vector allocation (Dongdong Liu, Bjorn
Helgaas)

- report non-fatal AER errors only to the affected endpoint (Gabriele
Paoloni)

- distribute bus numbers, MMIO, and I/O space among hotplug bridges to
allow more devices to be hot-added (Mika Westerberg)

- fix pciehp races during initialization and surprise link down (Mika
Westerberg)

- handle surprise-removed devices in PME handling (Qiang)

- support resizable BARs for large graphics devices (Christian König)

- expose SR-IOV offset, stride, and VF device ID via sysfs (Filippo
Sironi)

- create SR-IOV virtfn/physfn sysfs links before attaching driver
(Stuart Hayes)

- fix SR-IOV "ARI Capable Hierarchy" restore issue (Tony Nguyen)

- enforce Kconfig IOV/REALLOC dependency (Sascha El-Sharkawy)

- avoid slot reset if bridge itself is broken (Jan Glauber)

- clean up pci_reset_function() path (Jan H. Schönherr)

- make pci_map_rom() fail if the option ROM is invalid (Changbin Du)

- convert timers to timer_setup() (Kees Cook)

- move PCI_QUIRKS to PCI bus Kconfig menu (Randy Dunlap)

- constify pci_dev_type and intel_mid_pci_ops (Bhumika Goyal)

- remove unnecessary pci_dev, pci_bus, resource, pcibios_set_master()
declarations (Bjorn Helgaas)

- fix endpoint framework overflows and BUG()s (Dan Carpenter)

- fix endpoint framework issues (Kishon Vijay Abraham I)

- avoid broken Cavium CN8xxx bus reset behavior (David Daney)

- extend Cavium ACS capability quirks (Vadim Lomovtsev)

- support Synopsys DesignWare RC in ECAM mode (Ard Biesheuvel)

- turn off dra7xx clocks cleanly on shutdown (Keerthy)

- fix Faraday probe error path (Wei Yongjun)

- support HiSilicon STB SoC PCIe host controller (Jianguo Sun)

- fix Hyper-V interrupt affinity issue (Dexuan Cui)

- remove useless ACPI warning for Hyper-V pass-through devices (Vitaly
Kuznetsov)

- support multiple MSI on iProc (Sandor Bodo-Merle)

- support Layerscape LS1012a and LS1046a PCIe host controllers (Hou
Zhiqiang)

- fix Layerscape default error response (Minghuan Lian)

- support MSI on Tango host controller (Marc Gonzalez)

- support Tegra186 PCIe host controller (Manikanta Maddireddy)

- use generic accessors on Tegra when possible (Thierry Reding)

- support V3 Semiconductor PCI host controller (Linus Walleij)

* tag 'pci-v4.15-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (85 commits)
PCI/ASPM: Add L1 Substates definitions
PCI/ASPM: Reformat ASPM register definitions
PCI/ASPM: Use correct capability pointer to program LTR_L1.2_THRESHOLD
PCI/ASPM: Account for downstream device's Port Common_Mode_Restore_Time
PCI: xgene: Rename xgene_pcie_probe_bridge() to xgene_pcie_probe()
PCI: xilinx: Rename xilinx_pcie_link_is_up() to xilinx_pcie_link_up()
PCI: altera: Rename altera_pcie_link_is_up() to altera_pcie_link_up()
PCI: Fix kernel-doc build warning
PCI: Fail pci_map_rom() if the option ROM is invalid
PCI: Move pci_map_rom() error path
PCI: Move PCI_QUIRKS to the PCI bus menu
alpha/PCI: Make pdev_save_srm_config() static
PCI: Remove unused declarations
PCI: Remove redundant pci_dev, pci_bus, resource declarations
PCI: Remove redundant pcibios_set_master() declarations
PCI/PME: Handle invalid data when reading Root Status
PCI: hv: Use effective affinity mask
PCI: pciehp: Do not clear Presence Detect Changed during initialization
PCI: pciehp: Fix race condition handling surprise link down
PCI: Distribute available resources to hotplug-capable bridges
...

+3458 -576
Documentation/devicetree/bindings/interrupt-controller/fsl,ls-scfg-msi.txt (+1)
···
        "fsl,ls1043a-msi"
        "fsl,ls1046a-msi"
        "fsl,ls1043a-v1.1-msi"
 +      "fsl,ls1012a-msi"
  - msi-controller: indicates that this is a PCIe MSI controller node
  - reg: physical base address of the controller and length of memory mapped.
  - interrupts: an interrupt to the parent interrupt controller.
Documentation/devicetree/bindings/pci/designware-pcie-ecam.txt (+42, new file)
···
* Synopsys DesignWare PCIe root complex in ECAM shift mode

In some cases, firmware may already have configured the Synopsys DesignWare
PCIe controller in RC mode with static ATU window mappings that cover all
config, MMIO and I/O spaces in a [mostly] ECAM compatible fashion.
In this case, there is no need for the OS to perform any low level setup
of clocks, PHYs or device registers, nor is there any reason for the driver
to reconfigure ATU windows for config and/or IO space accesses at runtime.

In cases where the IP was synthesized with a minimum ATU window size of
64 KB, it cannot be supported by the generic ECAM driver, because it
requires special config space accessors that filter accesses to device #1
and beyond on the first bus.

Required properties:
- compatible: "marvell,armada8k-pcie-ecam" or
  "socionext,synquacer-pcie-ecam" or
  "snps,dw-pcie-ecam" (must be preceded by a more specific match)

Please refer to the binding document of "pci-host-ecam-generic" in the
file host-generic-pci.txt for a description of the remaining required
and optional properties.

Example:

	pcie1: pcie@7f000000 {
		compatible = "socionext,synquacer-pcie-ecam", "snps,dw-pcie-ecam";
		device_type = "pci";
		reg = <0x0 0x7f000000 0x0 0xf00000>;
		bus-range = <0x0 0xe>;
		#address-cells = <3>;
		#size-cells = <2>;
		ranges = <0x1000000 0x00 0x00010000 0x00 0x7ff00000 0x0 0x00010000>,
			 <0x2000000 0x00 0x70000000 0x00 0x70000000 0x0 0x0f000000>,
			 <0x3000000 0x3f 0x00000000 0x3f 0x00000000 0x1 0x00000000>;

		#interrupt-cells = <0x1>;
		interrupt-map-mask = <0x0 0x0 0x0 0x0>;
		interrupt-map = <0x0 0x0 0x0 0x0 &gic 0x0 0x0 0x0 182 0x4>;
		msi-map = <0x0 &its 0x0 0x10000>;
		dma-coherent;
	};
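For illustration, the "special config space accessors that filter accesses to device #1 and beyond on the first bus" behavior described in this binding can be sketched as a standalone C fragment. The function names and the shift-mode offset helper below are hypothetical, not the kernel's actual accessors:

```c
#include <stdbool.h>
#include <stdint.h>

/* ECAM "shift mode": 4 KB of config space per function, 8 functions per
 * device, 32 devices per bus, so offset = bus << 20 | dev << 15 | fn << 12. */
static uint32_t ecam_offset(uint8_t bus, uint8_t dev, uint8_t fn, uint16_t reg)
{
	return ((uint32_t)bus << 20) | ((uint32_t)dev << 15) |
	       ((uint32_t)fn << 12) | reg;
}

/* On a DesignWare RC, bus 0 hosts only the root port at device 0; with a
 * 64 KB minimum ATU window, accesses to device 1 and beyond on bus 0 would
 * alias device 0, so a filtering accessor must reject them rather than
 * forward them to the hardware. */
static bool dw_ecam_access_valid(uint8_t bus, uint8_t dev)
{
	if (bus == 0 && dev > 0)
		return false;
	return true;
}
```

A real accessor would return all-ones data for the rejected reads, which is what software expects for an absent device.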
Documentation/devicetree/bindings/pci/hisilicon-histb-pcie.txt (+68, new file)
···
HiSilicon STB PCIe host bridge DT description

The HiSilicon STB PCIe host controller is based on the DesignWare PCIe core.
It shares common functions with the DesignWare PCIe core driver and inherits
common properties defined in
Documentation/devicetree/bindings/pci/designware-pcie.txt.

Additional properties are described here:

Required properties
- compatible: Should be one of the following strings:
	"hisilicon,hi3798cv200-pcie"
- reg: Should contain sysctl, rc_dbi, config registers location and length.
- reg-names: Must include the following entries:
	"control": control registers of PCIe controller;
	"rc-dbi": configuration space of PCIe controller;
	"config": configuration transaction space of PCIe controller.
- bus-range: PCI bus numbers covered.
- interrupts: MSI interrupt.
- interrupt-names: Must include "msi" entries.
- clocks: List of phandle and clock specifier pairs as listed in clock-names
  property.
- clock-names: Must include the following entries:
	"aux": auxiliary gate clock;
	"pipe": pipe gate clock;
	"sys": sys gate clock;
	"bus": bus gate clock.
- resets: List of phandle and reset specifier pairs as listed in reset-names
  property.
- reset-names: Must include the following entries:
	"soft": soft reset;
	"sys": sys reset;
	"bus": bus reset.

Optional properties:
- reset-gpios: The gpio to generate PCIe PERST# assert and deassert signal.
- phys: List of phandle and phy mode specifier, should be 0.
- phy-names: Must be "phy".

Example:

	pcie@f9860000 {
		compatible = "hisilicon,hi3798cv200-pcie";
		reg = <0xf9860000 0x1000>,
		      <0xf0000000 0x2000>,
		      <0xf2000000 0x01000000>;
		reg-names = "control", "rc-dbi", "config";
		#address-cells = <3>;
		#size-cells = <2>;
		device_type = "pci";
		bus-range = <0 15>;
		num-lanes = <1>;
		ranges = <0x81000000 0 0 0xf4000000 0 0x00010000
			  0x82000000 0 0xf3000000 0xf3000000 0 0x01000000>;
		interrupts = <GIC_SPI 128 IRQ_TYPE_LEVEL_HIGH>;
		interrupt-names = "msi";
		#interrupt-cells = <1>;
		interrupt-map-mask = <0 0 0 0>;
		interrupt-map = <0 0 0 0 &gic GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>;
		clocks = <&crg PCIE_AUX_CLK>,
			 <&crg PCIE_PIPE_CLK>,
			 <&crg PCIE_SYS_CLK>,
			 <&crg PCIE_BUS_CLK>;
		clock-names = "aux", "pipe", "sys", "bus";
		resets = <&crg 0x18c 6>, <&crg 0x18c 5>, <&crg 0x18c 4>;
		reset-names = "soft", "sys", "bus";
		phys = <&combphy1 PHY_TYPE_PCIE>;
		phy-names = "phy";
	};
Documentation/devicetree/bindings/pci/layerscape-pci.txt (+1)
···
        "fsl,ls2088a-pcie"
        "fsl,ls1088a-pcie"
        "fsl,ls1046a-pcie"
 +      "fsl,ls1012a-pcie"
  - reg: base addresses and lengths of the PCIe controller register blocks.
  - interrupts: A list of interrupt outputs of the controller. Must contain an
    entry for each entry in the interrupt-names property.
Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt (+130 -4)
···
  NVIDIA Tegra PCIe controller

  Required properties:
 -- compatible: For Tegra20, must contain "nvidia,tegra20-pcie". For Tegra30,
 -  "nvidia,tegra30-pcie". For Tegra124, must contain "nvidia,tegra124-pcie".
 -  Otherwise, must contain "nvidia,<chip>-pcie", plus one of the above, where
 -  <chip> is tegra132 or tegra210.
 +- compatible: Must be:
 +  - "nvidia,tegra20-pcie": for Tegra20
 +  - "nvidia,tegra30-pcie": for Tegra30
 +  - "nvidia,tegra124-pcie": for Tegra124 and Tegra132
 +  - "nvidia,tegra210-pcie": for Tegra210
 +  - "nvidia,tegra186-pcie": for Tegra186
 +- power-domains: To ungate power partition by BPMP powergate driver. Must
 +  contain BPMP phandle and PCIe power partition ID. This is required only
 +  for Tegra186.
  - device_type: Must be "pci"
  - reg: A list of physical base address and length for each set of controller
    registers. Must contain an entry for each entry in the reg-names property.
···
  - hvdd-pex-pll-e-supply: High-voltage supply for PLLE (shared with USB3).
    Must supply 3.3 V.
  - vddio-pex-ctl-supply: Power supply for PCIe control I/O partition. Must
    supply 1.8 V.
 +
 +Power supplies for Tegra186:
 +- Required:
 +  - dvdd-pex-supply: Power supply for digital PCIe I/O. Must supply 1.05 V.
 +  - hvdd-pex-pll-supply: High-voltage supply for PLLE (shared with USB3). Must
 +    supply 1.8 V.
 +  - hvdd-pex-supply: High-voltage supply for PCIe I/O and PCIe output clocks.
 +    Must supply 1.8 V.
 +  - vddio-pexctl-aud-supply: Power supply for PCIe side band signals. Must
 +    supply 1.8 V.

  Root ports are defined as subnodes of the PCIe controller node.
···
  			phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-4}>;
  			phy-names = "pcie-0";
  			status = "okay";
 +		};
 +	};
 +
 +Tegra186:
 +---------
 +
 +SoC DTSI:
 +
 +	pcie@10003000 {
 +		compatible = "nvidia,tegra186-pcie";
 +		power-domains = <&bpmp TEGRA186_POWER_DOMAIN_PCX>;
 +		device_type = "pci";
 +		reg = <0x0 0x10003000 0x0 0x00000800   /* PADS registers */
 +		       0x0 0x10003800 0x0 0x00000800   /* AFI registers */
 +		       0x0 0x40000000 0x0 0x10000000>; /* configuration space */
 +		reg-names = "pads", "afi", "cs";
 +
 +		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */
 +			     <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */
 +		interrupt-names = "intr", "msi";
 +
 +		#interrupt-cells = <1>;
 +		interrupt-map-mask = <0 0 0 0>;
 +		interrupt-map = <0 0 0 0 &gic GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>;
 +
 +		bus-range = <0x00 0xff>;
 +		#address-cells = <3>;
 +		#size-cells = <2>;
 +
 +		ranges = <0x82000000 0 0x10000000 0x0 0x10000000 0 0x00001000   /* port 0 configuration space */
 +			  0x82000000 0 0x10001000 0x0 0x10001000 0 0x00001000   /* port 1 configuration space */
 +			  0x82000000 0 0x10004000 0x0 0x10004000 0 0x00001000   /* port 2 configuration space */
 +			  0x81000000 0 0x0        0x0 0x50000000 0 0x00010000   /* downstream I/O (64 KiB) */
 +			  0x82000000 0 0x50100000 0x0 0x50100000 0 0x07F00000   /* non-prefetchable memory (127 MiB) */
 +			  0xc2000000 0 0x58000000 0x0 0x58000000 0 0x28000000>; /* prefetchable memory (640 MiB) */
 +
 +		clocks = <&bpmp TEGRA186_CLK_AFI>,
 +			 <&bpmp TEGRA186_CLK_PCIE>,
 +			 <&bpmp TEGRA186_CLK_PLLE>;
 +		clock-names = "afi", "pex", "pll_e";
 +
 +		resets = <&bpmp TEGRA186_RESET_AFI>,
 +			 <&bpmp TEGRA186_RESET_PCIE>,
 +			 <&bpmp TEGRA186_RESET_PCIEXCLK>;
 +		reset-names = "afi", "pex", "pcie_x";
 +
 +		status = "disabled";
 +
 +		pci@1,0 {
 +			device_type = "pci";
 +			assigned-addresses = <0x82000800 0 0x10000000 0 0x1000>;
 +			reg = <0x000800 0 0 0 0>;
 +			status = "disabled";
 +
 +			#address-cells = <3>;
 +			#size-cells = <2>;
 +			ranges;
 +
 +			nvidia,num-lanes = <2>;
 +		};
 +
 +		pci@2,0 {
 +			device_type = "pci";
 +			assigned-addresses = <0x82001000 0 0x10001000 0 0x1000>;
 +			reg = <0x001000 0 0 0 0>;
 +			status = "disabled";
 +
 +			#address-cells = <3>;
 +			#size-cells = <2>;
 +			ranges;
 +
 +			nvidia,num-lanes = <1>;
 +		};
 +
 +		pci@3,0 {
 +			device_type = "pci";
 +			assigned-addresses = <0x82001800 0 0x10004000 0 0x1000>;
 +			reg = <0x001800 0 0 0 0>;
 +			status = "disabled";
 +
 +			#address-cells = <3>;
 +			#size-cells = <2>;
 +			ranges;
 +
 +			nvidia,num-lanes = <1>;
 +		};
 +	};
 +
 +Board DTS:
 +
 +	pcie@10003000 {
 +		status = "okay";
 +
 +		dvdd-pex-supply = <&vdd_pex>;
 +		hvdd-pex-pll-supply = <&vdd_1v8>;
 +		hvdd-pex-supply = <&vdd_1v8>;
 +		vddio-pexctl-aud-supply = <&vdd_1v8>;
 +
 +		pci@1,0 {
 +			nvidia,num-lanes = <4>;
 +			status = "okay";
 +		};
 +
 +		pci@2,0 {
 +			nvidia,num-lanes = <0>;
 +			status = "disabled";
 +		};
 +
 +		pci@3,0 {
 +			nvidia,num-lanes = <1>;
 +			status = "disabled";
  		};
  	};
Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt (+4 -6)
···
  		 0x0800 0 0 1 &gic 0 108 IRQ_TYPE_LEVEL_HIGH
  		 0x1000 0 0 2 &gic 0 108 IRQ_TYPE_LEVEL_HIGH>;

 -	pci@0,1 {
 +	usb@1,0 {
  		reg = <0x800 0 0 0 0>;
 -		device_type = "pci";
 -		phys = <&usbphy 0 0>;
 +		phys = <&usb0 0>;
  		phy-names = "usb";
  	};

 -	pci@0,2 {
 +	usb@2,0 {
  		reg = <0x1000 0 0 0 0>;
 -		device_type = "pci";
 -		phys = <&usbphy 0 0>;
 +		phys = <&usb0 0>;
  		phy-names = "usb";
  	};
  };
Documentation/devicetree/bindings/pci/v3-v360epc-pci.txt (+70 -9)
···
  This bridge is found in the ARM Integrator/AP (Application Platform)

 -Integrator-specific notes:
 -
 -- syscon: should contain a link to the syscon device node (since
 -  on the Integrator, some registers in the syscon are required to
 -  operate the V3).
 -
 -V360 EPC specific notes:
 -
 -- reg: should contain the base address of the V3 adapter.
 +Required properties:
 +- compatible: should be one of:
 +	"v3,v360epc-pci"
 +	"arm,integrator-ap-pci", "v3,v360epc-pci"
 +- reg: should contain two register areas:
 +	first the base address of the V3 host bridge controller, 64KB
 +	second the configuration area register space, 16MB
  - interrupts: should contain a reference to the V3 error interrupt
    as routed on the system.
 +- bus-range: see pci.txt
 +- ranges: this follows the standard PCI bindings in the IEEE Std
 +  1275-1994 (see pci.txt) with the following restriction:
 +  - The non-prefetchable and prefetchable memory windows must
 +    each be exactly 256MB (0x10000000) in size.
 +  - The prefetchable memory window must be immediately adjacent
 +    to the non-prefetchable memory window
 +- dma-ranges: three ranges for the inbound memory region. The ranges must
 +  be aligned to a 1MB boundary, and may be 1MB, 2MB, 4MB, 8MB, 16MB, 32MB,
 +  64MB, 128MB, 256MB, 512MB, 1GB or 2GB in size. The memory should be marked
 +  as pre-fetchable. Two ranges are supported by the hardware.
 +
 +Integrator-specific required properties:
 +- syscon: should contain a link to the syscon device node, since
 +  on the Integrator, some registers in the syscon are required to
 +  operate the V3 host bridge.
 +
 +Example:
 +
 +pci: pciv3@62000000 {
 +	compatible = "arm,integrator-ap-pci", "v3,v360epc-pci";
 +	#interrupt-cells = <1>;
 +	#size-cells = <2>;
 +	#address-cells = <3>;
 +	reg = <0x62000000 0x10000>, <0x61000000 0x01000000>;
 +	interrupt-parent = <&pic>;
 +	interrupts = <17>; /* Bus error IRQ */
 +	clocks = <&pciclk>;
 +	bus-range = <0x00 0xff>;
 +	ranges = <0x01000000 0 0x00000000   /* I/O space @00000000 */
 +		  0x60000000 0 0x01000000   /* 16 MiB @ LB 60000000 */
 +		  0x02000000 0 0x40000000   /* non-prefetchable memory @40000000 */
 +		  0x40000000 0 0x10000000   /* 256 MiB @ LB 40000000 1:1 */
 +		  0x42000000 0 0x50000000   /* prefetchable memory @50000000 */
 +		  0x50000000 0 0x10000000>; /* 256 MiB @ LB 50000000 1:1 */
 +	dma-ranges = <0x02000000 0 0x20000000   /* EBI memory space */
 +		      0x20000000 0 0x20000000   /* 512 MB @ LB 20000000 1:1 */
 +		      0x02000000 0 0x80000000   /* Core module alias memory */
 +		      0x80000000 0 0x40000000>; /* 1GB @ LB 80000000 */
 +	interrupt-map-mask = <0xf800 0 0 0x7>;
 +	interrupt-map = <
 +	/* IDSEL 9 */
 +	0x4800 0 0 1 &pic 13 /* INT A on slot 9 is irq 13 */
 +	0x4800 0 0 2 &pic 14 /* INT B on slot 9 is irq 14 */
 +	0x4800 0 0 3 &pic 15 /* INT C on slot 9 is irq 15 */
 +	0x4800 0 0 4 &pic 16 /* INT D on slot 9 is irq 16 */
 +	/* IDSEL 10 */
 +	0x5000 0 0 1 &pic 14 /* INT A on slot 10 is irq 14 */
 +	0x5000 0 0 2 &pic 15 /* INT B on slot 10 is irq 15 */
 +	0x5000 0 0 3 &pic 16 /* INT C on slot 10 is irq 16 */
 +	0x5000 0 0 4 &pic 13 /* INT D on slot 10 is irq 13 */
 +	/* IDSEL 11 */
 +	0x5800 0 0 1 &pic 15 /* INT A on slot 11 is irq 15 */
 +	0x5800 0 0 2 &pic 16 /* INT B on slot 11 is irq 16 */
 +	0x5800 0 0 3 &pic 13 /* INT C on slot 11 is irq 13 */
 +	0x5800 0 0 4 &pic 14 /* INT D on slot 11 is irq 14 */
 +	/* IDSEL 12 */
 +	0x6000 0 0 1 &pic 16 /* INT A on slot 12 is irq 16 */
 +	0x6000 0 0 2 &pic 13 /* INT B on slot 12 is irq 13 */
 +	0x6000 0 0 3 &pic 14 /* INT C on slot 12 is irq 14 */
 +	0x6000 0 0 4 &pic 15 /* INT D on slot 12 is irq 15 */
 +	>;
 +};
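The dma-ranges rules in this binding (inbound windows aligned to a 1MB boundary, power-of-two sizes from 1MB to 2GB) are easy to check programmatically. A minimal standalone sketch follows; the helper name is made up for this illustration and is not part of the driver:

```c
#include <stdbool.h>
#include <stdint.h>

#define MB (1ull << 20)

/* A V3 inbound window must start on a 1 MB boundary, and its size must be
 * a power of two between 1 MB and 2 GB inclusive. */
static bool v3_inbound_window_ok(uint64_t base, uint64_t size)
{
	if (base & (MB - 1))
		return false;                /* base not 1 MB aligned */
	if (size < MB || size > (2ull << 30))
		return false;                /* outside 1 MB .. 2 GB */
	return (size & (size - 1)) == 0;     /* power of two */
}
```

Applied to the example above, both dma-ranges entries (512 MB at 0x20000000 and 1 GB at 0x80000000) pass this check.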
MAINTAINERS (+15)
···
  F:	Documentation/devicetree/bindings/pci/pcie-kirin.txt
  F:	drivers/pci/dwc/pcie-kirin.c

 +PCIE DRIVER FOR HISILICON STB
 +M:	Jianguo Sun <sunjianguo1@huawei.com>
 +M:	Shawn Guo <shawn.guo@linaro.org>
 +L:	linux-pci@vger.kernel.org
 +S:	Maintained
 +F:	Documentation/devicetree/bindings/pci/hisilicon-histb-pcie.txt
 +F:	drivers/pci/dwc/pcie-histb.c
 +
  PCIE DRIVER FOR MEDIATEK
  M:	Ryder Lee <ryder.lee@mediatek.com>
  L:	linux-pci@vger.kernel.org
···
  S:	Maintained
  F:	Documentation/devicetree/bindings/pci/rockchip-pcie.txt
  F:	drivers/pci/host/pcie-rockchip.c
 +
 +PCI DRIVER FOR V3 SEMICONDUCTOR V360EPC
 +M:	Linus Walleij <linus.walleij@linaro.org>
 +L:	linux-pci@vger.kernel.org
 +S:	Maintained
 +F:	Documentation/devicetree/bindings/pci/v3-v360epc-pci.txt
 +F:	drivers/pci/host/pci-v3-semi.c

  PCIE DRIVER FOR ST SPEAR13XX
  M:	Pratyush Anand <pratyush.anand@gmail.com>
arch/alpha/include/asm/pci.h (-5)
···
  /*
   * The following structure is used to manage multiple PCI busses.
   */

 -struct pci_dev;
 -struct pci_bus;
 -struct resource;
  struct pci_iommu_arena;
  struct page;
···
  #define PCIBIOS_MIN_IO		alpha_mv.min_io_address
  #define PCIBIOS_MIN_MEM		alpha_mv.min_mem_address
 -
 -extern void pcibios_set_master(struct pci_dev *dev);

  /* IOMMU controls. */
arch/alpha/kernel/pci.c (+10 -1)
···
  subsys_initcall(pcibios_init);

  #ifdef ALPHA_RESTORE_SRM_SETUP
 +/* Store PCI device configuration left by SRM here. */
 +struct pdev_srm_saved_conf
 +{
 +	struct pdev_srm_saved_conf *next;
 +	struct pci_dev *dev;
 +};
 +
  static struct pdev_srm_saved_conf *srm_saved_configs;

 -void pdev_save_srm_config(struct pci_dev *dev)
 +static void pdev_save_srm_config(struct pci_dev *dev)
  {
  	struct pdev_srm_saved_conf *tmp;
  	static int printed = 0;
···
  		pci_restore_state(tmp->dev);
  	}
  }
 +#else
 +#define pdev_save_srm_config(dev)	do {} while (0)
  #endif

  void pcibios_fixup_bus(struct pci_bus *bus)
arch/alpha/kernel/pci_impl.h (-8)
···
  #endif

  #ifdef ALPHA_RESTORE_SRM_SETUP
 -/* Store PCI device configuration left by SRM here. */
 -struct pdev_srm_saved_conf
 -{
 -	struct pdev_srm_saved_conf *next;
 -	struct pci_dev *dev;
 -};
 -
  extern void pci_restore_srm_config(void);
  #else
 -#define pdev_save_srm_config(dev)	do {} while (0)
  #define pci_restore_srm_config()	do {} while (0)
  #endif
arch/arm64/boot/dts/freescale/fsl-ls1012a.dtsi (+31)
···
  			dr_mode = "host";
  			phy_type = "ulpi";
  		};
 +
 +		msi: msi-controller1@1572000 {
 +			compatible = "fsl,ls1012a-msi";
 +			reg = <0x0 0x1572000 0x0 0x8>;
 +			msi-controller;
 +			interrupts = <0 126 IRQ_TYPE_LEVEL_HIGH>;
 +		};
 +
 +		pcie@3400000 {
 +			compatible = "fsl,ls1012a-pcie", "snps,dw-pcie";
 +			reg = <0x00 0x03400000 0x0 0x00100000   /* controller registers */
 +			       0x40 0x00000000 0x0 0x00002000>; /* configuration space */
 +			reg-names = "regs", "config";
 +			interrupts = <0 118 0x4>, /* controller interrupt */
 +				     <0 117 0x4>; /* PME interrupt */
 +			interrupt-names = "aer", "pme";
 +			#address-cells = <3>;
 +			#size-cells = <2>;
 +			device_type = "pci";
 +			num-lanes = <4>;
 +			bus-range = <0x0 0xff>;
 +			ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000   /* downstream I/O */
 +				  0x82000000 0x0 0x40000000 0x40 0x40000000 0x0 0x40000000>; /* non-prefetchable memory */
 +			msi-parent = <&msi>;
 +			#interrupt-cells = <1>;
 +			interrupt-map-mask = <0 0 0 7>;
 +			interrupt-map = <0000 0 0 1 &gic 0 110 IRQ_TYPE_LEVEL_HIGH>,
 +					<0000 0 0 2 &gic 0 111 IRQ_TYPE_LEVEL_HIGH>,
 +					<0000 0 0 3 &gic 0 112 IRQ_TYPE_LEVEL_HIGH>,
 +					<0000 0 0 4 &gic 0 113 IRQ_TYPE_LEVEL_HIGH>;
 +		};
  	};
  };
arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi (+75)
···
  				     <GIC_SPI 157 IRQ_TYPE_LEVEL_HIGH>;
  		};

 +		pcie@3400000 {
 +			compatible = "fsl,ls1046a-pcie", "snps,dw-pcie";
 +			reg = <0x00 0x03400000 0x0 0x00100000   /* controller registers */
 +			       0x40 0x00000000 0x0 0x00002000>; /* configuration space */
 +			reg-names = "regs", "config";
 +			interrupts = <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */
 +				     <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>; /* PME interrupt */
 +			interrupt-names = "aer", "pme";
 +			#address-cells = <3>;
 +			#size-cells = <2>;
 +			device_type = "pci";
 +			dma-coherent;
 +			num-lanes = <4>;
 +			bus-range = <0x0 0xff>;
 +			ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000   /* downstream I/O */
 +				  0x82000000 0x0 0x40000000 0x40 0x40000000 0x0 0x40000000>; /* non-prefetchable memory */
 +			msi-parent = <&msi1>, <&msi2>, <&msi3>;
 +			#interrupt-cells = <1>;
 +			interrupt-map-mask = <0 0 0 7>;
 +			interrupt-map = <0000 0 0 1 &gic GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
 +					<0000 0 0 2 &gic GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
 +					<0000 0 0 3 &gic GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
 +					<0000 0 0 4 &gic GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>;
 +		};
 +
 +		pcie@3500000 {
 +			compatible = "fsl,ls1046a-pcie", "snps,dw-pcie";
 +			reg = <0x00 0x03500000 0x0 0x00100000   /* controller registers */
 +			       0x48 0x00000000 0x0 0x00002000>; /* configuration space */
 +			reg-names = "regs", "config";
 +			interrupts = <GIC_SPI 128 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */
 +				     <GIC_SPI 127 IRQ_TYPE_LEVEL_HIGH>; /* PME interrupt */
 +			interrupt-names = "aer", "pme";
 +			#address-cells = <3>;
 +			#size-cells = <2>;
 +			device_type = "pci";
 +			dma-coherent;
 +			num-lanes = <2>;
 +			bus-range = <0x0 0xff>;
 +			ranges = <0x81000000 0x0 0x00000000 0x48 0x00010000 0x0 0x00010000   /* downstream I/O */
 +				  0x82000000 0x0 0x40000000 0x48 0x40000000 0x0 0x40000000>; /* non-prefetchable memory */
 +			msi-parent = <&msi2>, <&msi3>, <&msi1>;
 +			#interrupt-cells = <1>;
 +			interrupt-map-mask = <0 0 0 7>;
 +			interrupt-map = <0000 0 0 1 &gic GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>,
 +					<0000 0 0 2 &gic GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>,
 +					<0000 0 0 3 &gic GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>,
 +					<0000 0 0 4 &gic GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>;
 +		};
 +
 +		pcie@3600000 {
 +			compatible = "fsl,ls1046a-pcie", "snps,dw-pcie";
 +			reg = <0x00 0x03600000 0x0 0x00100000   /* controller registers */
 +			       0x50 0x00000000 0x0 0x00002000>; /* configuration space */
 +			reg-names = "regs", "config";
 +			interrupts = <GIC_SPI 162 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */
 +				     <GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>; /* PME interrupt */
 +			interrupt-names = "aer", "pme";
 +			#address-cells = <3>;
 +			#size-cells = <2>;
 +			device_type = "pci";
 +			dma-coherent;
 +			num-lanes = <2>;
 +			bus-range = <0x0 0xff>;
 +			ranges = <0x81000000 0x0 0x00000000 0x50 0x00010000 0x0 0x00010000   /* downstream I/O */
 +				  0x82000000 0x0 0x40000000 0x50 0x40000000 0x0 0x40000000>; /* non-prefetchable memory */
 +			msi-parent = <&msi3>, <&msi1>, <&msi2>;
 +			#interrupt-cells = <1>;
 +			interrupt-map-mask = <0 0 0 7>;
 +			interrupt-map = <0000 0 0 1 &gic GIC_SPI 154 IRQ_TYPE_LEVEL_HIGH>,
 +					<0000 0 0 2 &gic GIC_SPI 154 IRQ_TYPE_LEVEL_HIGH>,
 +					<0000 0 0 3 &gic GIC_SPI 154 IRQ_TYPE_LEVEL_HIGH>,
 +					<0000 0 0 4 &gic GIC_SPI 154 IRQ_TYPE_LEVEL_HIGH>;
 +		};
 +
  	};

  	reserved-memory {
arch/cris/include/asm/pci.h (-9)
···
  #define PCIBIOS_MIN_CARDBUS_IO	0x4000

 -void pcibios_config_init(void);
 -struct pci_bus *pcibios_scan_root(int bus);
 -
 -void pcibios_set_master(struct pci_dev *dev);
 -struct irq_routing_table *pcibios_get_irq_routing_table(void);
 -int pcibios_set_irq_routing(struct pci_dev *dev, int pin, int irq);
 -
  /* Dynamic DMA mapping stuff.
   * i386 has everything mapped statically.
   */
···
  #include <linux/scatterlist.h>
  #include <linux/string.h>
  #include <asm/io.h>
 -
 -struct pci_dev;

  /* The PCI address space does equal the physical memory
   * address space. The networking and block device layers use
arch/frv/include/asm/pci.h (-4)
···
  #include <linux/scatterlist.h>
  #include <asm-generic/pci.h>

 -struct pci_dev;
 -
  #define pcibios_assign_all_busses()	0
 -
 -extern void pcibios_set_master(struct pci_dev *dev);

  #ifdef CONFIG_MMU
  extern void *consistent_alloc(gfp_t gfp, size_t size, dma_addr_t *dma_handle);
arch/ia64/include/asm/pci.h (-4)
···
  #define PCIBIOS_MIN_IO		0x1000
  #define PCIBIOS_MIN_MEM		0x10000000

 -void pcibios_config_init(void);
 -
 -struct pci_dev;
 -
  /*
   * PCI_DMA_BUS_IS_PHYS should be set to 1 if there is _necessarily_ a direct
   * correspondence between device bus addresses and CPU physical addresses.
arch/mips/include/asm/pci.h (-4)
···
  #define PCIBIOS_MIN_CARDBUS_IO	0x4000

 -extern void pcibios_set_master(struct pci_dev *dev);
 -
  #define HAVE_PCI_MMAP
  #define ARCH_GENERIC_PCI_MMAP_RESOURCE
  #define HAVE_ARCH_PCI_RESOURCE_TO_USER
···
  #include <linux/scatterlist.h>
  #include <linux/string.h>
  #include <asm/io.h>
 -
 -struct pci_dev;

  /*
   * The PCI address space does equal the physical memory address space.
arch/mn10300/include/asm/pci.h (-4)
···
  #define PCIBIOS_MIN_IO		0xBE000004
  #define PCIBIOS_MIN_MEM		0xB8000000

 -void pcibios_set_master(struct pci_dev *dev);
 -
  /* Dynamic DMA mapping stuff.
   * i386 has everything mapped statically.
   */
···
  #include <linux/scatterlist.h>
  #include <linux/string.h>
  #include <asm/io.h>
 -
 -struct pci_dev;

  /* The PCI address space does equal the physical memory
   * address space. The networking and block device layers use
arch/mn10300/unit-asb2305/pci-asb2305.h (-3)
···
  extern struct pci_ops *pci_root_ops;

 -extern struct irq_routing_table *pcibios_get_irq_routing_table(void);
 -extern int pcibios_set_irq_routing(struct pci_dev *dev, int pin, int irq);
 -
  /* pci-irq.c */

  struct irq_info {
arch/parisc/include/asm/pci.h (-8)
···
  #endif /* !CONFIG_64BIT */

  /*
 -** KLUGE: linux/pci.h include asm/pci.h BEFORE declaring struct pci_bus
 -** (This eliminates some of the warnings).
 -*/
 -struct pci_bus;
 -struct pci_dev;
 -
 -/*
   * If the PCI device's view of memory is the same as the CPU's view of memory,
   * PCI_DMA_BUS_IS_PHYS is true. The networking and block device layers use
   * this boolean for bounce buffer decisions.
···
  #ifdef CONFIG_PCI
  extern void pcibios_register_hba(struct pci_hba_data *);
 -extern void pcibios_set_master(struct pci_dev *);
  #else
  static inline void pcibios_register_hba(struct pci_hba_data *x)
  {
arch/powerpc/include/asm/pci.h (-2)
···
  #define PCIBIOS_MIN_IO		0x1000
  #define PCIBIOS_MIN_MEM		0x10000000

 -struct pci_dev;
 -
  /* Values for the `which' argument to sys_pciconfig_iobase syscall.  */
  #define IOBASE_BRIDGE_NUMBER	0
  #define IOBASE_MEMORY		1
arch/powerpc/kernel/eeh_driver.c (+2 -2)
···
  	}

  #ifdef CONFIG_PPC_POWERNV
 -	pci_iov_add_virtfn(edev->physfn, pdn->vf_index, 0);
 +	pci_iov_add_virtfn(edev->physfn, pdn->vf_index);
  #endif
  	return NULL;
  }
···
  #ifdef CONFIG_PPC_POWERNV
  	struct pci_dn *pdn = eeh_dev_to_pdn(edev);

 -	pci_iov_remove_virtfn(edev->physfn, pdn->vf_index, 0);
 +	pci_iov_remove_virtfn(edev->physfn, pdn->vf_index);
  	edev->pdev = NULL;

  	/*
arch/sh/include/asm/pci.h (-4)
···
  extern unsigned long PCIBIOS_MIN_IO, PCIBIOS_MIN_MEM;

 -struct pci_dev;
 -
  #define HAVE_PCI_MMAP
  #define ARCH_GENERIC_PCI_MMAP_RESOURCE
 -
 -extern void pcibios_set_master(struct pci_dev *dev);

  /* Dynamic DMA mapping stuff.
   * SuperH has everything mapped statically like x86.
arch/sparc/include/asm/pci_32.h (-2)
···
   */
  #define PCI_DMA_BUS_IS_PHYS	(0)

 -struct pci_dev;
 -
  #endif /* __KERNEL__ */

  #ifndef CONFIG_LEON_PCI
arch/x86/include/asm/pci.h (-2)
···
  #define PCIBIOS_MIN_CARDBUS_IO	0x4000

  extern int pcibios_enabled;
 -void pcibios_config_init(void);
  void pcibios_scan_root(int bus);

 -void pcibios_set_master(struct pci_dev *dev);
  struct irq_routing_table *pcibios_get_irq_routing_table(void);
  int pcibios_set_irq_routing(struct pci_dev *dev, int pin, int irq);
+85
arch/x86/pci/fixup.c
··· 636 636 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2031, quirk_no_aersid); 637 637 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2032, quirk_no_aersid); 638 638 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2033, quirk_no_aersid); 639 + 640 + #ifdef CONFIG_PHYS_ADDR_T_64BIT 641 + 642 + #define AMD_141b_MMIO_BASE(x) (0x80 + (x) * 0x8) 643 + #define AMD_141b_MMIO_BASE_RE_MASK BIT(0) 644 + #define AMD_141b_MMIO_BASE_WE_MASK BIT(1) 645 + #define AMD_141b_MMIO_BASE_MMIOBASE_MASK GENMASK(31,8) 646 + 647 + #define AMD_141b_MMIO_LIMIT(x) (0x84 + (x) * 0x8) 648 + #define AMD_141b_MMIO_LIMIT_MMIOLIMIT_MASK GENMASK(31,8) 649 + 650 + #define AMD_141b_MMIO_HIGH(x) (0x180 + (x) * 0x4) 651 + #define AMD_141b_MMIO_HIGH_MMIOBASE_MASK GENMASK(7,0) 652 + #define AMD_141b_MMIO_HIGH_MMIOLIMIT_SHIFT 16 653 + #define AMD_141b_MMIO_HIGH_MMIOLIMIT_MASK GENMASK(23,16) 654 + 655 + /* 656 + * The PCI Firmware Spec, rev 3.2, notes that ACPI should optionally allow 657 + * configuring host bridge windows using the _PRS and _SRS methods. 658 + * 659 + * But this is rarely implemented, so we manually enable a large 64bit BAR for 660 + * PCIe device on AMD Family 15h (Models 00h-1fh, 30h-3fh, 60h-7fh) Processors 661 + * here. 662 + */ 663 + static void pci_amd_enable_64bit_bar(struct pci_dev *dev) 664 + { 665 + unsigned i; 666 + u32 base, limit, high; 667 + struct resource *res, *conflict; 668 + 669 + for (i = 0; i < 8; i++) { 670 + pci_read_config_dword(dev, AMD_141b_MMIO_BASE(i), &base); 671 + pci_read_config_dword(dev, AMD_141b_MMIO_HIGH(i), &high); 672 + 673 + /* Is this slot free? */ 674 + if (!(base & (AMD_141b_MMIO_BASE_RE_MASK | 675 + AMD_141b_MMIO_BASE_WE_MASK))) 676 + break; 677 + 678 + base >>= 8; 679 + base |= high << 24; 680 + 681 + /* Abort if a slot already configures a 64bit BAR. 
*/ 682 + if (base > 0x10000) 683 + return; 684 + } 685 + if (i == 8) 686 + return; 687 + 688 + res = kzalloc(sizeof(*res), GFP_KERNEL); 689 + if (!res) 690 + return; 691 + 692 + res->name = "PCI Bus 0000:00"; 693 + res->flags = IORESOURCE_PREFETCH | IORESOURCE_MEM | 694 + IORESOURCE_MEM_64 | IORESOURCE_WINDOW; 695 + res->start = 0x100000000ull; 696 + res->end = 0xfd00000000ull - 1; 697 + 698 + /* Just grab the free area behind system memory for this */ 699 + while ((conflict = request_resource_conflict(&iomem_resource, res))) 700 + res->start = conflict->end + 1; 701 + 702 + dev_info(&dev->dev, "adding root bus resource %pR\n", res); 703 + 704 + base = ((res->start >> 8) & AMD_141b_MMIO_BASE_MMIOBASE_MASK) | 705 + AMD_141b_MMIO_BASE_RE_MASK | AMD_141b_MMIO_BASE_WE_MASK; 706 + limit = ((res->end + 1) >> 8) & AMD_141b_MMIO_LIMIT_MMIOLIMIT_MASK; 707 + high = ((res->start >> 40) & AMD_141b_MMIO_HIGH_MMIOBASE_MASK) | 708 + ((((res->end + 1) >> 40) << AMD_141b_MMIO_HIGH_MMIOLIMIT_SHIFT) 709 + & AMD_141b_MMIO_HIGH_MMIOLIMIT_MASK); 710 + 711 + pci_write_config_dword(dev, AMD_141b_MMIO_HIGH(i), high); 712 + pci_write_config_dword(dev, AMD_141b_MMIO_LIMIT(i), limit); 713 + pci_write_config_dword(dev, AMD_141b_MMIO_BASE(i), base); 714 + 715 + pci_bus_add_resource(dev->bus, res, 0); 716 + } 717 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1401, pci_amd_enable_64bit_bar); 718 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x141b, pci_amd_enable_64bit_bar); 719 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1571, pci_amd_enable_64bit_bar); 720 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x15b1, pci_amd_enable_64bit_bar); 721 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1601, pci_amd_enable_64bit_bar); 722 + 723 + #endif
+1 -1
arch/x86/pci/intel_mid_pci.c
··· 280 280 } 281 281 } 282 282 283 - static struct pci_ops intel_mid_pci_ops = { 283 + static const struct pci_ops intel_mid_pci_ops __initconst = { 284 284 .read = pci_read, 285 285 .write = pci_write, 286 286 };
-2
arch/xtensa/include/asm/pci.h
··· 37 37 #include <linux/string.h> 38 38 #include <asm/io.h> 39 39 40 - struct pci_dev; 41 - 42 40 /* The PCI address space does equal the physical memory address space. 43 41 * The networking and block device layers use this boolean for bounce buffer 44 42 * decisions.
+1
drivers/irqchip/irq-ls-scfg-msi.c
··· 316 316 { .compatible = "fsl,1s1021a-msi", .data = &ls1021_msi_cfg}, 317 317 { .compatible = "fsl,1s1043a-msi", .data = &ls1021_msi_cfg}, 318 318 319 + { .compatible = "fsl,ls1012a-msi", .data = &ls1021_msi_cfg }, 319 320 { .compatible = "fsl,ls1021a-msi", .data = &ls1021_msi_cfg }, 320 321 { .compatible = "fsl,ls1043a-msi", .data = &ls1021_msi_cfg }, 321 322 { .compatible = "fsl,ls1043a-v1.1-msi", .data = &ls1043_v1_1_msi_cfg },
+31 -2
drivers/misc/pci_endpoint_test.c
··· 92 92 void __iomem *bar[6]; 93 93 struct completion irq_raised; 94 94 int last_irq; 95 + int num_irqs; 95 96 /* mutex to protect the ioctls */ 96 97 struct mutex mutex; 97 98 struct miscdevice miscdev; ··· 227 226 u32 src_crc32; 228 227 u32 dst_crc32; 229 228 229 + if (size > SIZE_MAX - alignment) 230 + goto err; 231 + 230 232 orig_src_addr = dma_alloc_coherent(dev, size + alignment, 231 233 &orig_src_phys_addr, GFP_KERNEL); 232 234 if (!orig_src_addr) { ··· 315 311 size_t alignment = test->alignment; 316 312 u32 crc32; 317 313 314 + if (size > SIZE_MAX - alignment) 315 + goto err; 316 + 318 317 orig_addr = dma_alloc_coherent(dev, size + alignment, &orig_phys_addr, 319 318 GFP_KERNEL); 320 319 if (!orig_addr) { ··· 375 368 size_t offset; 376 369 size_t alignment = test->alignment; 377 370 u32 crc32; 371 + 372 + if (size > SIZE_MAX - alignment) 373 + goto err; 378 374 379 375 orig_addr = dma_alloc_coherent(dev, size + alignment, &orig_phys_addr, 380 376 GFP_KERNEL); ··· 514 504 irq = pci_alloc_irq_vectors(pdev, 1, 32, PCI_IRQ_MSI); 515 505 if (irq < 0) 516 506 dev_err(dev, "failed to get MSI interrupts\n"); 507 + test->num_irqs = irq; 517 508 } 518 509 519 510 err = devm_request_irq(dev, pdev->irq, pci_endpoint_test_irqhandler, ··· 544 533 545 534 test->base = test->bar[test_reg_bar]; 546 535 if (!test->base) { 536 + err = -ENOMEM; 547 537 dev_err(dev, "Cannot perform PCI test without BAR%d\n", 548 538 test_reg_bar); 549 539 goto err_iounmap; ··· 554 542 555 543 id = ida_simple_get(&pci_endpoint_test_ida, 0, 0, GFP_KERNEL); 556 544 if (id < 0) { 545 + err = id; 557 546 dev_err(dev, "unable to get id\n"); 558 547 goto err_iounmap; 559 548 } ··· 562 549 snprintf(name, sizeof(name), DRV_MODULE_NAME ".%d", id); 563 550 misc_device = &test->miscdev; 564 551 misc_device->minor = MISC_DYNAMIC_MINOR; 565 - misc_device->name = name; 552 + misc_device->name = kstrdup(name, GFP_KERNEL); 553 + if (!misc_device->name) { 554 + err = -ENOMEM; 555 + goto err_ida_remove; 556 + } 
566 557 misc_device->fops = &pci_endpoint_test_fops, 567 558 568 559 err = misc_register(misc_device); 569 560 if (err) { 570 561 dev_err(dev, "failed to register device\n"); 571 - goto err_ida_remove; 562 + goto err_kfree_name; 572 563 } 573 564 574 565 return 0; 566 + 567 + err_kfree_name: 568 + kfree(misc_device->name); 575 569 576 570 err_ida_remove: 577 571 ida_simple_remove(&pci_endpoint_test_ida, id); ··· 588 568 if (test->bar[bar]) 589 569 pci_iounmap(pdev, test->bar[bar]); 590 570 } 571 + 572 + for (i = 0; i < irq; i++) 573 + devm_free_irq(dev, pdev->irq + i, test); 591 574 592 575 err_disable_msi: 593 576 pci_disable_msi(pdev); ··· 605 582 static void pci_endpoint_test_remove(struct pci_dev *pdev) 606 583 { 607 584 int id; 585 + int i; 608 586 enum pci_barno bar; 609 587 struct pci_endpoint_test *test = pci_get_drvdata(pdev); 610 588 struct miscdevice *misc_device = &test->miscdev; 611 589 612 590 if (sscanf(misc_device->name, DRV_MODULE_NAME ".%d", &id) != 1) 613 591 return; 592 + if (id < 0) 593 + return; 614 594 615 595 misc_deregister(&test->miscdev); 596 + kfree(misc_device->name); 616 597 ida_simple_remove(&pci_endpoint_test_ida, id); 617 598 for (bar = BAR_0; bar <= BAR_5; bar++) { 618 599 if (test->bar[bar]) 619 600 pci_iounmap(pdev, test->bar[bar]); 620 601 } 602 + for (i = 0; i < test->num_irqs; i++) 603 + devm_free_irq(&pdev->dev, pdev->irq + i, test); 621 604 pci_disable_msi(pdev); 622 605 pci_release_regions(pdev); 623 606 pci_disable_device(pdev);
+16 -3
drivers/of/address.c
··· 232 232 } 233 233 EXPORT_SYMBOL_GPL(of_pci_address_to_resource); 234 234 235 - int of_pci_range_parser_init(struct of_pci_range_parser *parser, 236 - struct device_node *node) 235 + static int parser_init(struct of_pci_range_parser *parser, 236 + struct device_node *node, const char *name) 237 237 { 238 238 const int na = 3, ns = 2; 239 239 int rlen; ··· 242 242 parser->pna = of_n_addr_cells(node); 243 243 parser->np = parser->pna + na + ns; 244 244 245 - parser->range = of_get_property(node, "ranges", &rlen); 245 + parser->range = of_get_property(node, name, &rlen); 246 246 if (parser->range == NULL) 247 247 return -ENOENT; 248 248 ··· 250 250 251 251 return 0; 252 252 } 253 + 254 + int of_pci_range_parser_init(struct of_pci_range_parser *parser, 255 + struct device_node *node) 256 + { 257 + return parser_init(parser, node, "ranges"); 258 + } 253 259 EXPORT_SYMBOL_GPL(of_pci_range_parser_init); 260 + 261 + int of_pci_dma_range_parser_init(struct of_pci_range_parser *parser, 262 + struct device_node *node) 263 + { 264 + return parser_init(parser, node, "dma-ranges"); 265 + } 266 + EXPORT_SYMBOL_GPL(of_pci_dma_range_parser_init); 254 267 255 268 struct of_pci_range *of_pci_range_parser_one(struct of_pci_range_parser *parser, 256 269 struct of_pci_range *range)
+13 -4
drivers/pci/Kconfig
··· 29 29 depends on PCI_MSI 30 30 select GENERIC_MSI_IRQ_DOMAIN 31 31 32 + config PCI_QUIRKS 33 + default y 34 + bool "Enable PCI quirk workarounds" if EXPERT 35 + depends on PCI 36 + help 37 + This enables workarounds for various PCI chipset bugs/quirks. 38 + Disable this only if your target machine is unaffected by PCI 39 + quirks. 40 + 32 41 config PCI_DEBUG 33 42 bool "PCI Debugging" 34 43 depends on PCI && DEBUG_KERNEL ··· 51 42 config PCI_REALLOC_ENABLE_AUTO 52 43 bool "Enable PCI resource re-allocation detection" 53 44 depends on PCI 45 + depends on PCI_IOV 54 46 help 55 47 Say Y here if you want the PCI core to detect if PCI resource 56 48 re-allocation needs to be enabled. You can always use pci=realloc=on 57 - or pci=realloc=off to override it. Note this feature is a no-op 58 - unless PCI_IOV support is also enabled; in that case it will 59 - automatically re-allocate PCI resources if SR-IOV BARs have not 60 - been allocated by the BIOS. 49 + or pci=realloc=off to override it. It will automatically 50 + re-allocate PCI resources if SR-IOV BARs have not been allocated by 51 + the BIOS. 61 52 62 53 When in doubt, say N. 63 54
-3
drivers/pci/Makefile
··· 17 17 18 18 # Build the PCI Hotplug drivers if we were asked to 19 19 obj-$(CONFIG_HOTPLUG_PCI) += hotplug/ 20 - ifdef CONFIG_HOTPLUG_PCI 21 - obj-y += hotplug-pci.o 22 - endif 23 20 24 21 # Build the PCI MSI interrupt support 25 22 obj-$(CONFIG_PCI_MSI) += msi.o
+10
drivers/pci/dwc/Kconfig
··· 169 169 Say Y here if you want PCIe controller support 170 170 on HiSilicon Kirin series SoCs. 171 171 172 + config PCIE_HISI_STB 173 + bool "HiSilicon STB SoCs PCIe controllers" 174 + depends on ARCH_HISI 175 + depends on PCI 176 + depends on PCI_MSI_IRQ_DOMAIN 177 + select PCIEPORTBUS 178 + select PCIE_DW_HOST 179 + help 180 + Say Y here if you want PCIe controller support on HiSilicon STB SoCs. 181 + 182 + endmenu
+1
drivers/pci/dwc/Makefile
··· 15 15 obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o 16 16 obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o 17 17 obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o 18 + obj-$(CONFIG_PCIE_HISI_STB) += pcie-histb.o 18 19 19 20 # The following drivers are for devices that use the generic ACPI 20 21 # pci_root.c driver but don't support standard ECAM config access.
+17
drivers/pci/dwc/pci-dra7xx.c
··· 810 810 } 811 811 #endif 812 812 813 + void dra7xx_pcie_shutdown(struct platform_device *pdev) 814 + { 815 + struct device *dev = &pdev->dev; 816 + struct dra7xx_pcie *dra7xx = dev_get_drvdata(dev); 817 + int ret; 818 + 819 + dra7xx_pcie_stop_link(dra7xx->pci); 820 + 821 + ret = pm_runtime_put_sync(dev); 822 + if (ret < 0) 823 + dev_dbg(dev, "pm_runtime_put_sync failed\n"); 824 + 825 + pm_runtime_disable(dev); 826 + dra7xx_pcie_disable_phy(dra7xx); 827 + } 828 + 813 829 static const struct dev_pm_ops dra7xx_pcie_pm_ops = { 814 830 SET_SYSTEM_SLEEP_PM_OPS(dra7xx_pcie_suspend, dra7xx_pcie_resume) 815 831 SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(dra7xx_pcie_suspend_noirq, ··· 839 823 .suppress_bind_attrs = true, 840 824 .pm = &dra7xx_pcie_pm_ops, 841 825 }, 826 + .shutdown = dra7xx_pcie_shutdown, 842 827 }; 843 828 builtin_platform_driver_probe(dra7xx_pcie_driver, dra7xx_pcie_probe);
+12
drivers/pci/dwc/pci-layerscape.c
··· 33 33 34 34 /* PEX Internal Configuration Registers */ 35 35 #define PCIE_STRFMR1 0x71c /* Symbol Timer & Filter Mask Register1 */ 36 + #define PCIE_ABSERR 0x8d0 /* Bridge Slave Error Response Register */ 37 + #define PCIE_ABSERR_SETTING 0x9401 /* Forward error of non-posted request */ 36 38 37 39 #define PCIE_IATU_NUM 6 38 40 ··· 126 124 return 1; 127 125 } 128 126 127 + /* Forward error response of outbound non-posted requests */ 128 + static void ls_pcie_fix_error_response(struct ls_pcie *pcie) 129 + { 130 + struct dw_pcie *pci = pcie->pci; 131 + 132 + iowrite32(PCIE_ABSERR_SETTING, pci->dbi_base + PCIE_ABSERR); 133 + } 134 + 129 135 static int ls_pcie_host_init(struct pcie_port *pp) 130 136 { 131 137 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); ··· 145 135 * dw_pcie_setup_rc() will reconfigure the outbound windows. 146 136 */ 147 137 ls_pcie_disable_outbound_atus(pcie); 138 + ls_pcie_fix_error_response(pcie); 148 139 149 140 dw_pcie_dbi_ro_wr_en(pci); 150 141 ls_pcie_clear_multifunction(pcie); ··· 264 253 }; 265 254 266 255 static const struct of_device_id ls_pcie_of_match[] = { 256 + { .compatible = "fsl,ls1012a-pcie", .data = &ls1046_drvdata }, 267 257 { .compatible = "fsl,ls1021a-pcie", .data = &ls1021_drvdata }, 268 258 { .compatible = "fsl,ls1043a-pcie", .data = &ls1043_drvdata }, 269 259 { .compatible = "fsl,ls1046a-pcie", .data = &ls1046_drvdata },
+470
drivers/pci/dwc/pcie-histb.c
··· 1 + /* 2 + * PCIe host controller driver for HiSilicon STB SoCs 3 + * 4 + * Copyright (C) 2016-2017 HiSilicon Co., Ltd. http://www.hisilicon.com 5 + * 6 + * Authors: Ruqiang Ju <juruqiang@hisilicon.com> 7 + * Jianguo Sun <sunjianguo1@huawei.com> 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License version 2 as 11 + * published by the Free Software Foundation. 12 + */ 13 + 14 + #include <linux/clk.h> 15 + #include <linux/delay.h> 16 + #include <linux/interrupt.h> 17 + #include <linux/kernel.h> 18 + #include <linux/module.h> 19 + #include <linux/of.h> 20 + #include <linux/of_gpio.h> 21 + #include <linux/pci.h> 22 + #include <linux/phy/phy.h> 23 + #include <linux/platform_device.h> 24 + #include <linux/resource.h> 25 + #include <linux/reset.h> 26 + 27 + #include "pcie-designware.h" 28 + 29 + #define to_histb_pcie(x) dev_get_drvdata((x)->dev) 30 + 31 + #define PCIE_SYS_CTRL0 0x0000 32 + #define PCIE_SYS_CTRL1 0x0004 33 + #define PCIE_SYS_CTRL7 0x001C 34 + #define PCIE_SYS_CTRL13 0x0034 35 + #define PCIE_SYS_CTRL15 0x003C 36 + #define PCIE_SYS_CTRL16 0x0040 37 + #define PCIE_SYS_CTRL17 0x0044 38 + 39 + #define PCIE_SYS_STAT0 0x0100 40 + #define PCIE_SYS_STAT4 0x0110 41 + 42 + #define PCIE_RDLH_LINK_UP BIT(5) 43 + #define PCIE_XMLH_LINK_UP BIT(15) 44 + #define PCIE_ELBI_SLV_DBI_ENABLE BIT(21) 45 + #define PCIE_APP_LTSSM_ENABLE BIT(11) 46 + 47 + #define PCIE_DEVICE_TYPE_MASK GENMASK(31, 28) 48 + #define PCIE_WM_EP 0 49 + #define PCIE_WM_LEGACY BIT(1) 50 + #define PCIE_WM_RC BIT(30) 51 + 52 + #define PCIE_LTSSM_STATE_MASK GENMASK(5, 0) 53 + #define PCIE_LTSSM_STATE_ACTIVE 0x11 54 + 55 + struct histb_pcie { 56 + struct dw_pcie *pci; 57 + struct clk *aux_clk; 58 + struct clk *pipe_clk; 59 + struct clk *sys_clk; 60 + struct clk *bus_clk; 61 + struct phy *phy; 62 + struct reset_control *soft_reset; 63 + struct reset_control *sys_reset; 64 + struct reset_control *bus_reset; 65 + void 
__iomem *ctrl; 66 + int reset_gpio; 67 + }; 68 + 69 + static u32 histb_pcie_readl(struct histb_pcie *histb_pcie, u32 reg) 70 + { 71 + return readl(histb_pcie->ctrl + reg); 72 + } 73 + 74 + static void histb_pcie_writel(struct histb_pcie *histb_pcie, u32 reg, u32 val) 75 + { 76 + writel(val, histb_pcie->ctrl + reg); 77 + } 78 + 79 + static void histb_pcie_dbi_w_mode(struct pcie_port *pp, bool enable) 80 + { 81 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 82 + struct histb_pcie *hipcie = to_histb_pcie(pci); 83 + u32 val; 84 + 85 + val = histb_pcie_readl(hipcie, PCIE_SYS_CTRL0); 86 + if (enable) 87 + val |= PCIE_ELBI_SLV_DBI_ENABLE; 88 + else 89 + val &= ~PCIE_ELBI_SLV_DBI_ENABLE; 90 + histb_pcie_writel(hipcie, PCIE_SYS_CTRL0, val); 91 + } 92 + 93 + static void histb_pcie_dbi_r_mode(struct pcie_port *pp, bool enable) 94 + { 95 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 96 + struct histb_pcie *hipcie = to_histb_pcie(pci); 97 + u32 val; 98 + 99 + val = histb_pcie_readl(hipcie, PCIE_SYS_CTRL1); 100 + if (enable) 101 + val |= PCIE_ELBI_SLV_DBI_ENABLE; 102 + else 103 + val &= ~PCIE_ELBI_SLV_DBI_ENABLE; 104 + histb_pcie_writel(hipcie, PCIE_SYS_CTRL1, val); 105 + } 106 + 107 + static u32 histb_pcie_read_dbi(struct dw_pcie *pci, void __iomem *base, 108 + u32 reg, size_t size) 109 + { 110 + u32 val; 111 + 112 + histb_pcie_dbi_r_mode(&pci->pp, true); 113 + dw_pcie_read(base + reg, size, &val); 114 + histb_pcie_dbi_r_mode(&pci->pp, false); 115 + 116 + return val; 117 + } 118 + 119 + static void histb_pcie_write_dbi(struct dw_pcie *pci, void __iomem *base, 120 + u32 reg, size_t size, u32 val) 121 + { 122 + histb_pcie_dbi_w_mode(&pci->pp, true); 123 + dw_pcie_write(base + reg, size, val); 124 + histb_pcie_dbi_w_mode(&pci->pp, false); 125 + } 126 + 127 + static int histb_pcie_rd_own_conf(struct pcie_port *pp, int where, 128 + int size, u32 *val) 129 + { 130 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 131 + int ret; 132 + 133 + histb_pcie_dbi_r_mode(pp, true); 134 + ret 
= dw_pcie_read(pci->dbi_base + where, size, val); 135 + histb_pcie_dbi_r_mode(pp, false); 136 + 137 + return ret; 138 + } 139 + 140 + static int histb_pcie_wr_own_conf(struct pcie_port *pp, int where, 141 + int size, u32 val) 142 + { 143 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 144 + int ret; 145 + 146 + histb_pcie_dbi_w_mode(pp, true); 147 + ret = dw_pcie_write(pci->dbi_base + where, size, val); 148 + histb_pcie_dbi_w_mode(pp, false); 149 + 150 + return ret; 151 + } 152 + 153 + static int histb_pcie_link_up(struct dw_pcie *pci) 154 + { 155 + struct histb_pcie *hipcie = to_histb_pcie(pci); 156 + u32 regval; 157 + u32 status; 158 + 159 + regval = histb_pcie_readl(hipcie, PCIE_SYS_STAT0); 160 + status = histb_pcie_readl(hipcie, PCIE_SYS_STAT4); 161 + status &= PCIE_LTSSM_STATE_MASK; 162 + if ((regval & PCIE_XMLH_LINK_UP) && (regval & PCIE_RDLH_LINK_UP) && 163 + (status == PCIE_LTSSM_STATE_ACTIVE)) 164 + return 1; 165 + 166 + return 0; 167 + } 168 + 169 + static int histb_pcie_establish_link(struct pcie_port *pp) 170 + { 171 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 172 + struct histb_pcie *hipcie = to_histb_pcie(pci); 173 + u32 regval; 174 + 175 + if (dw_pcie_link_up(pci)) { 176 + dev_info(pci->dev, "Link already up\n"); 177 + return 0; 178 + } 179 + 180 + /* PCIe RC work mode */ 181 + regval = histb_pcie_readl(hipcie, PCIE_SYS_CTRL0); 182 + regval &= ~PCIE_DEVICE_TYPE_MASK; 183 + regval |= PCIE_WM_RC; 184 + histb_pcie_writel(hipcie, PCIE_SYS_CTRL0, regval); 185 + 186 + /* setup root complex */ 187 + dw_pcie_setup_rc(pp); 188 + 189 + /* assert LTSSM enable */ 190 + regval = histb_pcie_readl(hipcie, PCIE_SYS_CTRL7); 191 + regval |= PCIE_APP_LTSSM_ENABLE; 192 + histb_pcie_writel(hipcie, PCIE_SYS_CTRL7, regval); 193 + 194 + return dw_pcie_wait_for_link(pci); 195 + } 196 + 197 + static int histb_pcie_host_init(struct pcie_port *pp) 198 + { 199 + histb_pcie_establish_link(pp); 200 + 201 + if (IS_ENABLED(CONFIG_PCI_MSI)) 202 + dw_pcie_msi_init(pp); 203 + 204 + 
return 0; 205 + } 206 + 207 + static struct dw_pcie_host_ops histb_pcie_host_ops = { 208 + .rd_own_conf = histb_pcie_rd_own_conf, 209 + .wr_own_conf = histb_pcie_wr_own_conf, 210 + .host_init = histb_pcie_host_init, 211 + }; 212 + 213 + static irqreturn_t histb_pcie_msi_irq_handler(int irq, void *arg) 214 + { 215 + struct pcie_port *pp = arg; 216 + 217 + return dw_handle_msi_irq(pp); 218 + } 219 + 220 + static void histb_pcie_host_disable(struct histb_pcie *hipcie) 221 + { 222 + reset_control_assert(hipcie->soft_reset); 223 + reset_control_assert(hipcie->sys_reset); 224 + reset_control_assert(hipcie->bus_reset); 225 + 226 + clk_disable_unprepare(hipcie->aux_clk); 227 + clk_disable_unprepare(hipcie->pipe_clk); 228 + clk_disable_unprepare(hipcie->sys_clk); 229 + clk_disable_unprepare(hipcie->bus_clk); 230 + 231 + if (gpio_is_valid(hipcie->reset_gpio)) 232 + gpio_set_value_cansleep(hipcie->reset_gpio, 0); 233 + } 234 + 235 + static int histb_pcie_host_enable(struct pcie_port *pp) 236 + { 237 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 238 + struct histb_pcie *hipcie = to_histb_pcie(pci); 239 + struct device *dev = pci->dev; 240 + int ret; 241 + 242 + /* power on PCIe device if have */ 243 + if (gpio_is_valid(hipcie->reset_gpio)) 244 + gpio_set_value_cansleep(hipcie->reset_gpio, 1); 245 + 246 + ret = clk_prepare_enable(hipcie->bus_clk); 247 + if (ret) { 248 + dev_err(dev, "cannot prepare/enable bus clk\n"); 249 + goto err_bus_clk; 250 + } 251 + 252 + ret = clk_prepare_enable(hipcie->sys_clk); 253 + if (ret) { 254 + dev_err(dev, "cannot prepare/enable sys clk\n"); 255 + goto err_sys_clk; 256 + } 257 + 258 + ret = clk_prepare_enable(hipcie->pipe_clk); 259 + if (ret) { 260 + dev_err(dev, "cannot prepare/enable pipe clk\n"); 261 + goto err_pipe_clk; 262 + } 263 + 264 + ret = clk_prepare_enable(hipcie->aux_clk); 265 + if (ret) { 266 + dev_err(dev, "cannot prepare/enable aux clk\n"); 267 + goto err_aux_clk; 268 + } 269 + 270 + reset_control_assert(hipcie->soft_reset); 
271 + reset_control_deassert(hipcie->soft_reset); 272 + 273 + reset_control_assert(hipcie->sys_reset); 274 + reset_control_deassert(hipcie->sys_reset); 275 + 276 + reset_control_assert(hipcie->bus_reset); 277 + reset_control_deassert(hipcie->bus_reset); 278 + 279 + return 0; 280 + 281 + err_aux_clk: 282 + clk_disable_unprepare(hipcie->aux_clk); 283 + err_pipe_clk: 284 + clk_disable_unprepare(hipcie->pipe_clk); 285 + err_sys_clk: 286 + clk_disable_unprepare(hipcie->sys_clk); 287 + err_bus_clk: 288 + clk_disable_unprepare(hipcie->bus_clk); 289 + 290 + return ret; 291 + } 292 + 293 + static const struct dw_pcie_ops dw_pcie_ops = { 294 + .read_dbi = histb_pcie_read_dbi, 295 + .write_dbi = histb_pcie_write_dbi, 296 + .link_up = histb_pcie_link_up, 297 + }; 298 + 299 + static int histb_pcie_probe(struct platform_device *pdev) 300 + { 301 + struct histb_pcie *hipcie; 302 + struct dw_pcie *pci; 303 + struct pcie_port *pp; 304 + struct resource *res; 305 + struct device_node *np = pdev->dev.of_node; 306 + struct device *dev = &pdev->dev; 307 + enum of_gpio_flags of_flags; 308 + unsigned long flag = GPIOF_DIR_OUT; 309 + int ret; 310 + 311 + hipcie = devm_kzalloc(dev, sizeof(*hipcie), GFP_KERNEL); 312 + if (!hipcie) 313 + return -ENOMEM; 314 + 315 + pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL); 316 + if (!pci) 317 + return -ENOMEM; 318 + 319 + hipcie->pci = pci; 320 + pp = &pci->pp; 321 + pci->dev = dev; 322 + pci->ops = &dw_pcie_ops; 323 + 324 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "control"); 325 + hipcie->ctrl = devm_ioremap_resource(dev, res); 326 + if (IS_ERR(hipcie->ctrl)) { 327 + dev_err(dev, "cannot get control reg base\n"); 328 + return PTR_ERR(hipcie->ctrl); 329 + } 330 + 331 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rc-dbi"); 332 + pci->dbi_base = devm_ioremap_resource(dev, res); 333 + if (IS_ERR(pci->dbi_base)) { 334 + dev_err(dev, "cannot get rc-dbi base\n"); 335 + return PTR_ERR(pci->dbi_base); 336 + } 337 + 338 + 
hipcie->reset_gpio = of_get_named_gpio_flags(np, 339 + "reset-gpios", 0, &of_flags); 340 + if (of_flags & OF_GPIO_ACTIVE_LOW) 341 + flag |= GPIOF_ACTIVE_LOW; 342 + if (gpio_is_valid(hipcie->reset_gpio)) { 343 + ret = devm_gpio_request_one(dev, hipcie->reset_gpio, 344 + flag, "PCIe device power control"); 345 + if (ret) { 346 + dev_err(dev, "unable to request gpio\n"); 347 + return ret; 348 + } 349 + } 350 + 351 + hipcie->aux_clk = devm_clk_get(dev, "aux"); 352 + if (IS_ERR(hipcie->aux_clk)) { 353 + dev_err(dev, "Failed to get PCIe aux clk\n"); 354 + return PTR_ERR(hipcie->aux_clk); 355 + } 356 + 357 + hipcie->pipe_clk = devm_clk_get(dev, "pipe"); 358 + if (IS_ERR(hipcie->pipe_clk)) { 359 + dev_err(dev, "Failed to get PCIe pipe clk\n"); 360 + return PTR_ERR(hipcie->pipe_clk); 361 + } 362 + 363 + hipcie->sys_clk = devm_clk_get(dev, "sys"); 364 + if (IS_ERR(hipcie->sys_clk)) { 365 + dev_err(dev, "Failed to get PCIe sys clk\n"); 366 + return PTR_ERR(hipcie->sys_clk); 367 + } 368 + 369 + hipcie->bus_clk = devm_clk_get(dev, "bus"); 370 + if (IS_ERR(hipcie->bus_clk)) { 371 + dev_err(dev, "Failed to get PCIe bus clk\n"); 372 + return PTR_ERR(hipcie->bus_clk); 373 + } 374 + 375 + hipcie->soft_reset = devm_reset_control_get(dev, "soft"); 376 + if (IS_ERR(hipcie->soft_reset)) { 377 + dev_err(dev, "couldn't get soft reset\n"); 378 + return PTR_ERR(hipcie->soft_reset); 379 + } 380 + 381 + hipcie->sys_reset = devm_reset_control_get(dev, "sys"); 382 + if (IS_ERR(hipcie->sys_reset)) { 383 + dev_err(dev, "couldn't get sys reset\n"); 384 + return PTR_ERR(hipcie->sys_reset); 385 + } 386 + 387 + hipcie->bus_reset = devm_reset_control_get(dev, "bus"); 388 + if (IS_ERR(hipcie->bus_reset)) { 389 + dev_err(dev, "couldn't get bus reset\n"); 390 + return PTR_ERR(hipcie->bus_reset); 391 + } 392 + 393 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 394 + pp->msi_irq = platform_get_irq_byname(pdev, "msi"); 395 + if (pp->msi_irq < 0) { 396 + dev_err(dev, "Failed to get MSI IRQ\n"); 397 + return 
pp->msi_irq; 398 + } 399 + 400 + ret = devm_request_irq(dev, pp->msi_irq, 401 + histb_pcie_msi_irq_handler, 402 + IRQF_SHARED, "histb-pcie-msi", pp); 403 + if (ret) { 404 + dev_err(dev, "cannot request MSI IRQ\n"); 405 + return ret; 406 + } 407 + } 408 + 409 + hipcie->phy = devm_phy_get(dev, "phy"); 410 + if (IS_ERR(hipcie->phy)) { 411 + dev_info(dev, "no pcie-phy found\n"); 412 + hipcie->phy = NULL; 413 + /* fall through here! 414 + * if no pcie-phy found, phy init 415 + * should be done under boot! 416 + */ 417 + } else { 418 + phy_init(hipcie->phy); 419 + } 420 + 421 + pp->root_bus_nr = -1; 422 + pp->ops = &histb_pcie_host_ops; 423 + 424 + platform_set_drvdata(pdev, hipcie); 425 + 426 + ret = histb_pcie_host_enable(pp); 427 + if (ret) { 428 + dev_err(dev, "failed to enable host\n"); 429 + return ret; 430 + } 431 + 432 + ret = dw_pcie_host_init(pp); 433 + if (ret) { 434 + dev_err(dev, "failed to initialize host\n"); 435 + return ret; 436 + } 437 + 438 + return 0; 439 + } 440 + 441 + static int histb_pcie_remove(struct platform_device *pdev) 442 + { 443 + struct histb_pcie *hipcie = platform_get_drvdata(pdev); 444 + 445 + histb_pcie_host_disable(hipcie); 446 + 447 + if (hipcie->phy) 448 + phy_exit(hipcie->phy); 449 + 450 + return 0; 451 + } 452 + 453 + static const struct of_device_id histb_pcie_of_match[] = { 454 + { .compatible = "hisilicon,hi3798cv200-pcie", }, 455 + {}, 456 + }; 457 + MODULE_DEVICE_TABLE(of, histb_pcie_of_match); 458 + 459 + static struct platform_driver histb_pcie_platform_driver = { 460 + .probe = histb_pcie_probe, 461 + .remove = histb_pcie_remove, 462 + .driver = { 463 + .name = "histb-pcie", 464 + .of_match_table = histb_pcie_of_match, 465 + }, 466 + }; 467 + module_platform_driver(histb_pcie_platform_driver); 468 + 469 + MODULE_DESCRIPTION("HiSilicon STB PCIe host controller driver"); 470 + MODULE_LICENSE("GPL v2");
+6
drivers/pci/host/Kconfig
··· 95 95 Say Y here if you want PCIe MSI support for the APM X-Gene v1 SoC. 96 96 This MSI driver supports 5 PCIe ports on the APM X-Gene v1 SoC. 97 97 98 + config PCI_V3_SEMI 99 + bool "V3 Semiconductor PCI controller" 100 + depends on OF 101 + depends on ARM 102 + default ARCH_INTEGRATOR_AP 103 + 98 104 config PCI_VERSATILE 99 105 bool "ARM Versatile PB PCI controller" 100 106 depends on ARCH_VERSATILE
+1
drivers/pci/host/Makefile
··· 10 10 obj-$(CONFIG_PCI_HOST_GENERIC) += pci-host-generic.o 11 11 obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o 12 12 obj-$(CONFIG_PCIE_XILINX_NWL) += pcie-xilinx-nwl.o 13 + obj-$(CONFIG_PCI_V3_SEMI) += pci-v3-semi.o 13 14 obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o 14 15 obj-$(CONFIG_PCI_VERSATILE) += pci-versatile.o 15 16 obj-$(CONFIG_PCIE_IPROC) += pcie-iproc.o
+2 -20
drivers/pci/host/pci-ftpci100.c
··· 371 371 return 0; 372 372 } 373 373 374 - static int pci_dma_range_parser_init(struct of_pci_range_parser *parser, 375 - struct device_node *node) 376 - { 377 - const int na = 3, ns = 2; 378 - int rlen; 379 - 380 - parser->node = node; 381 - parser->pna = of_n_addr_cells(node); 382 - parser->np = parser->pna + na + ns; 383 - 384 - parser->range = of_get_property(node, "dma-ranges", &rlen); 385 - if (!parser->range) 386 - return -ENOENT; 387 - parser->end = parser->range + rlen / sizeof(__be32); 388 - 389 - return 0; 390 - } 391 - 392 374 static int faraday_pci_parse_map_dma_ranges(struct faraday_pci *p, 393 375 struct device_node *np) 394 376 { ··· 385 403 int i = 0; 386 404 u32 val; 387 405 388 - if (pci_dma_range_parser_init(&parser, np)) { 406 + if (of_pci_dma_range_parser_init(&parser, np)) { 389 407 dev_err(dev, "missing dma-ranges property\n"); 390 408 return -EINVAL; 391 409 } ··· 464 482 } 465 483 p->bus_clk = devm_clk_get(dev, "PCICLK"); 466 484 if (IS_ERR(p->bus_clk)) 467 - return PTR_ERR(clk); 485 + return PTR_ERR(p->bus_clk); 468 486 ret = clk_prepare_enable(p->bus_clk); 469 487 if (ret) { 470 488 dev_err(dev, "could not prepare PCICLK\n");
+43
drivers/pci/host/pci-host-generic.c
··· 35 35 } 36 36 }; 37 37 38 + static bool pci_dw_valid_device(struct pci_bus *bus, unsigned int devfn) 39 + { 40 + struct pci_config_window *cfg = bus->sysdata; 41 + 42 + /* 43 + * The Synopsys DesignWare PCIe controller in ECAM mode will not filter 44 + * type 0 config TLPs sent to devices 1 and up on its downstream port, 45 + * resulting in devices appearing multiple times on bus 0 unless we 46 + * filter out those accesses here. 47 + */ 48 + if (bus->number == cfg->busr.start && PCI_SLOT(devfn) > 0) 49 + return false; 50 + 51 + return true; 52 + } 53 + 54 + static void __iomem *pci_dw_ecam_map_bus(struct pci_bus *bus, 55 + unsigned int devfn, int where) 56 + { 57 + if (!pci_dw_valid_device(bus, devfn)) 58 + return NULL; 59 + 60 + return pci_ecam_map_bus(bus, devfn, where); 61 + } 62 + 63 + static struct pci_ecam_ops pci_dw_ecam_bus_ops = { 64 + .bus_shift = 20, 65 + .pci_ops = { 66 + .map_bus = pci_dw_ecam_map_bus, 67 + .read = pci_generic_config_read, 68 + .write = pci_generic_config_write, 69 + } 70 + }; 71 + 38 72 static const struct of_device_id gen_pci_of_match[] = { 39 73 { .compatible = "pci-host-cam-generic", 40 74 .data = &gen_pci_cfg_cam_bus_ops }, 41 75 42 76 { .compatible = "pci-host-ecam-generic", 43 77 .data = &pci_generic_ecam_ops }, 78 + 79 + { .compatible = "marvell,armada8k-pcie-ecam", 80 + .data = &pci_dw_ecam_bus_ops }, 81 + 82 + { .compatible = "socionext,synquacer-pcie-ecam", 83 + .data = &pci_dw_ecam_bus_ops }, 84 + 85 + { .compatible = "snps,dw-pcie-ecam", 86 + .data = &pci_dw_ecam_bus_ops }, 44 87 45 88 { }, 46 89 };
+5 -3
drivers/pci/host/pci-hyperv.c
··· 879 879 int cpu; 880 880 u64 res; 881 881 882 - dest = irq_data_get_affinity_mask(data); 882 + dest = irq_data_get_effective_affinity_mask(data); 883 883 pdev = msi_desc_to_pci_dev(msi_desc); 884 884 pbus = pdev->bus; 885 885 hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata); ··· 1042 1042 struct hv_pci_dev *hpdev; 1043 1043 struct pci_bus *pbus; 1044 1044 struct pci_dev *pdev; 1045 + struct cpumask *dest; 1045 1046 struct compose_comp_ctxt comp; 1046 1047 struct tran_int_desc *int_desc; 1047 1048 struct { ··· 1057 1056 int ret; 1058 1057 1059 1058 pdev = msi_desc_to_pci_dev(irq_data_get_msi_desc(data)); 1059 + dest = irq_data_get_effective_affinity_mask(data); 1060 1060 pbus = pdev->bus; 1061 1061 hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata); 1062 1062 hpdev = get_pcichild_wslot(hbus, devfn_to_wslot(pdev->devfn)); ··· 1083 1081 switch (pci_protocol_version) { 1084 1082 case PCI_PROTOCOL_VERSION_1_1: 1085 1083 size = hv_compose_msi_req_v1(&ctxt.int_pkts.v1, 1086 - irq_data_get_affinity_mask(data), 1084 + dest, 1087 1085 hpdev->desc.win_slot.slot, 1088 1086 cfg->vector); 1089 1087 break; 1090 1088 1091 1089 case PCI_PROTOCOL_VERSION_1_2: 1092 1090 size = hv_compose_msi_req_v2(&ctxt.int_pkts.v2, 1093 - irq_data_get_affinity_mask(data), 1091 + dest, 1094 1092 hpdev->desc.win_slot.slot, 1095 1093 cfg->vector); 1096 1094 break;
drivers/pci/host/pci-rcar-gen2.c (+1, -19)
···
 	.write = pci_generic_config_write,
 };

-static int pci_dma_range_parser_init(struct of_pci_range_parser *parser,
-				     struct device_node *node)
-{
-	const int na = 3, ns = 2;
-	int rlen;
-
-	parser->node = node;
-	parser->pna = of_n_addr_cells(node);
-	parser->np = parser->pna + na + ns;
-
-	parser->range = of_get_property(node, "dma-ranges", &rlen);
-	if (!parser->range)
-		return -ENOENT;
-
-	parser->end = parser->range + rlen / sizeof(__be32);
-	return 0;
-}
-
 static int rcar_pci_parse_map_dma_ranges(struct rcar_pci_priv *pci,
 					 struct device_node *np)
 {
···
 	int index = 0;

 	/* Failure to parse is ok as we fall back to defaults */
-	if (pci_dma_range_parser_init(&parser, np))
+	if (of_pci_dma_range_parser_init(&parser, np))
 		return 0;

 	/* Get the dma-ranges from DT */
drivers/pci/host/pci-tegra.c (+131, -27)
···
 #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_SINGLE	(0x0 << 20)
 #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_420		(0x0 << 20)
 #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_X2_X1	(0x0 << 20)
+#define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_401		(0x0 << 20)
 #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_DUAL	(0x1 << 20)
 #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_222		(0x1 << 20)
 #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_X4_X1	(0x1 << 20)
+#define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_211		(0x1 << 20)
 #define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_411		(0x2 << 20)
+#define AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_111		(0x2 << 20)

 #define AFI_FUSE			0x104
 #define AFI_FUSE_PCIE_T0_GEN2_DIS	(1 << 2)
···
 	bool has_cml_clk;
 	bool has_gen2;
 	bool force_pca_enable;
+	bool program_uphy;
 };

 static inline struct tegra_msi *to_tegra_msi(struct msi_controller *chip)
···
 	return addr;
 }

+static int tegra_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
+				  int where, int size, u32 *value)
+{
+	if (bus->number == 0)
+		return pci_generic_config_read32(bus, devfn, where, size,
+						 value);
+
+	return pci_generic_config_read(bus, devfn, where, size, value);
+}
+
+static int tegra_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
+				   int where, int size, u32 value)
+{
+	if (bus->number == 0)
+		return pci_generic_config_write32(bus, devfn, where, size,
+						  value);
+
+	return pci_generic_config_write(bus, devfn, where, size, value);
+}
+
 static struct pci_ops tegra_pcie_ops = {
 	.add_bus = tegra_pcie_add_bus,
 	.remove_bus = tegra_pcie_remove_bus,
 	.map_bus = tegra_pcie_map_bus,
-	.read = pci_generic_config_read32,
-	.write = pci_generic_config_write32,
+	.read = tegra_pcie_config_read,
+	.write = tegra_pcie_config_write,
 };

 static unsigned long tegra_pcie_port_get_pex_ctrl(struct tegra_pcie_port *port)
···
 		afi_writel(pcie, value, AFI_FUSE);
 	}

-	err = tegra_pcie_phy_power_on(pcie);
-	if (err < 0) {
-		dev_err(dev, "failed to power on PHY(s): %d\n", err);
-		return err;
+	if (soc->program_uphy) {
+		err = tegra_pcie_phy_power_on(pcie);
+		if (err < 0) {
+			dev_err(dev, "failed to power on PHY(s): %d\n", err);
+			return err;
+		}
 	}

 	/* take the PCIe interface module out of reset */
···
 static void tegra_pcie_power_off(struct tegra_pcie *pcie)
 {
 	struct device *dev = pcie->dev;
+	const struct tegra_pcie_soc *soc = pcie->soc;
 	int err;

 	/* TODO: disable and unprepare clocks? */

-	err = tegra_pcie_phy_power_off(pcie);
-	if (err < 0)
-		dev_err(dev, "failed to power off PHY(s): %d\n", err);
+	if (soc->program_uphy) {
+		err = tegra_pcie_phy_power_off(pcie);
+		if (err < 0)
+			dev_err(dev, "failed to power off PHY(s): %d\n", err);
+	}

 	reset_control_assert(pcie->pcie_xrst);
 	reset_control_assert(pcie->afi_rst);
 	reset_control_assert(pcie->pex_rst);

-	tegra_powergate_power_off(TEGRA_POWERGATE_PCIE);
+	if (!dev->pm_domain)
+		tegra_powergate_power_off(TEGRA_POWERGATE_PCIE);

 	err = regulator_bulk_disable(pcie->num_supplies, pcie->supplies);
 	if (err < 0)
···
 	reset_control_assert(pcie->afi_rst);
 	reset_control_assert(pcie->pex_rst);

-	tegra_powergate_power_off(TEGRA_POWERGATE_PCIE);
+	if (!dev->pm_domain)
+		tegra_powergate_power_off(TEGRA_POWERGATE_PCIE);

 	/* enable regulators */
 	err = regulator_bulk_enable(pcie->num_supplies, pcie->supplies);
 	if (err < 0)
 		dev_err(dev, "failed to enable regulators: %d\n", err);

-	err = tegra_powergate_sequence_power_up(TEGRA_POWERGATE_PCIE,
-						pcie->pex_clk,
-						pcie->pex_rst);
-	if (err) {
-		dev_err(dev, "powerup sequence failed: %d\n", err);
-		return err;
+	if (dev->pm_domain) {
+		err = clk_prepare_enable(pcie->pex_clk);
+		if (err) {
+			dev_err(dev, "failed to enable PEX clock: %d\n", err);
+			return err;
+		}
+		reset_control_deassert(pcie->pex_rst);
+	} else {
+		err = tegra_powergate_sequence_power_up(TEGRA_POWERGATE_PCIE,
+							pcie->pex_clk,
+							pcie->pex_rst);
+		if (err) {
+			dev_err(dev, "powerup sequence failed: %d\n", err);
+			return err;
+		}
 	}

 	reset_control_deassert(pcie->afi_rst);
···
 	struct device *dev = pcie->dev;
 	struct platform_device *pdev = to_platform_device(dev);
 	struct resource *pads, *afi, *res;
+	const struct tegra_pcie_soc *soc = pcie->soc;
 	int err;

 	err = tegra_pcie_clocks_get(pcie);
···
 		return err;
 	}

-	err = tegra_pcie_phys_get(pcie);
-	if (err < 0) {
-		dev_err(dev, "failed to get PHYs: %d\n", err);
-		return err;
+	if (soc->program_uphy) {
+		err = tegra_pcie_phys_get(pcie);
+		if (err < 0) {
+			dev_err(dev, "failed to get PHYs: %d\n", err);
+			return err;
+		}
 	}

 	err = tegra_pcie_power_on(pcie);
···
 static int tegra_pcie_put_resources(struct tegra_pcie *pcie)
 {
 	struct device *dev = pcie->dev;
+	const struct tegra_pcie_soc *soc = pcie->soc;
 	int err;

 	if (pcie->irq > 0)
···

 	tegra_pcie_power_off(pcie);

-	err = phy_exit(pcie->phy);
-	if (err < 0)
-		dev_err(dev, "failed to teardown PHY: %d\n", err);
+	if (soc->program_uphy) {
+		err = phy_exit(pcie->phy);
+		if (err < 0)
+			dev_err(dev, "failed to teardown PHY: %d\n", err);
+	}

 	return 0;
 }
···
 	struct device *dev = pcie->dev;
 	struct device_node *np = dev->of_node;

-	if (of_device_is_compatible(np, "nvidia,tegra124-pcie") ||
-	    of_device_is_compatible(np, "nvidia,tegra210-pcie")) {
+	if (of_device_is_compatible(np, "nvidia,tegra186-pcie")) {
+		switch (lanes) {
+		case 0x010004:
+			dev_info(dev, "4x1, 1x1 configuration\n");
+			*xbar = AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_401;
+			return 0;
+
+		case 0x010102:
+			dev_info(dev, "2x1, 1X1, 1x1 configuration\n");
+			*xbar = AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_211;
+			return 0;
+
+		case 0x010101:
+			dev_info(dev, "1x1, 1x1, 1x1 configuration\n");
+			*xbar = AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_111;
+			return 0;
+
+		default:
+			dev_info(dev, "wrong configuration updated in DT, "
+				 "switching to default 2x1, 1x1, 1x1 "
+				 "configuration\n");
+			*xbar = AFI_PCIE_CONFIG_SM2TMS0_XBAR_CONFIG_211;
+			return 0;
+		}
+	} else if (of_device_is_compatible(np, "nvidia,tegra124-pcie") ||
+		   of_device_is_compatible(np, "nvidia,tegra210-pcie")) {
 		switch (lanes) {
 		case 0x0000104:
 			dev_info(dev, "4x1, 1x1 configuration\n");
···
 	struct device_node *np = dev->of_node;
 	unsigned int i = 0;

-	if (of_device_is_compatible(np, "nvidia,tegra210-pcie")) {
+	if (of_device_is_compatible(np, "nvidia,tegra186-pcie")) {
+		pcie->num_supplies = 4;
+
+		pcie->supplies = devm_kcalloc(pcie->dev, pcie->num_supplies,
+					      sizeof(*pcie->supplies),
+					      GFP_KERNEL);
+		if (!pcie->supplies)
+			return -ENOMEM;
+
+		pcie->supplies[i++].supply = "dvdd-pex";
+		pcie->supplies[i++].supply = "hvdd-pex-pll";
+		pcie->supplies[i++].supply = "hvdd-pex";
+		pcie->supplies[i++].supply = "vddio-pexctl-aud";
+	} else if (of_device_is_compatible(np, "nvidia,tegra210-pcie")) {
 		pcie->num_supplies = 6;

 		pcie->supplies = devm_kcalloc(pcie->dev, pcie->num_supplies,
···
 	.has_cml_clk = false,
 	.has_gen2 = false,
 	.force_pca_enable = false,
+	.program_uphy = true,
 };

 static const struct tegra_pcie_soc tegra30_pcie = {
···
 	.has_cml_clk = true,
 	.has_gen2 = false,
 	.force_pca_enable = false,
+	.program_uphy = true,
 };

 static const struct tegra_pcie_soc tegra124_pcie = {
···
 	.has_cml_clk = true,
 	.has_gen2 = true,
 	.force_pca_enable = false,
+	.program_uphy = true,
 };

 static const struct tegra_pcie_soc tegra210_pcie = {
···
 	.has_cml_clk = true,
 	.has_gen2 = true,
 	.force_pca_enable = true,
+	.program_uphy = true,
+};
+
+static const struct tegra_pcie_soc tegra186_pcie = {
+	.num_ports = 3,
+	.msi_base_shift = 8,
+	.pads_pll_ctl = PADS_PLL_CTL_TEGRA30,
+	.tx_ref_sel = PADS_PLL_CTL_TXCLKREF_BUF_EN,
+	.pads_refclk_cfg0 = 0x80b880b8,
+	.pads_refclk_cfg1 = 0x000480b8,
+	.has_pex_clkreq_en = true,
+	.has_pex_bias_ctrl = true,
+	.has_intr_prsnt_sense = true,
+	.has_cml_clk = false,
+	.has_gen2 = true,
+	.force_pca_enable = false,
+	.program_uphy = false,
 };

 static const struct of_device_id tegra_pcie_of_match[] = {
+	{ .compatible = "nvidia,tegra186-pcie", .data = &tegra186_pcie },
 	{ .compatible = "nvidia,tegra210-pcie", .data = &tegra210_pcie },
 	{ .compatible = "nvidia,tegra124-pcie", .data = &tegra124_pcie },
 	{ .compatible = "nvidia,tegra30-pcie", .data = &tegra30_pcie },
drivers/pci/host/pci-v3-semi.c (+959, new file)
+/*
+ * Support for V3 Semiconductor PCI Local Bus to PCI Bridge
+ * Copyright (C) 2017 Linus Walleij <linus.walleij@linaro.org>
+ *
+ * Based on the code from arch/arm/mach-integrator/pci_v3.c
+ * Copyright (C) 1999 ARM Limited
+ * Copyright (C) 2000-2001 Deep Blue Solutions Ltd
+ *
+ * Contributors to the old driver include:
+ * Russell King <linux@armlinux.org.uk>
+ * David A. Rusling <david.rusling@linaro.org> (uHAL, ARM Firmware suite)
+ * Rob Herring <robh@kernel.org>
+ * Liviu Dudau <Liviu.Dudau@arm.com>
+ * Grant Likely <grant.likely@secretlab.ca>
+ * Arnd Bergmann <arnd@arndb.de>
+ * Bjorn Helgaas <bhelgaas@google.com>
+ */
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_irq.h>
+#include <linux/of_pci.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/bitops.h>
+#include <linux/irq.h>
+#include <linux/mfd/syscon.h>
+#include <linux/regmap.h>
+#include <linux/clk.h>
+
+#define V3_PCI_VENDOR			0x00000000
+#define V3_PCI_DEVICE			0x00000002
+#define V3_PCI_CMD			0x00000004
+#define V3_PCI_STAT			0x00000006
+#define V3_PCI_CC_REV			0x00000008
+#define V3_PCI_HDR_CFG			0x0000000C
+#define V3_PCI_IO_BASE			0x00000010
+#define V3_PCI_BASE0			0x00000014
+#define V3_PCI_BASE1			0x00000018
+#define V3_PCI_SUB_VENDOR		0x0000002C
+#define V3_PCI_SUB_ID			0x0000002E
+#define V3_PCI_ROM			0x00000030
+#define V3_PCI_BPARAM			0x0000003C
+#define V3_PCI_MAP0			0x00000040
+#define V3_PCI_MAP1			0x00000044
+#define V3_PCI_INT_STAT			0x00000048
+#define V3_PCI_INT_CFG			0x0000004C
+#define V3_LB_BASE0			0x00000054
+#define V3_LB_BASE1			0x00000058
+#define V3_LB_MAP0			0x0000005E
+#define V3_LB_MAP1			0x00000062
+#define V3_LB_BASE2			0x00000064
+#define V3_LB_MAP2			0x00000066
+#define V3_LB_SIZE			0x00000068
+#define V3_LB_IO_BASE			0x0000006E
+#define V3_FIFO_CFG			0x00000070
+#define V3_FIFO_PRIORITY		0x00000072
+#define V3_FIFO_STAT			0x00000074
+#define V3_LB_ISTAT			0x00000076
+#define V3_LB_IMASK			0x00000077
+#define V3_SYSTEM			0x00000078
+#define V3_LB_CFG			0x0000007A
+#define V3_PCI_CFG			0x0000007C
+#define V3_DMA_PCI_ADR0			0x00000080
+#define V3_DMA_PCI_ADR1			0x00000090
+#define V3_DMA_LOCAL_ADR0		0x00000084
+#define V3_DMA_LOCAL_ADR1		0x00000094
+#define V3_DMA_LENGTH0			0x00000088
+#define V3_DMA_LENGTH1			0x00000098
+#define V3_DMA_CSR0			0x0000008B
+#define V3_DMA_CSR1			0x0000009B
+#define V3_DMA_CTLB_ADR0		0x0000008C
+#define V3_DMA_CTLB_ADR1		0x0000009C
+#define V3_DMA_DELAY			0x000000E0
+#define V3_MAIL_DATA			0x000000C0
+#define V3_PCI_MAIL_IEWR		0x000000D0
+#define V3_PCI_MAIL_IERD		0x000000D2
+#define V3_LB_MAIL_IEWR			0x000000D4
+#define V3_LB_MAIL_IERD			0x000000D6
+#define V3_MAIL_WR_STAT			0x000000D8
+#define V3_MAIL_RD_STAT			0x000000DA
+#define V3_QBA_MAP			0x000000DC
+
+/* PCI STATUS bits */
+#define V3_PCI_STAT_PAR_ERR		BIT(15)
+#define V3_PCI_STAT_SYS_ERR		BIT(14)
+#define V3_PCI_STAT_M_ABORT_ERR		BIT(13)
+#define V3_PCI_STAT_T_ABORT_ERR		BIT(12)
+
+/* LB ISTAT bits */
+#define V3_LB_ISTAT_MAILBOX		BIT(7)
+#define V3_LB_ISTAT_PCI_RD		BIT(6)
+#define V3_LB_ISTAT_PCI_WR		BIT(5)
+#define V3_LB_ISTAT_PCI_INT		BIT(4)
+#define V3_LB_ISTAT_PCI_PERR		BIT(3)
+#define V3_LB_ISTAT_I2O_QWR		BIT(2)
+#define V3_LB_ISTAT_DMA1		BIT(1)
+#define V3_LB_ISTAT_DMA0		BIT(0)
+
+/* PCI COMMAND bits */
+#define V3_COMMAND_M_FBB_EN		BIT(9)
+#define V3_COMMAND_M_SERR_EN		BIT(8)
+#define V3_COMMAND_M_PAR_EN		BIT(6)
+#define V3_COMMAND_M_MASTER_EN		BIT(2)
+#define V3_COMMAND_M_MEM_EN		BIT(1)
+#define V3_COMMAND_M_IO_EN		BIT(0)
+
+/* SYSTEM bits */
+#define V3_SYSTEM_M_RST_OUT		BIT(15)
+#define V3_SYSTEM_M_LOCK		BIT(14)
+#define V3_SYSTEM_UNLOCK		0xa05f
+
+/* PCI CFG bits */
+#define V3_PCI_CFG_M_I2O_EN		BIT(15)
+#define V3_PCI_CFG_M_IO_REG_DIS		BIT(14)
+#define V3_PCI_CFG_M_IO_DIS		BIT(13)
+#define V3_PCI_CFG_M_EN3V		BIT(12)
+#define V3_PCI_CFG_M_RETRY_EN		BIT(10)
+#define V3_PCI_CFG_M_AD_LOW1		BIT(9)
+#define V3_PCI_CFG_M_AD_LOW0		BIT(8)
+/*
+ * This is the value applied to C/BE[3:1], with bit 0 always held 0
+ * during DMA access.
+ */
+#define V3_PCI_CFG_M_RTYPE_SHIFT	5
+#define V3_PCI_CFG_M_WTYPE_SHIFT	1
+#define V3_PCI_CFG_TYPE_DEFAULT		0x3
+
+/* PCI BASE bits (PCI -> Local Bus) */
+#define V3_PCI_BASE_M_ADR_BASE		0xFFF00000U
+#define V3_PCI_BASE_M_ADR_BASEL		0x000FFF00U
+#define V3_PCI_BASE_M_PREFETCH		BIT(3)
+#define V3_PCI_BASE_M_TYPE		(3 << 1)
+#define V3_PCI_BASE_M_IO		BIT(0)
+
+/* PCI MAP bits (PCI -> Local bus) */
+#define V3_PCI_MAP_M_MAP_ADR		0xFFF00000U
+#define V3_PCI_MAP_M_RD_POST_INH	BIT(15)
+#define V3_PCI_MAP_M_ROM_SIZE		(3 << 10)
+#define V3_PCI_MAP_M_SWAP		(3 << 8)
+#define V3_PCI_MAP_M_ADR_SIZE		0x000000F0U
+#define V3_PCI_MAP_M_REG_EN		BIT(1)
+#define V3_PCI_MAP_M_ENABLE		BIT(0)
+
+/* LB_BASE0,1 bits (Local bus -> PCI) */
+#define V3_LB_BASE_ADR_BASE		0xfff00000U
+#define V3_LB_BASE_SWAP			(3 << 8)
+#define V3_LB_BASE_ADR_SIZE		(15 << 4)
+#define V3_LB_BASE_PREFETCH		BIT(3)
+#define V3_LB_BASE_ENABLE		BIT(0)
+
+#define V3_LB_BASE_ADR_SIZE_1MB		(0 << 4)
+#define V3_LB_BASE_ADR_SIZE_2MB		(1 << 4)
+#define V3_LB_BASE_ADR_SIZE_4MB		(2 << 4)
+#define V3_LB_BASE_ADR_SIZE_8MB		(3 << 4)
+#define V3_LB_BASE_ADR_SIZE_16MB	(4 << 4)
+#define V3_LB_BASE_ADR_SIZE_32MB	(5 << 4)
+#define V3_LB_BASE_ADR_SIZE_64MB	(6 << 4)
+#define V3_LB_BASE_ADR_SIZE_128MB	(7 << 4)
+#define V3_LB_BASE_ADR_SIZE_256MB	(8 << 4)
+#define V3_LB_BASE_ADR_SIZE_512MB	(9 << 4)
+#define V3_LB_BASE_ADR_SIZE_1GB		(10 << 4)
+#define V3_LB_BASE_ADR_SIZE_2GB		(11 << 4)
+
+#define v3_addr_to_lb_base(a)		((a) & V3_LB_BASE_ADR_BASE)
+
+/* LB_MAP0,1 bits (Local bus -> PCI) */
+#define V3_LB_MAP_MAP_ADR		0xfff0U
+#define V3_LB_MAP_TYPE			(7 << 1)
+#define V3_LB_MAP_AD_LOW_EN		BIT(0)
+
+#define V3_LB_MAP_TYPE_IACK		(0 << 1)
+#define V3_LB_MAP_TYPE_IO		(1 << 1)
+#define V3_LB_MAP_TYPE_MEM		(3 << 1)
+#define V3_LB_MAP_TYPE_CONFIG		(5 << 1)
+#define V3_LB_MAP_TYPE_MEM_MULTIPLE	(6 << 1)
+
+#define v3_addr_to_lb_map(a)		(((a) >> 16) & V3_LB_MAP_MAP_ADR)
+
+/* LB_BASE2 bits (Local bus -> PCI IO) */
+#define V3_LB_BASE2_ADR_BASE		0xff00U
+#define V3_LB_BASE2_SWAP_AUTO		(3 << 6)
+#define V3_LB_BASE2_ENABLE		BIT(0)
+
+#define v3_addr_to_lb_base2(a)		(((a) >> 16) & V3_LB_BASE2_ADR_BASE)
+
+/* LB_MAP2 bits (Local bus -> PCI IO) */
+#define V3_LB_MAP2_MAP_ADR		0xff00U
+
+#define v3_addr_to_lb_map2(a)		(((a) >> 16) & V3_LB_MAP2_MAP_ADR)
+
+/* FIFO priority bits */
+#define V3_FIFO_PRIO_LOCAL		BIT(12)
+#define V3_FIFO_PRIO_LB_RD1_FLUSH_EOB	BIT(10)
+#define V3_FIFO_PRIO_LB_RD1_FLUSH_AP1	BIT(11)
+#define V3_FIFO_PRIO_LB_RD1_FLUSH_ANY	(BIT(10)|BIT(11))
+#define V3_FIFO_PRIO_LB_RD0_FLUSH_EOB	BIT(8)
+#define V3_FIFO_PRIO_LB_RD0_FLUSH_AP1	BIT(9)
+#define V3_FIFO_PRIO_LB_RD0_FLUSH_ANY	(BIT(8)|BIT(9))
+#define V3_FIFO_PRIO_PCI		BIT(4)
+#define V3_FIFO_PRIO_PCI_RD1_FLUSH_EOB	BIT(2)
+#define V3_FIFO_PRIO_PCI_RD1_FLUSH_AP1	BIT(3)
+#define V3_FIFO_PRIO_PCI_RD1_FLUSH_ANY	(BIT(2)|BIT(3))
+#define V3_FIFO_PRIO_PCI_RD0_FLUSH_EOB	BIT(0)
+#define V3_FIFO_PRIO_PCI_RD0_FLUSH_AP1	BIT(1)
+#define V3_FIFO_PRIO_PCI_RD0_FLUSH_ANY	(BIT(0)|BIT(1))
+
+/* Local bus configuration bits */
+#define V3_LB_CFG_LB_TO_64_CYCLES	0x0000
+#define V3_LB_CFG_LB_TO_256_CYCLES	BIT(13)
+#define V3_LB_CFG_LB_TO_512_CYCLES	BIT(14)
+#define V3_LB_CFG_LB_TO_1024_CYCLES	(BIT(13)|BIT(14))
+#define V3_LB_CFG_LB_RST		BIT(12)
+#define V3_LB_CFG_LB_PPC_RDY		BIT(11)
+#define V3_LB_CFG_LB_LB_INT		BIT(10)
+#define V3_LB_CFG_LB_ERR_EN		BIT(9)
+#define V3_LB_CFG_LB_RDY_EN		BIT(8)
+#define V3_LB_CFG_LB_BE_IMODE		BIT(7)
+#define V3_LB_CFG_LB_BE_OMODE		BIT(6)
+#define V3_LB_CFG_LB_ENDIAN		BIT(5)
+#define V3_LB_CFG_LB_PARK_EN		BIT(4)
+#define V3_LB_CFG_LB_FBB_DIS		BIT(2)
+
+/* ARM Integrator-specific extended control registers */
+#define INTEGRATOR_SC_PCI_OFFSET	0x18
+#define INTEGRATOR_SC_PCI_ENABLE	BIT(0)
+#define INTEGRATOR_SC_PCI_INTCLR	BIT(1)
+#define INTEGRATOR_SC_LBFADDR_OFFSET	0x20
+#define INTEGRATOR_SC_LBFCODE_OFFSET	0x24
+
+struct v3_pci {
+	struct device *dev;
+	void __iomem *base;
+	void __iomem *config_base;
+	struct pci_bus *bus;
+	u32 config_mem;
+	u32 io_mem;
+	u32 non_pre_mem;
+	u32 pre_mem;
+	phys_addr_t io_bus_addr;
+	phys_addr_t non_pre_bus_addr;
+	phys_addr_t pre_bus_addr;
+	struct regmap *map;
+};
+
+/*
+ * The V3 PCI interface chip in Integrator provides several windows from
+ * local bus memory into the PCI memory areas. Unfortunately, there
+ * are not really enough windows for our usage, therefore we reuse
+ * one of the windows for access to PCI configuration space. On the
+ * Integrator/AP, the memory map is as follows:
+ *
+ * Local Bus Memory         Usage
+ *
+ * 40000000 - 4FFFFFFF      PCI memory.  256M non-prefetchable
+ * 50000000 - 5FFFFFFF      PCI memory.  256M prefetchable
+ * 60000000 - 60FFFFFF      PCI IO.  16M
+ * 61000000 - 61FFFFFF      PCI Configuration. 16M
+ *
+ * There are three V3 windows, each described by a pair of V3 registers.
+ * These are LB_BASE0/LB_MAP0, LB_BASE1/LB_MAP1 and LB_BASE2/LB_MAP2.
+ * Base0 and Base1 can be used for any type of PCI memory access.  Base2
+ * can be used either for PCI I/O or for I20 accesses.  By default, uHAL
+ * uses this only for PCI IO space.
+ *
+ * Normally these spaces are mapped using the following base registers:
+ *
+ * Usage  Local Bus Memory   Base/Map registers used
+ *
+ * Mem    40000000 - 4FFFFFFF  LB_BASE0/LB_MAP0
+ * Mem    50000000 - 5FFFFFFF  LB_BASE1/LB_MAP1
+ * IO     60000000 - 60FFFFFF  LB_BASE2/LB_MAP2
+ * Cfg    61000000 - 61FFFFFF
+ *
+ * This means that I20 and PCI configuration space accesses will fail.
+ * When PCI configuration accesses are needed (via the uHAL PCI
+ * configuration space primitives) we must remap the spaces as follows:
+ *
+ * Usage  Local Bus Memory   Base/Map registers used
+ *
+ * Mem    40000000 - 4FFFFFFF  LB_BASE0/LB_MAP0
+ * Mem    50000000 - 5FFFFFFF  LB_BASE0/LB_MAP0
+ * IO     60000000 - 60FFFFFF  LB_BASE2/LB_MAP2
+ * Cfg    61000000 - 61FFFFFF  LB_BASE1/LB_MAP1
+ *
+ * To make this work, the code depends on overlapping windows working.
+ * The V3 chip translates an address by checking its range within
+ * each of the BASE/MAP pairs in turn (in ascending register number
+ * order).  It will use the first matching pair.  So, for example,
+ * if the same address is mapped by both LB_BASE0/LB_MAP0 and
+ * LB_BASE1/LB_MAP1, the V3 will use the translation from
+ * LB_BASE0/LB_MAP0.
+ *
+ * To allow PCI Configuration space access, the code enlarges the
+ * window mapped by LB_BASE0/LB_MAP0 from 256M to 512M.  This occludes
+ * the windows currently mapped by LB_BASE1/LB_MAP1 so that it can
+ * be remapped for use by configuration cycles.
+ *
+ * At the end of the PCI Configuration space accesses,
+ * LB_BASE1/LB_MAP1 is reset to map PCI Memory.  Finally the window
+ * mapped by LB_BASE0/LB_MAP0 is reduced in size from 512M to 256M to
+ * reveal the now restored LB_BASE1/LB_MAP1 window.
+ *
+ * NOTE: We do not set up I2O mapping.  I suspect that this is only
+ * for an intelligent (target) device.  Using I2O disables most of
+ * the mappings into PCI memory.
+ */
+static void __iomem *v3_map_bus(struct pci_bus *bus,
+				unsigned int devfn, int offset)
+{
+	struct v3_pci *v3 = bus->sysdata;
+	unsigned int address, mapaddress, busnr;
+
+	busnr = bus->number;
+	if (busnr == 0) {
+		int slot = PCI_SLOT(devfn);
+
+		/*
+		 * local bus segment so need a type 0 config cycle
+		 *
+		 * build the PCI configuration "address" with one-hot in
+		 * A31-A11
+		 *
+		 * mapaddress:
+		 *  3:1 = config cycle (101)
+		 *  0   = PCI A1 & A0 are 0 (0)
+		 */
+		address = PCI_FUNC(devfn) << 8;
+		mapaddress = V3_LB_MAP_TYPE_CONFIG;
+
+		if (slot > 12)
+			/*
+			 * high order bits are handled by the MAP register
+			 */
+			mapaddress |= BIT(slot - 5);
+		else
+			/*
+			 * low order bits handled directly in the address
+			 */
+			address |= BIT(slot + 11);
+	} else {
+		/*
+		 * not the local bus segment so need a type 1 config cycle
+		 *
+		 * address:
+		 *  23:16 = bus number
+		 *  15:11 = slot number (7:3 of devfn)
+		 *  10:8  = func number (2:0 of devfn)
+		 *
+		 * mapaddress:
+		 *  3:1 = config cycle (101)
+		 *  0   = PCI A1 & A0 from host bus (1)
+		 */
+		mapaddress = V3_LB_MAP_TYPE_CONFIG | V3_LB_MAP_AD_LOW_EN;
+		address = (busnr << 16) | (devfn << 8);
+	}
+
+	/*
+	 * Set up base0 to see all 512Mbytes of memory space (not
+	 * prefetchable), this frees up base1 for re-use by
+	 * configuration memory
+	 */
+	writel(v3_addr_to_lb_base(v3->non_pre_mem) |
+	       V3_LB_BASE_ADR_SIZE_512MB | V3_LB_BASE_ENABLE,
+	       v3->base + V3_LB_BASE0);
+
+	/*
+	 * Set up base1/map1 to point into configuration space.
+	 * The config mem is always 16MB.
+	 */
+	writel(v3_addr_to_lb_base(v3->config_mem) |
+	       V3_LB_BASE_ADR_SIZE_16MB | V3_LB_BASE_ENABLE,
+	       v3->base + V3_LB_BASE1);
+	writew(mapaddress, v3->base + V3_LB_MAP1);
+
+	return v3->config_base + address + offset;
+}
+
+static void v3_unmap_bus(struct v3_pci *v3)
+{
+	/*
+	 * Reassign base1 for use by prefetchable PCI memory
+	 */
+	writel(v3_addr_to_lb_base(v3->pre_mem) |
+	       V3_LB_BASE_ADR_SIZE_256MB | V3_LB_BASE_PREFETCH |
+	       V3_LB_BASE_ENABLE,
+	       v3->base + V3_LB_BASE1);
+	writew(v3_addr_to_lb_map(v3->pre_bus_addr) |
+	       V3_LB_MAP_TYPE_MEM, /* was V3_LB_MAP_TYPE_MEM_MULTIPLE */
+	       v3->base + V3_LB_MAP1);
+
+	/*
+	 * And shrink base0 back to a 256M window (NOTE: MAP0 already correct)
+	 */
+	writel(v3_addr_to_lb_base(v3->non_pre_mem) |
+	       V3_LB_BASE_ADR_SIZE_256MB | V3_LB_BASE_ENABLE,
+	       v3->base + V3_LB_BASE0);
+}
+
+static int v3_pci_read_config(struct pci_bus *bus, unsigned int fn,
+			      int config, int size, u32 *value)
+{
+	struct v3_pci *v3 = bus->sysdata;
+	int ret;
+
+	dev_dbg(&bus->dev,
+		"[read]  slt: %.2d, fnc: %d, cnf: 0x%.2X, val (%d bytes): 0x%.8X\n",
+		PCI_SLOT(fn), PCI_FUNC(fn), config, size, *value);
+	ret = pci_generic_config_read(bus, fn, config, size, value);
+	v3_unmap_bus(v3);
+	return ret;
+}
+
+static int v3_pci_write_config(struct pci_bus *bus, unsigned int fn,
+			       int config, int size, u32 value)
+{
+	struct v3_pci *v3 = bus->sysdata;
+	int ret;
+
+	dev_dbg(&bus->dev,
+		"[write] slt: %.2d, fnc: %d, cnf: 0x%.2X, val (%d bytes): 0x%.8X\n",
+		PCI_SLOT(fn), PCI_FUNC(fn), config, size, value);
+
+	ret = pci_generic_config_write(bus, fn, config, size, value);
+	v3_unmap_bus(v3);
+	return ret;
+}
+
+static struct pci_ops v3_pci_ops = {
+	.map_bus = v3_map_bus,
+	.read = v3_pci_read_config,
+	.write = v3_pci_write_config,
+};
+
+static irqreturn_t v3_irq(int irq, void *data)
+{
+	struct v3_pci *v3 = data;
+	struct device *dev = v3->dev;
+	u32 status;
+
+	status = readw(v3->base + V3_PCI_STAT);
+	if (status & V3_PCI_STAT_PAR_ERR)
+		dev_err(dev, "parity error interrupt\n");
+	if (status & V3_PCI_STAT_SYS_ERR)
+		dev_err(dev, "system error interrupt\n");
+	if (status & V3_PCI_STAT_M_ABORT_ERR)
+		dev_err(dev, "master abort error interrupt\n");
+	if (status & V3_PCI_STAT_T_ABORT_ERR)
+		dev_err(dev, "target abort error interrupt\n");
+	writew(status, v3->base + V3_PCI_STAT);
+
+	status = readb(v3->base + V3_LB_ISTAT);
+	if (status & V3_LB_ISTAT_MAILBOX)
+		dev_info(dev, "PCI mailbox interrupt\n");
+	if (status & V3_LB_ISTAT_PCI_RD)
+		dev_err(dev, "PCI target LB->PCI READ abort interrupt\n");
+	if (status & V3_LB_ISTAT_PCI_WR)
+		dev_err(dev, "PCI target LB->PCI WRITE abort interrupt\n");
+	if (status & V3_LB_ISTAT_PCI_INT)
+		dev_info(dev, "PCI pin interrupt\n");
+	if (status & V3_LB_ISTAT_PCI_PERR)
+		dev_err(dev, "PCI parity error interrupt\n");
+	if (status & V3_LB_ISTAT_I2O_QWR)
+		dev_info(dev, "I2O inbound post queue interrupt\n");
+	if (status & V3_LB_ISTAT_DMA1)
+		dev_info(dev, "DMA channel 1 interrupt\n");
+	if (status & V3_LB_ISTAT_DMA0)
+		dev_info(dev, "DMA channel 0 interrupt\n");
+	/* Clear all possible interrupts on the local bus */
+	writeb(0, v3->base + V3_LB_ISTAT);
+	if (v3->map)
+		regmap_write(v3->map, INTEGRATOR_SC_PCI_OFFSET,
+			     INTEGRATOR_SC_PCI_ENABLE |
+			     INTEGRATOR_SC_PCI_INTCLR);
+
+	return IRQ_HANDLED;
+}
+
+static int v3_integrator_init(struct v3_pci *v3)
+{
+	unsigned int val;
+
+	v3->map =
+		syscon_regmap_lookup_by_compatible("arm,integrator-ap-syscon");
+	if (IS_ERR(v3->map)) {
+		dev_err(v3->dev, "no syscon\n");
+		return -ENODEV;
+	}
+
+	regmap_read(v3->map, INTEGRATOR_SC_PCI_OFFSET, &val);
+	/* Take the PCI bridge out of reset, clear IRQs */
+	regmap_write(v3->map, INTEGRATOR_SC_PCI_OFFSET,
+		     INTEGRATOR_SC_PCI_ENABLE |
+		     INTEGRATOR_SC_PCI_INTCLR);
+
+	if (!(val & INTEGRATOR_SC_PCI_ENABLE)) {
+		/* If we were in reset we need to sleep a bit */
+		msleep(230);
+
+		/* Set the physical base for the controller itself */
+		writel(0x6200, v3->base + V3_LB_IO_BASE);
+
+		/* Wait for the mailbox to settle after reset */
+		do {
+			writeb(0xaa, v3->base + V3_MAIL_DATA);
+			writeb(0x55, v3->base + V3_MAIL_DATA + 4);
+		} while (readb(v3->base + V3_MAIL_DATA) != 0xaa &&
+			 readb(v3->base + V3_MAIL_DATA) != 0x55);
+	}
+
+	dev_info(v3->dev, "initialized PCI V3 Integrator/AP integration\n");
+
+	return 0;
+}
+
+static int v3_pci_setup_resource(struct v3_pci *v3,
+				 resource_size_t io_base,
+				 struct pci_host_bridge *host,
+				 struct resource_entry *win)
+{
+	struct device *dev = v3->dev;
+	struct resource *mem;
+	struct resource *io;
+	int ret;
+
+	switch (resource_type(win->res)) {
+	case IORESOURCE_IO:
+		io = win->res;
+		io->name = "V3 PCI I/O";
+		v3->io_mem = io_base;
+		v3->io_bus_addr = io->start - win->offset;
+		dev_dbg(dev, "I/O window %pR, bus addr %pap\n",
+			io, &v3->io_bus_addr);
+		ret = pci_remap_iospace(io, io_base);
+		if (ret) {
+			dev_warn(dev,
+				 "error %d: failed to map resource %pR\n",
+				 ret, io);
+			return ret;
+		}
+		/* Setup window 2 - PCI I/O */
+		writel(v3_addr_to_lb_base2(v3->io_mem) |
+		       V3_LB_BASE2_ENABLE,
+		       v3->base + V3_LB_BASE2);
+		writew(v3_addr_to_lb_map2(v3->io_bus_addr),
+		       v3->base + V3_LB_MAP2);
+		break;
+	case IORESOURCE_MEM:
+		mem = win->res;
+		if (mem->flags & IORESOURCE_PREFETCH) {
+			mem->name = "V3 PCI PRE-MEM";
+			v3->pre_mem = mem->start;
+			v3->pre_bus_addr = mem->start - win->offset;
+			dev_dbg(dev, "PREFETCHABLE MEM window %pR, bus addr %pap\n",
+				mem, &v3->pre_bus_addr);
+			if (resource_size(mem) != SZ_256M) {
+				dev_err(dev, "prefetchable memory range is not 256MB\n");
+				return -EINVAL;
+			}
+			if (v3->non_pre_mem &&
+			    (mem->start != v3->non_pre_mem + SZ_256M)) {
+				dev_err(dev,
+					"prefetchable memory is not adjacent to non-prefetchable memory\n");
+				return -EINVAL;
+			}
+			/* Setup window 1 - PCI prefetchable memory */
+			writel(v3_addr_to_lb_base(v3->pre_mem) |
+			       V3_LB_BASE_ADR_SIZE_256MB |
+			       V3_LB_BASE_PREFETCH |
+			       V3_LB_BASE_ENABLE,
+			       v3->base + V3_LB_BASE1);
+			writew(v3_addr_to_lb_map(v3->pre_bus_addr) |
+			       V3_LB_MAP_TYPE_MEM, /* Was V3_LB_MAP_TYPE_MEM_MULTIPLE */
+			       v3->base + V3_LB_MAP1);
+		} else {
+			mem->name = "V3 PCI NON-PRE-MEM";
+			v3->non_pre_mem = mem->start;
+			v3->non_pre_bus_addr = mem->start - win->offset;
+			dev_dbg(dev, "NON-PREFETCHABLE MEM window %pR, bus addr %pap\n",
+				mem, &v3->non_pre_bus_addr);
+			if (resource_size(mem) != SZ_256M) {
+				dev_err(dev,
+					"non-prefetchable memory range is not 256MB\n");
+				return -EINVAL;
+			}
+			/* Setup window 0 - PCI non-prefetchable memory */
+			writel(v3_addr_to_lb_base(v3->non_pre_mem) |
+			       V3_LB_BASE_ADR_SIZE_256MB |
+			       V3_LB_BASE_ENABLE,
+			       v3->base + V3_LB_BASE0);
+			writew(v3_addr_to_lb_map(v3->non_pre_bus_addr) |
+			       V3_LB_MAP_TYPE_MEM,
+			       v3->base + V3_LB_MAP0);
+		}
+		break;
+	case IORESOURCE_BUS:
+		dev_dbg(dev, "BUS %pR\n", win->res);
+		host->busnr = win->res->start;
+		break;
+	default:
+		dev_info(dev, "Unknown resource type %lu\n",
+			 resource_type(win->res));
+		break;
+	}
+
+	return 0;
+}
+
+static int v3_get_dma_range_config(struct v3_pci *v3,
+				   struct of_pci_range *range,
+				   u32 *pci_base, u32 *pci_map)
+{
+	struct device *dev = v3->dev;
+	u64 cpu_end = range->cpu_addr + range->size - 1;
+	u64 pci_end = range->pci_addr + range->size - 1;
+	u32 val;
+
+	if (range->pci_addr & ~V3_PCI_BASE_M_ADR_BASE) {
+		dev_err(dev, "illegal range, only PCI bits 31..20 allowed\n");
+		return -EINVAL;
+	}
+	val = ((u32)range->pci_addr) & V3_PCI_BASE_M_ADR_BASE;
+	*pci_base = val;
+
+	if (range->cpu_addr & ~V3_PCI_MAP_M_MAP_ADR) {
+		dev_err(dev, "illegal range, only CPU bits 31..20 allowed\n");
+		return -EINVAL;
+	}
+	val = ((u32)range->cpu_addr) & V3_PCI_MAP_M_MAP_ADR;
+
+	switch (range->size) {
+	case SZ_1M:
+		val |= V3_LB_BASE_ADR_SIZE_1MB;
+		break;
+	case SZ_2M:
+		val |= V3_LB_BASE_ADR_SIZE_2MB;
+		break;
+	case SZ_4M:
+		val |= V3_LB_BASE_ADR_SIZE_4MB;
+		break;
+	case SZ_8M:
+		val |= V3_LB_BASE_ADR_SIZE_8MB;
+		break;
+	case SZ_16M:
+		val |= V3_LB_BASE_ADR_SIZE_16MB;
+		break;
+	case SZ_32M:
+		val |= V3_LB_BASE_ADR_SIZE_32MB;
+		break;
+	case SZ_64M:
+		val |= V3_LB_BASE_ADR_SIZE_64MB;
+		break;
+	case SZ_128M:
+		val |= V3_LB_BASE_ADR_SIZE_128MB;
+		break;
+	case SZ_256M:
+		val |= V3_LB_BASE_ADR_SIZE_256MB;
+		break;
+	case SZ_512M:
+		val |= V3_LB_BASE_ADR_SIZE_512MB;
+		break;
+	case SZ_1G:
+		val |= V3_LB_BASE_ADR_SIZE_1GB;
+		break;
+	case SZ_2G:
+		val |= V3_LB_BASE_ADR_SIZE_2GB;
+		break;
+	default:
+		dev_err(v3->dev, "illegal dma memory chunk size\n");
+		return -EINVAL;
+		break;
+	};
+	val |= V3_PCI_MAP_M_REG_EN | V3_PCI_MAP_M_ENABLE;
+	*pci_map = val;
+
+	dev_dbg(dev,
+ "DMA MEM CPU: 0x%016llx -> 0x%016llx => " 681 + "PCI: 0x%016llx -> 0x%016llx base %08x map %08x\n", 682 + range->cpu_addr, cpu_end, 683 + range->pci_addr, pci_end, 684 + *pci_base, *pci_map); 685 + 686 + return 0; 687 + } 688 + 689 + static int v3_pci_parse_map_dma_ranges(struct v3_pci *v3, 690 + struct device_node *np) 691 + { 692 + struct of_pci_range range; 693 + struct of_pci_range_parser parser; 694 + struct device *dev = v3->dev; 695 + int i = 0; 696 + 697 + if (of_pci_dma_range_parser_init(&parser, np)) { 698 + dev_err(dev, "missing dma-ranges property\n"); 699 + return -EINVAL; 700 + } 701 + 702 + /* 703 + * Get the dma-ranges from the device tree 704 + */ 705 + for_each_of_pci_range(&parser, &range) { 706 + int ret; 707 + u32 pci_base, pci_map; 708 + 709 + ret = v3_get_dma_range_config(v3, &range, &pci_base, &pci_map); 710 + if (ret) 711 + return ret; 712 + 713 + if (i == 0) { 714 + writel(pci_base, v3->base + V3_PCI_BASE0); 715 + writel(pci_map, v3->base + V3_PCI_MAP0); 716 + } else if (i == 1) { 717 + writel(pci_base, v3->base + V3_PCI_BASE1); 718 + writel(pci_map, v3->base + V3_PCI_MAP1); 719 + } else { 720 + dev_err(dev, "too many ranges, only two supported\n"); 721 + dev_err(dev, "range %d ignored\n", i); 722 + } 723 + i++; 724 + } 725 + return 0; 726 + } 727 + 728 + static int v3_pci_probe(struct platform_device *pdev) 729 + { 730 + struct device *dev = &pdev->dev; 731 + struct device_node *np = dev->of_node; 732 + resource_size_t io_base; 733 + struct resource *regs; 734 + struct resource_entry *win; 735 + struct v3_pci *v3; 736 + struct pci_host_bridge *host; 737 + struct clk *clk; 738 + u16 val; 739 + int irq; 740 + int ret; 741 + LIST_HEAD(res); 742 + 743 + host = pci_alloc_host_bridge(sizeof(*v3)); 744 + if (!host) 745 + return -ENOMEM; 746 + 747 + host->dev.parent = dev; 748 + host->ops = &v3_pci_ops; 749 + host->busnr = 0; 750 + host->msi = NULL; 751 + host->map_irq = of_irq_parse_and_map_pci; 752 + host->swizzle_irq = pci_common_swizzle; 
753 + v3 = pci_host_bridge_priv(host); 754 + host->sysdata = v3; 755 + v3->dev = dev; 756 + 757 + /* Get and enable host clock */ 758 + clk = devm_clk_get(dev, NULL); 759 + if (IS_ERR(clk)) { 760 + dev_err(dev, "clock not found\n"); 761 + return PTR_ERR(clk); 762 + } 763 + ret = clk_prepare_enable(clk); 764 + if (ret) { 765 + dev_err(dev, "unable to enable clock\n"); 766 + return ret; 767 + } 768 + 769 + regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 770 + v3->base = devm_ioremap_resource(dev, regs); 771 + if (IS_ERR(v3->base)) 772 + return PTR_ERR(v3->base); 773 + /* 774 + * The hardware has a register with the physical base address 775 + * of the V3 controller itself, verify that this is the same 776 + * as the physical memory we've remapped it from. 777 + */ 778 + if (readl(v3->base + V3_LB_IO_BASE) != (regs->start >> 16)) 779 + dev_err(dev, "V3_LB_IO_BASE = %08x but device is @%pR\n", 780 + readl(v3->base + V3_LB_IO_BASE), regs); 781 + 782 + /* Configuration space is 16MB directly mapped */ 783 + regs = platform_get_resource(pdev, IORESOURCE_MEM, 1); 784 + if (resource_size(regs) != SZ_16M) { 785 + dev_err(dev, "config mem is not 16MB!\n"); 786 + return -EINVAL; 787 + } 788 + v3->config_mem = regs->start; 789 + v3->config_base = devm_ioremap_resource(dev, regs); 790 + if (IS_ERR(v3->config_base)) 791 + return PTR_ERR(v3->config_base); 792 + 793 + ret = of_pci_get_host_bridge_resources(np, 0, 0xff, &res, &io_base); 794 + if (ret) 795 + return ret; 796 + 797 + ret = devm_request_pci_bus_resources(dev, &res); 798 + if (ret) 799 + return ret; 800 + 801 + /* Get and request error IRQ resource */ 802 + irq = platform_get_irq(pdev, 0); 803 + if (irq <= 0) { 804 + dev_err(dev, "unable to obtain PCIv3 error IRQ\n"); 805 + return -ENODEV; 806 + } 807 + ret = devm_request_irq(dev, irq, v3_irq, 0, 808 + "PCIv3 error", v3); 809 + if (ret < 0) { 810 + dev_err(dev, 811 + "unable to request PCIv3 error IRQ %d (%d)\n", 812 + irq, ret); 813 + return ret; 814 + } 815 + 
816 + /* 817 + * Unlock V3 registers, but only if they were previously locked. 818 + */ 819 + if (readw(v3->base + V3_SYSTEM) & V3_SYSTEM_M_LOCK) 820 + writew(V3_SYSTEM_UNLOCK, v3->base + V3_SYSTEM); 821 + 822 + /* Disable all slave access while we set up the windows */ 823 + val = readw(v3->base + V3_PCI_CMD); 824 + val &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER); 825 + writew(val, v3->base + V3_PCI_CMD); 826 + 827 + /* Put the PCI bus into reset */ 828 + val = readw(v3->base + V3_SYSTEM); 829 + val &= ~V3_SYSTEM_M_RST_OUT; 830 + writew(val, v3->base + V3_SYSTEM); 831 + 832 + /* Retry until we're ready */ 833 + val = readw(v3->base + V3_PCI_CFG); 834 + val |= V3_PCI_CFG_M_RETRY_EN; 835 + writew(val, v3->base + V3_PCI_CFG); 836 + 837 + /* Set up the local bus protocol */ 838 + val = readw(v3->base + V3_LB_CFG); 839 + val |= V3_LB_CFG_LB_BE_IMODE; /* Byte enable input */ 840 + val |= V3_LB_CFG_LB_BE_OMODE; /* Byte enable output */ 841 + val &= ~V3_LB_CFG_LB_ENDIAN; /* Little endian */ 842 + val &= ~V3_LB_CFG_LB_PPC_RDY; /* TODO: when using on PPC403Gx, set to 1 */ 843 + writew(val, v3->base + V3_LB_CFG); 844 + 845 + /* Enable the PCI bus master */ 846 + val = readw(v3->base + V3_PCI_CMD); 847 + val |= PCI_COMMAND_MASTER; 848 + writew(val, v3->base + V3_PCI_CMD); 849 + 850 + /* Get the I/O and memory ranges from DT */ 851 + resource_list_for_each_entry(win, &res) { 852 + ret = v3_pci_setup_resource(v3, io_base, host, win); 853 + if (ret) { 854 + dev_err(dev, "error setting up resources\n"); 855 + return ret; 856 + } 857 + } 858 + ret = v3_pci_parse_map_dma_ranges(v3, np); 859 + if (ret) 860 + return ret; 861 + 862 + /* 863 + * Disable PCI to host IO cycles, enable I/O buffers @3.3V, 864 + * set AD_LOW0 to 1 if one of the LB_MAP registers choose 865 + * to use this (should be unused). 
866 + */ 867 + writel(0x00000000, v3->base + V3_PCI_IO_BASE); 868 + val = V3_PCI_CFG_M_IO_REG_DIS | V3_PCI_CFG_M_IO_DIS | 869 + V3_PCI_CFG_M_EN3V | V3_PCI_CFG_M_AD_LOW0; 870 + /* 871 + * DMA read and write from PCI bus commands types 872 + */ 873 + val |= V3_PCI_CFG_TYPE_DEFAULT << V3_PCI_CFG_M_RTYPE_SHIFT; 874 + val |= V3_PCI_CFG_TYPE_DEFAULT << V3_PCI_CFG_M_WTYPE_SHIFT; 875 + writew(val, v3->base + V3_PCI_CFG); 876 + 877 + /* 878 + * Set the V3 FIFO such that writes have higher priority than 879 + * reads, and local bus write causes local bus read fifo flush 880 + * on aperture 1. Same for PCI. 881 + */ 882 + writew(V3_FIFO_PRIO_LB_RD1_FLUSH_AP1 | 883 + V3_FIFO_PRIO_LB_RD0_FLUSH_AP1 | 884 + V3_FIFO_PRIO_PCI_RD1_FLUSH_AP1 | 885 + V3_FIFO_PRIO_PCI_RD0_FLUSH_AP1, 886 + v3->base + V3_FIFO_PRIORITY); 887 + 888 + 889 + /* 890 + * Clear any error interrupts, and enable parity and write error 891 + * interrupts 892 + */ 893 + writeb(0, v3->base + V3_LB_ISTAT); 894 + val = readw(v3->base + V3_LB_CFG); 895 + val |= V3_LB_CFG_LB_LB_INT; 896 + writew(val, v3->base + V3_LB_CFG); 897 + writeb(V3_LB_ISTAT_PCI_WR | V3_LB_ISTAT_PCI_PERR, 898 + v3->base + V3_LB_IMASK); 899 + 900 + /* Special Integrator initialization */ 901 + if (of_device_is_compatible(np, "arm,integrator-ap-pci")) { 902 + ret = v3_integrator_init(v3); 903 + if (ret) 904 + return ret; 905 + } 906 + 907 + /* Post-init: enable PCI memory and invalidate (master already on) */ 908 + val = readw(v3->base + V3_PCI_CMD); 909 + val |= PCI_COMMAND_MEMORY | PCI_COMMAND_INVALIDATE; 910 + writew(val, v3->base + V3_PCI_CMD); 911 + 912 + /* Clear pending interrupts */ 913 + writeb(0, v3->base + V3_LB_ISTAT); 914 + /* Read or write errors and parity errors cause interrupts */ 915 + writeb(V3_LB_ISTAT_PCI_RD | V3_LB_ISTAT_PCI_WR | V3_LB_ISTAT_PCI_PERR, 916 + v3->base + V3_LB_IMASK); 917 + 918 + /* Take the PCI bus out of reset so devices can initialize */ 919 + val = readw(v3->base + V3_SYSTEM); 920 + val |= V3_SYSTEM_M_RST_OUT; 
921 + writew(val, v3->base + V3_SYSTEM); 922 + 923 + /* 924 + * Re-lock the system register. 925 + */ 926 + val = readw(v3->base + V3_SYSTEM); 927 + val |= V3_SYSTEM_M_LOCK; 928 + writew(val, v3->base + V3_SYSTEM); 929 + 930 + list_splice_init(&res, &host->windows); 931 + ret = pci_scan_root_bus_bridge(host); 932 + if (ret) { 933 + dev_err(dev, "failed to register host: %d\n", ret); 934 + return ret; 935 + } 936 + v3->bus = host->bus; 937 + 938 + pci_bus_assign_resources(v3->bus); 939 + pci_bus_add_devices(v3->bus); 940 + 941 + return 0; 942 + } 943 + 944 + static const struct of_device_id v3_pci_of_match[] = { 945 + { 946 + .compatible = "v3,v360epc-pci", 947 + }, 948 + {}, 949 + }; 950 + 951 + static struct platform_driver v3_pci_driver = { 952 + .driver = { 953 + .name = "pci-v3-semi", 954 + .of_match_table = of_match_ptr(v3_pci_of_match), 955 + .suppress_bind_attrs = true, 956 + }, 957 + .probe = v3_pci_probe, 958 + }; 959 + builtin_platform_driver(v3_pci_driver);
+3 -21
drivers/pci/host/pci-xgene.c
··· 542 542 xgene_pcie_setup_pims(port, pim_reg, pci_addr, ~(size - 1)); 543 543 } 544 544 545 - static int pci_dma_range_parser_init(struct of_pci_range_parser *parser, 546 - struct device_node *node) 547 - { 548 - const int na = 3, ns = 2; 549 - int rlen; 550 - 551 - parser->node = node; 552 - parser->pna = of_n_addr_cells(node); 553 - parser->np = parser->pna + na + ns; 554 - 555 - parser->range = of_get_property(node, "dma-ranges", &rlen); 556 - if (!parser->range) 557 - return -ENOENT; 558 - parser->end = parser->range + rlen / sizeof(__be32); 559 - 560 - return 0; 561 - } 562 - 563 545 static int xgene_pcie_parse_map_dma_ranges(struct xgene_pcie_port *port) 564 546 { 565 547 struct device_node *np = port->node; ··· 550 568 struct device *dev = port->dev; 551 569 u8 ib_reg_mask = 0; 552 570 553 - if (pci_dma_range_parser_init(&parser, np)) { 571 + if (of_pci_dma_range_parser_init(&parser, np)) { 554 572 dev_err(dev, "missing dma-ranges property\n"); 555 573 return -EINVAL; 556 574 } ··· 610 628 .write = pci_generic_config_write32, 611 629 }; 612 630 613 - static int xgene_pcie_probe_bridge(struct platform_device *pdev) 631 + static int xgene_pcie_probe(struct platform_device *pdev) 614 632 { 615 633 struct device *dev = &pdev->dev; 616 634 struct device_node *dn = dev->of_node; ··· 691 709 .of_match_table = of_match_ptr(xgene_pcie_match_table), 692 710 .suppress_bind_attrs = true, 693 711 }, 694 - .probe = xgene_pcie_probe_bridge, 712 + .probe = xgene_pcie_probe, 695 713 }; 696 714 builtin_platform_driver(xgene_pcie_driver); 697 715 #endif
+4 -4
drivers/pci/host/pcie-altera.c
··· 105 105 return readl_relaxed(pcie->cra_base + reg); 106 106 } 107 107 108 - static bool altera_pcie_link_is_up(struct altera_pcie *pcie) 108 + static bool altera_pcie_link_up(struct altera_pcie *pcie) 109 109 { 110 110 return !!((cra_readl(pcie, RP_LTSSM) & RP_LTSSM_MASK) == LTSSM_L0); 111 111 } ··· 142 142 { 143 143 /* If there is no link, then there is no device */ 144 144 if (bus->number != pcie->root_bus_nr) { 145 - if (!altera_pcie_link_is_up(pcie)) 145 + if (!altera_pcie_link_up(pcie)) 146 146 return false; 147 147 } 148 148 ··· 412 412 /* Wait for link is up */ 413 413 start_jiffies = jiffies; 414 414 for (;;) { 415 - if (altera_pcie_link_is_up(pcie)) 415 + if (altera_pcie_link_up(pcie)) 416 416 break; 417 417 418 418 if (time_after(jiffies, start_jiffies + LINK_UP_TIMEOUT)) { ··· 427 427 { 428 428 u16 linkcap, linkstat, linkctl; 429 429 430 - if (!altera_pcie_link_is_up(pcie)) 430 + if (!altera_pcie_link_up(pcie)) 431 431 return; 432 432 433 433 /*
+12 -7
drivers/pci/host/pcie-iproc-msi.c
··· 179 179 180 180 static struct msi_domain_info iproc_msi_domain_info = { 181 181 .flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | 182 - MSI_FLAG_PCI_MSIX, 182 + MSI_FLAG_MULTI_PCI_MSI | MSI_FLAG_PCI_MSIX, 183 183 .chip = &iproc_msi_irq_chip, 184 184 }; 185 185 ··· 237 237 addr = msi->msi_addr + iproc_msi_addr_offset(msi, data->hwirq); 238 238 msg->address_lo = lower_32_bits(addr); 239 239 msg->address_hi = upper_32_bits(addr); 240 - msg->data = data->hwirq; 240 + msg->data = data->hwirq << 5; 241 241 } 242 242 243 243 static struct irq_chip iproc_msi_bottom_irq_chip = { ··· 251 251 void *args) 252 252 { 253 253 struct iproc_msi *msi = domain->host_data; 254 - int hwirq; 254 + int hwirq, i; 255 255 256 256 mutex_lock(&msi->bitmap_lock); 257 257 ··· 267 267 268 268 mutex_unlock(&msi->bitmap_lock); 269 269 270 - irq_domain_set_info(domain, virq, hwirq, &iproc_msi_bottom_irq_chip, 271 - domain->host_data, handle_simple_irq, NULL, NULL); 270 + for (i = 0; i < nr_irqs; i++) { 271 + irq_domain_set_info(domain, virq + i, hwirq + i, 272 + &iproc_msi_bottom_irq_chip, 273 + domain->host_data, handle_simple_irq, 274 + NULL, NULL); 275 + } 272 276 273 - return 0; 277 + return hwirq; 274 278 } 275 279 276 280 static void iproc_msi_irq_domain_free(struct irq_domain *domain, ··· 306 302 307 303 offs = iproc_msi_eq_offset(msi, eq) + head * sizeof(u32); 308 304 msg = (u32 *)(msi->eq_cpu + offs); 309 - hwirq = *msg & IPROC_MSI_EQ_MASK; 305 + hwirq = readl(msg); 306 + hwirq = (hwirq >> 5) + (hwirq & 0x1f); 310 307 311 308 /* 312 309 * Since we have multiple hwirq mapped to a single MSI vector,
+1 -19
drivers/pci/host/pcie-iproc.c
··· 1097 1097 return ret; 1098 1098 } 1099 1099 1100 - static int pci_dma_range_parser_init(struct of_pci_range_parser *parser, 1101 - struct device_node *node) 1102 - { 1103 - const int na = 3, ns = 2; 1104 - int rlen; 1105 - 1106 - parser->node = node; 1107 - parser->pna = of_n_addr_cells(node); 1108 - parser->np = parser->pna + na + ns; 1109 - 1110 - parser->range = of_get_property(node, "dma-ranges", &rlen); 1111 - if (!parser->range) 1112 - return -ENOENT; 1113 - 1114 - parser->end = parser->range + rlen / sizeof(__be32); 1115 - return 0; 1116 - } 1117 - 1118 1100 static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie) 1119 1101 { 1120 1102 struct of_pci_range range; ··· 1104 1122 int ret; 1105 1123 1106 1124 /* Get the dma-ranges from DT */ 1107 - ret = pci_dma_range_parser_init(&parser, pcie->dev->of_node); 1125 + ret = of_pci_dma_range_parser_init(&parser, pcie->dev->of_node); 1108 1126 if (ret) 1109 1127 return ret; 1110 1128
+1 -19
drivers/pci/host/pcie-rcar.c
··· 1027 1027 return 0; 1028 1028 } 1029 1029 1030 - static int pci_dma_range_parser_init(struct of_pci_range_parser *parser, 1031 - struct device_node *node) 1032 - { 1033 - const int na = 3, ns = 2; 1034 - int rlen; 1035 - 1036 - parser->node = node; 1037 - parser->pna = of_n_addr_cells(node); 1038 - parser->np = parser->pna + na + ns; 1039 - 1040 - parser->range = of_get_property(node, "dma-ranges", &rlen); 1041 - if (!parser->range) 1042 - return -ENOENT; 1043 - 1044 - parser->end = parser->range + rlen / sizeof(__be32); 1045 - return 0; 1046 - } 1047 - 1048 1030 static int rcar_pcie_parse_map_dma_ranges(struct rcar_pcie *pcie, 1049 1031 struct device_node *np) 1050 1032 { ··· 1035 1053 int index = 0; 1036 1054 int err; 1037 1055 1038 - if (pci_dma_range_parser_init(&parser, np)) 1056 + if (of_pci_dma_range_parser_init(&parser, np)) 1039 1057 return -EINVAL; 1040 1058 1041 1059 /* Get the dma-ranges from DT */
+202 -3
drivers/pci/host/pcie-tango.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/irqchip/chained_irq.h> 3 + #include <linux/irqdomain.h> 2 4 #include <linux/pci-ecam.h> 3 5 #include <linux/delay.h> 4 - #include <linux/of.h> 6 + #include <linux/msi.h> 7 + #include <linux/of_address.h> 8 + 9 + #define MSI_MAX 256 5 10 6 11 #define SMP8759_MUX 0x48 7 12 #define SMP8759_TEST_OUT 0x74 13 + #define SMP8759_DOORBELL 0x7c 14 + #define SMP8759_STATUS 0x80 15 + #define SMP8759_ENABLE 0xa0 8 16 9 17 struct tango_pcie { 10 - void __iomem *base; 18 + DECLARE_BITMAP(used_msi, MSI_MAX); 19 + u64 msi_doorbell; 20 + spinlock_t used_msi_lock; 21 + void __iomem *base; 22 + struct irq_domain *dom; 23 + }; 24 + 25 + static void tango_msi_isr(struct irq_desc *desc) 26 + { 27 + struct irq_chip *chip = irq_desc_get_chip(desc); 28 + struct tango_pcie *pcie = irq_desc_get_handler_data(desc); 29 + unsigned long status, base, virq, idx, pos = 0; 30 + 31 + chained_irq_enter(chip, desc); 32 + spin_lock(&pcie->used_msi_lock); 33 + 34 + while ((pos = find_next_bit(pcie->used_msi, MSI_MAX, pos)) < MSI_MAX) { 35 + base = round_down(pos, 32); 36 + status = readl_relaxed(pcie->base + SMP8759_STATUS + base / 8); 37 + for_each_set_bit(idx, &status, 32) { 38 + virq = irq_find_mapping(pcie->dom, base + idx); 39 + generic_handle_irq(virq); 40 + } 41 + pos = base + 32; 42 + } 43 + 44 + spin_unlock(&pcie->used_msi_lock); 45 + chained_irq_exit(chip, desc); 46 + } 47 + 48 + static void tango_ack(struct irq_data *d) 49 + { 50 + struct tango_pcie *pcie = d->chip_data; 51 + u32 offset = (d->hwirq / 32) * 4; 52 + u32 bit = BIT(d->hwirq % 32); 53 + 54 + writel_relaxed(bit, pcie->base + SMP8759_STATUS + offset); 55 + } 56 + 57 + static void update_msi_enable(struct irq_data *d, bool unmask) 58 + { 59 + unsigned long flags; 60 + struct tango_pcie *pcie = d->chip_data; 61 + u32 offset = (d->hwirq / 32) * 4; 62 + u32 bit = BIT(d->hwirq % 32); 63 + u32 val; 64 + 65 + spin_lock_irqsave(&pcie->used_msi_lock, flags); 66 + val = 
readl_relaxed(pcie->base + SMP8759_ENABLE + offset); 67 + val = unmask ? val | bit : val & ~bit; 68 + writel_relaxed(val, pcie->base + SMP8759_ENABLE + offset); 69 + spin_unlock_irqrestore(&pcie->used_msi_lock, flags); 70 + } 71 + 72 + static void tango_mask(struct irq_data *d) 73 + { 74 + update_msi_enable(d, false); 75 + } 76 + 77 + static void tango_unmask(struct irq_data *d) 78 + { 79 + update_msi_enable(d, true); 80 + } 81 + 82 + static int tango_set_affinity(struct irq_data *d, const struct cpumask *mask, 83 + bool force) 84 + { 85 + return -EINVAL; 86 + } 87 + 88 + static void tango_compose_msi_msg(struct irq_data *d, struct msi_msg *msg) 89 + { 90 + struct tango_pcie *pcie = d->chip_data; 91 + msg->address_lo = lower_32_bits(pcie->msi_doorbell); 92 + msg->address_hi = upper_32_bits(pcie->msi_doorbell); 93 + msg->data = d->hwirq; 94 + } 95 + 96 + static struct irq_chip tango_chip = { 97 + .irq_ack = tango_ack, 98 + .irq_mask = tango_mask, 99 + .irq_unmask = tango_unmask, 100 + .irq_set_affinity = tango_set_affinity, 101 + .irq_compose_msi_msg = tango_compose_msi_msg, 102 + }; 103 + 104 + static void msi_ack(struct irq_data *d) 105 + { 106 + irq_chip_ack_parent(d); 107 + } 108 + 109 + static void msi_mask(struct irq_data *d) 110 + { 111 + pci_msi_mask_irq(d); 112 + irq_chip_mask_parent(d); 113 + } 114 + 115 + static void msi_unmask(struct irq_data *d) 116 + { 117 + pci_msi_unmask_irq(d); 118 + irq_chip_unmask_parent(d); 119 + } 120 + 121 + static struct irq_chip msi_chip = { 122 + .name = "MSI", 123 + .irq_ack = msi_ack, 124 + .irq_mask = msi_mask, 125 + .irq_unmask = msi_unmask, 126 + }; 127 + 128 + static struct msi_domain_info msi_dom_info = { 129 + .flags = MSI_FLAG_PCI_MSIX 130 + | MSI_FLAG_USE_DEF_DOM_OPS 131 + | MSI_FLAG_USE_DEF_CHIP_OPS, 132 + .chip = &msi_chip, 133 + }; 134 + 135 + static int tango_irq_domain_alloc(struct irq_domain *dom, unsigned int virq, 136 + unsigned int nr_irqs, void *args) 137 + { 138 + struct tango_pcie *pcie = 
dom->host_data; 139 + unsigned long flags; 140 + int pos; 141 + 142 + spin_lock_irqsave(&pcie->used_msi_lock, flags); 143 + pos = find_first_zero_bit(pcie->used_msi, MSI_MAX); 144 + if (pos >= MSI_MAX) { 145 + spin_unlock_irqrestore(&pcie->used_msi_lock, flags); 146 + return -ENOSPC; 147 + } 148 + __set_bit(pos, pcie->used_msi); 149 + spin_unlock_irqrestore(&pcie->used_msi_lock, flags); 150 + irq_domain_set_info(dom, virq, pos, &tango_chip, 151 + pcie, handle_edge_irq, NULL, NULL); 152 + 153 + return 0; 154 + } 155 + 156 + static void tango_irq_domain_free(struct irq_domain *dom, unsigned int virq, 157 + unsigned int nr_irqs) 158 + { 159 + unsigned long flags; 160 + struct irq_data *d = irq_domain_get_irq_data(dom, virq); 161 + struct tango_pcie *pcie = d->chip_data; 162 + 163 + spin_lock_irqsave(&pcie->used_msi_lock, flags); 164 + __clear_bit(d->hwirq, pcie->used_msi); 165 + spin_unlock_irqrestore(&pcie->used_msi_lock, flags); 166 + } 167 + 168 + static const struct irq_domain_ops dom_ops = { 169 + .alloc = tango_irq_domain_alloc, 170 + .free = tango_irq_domain_free, 11 171 }; 12 172 13 173 static int smp8759_config_read(struct pci_bus *bus, unsigned int devfn, ··· 237 77 struct device *dev = &pdev->dev; 238 78 struct tango_pcie *pcie; 239 79 struct resource *res; 240 - int ret; 80 + struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node); 81 + struct irq_domain *msi_dom, *irq_dom; 82 + struct of_pci_range_parser parser; 83 + struct of_pci_range range; 84 + int virq, offset; 241 85 242 86 dev_warn(dev, "simultaneous PCI config and MMIO accesses may cause data corruption\n"); 243 87 add_taint(TAINT_CRAP, LOCKDEP_STILL_OK); ··· 259 95 260 96 if (!tango_pcie_link_up(pcie)) 261 97 return -ENODEV; 98 + 99 + if (of_pci_dma_range_parser_init(&parser, dev->of_node) < 0) 100 + return -ENOENT; 101 + 102 + if (of_pci_range_parser_one(&parser, &range) == NULL) 103 + return -ENOENT; 104 + 105 + range.pci_addr += range.size; 106 + pcie->msi_doorbell = range.pci_addr + 
res->start + SMP8759_DOORBELL; 107 + 108 + for (offset = 0; offset < MSI_MAX / 8; offset += 4) 109 + writel_relaxed(0, pcie->base + SMP8759_ENABLE + offset); 110 + 111 + virq = platform_get_irq(pdev, 1); 112 + if (virq <= 0) { 113 + dev_err(dev, "Failed to map IRQ\n"); 114 + return -ENXIO; 115 + } 116 + 117 + irq_dom = irq_domain_create_linear(fwnode, MSI_MAX, &dom_ops, pcie); 118 + if (!irq_dom) { 119 + dev_err(dev, "Failed to create IRQ domain\n"); 120 + return -ENOMEM; 121 + } 122 + 123 + msi_dom = pci_msi_create_irq_domain(fwnode, &msi_dom_info, irq_dom); 124 + if (!msi_dom) { 125 + dev_err(dev, "Failed to create MSI domain\n"); 126 + irq_domain_remove(irq_dom); 127 + return -ENOMEM; 128 + } 129 + 130 + pcie->dom = irq_dom; 131 + spin_lock_init(&pcie->used_msi_lock); 132 + irq_set_chained_handler_and_data(virq, tango_msi_isr, pcie); 262 133 263 134 return pci_host_common_probe(pdev, &smp8759_ecam_ops); 264 135 }
+3 -3
drivers/pci/host/pcie-xilinx.c
··· 129 129 writel(val, port->reg_base + reg); 130 130 } 131 131 132 - static inline bool xilinx_pcie_link_is_up(struct xilinx_pcie_port *port) 132 + static inline bool xilinx_pcie_link_up(struct xilinx_pcie_port *port) 133 133 { 134 134 return (pcie_read(port, XILINX_PCIE_REG_PSCR) & 135 135 XILINX_PCIE_REG_PSCR_LNKUP) ? 1 : 0; ··· 165 165 166 166 /* Check if link is up when trying to access downstream ports */ 167 167 if (bus->number != port->root_busno) 168 - if (!xilinx_pcie_link_is_up(port)) 168 + if (!xilinx_pcie_link_up(port)) 169 169 return false; 170 170 171 171 /* Only one device down on each root port */ ··· 541 541 { 542 542 struct device *dev = port->dev; 543 543 544 - if (xilinx_pcie_link_is_up(port)) 544 + if (xilinx_pcie_link_up(port)) 545 545 dev_info(dev, "PCIe Link is UP\n"); 546 546 else 547 547 dev_info(dev, "PCIe Link is DOWN\n");
-29
drivers/pci/hotplug-pci.c
··· 1 - /* Core PCI functionality used only by PCI hotplug */ 2 - 3 - #include <linux/pci.h> 4 - #include <linux/export.h> 5 - #include "pci.h" 6 - 7 - int pci_hp_add_bridge(struct pci_dev *dev) 8 - { 9 - struct pci_bus *parent = dev->bus; 10 - int pass, busnr, start = parent->busn_res.start; 11 - int end = parent->busn_res.end; 12 - 13 - for (busnr = start; busnr <= end; busnr++) { 14 - if (!pci_find_bus(pci_domain_nr(parent), busnr)) 15 - break; 16 - } 17 - if (busnr-- > end) { 18 - printk(KERN_ERR "No bus number available for hot-added bridge %s\n", 19 - pci_name(dev)); 20 - return -1; 21 - } 22 - for (pass = 0; pass < 2; pass++) 23 - busnr = pci_scan_bridge(parent, dev, busnr, pass); 24 - if (!dev->subordinate) 25 - return -1; 26 - 27 - return 0; 28 - } 29 - EXPORT_SYMBOL_GPL(pci_hp_add_bridge);
+6 -9
drivers/pci/hotplug/acpiphp_glue.c
··· 462 462 acpiphp_rescan_slot(slot); 463 463 max = acpiphp_max_busnr(bus); 464 464 for (pass = 0; pass < 2; pass++) { 465 - list_for_each_entry(dev, &bus->devices, bus_list) { 465 + for_each_pci_bridge(dev, bus) { 466 466 if (PCI_SLOT(dev->devfn) != slot->device) 467 467 continue; 468 468 469 - if (pci_is_bridge(dev)) { 470 - max = pci_scan_bridge(bus, dev, max, pass); 471 - if (pass && dev->subordinate) { 472 - check_hotplug_bridge(slot, dev); 473 - pcibios_resource_survey_bus(dev->subordinate); 474 - __pci_bus_size_bridges(dev->subordinate, 475 - &add_list); 476 - } 469 + max = pci_scan_bridge(bus, dev, max, pass); 470 + if (pass && dev->subordinate) { 471 + check_hotplug_bridge(slot, dev); 472 + pcibios_resource_survey_bus(dev->subordinate); 473 + __pci_bus_size_bridges(dev->subordinate, &add_list); 477 474 } 478 475 } 479 476 }
+2 -5
drivers/pci/hotplug/cpci_hotplug_pci.c
··· 286 286 } 287 287 parent = slot->dev->bus; 288 288 289 - list_for_each_entry(dev, &parent->devices, bus_list) { 290 - if (PCI_SLOT(dev->devfn) != PCI_SLOT(slot->devfn)) 291 - continue; 292 - if (pci_is_bridge(dev)) 289 + for_each_pci_bridge(dev, parent) { 290 + if (PCI_SLOT(dev->devfn) == PCI_SLOT(slot->devfn)) 293 291 pci_hp_add_bridge(dev); 294 292 } 295 - 296 293 297 294 pci_assign_unassigned_bridge_resources(parent->self); 298 295
+1 -1
drivers/pci/hotplug/cpqphp.h
··· 410 410 void cpqhp_remove_debugfs_files(struct controller *ctrl); 411 411 412 412 /* controller functions */ 413 - void cpqhp_pushbutton_thread(unsigned long event_pointer); 413 + void cpqhp_pushbutton_thread(struct timer_list *t); 414 414 irqreturn_t cpqhp_ctrl_intr(int IRQ, void *data); 415 415 int cpqhp_find_available_resources(struct controller *ctrl, 416 416 void __iomem *rom_start);
+1 -2
drivers/pci/hotplug/cpqphp_core.c
··· 661 661 662 662 slot->p_sm_slot = slot_entry; 663 663 664 - init_timer(&slot->task_event); 664 + timer_setup(&slot->task_event, cpqhp_pushbutton_thread, 0); 665 665 slot->task_event.expires = jiffies + 5 * HZ; 666 - slot->task_event.function = cpqhp_pushbutton_thread; 667 666 668 667 /*FIXME: these capabilities aren't used but if they are 669 668 * they need to be correctly implemented
+10 -9
drivers/pci/hotplug/cpqphp_ctrl.c
··· 47 47 48 48 49 49 static struct task_struct *cpqhp_event_thread; 50 - static unsigned long pushbutton_pending; /* = 0 */ 50 + static struct timer_list *pushbutton_pending; /* = NULL */ 51 51 52 52 /* delay is in jiffies to wait for */ 53 53 static void long_delay(int delay) ··· 1732 1732 return 0; 1733 1733 } 1734 1734 1735 - static void pushbutton_helper_thread(unsigned long data) 1735 + static void pushbutton_helper_thread(struct timer_list *t) 1736 1736 { 1737 - pushbutton_pending = data; 1737 + pushbutton_pending = t; 1738 + 1738 1739 wake_up_process(cpqhp_event_thread); 1739 1740 } 1740 1741 ··· 1884 1883 wait_for_ctrl_irq(ctrl); 1885 1884 1886 1885 mutex_unlock(&ctrl->crit_sect); 1887 - init_timer(&p_slot->task_event); 1886 + timer_setup(&p_slot->task_event, 1887 + pushbutton_helper_thread, 1888 + 0); 1888 1889 p_slot->hp_slot = hp_slot; 1889 1890 p_slot->ctrl = ctrl; 1890 1891 /* p_slot->physical_slot = physical_slot; */ 1891 1892 p_slot->task_event.expires = jiffies + 5 * HZ; /* 5 second delay */ 1892 - p_slot->task_event.function = pushbutton_helper_thread; 1893 - p_slot->task_event.data = (u32) p_slot; 1894 1893 1895 1894 dbg("add_timer p_slot = %p\n", p_slot); 1896 1895 add_timer(&p_slot->task_event); ··· 1921 1920 * Scheduled procedure to handle blocking stuff for the pushbuttons. 1922 1921 * Handles all pending events and exits. 1923 1922 */ 1924 - void cpqhp_pushbutton_thread(unsigned long slot) 1923 + void cpqhp_pushbutton_thread(struct timer_list *t) 1925 1924 { 1926 1925 u8 hp_slot; 1927 1926 u8 device; 1928 1927 struct pci_func *func; 1929 - struct slot *p_slot = (struct slot *) slot; 1928 + struct slot *p_slot = from_timer(p_slot, t, task_event); 1930 1929 struct controller *ctrl = (struct controller *) p_slot->ctrl; 1931 1930 1932 - pushbutton_pending = 0; 1931 + pushbutton_pending = NULL; 1933 1932 hp_slot = p_slot->hp_slot; 1934 1933 1935 1934 device = p_slot->device;
+11 -8
drivers/pci/hotplug/ibmphp_pci.c
··· 1267 1267 size = size & 0xFFFFFFFC; 1268 1268 size = ~size + 1; 1269 1269 end_address = start_address + size - 1; 1270 - if (ibmphp_find_resource(bus, start_address, &io, IO) < 0) { 1271 - err("cannot find corresponding IO resource to remove\n"); 1272 - return -EIO; 1273 - } 1270 + if (ibmphp_find_resource(bus, start_address, &io, IO)) 1271 + goto report_search_failure; 1272 + 1274 1273 debug("io->start = %x\n", io->start); 1275 1274 temp_end = io->end; 1276 1275 start_address = io->end + 1; 1277 1276 ibmphp_remove_resource(io); 1278 1277 /* This is needed b/c of the old I/O restrictions in the BIOS */ 1279 1278 while (temp_end < end_address) { 1280 - if (ibmphp_find_resource(bus, start_address, &io, IO) < 0) { 1281 - err("cannot find corresponding IO resource to remove\n"); 1282 - return -EIO; 1283 - } 1279 + if (ibmphp_find_resource(bus, start_address, 1280 + &io, IO)) 1281 + goto report_search_failure; 1282 + 1284 1283 debug("io->start = %x\n", io->start); 1285 1284 temp_end = io->end; 1286 1285 start_address = io->end + 1; ··· 1326 1327 } /* end of for */ 1327 1328 1328 1329 return 0; 1330 + 1331 + report_search_failure: 1332 + err("cannot find corresponding IO resource to remove\n"); 1333 + return -EIO; 1329 1334 } 1330 1335 1331 1336 static int unconfigure_boot_bridge(u8 busno, u8 device, u8 function)
+4 -3
drivers/pci/hotplug/pciehp_ctrl.c
··· 113 113 114 114 retval = pciehp_configure_device(p_slot); 115 115 if (retval) { 116 - ctrl_err(ctrl, "Cannot add device at %04x:%02x:00\n", 117 - pci_domain_nr(parent), parent->number); 118 - if (retval != -EEXIST) 116 + if (retval != -EEXIST) { 117 + ctrl_err(ctrl, "Cannot add device at %04x:%02x:00\n", 118 + pci_domain_nr(parent), parent->number); 119 119 goto err_exit; 120 + } 120 121 } 121 122 122 123 pciehp_green_led_on(p_slot);
+13 -12
drivers/pci/hotplug/pciehp_hpc.c
···
 static void start_int_poll_timer(struct controller *ctrl, int sec);

 /* This is the interrupt polling timeout function. */
-static void int_poll_timeout(unsigned long data)
+static void int_poll_timeout(struct timer_list *t)
 {
-	struct controller *ctrl = (struct controller *)data;
+	struct controller *ctrl = from_timer(ctrl, t, poll_timer);

 	/* Poll for interrupt events. regs == NULL => polling */
 	pcie_isr(0, ctrl);

-	init_timer(&ctrl->poll_timer);
 	if (!pciehp_poll_time)
 		pciehp_poll_time = 2; /* default polling interval is 2 sec */
···
 	if ((sec <= 0) || (sec > 60))
 		sec = 2;

-	ctrl->poll_timer.function = &int_poll_timeout;
-	ctrl->poll_timer.data = (unsigned long)ctrl;
 	ctrl->poll_timer.expires = jiffies + sec * HZ;
 	add_timer(&ctrl->poll_timer);
 }
···
 	/* Install interrupt polling timer. Start with 10 sec delay */
 	if (pciehp_poll_mode) {
-		init_timer(&ctrl->poll_timer);
+		timer_setup(&ctrl->poll_timer, int_poll_timeout, 0);
 		start_int_poll_timer(ctrl, 10);
 		return 0;
 	}
···
 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask);
 	if (pciehp_poll_mode)
-		int_poll_timeout(ctrl->poll_timer.data);
-
+		int_poll_timeout(&ctrl->poll_timer);
 	return 0;
 }
···
 	if (!slot)
 		return -ENOMEM;

-	slot->wq = alloc_workqueue("pciehp-%u", 0, 0, PSN(ctrl));
+	slot->wq = alloc_ordered_workqueue("pciehp-%u", 0, PSN(ctrl));
 	if (!slot->wq)
 		goto abort;
···
 	if (link_cap & PCI_EXP_LNKCAP_DLLLARC)
 		ctrl->link_active_reporting = 1;

-	/* Clear all remaining event bits in Slot Status register */
+	/*
+	 * Clear all remaining event bits in Slot Status register except
+	 * Presence Detect Changed. We want to make sure possible
+	 * hotplug event is triggered when the interrupt is unmasked so
+	 * that we don't lose that event.
+	 */
 	pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
 				   PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD |
-				   PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_PDC |
-				   PCI_EXP_SLTSTA_CC | PCI_EXP_SLTSTA_DLLSC);
+				   PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_CC |
+				   PCI_EXP_SLTSTA_DLLSC);

 	ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c LLActRep%c\n",
 		(slot_cap & PCI_EXP_SLTCAP_PSN) >> 19,
drivers/pci/hotplug/pciehp_pci.c (+7 -4)
···
 	dev = pci_get_slot(parent, PCI_DEVFN(0, 0));
 	if (dev) {
-		ctrl_err(ctrl, "Device %s already exists at %04x:%02x:00, cannot hot-add\n",
+		/*
+		 * The device is already there. Either configured by the
+		 * boot firmware or a previous hotplug event.
+		 */
+		ctrl_dbg(ctrl, "Device %s already exists at %04x:%02x:00, skipping hot-add\n",
 			 pci_name(dev), pci_domain_nr(parent), parent->number);
 		pci_dev_put(dev);
 		ret = -EEXIST;
···
 		goto out;
 	}

-	list_for_each_entry(dev, &parent->devices, bus_list)
-		if (pci_is_bridge(dev))
-			pci_hp_add_bridge(dev);
+	for_each_pci_bridge(dev, parent)
+		pci_hp_add_bridge(dev);

 	pci_assign_unassigned_bridge_resources(bridge);
 	pcie_bus_configure_settings(parent);
drivers/pci/hotplug/shpchp_hpc.c (+3 -6)
···
 /*
  * This is the interrupt polling timeout function.
  */
-static void int_poll_timeout(unsigned long data)
+static void int_poll_timeout(struct timer_list *t)
 {
-	struct controller *ctrl = (struct controller *)data;
+	struct controller *ctrl = from_timer(ctrl, t, poll_timer);

 	/* Poll for interrupt events. regs == NULL => polling */
 	shpc_isr(0, ctrl);

-	init_timer(&ctrl->poll_timer);
 	if (!shpchp_poll_time)
 		shpchp_poll_time = 2; /* default polling interval is 2 sec */
···
 	if ((sec <= 0) || (sec > 60))
 		sec = 2;

-	ctrl->poll_timer.function = &int_poll_timeout;
-	ctrl->poll_timer.data = (unsigned long)ctrl;
 	ctrl->poll_timer.expires = jiffies + sec * HZ;
 	add_timer(&ctrl->poll_timer);
 }
···
 	if (shpchp_poll_mode) {
 		/* Install interrupt polling timer. Start with 10 sec delay */
-		init_timer(&ctrl->poll_timer);
+		timer_setup(&ctrl->poll_timer, int_poll_timeout, 0);
 		start_int_poll_timer(ctrl, 10);
 	} else {
 		/* Installs the interrupt handler */
drivers/pci/hotplug/shpchp_pci.c (+2 -4)
···
 		goto out;
 	}

-	list_for_each_entry(dev, &parent->devices, bus_list) {
-		if (PCI_SLOT(dev->devfn) != p_slot->device)
-			continue;
-		if (pci_is_bridge(dev))
+	for_each_pci_bridge(dev, parent) {
+		if (PCI_SLOT(dev->devfn) == p_slot->device)
 			pci_hp_add_bridge(dev);
 	}
drivers/pci/iov.c (+18 -16)
···
 	return dev->sriov->barsz[resno - PCI_IOV_RESOURCES];
 }

-int pci_iov_add_virtfn(struct pci_dev *dev, int id, int reset)
+int pci_iov_add_virtfn(struct pci_dev *dev, int id)
 {
 	int i;
 	int rc = -ENOMEM;
···
 	virtfn->devfn = pci_iov_virtfn_devfn(dev, id);
 	virtfn->vendor = dev->vendor;
-	pci_read_config_word(dev, iov->pos + PCI_SRIOV_VF_DID, &virtfn->device);
+	virtfn->device = iov->vf_device;
 	rc = pci_setup_device(virtfn);
 	if (rc)
 		goto failed0;
···
 		BUG_ON(rc);
 	}

-	if (reset)
-		__pci_reset_function(virtfn);
-
 	pci_device_add(virtfn, virtfn->bus);

-	pci_bus_add_device(virtfn);
 	sprintf(buf, "virtfn%u", id);
 	rc = sysfs_create_link(&dev->dev.kobj, &virtfn->dev.kobj, buf);
 	if (rc)
···
 		goto failed2;

 	kobject_uevent(&virtfn->dev.kobj, KOBJ_CHANGE);
+
+	pci_bus_add_device(virtfn);

 	return 0;
···
 	return rc;
 }

-void pci_iov_remove_virtfn(struct pci_dev *dev, int id, int reset)
+void pci_iov_remove_virtfn(struct pci_dev *dev, int id)
 {
 	char buf[VIRTFN_ID_LEN];
 	struct pci_dev *virtfn;
···
 			     pci_iov_virtfn_devfn(dev, id));
 	if (!virtfn)
 		return;
-
-	if (reset) {
-		device_release_driver(&virtfn->dev);
-		__pci_reset_function(virtfn);
-	}

 	sprintf(buf, "virtfn%u", id);
 	sysfs_remove_link(&dev->dev.kobj, buf);
···
 	pci_cfg_access_unlock(dev);

 	for (i = 0; i < initial; i++) {
-		rc = pci_iov_add_virtfn(dev, i, 0);
+		rc = pci_iov_add_virtfn(dev, i);
 		if (rc)
 			goto failed;
 	}
···
 failed:
 	while (i--)
-		pci_iov_remove_virtfn(dev, i, 0);
+		pci_iov_remove_virtfn(dev, i);

 err_pcibios:
 	iov->ctrl &= ~(PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE);
···
 		return;

 	for (i = 0; i < iov->num_VFs; i++)
-		pci_iov_remove_virtfn(dev, i, 0);
+		pci_iov_remove_virtfn(dev, i);

 	iov->ctrl &= ~(PCI_SRIOV_CTRL_VFE | PCI_SRIOV_CTRL_MSE);
 	pci_cfg_access_lock(dev);
···
 	iov->nres = nres;
 	iov->ctrl = ctrl;
 	iov->total_VFs = total;
+	pci_read_config_word(dev, pos + PCI_SRIOV_VF_DID, &iov->vf_device);
 	iov->pgsz = pgsz;
 	iov->self = dev;
 	iov->drivers_autoprobe = true;
···
 	pci_read_config_word(dev, iov->pos + PCI_SRIOV_CTRL, &ctrl);
 	if (ctrl & PCI_SRIOV_CTRL_VFE)
 		return;
+
+	/*
+	 * Restore PCI_SRIOV_CTRL_ARI before pci_iov_set_numvfs() because
+	 * it reads offset & stride, which depend on PCI_SRIOV_CTRL_ARI.
+	 */
+	ctrl &= ~PCI_SRIOV_CTRL_ARI;
+	ctrl |= iov->ctrl & PCI_SRIOV_CTRL_ARI;
+	pci_write_config_word(dev, iov->pos + PCI_SRIOV_CTRL, ctrl);

 	for (i = PCI_IOV_RESOURCES; i <= PCI_IOV_RESOURCE_END; i++)
 		pci_update_resource(dev, i);
···
 	 * determine the device ID for the VFs, the vendor ID will be the
 	 * same as the PF so there is no need to check for that one
 	 */
-	pci_read_config_word(dev, dev->sriov->pos + PCI_SRIOV_VF_DID, &dev_id);
+	dev_id = dev->sriov->vf_device;

 	/* loop through all the VFs to see if we own any that are assigned */
 	vfdev = pci_get_device(dev->vendor, dev_id, NULL);
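The ARI-restore fix above matters because the SR-IOV "First VF Offset" and "VF Stride" fields change with the ARI Capable Hierarchy bit, and those fields are all the hardware gives us to locate VFs: a VF's Routing ID is computed arithmetically from the PF's. A simplified standalone model of that computation (the struct and function names here are illustrative, not the kernel's `pci_iov_virtfn_bus()`/`pci_iov_virtfn_devfn()` signatures, though the arithmetic follows them):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative model of SR-IOV VF addressing: VF "id" lives at the
 * PF's Routing ID plus First VF Offset plus id * VF Stride.  The high
 * bits of the Routing ID select the bus, the low 8 bits the devfn, so
 * VFs can spill onto buses after the PF's.
 */
struct sriov_params {
	uint8_t  pf_bus;	/* bus number of the PF */
	uint8_t  pf_devfn;	/* devfn of the PF */
	uint16_t offset;	/* First VF Offset from the SR-IOV capability */
	uint16_t stride;	/* VF Stride from the SR-IOV capability */
};

static uint32_t vf_routing_id(const struct sriov_params *p, int id)
{
	return ((uint32_t)p->pf_bus << 8) + p->pf_devfn +
	       p->offset + (uint32_t)p->stride * id;
}

uint8_t vf_bus(const struct sriov_params *p, int id)
{
	return vf_routing_id(p, id) >> 8;	/* bus the VF appears on */
}

uint8_t vf_devfn(const struct sriov_params *p, int id)
{
	return vf_routing_id(p, id) & 0xff;	/* devfn within that bus */
}
```

With ARI enabled, offset/stride of 1 packs VFs densely after the PF; restoring the ARI bit before reading offset and stride is exactly what keeps this arithmetic consistent across a reset.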
drivers/pci/pci-acpi.c (+1 -1)
···
 	union acpi_object *obj;
 	struct pci_host_bridge *bridge;

-	if (acpi_pci_disabled || !bus->bridge)
+	if (acpi_pci_disabled || !bus->bridge || !ACPI_HANDLE(bus->bridge))
 		return;

 	acpi_pci_slot_enumerate(bus);
drivers/pci/pci-sysfs.c (+34 -1)
···
 	return count;
 }

+static ssize_t sriov_offset_show(struct device *dev,
+				 struct device_attribute *attr,
+				 char *buf)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	return sprintf(buf, "%u\n", pdev->sriov->offset);
+}
+
+static ssize_t sriov_stride_show(struct device *dev,
+				 struct device_attribute *attr,
+				 char *buf)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	return sprintf(buf, "%u\n", pdev->sriov->stride);
+}
+
+static ssize_t sriov_vf_device_show(struct device *dev,
+				    struct device_attribute *attr,
+				    char *buf)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+
+	return sprintf(buf, "%x\n", pdev->sriov->vf_device);
+}
+
 static ssize_t sriov_drivers_autoprobe_show(struct device *dev,
 					    struct device_attribute *attr,
 					    char *buf)
···
 static struct device_attribute sriov_numvfs_attr =
 		__ATTR(sriov_numvfs, (S_IRUGO|S_IWUSR|S_IWGRP),
 		       sriov_numvfs_show, sriov_numvfs_store);
+static struct device_attribute sriov_offset_attr = __ATTR_RO(sriov_offset);
+static struct device_attribute sriov_stride_attr = __ATTR_RO(sriov_stride);
+static struct device_attribute sriov_vf_device_attr = __ATTR_RO(sriov_vf_device);
 static struct device_attribute sriov_drivers_autoprobe_attr =
 		__ATTR(sriov_drivers_autoprobe, (S_IRUGO|S_IWUSR|S_IWGRP),
 		       sriov_drivers_autoprobe_show, sriov_drivers_autoprobe_store);
···
 static struct attribute *sriov_dev_attrs[] = {
 	&sriov_totalvfs_attr.attr,
 	&sriov_numvfs_attr.attr,
+	&sriov_offset_attr.attr,
+	&sriov_stride_attr.attr,
+	&sriov_vf_device_attr.attr,
 	&sriov_drivers_autoprobe_attr.attr,
 	NULL,
 };
···
 	NULL,
 };

-struct device_type pci_dev_type = {
+const struct device_type pci_dev_type = {
 	.groups = pci_dev_attr_groups,
 };
drivers/pci/pci.c (+121 -33)
···
 }

 /**
+ * pci_rebar_find_pos - find position of resize ctrl reg for BAR
+ * @pdev: PCI device
+ * @bar: BAR to find
+ *
+ * Helper to find the position of the ctrl register for a BAR.
+ * Returns -ENOTSUPP if resizable BARs are not supported at all.
+ * Returns -ENOENT if no ctrl register for the BAR could be found.
+ */
+static int pci_rebar_find_pos(struct pci_dev *pdev, int bar)
+{
+	unsigned int pos, nbars, i;
+	u32 ctrl;
+
+	pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_REBAR);
+	if (!pos)
+		return -ENOTSUPP;
+
+	pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
+	nbars = (ctrl & PCI_REBAR_CTRL_NBAR_MASK) >>
+		PCI_REBAR_CTRL_NBAR_SHIFT;
+
+	for (i = 0; i < nbars; i++, pos += 8) {
+		int bar_idx;
+
+		pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
+		bar_idx = ctrl & PCI_REBAR_CTRL_BAR_IDX;
+		if (bar_idx == bar)
+			return pos;
+	}
+
+	return -ENOENT;
+}
+
+/**
+ * pci_rebar_get_possible_sizes - get possible sizes for BAR
+ * @pdev: PCI device
+ * @bar: BAR to query
+ *
+ * Get the possible sizes of a resizable BAR as bitmask defined in the spec
+ * (bit 0=1MB, bit 19=512GB). Returns 0 if BAR isn't resizable.
+ */
+u32 pci_rebar_get_possible_sizes(struct pci_dev *pdev, int bar)
+{
+	int pos;
+	u32 cap;
+
+	pos = pci_rebar_find_pos(pdev, bar);
+	if (pos < 0)
+		return 0;
+
+	pci_read_config_dword(pdev, pos + PCI_REBAR_CAP, &cap);
+	return (cap & PCI_REBAR_CAP_SIZES) >> 4;
+}
+
+/**
+ * pci_rebar_get_current_size - get the current size of a BAR
+ * @pdev: PCI device
+ * @bar: BAR to set size to
+ *
+ * Read the size of a BAR from the resizable BAR config.
+ * Returns size if found or negative error code.
+ */
+int pci_rebar_get_current_size(struct pci_dev *pdev, int bar)
+{
+	int pos;
+	u32 ctrl;
+
+	pos = pci_rebar_find_pos(pdev, bar);
+	if (pos < 0)
+		return pos;
+
+	pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
+	return (ctrl & PCI_REBAR_CTRL_BAR_SIZE) >> 8;
+}
+
+/**
+ * pci_rebar_set_size - set a new size for a BAR
+ * @pdev: PCI device
+ * @bar: BAR to set size to
+ * @size: new size as defined in the spec (0=1MB, 19=512GB)
+ *
+ * Set the new size of a BAR as defined in the spec.
+ * Returns zero if resizing was successful, error code otherwise.
+ */
+int pci_rebar_set_size(struct pci_dev *pdev, int bar, int size)
+{
+	int pos;
+	u32 ctrl;
+
+	pos = pci_rebar_find_pos(pdev, bar);
+	if (pos < 0)
+		return pos;
+
+	pci_read_config_dword(pdev, pos + PCI_REBAR_CTRL, &ctrl);
+	ctrl &= ~PCI_REBAR_CTRL_BAR_SIZE;
+	ctrl |= size << 8;
+	pci_write_config_dword(pdev, pos + PCI_REBAR_CTRL, ctrl);
+	return 0;
+}
+
+/**
  * pci_swizzle_interrupt_pin - swizzle INTx for device behind bridge
  * @dev: the PCI device
  * @pin: the INTx pin (1=INTA, 2=INTB, 3=INTC, 4=INTD)
···
  * All operations are managed and will be undone on driver detach.
  *
  * Returns a pointer to the remapped memory or an ERR_PTR() encoded error code
- * on failure. Usage example:
+ * on failure. Usage example::
  *
  *	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
  *	base = devm_pci_remap_cfg_resource(&pdev->dev, res);
···
 }

 /**
- * __pci_reset_function - reset a PCI device function
- * @dev: PCI device to reset
- *
- * Some devices allow an individual function to be reset without affecting
- * other functions in the same device. The PCI device must be responsive
- * to PCI config space in order to use this function.
- *
- * The device function is presumed to be unused when this function is called.
- * Resetting the device will make the contents of PCI configuration space
- * random, so any caller of this must be prepared to reinitialise the
- * device including MSI, bus mastering, BARs, decoding IO and memory spaces,
- * etc.
- *
- * Returns 0 if the device function was successfully reset or negative if the
- * device doesn't support resetting a single function.
- */
-int __pci_reset_function(struct pci_dev *dev)
-{
-	int ret;
-
-	pci_dev_lock(dev);
-	ret = __pci_reset_function_locked(dev);
-	pci_dev_unlock(dev);
-
-	return ret;
-}
-EXPORT_SYMBOL_GPL(__pci_reset_function);
-
-/**
  * __pci_reset_function_locked - reset a PCI device function while holding
  * the @dev mutex lock.
  * @dev: PCI device to reset
···
 	might_sleep();

+	/*
+	 * A reset method returns -ENOTTY if it doesn't support this device
+	 * and we should try the next method.
+	 *
+	 * If it returns 0 (success), we're finished.  If it returns any
+	 * other error, we're also finished: this indicates that further
+	 * reset mechanisms might be broken on the device.
+	 */
 	rc = pci_dev_specific_reset(dev, 0);
 	if (rc != -ENOTTY)
 		return rc;
···
  *
  * This function does not just reset the PCI portion of a device, but
  * clears all the state associated with the device. This function differs
- * from __pci_reset_function in that it saves and restores device state
- * over the reset.
+ * from __pci_reset_function_locked() in that it saves and restores device state
+ * over the reset and takes the PCI device lock.
  *
  * Returns 0 if the device function was successfully reset or negative if the
  * device doesn't support resetting a single function.
···
  *
  * This function does not just reset the PCI portion of a device, but
  * clears all the state associated with the device. This function differs
- * from __pci_reset_function() in that it saves and restores device state
+ * from __pci_reset_function_locked() in that it saves and restores device state
  * over the reset. It also differs from pci_reset_function() in that it
  * requires the PCI device lock to be held.
···
 static bool pci_bus_resetable(struct pci_bus *bus)
 {
 	struct pci_dev *dev;
+
+	if (bus->self && (bus->self->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET))
+		return false;

 	list_for_each_entry(dev, &bus->devices, bus_list) {
 		if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET ||
···
 static bool pci_slot_resetable(struct pci_slot *slot)
 {
 	struct pci_dev *dev;
+
+	if (slot->bus->self &&
+	    (slot->bus->self->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET))
+		return false;

 	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
 		if (!dev->slot || dev->slot != slot)
drivers/pci/pci.h (+10 -1)
···
 }
 extern const struct attribute_group *pci_dev_groups[];
 extern const struct attribute_group *pcibus_groups[];
-extern struct device_type pci_dev_type;
+extern const struct device_type pci_dev_type;
 extern const struct attribute_group *pci_bus_groups[];

···
 	u16 num_VFs;		/* number of VFs available */
 	u16 offset;		/* first VF Routing ID offset */
 	u16 stride;		/* following VF stride */
+	u16 vf_device;		/* VF device ID */
 	u32 pgsz;		/* page size for BAR alignment */
 	u8 link;		/* Function Dependency Link */
 	u8 max_VF_buses;	/* max buses consumed by VFs */
···
 int acpi_get_rc_resources(struct device *dev, const char *hid, u16 segment,
 			  struct resource *res);
 #endif

+u32 pci_rebar_get_possible_sizes(struct pci_dev *pdev, int bar);
+int pci_rebar_get_current_size(struct pci_dev *pdev, int bar);
+int pci_rebar_set_size(struct pci_dev *pdev, int bar, int size);
+static inline u64 pci_rebar_size_to_bytes(int size)
+{
+	return 1ULL << (size + 20);
+}

 #endif /* DRIVERS_PCI_H */
drivers/pci/pcie/aer/aerdrv_core.c (+8 -1)
···
 		 * If the error is reported by an end point, we think this
 		 * error is related to the upstream link of the end point.
 		 */
-		pci_walk_bus(dev->bus, cb, &result_data);
+		if (state == pci_channel_io_normal)
+			/*
+			 * the error is non fatal so the bus is ok, just invoke
+			 * the callback for the function that logged the error.
+			 */
+			cb(dev, &result_data);
+		else
+			pci_walk_bus(dev->bus, cb, &result_data);
 	}

 	return result_data.result;
drivers/pci/pcie/aspm.c (+27 -17)
···
 	if (!(link->aspm_support & ASPM_STATE_L1_2_MASK))
 		return;

-	/* Choose the greater of the two T_cmn_mode_rstr_time */
-	val1 = (upreg->l1ss_cap >> 8) & 0xFF;
-	val2 = (upreg->l1ss_cap >> 8) & 0xFF;
+	/* Choose the greater of the two Port Common_Mode_Restore_Times */
+	val1 = (upreg->l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8;
+	val2 = (dwreg->l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8;
 	if (val1 > val2)
 		link->l1ss.ctl1 |= val1 << 8;
 	else
 		link->l1ss.ctl1 |= val2 << 8;
+
 	/*
 	 * We currently use LTR L1.2 threshold to be fixed constant picked from
 	 * Intel's coreboot.
 	 */
 	link->l1ss.ctl1 |= LTR_L1_2_THRESHOLD_BITS;

-	/* Choose the greater of the two T_pwr_on */
-	val1 = (upreg->l1ss_cap >> 19) & 0x1F;
-	scale1 = (upreg->l1ss_cap >> 16) & 0x03;
-	val2 = (dwreg->l1ss_cap >> 19) & 0x1F;
-	scale2 = (dwreg->l1ss_cap >> 16) & 0x03;
+	/* Choose the greater of the two Port T_POWER_ON times */
+	val1 = (upreg->l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19;
+	scale1 = (upreg->l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16;
+	val2 = (dwreg->l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19;
+	scale2 = (dwreg->l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16;

 	if (calc_l1ss_pwron(link->pdev, scale1, val1) >
 	    calc_l1ss_pwron(link->downstream, scale2, val2))
···
 	if (enable_req & ASPM_STATE_L1_2_MASK) {

-		/* Program T_pwr_on in both ports */
+		/* Program T_POWER_ON times in both ports */
 		pci_write_config_dword(parent, up_cap_ptr + PCI_L1SS_CTL2,
 				       link->l1ss.ctl2);
 		pci_write_config_dword(child, dw_cap_ptr + PCI_L1SS_CTL2,
 				       link->l1ss.ctl2);

-		/* Program T_cmn_mode in parent */
+		/* Program Common_Mode_Restore_Time in upstream device */
 		pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1,
-					0xFF00, link->l1ss.ctl1);
+					PCI_L1SS_CTL1_CM_RESTORE_TIME,
+					link->l1ss.ctl1);

-		/* Program LTR L1.2 threshold in both ports */
-		pci_clear_and_set_dword(parent, dw_cap_ptr + PCI_L1SS_CTL1,
-					0xE3FF0000, link->l1ss.ctl1);
+		/* Program LTR_L1.2_THRESHOLD time in both ports */
+		pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1,
+					PCI_L1SS_CTL1_LTR_L12_TH_VALUE |
+					PCI_L1SS_CTL1_LTR_L12_TH_SCALE,
+					link->l1ss.ctl1);
 		pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1,
-					0xE3FF0000, link->l1ss.ctl1);
+					PCI_L1SS_CTL1_LTR_L12_TH_VALUE |
+					PCI_L1SS_CTL1_LTR_L12_TH_SCALE,
+					link->l1ss.ctl1);
 	}

 	val = 0;
···
 	/*
 	 * Root Ports and PCI/PCI-X to PCIe Bridges are roots of PCIe
-	 * hierarchies.
+	 * hierarchies.  Note that some PCIe host implementations omit
+	 * the root ports entirely, in which case a downstream port on
+	 * a switch may become the root of the link state chain for all
+	 * its subordinate endpoints.
 	 */
 	if (pci_pcie_type(pdev) == PCI_EXP_TYPE_ROOT_PORT ||
-	    pci_pcie_type(pdev) == PCI_EXP_TYPE_PCIE_BRIDGE) {
+	    pci_pcie_type(pdev) == PCI_EXP_TYPE_PCIE_BRIDGE ||
+	    !pdev->bus->parent->self) {
 		link->root = link;
 	} else {
 		struct pcie_link_state *parent;
drivers/pci/pcie/pme.c (+4 -1)
···
 			break;

 		pcie_capability_read_dword(port, PCI_EXP_RTSTA, &rtsta);
+		if (rtsta == (u32) ~0)
+			break;
+
 		if (rtsta & PCI_EXP_RTSTA_PME) {
 			/*
 			 * Clear PME status of the port. If there are other
···
 	spin_lock_irqsave(&data->lock, flags);
 	pcie_capability_read_dword(port, PCI_EXP_RTSTA, &rtsta);

-	if (!(rtsta & PCI_EXP_RTSTA_PME)) {
+	if (rtsta == (u32) ~0 || !(rtsta & PCI_EXP_RTSTA_PME)) {
 		spin_unlock_irqrestore(&data->lock, flags);
 		return IRQ_NONE;
 	}
drivers/pci/pcie/portdrv_core.c (+76 -97)
···
 	kfree(to_pcie_device(dev));
 }

+/*
+ * Fill in *pme, *aer, *dpc with the relevant Interrupt Message Numbers if
+ * services are enabled in "mask".  Return the number of MSI/MSI-X vectors
+ * required to accommodate the largest Message Number.
+ */
+static int pcie_message_numbers(struct pci_dev *dev, int mask,
+				u32 *pme, u32 *aer, u32 *dpc)
+{
+	u32 nvec = 0, pos, reg32;
+	u16 reg16;
+
+	/*
+	 * The Interrupt Message Number indicates which vector is used, i.e.,
+	 * the MSI-X table entry or the MSI offset between the base Message
+	 * Data and the generated interrupt message.  See PCIe r3.1, sec
+	 * 7.8.2, 7.10.10, 7.31.2.
+	 */
+
+	if (mask & (PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP)) {
+		pcie_capability_read_word(dev, PCI_EXP_FLAGS, &reg16);
+		*pme = (reg16 & PCI_EXP_FLAGS_IRQ) >> 9;
+		nvec = *pme + 1;
+	}
+
+	if (mask & PCIE_PORT_SERVICE_AER) {
+		pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
+		if (pos) {
+			pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS,
+					      &reg32);
+			*aer = (reg32 & PCI_ERR_ROOT_AER_IRQ) >> 27;
+			nvec = max(nvec, *aer + 1);
+		}
+	}
+
+	if (mask & PCIE_PORT_SERVICE_DPC) {
+		pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DPC);
+		if (pos) {
+			pci_read_config_word(dev, pos + PCI_EXP_DPC_CAP,
+					     &reg16);
+			*dpc = reg16 & PCI_EXP_DPC_IRQ;
+			nvec = max(nvec, *dpc + 1);
+		}
+	}
+
+	return nvec;
+}
+
 /**
  * pcie_port_enable_irq_vec - try to set up MSI-X or MSI as interrupt mode
  * for given port
···
  */
 static int pcie_port_enable_irq_vec(struct pci_dev *dev, int *irqs, int mask)
 {
-	int nr_entries, entry, nvec = 0;
+	int nr_entries, nvec;
+	u32 pme = 0, aer = 0, dpc = 0;

-	/*
-	 * Allocate as many entries as the port wants, so that we can check
-	 * which of them will be useful.  Moreover, if nr_entries is correctly
-	 * equal to the number of entries this port actually uses, we'll happily
-	 * go through without any tricks.
-	 */
+	/* Allocate the maximum possible number of MSI/MSI-X vectors */
 	nr_entries = pci_alloc_irq_vectors(dev, 1, PCIE_PORT_MAX_MSI_ENTRIES,
 			PCI_IRQ_MSIX | PCI_IRQ_MSI);
 	if (nr_entries < 0)
 		return nr_entries;

-	if (mask & (PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP)) {
-		u16 reg16;
-
-		/*
-		 * Per PCIe r3.1, sec 6.1.6, "PME and Hot-Plug Event
-		 * interrupts (when both are implemented) always share the
-		 * same MSI or MSI-X vector, as indicated by the Interrupt
-		 * Message Number field in the PCI Express Capabilities
-		 * register".
-		 *
-		 * Per sec 7.8.2, "For MSI, the [Interrupt Message Number]
-		 * indicates the offset between the base Message Data and
-		 * the interrupt message that is generated."
-		 *
-		 * "For MSI-X, the [Interrupt Message Number] indicates
-		 * which MSI-X Table entry is used to generate the
-		 * interrupt message."
-		 */
-		pcie_capability_read_word(dev, PCI_EXP_FLAGS, &reg16);
-		entry = (reg16 & PCI_EXP_FLAGS_IRQ) >> 9;
-		if (entry >= nr_entries)
-			goto out_free_irqs;
-
-		irqs[PCIE_PORT_SERVICE_PME_SHIFT] = pci_irq_vector(dev, entry);
-		irqs[PCIE_PORT_SERVICE_HP_SHIFT] = pci_irq_vector(dev, entry);
-
-		nvec = max(nvec, entry + 1);
-	}
-
-	if (mask & PCIE_PORT_SERVICE_AER) {
-		u32 reg32, pos;
-
-		/*
-		 * Per PCIe r3.1, sec 7.10.10, the Advanced Error Interrupt
-		 * Message Number in the Root Error Status register
-		 * indicates which MSI/MSI-X vector is used for AER.
-		 *
-		 * "For MSI, the [Advanced Error Interrupt Message Number]
-		 * indicates the offset between the base Message Data and
-		 * the interrupt message that is generated."
-		 *
-		 * "For MSI-X, the [Advanced Error Interrupt Message
-		 * Number] indicates which MSI-X Table entry is used to
-		 * generate the interrupt message."
-		 */
-		pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
-		pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &reg32);
-		entry = reg32 >> 27;
-		if (entry >= nr_entries)
-			goto out_free_irqs;
-
-		irqs[PCIE_PORT_SERVICE_AER_SHIFT] = pci_irq_vector(dev, entry);
-
-		nvec = max(nvec, entry + 1);
-	}
-
-	if (mask & PCIE_PORT_SERVICE_DPC) {
-		u16 reg16, pos;
-
-		/*
-		 * Per PCIe r4.0 (v0.9), sec 7.9.15.2, the DPC Interrupt
-		 * Message Number in the DPC Capability register indicates
-		 * which MSI/MSI-X vector is used for DPC.
-		 *
-		 * "For MSI, the [DPC Interrupt Message Number] indicates
-		 * the offset between the base Message Data and the
-		 * interrupt message that is generated."
-		 *
-		 * "For MSI-X, the [DPC Interrupt Message Number] indicates
-		 * which MSI-X Table entry is used to generate the
-		 * interrupt message."
-		 */
-		pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DPC);
-		pci_read_config_word(dev, pos + PCI_EXP_DPC_CAP, &reg16);
-		entry = reg16 & 0x1f;
-		if (entry >= nr_entries)
-			goto out_free_irqs;
-
-		irqs[PCIE_PORT_SERVICE_DPC_SHIFT] = pci_irq_vector(dev, entry);
-
-		nvec = max(nvec, entry + 1);
+	/* See how many and which Interrupt Message Numbers we actually use */
+	nvec = pcie_message_numbers(dev, mask, &pme, &aer, &dpc);
+	if (nvec > nr_entries) {
+		pci_free_irq_vectors(dev);
+		return -EIO;
 	}

 	/*
-	 * If nvec is equal to the allocated number of entries, we can just use
-	 * what we have.  Otherwise, the port has some extra entries not for the
-	 * services we know and we need to work around that.
+	 * If we allocated more than we need, free them and reallocate fewer.
+	 *
+	 * Reallocating may change the specific vectors we get, so
+	 * pci_irq_vector() must be done *after* the reallocation.
+	 *
+	 * If we're using MSI, hardware is *allowed* to change the Interrupt
+	 * Message Numbers when we free and reallocate the vectors, but we
+	 * assume it won't because we allocate enough vectors for the
+	 * biggest Message Number we found.
 	 */
 	if (nvec != nr_entries) {
-		/* Drop the temporary MSI-X setup */
 		pci_free_irq_vectors(dev);

-		/* Now allocate the MSI-X vectors for real */
 		nr_entries = pci_alloc_irq_vectors(dev, nvec, nvec,
 				PCI_IRQ_MSIX | PCI_IRQ_MSI);
 		if (nr_entries < 0)
 			return nr_entries;
 	}

-	return 0;
+	/* PME and hotplug share an MSI/MSI-X vector */
+	if (mask & (PCIE_PORT_SERVICE_PME | PCIE_PORT_SERVICE_HP)) {
+		irqs[PCIE_PORT_SERVICE_PME_SHIFT] = pci_irq_vector(dev, pme);
+		irqs[PCIE_PORT_SERVICE_HP_SHIFT] = pci_irq_vector(dev, pme);
+	}

-out_free_irqs:
-	pci_free_irq_vectors(dev);
-	return -EIO;
+	if (mask & PCIE_PORT_SERVICE_AER)
+		irqs[PCIE_PORT_SERVICE_AER_SHIFT] = pci_irq_vector(dev, aer);
+
+	if (mask & PCIE_PORT_SERVICE_DPC)
+		irqs[PCIE_PORT_SERVICE_DPC_SHIFT] = pci_irq_vector(dev, dpc);
+
+	return 0;
 }

 /**
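The core of the new allocation strategy is that each enabled port service reports an Interrupt Message Number (an MSI offset or MSI-X table index), and the port needs enough vectors to cover the largest one, i.e. `max(message numbers) + 1`. A standalone model of that sizing logic (service flag names and the function are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative service flags, standing in for PCIE_PORT_SERVICE_*. */
#define SVC_PME 0x1
#define SVC_AER 0x2
#define SVC_DPC 0x4

/*
 * Return the number of MSI/MSI-X vectors needed so that every enabled
 * service's Interrupt Message Number falls within the allocated range:
 * the largest in-use Message Number plus one.
 */
int port_required_vectors(int mask, uint32_t pme_msgnum,
			  uint32_t aer_msgnum, uint32_t dpc_msgnum)
{
	uint32_t nvec = 0;

	if (mask & SVC_PME)
		nvec = pme_msgnum + 1;
	if ((mask & SVC_AER) && aer_msgnum + 1 > nvec)
		nvec = aer_msgnum + 1;
	if ((mask & SVC_DPC) && dpc_msgnum + 1 > nvec)
		nvec = dpc_msgnum + 1;

	return nvec;
}
```

The driver first allocates the maximum, reads the Message Numbers (which may depend on how many vectors were granted), then reallocates exactly this count, calling `pci_irq_vector()` only after the final allocation.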
drivers/pci/pcie/portdrv_pci.c (+1)
···
 	.probe		= pcie_portdrv_probe,
 	.remove		= pcie_portdrv_remove,
+	.shutdown	= pcie_portdrv_remove,

 	.err_handler	= &pcie_portdrv_err_handler,
drivers/pci/probe.c (+173 -14)
···
 			       PCI_EXP_RTCTL_CRSSVE);
 }

+static unsigned int pci_scan_child_bus_extend(struct pci_bus *bus,
+					      unsigned int available_buses);
+
 /*
+ * pci_scan_bridge_extend() - Scan buses behind a bridge
+ * @bus: Parent bus the bridge is on
+ * @dev: Bridge itself
+ * @max: Starting subordinate number of buses behind this bridge
+ * @available_buses: Total number of buses available for this bridge and
+ *		     the devices below. After the minimal bus space has
+ *		     been allocated the remaining buses will be
+ *		     distributed equally between hotplug-capable bridges.
+ * @pass: Either %0 (scan already configured bridges) or %1 (scan bridges
+ *	  that need to be reconfigured).
+ *
  * If it's a bridge, configure it and scan the bus behind it.
  * For CardBus bridges, we don't scan behind as the devices will
  * be handled by the bridge driver itself.
···
  * them, we proceed to assigning numbers to the remaining buses in
  * order to avoid overlaps between old and new bus numbers.
  */
-int pci_scan_bridge(struct pci_bus *bus, struct pci_dev *dev, int max, int pass)
+static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
+				  int max, unsigned int available_buses,
+				  int pass)
 {
 	struct pci_bus *child;
 	int is_cardbus = (dev->hdr_type == PCI_HEADER_TYPE_CARDBUS);
···
 			child = pci_add_new_bus(bus, dev, max+1);
 			if (!child)
 				goto out;
-			pci_bus_insert_busn_res(child, max+1, 0xff);
+			pci_bus_insert_busn_res(child, max+1,
+						bus->busn_res.end);
 		}
 		max++;
+		if (available_buses)
+			available_buses--;
+
 		buses = (buses & 0xff000000)
 		      | ((unsigned int)(child->primary)      <<  0)
 		      | ((unsigned int)(child->busn_res.start) <<  8)
···
 		if (!is_cardbus) {
 			child->bridge_ctl = bctl;
-			max = pci_scan_child_bus(child);
+			max = pci_scan_child_bus_extend(child, available_buses);
 		} else {
 			/*
 			 * For CardBus bridges, we leave 4 bus numbers
···
 	pm_runtime_put(&dev->dev);

 	return max;
+}
+
+/*
+ * pci_scan_bridge() - Scan buses behind a bridge
+ * @bus: Parent bus the bridge is on
+ * @dev: Bridge itself
+ * @max: Starting subordinate number of buses behind this bridge
+ * @pass: Either %0 (scan already configured bridges) or %1 (scan bridges
+ *	  that need to be reconfigured).
+ *
+ * If it's a bridge, configure it and scan the bus behind it.
+ * For CardBus bridges, we don't scan behind as the devices will
+ * be handled by the bridge driver itself.
+ *
+ * We need to process bridges in two passes -- first we scan those
+ * already configured by the BIOS and after we are done with all of
+ * them, we proceed to assigning numbers to the remaining buses in
+ * order to avoid overlaps between old and new bus numbers.
+ */
+int pci_scan_bridge(struct pci_bus *bus, struct pci_dev *dev, int max, int pass)
+{
+	return pci_scan_bridge_extend(bus, dev, max, 0, pass);
 }
 EXPORT_SYMBOL(pci_scan_bridge);
···
 	/* nothing to do, expected to be removed in the future */
 }

-unsigned int pci_scan_child_bus(struct pci_bus *bus)
+/**
+ * pci_scan_child_bus_extend() - Scan devices below a bus
+ * @bus: Bus to scan for devices
+ * @available_buses: Total number of buses available (%0 does not try to
+ *		     extend beyond the minimal)
+ *
+ * Scans devices below @bus including subordinate buses. Returns new
+ * subordinate number including all the found devices. Passing
+ * @available_buses causes the remaining bus space to be distributed
+ * equally between hotplug-capable bridges to allow future extension of the
+ * hierarchy.
+ */
+static unsigned int pci_scan_child_bus_extend(struct pci_bus *bus,
+					      unsigned int available_buses)
 {
-	unsigned int devfn, pass, max = bus->busn_res.start;
+	unsigned int used_buses, normal_bridges = 0, hotplug_bridges = 0;
+	unsigned int start = bus->busn_res.start;
+	unsigned int devfn, cmax, max = start;
 	struct pci_dev *dev;

 	dev_dbg(&bus->dev, "scanning bus\n");
···
 		pci_scan_slot(bus, devfn);

 	/* Reserve buses for SR-IOV capability. */
-	max += pci_iov_bus_range(bus);
+	used_buses = pci_iov_bus_range(bus);
+	max += used_buses;

 	/*
 	 * After performing arch-dependent fixup of the bus, look behind
···
 		bus->is_added = 1;
 	}

-	for (pass = 0; pass < 2; pass++)
-		list_for_each_entry(dev, &bus->devices, bus_list) {
-			if (pci_is_bridge(dev))
-				max = pci_scan_bridge(bus, dev, max, pass);
+	/*
+	 * Calculate how many hotplug bridges and normal bridges there
+	 * are on this bus. We will distribute the additional available
+	 * buses between hotplug bridges.
+	 */
+	for_each_pci_bridge(dev, bus) {
+		if (dev->is_hotplug_bridge)
+			hotplug_bridges++;
+		else
+			normal_bridges++;
+	}
+
+	/*
+	 * Scan bridges that are already configured. We don't touch them
+	 * unless they are misconfigured (which will be done in the second
+	 * scan below).
+	 */
+	for_each_pci_bridge(dev, bus) {
+		cmax = max;
+		max = pci_scan_bridge_extend(bus, dev, max, 0, 0);
+		used_buses += cmax - max;
+	}
+
+	/* Scan bridges that need to be reconfigured */
+	for_each_pci_bridge(dev, bus) {
+		unsigned int buses = 0;
+
+		if (!hotplug_bridges && normal_bridges == 1) {
+			/*
+			 * There is only one bridge on the bus (upstream
+			 * port) so it gets all available buses which it
+			 * can then distribute to the possible hotplug
+			 * bridges below.
+			 */
+			buses = available_buses;
+		} else if (dev->is_hotplug_bridge) {
+			/*
+			 * Distribute the extra buses between hotplug
+			 * bridges if any.
2462 + */ 2463 + buses = available_buses / hotplug_bridges; 2464 + buses = min(buses, available_buses - used_buses); 2485 2465 } 2466 + 2467 + cmax = max; 2468 + max = pci_scan_bridge_extend(bus, dev, cmax, buses, 1); 2469 + used_buses += max - cmax; 2470 + } 2486 2471 2487 2472 /* 2488 2473 * Make sure a hotplug bridge has at least the minimum requested 2489 - * number of buses. 2474 + * number of buses but allow it to grow up to the maximum available 2475 + * bus number if there is room. 2490 2476 */ 2491 - if (bus->self && bus->self->is_hotplug_bridge && pci_hotplug_bus_size) { 2492 - if (max - bus->busn_res.start < pci_hotplug_bus_size - 1) 2493 - max = bus->busn_res.start + pci_hotplug_bus_size - 1; 2477 + if (bus->self && bus->self->is_hotplug_bridge) { 2478 + used_buses = max_t(unsigned int, available_buses, 2479 + pci_hotplug_bus_size - 1); 2480 + if (max - start < used_buses) { 2481 + max = start + used_buses; 2482 + 2483 + /* Do not allocate more buses than we have room left */ 2484 + if (max > bus->busn_res.end) 2485 + max = bus->busn_res.end; 2486 + 2487 + dev_dbg(&bus->dev, "%pR extended by %#02x\n", 2488 + &bus->busn_res, max - start); 2489 + } 2494 2490 } 2495 2491 2496 2492 /* ··· 2556 2444 */ 2557 2445 dev_dbg(&bus->dev, "bus scan returning with max=%02x\n", max); 2558 2446 return max; 2447 + } 2448 + 2449 + /** 2450 + * pci_scan_child_bus() - Scan devices below a bus 2451 + * @bus: Bus to scan for devices 2452 + * 2453 + * Scans devices below @bus including subordinate buses. Returns new 2454 + * subordinate number including all the found devices. 
2455 + */ 2456 + unsigned int pci_scan_child_bus(struct pci_bus *bus) 2457 + { 2458 + return pci_scan_child_bus_extend(bus, 0); 2559 2459 } 2560 2460 EXPORT_SYMBOL_GPL(pci_scan_child_bus); 2561 2461 ··· 2861 2737 { 2862 2738 bus_sort_breadthfirst(&pci_bus_type, &pci_sort_bf_cmp); 2863 2739 } 2740 + 2741 + int pci_hp_add_bridge(struct pci_dev *dev) 2742 + { 2743 + struct pci_bus *parent = dev->bus; 2744 + int busnr, start = parent->busn_res.start; 2745 + unsigned int available_buses = 0; 2746 + int end = parent->busn_res.end; 2747 + 2748 + for (busnr = start; busnr <= end; busnr++) { 2749 + if (!pci_find_bus(pci_domain_nr(parent), busnr)) 2750 + break; 2751 + } 2752 + if (busnr-- > end) { 2753 + dev_err(&dev->dev, "No bus number available for hot-added bridge\n"); 2754 + return -1; 2755 + } 2756 + 2757 + /* Scan bridges that are already configured */ 2758 + busnr = pci_scan_bridge(parent, dev, busnr, 0); 2759 + 2760 + /* 2761 + * Distribute the available bus numbers between hotplug-capable 2762 + * bridges to make extending the chain later possible. 2763 + */ 2764 + available_buses = end - busnr; 2765 + 2766 + /* Scan bridges that need to be reconfigured */ 2767 + pci_scan_bridge_extend(parent, dev, busnr, available_buses, 1); 2768 + 2769 + if (!dev->subordinate) 2770 + return -1; 2771 + 2772 + return 0; 2773 + } 2774 + EXPORT_SYMBOL_GPL(pci_hp_add_bridge);
+36 -6
drivers/pci/quirks.c
··· 3366 3366 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003c, quirk_no_bus_reset); 3367 3367 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0033, quirk_no_bus_reset); 3368 3368 3369 + /* 3370 + * Root ports on some Cavium CN8xxx chips do not successfully complete a bus 3371 + * reset when used with certain child devices. After the reset, config 3372 + * accesses to the child may fail. 3373 + */ 3374 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_CAVIUM, 0xa100, quirk_no_bus_reset); 3375 + 3369 3376 static void quirk_no_pm_reset(struct pci_dev *dev) 3370 3377 { 3371 3378 /* ··· 4219 4212 #endif 4220 4213 } 4221 4214 4215 + static bool pci_quirk_cavium_acs_match(struct pci_dev *dev) 4216 + { 4217 + /* 4218 + * Effectively selects all downstream ports for whole ThunderX 1 4219 + * family by 0xf800 mask (which represents 8 SoCs), while the lower 4220 + * bits of device ID are used to indicate which subdevice is used 4221 + * within the SoC. 4222 + */ 4223 + return (pci_is_pcie(dev) && 4224 + (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) && 4225 + ((dev->device & 0xf800) == 0xa000)); 4226 + } 4227 + 4222 4228 static int pci_quirk_cavium_acs(struct pci_dev *dev, u16 acs_flags) 4223 4229 { 4224 4230 /* 4225 - * Cavium devices matching this quirk do not perform peer-to-peer 4226 - * with other functions, allowing masking out these bits as if they 4227 - * were unimplemented in the ACS capability. 4231 + * Cavium root ports don't advertise an ACS capability. However, 4232 + * the RTL internally implements similar protection as if ACS had 4233 + * Request Redirection, Completion Redirection, Source Validation, 4234 + * and Upstream Forwarding features enabled. Assert that the 4235 + * hardware implements and enables equivalent ACS functionality for 4236 + * these flags. 
4228 4237 */ 4229 - acs_flags &= ~(PCI_ACS_SV | PCI_ACS_TB | PCI_ACS_RR | 4230 - PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_DT); 4238 + acs_flags &= ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_SV | PCI_ACS_UF); 4231 4239 4232 - if (!((dev->device >= 0xa000) && (dev->device <= 0xa0ff))) 4240 + if (!pci_quirk_cavium_acs_match(dev)) 4233 4241 return -ENOTTY; 4234 4242 4235 4243 return acs_flags ? 0 : 1; ··· 4822 4800 /* AMD Stoney platform GPU */ 4823 4801 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x98e4, quirk_no_ats); 4824 4802 #endif /* CONFIG_PCI_ATS */ 4803 + 4804 + /* Freescale PCIe doesn't support MSI in RC mode */ 4805 + static void quirk_fsl_no_msi(struct pci_dev *pdev) 4806 + { 4807 + if (pci_pcie_type(pdev) == PCI_EXP_TYPE_ROOT_PORT) 4808 + pdev->no_msi = 1; 4809 + } 4810 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_FREESCALE, PCI_ANY_ID, quirk_fsl_no_msi);
+1 -1
drivers/pci/remove.c
··· 19 19 pci_pme_active(dev, false); 20 20 21 21 if (dev->is_added) { 22 + device_release_driver(&dev->dev); 22 23 pci_proc_detach_device(dev); 23 24 pci_remove_sysfs_dev_files(dev); 24 - device_release_driver(&dev->dev); 25 25 dev->is_added = 0; 26 26 } 27 27
+13 -6
drivers/pci/rom.c
··· 147 147 return NULL; 148 148 149 149 rom = ioremap(start, *size); 150 - if (!rom) { 151 - /* restore enable if ioremap fails */ 152 - if (!(res->flags & IORESOURCE_ROM_ENABLE)) 153 - pci_disable_rom(pdev); 154 - return NULL; 155 - } 150 + if (!rom) 151 + goto err_ioremap; 156 152 157 153 /* 158 154 * Try to find the true size of the ROM since sometimes the PCI window ··· 156 160 * True size is important if the ROM is going to be copied. 157 161 */ 158 162 *size = pci_get_rom_size(pdev, rom, *size); 163 + if (!*size) 164 + goto invalid_rom; 165 + 159 166 return rom; 167 + 168 + invalid_rom: 169 + iounmap(rom); 170 + err_ioremap: 171 + /* restore enable if ioremap fails */ 172 + if (!(res->flags & IORESOURCE_ROM_ENABLE)) 173 + pci_disable_rom(pdev); 174 + return NULL; 160 175 } 161 176 EXPORT_SYMBOL(pci_map_rom); 162 177
+286 -13
drivers/pci/setup-bus.c
··· 1518 1518 break; 1519 1519 } 1520 1520 } 1521 + 1522 + #define PCI_RES_TYPE_MASK \ 1523 + (IORESOURCE_IO | IORESOURCE_MEM | IORESOURCE_PREFETCH |\ 1524 + IORESOURCE_MEM_64) 1525 + 1521 1526 static void pci_bridge_release_resources(struct pci_bus *bus, 1522 1527 unsigned long type) 1523 1528 { 1524 1529 struct pci_dev *dev = bus->self; 1525 1530 struct resource *r; 1526 - unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM | 1527 - IORESOURCE_PREFETCH | IORESOURCE_MEM_64; 1528 1531 unsigned old_flags = 0; 1529 1532 struct resource *b_res; 1530 1533 int idx = 1; ··· 1570 1567 */ 1571 1568 release_child_resources(r); 1572 1569 if (!release_resource(r)) { 1573 - type = old_flags = r->flags & type_mask; 1570 + type = old_flags = r->flags & PCI_RES_TYPE_MASK; 1574 1571 dev_printk(KERN_DEBUG, &dev->dev, "resource %d %pR released\n", 1575 1572 PCI_BRIDGE_RESOURCES + idx, r); 1576 1573 /* keep the old size */ ··· 1761 1758 enum release_type rel_type = leaf_only; 1762 1759 LIST_HEAD(fail_head); 1763 1760 struct pci_dev_resource *fail_res; 1764 - unsigned long type_mask = IORESOURCE_IO | IORESOURCE_MEM | 1765 - IORESOURCE_PREFETCH | IORESOURCE_MEM_64; 1766 1761 int pci_try_num = 1; 1767 1762 enum enable_type enable_local; 1768 1763 ··· 1819 1818 */ 1820 1819 list_for_each_entry(fail_res, &fail_head, list) 1821 1820 pci_bus_release_bridge_resources(fail_res->dev->bus, 1822 - fail_res->flags & type_mask, 1821 + fail_res->flags & PCI_RES_TYPE_MASK, 1823 1822 rel_type); 1824 1823 1825 1824 /* restore size and flags */ ··· 1854 1853 } 1855 1854 } 1856 1855 1856 + static void extend_bridge_window(struct pci_dev *bridge, struct resource *res, 1857 + struct list_head *add_list, resource_size_t available) 1858 + { 1859 + struct pci_dev_resource *dev_res; 1860 + 1861 + if (res->parent) 1862 + return; 1863 + 1864 + if (resource_size(res) >= available) 1865 + return; 1866 + 1867 + dev_res = res_to_dev_res(add_list, res); 1868 + if (!dev_res) 1869 + return; 1870 + 1871 + /* Is 
there room to extend the window? */ 1872 + if (available - resource_size(res) <= dev_res->add_size) 1873 + return; 1874 + 1875 + dev_res->add_size = available - resource_size(res); 1876 + dev_dbg(&bridge->dev, "bridge window %pR extended by %pa\n", res, 1877 + &dev_res->add_size); 1878 + } 1879 + 1880 + static void pci_bus_distribute_available_resources(struct pci_bus *bus, 1881 + struct list_head *add_list, resource_size_t available_io, 1882 + resource_size_t available_mmio, resource_size_t available_mmio_pref) 1883 + { 1884 + resource_size_t remaining_io, remaining_mmio, remaining_mmio_pref; 1885 + unsigned int normal_bridges = 0, hotplug_bridges = 0; 1886 + struct resource *io_res, *mmio_res, *mmio_pref_res; 1887 + struct pci_dev *dev, *bridge = bus->self; 1888 + 1889 + io_res = &bridge->resource[PCI_BRIDGE_RESOURCES + 0]; 1890 + mmio_res = &bridge->resource[PCI_BRIDGE_RESOURCES + 1]; 1891 + mmio_pref_res = &bridge->resource[PCI_BRIDGE_RESOURCES + 2]; 1892 + 1893 + /* 1894 + * Update additional resource list (add_list) to fill all the 1895 + * extra resource space available for this port except the space 1896 + * calculated in __pci_bus_size_bridges() which covers all the 1897 + * devices currently connected to the port and below. 1898 + */ 1899 + extend_bridge_window(bridge, io_res, add_list, available_io); 1900 + extend_bridge_window(bridge, mmio_res, add_list, available_mmio); 1901 + extend_bridge_window(bridge, mmio_pref_res, add_list, 1902 + available_mmio_pref); 1903 + 1904 + /* 1905 + * Calculate the total amount of extra resource space we can 1906 + * pass to bridges below this one. This is basically the 1907 + * extra space reduced by the minimal required space for the 1908 + * non-hotplug bridges. 1909 + */ 1910 + remaining_io = available_io; 1911 + remaining_mmio = available_mmio; 1912 + remaining_mmio_pref = available_mmio_pref; 1913 + 1914 + /* 1915 + * Calculate how many hotplug bridges and normal bridges there 1916 + * are on this bus. 
We will distribute the additional available 1917 + * resources between hotplug bridges. 1918 + */ 1919 + for_each_pci_bridge(dev, bus) { 1920 + if (dev->is_hotplug_bridge) 1921 + hotplug_bridges++; 1922 + else 1923 + normal_bridges++; 1924 + } 1925 + 1926 + for_each_pci_bridge(dev, bus) { 1927 + const struct resource *res; 1928 + 1929 + if (dev->is_hotplug_bridge) 1930 + continue; 1931 + 1932 + /* 1933 + * Reduce the available resource space by what the 1934 + * bridge and devices below it occupy. 1935 + */ 1936 + res = &dev->resource[PCI_BRIDGE_RESOURCES + 0]; 1937 + if (!res->parent && available_io > resource_size(res)) 1938 + remaining_io -= resource_size(res); 1939 + 1940 + res = &dev->resource[PCI_BRIDGE_RESOURCES + 1]; 1941 + if (!res->parent && available_mmio > resource_size(res)) 1942 + remaining_mmio -= resource_size(res); 1943 + 1944 + res = &dev->resource[PCI_BRIDGE_RESOURCES + 2]; 1945 + if (!res->parent && available_mmio_pref > resource_size(res)) 1946 + remaining_mmio_pref -= resource_size(res); 1947 + } 1948 + 1949 + /* 1950 + * Go over devices on this bus and distribute the remaining 1951 + * resource space between hotplug bridges. 1952 + */ 1953 + for_each_pci_bridge(dev, bus) { 1954 + struct pci_bus *b; 1955 + 1956 + b = dev->subordinate; 1957 + if (!b) 1958 + continue; 1959 + 1960 + if (!hotplug_bridges && normal_bridges == 1) { 1961 + /* 1962 + * There is only one bridge on the bus (upstream 1963 + * port) so it gets all available resources 1964 + * which it can then distribute to the possible 1965 + * hotplug bridges below. 1966 + */ 1967 + pci_bus_distribute_available_resources(b, add_list, 1968 + available_io, available_mmio, 1969 + available_mmio_pref); 1970 + } else if (dev->is_hotplug_bridge) { 1971 + resource_size_t align, io, mmio, mmio_pref; 1972 + 1973 + /* 1974 + * Distribute available extra resources equally 1975 + * between hotplug-capable downstream ports 1976 + * taking alignment into account. 
1977 + * 1978 + * Here hotplug_bridges is always != 0. 1979 + */ 1980 + align = pci_resource_alignment(bridge, io_res); 1981 + io = div64_ul(available_io, hotplug_bridges); 1982 + io = min(ALIGN(io, align), remaining_io); 1983 + remaining_io -= io; 1984 + 1985 + align = pci_resource_alignment(bridge, mmio_res); 1986 + mmio = div64_ul(available_mmio, hotplug_bridges); 1987 + mmio = min(ALIGN(mmio, align), remaining_mmio); 1988 + remaining_mmio -= mmio; 1989 + 1990 + align = pci_resource_alignment(bridge, mmio_pref_res); 1991 + mmio_pref = div64_ul(available_mmio_pref, 1992 + hotplug_bridges); 1993 + mmio_pref = min(ALIGN(mmio_pref, align), 1994 + remaining_mmio_pref); 1995 + remaining_mmio_pref -= mmio_pref; 1996 + 1997 + pci_bus_distribute_available_resources(b, add_list, io, 1998 + mmio, mmio_pref); 1999 + } 2000 + } 2001 + } 2002 + 2003 + static void 2004 + pci_bridge_distribute_available_resources(struct pci_dev *bridge, 2005 + struct list_head *add_list) 2006 + { 2007 + resource_size_t available_io, available_mmio, available_mmio_pref; 2008 + const struct resource *res; 2009 + 2010 + if (!bridge->is_hotplug_bridge) 2011 + return; 2012 + 2013 + /* Take the initial extra resources from the hotplug port */ 2014 + res = &bridge->resource[PCI_BRIDGE_RESOURCES + 0]; 2015 + available_io = resource_size(res); 2016 + res = &bridge->resource[PCI_BRIDGE_RESOURCES + 1]; 2017 + available_mmio = resource_size(res); 2018 + res = &bridge->resource[PCI_BRIDGE_RESOURCES + 2]; 2019 + available_mmio_pref = resource_size(res); 2020 + 2021 + pci_bus_distribute_available_resources(bridge->subordinate, 2022 + add_list, available_io, available_mmio, available_mmio_pref); 2023 + } 2024 + 1857 2025 void pci_assign_unassigned_bridge_resources(struct pci_dev *bridge) 1858 2026 { 1859 2027 struct pci_bus *parent = bridge->subordinate; ··· 2032 1862 LIST_HEAD(fail_head); 2033 1863 struct pci_dev_resource *fail_res; 2034 1864 int retval; 2035 - unsigned long type_mask = IORESOURCE_IO | 
IORESOURCE_MEM | 2036 - IORESOURCE_PREFETCH | IORESOURCE_MEM_64; 2037 1865 2038 1866 again: 2039 1867 __pci_bus_size_bridges(parent, &add_list); 1868 + 1869 + /* 1870 + * Distribute remaining resources (if any) equally between 1871 + * hotplug bridges below. This makes it possible to extend the 1872 + * hierarchy later without running out of resources. 1873 + */ 1874 + pci_bridge_distribute_available_resources(bridge, &add_list); 1875 + 2040 1876 __pci_bridge_assign_resources(bridge, &add_list, &fail_head); 2041 1877 BUG_ON(!list_empty(&add_list)); 2042 1878 tried_times++; ··· 2065 1889 */ 2066 1890 list_for_each_entry(fail_res, &fail_head, list) 2067 1891 pci_bus_release_bridge_resources(fail_res->dev->bus, 2068 - fail_res->flags & type_mask, 1892 + fail_res->flags & PCI_RES_TYPE_MASK, 2069 1893 whole_subtree); 2070 1894 2071 1895 /* restore size and flags */ ··· 2090 1914 } 2091 1915 EXPORT_SYMBOL_GPL(pci_assign_unassigned_bridge_resources); 2092 1916 1917 + int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type) 1918 + { 1919 + struct pci_dev_resource *dev_res; 1920 + struct pci_dev *next; 1921 + LIST_HEAD(saved); 1922 + LIST_HEAD(added); 1923 + LIST_HEAD(failed); 1924 + unsigned int i; 1925 + int ret; 1926 + 1927 + /* Walk to the root hub, releasing bridge BARs when possible */ 1928 + next = bridge; 1929 + do { 1930 + bridge = next; 1931 + for (i = PCI_BRIDGE_RESOURCES; i < PCI_BRIDGE_RESOURCE_END; 1932 + i++) { 1933 + struct resource *res = &bridge->resource[i]; 1934 + 1935 + if ((res->flags ^ type) & PCI_RES_TYPE_MASK) 1936 + continue; 1937 + 1938 + /* Ignore BARs which are still in use */ 1939 + if (res->child) 1940 + continue; 1941 + 1942 + ret = add_to_list(&saved, bridge, res, 0, 0); 1943 + if (ret) 1944 + goto cleanup; 1945 + 1946 + dev_info(&bridge->dev, "BAR %d: releasing %pR\n", 1947 + i, res); 1948 + 1949 + if (res->parent) 1950 + release_resource(res); 1951 + res->start = 0; 1952 + res->end = 0; 1953 + break; 1954 + } 1955 + if 
(i == PCI_BRIDGE_RESOURCE_END) 1956 + break; 1957 + 1958 + next = bridge->bus ? bridge->bus->self : NULL; 1959 + } while (next); 1960 + 1961 + if (list_empty(&saved)) 1962 + return -ENOENT; 1963 + 1964 + __pci_bus_size_bridges(bridge->subordinate, &added); 1965 + __pci_bridge_assign_resources(bridge, &added, &failed); 1966 + BUG_ON(!list_empty(&added)); 1967 + 1968 + if (!list_empty(&failed)) { 1969 + ret = -ENOSPC; 1970 + goto cleanup; 1971 + } 1972 + 1973 + list_for_each_entry(dev_res, &saved, list) { 1974 + /* Skip the bridge we just assigned resources for. */ 1975 + if (bridge == dev_res->dev) 1976 + continue; 1977 + 1978 + bridge = dev_res->dev; 1979 + pci_setup_bridge(bridge->subordinate); 1980 + } 1981 + 1982 + free_list(&saved); 1983 + return 0; 1984 + 1985 + cleanup: 1986 + /* restore size and flags */ 1987 + list_for_each_entry(dev_res, &failed, list) { 1988 + struct resource *res = dev_res->res; 1989 + 1990 + res->start = dev_res->start; 1991 + res->end = dev_res->end; 1992 + res->flags = dev_res->flags; 1993 + } 1994 + free_list(&failed); 1995 + 1996 + /* Revert to the old configuration */ 1997 + list_for_each_entry(dev_res, &saved, list) { 1998 + struct resource *res = dev_res->res; 1999 + 2000 + bridge = dev_res->dev; 2001 + i = res - bridge->resource; 2002 + 2003 + res->start = dev_res->start; 2004 + res->end = dev_res->end; 2005 + res->flags = dev_res->flags; 2006 + 2007 + pci_claim_resource(bridge, i); 2008 + pci_setup_bridge(bridge->subordinate); 2009 + } 2010 + free_list(&saved); 2011 + 2012 + return ret; 2013 + } 2014 + 2093 2015 void pci_assign_unassigned_bus_resources(struct pci_bus *bus) 2094 2016 { 2095 2017 struct pci_dev *dev; ··· 2195 1921 want additional resources */ 2196 1922 2197 1923 down_read(&pci_bus_sem); 2198 - list_for_each_entry(dev, &bus->devices, bus_list) 2199 - if (pci_is_bridge(dev) && pci_has_subordinate(dev)) 2200 - __pci_bus_size_bridges(dev->subordinate, 2201 - &add_list); 1924 + for_each_pci_bridge(dev, bus) 1925 + if 
(pci_has_subordinate(dev)) 1926 + __pci_bus_size_bridges(dev->subordinate, &add_list); 2202 1927 up_read(&pci_bus_sem); 2203 1928 __pci_bus_assign_resources(bus, &add_list, NULL); 2204 1929 BUG_ON(!list_empty(&add_list));
+58
drivers/pci/setup-res.c
··· 397 397 return 0; 398 398 } 399 399 400 + void pci_release_resource(struct pci_dev *dev, int resno) 401 + { 402 + struct resource *res = dev->resource + resno; 403 + 404 + dev_info(&dev->dev, "BAR %d: releasing %pR\n", resno, res); 405 + release_resource(res); 406 + res->end = resource_size(res) - 1; 407 + res->start = 0; 408 + res->flags |= IORESOURCE_UNSET; 409 + } 410 + EXPORT_SYMBOL(pci_release_resource); 411 + 412 + int pci_resize_resource(struct pci_dev *dev, int resno, int size) 413 + { 414 + struct resource *res = dev->resource + resno; 415 + int old, ret; 416 + u32 sizes; 417 + u16 cmd; 418 + 419 + /* Make sure the resource isn't assigned before resizing it. */ 420 + if (!(res->flags & IORESOURCE_UNSET)) 421 + return -EBUSY; 422 + 423 + pci_read_config_word(dev, PCI_COMMAND, &cmd); 424 + if (cmd & PCI_COMMAND_MEMORY) 425 + return -EBUSY; 426 + 427 + sizes = pci_rebar_get_possible_sizes(dev, resno); 428 + if (!sizes) 429 + return -ENOTSUPP; 430 + 431 + if (!(sizes & BIT(size))) 432 + return -EINVAL; 433 + 434 + old = pci_rebar_get_current_size(dev, resno); 435 + if (old < 0) 436 + return old; 437 + 438 + ret = pci_rebar_set_size(dev, resno, size); 439 + if (ret) 440 + return ret; 441 + 442 + res->end = res->start + pci_rebar_size_to_bytes(size) - 1; 443 + 444 + /* Check if the new config works by trying to assign everything. */ 445 + ret = pci_reassign_bridge_resources(dev->bus->self, res->flags); 446 + if (ret) 447 + goto error_resize; 448 + 449 + return 0; 450 + 451 + error_resize: 452 + pci_rebar_set_size(dev, resno, old); 453 + res->end = res->start + pci_rebar_size_to_bytes(old) - 1; 454 + return ret; 455 + } 456 + EXPORT_SYMBOL(pci_resize_resource); 457 + 400 458 int pci_enable_resources(struct pci_dev *dev, int mask) 401 459 { 402 460 u16 cmd, old_cmd;
+1 -1
drivers/pci/switch/switchtec.c
··· 943 943 #define EV_PAR(i, r)[i] = {offsetof(struct part_cfg_regs, r), part_ev_reg} 944 944 #define EV_PFF(i, r)[i] = {offsetof(struct pff_csr_regs, r), pff_ev_reg} 945 945 946 - const struct event_reg { 946 + static const struct event_reg { 947 947 size_t offset; 948 948 u32 __iomem *(*map_reg)(struct switchtec_dev *stdev, 949 949 size_t offset, int index);
+2 -3
drivers/pcmcia/cardbus.c
··· 77 77 78 78 max = bus->busn_res.start; 79 79 for (pass = 0; pass < 2; pass++) 80 - list_for_each_entry(dev, &bus->devices, bus_list) 81 - if (pci_is_bridge(dev)) 82 - max = pci_scan_bridge(bus, dev, max, pass); 80 + for_each_pci_bridge(dev, bus) 81 + max = pci_scan_bridge(bus, dev, max, pass); 83 82 84 83 /* 85 84 * Size all resources below the CardBus controller.
+9 -1
include/linux/of_address.h
··· 50 50 51 51 extern int of_pci_range_parser_init(struct of_pci_range_parser *parser, 52 52 struct device_node *node); 53 + extern int of_pci_dma_range_parser_init(struct of_pci_range_parser *parser, 54 + struct device_node *node); 53 55 extern struct of_pci_range *of_pci_range_parser_one( 54 56 struct of_pci_range_parser *parser, 55 57 struct of_pci_range *range); ··· 88 86 static inline int of_pci_range_parser_init(struct of_pci_range_parser *parser, 89 87 struct device_node *node) 90 88 { 91 - return -1; 89 + return -ENOSYS; 90 + } 91 + 92 + static inline int of_pci_dma_range_parser_init(struct of_pci_range_parser *parser, 93 + struct device_node *node) 94 + { 95 + return -ENOSYS; 92 96 } 93 97 94 98 static inline struct of_pci_range *of_pci_range_parser_one(
+11 -5
include/linux/pci.h
··· 592 592 dev->hdr_type == PCI_HEADER_TYPE_CARDBUS; 593 593 } 594 594 595 + #define for_each_pci_bridge(dev, bus) \ 596 + list_for_each_entry(dev, &bus->devices, bus_list) \ 597 + if (!pci_is_bridge(dev)) {} else 598 + 595 599 static inline struct pci_dev *pci_upstream_bridge(struct pci_dev *dev) 596 600 { 597 601 dev = pci_physfn(dev); ··· 1089 1085 int pcie_get_minimum_link(struct pci_dev *dev, enum pci_bus_speed *speed, 1090 1086 enum pcie_link_width *width); 1091 1087 void pcie_flr(struct pci_dev *dev); 1092 - int __pci_reset_function(struct pci_dev *dev); 1093 1088 int __pci_reset_function_locked(struct pci_dev *dev); 1094 1089 int pci_reset_function(struct pci_dev *dev); 1095 1090 int pci_reset_function_locked(struct pci_dev *dev); ··· 1105 1102 void pci_update_resource(struct pci_dev *dev, int resno); 1106 1103 int __must_check pci_assign_resource(struct pci_dev *dev, int i); 1107 1104 int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align); 1105 + void pci_release_resource(struct pci_dev *dev, int resno); 1106 + int __must_check pci_resize_resource(struct pci_dev *dev, int i, int size); 1108 1107 int pci_select_bars(struct pci_dev *dev, unsigned long flags); 1109 1108 bool pci_device_is_present(struct pci_dev *pdev); 1110 1109 void pci_ignore_hotplug(struct pci_dev *dev); ··· 1186 1181 void pci_assign_unassigned_bridge_resources(struct pci_dev *bridge); 1187 1182 void pci_assign_unassigned_bus_resources(struct pci_bus *bus); 1188 1183 void pci_assign_unassigned_root_bus_resources(struct pci_bus *bus); 1184 + int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type); 1189 1185 void pdev_enable_device(struct pci_dev *); 1190 1186 int pci_enable_resources(struct pci_dev *, int mask); 1191 1187 void pci_assign_irq(struct pci_dev *dev); ··· 1960 1954 1961 1955 int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn); 1962 1956 void pci_disable_sriov(struct pci_dev *dev); 1963 - int 
pci_iov_add_virtfn(struct pci_dev *dev, int id, int reset); 1964 - void pci_iov_remove_virtfn(struct pci_dev *dev, int id, int reset); 1957 + int pci_iov_add_virtfn(struct pci_dev *dev, int id); 1958 + void pci_iov_remove_virtfn(struct pci_dev *dev, int id); 1965 1959 int pci_num_vf(struct pci_dev *dev); 1966 1960 int pci_vfs_assigned(struct pci_dev *dev); 1967 1961 int pci_sriov_set_totalvfs(struct pci_dev *dev, u16 numvfs); ··· 1978 1972 } 1979 1973 static inline int pci_enable_sriov(struct pci_dev *dev, int nr_virtfn) 1980 1974 { return -ENODEV; } 1981 - static inline int pci_iov_add_virtfn(struct pci_dev *dev, int id, int reset) 1975 + static inline int pci_iov_add_virtfn(struct pci_dev *dev, int id) 1982 1976 { 1983 1977 return -ENOSYS; 1984 1978 } 1985 1979 static inline void pci_iov_remove_virtfn(struct pci_dev *dev, 1986 - int id, int reset) { } 1980 + int id) { } 1987 1981 static inline void pci_disable_sriov(struct pci_dev *dev) { } 1988 1982 static inline int pci_num_vf(struct pci_dev *dev) { return 0; } 1989 1983 static inline int pci_vfs_assigned(struct pci_dev *dev)
+28 -16
include/uapi/linux/pci_regs.h
··· 747 747 #define PCI_ERR_ROOT_FIRST_FATAL 0x00000010 /* First UNC is Fatal */ 748 748 #define PCI_ERR_ROOT_NONFATAL_RCV 0x00000020 /* Non-Fatal Received */ 749 749 #define PCI_ERR_ROOT_FATAL_RCV 0x00000040 /* Fatal Received */ 750 + #define PCI_ERR_ROOT_AER_IRQ 0xf8000000 /* Advanced Error Interrupt Message Number */ 750 751 #define PCI_ERR_ROOT_ERR_SRC 52 /* Error Source Identification */ 751 752 752 753 /* Virtual Channel */ ··· 941 940 #define PCI_SATA_SIZEOF_LONG 16 942 941 943 942 /* Resizable BARs */ 943 + #define PCI_REBAR_CAP 4 /* capability register */ 944 + #define PCI_REBAR_CAP_SIZES 0x00FFFFF0 /* supported BAR sizes */ 944 945 #define PCI_REBAR_CTRL 8 /* control register */ 945 - #define PCI_REBAR_CTRL_NBAR_MASK (7 << 5) /* mask for # bars */ 946 - #define PCI_REBAR_CTRL_NBAR_SHIFT 5 /* shift for # bars */ 946 + #define PCI_REBAR_CTRL_BAR_IDX 0x00000007 /* BAR index */ 947 + #define PCI_REBAR_CTRL_NBAR_MASK 0x000000E0 /* # of resizable BARs */ 948 + #define PCI_REBAR_CTRL_NBAR_SHIFT 5 /* shift for # of BARs */ 949 + #define PCI_REBAR_CTRL_BAR_SIZE 0x00001F00 /* BAR size */ 947 950 948 951 /* Dynamic Power Allocation */ 949 952 #define PCI_DPA_CAP 4 /* capability register */ ··· 966 961 967 962 /* Downstream Port Containment */ 968 963 #define PCI_EXP_DPC_CAP 4 /* DPC Capability */ 964 + #define PCI_EXP_DPC_IRQ 0x1f /* DPC Interrupt Message Number */ 969 965 #define PCI_EXP_DPC_CAP_RP_EXT 0x20 /* Root Port Extensions for DPC */ 970 966 #define PCI_EXP_DPC_CAP_POISONED_TLP 0x40 /* Poisoned TLP Egress Blocking Supported */ 971 967 #define PCI_EXP_DPC_CAP_SW_TRIGGER 0x80 /* Software Triggering Supported */ ··· 1002 996 #define PCI_PTM_CTRL_ENABLE 0x00000001 /* PTM enable */ 1003 997 #define PCI_PTM_CTRL_ROOT 0x00000002 /* Root select */ 1004 998 1005 - /* L1 PM Substates */ 1006 - #define PCI_L1SS_CAP 4 /* capability register */ 1007 - #define PCI_L1SS_CAP_PCIPM_L1_2 1 /* PCI PM L1.2 Support */ 1008 - #define PCI_L1SS_CAP_PCIPM_L1_1 2 /* PCI PM L1.1 
Support */ 1009 - #define PCI_L1SS_CAP_ASPM_L1_2 4 /* ASPM L1.2 Support */ 1010 - #define PCI_L1SS_CAP_ASPM_L1_1 8 /* ASPM L1.1 Support */ 1011 - #define PCI_L1SS_CAP_L1_PM_SS 16 /* L1 PM Substates Support */ 1012 - #define PCI_L1SS_CTL1 8 /* Control Register 1 */ 1013 - #define PCI_L1SS_CTL1_PCIPM_L1_2 1 /* PCI PM L1.2 Enable */ 1014 - #define PCI_L1SS_CTL1_PCIPM_L1_1 2 /* PCI PM L1.1 Support */ 1015 - #define PCI_L1SS_CTL1_ASPM_L1_2 4 /* ASPM L1.2 Support */ 1016 - #define PCI_L1SS_CTL1_ASPM_L1_1 8 /* ASPM L1.1 Support */ 1017 - #define PCI_L1SS_CTL1_L1SS_MASK 0x0000000F 1018 - #define PCI_L1SS_CTL2 0xC /* Control Register 2 */ 999 + /* ASPM L1 PM Substates */ 1000 + #define PCI_L1SS_CAP 0x04 /* Capabilities Register */ 1001 + #define PCI_L1SS_CAP_PCIPM_L1_2 0x00000001 /* PCI-PM L1.2 Supported */ 1002 + #define PCI_L1SS_CAP_PCIPM_L1_1 0x00000002 /* PCI-PM L1.1 Supported */ 1003 + #define PCI_L1SS_CAP_ASPM_L1_2 0x00000004 /* ASPM L1.2 Supported */ 1004 + #define PCI_L1SS_CAP_ASPM_L1_1 0x00000008 /* ASPM L1.1 Supported */ 1005 + #define PCI_L1SS_CAP_L1_PM_SS 0x00000010 /* L1 PM Substates Supported */ 1006 + #define PCI_L1SS_CAP_CM_RESTORE_TIME 0x0000ff00 /* Port Common_Mode_Restore_Time */ 1007 + #define PCI_L1SS_CAP_P_PWR_ON_SCALE 0x00030000 /* Port T_POWER_ON scale */ 1008 + #define PCI_L1SS_CAP_P_PWR_ON_VALUE 0x00f80000 /* Port T_POWER_ON value */ 1009 + #define PCI_L1SS_CTL1 0x08 /* Control 1 Register */ 1010 + #define PCI_L1SS_CTL1_PCIPM_L1_2 0x00000001 /* PCI-PM L1.2 Enable */ 1011 + #define PCI_L1SS_CTL1_PCIPM_L1_1 0x00000002 /* PCI-PM L1.1 Enable */ 1012 + #define PCI_L1SS_CTL1_ASPM_L1_2 0x00000004 /* ASPM L1.2 Enable */ 1013 + #define PCI_L1SS_CTL1_ASPM_L1_1 0x00000008 /* ASPM L1.1 Enable */ 1014 + #define PCI_L1SS_CTL1_L1SS_MASK 0x0000000f 1015 + #define PCI_L1SS_CTL1_CM_RESTORE_TIME 0x0000ff00 /* Common_Mode_Restore_Time */ 1016 + #define PCI_L1SS_CTL1_LTR_L12_TH_VALUE 0x03ff0000 /* LTR_L1.2_THRESHOLD_Value */ 1017 + #define 
PCI_L1SS_CTL1_LTR_L12_TH_SCALE 0xe0000000 /* LTR_L1.2_THRESHOLD_Scale */ 1018 + #define PCI_L1SS_CTL2 0x0c /* Control 2 Register */ 1019 1019 1020 1020 #endif /* LINUX_PCI_REGS_H */
-9
init/Kconfig
··· 1386 1386 Enable the userfaultfd() system call that allows to intercept and 1387 1387 handle page faults in userland. 1388 1388 1389 - config PCI_QUIRKS 1390 - default y 1391 - bool "Enable PCI quirk workarounds" if EXPERT 1392 - depends on PCI 1393 - help 1394 - This enables workarounds for various PCI chipset 1395 - bugs/quirks. Disable this only if your target machine is 1396 - unaffected by PCI quirks. 1397 - 1398 1389 config MEMBARRIER 1399 1390 bool "Enable membarrier() system call" if EXPERT 1400 1391 default y