
Merge tag 'pci-v4.13-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:

- add sysfs max_link_speed/width, current_link_speed/width (Wong Vee
Khee)

- make host bridge IRQ mapping much more generic (Matthew Minter,
Lorenzo Pieralisi)

- convert most drivers to pci_scan_root_bus_bridge() (Lorenzo
Pieralisi)

- mutex sriov_configure() (Jakub Kicinski)

- mutex pci_error_handlers callbacks (Christoph Hellwig)

- split ->reset_notify() into ->reset_prepare()/reset_done()
(Christoph Hellwig)

- support multiple PCIe portdrv interrupts for MSI as well as MSI-X
(Gabriele Paoloni)

- allocate MSI/MSI-X vector for Downstream Port Containment (Gabriele
Paoloni)

- fix MSI IRQ affinity pre/post/min_vecs issue (Michael Hernandez)

- test INTx masking during enumeration, not at run-time (Piotr Gregor)

- avoid using device_may_wakeup() for runtime PM (Rafael J. Wysocki)

- restore the status of PCI devices across hibernation (Chen Yu)

- keep parent resources that start at 0x0 (Ard Biesheuvel)

- enable ECRC only if device supports it (Bjorn Helgaas)

- restore PRI and PASID state after Function-Level Reset (CQ Tang)

- skip DPC event if device is not present (Keith Busch)

- check domain when matching SMBIOS info (Sujith Pandel)

- mark Intel XXV710 NIC INTx masking as broken (Alex Williamson)

- avoid AMD SB7xx EHCI USB wakeup defect (Kai-Heng Feng)

- work around long-standing Macbook Pro poweroff issue (Bjorn Helgaas)

- add Switchtec "running" status flag (Logan Gunthorpe)

- fix dra7xx incorrect RW1C IRQ register usage (Arvind Yadav)

- modify xilinx-nwl IRQ chip for legacy interrupts (Bharat Kumar
Gogada)

- move VMD SRCU cleanup after bus, child device removal (Jon Derrick)

- add Faraday clock handling (Linus Walleij)

- configure Rockchip MPS and reorganize (Shawn Lin)

- limit Qualcomm TLP size to 2K (hardware issue) (Srinivas Kandagatla)

- support Tegra MSI 64-bit addressing (Thierry Reding)

- use Rockchip normal (not privileged) register bank (Shawn Lin)

- add HiSilicon Kirin SoC PCIe controller driver (Xiaowei Song)

- add Sigma Designs Tango SMP8759 PCIe controller driver (Marc
Gonzalez)

- add MediaTek PCIe host controller support (Ryder Lee)

- add Qualcomm IPQ4019 support (John Crispin)

- add HyperV vPCI protocol v1.2 support (Jork Loeser)

- add i.MX6 regulator support (Quentin Schulz)

* tag 'pci-v4.13-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (113 commits)
PCI: tango: Add Sigma Designs Tango SMP8759 PCIe host bridge support
PCI: Add DT binding for Sigma Designs Tango PCIe controller
PCI: rockchip: Use normal register bank for config accessors
dt-bindings: PCI: Add documentation for MediaTek PCIe
PCI: Remove __pci_dev_reset() and pci_dev_reset()
PCI: Split ->reset_notify() method into ->reset_prepare() and ->reset_done()
PCI: xilinx: Make of_device_ids const
PCI: xilinx-nwl: Modify IRQ chip for legacy interrupts
PCI: vmd: Move SRCU cleanup after bus, child device removal
PCI: vmd: Correct comment: VMD domains start at 0x10000, not 0x1000
PCI: versatile: Add local struct device pointers
PCI: tegra: Do not allocate MSI target memory
PCI: tegra: Support MSI 64-bit addressing
PCI: rockchip: Use local struct device pointer consistently
PCI: rockchip: Check for clk_prepare_enable() errors during resume
MAINTAINERS: Remove Wenrui Li as Rockchip PCIe driver maintainer
PCI: rockchip: Configure RC's MPS setting
PCI: rockchip: Reconfigure configuration space header type
PCI: rockchip: Split out rockchip_pcie_cfg_configuration_accesses()
PCI: rockchip: Move configuration accesses into rockchip_pcie_cfg_atu()
...

+3789 -825
+7
Documentation/devicetree/bindings/pci/faraday,ftpci100.txt
···
 128MB, 256MB, 512MB, 1GB or 2GB in size. The memory should be marked as
 pre-fetchable.
 
+Optional properties:
+- clocks: when present, this should contain the peripheral clock (PCLK) and the
+  PCI clock (PCICLK). If these are not present, they are assumed to be
+  hard-wired enabled and always on. The PCI clock will be 33 or 66 MHz.
+- clock-names: when present, this should contain "PCLK" for the peripheral
+  clock and "PCICLK" for the PCI-side clock.
+
 Mandatory subnodes:
 - For "faraday,ftpci100" a node representing the interrupt-controller inside the
   host bridge is mandatory. It has the following mandatory properties:
+4
Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
···
 - reset-gpio-active-high: If present then the reset sequence using the GPIO
   specified in the "reset-gpio" property is reversed (H=reset state,
   L=operation state).
+- vpcie-supply: Should specify the regulator in charge of PCIe port power.
+  The regulator will be enabled when initializing the PCIe host and
+  disabled either as part of the init process or when shutting down the
+  host.
 
 Additional required properties for imx6sx-pcie:
 - clock names: Must include the following additional entries:
+130
Documentation/devicetree/bindings/pci/mediatek,mt7623-pcie.txt
··· 1 + MediaTek Gen2 PCIe controller which is available on MT7623 series SoCs 2 + 3 + PCIe subsys supports single root complex (RC) with 3 Root Ports. Each root 4 + ports supports a Gen2 1-lane Link and has PIPE interface to PHY. 5 + 6 + Required properties: 7 + - compatible: Should contain "mediatek,mt7623-pcie". 8 + - device_type: Must be "pci" 9 + - reg: Base addresses and lengths of the PCIe controller. 10 + - #address-cells: Address representation for root ports (must be 3) 11 + - #size-cells: Size representation for root ports (must be 2) 12 + - #interrupt-cells: Size representation for interrupts (must be 1) 13 + - interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties 14 + Please refer to the standard PCI bus binding document for a more detailed 15 + explanation. 16 + - clocks: Must contain an entry for each entry in clock-names. 17 + See ../clocks/clock-bindings.txt for details. 18 + - clock-names: Must include the following entries: 19 + - free_ck :for reference clock of PCIe subsys 20 + - sys_ck0 :for clock of Port0 21 + - sys_ck1 :for clock of Port1 22 + - sys_ck2 :for clock of Port2 23 + - resets: Must contain an entry for each entry in reset-names. 24 + See ../reset/reset.txt for details. 25 + - reset-names: Must include the following entries: 26 + - pcie-rst0 :port0 reset 27 + - pcie-rst1 :port1 reset 28 + - pcie-rst2 :port2 reset 29 + - phys: List of PHY specifiers (used by generic PHY framework). 30 + - phy-names : Must be "pcie-phy0", "pcie-phy1", "pcie-phyN".. based on the 31 + number of PHYs as specified in *phys* property. 32 + - power-domains: A phandle and power domain specifier pair to the power domain 33 + which is responsible for collapsing and restoring power to the peripheral. 34 + - bus-range: Range of bus numbers associated with this controller. 35 + - ranges: Ranges for the PCI memory and I/O regions. 
36 + 37 + In addition, the device tree node must have sub-nodes describing each 38 + PCIe port interface, having the following mandatory properties: 39 + 40 + Required properties: 41 + - device_type: Must be "pci" 42 + - reg: Only the first four bytes are used to refer to the correct bus number 43 + and device number. 44 + - #address-cells: Must be 3 45 + - #size-cells: Must be 2 46 + - #interrupt-cells: Must be 1 47 + - interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties 48 + Please refer to the standard PCI bus binding document for a more detailed 49 + explanation. 50 + - ranges: Sub-ranges distributed from the PCIe controller node. An empty 51 + property is sufficient. 52 + - num-lanes: Number of lanes to use for this port. 53 + 54 + Examples: 55 + 56 + hifsys: syscon@1a000000 { 57 + compatible = "mediatek,mt7623-hifsys", 58 + "mediatek,mt2701-hifsys", 59 + "syscon"; 60 + reg = <0 0x1a000000 0 0x1000>; 61 + #clock-cells = <1>; 62 + #reset-cells = <1>; 63 + }; 64 + 65 + pcie: pcie-controller@1a140000 { 66 + compatible = "mediatek,mt7623-pcie"; 67 + device_type = "pci"; 68 + reg = <0 0x1a140000 0 0x1000>, /* PCIe shared registers */ 69 + <0 0x1a142000 0 0x1000>, /* Port0 registers */ 70 + <0 0x1a143000 0 0x1000>, /* Port1 registers */ 71 + <0 0x1a144000 0 0x1000>; /* Port2 registers */ 72 + #address-cells = <3>; 73 + #size-cells = <2>; 74 + #interrupt-cells = <1>; 75 + interrupt-map-mask = <0xf800 0 0 0>; 76 + interrupt-map = <0x0000 0 0 0 &sysirq GIC_SPI 193 IRQ_TYPE_LEVEL_LOW>, 77 + <0x0800 0 0 0 &sysirq GIC_SPI 194 IRQ_TYPE_LEVEL_LOW>, 78 + <0x1000 0 0 0 &sysirq GIC_SPI 195 IRQ_TYPE_LEVEL_LOW>; 79 + clocks = <&topckgen CLK_TOP_ETHIF_SEL>, 80 + <&hifsys CLK_HIFSYS_PCIE0>, 81 + <&hifsys CLK_HIFSYS_PCIE1>, 82 + <&hifsys CLK_HIFSYS_PCIE2>; 83 + clock-names = "free_ck", "sys_ck0", "sys_ck1", "sys_ck2"; 84 + resets = <&hifsys MT2701_HIFSYS_PCIE0_RST>, 85 + <&hifsys MT2701_HIFSYS_PCIE1_RST>, 86 + <&hifsys MT2701_HIFSYS_PCIE2_RST>; 87 + 
reset-names = "pcie-rst0", "pcie-rst1", "pcie-rst2"; 88 + phys = <&pcie0_phy>, <&pcie1_phy>, <&pcie2_phy>; 89 + phy-names = "pcie-phy0", "pcie-phy1", "pcie-phy2"; 90 + power-domains = <&scpsys MT2701_POWER_DOMAIN_HIF>; 91 + bus-range = <0x00 0xff>; 92 + ranges = <0x81000000 0 0x1a160000 0 0x1a160000 0 0x00010000 /* I/O space */ 93 + 0x83000000 0 0x60000000 0 0x60000000 0 0x10000000>; /* memory space */ 94 + 95 + pcie@0,0 { 96 + device_type = "pci"; 97 + reg = <0x0000 0 0 0 0>; 98 + #address-cells = <3>; 99 + #size-cells = <2>; 100 + #interrupt-cells = <1>; 101 + interrupt-map-mask = <0 0 0 0>; 102 + interrupt-map = <0 0 0 0 &sysirq GIC_SPI 193 IRQ_TYPE_LEVEL_LOW>; 103 + ranges; 104 + num-lanes = <1>; 105 + }; 106 + 107 + pcie@1,0 { 108 + device_type = "pci"; 109 + reg = <0x0800 0 0 0 0>; 110 + #address-cells = <3>; 111 + #size-cells = <2>; 112 + #interrupt-cells = <1>; 113 + interrupt-map-mask = <0 0 0 0>; 114 + interrupt-map = <0 0 0 0 &sysirq GIC_SPI 194 IRQ_TYPE_LEVEL_LOW>; 115 + ranges; 116 + num-lanes = <1>; 117 + }; 118 + 119 + pcie@2,0 { 120 + device_type = "pci"; 121 + reg = <0x1000 0 0 0 0>; 122 + #address-cells = <3>; 123 + #size-cells = <2>; 124 + #interrupt-cells = <1>; 125 + interrupt-map-mask = <0 0 0 0>; 126 + interrupt-map = <0 0 0 0 &sysirq GIC_SPI 195 IRQ_TYPE_LEVEL_LOW>; 127 + ranges; 128 + num-lanes = <1>; 129 + }; 130 + };
+19 -1
Documentation/devicetree/bindings/pci/qcom,pcie.txt
··· 8 8 - "qcom,pcie-apq8064" for apq8064 9 9 - "qcom,pcie-apq8084" for apq8084 10 10 - "qcom,pcie-msm8996" for msm8996 or apq8096 11 + - "qcom,pcie-ipq4019" for ipq4019 11 12 12 13 - reg: 13 14 Usage: required ··· 88 87 - "core" Clocks the pcie hw block 89 88 - "phy" Clocks the pcie PHY block 90 89 - clock-names: 91 - Usage: required for apq8084 90 + Usage: required for apq8084/ipq4019 92 91 Value type: <stringlist> 93 92 Definition: Should contain the following entries 94 93 - "aux" Auxiliary (AUX) clock ··· 126 125 Value type: <stringlist> 127 126 Definition: Should contain the following entries 128 127 - "core" Core reset 128 + 129 + - reset-names: 130 + Usage: required for ipq/apq8064 131 + Value type: <stringlist> 132 + Definition: Should contain the following entries 133 + - "axi_m" AXI master reset 134 + - "axi_s" AXI slave reset 135 + - "pipe" PIPE reset 136 + - "axi_m_vmid" VMID reset 137 + - "axi_s_xpu" XPU reset 138 + - "parf" PARF reset 139 + - "phy" PHY reset 140 + - "axi_m_sticky" AXI sticky reset 141 + - "pipe_sticky" PIPE sticky reset 142 + - "pwr" PWR reset 143 + - "ahb" AHB reset 144 + - "phy_ahb" PHY AHB reset 129 145 130 146 - power-domains: 131 147 Usage: required for apq8084 and msm8996/apq8096
+1 -1
Documentation/devicetree/bindings/pci/rcar-pci.txt
···
-* Renesas RCar PCIe interface
+* Renesas R-Car PCIe interface
 
 Required properties:
 compatible: "renesas,pcie-r8a7779" for the R8A7779 SoC;
+29
Documentation/devicetree/bindings/pci/tango-pcie.txt
··· 1 + Sigma Designs Tango PCIe controller 2 + 3 + Required properties: 4 + 5 + - compatible: "sigma,smp8759-pcie" 6 + - reg: address/size of PCI configuration space, address/size of register area 7 + - bus-range: defined by size of PCI configuration space 8 + - device_type: "pci" 9 + - #size-cells: <2> 10 + - #address-cells: <3> 11 + - msi-controller 12 + - ranges: translation from system to bus addresses 13 + - interrupts: spec for misc interrupts, spec for MSI 14 + 15 + Example: 16 + 17 + pcie@2e000 { 18 + compatible = "sigma,smp8759-pcie"; 19 + reg = <0x50000000 0x400000>, <0x2e000 0x100>; 20 + bus-range = <0 3>; 21 + device_type = "pci"; 22 + #size-cells = <2>; 23 + #address-cells = <3>; 24 + msi-controller; 25 + ranges = <0x02000000 0x0 0x00400000 0x50400000 0x0 0x3c00000>; 26 + interrupts = 27 + <54 IRQ_TYPE_LEVEL_HIGH>, /* misc interrupts */ 28 + <55 IRQ_TYPE_LEVEL_HIGH>; /* MSI */ 29 + };
+1
Documentation/driver-model/devres.txt
···
   devm_free_percpu()
 
 PCI
+  devm_pci_alloc_host_bridge()  : managed PCI host bridge allocation
   devm_pci_remap_cfgspace()     : ioremap PCI configuration space
   devm_pci_remap_cfg_resource() : ioremap PCI configuration space resource
   pcim_enable_device()          : after success, all PCI ops become managed
+16 -1
MAINTAINERS
··· 10160 10160 F: Documentation/devicetree/bindings/pci/hisilicon-pcie.txt 10161 10161 F: drivers/pci/dwc/pcie-hisi.c 10162 10162 10163 + PCIE DRIVER FOR HISILICON KIRIN 10164 + M: Xiaowei Song <songxiaowei@hisilicon.com> 10165 + M: Binghui Wang <wangbinghui@hisilicon.com> 10166 + L: linux-pci@vger.kernel.org 10167 + S: Maintained 10168 + F: Documentation/devicetree/bindings/pci/pcie-kirin.txt 10169 + F: drivers/pci/dwc/pcie-kirin.c 10170 + 10163 10171 PCIE DRIVER FOR ROCKCHIP 10164 10172 M: Shawn Lin <shawn.lin@rock-chips.com> 10165 - M: Wenrui Li <wenrui.li@rock-chips.com> 10166 10173 L: linux-pci@vger.kernel.org 10167 10174 L: linux-rockchip@lists.infradead.org 10168 10175 S: Maintained ··· 10190 10183 S: Supported 10191 10184 F: Documentation/devicetree/bindings/pci/pci-thunder-* 10192 10185 F: drivers/pci/host/pci-thunder-* 10186 + 10187 + PCIE DRIVER FOR MEDIATEK 10188 + M: Ryder Lee <ryder.lee@mediatek.com> 10189 + L: linux-pci@vger.kernel.org 10190 + L: linux-mediatek@lists.infradead.org 10191 + S: Supported 10192 + F: Documentation/devicetree/bindings/pci/mediatek* 10193 + F: drivers/pci/host/*mediatek* 10193 10194 10194 10195 PCMCIA SUBSYSTEM 10195 10196 P: Linux PCMCIA Team
+2 -1
arch/arm/include/asm/mach/pci.h
···
 struct pci_sys_data;
 struct pci_ops;
 struct pci_bus;
+struct pci_host_bridge;
 struct device;
 
 struct hw_pci {
···
 	unsigned int	io_optional:1;
 	void		**private_data;
 	int		(*setup)(int nr, struct pci_sys_data *);
-	struct pci_bus *(*scan)(int nr, struct pci_sys_data *);
+	int		(*scan)(int nr, struct pci_host_bridge *);
 	void		(*preinit)(void);
 	void		(*postinit)(void);
 	u8		(*swizzle)(struct pci_dev *dev, u8 *pin);
+29 -17
arch/arm/kernel/bios32.c
··· 458 458 int nr, busnr; 459 459 460 460 for (nr = busnr = 0; nr < hw->nr_controllers; nr++) { 461 - sys = kzalloc(sizeof(struct pci_sys_data), GFP_KERNEL); 462 - if (WARN(!sys, "PCI: unable to allocate sys data!")) 461 + struct pci_host_bridge *bridge; 462 + 463 + bridge = pci_alloc_host_bridge(sizeof(struct pci_sys_data)); 464 + if (WARN(!bridge, "PCI: unable to allocate bridge!")) 463 465 break; 466 + 467 + sys = pci_host_bridge_priv(bridge); 464 468 465 469 sys->busnr = busnr; 466 470 sys->swizzle = hw->swizzle; ··· 477 473 ret = hw->setup(nr, sys); 478 474 479 475 if (ret > 0) { 480 - struct pci_host_bridge *host_bridge; 481 476 482 477 ret = pcibios_init_resource(nr, sys, hw->io_optional); 483 478 if (ret) { ··· 484 481 break; 485 482 } 486 483 487 - if (hw->scan) 488 - sys->bus = hw->scan(nr, sys); 489 - else 490 - sys->bus = pci_scan_root_bus_msi(parent, 491 - sys->busnr, hw->ops, sys, 492 - &sys->resources, hw->msi_ctrl); 484 + bridge->map_irq = pcibios_map_irq; 485 + bridge->swizzle_irq = pcibios_swizzle; 493 486 494 - if (WARN(!sys->bus, "PCI: unable to scan bus!")) { 495 - kfree(sys); 487 + if (hw->scan) 488 + ret = hw->scan(nr, bridge); 489 + else { 490 + list_splice_init(&sys->resources, 491 + &bridge->windows); 492 + bridge->dev.parent = parent; 493 + bridge->sysdata = sys; 494 + bridge->busnr = sys->busnr; 495 + bridge->ops = hw->ops; 496 + bridge->msi = hw->msi_ctrl; 497 + bridge->align_resource = 498 + hw->align_resource; 499 + 500 + ret = pci_scan_root_bus_bridge(bridge); 501 + } 502 + 503 + if (WARN(ret < 0, "PCI: unable to scan bus!")) { 504 + pci_free_host_bridge(bridge); 496 505 break; 497 506 } 507 + 508 + sys->bus = bridge->bus; 498 509 499 510 busnr = sys->bus->busn_res.end + 1; 500 511 501 512 list_add(&sys->node, head); 502 - 503 - host_bridge = pci_find_host_bridge(sys->bus); 504 - host_bridge->align_resource = hw->align_resource; 505 513 } else { 506 - kfree(sys); 514 + pci_free_host_bridge(bridge); 507 515 if (ret < 0) 508 516 
break; 509 517 } ··· 532 518 pcibios_init_hw(parent, hw, &head); 533 519 if (hw->postinit) 534 520 hw->postinit(); 535 - 536 - pci_fixup_irqs(pcibios_swizzle, pcibios_map_irq); 537 521 538 522 list_for_each_entry(sys, &head, node) { 539 523 struct pci_bus *bus = sys->bus;
+12 -5
arch/arm/mach-dove/pcie.c
··· 152 152 } 153 153 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL, PCI_ANY_ID, rc_pci_fixup); 154 154 155 - static struct pci_bus __init * 156 - dove_pcie_scan_bus(int nr, struct pci_sys_data *sys) 155 + static int __init 156 + dove_pcie_scan_bus(int nr, struct pci_host_bridge *bridge) 157 157 { 158 + struct pci_sys_data *sys = pci_host_bridge_priv(bridge); 159 + 158 160 if (nr >= num_pcie_ports) { 159 161 BUG(); 160 - return NULL; 162 + return -EINVAL; 161 163 } 162 164 163 - return pci_scan_root_bus(NULL, sys->busnr, &pcie_ops, sys, 164 - &sys->resources); 165 + list_splice_init(&sys->resources, &bridge->windows); 166 + bridge->dev.parent = NULL; 167 + bridge->sysdata = sys; 168 + bridge->busnr = sys->busnr; 169 + bridge->ops = &pcie_ops; 170 + 171 + return pci_scan_root_bus_bridge(bridge); 165 172 } 166 173 167 174 static int __init dove_pcie_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+20 -11
arch/arm/mach-iop13xx/pci.c
··· 504 504 505 505 /* Scan an IOP13XX PCI bus. nr selects which ATU we use. 506 506 */ 507 - struct pci_bus *iop13xx_scan_bus(int nr, struct pci_sys_data *sys) 507 + int iop13xx_scan_bus(int nr, struct pci_host_bridge *bridge) 508 508 { 509 - int which_atu; 510 - struct pci_bus *bus = NULL; 509 + int which_atu, ret; 510 + struct pci_sys_data *sys = pci_host_bridge_priv(bridge); 511 511 512 512 switch (init_atu) { 513 513 case IOP13XX_INIT_ATU_ATUX: ··· 525 525 526 526 if (!which_atu) { 527 527 BUG(); 528 - return NULL; 528 + return -ENODEV; 529 529 } 530 + 531 + list_splice_init(&sys->resources, &bridge->windows); 532 + bridge->dev.parent = NULL; 533 + bridge->sysdata = sys; 534 + bridge->busnr = sys->busnr; 530 535 531 536 switch (which_atu) { 532 537 case IOP13XX_INIT_ATU_ATUX: ··· 540 535 while(time_before(jiffies, atux_trhfa_timeout)) 541 536 udelay(100); 542 537 543 - bus = pci_bus_atux = pci_scan_root_bus(NULL, sys->busnr, 544 - &iop13xx_atux_ops, 545 - sys, &sys->resources); 538 + bridge->ops = &iop13xx_atux_ops; 539 + ret = pci_scan_root_bus_bridge(bridge); 540 + if (!ret) 541 + pci_bus_atux = bridge->bus; 546 542 break; 547 543 case IOP13XX_INIT_ATU_ATUE: 548 - bus = pci_bus_atue = pci_scan_root_bus(NULL, sys->busnr, 549 - &iop13xx_atue_ops, 550 - sys, &sys->resources); 544 + bridge->ops = &iop13xx_atue_ops; 545 + ret = pci_scan_root_bus_bridge(bridge); 546 + if (!ret) 547 + pci_bus_atue = bridge->bus; 551 548 break; 549 + default: 550 + ret = -EINVAL; 552 551 } 553 552 554 - return bus; 553 + return ret; 555 554 } 556 555 557 556 /* This function is called from iop13xx_pci_init() after assigning valid
+2 -1
arch/arm/mach-iop13xx/pci.h
···
 extern size_t iop13xx_atux_mem_size;
 
 struct pci_sys_data;
+struct pci_host_bridge;
 struct hw_pci;
 int iop13xx_pci_setup(int nr, struct pci_sys_data *sys);
-struct pci_bus *iop13xx_scan_bus(int nr, struct pci_sys_data *);
+int iop13xx_scan_bus(int nr, struct pci_host_bridge *bridge);
 void iop13xx_atu_select(struct hw_pci *plat_pci);
 void iop13xx_pci_init(void);
 void iop13xx_map_pci_memory(void);
+11 -5
arch/arm/mach-mv78xx0/pcie.c
··· 194 194 } 195 195 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL, PCI_ANY_ID, rc_pci_fixup); 196 196 197 - static struct pci_bus __init * 198 - mv78xx0_pcie_scan_bus(int nr, struct pci_sys_data *sys) 197 + static int __init mv78xx0_pcie_scan_bus(int nr, struct pci_host_bridge *bridge) 199 198 { 199 + struct pci_sys_data *sys = pci_host_bridge_priv(bridge); 200 + 200 201 if (nr >= num_pcie_ports) { 201 202 BUG(); 202 - return NULL; 203 + return -EINVAL; 203 204 } 204 205 205 - return pci_scan_root_bus(NULL, sys->busnr, &pcie_ops, sys, 206 - &sys->resources); 206 + list_splice_init(&sys->resources, &bridge->windows); 207 + bridge->dev.parent = NULL; 208 + bridge->sysdata = sys; 209 + bridge->busnr = sys->busnr; 210 + bridge->ops = &pcie_ops; 211 + 212 + return pci_scan_root_bus_bridge(bridge); 207 213 } 208 214 209 215 static int __init mv78xx0_pcie_map_irq(const struct pci_dev *dev, u8 slot,
+2 -1
arch/arm/mach-orion5x/common.h
···
  * PCIe/PCI functions.
  */
 struct pci_bus;
+struct pci_host_bridge;
 struct pci_sys_data;
 struct pci_dev;
 
···
 void orion5x_pci_disable(void);
 void orion5x_pci_set_cardbus_mode(void);
 int orion5x_pci_sys_setup(int nr, struct pci_sys_data *sys);
-struct pci_bus *orion5x_pci_sys_scan_bus(int nr, struct pci_sys_data *sys);
+int orion5x_pci_sys_scan_bus(int nr, struct pci_host_bridge *bridge);
 int orion5x_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin);
 
 struct tag;
+17 -8
arch/arm/mach-orion5x/pci.c
··· 555 555 return 0; 556 556 } 557 557 558 - struct pci_bus __init *orion5x_pci_sys_scan_bus(int nr, struct pci_sys_data *sys) 558 + int __init orion5x_pci_sys_scan_bus(int nr, struct pci_host_bridge *bridge) 559 559 { 560 - if (nr == 0) 561 - return pci_scan_root_bus(NULL, sys->busnr, &pcie_ops, sys, 562 - &sys->resources); 560 + struct pci_sys_data *sys = pci_host_bridge_priv(bridge); 563 561 564 - if (nr == 1 && !orion5x_pci_disabled) 565 - return pci_scan_root_bus(NULL, sys->busnr, &pci_ops, sys, 566 - &sys->resources); 562 + list_splice_init(&sys->resources, &bridge->windows); 563 + bridge->dev.parent = NULL; 564 + bridge->sysdata = sys; 565 + bridge->busnr = sys->busnr; 566 + 567 + if (nr == 0) { 568 + bridge->ops = &pcie_ops; 569 + return pci_scan_root_bus_bridge(bridge); 570 + } 571 + 572 + if (nr == 1 && !orion5x_pci_disabled) { 573 + bridge->ops = &pci_ops; 574 + return pci_scan_root_bus_bridge(bridge); 575 + } 567 576 568 577 BUG(); 569 - return NULL; 578 + return -ENODEV; 570 579 } 571 580 572 581 int __init orion5x_pci_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
+4 -6
arch/arm64/kernel/pci.c
···
 	return res->start;
 }
 
+#ifdef CONFIG_ACPI
 /*
  * Try to assign the IRQ number when probing a new device
  */
 int pcibios_alloc_irq(struct pci_dev *dev)
 {
-	if (acpi_disabled)
-		dev->irq = of_irq_parse_and_map_pci(dev, 0, 0);
-#ifdef CONFIG_ACPI
-	else
-		return acpi_pci_irq_enable(dev);
-#endif
+	if (!acpi_disabled)
+		acpi_pci_irq_enable(dev);
 
 	return 0;
 }
+#endif
 
 /*
  * raw_pci_read/write - Platform-specific PCI config space access.
-1
arch/mips/include/asm/mach-loongson64/cs5536/cs5536_pci.h
···
 #define PCI_BAR3_REG	0x1c
 #define PCI_BAR4_REG	0x20
 #define PCI_BAR5_REG	0x24
-#define PCI_BAR_COUNT	6
 #define PCI_BAR_RANGE_MASK	0xFFFFFFFF
 
 /* CARDBUS CIS POINTER */
-1
arch/mips/include/asm/pci.h
···
 	unsigned long io_offset;
 	unsigned long io_map_base;
 	struct resource *busn_resource;
-	unsigned long busn_offset;
 
 #ifndef CONFIG_PCI_DOMAINS_GENERIC
 	unsigned int index;
+1 -2
arch/mips/pci/pci-legacy.c
···
 				hose->mem_resource, hose->mem_offset);
 	pci_add_resource_offset(&resources,
 				hose->io_resource, hose->io_offset);
-	pci_add_resource_offset(&resources,
-				hose->busn_resource, hose->busn_offset);
+	pci_add_resource(&resources, hose->busn_resource);
 	bus = pci_scan_root_bus(NULL, next_busno, hose->pci_ops, hose,
 				&resources);
 	hose->bus = bus;
+6
arch/x86/include/uapi/asm/hyperv.h
···
 #define HV_X64_DEPRECATING_AEOI_RECOMMENDED	(1 << 9)
 
 /*
+ * HV_VP_SET available
+ */
+#define HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED	(1 << 11)
+
+
+/*
  * Crash notification flag.
  */
 #define HV_CRASH_CTL_CRASH_NOTIFY (1ULL << 63)
+5 -22
arch/x86/pci/common.c
··· 24 24 25 25 unsigned int pci_early_dump_regs; 26 26 static int pci_bf_sort; 27 - static int smbios_type_b1_flag; 28 27 int pci_routeirq; 29 28 int noioapicquirk; 30 29 #ifdef CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS ··· 196 197 static void __init read_dmi_type_b1(const struct dmi_header *dm, 197 198 void *private_data) 198 199 { 199 - u8 *d = (u8 *)dm + 4; 200 + u8 *data = (u8 *)dm + 4; 200 201 201 202 if (dm->type != 0xB1) 202 203 return; 203 - switch (((*(u32 *)d) >> 9) & 0x03) { 204 - case 0x00: 205 - printk(KERN_INFO "dmi type 0xB1 record - unknown flag\n"); 206 - break; 207 - case 0x01: /* set pci=bfsort */ 208 - smbios_type_b1_flag = 1; 209 - break; 210 - case 0x02: /* do not set pci=bfsort */ 211 - smbios_type_b1_flag = 2; 212 - break; 213 - default: 214 - break; 215 - } 204 + if ((((*(u32 *)data) >> 9) & 0x03) == 0x01) 205 + set_bf_sort((const struct dmi_system_id *)private_data); 216 206 } 217 207 218 208 static int __init find_sort_method(const struct dmi_system_id *d) 219 209 { 220 - dmi_walk(read_dmi_type_b1, NULL); 221 - 222 - if (smbios_type_b1_flag == 1) { 223 - set_bf_sort(d); 224 - return 0; 225 - } 226 - return -1; 210 + dmi_walk(read_dmi_type_b1, (void *)d); 211 + return 0; 227 212 } 228 213 229 214 /*
+47
arch/x86/pci/fixup.c
··· 571 571 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6f60, pci_invalid_bar); 572 572 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fa0, pci_invalid_bar); 573 573 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fc0, pci_invalid_bar); 574 + 575 + /* 576 + * Device [1022:7808] 577 + * 23. USB Wake on Connect/Disconnect with Low Speed Devices 578 + * https://support.amd.com/TechDocs/46837.pdf 579 + * Appendix A2 580 + * https://support.amd.com/TechDocs/42413.pdf 581 + */ 582 + static void pci_fixup_amd_ehci_pme(struct pci_dev *dev) 583 + { 584 + dev_info(&dev->dev, "PME# does not work under D3, disabling it\n"); 585 + dev->pme_support &= ~((PCI_PM_CAP_PME_D3 | PCI_PM_CAP_PME_D3cold) 586 + >> PCI_PM_CAP_PME_SHIFT); 587 + } 588 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x7808, pci_fixup_amd_ehci_pme); 589 + 590 + /* 591 + * Apple MacBook Pro: Avoid [mem 0x7fa00000-0x7fbfffff] 592 + * 593 + * Using the [mem 0x7fa00000-0x7fbfffff] region, e.g., by assigning it to 594 + * the 00:1c.0 Root Port, causes a conflict with [io 0x1804], which is used 595 + * for soft poweroff and suspend-to-RAM. 596 + * 597 + * As far as we know, this is related to the address space, not to the Root 598 + * Port itself. Attaching the quirk to the Root Port is a convenience, but 599 + * it could probably also be a standalone DMI quirk. 
600 + * 601 + * https://bugzilla.kernel.org/show_bug.cgi?id=103211 602 + */ 603 + static void quirk_apple_mbp_poweroff(struct pci_dev *pdev) 604 + { 605 + struct device *dev = &pdev->dev; 606 + struct resource *res; 607 + 608 + if ((!dmi_match(DMI_PRODUCT_NAME, "MacBookPro11,4") && 609 + !dmi_match(DMI_PRODUCT_NAME, "MacBookPro11,5")) || 610 + pdev->bus->number != 0 || pdev->devfn != PCI_DEVFN(0x1c, 0)) 611 + return; 612 + 613 + res = request_mem_region(0x7fa00000, 0x200000, 614 + "MacBook Pro poweroff workaround"); 615 + if (res) 616 + dev_info(dev, "claimed %s %pR\n", res->name, res); 617 + else 618 + dev_info(dev, "can't work around MacBook Pro poweroff issue\n"); 619 + } 620 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8c10, quirk_apple_mbp_poweroff);
+1 -1
arch/x86/pci/pcbios.c
···
 	pcibios_enabled = 1;
 	set_memory_x(PAGE_OFFSET + BIOS_BEGIN, (BIOS_END - BIOS_BEGIN) >> PAGE_SHIFT);
 	if (__supported_pte_mask & _PAGE_NX)
-		printk(KERN_INFO "PCI : PCI BIOS area is rw and x. Use pci=nobios if you want it NX.\n");
+		printk(KERN_INFO "PCI: PCI BIOS area is rw and x. Use pci=nobios if you want it NX.\n");
 }
 
 /*
-4
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 1152 1152 return; 1153 1153 1154 1154 if (state == VGA_SWITCHEROO_ON) { 1155 - unsigned d3_delay = dev->pdev->d3_delay; 1156 - 1157 1155 pr_info("amdgpu: switched on\n"); 1158 1156 /* don't suspend or resume card normally */ 1159 1157 dev->switch_power_state = DRM_SWITCH_POWER_CHANGING; 1160 1158 1161 1159 amdgpu_device_resume(dev, true, true); 1162 - 1163 - dev->pdev->d3_delay = d3_delay; 1164 1160 1165 1161 dev->switch_power_state = DRM_SWITCH_POWER_ON; 1166 1162 drm_kms_helper_poll_enable(dev);
-11
drivers/gpu/drm/radeon/radeon_device.c
··· 113 113 #endif 114 114 115 115 #define RADEON_PX_QUIRK_DISABLE_PX (1 << 0) 116 - #define RADEON_PX_QUIRK_LONG_WAKEUP (1 << 1) 117 116 118 117 struct radeon_px_quirk { 119 118 u32 chip_vendor; ··· 139 140 * https://bugs.freedesktop.org/show_bug.cgi?id=101491 140 141 */ 141 142 { PCI_VENDOR_ID_ATI, 0x6741, 0x1043, 0x2122, RADEON_PX_QUIRK_DISABLE_PX }, 142 - /* macbook pro 8.2 */ 143 - { PCI_VENDOR_ID_ATI, 0x6741, PCI_VENDOR_ID_APPLE, 0x00e2, RADEON_PX_QUIRK_LONG_WAKEUP }, 144 143 { 0, 0, 0, 0, 0 }, 145 144 }; 146 145 ··· 1242 1245 static void radeon_switcheroo_set_state(struct pci_dev *pdev, enum vga_switcheroo_state state) 1243 1246 { 1244 1247 struct drm_device *dev = pci_get_drvdata(pdev); 1245 - struct radeon_device *rdev = dev->dev_private; 1246 1248 1247 1249 if (radeon_is_px(dev) && state == VGA_SWITCHEROO_OFF) 1248 1250 return; 1249 1251 1250 1252 if (state == VGA_SWITCHEROO_ON) { 1251 - unsigned d3_delay = dev->pdev->d3_delay; 1252 - 1253 1253 pr_info("radeon: switched on\n"); 1254 1254 /* don't suspend or resume card normally */ 1255 1255 dev->switch_power_state = DRM_SWITCH_POWER_CHANGING; 1256 1256 1257 - if (d3_delay < 20 && (rdev->px_quirk_flags & RADEON_PX_QUIRK_LONG_WAKEUP)) 1258 - dev->pdev->d3_delay = 20; 1259 - 1260 1257 radeon_resume_kms(dev, true, true); 1261 - 1262 - dev->pdev->d3_delay = d3_delay; 1263 1258 1264 1259 dev->switch_power_state = DRM_SWITCH_POWER_ON; 1265 1260 drm_kms_helper_poll_enable(dev);
+13 -23
drivers/net/ethernet/intel/fm10k/fm10k_pci.c
··· 2348 2348 netif_device_attach(netdev); 2349 2349 } 2350 2350 2351 - /** 2352 - * fm10k_io_reset_notify - called when PCI function is reset 2353 - * @pdev: Pointer to PCI device 2354 - * 2355 - * This callback is called when the PCI function is reset such as from 2356 - * /sys/class/net/<enpX>/device/reset or similar. When prepare is true, it 2357 - * means we should prepare for a function reset. If prepare is false, it means 2358 - * the function reset just occurred. 2359 - */ 2360 - static void fm10k_io_reset_notify(struct pci_dev *pdev, bool prepare) 2351 + static void fm10k_io_reset_prepare(struct pci_dev *pdev) 2352 + { 2353 + /* warn incase we have any active VF devices */ 2354 + if (pci_num_vf(pdev)) 2355 + dev_warn(&pdev->dev, 2356 + "PCIe FLR may cause issues for any active VF devices\n"); 2357 + fm10k_prepare_suspend(pci_get_drvdata(pdev)); 2358 + } 2359 + 2360 + static void fm10k_io_reset_done(struct pci_dev *pdev) 2361 2361 { 2362 2362 struct fm10k_intfc *interface = pci_get_drvdata(pdev); 2363 - int err = 0; 2364 - 2365 - if (prepare) { 2366 - /* warn incase we have any active VF devices */ 2367 - if (pci_num_vf(pdev)) 2368 - dev_warn(&pdev->dev, 2369 - "PCIe FLR may cause issues for any active VF devices\n"); 2370 - 2371 - fm10k_prepare_suspend(interface); 2372 - } else { 2373 - err = fm10k_handle_resume(interface); 2374 - } 2363 + int err = fm10k_handle_resume(interface); 2375 2364 2376 2365 if (err) { 2377 2366 dev_warn(&pdev->dev, ··· 2373 2384 .error_detected = fm10k_io_error_detected, 2374 2385 .slot_reset = fm10k_io_slot_reset, 2375 2386 .resume = fm10k_io_resume, 2376 - .reset_notify = fm10k_io_reset_notify, 2387 + .reset_prepare = fm10k_io_reset_prepare, 2388 + .reset_done = fm10k_io_reset_done, 2377 2389 }; 2378 2390 2379 2391 static struct pci_driver fm10k_driver = {
+41 -28
drivers/net/wireless/marvell/mwifiex/pcie.c
··· 346 346 347 347 MODULE_DEVICE_TABLE(pci, mwifiex_ids); 348 348 349 - static void mwifiex_pcie_reset_notify(struct pci_dev *pdev, bool prepare) 349 + /* 350 + * Cleanup all software without cleaning anything related to PCIe and HW. 351 + */ 352 + static void mwifiex_pcie_reset_prepare(struct pci_dev *pdev) 353 + { 354 + struct pcie_service_card *card = pci_get_drvdata(pdev); 355 + struct mwifiex_adapter *adapter = card->adapter; 356 + 357 + if (!adapter) { 358 + dev_err(&pdev->dev, "%s: adapter structure is not valid\n", 359 + __func__); 360 + return; 361 + } 362 + 363 + mwifiex_dbg(adapter, INFO, 364 + "%s: vendor=0x%4.04x device=0x%4.04x rev=%d Pre-FLR\n", 365 + __func__, pdev->vendor, pdev->device, pdev->revision); 366 + 367 + mwifiex_shutdown_sw(adapter); 368 + clear_bit(MWIFIEX_IFACE_WORK_DEVICE_DUMP, &card->work_flags); 369 + clear_bit(MWIFIEX_IFACE_WORK_CARD_RESET, &card->work_flags); 370 + mwifiex_dbg(adapter, INFO, "%s, successful\n", __func__); 371 + } 372 + 373 + /* 374 + * Kernel stores and restores PCIe function context before and after performing 375 + * FLR respectively. Reconfigure the software and firmware including firmware 376 + * redownload. 377 + */ 378 + static void mwifiex_pcie_reset_done(struct pci_dev *pdev) 350 379 { 351 380 struct pcie_service_card *card = pci_get_drvdata(pdev); 352 381 struct mwifiex_adapter *adapter = card->adapter; ··· 388 359 } 389 360 390 361 mwifiex_dbg(adapter, INFO, 391 - "%s: vendor=0x%4.04x device=0x%4.04x rev=%d %s\n", 392 - __func__, pdev->vendor, pdev->device, 393 - pdev->revision, 394 - prepare ? "Pre-FLR" : "Post-FLR"); 362 + "%s: vendor=0x%4.04x device=0x%4.04x rev=%d Post-FLR\n", 363 + __func__, pdev->vendor, pdev->device, pdev->revision); 395 364 396 - if (prepare) { 397 - /* Kernel would be performing FLR after this notification. 398 - * Cleanup all software without cleaning anything related to 399 - * PCIe and HW. 
400 - */ 401 - mwifiex_shutdown_sw(adapter); 402 - clear_bit(MWIFIEX_IFACE_WORK_DEVICE_DUMP, &card->work_flags); 403 - clear_bit(MWIFIEX_IFACE_WORK_CARD_RESET, &card->work_flags); 404 - } else { 405 - /* Kernel stores and restores PCIe function context before and 406 - * after performing FLR respectively. Reconfigure the software 407 - * and firmware including firmware redownload 408 - */ 409 - ret = mwifiex_reinit_sw(adapter); 410 - if (ret) { 411 - dev_err(&pdev->dev, "reinit failed: %d\n", ret); 412 - return; 413 - } 414 - } 415 - mwifiex_dbg(adapter, INFO, "%s, successful\n", __func__); 365 + ret = mwifiex_reinit_sw(adapter); 366 + if (ret) 367 + dev_err(&pdev->dev, "reinit failed: %d\n", ret); 368 + else 369 + mwifiex_dbg(adapter, INFO, "%s, successful\n", __func__); 416 370 } 417 371 418 - static const struct pci_error_handlers mwifiex_pcie_err_handler[] = { 419 - { .reset_notify = mwifiex_pcie_reset_notify, }, 372 + static const struct pci_error_handlers mwifiex_pcie_err_handler = { 373 + .reset_prepare = mwifiex_pcie_reset_prepare, 374 + .reset_done = mwifiex_pcie_reset_done, 420 375 }; 421 376 422 377 #ifdef CONFIG_PM_SLEEP ··· 421 408 }, 422 409 #endif 423 410 .shutdown = mwifiex_pcie_shutdown, 424 - .err_handler = mwifiex_pcie_err_handler, 411 + .err_handler = &mwifiex_pcie_err_handler, 425 412 }; 426 413 427 414 /*
+9 -6
drivers/nvme/host/pci.c
··· 2303 2303 return result; 2304 2304 } 2305 2305 2306 - static void nvme_reset_notify(struct pci_dev *pdev, bool prepare) 2306 + static void nvme_reset_prepare(struct pci_dev *pdev) 2307 2307 { 2308 2308 struct nvme_dev *dev = pci_get_drvdata(pdev); 2309 + nvme_dev_disable(dev, false); 2310 + } 2309 2311 2310 - if (prepare) 2311 - nvme_dev_disable(dev, false); 2312 - else 2313 - nvme_reset_ctrl(&dev->ctrl); 2312 + static void nvme_reset_done(struct pci_dev *pdev) 2313 + { 2314 + struct nvme_dev *dev = pci_get_drvdata(pdev); 2315 + nvme_reset_ctrl(&dev->ctrl); 2314 2316 } 2315 2317 2316 2318 static void nvme_shutdown(struct pci_dev *pdev) ··· 2436 2434 .error_detected = nvme_error_detected, 2437 2435 .slot_reset = nvme_slot_reset, 2438 2436 .resume = nvme_error_resume, 2439 - .reset_notify = nvme_reset_notify, 2437 + .reset_prepare = nvme_reset_prepare, 2438 + .reset_done = nvme_reset_done, 2440 2439 }; 2441 2440 2442 2441 static const struct pci_device_id nvme_id_table[] = {
+2 -1
drivers/of/of_pci_irq.c
··· 113 113 * @pin: PCI irq pin number; passed when used as map_irq callback. Unused 114 114 * 115 115 * @slot and @pin are unused, but included in the function so that this 116 - * function can be used directly as the map_irq callback to pci_fixup_irqs(). 116 + * function can be used directly as the map_irq callback to 117 + * pci_assign_irq() and struct pci_host_bridge.map_irq pointer 117 118 */ 118 119 int of_irq_parse_and_map_pci(const struct pci_dev *dev, u8 slot, u8 pin) 119 120 {
+2 -15
drivers/pci/Makefile
··· 4 4 5 5 obj-y += access.o bus.o probe.o host-bridge.o remove.o pci.o \ 6 6 pci-driver.o search.o pci-sysfs.o rom.o setup-res.o \ 7 - irq.o vpd.o setup-bus.o vc.o mmap.o 7 + irq.o vpd.o setup-bus.o vc.o mmap.o setup-irq.o 8 + 8 9 obj-$(CONFIG_PROC_FS) += proc.o 9 10 obj-$(CONFIG_SYSFS) += slot.o 10 11 ··· 28 27 29 28 obj-$(CONFIG_PCI_ATS) += ats.o 30 29 obj-$(CONFIG_PCI_IOV) += iov.o 31 - 32 - # 33 - # Some architectures use the generic PCI setup functions 34 - # 35 - obj-$(CONFIG_ALPHA) += setup-irq.o 36 - obj-$(CONFIG_ARC) += setup-irq.o 37 - obj-$(CONFIG_ARM) += setup-irq.o 38 - obj-$(CONFIG_ARM64) += setup-irq.o 39 - obj-$(CONFIG_UNICORE32) += setup-irq.o 40 - obj-$(CONFIG_SUPERH) += setup-irq.o 41 - obj-$(CONFIG_MIPS) += setup-irq.o 42 - obj-$(CONFIG_TILE) += setup-irq.o 43 - obj-$(CONFIG_SPARC_LEON) += setup-irq.o 44 - obj-$(CONFIG_M68K) += setup-irq.o 45 30 46 31 # 47 32 # ACPI Related PCI FW Functions
+71 -16
drivers/pci/ats.c
··· 153 153 u32 max_requests; 154 154 int pos; 155 155 156 + if (WARN_ON(pdev->pri_enabled)) 157 + return -EBUSY; 158 + 156 159 pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); 157 160 if (!pos) 158 161 return -EINVAL; 159 162 160 - pci_read_config_word(pdev, pos + PCI_PRI_CTRL, &control); 161 163 pci_read_config_word(pdev, pos + PCI_PRI_STATUS, &status); 162 - if ((control & PCI_PRI_CTRL_ENABLE) || 163 - !(status & PCI_PRI_STATUS_STOPPED)) 164 + if (!(status & PCI_PRI_STATUS_STOPPED)) 164 165 return -EBUSY; 165 166 166 167 pci_read_config_dword(pdev, pos + PCI_PRI_MAX_REQ, &max_requests); 167 168 reqs = min(max_requests, reqs); 169 + pdev->pri_reqs_alloc = reqs; 168 170 pci_write_config_dword(pdev, pos + PCI_PRI_ALLOC_REQ, reqs); 169 171 170 - control |= PCI_PRI_CTRL_ENABLE; 172 + control = PCI_PRI_CTRL_ENABLE; 171 173 pci_write_config_word(pdev, pos + PCI_PRI_CTRL, control); 174 + 175 + pdev->pri_enabled = 1; 172 176 173 177 return 0; 174 178 } ··· 189 185 u16 control; 190 186 int pos; 191 187 188 + if (WARN_ON(!pdev->pri_enabled)) 189 + return; 190 + 192 191 pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); 193 192 if (!pos) 194 193 return; ··· 199 192 pci_read_config_word(pdev, pos + PCI_PRI_CTRL, &control); 200 193 control &= ~PCI_PRI_CTRL_ENABLE; 201 194 pci_write_config_word(pdev, pos + PCI_PRI_CTRL, control); 195 + 196 + pdev->pri_enabled = 0; 202 197 } 203 198 EXPORT_SYMBOL_GPL(pci_disable_pri); 199 + 200 + /** 201 + * pci_restore_pri_state - Restore PRI 202 + * @pdev: PCI device structure 203 + */ 204 + void pci_restore_pri_state(struct pci_dev *pdev) 205 + { 206 + u16 control = PCI_PRI_CTRL_ENABLE; 207 + u32 reqs = pdev->pri_reqs_alloc; 208 + int pos; 209 + 210 + if (!pdev->pri_enabled) 211 + return; 212 + 213 + pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); 214 + if (!pos) 215 + return; 216 + 217 + pci_write_config_dword(pdev, pos + PCI_PRI_ALLOC_REQ, reqs); 218 + pci_write_config_word(pdev, pos + PCI_PRI_CTRL, control); 219 + } 
220 + EXPORT_SYMBOL_GPL(pci_restore_pri_state); 204 221 205 222 /** 206 223 * pci_reset_pri - Resets device's PRI state ··· 238 207 u16 control; 239 208 int pos; 240 209 210 + if (WARN_ON(pdev->pri_enabled)) 211 + return -EBUSY; 212 + 241 213 pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI); 242 214 if (!pos) 243 215 return -EINVAL; 244 216 245 - pci_read_config_word(pdev, pos + PCI_PRI_CTRL, &control); 246 - if (control & PCI_PRI_CTRL_ENABLE) 247 - return -EBUSY; 248 - 249 - control |= PCI_PRI_CTRL_RESET; 250 - 217 + control = PCI_PRI_CTRL_RESET; 251 218 pci_write_config_word(pdev, pos + PCI_PRI_CTRL, control); 252 219 253 220 return 0; ··· 268 239 u16 control, supported; 269 240 int pos; 270 241 242 + if (WARN_ON(pdev->pasid_enabled)) 243 + return -EBUSY; 244 + 271 245 pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PASID); 272 246 if (!pos) 273 247 return -EINVAL; 274 248 275 - pci_read_config_word(pdev, pos + PCI_PASID_CTRL, &control); 276 249 pci_read_config_word(pdev, pos + PCI_PASID_CAP, &supported); 277 - 278 - if (control & PCI_PASID_CTRL_ENABLE) 279 - return -EINVAL; 280 - 281 250 supported &= PCI_PASID_CAP_EXEC | PCI_PASID_CAP_PRIV; 282 251 283 252 /* User wants to enable anything unsupported? 
*/ ··· 283 256 return -EINVAL; 284 257 285 258 control = PCI_PASID_CTRL_ENABLE | features; 259 + pdev->pasid_features = features; 286 260 287 261 pci_write_config_word(pdev, pos + PCI_PASID_CTRL, control); 262 + 263 + pdev->pasid_enabled = 1; 288 264 289 265 return 0; 290 266 } ··· 296 266 /** 297 267 * pci_disable_pasid - Disable the PASID capability 298 268 * @pdev: PCI device structure 299 - * 300 269 */ 301 270 void pci_disable_pasid(struct pci_dev *pdev) 302 271 { 303 272 u16 control = 0; 304 273 int pos; 305 274 275 + if (WARN_ON(!pdev->pasid_enabled)) 276 + return; 277 + 306 278 pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PASID); 307 279 if (!pos) 308 280 return; 309 281 310 282 pci_write_config_word(pdev, pos + PCI_PASID_CTRL, control); 283 + 284 + pdev->pasid_enabled = 0; 311 285 } 312 286 EXPORT_SYMBOL_GPL(pci_disable_pasid); 287 + 288 + /** 289 + * pci_restore_pasid_state - Restore PASID capabilities 290 + * @pdev: PCI device structure 291 + */ 292 + void pci_restore_pasid_state(struct pci_dev *pdev) 293 + { 294 + u16 control; 295 + int pos; 296 + 297 + if (!pdev->pasid_enabled) 298 + return; 299 + 300 + pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PASID); 301 + if (!pos) 302 + return; 303 + 304 + control = PCI_PASID_CTRL_ENABLE | pdev->pasid_features; 305 + pci_write_config_word(pdev, pos + PCI_PASID_CTRL, control); 306 + } 307 + EXPORT_SYMBOL_GPL(pci_restore_pasid_state); 313 308 314 309 /** 315 310 * pci_pasid_features - Check which PASID features are supported
+11
drivers/pci/dwc/Kconfig
··· 16 16 17 17 config PCI_DRA7XX 18 18 bool "TI DRA7xx PCIe controller" 19 + depends on SOC_DRA7XX || COMPILE_TEST 19 20 depends on (PCI && PCI_MSI_IRQ_DOMAIN) || PCI_ENDPOINT 20 21 depends on OF && HAS_IOMEM && TI_PIPE3 21 22 help ··· 158 157 help 159 158 Say Y here to enable PCIe controller support on Axis ARTPEC-6 160 159 SoCs. This PCIe controller uses the DesignWare core. 160 + 161 + config PCIE_KIRIN 162 + depends on OF && ARM64 163 + bool "HiSilicon Kirin series SoCs PCIe controllers" 164 + depends on PCI 165 + select PCIEPORTBUS 166 + select PCIE_DW_HOST 167 + help 168 + Say Y here if you want PCIe controller support 169 + on HiSilicon Kirin series SoCs. 161 170 162 171 endmenu
+1
drivers/pci/dwc/Makefile
··· 13 13 obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o 14 14 obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o 15 15 obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o 16 + obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o 16 17 17 18 # The following drivers are for devices that use the generic ACPI 18 19 # pci_root.c driver but don't support standard ECAM config access.
+3 -3
drivers/pci/dwc/pci-dra7xx.c
··· 174 174 static void dra7xx_pcie_enable_msi_interrupts(struct dra7xx_pcie *dra7xx) 175 175 { 176 176 dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI, 177 - ~LEG_EP_INTERRUPTS & ~MSI); 177 + LEG_EP_INTERRUPTS | MSI); 178 178 179 179 dra7xx_pcie_writel(dra7xx, 180 180 PCIECTRL_DRA7XX_CONF_IRQENABLE_SET_MSI, ··· 184 184 static void dra7xx_pcie_enable_wrapper_interrupts(struct dra7xx_pcie *dra7xx) 185 185 { 186 186 dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MAIN, 187 - ~INTERRUPTS); 187 + INTERRUPTS); 188 188 dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQENABLE_SET_MAIN, 189 189 INTERRUPTS); 190 190 } ··· 208 208 dra7xx_pcie_enable_interrupts(dra7xx); 209 209 } 210 210 211 - static struct dw_pcie_host_ops dra7xx_pcie_host_ops = { 211 + static const struct dw_pcie_host_ops dra7xx_pcie_host_ops = { 212 212 .host_init = dra7xx_pcie_host_init, 213 213 }; 214 214
+1 -1
drivers/pci/dwc/pci-exynos.c
··· 590 590 exynos_pcie_enable_interrupts(ep); 591 591 } 592 592 593 - static struct dw_pcie_host_ops exynos_pcie_host_ops = { 593 + static const struct dw_pcie_host_ops exynos_pcie_host_ops = { 594 594 .rd_own_conf = exynos_pcie_rd_own_conf, 595 595 .wr_own_conf = exynos_pcie_wr_own_conf, 596 596 .host_init = exynos_pcie_host_init,
+37 -2
drivers/pci/dwc/pci-imx6.c
··· 24 24 #include <linux/pci.h> 25 25 #include <linux/platform_device.h> 26 26 #include <linux/regmap.h> 27 + #include <linux/regulator/consumer.h> 27 28 #include <linux/resource.h> 28 29 #include <linux/signal.h> 29 30 #include <linux/types.h> ··· 60 59 u32 tx_swing_full; 61 60 u32 tx_swing_low; 62 61 int link_gen; 62 + struct regulator *vpcie; 63 63 }; 64 64 65 65 /* Parameters for the waiting for PCIe PHY PLL to lock on i.MX7 */ ··· 286 284 287 285 static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie) 288 286 { 287 + struct device *dev = imx6_pcie->pci->dev; 288 + 289 289 switch (imx6_pcie->variant) { 290 290 case IMX7D: 291 291 reset_control_assert(imx6_pcie->pciephy_reset); ··· 313 309 regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1, 314 310 IMX6Q_GPR1_PCIE_REF_CLK_EN, 0 << 16); 315 311 break; 312 + } 313 + 314 + if (imx6_pcie->vpcie && regulator_is_enabled(imx6_pcie->vpcie) > 0) { 315 + int ret = regulator_disable(imx6_pcie->vpcie); 316 + 317 + if (ret) 318 + dev_err(dev, "failed to disable vpcie regulator: %d\n", 319 + ret); 316 320 } 317 321 } 318 322 ··· 388 376 struct device *dev = pci->dev; 389 377 int ret; 390 378 379 + if (imx6_pcie->vpcie && !regulator_is_enabled(imx6_pcie->vpcie)) { 380 + ret = regulator_enable(imx6_pcie->vpcie); 381 + if (ret) { 382 + dev_err(dev, "failed to enable vpcie regulator: %d\n", 383 + ret); 384 + return; 385 + } 386 + } 387 + 391 388 ret = clk_prepare_enable(imx6_pcie->pcie_phy); 392 389 if (ret) { 393 390 dev_err(dev, "unable to enable pcie_phy clock\n"); 394 - return; 391 + goto err_pcie_phy; 395 392 } 396 393 397 394 ret = clk_prepare_enable(imx6_pcie->pcie_bus); ··· 460 439 clk_disable_unprepare(imx6_pcie->pcie_bus); 461 440 err_pcie_bus: 462 441 clk_disable_unprepare(imx6_pcie->pcie_phy); 442 + err_pcie_phy: 443 + if (imx6_pcie->vpcie && regulator_is_enabled(imx6_pcie->vpcie) > 0) { 444 + ret = regulator_disable(imx6_pcie->vpcie); 445 + if (ret) 446 + dev_err(dev, "failed to disable vpcie 
regulator: %d\n", 447 + ret); 448 + } 463 449 } 464 450 465 451 static void imx6_pcie_init_phy(struct imx6_pcie *imx6_pcie) ··· 657 629 PCIE_PHY_DEBUG_R1_XMLH_LINK_UP; 658 630 } 659 631 660 - static struct dw_pcie_host_ops imx6_pcie_host_ops = { 632 + static const struct dw_pcie_host_ops imx6_pcie_host_ops = { 661 633 .host_init = imx6_pcie_host_init, 662 634 }; 663 635 ··· 829 801 &imx6_pcie->link_gen); 830 802 if (ret) 831 803 imx6_pcie->link_gen = 1; 804 + 805 + imx6_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie"); 806 + if (IS_ERR(imx6_pcie->vpcie)) { 807 + if (PTR_ERR(imx6_pcie->vpcie) == -EPROBE_DEFER) 808 + return -EPROBE_DEFER; 809 + imx6_pcie->vpcie = NULL; 810 + } 832 811 833 812 platform_set_drvdata(pdev, imx6_pcie); 834 813
+1 -1
drivers/pci/dwc/pci-keystone.c
··· 291 291 "Asynchronous external abort"); 292 292 } 293 293 294 - static struct dw_pcie_host_ops keystone_pcie_host_ops = { 294 + static const struct dw_pcie_host_ops keystone_pcie_host_ops = { 295 295 .rd_other_conf = ks_dw_pcie_rd_other_conf, 296 296 .wr_other_conf = ks_dw_pcie_wr_other_conf, 297 297 .host_init = ks_pcie_host_init,
+3 -3
drivers/pci/dwc/pci-layerscape.c
··· 39 39 u32 lut_offset; 40 40 u32 ltssm_shift; 41 41 u32 lut_dbg; 42 - struct dw_pcie_host_ops *ops; 42 + const struct dw_pcie_host_ops *ops; 43 43 const struct dw_pcie_ops *dw_pcie_ops; 44 44 }; 45 45 ··· 185 185 return 0; 186 186 } 187 187 188 - static struct dw_pcie_host_ops ls1021_pcie_host_ops = { 188 + static const struct dw_pcie_host_ops ls1021_pcie_host_ops = { 189 189 .host_init = ls1021_pcie_host_init, 190 190 .msi_host_init = ls_pcie_msi_host_init, 191 191 }; 192 192 193 - static struct dw_pcie_host_ops ls_pcie_host_ops = { 193 + static const struct dw_pcie_host_ops ls_pcie_host_ops = { 194 194 .host_init = ls_pcie_host_init, 195 195 .msi_host_init = ls_pcie_msi_host_init, 196 196 };
+1 -1
drivers/pci/dwc/pcie-armada8k.c
··· 160 160 return IRQ_HANDLED; 161 161 } 162 162 163 - static struct dw_pcie_host_ops armada8k_pcie_host_ops = { 163 + static const struct dw_pcie_host_ops armada8k_pcie_host_ops = { 164 164 .host_init = armada8k_pcie_host_init, 165 165 }; 166 166
+1 -1
drivers/pci/dwc/pcie-artpec6.c
··· 184 184 artpec6_pcie_enable_interrupts(artpec6_pcie); 185 185 } 186 186 187 - static struct dw_pcie_host_ops artpec6_pcie_host_ops = { 187 + static const struct dw_pcie_host_ops artpec6_pcie_host_ops = { 188 188 .host_init = artpec6_pcie_host_init, 189 189 }; 190 190
+24 -19
drivers/pci/dwc/pcie-designware-host.c
··· 280 280 struct device_node *np = dev->of_node; 281 281 struct platform_device *pdev = to_platform_device(dev); 282 282 struct pci_bus *bus, *child; 283 + struct pci_host_bridge *bridge; 283 284 struct resource *cfg_res; 284 285 int i, ret; 285 - LIST_HEAD(res); 286 286 struct resource_entry *win, *tmp; 287 287 288 288 cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config"); ··· 295 295 dev_err(dev, "missing *config* reg space\n"); 296 296 } 297 297 298 - ret = of_pci_get_host_bridge_resources(np, 0, 0xff, &res, &pp->io_base); 298 + bridge = pci_alloc_host_bridge(0); 299 + if (!bridge) 300 + return -ENOMEM; 301 + 302 + ret = of_pci_get_host_bridge_resources(np, 0, 0xff, 303 + &bridge->windows, &pp->io_base); 299 304 if (ret) 300 305 return ret; 301 306 302 - ret = devm_request_pci_bus_resources(dev, &res); 307 + ret = devm_request_pci_bus_resources(dev, &bridge->windows); 303 308 if (ret) 304 309 goto error; 305 310 306 311 /* Get the I/O and memory ranges from DT */ 307 - resource_list_for_each_entry_safe(win, tmp, &res) { 312 + resource_list_for_each_entry_safe(win, tmp, &bridge->windows) { 308 313 switch (resource_type(win->res)) { 309 314 case IORESOURCE_IO: 310 315 ret = pci_remap_iospace(win->res, pp->io_base); ··· 405 400 pp->ops->host_init(pp); 406 401 407 402 pp->root_bus_nr = pp->busn->start; 403 + 404 + bridge->dev.parent = dev; 405 + bridge->sysdata = pp; 406 + bridge->busnr = pp->root_bus_nr; 407 + bridge->ops = &dw_pcie_ops; 408 + bridge->map_irq = of_irq_parse_and_map_pci; 409 + bridge->swizzle_irq = pci_common_swizzle; 408 410 if (IS_ENABLED(CONFIG_PCI_MSI)) { 409 - bus = pci_scan_root_bus_msi(dev, pp->root_bus_nr, 410 - &dw_pcie_ops, pp, &res, 411 - &dw_pcie_msi_chip); 411 + bridge->msi = &dw_pcie_msi_chip; 412 412 dw_pcie_msi_chip.dev = dev; 413 - } else 414 - bus = pci_scan_root_bus(dev, pp->root_bus_nr, &dw_pcie_ops, 415 - pp, &res); 416 - if (!bus) { 417 - ret = -ENOMEM; 418 - goto error; 419 413 } 414 + 415 + ret = 
pci_scan_root_bus_bridge(bridge); 416 + if (ret) 417 + goto error; 418 + 419 + bus = bridge->bus; 420 420 421 421 if (pp->ops->scan_bus) 422 422 pp->ops->scan_bus(pp); 423 - 424 - #ifdef CONFIG_ARM 425 - /* support old dtbs that incorrectly describe IRQs */ 426 - pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci); 427 - #endif 428 423 429 424 pci_bus_size_bridges(bus); 430 425 pci_bus_assign_resources(bus); ··· 436 431 return 0; 437 432 438 433 error: 439 - pci_free_resource_list(&res); 434 + pci_free_host_bridge(bridge); 440 435 return ret; 441 436 } 442 437
+3 -2
drivers/pci/dwc/pcie-designware-plat.c
··· 46 46 dw_pcie_msi_init(pp); 47 47 } 48 48 49 - static struct dw_pcie_host_ops dw_plat_pcie_host_ops = { 49 + static const struct dw_pcie_host_ops dw_plat_pcie_host_ops = { 50 50 .host_init = dw_plat_pcie_host_init, 51 51 }; 52 52 ··· 67 67 68 68 ret = devm_request_irq(dev, pp->msi_irq, 69 69 dw_plat_pcie_msi_irq_handler, 70 - IRQF_SHARED, "dw-plat-pcie-msi", pp); 70 + IRQF_SHARED | IRQF_NO_THREAD, 71 + "dw-plat-pcie-msi", pp); 71 72 if (ret) { 72 73 dev_err(dev, "failed to request MSI IRQ\n"); 73 74 return ret;
+1 -1
drivers/pci/dwc/pcie-designware.h
··· 162 162 struct resource *mem; 163 163 struct resource *busn; 164 164 int irq; 165 - struct dw_pcie_host_ops *ops; 165 + const struct dw_pcie_host_ops *ops; 166 166 int msi_irq; 167 167 struct irq_domain *irq_domain; 168 168 unsigned long msi_data;
+517
drivers/pci/dwc/pcie-kirin.c
··· 1 + /* 2 + * PCIe host controller driver for Kirin Phone SoCs 3 + * 4 + * Copyright (C) 2017 HiSilicon Electronics Co., Ltd. 5 + * http://www.huawei.com 6 + * 7 + * Author: Xiaowei Song <songxiaowei@huawei.com> 8 + * 9 + * This program is free software; you can redistribute it and/or modify 10 + * it under the terms of the GNU General Public License version 2 as 11 + * published by the Free Software Foundation. 12 + */ 13 + 14 + #include <asm/compiler.h> 15 + #include <linux/compiler.h> 16 + #include <linux/clk.h> 17 + #include <linux/delay.h> 18 + #include <linux/err.h> 19 + #include <linux/gpio.h> 20 + #include <linux/interrupt.h> 21 + #include <linux/mfd/syscon.h> 22 + #include <linux/of_address.h> 23 + #include <linux/of_gpio.h> 24 + #include <linux/of_pci.h> 25 + #include <linux/pci.h> 26 + #include <linux/pci_regs.h> 27 + #include <linux/platform_device.h> 28 + #include <linux/regmap.h> 29 + #include <linux/resource.h> 30 + #include <linux/types.h> 31 + #include "pcie-designware.h" 32 + 33 + #define to_kirin_pcie(x) dev_get_drvdata((x)->dev) 34 + 35 + #define REF_CLK_FREQ 100000000 36 + 37 + /* PCIe ELBI registers */ 38 + #define SOC_PCIECTRL_CTRL0_ADDR 0x000 39 + #define SOC_PCIECTRL_CTRL1_ADDR 0x004 40 + #define SOC_PCIEPHY_CTRL2_ADDR 0x008 41 + #define SOC_PCIEPHY_CTRL3_ADDR 0x00c 42 + #define PCIE_ELBI_SLV_DBI_ENABLE (0x1 << 21) 43 + 44 + /* info located in APB */ 45 + #define PCIE_APP_LTSSM_ENABLE 0x01c 46 + #define PCIE_APB_PHY_CTRL0 0x0 47 + #define PCIE_APB_PHY_CTRL1 0x4 48 + #define PCIE_APB_PHY_STATUS0 0x400 49 + #define PCIE_LINKUP_ENABLE (0x8020) 50 + #define PCIE_LTSSM_ENABLE_BIT (0x1 << 11) 51 + #define PIPE_CLK_STABLE (0x1 << 19) 52 + #define PHY_REF_PAD_BIT (0x1 << 8) 53 + #define PHY_PWR_DOWN_BIT (0x1 << 22) 54 + #define PHY_RST_ACK_BIT (0x1 << 16) 55 + 56 + /* info located in sysctrl */ 57 + #define SCTRL_PCIE_CMOS_OFFSET 0x60 58 + #define SCTRL_PCIE_CMOS_BIT 0x10 59 + #define SCTRL_PCIE_ISO_OFFSET 0x44 60 + #define SCTRL_PCIE_ISO_BIT
0x30 61 + #define SCTRL_PCIE_HPCLK_OFFSET 0x190 62 + #define SCTRL_PCIE_HPCLK_BIT 0x184000 63 + #define SCTRL_PCIE_OE_OFFSET 0x14a 64 + #define PCIE_DEBOUNCE_PARAM 0xF0F400 65 + #define PCIE_OE_BYPASS (0x3 << 28) 66 + 67 + /* peri_crg ctrl */ 68 + #define CRGCTRL_PCIE_ASSERT_OFFSET 0x88 69 + #define CRGCTRL_PCIE_ASSERT_BIT 0x8c000000 70 + 71 + /* Time for delay */ 72 + #define REF_2_PERST_MIN 20000 73 + #define REF_2_PERST_MAX 25000 74 + #define PERST_2_ACCESS_MIN 10000 75 + #define PERST_2_ACCESS_MAX 12000 76 + #define LINK_WAIT_MIN 900 77 + #define LINK_WAIT_MAX 1000 78 + #define PIPE_CLK_WAIT_MIN 550 79 + #define PIPE_CLK_WAIT_MAX 600 80 + #define TIME_CMOS_MIN 100 81 + #define TIME_CMOS_MAX 105 82 + #define TIME_PHY_PD_MIN 10 83 + #define TIME_PHY_PD_MAX 11 84 + 85 + struct kirin_pcie { 86 + struct dw_pcie *pci; 87 + void __iomem *apb_base; 88 + void __iomem *phy_base; 89 + struct regmap *crgctrl; 90 + struct regmap *sysctrl; 91 + struct clk *apb_sys_clk; 92 + struct clk *apb_phy_clk; 93 + struct clk *phy_ref_clk; 94 + struct clk *pcie_aclk; 95 + struct clk *pcie_aux_clk; 96 + int gpio_id_reset; 97 + }; 98 + 99 + /* Registers in PCIeCTRL */ 100 + static inline void kirin_apb_ctrl_writel(struct kirin_pcie *kirin_pcie, 101 + u32 val, u32 reg) 102 + { 103 + writel(val, kirin_pcie->apb_base + reg); 104 + } 105 + 106 + static inline u32 kirin_apb_ctrl_readl(struct kirin_pcie *kirin_pcie, u32 reg) 107 + { 108 + return readl(kirin_pcie->apb_base + reg); 109 + } 110 + 111 + /* Registers in PCIePHY */ 112 + static inline void kirin_apb_phy_writel(struct kirin_pcie *kirin_pcie, 113 + u32 val, u32 reg) 114 + { 115 + writel(val, kirin_pcie->phy_base + reg); 116 + } 117 + 118 + static inline u32 kirin_apb_phy_readl(struct kirin_pcie *kirin_pcie, u32 reg) 119 + { 120 + return readl(kirin_pcie->phy_base + reg); 121 + } 122 + 123 + static long kirin_pcie_get_clk(struct kirin_pcie *kirin_pcie, 124 + struct platform_device *pdev) 125 + { 126 + struct device *dev = &pdev->dev; 
127 + 128 + kirin_pcie->phy_ref_clk = devm_clk_get(dev, "pcie_phy_ref"); 129 + if (IS_ERR(kirin_pcie->phy_ref_clk)) 130 + return PTR_ERR(kirin_pcie->phy_ref_clk); 131 + 132 + kirin_pcie->pcie_aux_clk = devm_clk_get(dev, "pcie_aux"); 133 + if (IS_ERR(kirin_pcie->pcie_aux_clk)) 134 + return PTR_ERR(kirin_pcie->pcie_aux_clk); 135 + 136 + kirin_pcie->apb_phy_clk = devm_clk_get(dev, "pcie_apb_phy"); 137 + if (IS_ERR(kirin_pcie->apb_phy_clk)) 138 + return PTR_ERR(kirin_pcie->apb_phy_clk); 139 + 140 + kirin_pcie->apb_sys_clk = devm_clk_get(dev, "pcie_apb_sys"); 141 + if (IS_ERR(kirin_pcie->apb_sys_clk)) 142 + return PTR_ERR(kirin_pcie->apb_sys_clk); 143 + 144 + kirin_pcie->pcie_aclk = devm_clk_get(dev, "pcie_aclk"); 145 + if (IS_ERR(kirin_pcie->pcie_aclk)) 146 + return PTR_ERR(kirin_pcie->pcie_aclk); 147 + 148 + return 0; 149 + } 150 + 151 + static long kirin_pcie_get_resource(struct kirin_pcie *kirin_pcie, 152 + struct platform_device *pdev) 153 + { 154 + struct device *dev = &pdev->dev; 155 + struct resource *apb; 156 + struct resource *phy; 157 + struct resource *dbi; 158 + 159 + apb = platform_get_resource_byname(pdev, IORESOURCE_MEM, "apb"); 160 + kirin_pcie->apb_base = devm_ioremap_resource(dev, apb); 161 + if (IS_ERR(kirin_pcie->apb_base)) 162 + return PTR_ERR(kirin_pcie->apb_base); 163 + 164 + phy = platform_get_resource_byname(pdev, IORESOURCE_MEM, "phy"); 165 + kirin_pcie->phy_base = devm_ioremap_resource(dev, phy); 166 + if (IS_ERR(kirin_pcie->phy_base)) 167 + return PTR_ERR(kirin_pcie->phy_base); 168 + 169 + dbi = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); 170 + kirin_pcie->pci->dbi_base = devm_ioremap_resource(dev, dbi); 171 + if (IS_ERR(kirin_pcie->pci->dbi_base)) 172 + return PTR_ERR(kirin_pcie->pci->dbi_base); 173 + 174 + kirin_pcie->crgctrl = 175 + syscon_regmap_lookup_by_compatible("hisilicon,hi3660-crgctrl"); 176 + if (IS_ERR(kirin_pcie->crgctrl)) 177 + return PTR_ERR(kirin_pcie->crgctrl); 178 + 179 + kirin_pcie->sysctrl = 180 + 
syscon_regmap_lookup_by_compatible("hisilicon,hi3660-sctrl"); 181 + if (IS_ERR(kirin_pcie->sysctrl)) 182 + return PTR_ERR(kirin_pcie->sysctrl); 183 + 184 + return 0; 185 + } 186 + 187 + static int kirin_pcie_phy_init(struct kirin_pcie *kirin_pcie) 188 + { 189 + struct device *dev = kirin_pcie->pci->dev; 190 + u32 reg_val; 191 + 192 + reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_CTRL1); 193 + reg_val &= ~PHY_REF_PAD_BIT; 194 + kirin_apb_phy_writel(kirin_pcie, reg_val, PCIE_APB_PHY_CTRL1); 195 + 196 + reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_CTRL0); 197 + reg_val &= ~PHY_PWR_DOWN_BIT; 198 + kirin_apb_phy_writel(kirin_pcie, reg_val, PCIE_APB_PHY_CTRL0); 199 + usleep_range(TIME_PHY_PD_MIN, TIME_PHY_PD_MAX); 200 + 201 + reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_CTRL1); 202 + reg_val &= ~PHY_RST_ACK_BIT; 203 + kirin_apb_phy_writel(kirin_pcie, reg_val, PCIE_APB_PHY_CTRL1); 204 + 205 + usleep_range(PIPE_CLK_WAIT_MIN, PIPE_CLK_WAIT_MAX); 206 + reg_val = kirin_apb_phy_readl(kirin_pcie, PCIE_APB_PHY_STATUS0); 207 + if (reg_val & PIPE_CLK_STABLE) { 208 + dev_err(dev, "PIPE clk is not stable\n"); 209 + return -EINVAL; 210 + } 211 + 212 + return 0; 213 + } 214 + 215 + static void kirin_pcie_oe_enable(struct kirin_pcie *kirin_pcie) 216 + { 217 + u32 val; 218 + 219 + regmap_read(kirin_pcie->sysctrl, SCTRL_PCIE_OE_OFFSET, &val); 220 + val |= PCIE_DEBOUNCE_PARAM; 221 + val &= ~PCIE_OE_BYPASS; 222 + regmap_write(kirin_pcie->sysctrl, SCTRL_PCIE_OE_OFFSET, val); 223 + } 224 + 225 + static int kirin_pcie_clk_ctrl(struct kirin_pcie *kirin_pcie, bool enable) 226 + { 227 + int ret = 0; 228 + 229 + if (!enable) 230 + goto close_clk; 231 + 232 + ret = clk_set_rate(kirin_pcie->phy_ref_clk, REF_CLK_FREQ); 233 + if (ret) 234 + return ret; 235 + 236 + ret = clk_prepare_enable(kirin_pcie->phy_ref_clk); 237 + if (ret) 238 + return ret; 239 + 240 + ret = clk_prepare_enable(kirin_pcie->apb_sys_clk); 241 + if (ret) 242 + goto apb_sys_fail; 243 + 244 + ret = 
clk_prepare_enable(kirin_pcie->apb_phy_clk);
+	if (ret)
+		goto apb_phy_fail;
+
+	ret = clk_prepare_enable(kirin_pcie->pcie_aclk);
+	if (ret)
+		goto aclk_fail;
+
+	ret = clk_prepare_enable(kirin_pcie->pcie_aux_clk);
+	if (ret)
+		goto aux_clk_fail;
+
+	return 0;
+
+close_clk:
+	clk_disable_unprepare(kirin_pcie->pcie_aux_clk);
+aux_clk_fail:
+	clk_disable_unprepare(kirin_pcie->pcie_aclk);
+aclk_fail:
+	clk_disable_unprepare(kirin_pcie->apb_phy_clk);
+apb_phy_fail:
+	clk_disable_unprepare(kirin_pcie->apb_sys_clk);
+apb_sys_fail:
+	clk_disable_unprepare(kirin_pcie->phy_ref_clk);
+
+	return ret;
+}
+
+static int kirin_pcie_power_on(struct kirin_pcie *kirin_pcie)
+{
+	int ret;
+
+	/* Power supply for Host */
+	regmap_write(kirin_pcie->sysctrl,
+		     SCTRL_PCIE_CMOS_OFFSET, SCTRL_PCIE_CMOS_BIT);
+	usleep_range(TIME_CMOS_MIN, TIME_CMOS_MAX);
+	kirin_pcie_oe_enable(kirin_pcie);
+
+	ret = kirin_pcie_clk_ctrl(kirin_pcie, true);
+	if (ret)
+		return ret;
+
+	/* ISO disable, PCIeCtrl, PHY assert and clk gate clear */
+	regmap_write(kirin_pcie->sysctrl,
+		     SCTRL_PCIE_ISO_OFFSET, SCTRL_PCIE_ISO_BIT);
+	regmap_write(kirin_pcie->crgctrl,
+		     CRGCTRL_PCIE_ASSERT_OFFSET, CRGCTRL_PCIE_ASSERT_BIT);
+	regmap_write(kirin_pcie->sysctrl,
+		     SCTRL_PCIE_HPCLK_OFFSET, SCTRL_PCIE_HPCLK_BIT);
+
+	ret = kirin_pcie_phy_init(kirin_pcie);
+	if (ret)
+		goto close_clk;
+
+	/* perst assert Endpoint */
+	if (!gpio_request(kirin_pcie->gpio_id_reset, "pcie_perst")) {
+		usleep_range(REF_2_PERST_MIN, REF_2_PERST_MAX);
+		ret = gpio_direction_output(kirin_pcie->gpio_id_reset, 1);
+		if (ret)
+			goto close_clk;
+		usleep_range(PERST_2_ACCESS_MIN, PERST_2_ACCESS_MAX);
+
+		return 0;
+	}
+
+close_clk:
+	kirin_pcie_clk_ctrl(kirin_pcie, false);
+	return ret;
+}
+
+static void kirin_pcie_sideband_dbi_w_mode(struct kirin_pcie *kirin_pcie,
+					   bool on)
+{
+	u32 val;
+
+	val = kirin_apb_ctrl_readl(kirin_pcie, SOC_PCIECTRL_CTRL0_ADDR);
+	if (on)
+		val = val | PCIE_ELBI_SLV_DBI_ENABLE;
+	else
+		val = val & ~PCIE_ELBI_SLV_DBI_ENABLE;
+
+	kirin_apb_ctrl_writel(kirin_pcie, val, SOC_PCIECTRL_CTRL0_ADDR);
+}
+
+static void kirin_pcie_sideband_dbi_r_mode(struct kirin_pcie *kirin_pcie,
+					   bool on)
+{
+	u32 val;
+
+	val = kirin_apb_ctrl_readl(kirin_pcie, SOC_PCIECTRL_CTRL1_ADDR);
+	if (on)
+		val = val | PCIE_ELBI_SLV_DBI_ENABLE;
+	else
+		val = val & ~PCIE_ELBI_SLV_DBI_ENABLE;
+
+	kirin_apb_ctrl_writel(kirin_pcie, val, SOC_PCIECTRL_CTRL1_ADDR);
+}
+
+static int kirin_pcie_rd_own_conf(struct pcie_port *pp,
+				  int where, int size, u32 *val)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci);
+	int ret;
+
+	kirin_pcie_sideband_dbi_r_mode(kirin_pcie, true);
+	ret = dw_pcie_read(pci->dbi_base + where, size, val);
+	kirin_pcie_sideband_dbi_r_mode(kirin_pcie, false);
+
+	return ret;
+}
+
+static int kirin_pcie_wr_own_conf(struct pcie_port *pp,
+				  int where, int size, u32 val)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci);
+	int ret;
+
+	kirin_pcie_sideband_dbi_w_mode(kirin_pcie, true);
+	ret = dw_pcie_write(pci->dbi_base + where, size, val);
+	kirin_pcie_sideband_dbi_w_mode(kirin_pcie, false);
+
+	return ret;
+}
+
+static u32 kirin_pcie_read_dbi(struct dw_pcie *pci, void __iomem *base,
+			       u32 reg, size_t size)
+{
+	struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci);
+	u32 ret;
+
+	kirin_pcie_sideband_dbi_r_mode(kirin_pcie, true);
+	dw_pcie_read(base + reg, size, &ret);
+	kirin_pcie_sideband_dbi_r_mode(kirin_pcie, false);
+
+	return ret;
+}
+
+static void kirin_pcie_write_dbi(struct dw_pcie *pci, void __iomem *base,
+				 u32 reg, size_t size, u32 val)
+{
+	struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci);
+
+	kirin_pcie_sideband_dbi_w_mode(kirin_pcie, true);
+	dw_pcie_write(base + reg, size, val);
+	kirin_pcie_sideband_dbi_w_mode(kirin_pcie, false);
+}
+
+static int kirin_pcie_link_up(struct dw_pcie *pci)
+{
+	struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci);
+	u32 val = kirin_apb_ctrl_readl(kirin_pcie, PCIE_APB_PHY_STATUS0);
+
+	if ((val & PCIE_LINKUP_ENABLE) == PCIE_LINKUP_ENABLE)
+		return 1;
+
+	return 0;
+}
+
+static int kirin_pcie_establish_link(struct pcie_port *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci);
+	struct device *dev = kirin_pcie->pci->dev;
+	int count = 0;
+
+	if (kirin_pcie_link_up(pci))
+		return 0;
+
+	dw_pcie_setup_rc(pp);
+
+	/* assert LTSSM enable */
+	kirin_apb_ctrl_writel(kirin_pcie, PCIE_LTSSM_ENABLE_BIT,
+			      PCIE_APP_LTSSM_ENABLE);
+
+	/* check if the link is up or not */
+	while (!kirin_pcie_link_up(pci)) {
+		usleep_range(LINK_WAIT_MIN, LINK_WAIT_MAX);
+		count++;
+		if (count == 1000) {
+			dev_err(dev, "Link Fail\n");
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+static void kirin_pcie_host_init(struct pcie_port *pp)
+{
+	kirin_pcie_establish_link(pp);
+}
+
+static struct dw_pcie_ops kirin_dw_pcie_ops = {
+	.read_dbi = kirin_pcie_read_dbi,
+	.write_dbi = kirin_pcie_write_dbi,
+	.link_up = kirin_pcie_link_up,
+};
+
+static struct dw_pcie_host_ops kirin_pcie_host_ops = {
+	.rd_own_conf = kirin_pcie_rd_own_conf,
+	.wr_own_conf = kirin_pcie_wr_own_conf,
+	.host_init = kirin_pcie_host_init,
+};
+
+static int __init kirin_add_pcie_port(struct dw_pcie *pci,
+				      struct platform_device *pdev)
+{
+	pci->pp.ops = &kirin_pcie_host_ops;
+
+	return dw_pcie_host_init(&pci->pp);
+}
+
+static int kirin_pcie_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct kirin_pcie *kirin_pcie;
+	struct dw_pcie *pci;
+	int ret;
+
+	if (!dev->of_node) {
+		dev_err(dev, "NULL node\n");
+		return -EINVAL;
+	}
+
+	kirin_pcie = devm_kzalloc(dev, sizeof(struct kirin_pcie), GFP_KERNEL);
+	if (!kirin_pcie)
+		return -ENOMEM;
+
+	pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL);
+	if (!pci)
+		return -ENOMEM;
+
+	pci->dev = dev;
+	pci->ops = &kirin_dw_pcie_ops;
+	kirin_pcie->pci = pci;
+
+	ret = kirin_pcie_get_clk(kirin_pcie, pdev);
+	if (ret)
+		return ret;
+
+	ret = kirin_pcie_get_resource(kirin_pcie, pdev);
+	if (ret)
+		return ret;
+
+	kirin_pcie->gpio_id_reset = of_get_named_gpio(dev->of_node,
+						      "reset-gpio", 0);
+	if (kirin_pcie->gpio_id_reset < 0)
+		return -ENODEV;
+
+	ret = kirin_pcie_power_on(kirin_pcie);
+	if (ret)
+		return ret;
+
+	platform_set_drvdata(pdev, kirin_pcie);
+
+	return kirin_add_pcie_port(pci, pdev);
+}
+
+static const struct of_device_id kirin_pcie_match[] = {
+	{ .compatible = "hisilicon,kirin960-pcie" },
+	{},
+};
+
+struct platform_driver kirin_pcie_driver = {
+	.probe = kirin_pcie_probe,
+	.driver = {
+		.name = "kirin-pcie",
+		.of_match_table = kirin_pcie_match,
+		.suppress_bind_attrs = true,
+	},
+};
+builtin_platform_driver(kirin_pcie_driver);
+383 -63
drivers/pci/dwc/pcie-qcom.c
···
 #define PCIE20_ELBI_SYS_CTRL			0x04
 #define PCIE20_ELBI_SYS_CTRL_LT_ENABLE		BIT(0)
 
+#define PCIE20_AXI_MSTR_RESP_COMP_CTRL0		0x818
+#define CFG_REMOTE_RD_REQ_BRIDGE_SIZE_2K	0x4
+#define CFG_REMOTE_RD_REQ_BRIDGE_SIZE_4K	0x5
+#define PCIE20_AXI_MSTR_RESP_COMP_CTRL1		0x81c
+#define CFG_BRIDGE_SB_INIT			BIT(0)
+
 #define PCIE20_CAP				0x70
 
 #define PERST_DELAY_US				1000
···
 	struct clk *pipe_clk;
 };
 
+struct qcom_pcie_resources_v3 {
+	struct clk *aux_clk;
+	struct clk *master_clk;
+	struct clk *slave_clk;
+	struct reset_control *axi_m_reset;
+	struct reset_control *axi_s_reset;
+	struct reset_control *pipe_reset;
+	struct reset_control *axi_m_vmid_reset;
+	struct reset_control *axi_s_xpu_reset;
+	struct reset_control *parf_reset;
+	struct reset_control *phy_reset;
+	struct reset_control *axi_m_sticky_reset;
+	struct reset_control *pipe_sticky_reset;
+	struct reset_control *pwr_reset;
+	struct reset_control *ahb_reset;
+	struct reset_control *phy_ahb_reset;
+};
+
 union qcom_pcie_resources {
 	struct qcom_pcie_resources_v0 v0;
 	struct qcom_pcie_resources_v1 v1;
 	struct qcom_pcie_resources_v2 v2;
+	struct qcom_pcie_resources_v3 v3;
 };
 
 struct qcom_pcie;
···
 	return dw_handle_msi_irq(pp);
 }
 
-static void qcom_pcie_v0_v1_ltssm_enable(struct qcom_pcie *pcie)
-{
-	u32 val;
-
-	/* enable link training */
-	val = readl(pcie->elbi + PCIE20_ELBI_SYS_CTRL);
-	val |= PCIE20_ELBI_SYS_CTRL_LT_ENABLE;
-	writel(val, pcie->elbi + PCIE20_ELBI_SYS_CTRL);
-}
-
-static void qcom_pcie_v2_ltssm_enable(struct qcom_pcie *pcie)
-{
-	u32 val;
-
-	/* enable link training */
-	val = readl(pcie->parf + PCIE20_PARF_LTSSM);
-	val |= BIT(8);
-	writel(val, pcie->parf + PCIE20_PARF_LTSSM);
-}
-
 static int qcom_pcie_establish_link(struct qcom_pcie *pcie)
 {
 	struct dw_pcie *pci = pcie->pci;
···
 	pcie->ops->ltssm_enable(pcie);
 
 	return dw_pcie_wait_for_link(pci);
+}
+
+static void qcom_pcie_v0_v1_ltssm_enable(struct qcom_pcie *pcie)
+{
+	u32 val;
+
+	/* enable link training */
+	val = readl(pcie->elbi + PCIE20_ELBI_SYS_CTRL);
+	val |= PCIE20_ELBI_SYS_CTRL_LT_ENABLE;
+	writel(val, pcie->elbi + PCIE20_ELBI_SYS_CTRL);
 }
 
 static int qcom_pcie_get_resources_v0(struct qcom_pcie *pcie)
···
 
 	res->phy_reset = devm_reset_control_get(dev, "phy");
 	return PTR_ERR_OR_ZERO(res->phy_reset);
-}
-
-static int qcom_pcie_get_resources_v1(struct qcom_pcie *pcie)
-{
-	struct qcom_pcie_resources_v1 *res = &pcie->res.v1;
-	struct dw_pcie *pci = pcie->pci;
-	struct device *dev = pci->dev;
-
-	res->vdda = devm_regulator_get(dev, "vdda");
-	if (IS_ERR(res->vdda))
-		return PTR_ERR(res->vdda);
-
-	res->iface = devm_clk_get(dev, "iface");
-	if (IS_ERR(res->iface))
-		return PTR_ERR(res->iface);
-
-	res->aux = devm_clk_get(dev, "aux");
-	if (IS_ERR(res->aux))
-		return PTR_ERR(res->aux);
-
-	res->master_bus = devm_clk_get(dev, "master_bus");
-	if (IS_ERR(res->master_bus))
-		return PTR_ERR(res->master_bus);
-
-	res->slave_bus = devm_clk_get(dev, "slave_bus");
-	if (IS_ERR(res->slave_bus))
-		return PTR_ERR(res->slave_bus);
-
-	res->core = devm_reset_control_get(dev, "core");
-	return PTR_ERR_OR_ZERO(res->core);
 }
 
 static void qcom_pcie_deinit_v0(struct qcom_pcie *pcie)
···
 	/* wait for clock acquisition */
 	usleep_range(1000, 1500);
 
+
+	/* Set the Max TLP size to 2K, instead of using default of 4K */
+	writel(CFG_REMOTE_RD_REQ_BRIDGE_SIZE_2K,
+	       pci->dbi_base + PCIE20_AXI_MSTR_RESP_COMP_CTRL0);
+	writel(CFG_BRIDGE_SB_INIT,
+	       pci->dbi_base + PCIE20_AXI_MSTR_RESP_COMP_CTRL1);
+
 	return 0;
 
 err_deassert_ahb:
···
 	regulator_disable(res->vdda);
 
 	return ret;
+}
+
+static int qcom_pcie_get_resources_v1(struct qcom_pcie *pcie)
+{
+	struct qcom_pcie_resources_v1 *res = &pcie->res.v1;
+	struct dw_pcie *pci = pcie->pci;
+	struct device *dev = pci->dev;
+
+	res->vdda = devm_regulator_get(dev, "vdda");
+	if (IS_ERR(res->vdda))
+		return PTR_ERR(res->vdda);
+
+	res->iface = devm_clk_get(dev, "iface");
+	if (IS_ERR(res->iface))
+		return PTR_ERR(res->iface);
+
+	res->aux = devm_clk_get(dev, "aux");
+	if (IS_ERR(res->aux))
+		return PTR_ERR(res->aux);
+
+	res->master_bus = devm_clk_get(dev, "master_bus");
+	if (IS_ERR(res->master_bus))
+		return PTR_ERR(res->master_bus);
+
+	res->slave_bus = devm_clk_get(dev, "slave_bus");
+	if (IS_ERR(res->slave_bus))
+		return PTR_ERR(res->slave_bus);
+
+	res->core = devm_reset_control_get(dev, "core");
+	return PTR_ERR_OR_ZERO(res->core);
 }
 
 static void qcom_pcie_deinit_v1(struct qcom_pcie *pcie)
···
 	return ret;
 }
 
+static void qcom_pcie_v2_ltssm_enable(struct qcom_pcie *pcie)
+{
+	u32 val;
+
+	/* enable link training */
+	val = readl(pcie->parf + PCIE20_PARF_LTSSM);
+	val |= BIT(8);
+	writel(val, pcie->parf + PCIE20_PARF_LTSSM);
+}
+
 static int qcom_pcie_get_resources_v2(struct qcom_pcie *pcie)
 {
 	struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
···
 
 	res->pipe_clk = devm_clk_get(dev, "pipe");
 	return PTR_ERR_OR_ZERO(res->pipe_clk);
+}
+
+static void qcom_pcie_deinit_v2(struct qcom_pcie *pcie)
+{
+	struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
+
+	clk_disable_unprepare(res->pipe_clk);
+	clk_disable_unprepare(res->slave_clk);
+	clk_disable_unprepare(res->master_clk);
+	clk_disable_unprepare(res->cfg_clk);
+	clk_disable_unprepare(res->aux_clk);
 }
 
 static int qcom_pcie_init_v2(struct qcom_pcie *pcie)
···
 	return 0;
 }
 
+static int qcom_pcie_get_resources_v3(struct qcom_pcie *pcie)
+{
+	struct qcom_pcie_resources_v3 *res = &pcie->res.v3;
+	struct dw_pcie *pci = pcie->pci;
+	struct device *dev = pci->dev;
+
+	res->aux_clk = devm_clk_get(dev, "aux");
+	if (IS_ERR(res->aux_clk))
+		return PTR_ERR(res->aux_clk);
+
+	res->master_clk = devm_clk_get(dev, "master_bus");
+	if (IS_ERR(res->master_clk))
+		return PTR_ERR(res->master_clk);
+
+	res->slave_clk = devm_clk_get(dev, "slave_bus");
+	if (IS_ERR(res->slave_clk))
+		return PTR_ERR(res->slave_clk);
+
+	res->axi_m_reset = devm_reset_control_get(dev, "axi_m");
+	if (IS_ERR(res->axi_m_reset))
+		return PTR_ERR(res->axi_m_reset);
+
+	res->axi_s_reset = devm_reset_control_get(dev, "axi_s");
+	if (IS_ERR(res->axi_s_reset))
+		return PTR_ERR(res->axi_s_reset);
+
+	res->pipe_reset = devm_reset_control_get(dev, "pipe");
+	if (IS_ERR(res->pipe_reset))
+		return PTR_ERR(res->pipe_reset);
+
+	res->axi_m_vmid_reset = devm_reset_control_get(dev, "axi_m_vmid");
+	if (IS_ERR(res->axi_m_vmid_reset))
+		return PTR_ERR(res->axi_m_vmid_reset);
+
+	res->axi_s_xpu_reset = devm_reset_control_get(dev, "axi_s_xpu");
+	if (IS_ERR(res->axi_s_xpu_reset))
+		return PTR_ERR(res->axi_s_xpu_reset);
+
+	res->parf_reset = devm_reset_control_get(dev, "parf");
+	if (IS_ERR(res->parf_reset))
+		return PTR_ERR(res->parf_reset);
+
+	res->phy_reset = devm_reset_control_get(dev, "phy");
+	if (IS_ERR(res->phy_reset))
+		return PTR_ERR(res->phy_reset);
+
+	res->axi_m_sticky_reset = devm_reset_control_get(dev, "axi_m_sticky");
+	if (IS_ERR(res->axi_m_sticky_reset))
+		return PTR_ERR(res->axi_m_sticky_reset);
+
+	res->pipe_sticky_reset = devm_reset_control_get(dev, "pipe_sticky");
+	if (IS_ERR(res->pipe_sticky_reset))
+		return PTR_ERR(res->pipe_sticky_reset);
+
+	res->pwr_reset = devm_reset_control_get(dev, "pwr");
+	if (IS_ERR(res->pwr_reset))
+		return PTR_ERR(res->pwr_reset);
+
+	res->ahb_reset = devm_reset_control_get(dev, "ahb");
+	if (IS_ERR(res->ahb_reset))
+		return PTR_ERR(res->ahb_reset);
+
+	res->phy_ahb_reset = devm_reset_control_get(dev, "phy_ahb");
+	if (IS_ERR(res->phy_ahb_reset))
+		return PTR_ERR(res->phy_ahb_reset);
+
+	return 0;
+}
+
+static void qcom_pcie_deinit_v3(struct qcom_pcie *pcie)
+{
+	struct qcom_pcie_resources_v3 *res = &pcie->res.v3;
+
+	reset_control_assert(res->axi_m_reset);
+	reset_control_assert(res->axi_s_reset);
+	reset_control_assert(res->pipe_reset);
+	reset_control_assert(res->pipe_sticky_reset);
+	reset_control_assert(res->phy_reset);
+	reset_control_assert(res->phy_ahb_reset);
+	reset_control_assert(res->axi_m_sticky_reset);
+	reset_control_assert(res->pwr_reset);
+	reset_control_assert(res->ahb_reset);
+	clk_disable_unprepare(res->aux_clk);
+	clk_disable_unprepare(res->master_clk);
+	clk_disable_unprepare(res->slave_clk);
+}
+
+static int qcom_pcie_init_v3(struct qcom_pcie *pcie)
+{
+	struct qcom_pcie_resources_v3 *res = &pcie->res.v3;
+	struct dw_pcie *pci = pcie->pci;
+	struct device *dev = pci->dev;
+	u32 val;
+	int ret;
+
+	ret = reset_control_assert(res->axi_m_reset);
+	if (ret) {
+		dev_err(dev, "cannot assert axi master reset\n");
+		return ret;
+	}
+
+	ret = reset_control_assert(res->axi_s_reset);
+	if (ret) {
+		dev_err(dev, "cannot assert axi slave reset\n");
+		return ret;
+	}
+
+	usleep_range(10000, 12000);
+
+	ret = reset_control_assert(res->pipe_reset);
+	if (ret) {
+		dev_err(dev, "cannot assert pipe reset\n");
+		return ret;
+	}
+
+	ret = reset_control_assert(res->pipe_sticky_reset);
+	if (ret) {
+		dev_err(dev, "cannot assert pipe sticky reset\n");
+		return ret;
+	}
+
+	ret = reset_control_assert(res->phy_reset);
+	if (ret) {
+		dev_err(dev, "cannot assert phy reset\n");
+		return ret;
+	}
+
+	ret = reset_control_assert(res->phy_ahb_reset);
+	if (ret) {
+		dev_err(dev, "cannot assert phy ahb reset\n");
+		return ret;
+	}
+
+	usleep_range(10000, 12000);
+
+	ret = reset_control_assert(res->axi_m_sticky_reset);
+	if (ret) {
+		dev_err(dev, "cannot assert axi master sticky reset\n");
+		return ret;
+	}
+
+	ret = reset_control_assert(res->pwr_reset);
+	if (ret) {
+		dev_err(dev, "cannot assert power reset\n");
+		return ret;
+	}
+
+	ret = reset_control_assert(res->ahb_reset);
+	if (ret) {
+		dev_err(dev, "cannot assert ahb reset\n");
+		return ret;
+	}
+
+	usleep_range(10000, 12000);
+
+	ret = reset_control_deassert(res->phy_ahb_reset);
+	if (ret) {
+		dev_err(dev, "cannot deassert phy ahb reset\n");
+		return ret;
+	}
+
+	ret = reset_control_deassert(res->phy_reset);
+	if (ret) {
+		dev_err(dev, "cannot deassert phy reset\n");
+		goto err_rst_phy;
+	}
+
+	ret = reset_control_deassert(res->pipe_reset);
+	if (ret) {
+		dev_err(dev, "cannot deassert pipe reset\n");
+		goto err_rst_pipe;
+	}
+
+	ret = reset_control_deassert(res->pipe_sticky_reset);
+	if (ret) {
+		dev_err(dev, "cannot deassert pipe sticky reset\n");
+		goto err_rst_pipe_sticky;
+	}
+
+	usleep_range(10000, 12000);
+
+	ret = reset_control_deassert(res->axi_m_reset);
+	if (ret) {
+		dev_err(dev, "cannot deassert axi master reset\n");
+		goto err_rst_axi_m;
+	}
+
+	ret = reset_control_deassert(res->axi_m_sticky_reset);
+	if (ret) {
+		dev_err(dev, "cannot deassert axi master sticky reset\n");
+		goto err_rst_axi_m_sticky;
+	}
+
+	ret = reset_control_deassert(res->axi_s_reset);
+	if (ret) {
+		dev_err(dev, "cannot deassert axi slave reset\n");
+		goto err_rst_axi_s;
+	}
+
+	ret = reset_control_deassert(res->pwr_reset);
+	if (ret) {
+		dev_err(dev, "cannot deassert power reset\n");
+		goto err_rst_pwr;
+	}
+
+	ret = reset_control_deassert(res->ahb_reset);
+	if (ret) {
+		dev_err(dev, "cannot deassert ahb reset\n");
+		goto err_rst_ahb;
+	}
+
+	usleep_range(10000, 12000);
+
+	ret = clk_prepare_enable(res->aux_clk);
+	if (ret) {
+		dev_err(dev, "cannot prepare/enable iface clock\n");
+		goto err_clk_aux;
+	}
+
+	ret = clk_prepare_enable(res->master_clk);
+	if (ret) {
+		dev_err(dev, "cannot prepare/enable core clock\n");
+		goto err_clk_axi_m;
+	}
+
+	ret = clk_prepare_enable(res->slave_clk);
+	if (ret) {
+		dev_err(dev, "cannot prepare/enable phy clock\n");
+		goto err_clk_axi_s;
+	}
+
+	/* enable PCIe clocks and resets */
+	val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL);
+	val &= ~BIT(0);
+	writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);
+
+	/* change DBI base address */
+	writel(0, pcie->parf + PCIE20_PARF_DBI_BASE_ADDR);
+
+	/* MAC PHY_POWERDOWN MUX DISABLE  */
+	val = readl(pcie->parf + PCIE20_PARF_SYS_CTRL);
+	val &= ~BIT(29);
+	writel(val, pcie->parf + PCIE20_PARF_SYS_CTRL);
+
+	val = readl(pcie->parf + PCIE20_PARF_MHI_CLOCK_RESET_CTRL);
+	val |= BIT(4);
+	writel(val, pcie->parf + PCIE20_PARF_MHI_CLOCK_RESET_CTRL);
+
+	val = readl(pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2);
+	val |= BIT(31);
+	writel(val, pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2);
+
+	return 0;
+
+err_clk_axi_s:
+	clk_disable_unprepare(res->master_clk);
+err_clk_axi_m:
+	clk_disable_unprepare(res->aux_clk);
+err_clk_aux:
+	reset_control_assert(res->ahb_reset);
+err_rst_ahb:
+	reset_control_assert(res->pwr_reset);
+err_rst_pwr:
+	reset_control_assert(res->axi_s_reset);
+err_rst_axi_s:
+	reset_control_assert(res->axi_m_sticky_reset);
+err_rst_axi_m_sticky:
+	reset_control_assert(res->axi_m_reset);
+err_rst_axi_m:
+	reset_control_assert(res->pipe_sticky_reset);
+err_rst_pipe_sticky:
+	reset_control_assert(res->pipe_reset);
+err_rst_pipe:
+	reset_control_assert(res->phy_reset);
+err_rst_phy:
+	reset_control_assert(res->phy_ahb_reset);
+	return ret;
+}
+
 static int qcom_pcie_link_up(struct dw_pcie *pci)
 {
 	u16 val = readw(pci->dbi_base + PCIE20_CAP + PCI_EXP_LNKSTA);
 
 	return !!(val & PCI_EXP_LNKSTA_DLLLA);
-}
-
-static void qcom_pcie_deinit_v2(struct qcom_pcie *pcie)
-{
-	struct qcom_pcie_resources_v2 *res = &pcie->res.v2;
-
-	clk_disable_unprepare(res->pipe_clk);
-	clk_disable_unprepare(res->slave_clk);
-	clk_disable_unprepare(res->master_clk);
-	clk_disable_unprepare(res->cfg_clk);
-	clk_disable_unprepare(res->aux_clk);
 }
 
 static void qcom_pcie_host_init(struct pcie_port *pp)
···
 	return dw_pcie_read(pci->dbi_base + where, size, val);
 }
 
-static struct dw_pcie_host_ops qcom_pcie_dw_ops = {
+static const struct dw_pcie_host_ops qcom_pcie_dw_ops = {
 	.host_init = qcom_pcie_host_init,
 	.rd_own_conf = qcom_pcie_rd_own_conf,
 };
···
 
 static const struct dw_pcie_ops dw_pcie_ops = {
 	.link_up = qcom_pcie_link_up,
+};
+
+static const struct qcom_pcie_ops ops_v3 = {
+	.get_resources = qcom_pcie_get_resources_v3,
+	.init = qcom_pcie_init_v3,
+	.deinit = qcom_pcie_deinit_v3,
+	.ltssm_enable = qcom_pcie_v2_ltssm_enable,
 };
 
 static int qcom_pcie_probe(struct platform_device *pdev)
···
 
 	ret = devm_request_irq(dev, pp->msi_irq,
 			       qcom_pcie_msi_irq_handler,
-			       IRQF_SHARED, "qcom-pcie-msi", pp);
+			       IRQF_SHARED | IRQF_NO_THREAD,
+			       "qcom-pcie-msi", pp);
 	if (ret) {
 		dev_err(dev, "cannot request msi irq\n");
 		return ret;
···
 	{ .compatible = "qcom,pcie-apq8064", .data = &ops_v0 },
 	{ .compatible = "qcom,pcie-apq8084", .data = &ops_v1 },
 	{ .compatible = "qcom,pcie-msm8996", .data = &ops_v2 },
+	{ .compatible = "qcom,pcie-ipq4019", .data = &ops_v3 },
 	{ }
 };
 
+1 -1
drivers/pci/dwc/pcie-spear13xx.c
···
 	spear13xx_pcie_enable_interrupts(spear13xx_pcie);
 }
 
-static struct dw_pcie_host_ops spear13xx_pcie_host_ops = {
+static const struct dw_pcie_host_ops spear13xx_pcie_host_ops = {
 	.host_init = spear13xx_pcie_host_init,
 };
 
+25
drivers/pci/host/Kconfig
···
 	  There is 1 internal PCIe port available to support GEN2 with
 	  4 slots.
 
+config PCIE_MEDIATEK
+	bool "MediaTek PCIe controller"
+	depends on ARM && (ARCH_MEDIATEK || COMPILE_TEST)
+	depends on OF
+	depends on PCI
+	select PCIEPORTBUS
+	help
+	  Say Y here if you want to enable PCIe controller support on
+	  MT7623 series SoCs.  There is one single root complex with 3 root
+	  ports available.  Each port supports Gen2 lane x1.
+
+config PCIE_TANGO_SMP8759
+	bool "Tango SMP8759 PCIe controller (DANGEROUS)"
+	depends on ARCH_TANGO && PCI_MSI && OF
+	depends on BROKEN
+	select PCI_HOST_COMMON
+	help
+	  Say Y here to enable PCIe controller support for Sigma Designs
+	  Tango SMP8759-based systems.
+
+	  Note: The SMP8759 controller multiplexes PCI config and MMIO
+	  accesses, and Linux doesn't provide a way to serialize them.
+	  This can lead to data corruption if drivers perform concurrent
+	  config and MMIO accesses.
+
 config VMD
 	depends on PCI_MSI && X86_64 && SRCU
 	tristate "Intel Volume Management Device Driver"
+2
drivers/pci/host/Makefile
···
 obj-$(CONFIG_PCIE_ALTERA) += pcie-altera.o
 obj-$(CONFIG_PCIE_ALTERA_MSI) += pcie-altera-msi.o
 obj-$(CONFIG_PCIE_ROCKCHIP) += pcie-rockchip.o
+obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
+obj-$(CONFIG_PCIE_TANGO_SMP8759) += pcie-tango.o
 obj-$(CONFIG_VMD) += vmd.o
 
 # The following drivers are for devices that use the generic ACPI
+15 -6
drivers/pci/host/pci-aardvark.c
···
 	struct advk_pcie *pcie;
 	struct resource *res;
 	struct pci_bus *bus, *child;
+	struct pci_host_bridge *bridge;
 	int ret, irq;
 
-	pcie = devm_kzalloc(dev, sizeof(struct advk_pcie), GFP_KERNEL);
-	if (!pcie)
+	bridge = devm_pci_alloc_host_bridge(dev, sizeof(struct advk_pcie));
+	if (!bridge)
 		return -ENOMEM;
 
+	pcie = pci_host_bridge_priv(bridge);
 	pcie->pdev = pdev;
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
···
 		return ret;
 	}
 
-	bus = pci_scan_root_bus(dev, 0, &advk_pcie_ops,
-				pcie, &pcie->resources);
-	if (!bus) {
+	list_splice_init(&pcie->resources, &bridge->windows);
+	bridge->dev.parent = dev;
+	bridge->sysdata = pcie;
+	bridge->busnr = 0;
+	bridge->ops = &advk_pcie_ops;
+
+	ret = pci_scan_root_bus_bridge(bridge);
+	if (ret < 0) {
 		advk_pcie_remove_msi_irq_domain(pcie);
 		advk_pcie_remove_irq_domain(pcie);
-		return -ENOMEM;
+		return ret;
 	}
+
+	bus = bridge->bus;
 
 	pci_bus_assign_resources(bus);
 
+107 -36
drivers/pci/host/pci-ftpci100.c
···
 #include <linux/irqchip/chained_irq.h>
 #include <linux/bitops.h>
 #include <linux/irq.h>
+#include <linux/clk.h>
 
 /*
  * Special configuration registers directly in the first few words
···
 #define PCI_CONFIG	0x28 /* PCI configuration command register */
 #define PCI_DATA	0x2C
 
+#define FARADAY_PCI_STATUS_CMD		0x04 /* Status and command */
 #define FARADAY_PCI_PMC			0x40 /* Power management control */
 #define FARADAY_PCI_PMCSR		0x44 /* Power management status */
 #define FARADAY_PCI_CTRL1		0x48 /* Control register 1 */
···
 #define FARADAY_PCI_MEM1_BASE_SIZE	0x50 /* Memory base and size #1 */
 #define FARADAY_PCI_MEM2_BASE_SIZE	0x54 /* Memory base and size #2 */
 #define FARADAY_PCI_MEM3_BASE_SIZE	0x58 /* Memory base and size #3 */
+
+#define PCI_STATUS_66MHZ_CAPABLE	BIT(21)
 
 /* Bits 31..28 gives INTD..INTA status */
 #define PCI_CTRL2_INTSTS_SHIFT		28
···
 	void __iomem *base;
 	struct irq_domain *irqdomain;
 	struct pci_bus *bus;
+	struct clk *bus_clk;
 };
 
 static int faraday_res_to_memcfg(resource_size_t mem_base,
···
 	return 0;
 }
 
-static int faraday_pci_read_config(struct pci_bus *bus, unsigned int fn,
-				   int config, int size, u32 *value)
+static int faraday_raw_pci_read_config(struct faraday_pci *p, int bus_number,
+				       unsigned int fn, int config, int size,
+				       u32 *value)
 {
-	struct faraday_pci *p = bus->sysdata;
-
-	writel(PCI_CONF_BUS(bus->number) |
+	writel(PCI_CONF_BUS(bus_number) |
 		PCI_CONF_DEVICE(PCI_SLOT(fn)) |
 		PCI_CONF_FUNCTION(PCI_FUNC(fn)) |
 		PCI_CONF_WHERE(config) |
···
 	else if (size == 2)
 		*value = (*value >> (8 * (config & 3))) & 0xFFFF;
 
+	return PCIBIOS_SUCCESSFUL;
+}
+
+static int faraday_pci_read_config(struct pci_bus *bus, unsigned int fn,
+				   int config, int size, u32 *value)
+{
+	struct faraday_pci *p = bus->sysdata;
+
 	dev_dbg(&bus->dev,
 		"[read] slt: %.2d, fnc: %d, cnf: 0x%.2X, val (%d bytes): 0x%.8X\n",
 		PCI_SLOT(fn), PCI_FUNC(fn), config, size, *value);
 
-	return PCIBIOS_SUCCESSFUL;
+	return faraday_raw_pci_read_config(p, bus->number, fn, config, size, value);
 }
 
-static int faraday_pci_write_config(struct pci_bus *bus, unsigned int fn,
-				    int config, int size, u32 value)
+static int faraday_raw_pci_write_config(struct faraday_pci *p, int bus_number,
+					unsigned int fn, int config, int size,
+					u32 value)
 {
-	struct faraday_pci *p = bus->sysdata;
 	int ret = PCIBIOS_SUCCESSFUL;
 
-	dev_dbg(&bus->dev,
-		"[write] slt: %.2d, fnc: %d, cnf: 0x%.2X, val (%d bytes): 0x%.8X\n",
-		PCI_SLOT(fn), PCI_FUNC(fn), config, size, value);
-
-	writel(PCI_CONF_BUS(bus->number) |
+	writel(PCI_CONF_BUS(bus_number) |
 		PCI_CONF_DEVICE(PCI_SLOT(fn)) |
 		PCI_CONF_FUNCTION(PCI_FUNC(fn)) |
 		PCI_CONF_WHERE(config) |
···
 	return ret;
 }
 
+static int faraday_pci_write_config(struct pci_bus *bus, unsigned int fn,
+				    int config, int size, u32 value)
+{
+	struct faraday_pci *p = bus->sysdata;
+
+	dev_dbg(&bus->dev,
+		"[write] slt: %.2d, fnc: %d, cnf: 0x%.2X, val (%d bytes): 0x%.8X\n",
+		PCI_SLOT(fn), PCI_FUNC(fn), config, size, value);
+
+	return faraday_raw_pci_write_config(p, bus->number, fn, config, size,
+					    value);
+}
+
 static struct pci_ops faraday_pci_ops = {
 	.read	= faraday_pci_read_config,
 	.write	= faraday_pci_write_config,
···
 	struct faraday_pci *p = irq_data_get_irq_chip_data(d);
 	unsigned int reg;
 
-	faraday_pci_read_config(p->bus, 0, FARADAY_PCI_CTRL2, 4, &reg);
+	faraday_raw_pci_read_config(p, 0, 0, FARADAY_PCI_CTRL2, 4, &reg);
 	reg &= ~(0xF << PCI_CTRL2_INTSTS_SHIFT);
 	reg |= BIT(irqd_to_hwirq(d) + PCI_CTRL2_INTSTS_SHIFT);
-	faraday_pci_write_config(p->bus, 0, FARADAY_PCI_CTRL2, 4, reg);
+	faraday_raw_pci_write_config(p, 0, 0, FARADAY_PCI_CTRL2, 4, reg);
 }
 
 static void faraday_pci_mask_irq(struct irq_data *d)
···
 	struct faraday_pci *p = irq_data_get_irq_chip_data(d);
 	unsigned int reg;
 
-	faraday_pci_read_config(p->bus, 0, FARADAY_PCI_CTRL2, 4, &reg);
+	faraday_raw_pci_read_config(p, 0, 0, FARADAY_PCI_CTRL2, 4, &reg);
 	reg &= ~((0xF << PCI_CTRL2_INTSTS_SHIFT)
 		 | BIT(irqd_to_hwirq(d) + PCI_CTRL2_INTMASK_SHIFT));
-	faraday_pci_write_config(p->bus, 0, FARADAY_PCI_CTRL2, 4, reg);
+	faraday_raw_pci_write_config(p, 0, 0, FARADAY_PCI_CTRL2, 4, reg);
 }
 
 static void faraday_pci_unmask_irq(struct irq_data *d)
···
 	struct faraday_pci *p = irq_data_get_irq_chip_data(d);
 	unsigned int reg;
 
-	faraday_pci_read_config(p->bus, 0, FARADAY_PCI_CTRL2, 4, &reg);
+	faraday_raw_pci_read_config(p, 0, 0, FARADAY_PCI_CTRL2, 4, &reg);
 	reg &= ~(0xF << PCI_CTRL2_INTSTS_SHIFT);
 	reg |= BIT(irqd_to_hwirq(d) + PCI_CTRL2_INTMASK_SHIFT);
-	faraday_pci_write_config(p->bus, 0, FARADAY_PCI_CTRL2, 4, reg);
+	faraday_raw_pci_write_config(p, 0, 0, FARADAY_PCI_CTRL2, 4, reg);
 }
 
 static void faraday_pci_irq_handler(struct irq_desc *desc)
···
 	struct irq_chip *irqchip = irq_desc_get_chip(desc);
 	unsigned int irq_stat, reg, i;
 
-	faraday_pci_read_config(p->bus, 0, FARADAY_PCI_CTRL2, 4, &reg);
+	faraday_raw_pci_read_config(p, 0, 0, FARADAY_PCI_CTRL2, 4, &reg);
 	irq_stat = reg >> PCI_CTRL2_INTSTS_SHIFT;
 
 	chained_irq_enter(irqchip, desc);
···
 		dev_info(dev, "DMA MEM%d BASE: 0x%016llx -> 0x%016llx config %08x\n",
 			 i + 1, range.pci_addr, end, val);
 		if (i <= 2) {
-			faraday_pci_write_config(p->bus, 0, confreg[i],
-						 4, val);
+			faraday_raw_pci_write_config(p, 0, 0, confreg[i],
+						     4, val);
 		} else {
 			dev_err(dev, "ignore extraneous dma-range %d\n", i);
 			break;
···
 	struct resource *mem;
 	struct resource *io;
 	struct pci_host_bridge *host;
+	struct clk *clk;
+	unsigned char max_bus_speed = PCI_SPEED_33MHz;
+	unsigned char cur_bus_speed = PCI_SPEED_33MHz;
 	int ret;
 	u32 val;
 	LIST_HEAD(res);
 
-	host = pci_alloc_host_bridge(sizeof(*p));
+	host = devm_pci_alloc_host_bridge(dev, sizeof(*p));
 	if (!host)
 		return -ENOMEM;
 
···
 	host->ops = &faraday_pci_ops;
 	host->busnr = 0;
 	host->msi = NULL;
+	host->map_irq = of_irq_parse_and_map_pci;
+	host->swizzle_irq = pci_common_swizzle;
 	p = pci_host_bridge_priv(host);
 	host->sysdata = p;
 	p->dev = dev;
+
+	/* Retrieve and enable optional clocks */
+	clk = devm_clk_get(dev, "PCLK");
+	if (IS_ERR(clk))
+		return PTR_ERR(clk);
+	ret = clk_prepare_enable(clk);
+	if (ret) {
+		dev_err(dev, "could not prepare PCLK\n");
+		return ret;
+	}
+	p->bus_clk = devm_clk_get(dev, "PCICLK");
+	if (IS_ERR(p->bus_clk))
+		return PTR_ERR(p->bus_clk);
+	ret = clk_prepare_enable(p->bus_clk);
+	if (ret) {
+		dev_err(dev, "could not prepare PCICLK\n");
+		return ret;
+	}
 
 	regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	p->base = devm_ioremap_resource(dev, regs);
···
 	val |= PCI_COMMAND_MEMORY;
 	val |= PCI_COMMAND_MASTER;
 	writel(val, p->base + PCI_CTRL);
-
-	list_splice_init(&res, &host->windows);
-	ret = pci_register_host_bridge(host);
-	if (ret) {
-		dev_err(dev, "failed to register host: %d\n", ret);
-		return ret;
-	}
-	p->bus = host->bus;
-
 	/* Mask and clear all interrupts */
-	faraday_pci_write_config(p->bus, 0, FARADAY_PCI_CTRL2 + 2, 2, 0xF000);
+	faraday_raw_pci_write_config(p, 0, 0, FARADAY_PCI_CTRL2 + 2, 2, 0xF000);
 	if (variant->cascaded_irq) {
 		ret = faraday_pci_setup_cascaded_irq(p);
 		if (ret) {
···
 		}
 	}
 
+	/* Check bus clock if we can gear up to 66 MHz */
+	if (!IS_ERR(p->bus_clk)) {
+		unsigned long rate;
+		u32 val;
+
+		faraday_raw_pci_read_config(p, 0, 0,
+					    FARADAY_PCI_STATUS_CMD, 4, &val);
+		rate = clk_get_rate(p->bus_clk);
+
+		if ((rate == 33000000) && (val & PCI_STATUS_66MHZ_CAPABLE)) {
+			dev_info(dev, "33MHz bus is 66MHz capable\n");
+			max_bus_speed = PCI_SPEED_66MHz;
+			ret = clk_set_rate(p->bus_clk, 66000000);
+			if (ret)
+				dev_err(dev, "failed to set bus clock\n");
+		} else {
+			dev_info(dev, "33MHz only bus\n");
+			max_bus_speed = PCI_SPEED_33MHz;
+		}
+
+		/* Bumping the clock may fail so read back the rate */
+		rate = clk_get_rate(p->bus_clk);
+		if (rate == 33000000)
+			cur_bus_speed = PCI_SPEED_33MHz;
+		if (rate == 66000000)
+			cur_bus_speed = PCI_SPEED_66MHz;
+	}
+
 	ret = faraday_pci_parse_map_dma_ranges(p, dev->of_node);
 	if (ret)
 		return ret;
 
-	pci_scan_child_bus(p->bus);
-	pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci);
+	list_splice_init(&res, &host->windows);
+	ret = pci_scan_root_bus_bridge(host);
+	if (ret) {
+		dev_err(dev, "failed to scan host: %d\n", ret);
+		return ret;
+	}
+	p->bus = host->bus;
+	p->bus->max_bus_speed = max_bus_speed;
+	p->bus->cur_bus_speed = cur_bus_speed;
+
 	pci_bus_assign_resources(p->bus);
 	pci_bus_add_devices(p->bus);
 	pci_free_resource_list(&res);
+19 -8
drivers/pci/host/pci-host-common.c
 	struct device *dev = &pdev->dev;
 	struct device_node *np = dev->of_node;
 	struct pci_bus *bus, *child;
+	struct pci_host_bridge *bridge;
 	struct pci_config_window *cfg;
 	struct list_head resources;
+	int ret;
+
+	bridge = devm_pci_alloc_host_bridge(dev, 0);
+	if (!bridge)
+		return -ENOMEM;
 
 	type = of_get_property(np, "device_type", NULL);
 	if (!type || strcmp(type, "pci")) {
···
 	if (!pci_has_flag(PCI_PROBE_ONLY))
 		pci_add_flags(PCI_REASSIGN_ALL_RSRC | PCI_REASSIGN_ALL_BUS);
 
-	bus = pci_scan_root_bus(dev, cfg->busr.start, &ops->pci_ops, cfg,
-				&resources);
-	if (!bus) {
-		dev_err(dev, "Scanning rootbus failed");
-		return -ENODEV;
+	list_splice_init(&resources, &bridge->windows);
+	bridge->dev.parent = dev;
+	bridge->sysdata = cfg;
+	bridge->busnr = cfg->busr.start;
+	bridge->ops = &ops->pci_ops;
+	bridge->map_irq = of_irq_parse_and_map_pci;
+	bridge->swizzle_irq = pci_common_swizzle;
+
+	ret = pci_scan_root_bus_bridge(bridge);
+	if (ret < 0) {
+		dev_err(dev, "Scanning root bridge failed");
+		return ret;
 	}
 
-#ifdef CONFIG_ARM
-	pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci);
-#endif
+	bus = bridge->bus;
 
 	/*
 	 * We insert PCI resources into the iomem_resource and
+364 -81
drivers/pci/host/pci-hyperv.c
 * major version.
 */
 
-#define PCI_MAKE_VERSION(major, minor) ((u32)(((major) << 16) | (major)))
+#define PCI_MAKE_VERSION(major, minor) ((u32)(((major) << 16) | (minor)))
 #define PCI_MAJOR_VERSION(version) ((u32)(version) >> 16)
 #define PCI_MINOR_VERSION(version) ((u32)(version) & 0xff)
 
-enum {
-	PCI_PROTOCOL_VERSION_1_1 = PCI_MAKE_VERSION(1, 1),
-	PCI_PROTOCOL_VERSION_CURRENT = PCI_PROTOCOL_VERSION_1_1
+enum pci_protocol_version_t {
+	PCI_PROTOCOL_VERSION_1_1 = PCI_MAKE_VERSION(1, 1),	/* Win10 */
+	PCI_PROTOCOL_VERSION_1_2 = PCI_MAKE_VERSION(1, 2),	/* RS1 */
 };
 
 #define CPU_AFFINITY_ALL	-1ULL
+
+/*
+ * Supported protocol versions in the order of probing - highest goes
+ * first.
+ */
+static enum pci_protocol_version_t pci_protocol_versions[] = {
+	PCI_PROTOCOL_VERSION_1_2,
+	PCI_PROTOCOL_VERSION_1_1,
+};
+
+/*
+ * Protocol version negotiated by hv_pci_protocol_negotiation().
+ */
+static enum pci_protocol_version_t pci_protocol_version;
+
 #define PCI_CONFIG_MMIO_LENGTH	0x2000
 #define CFG_PAGE_OFFSET 0x1000
 #define CFG_PAGE_SIZE (PCI_CONFIG_MMIO_LENGTH - CFG_PAGE_OFFSET)
 
 #define MAX_SUPPORTED_MSI_MESSAGES 0x400
+
+#define STATUS_REVISION_MISMATCH 0xC0000059
 
 /*
  * Message Types
···
 	PCI_QUERY_PROTOCOL_VERSION	= PCI_MESSAGE_BASE + 0x13,
 	PCI_CREATE_INTERRUPT_MESSAGE	= PCI_MESSAGE_BASE + 0x14,
 	PCI_DELETE_INTERRUPT_MESSAGE	= PCI_MESSAGE_BASE + 0x15,
+	PCI_RESOURCES_ASSIGNED2		= PCI_MESSAGE_BASE + 0x16,
+	PCI_CREATE_INTERRUPT_MESSAGE2	= PCI_MESSAGE_BASE + 0x17,
+	PCI_DELETE_INTERRUPT_MESSAGE2	= PCI_MESSAGE_BASE + 0x18, /* unused */
 	PCI_MESSAGE_MAXIMUM
 };
···
 } __packed;
 
 /**
+ * struct hv_msi_desc2 - 1.2 version of hv_msi_desc
+ * @vector:		IDT entry
+ * @delivery_mode:	As defined in Intel's Programmer's
+ *			Reference Manual, Volume 3, Chapter 8.
+ * @vector_count:	Number of contiguous entries in the
+ *			Interrupt Descriptor Table that are
+ *			occupied by this Message-Signaled
+ *			Interrupt. For "MSI", as first defined
+ *			in PCI 2.2, this can be between 1 and
+ *			32. For "MSI-X," as first defined in PCI
+ *			3.0, this must be 1, as each MSI-X table
+ *			entry would have its own descriptor.
+ * @processor_count:	number of bits enabled in array.
+ * @processor_array:	All the target virtual processors.
+ */
+struct hv_msi_desc2 {
+	u8	vector;
+	u8	delivery_mode;
+	u16	vector_count;
+	u16	processor_count;
+	u16	processor_array[32];
+} __packed;
+
+/**
  * struct tran_int_desc
  * @reserved:		unused, padding
  * @vector_count:	same as in hv_msi_desc
···
 struct pci_version_request {
 	struct pci_message message_type;
-	enum pci_message_type protocol_version;
+	u32 protocol_version;
 } __packed;
 
 /*
···
 	u32 reserved[4];
 } __packed;
 
+struct pci_resources_assigned2 {
+	struct pci_message message_type;
+	union win_slot_encoding wslot;
+	u8 memory_range[0x14][6];	/* not used here */
+	u32 msi_descriptor_count;
+	u8 reserved[70];
+} __packed;
+
 struct pci_create_interrupt {
 	struct pci_message message_type;
 	union win_slot_encoding wslot;
···
 	struct pci_response response;
 	u32 reserved;
 	struct tran_int_desc int_desc;
+} __packed;
+
+struct pci_create_interrupt2 {
+	struct pci_message message_type;
+	union win_slot_encoding wslot;
+	struct hv_msi_desc2 int_desc;
 } __packed;
 
 struct pci_delete_interrupt {
···
 #define HV_PARTITION_ID_SELF		((u64)-1)
 #define HVCALL_RETARGET_INTERRUPT	0x7e
 
-struct retarget_msi_interrupt {
-	u64	partition_id;		/* use "self" */
-	u64	device_id;
+struct hv_interrupt_entry {
 	u32	source;			/* 1 for MSI(-X) */
 	u32	reserved1;
 	u32	address;
 	u32	data;
-	u64	reserved2;
+};
+
+#define HV_VP_SET_BANK_COUNT_MAX	5 /* current implementation limit */
+
+struct hv_vp_set {
+	u64	format;			/* 0 (HvGenericSetSparse4k) */
+	u64	valid_banks;
+	u64	masks[HV_VP_SET_BANK_COUNT_MAX];
+};
+
+/*
+ * flags for hv_device_interrupt_target.flags
+ */
+#define HV_DEVICE_INTERRUPT_TARGET_MULTICAST		1
+#define HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET	2
+
+struct hv_device_interrupt_target {
 	u32	vector;
 	u32	flags;
-	u64	vp_mask;
+	union {
+		u64		 vp_mask;
+		struct hv_vp_set vp_set;
+	};
+};
+
+struct retarget_msi_interrupt {
+	u64	partition_id;		/* use "self" */
+	u64	device_id;
+	struct hv_interrupt_entry int_entry;
+	u64	reserved2;
+	struct hv_device_interrupt_target int_target;
 } __packed;
 
 /*
···
 	struct msi_domain_info msi_info;
 	struct msi_controller msi_chip;
 	struct irq_domain *irq_domain;
+
+	/* hypercall arg, must not cross page boundary */
 	struct retarget_msi_interrupt retarget_msi_interrupt_params;
+
 	spinlock_t retarget_msi_interrupt_lock;
 };
···
 static void get_hvpcibus(struct hv_pcibus_device *hv_pcibus);
 static void put_hvpcibus(struct hv_pcibus_device *hv_pcibus);
+
+
+/*
+ * Temporary CPU to vCPU mapping to address transitioning
+ * vmbus_cpu_number_to_vp_number() being migrated to
+ * hv_cpu_number_to_vp_number() in a separate patch. Once that patch
+ * has been picked up in the main line, remove this code here and use
+ * the official code.
+ */
+static struct hv_tmpcpumap {
+	bool initialized;
+	u32 vp_index[NR_CPUS];
+} hv_tmpcpumap;
+
+static void hv_tmpcpumap_init_cpu(void *_unused)
+{
+	int cpu = smp_processor_id();
+	u64 vp_index;
+
+	hv_get_vp_index(vp_index);
+
+	hv_tmpcpumap.vp_index[cpu] = vp_index;
+}
+
+static void hv_tmpcpumap_init(void)
+{
+	if (hv_tmpcpumap.initialized)
+		return;
+
+	memset(hv_tmpcpumap.vp_index, -1, sizeof(hv_tmpcpumap.vp_index));
+	on_each_cpu(hv_tmpcpumap_init_cpu, NULL, true);
+	hv_tmpcpumap.initialized = true;
+}
+
+/**
+ * hv_tmp_cpu_nr_to_vp_nr() - Convert Linux CPU nr to Hyper-V vCPU nr
+ *
+ * Remove once vmbus_cpu_number_to_vp_number() has been converted to
+ * hv_cpu_number_to_vp_number() and replace callers appropriately.
+ */
+static u32 hv_tmp_cpu_nr_to_vp_nr(int cpu)
+{
+	return hv_tmpcpumap.vp_index[cpu];
+}
+
 
 /**
  * devfn_to_wslot() - Convert from Linux PCI slot to Windows
···
 	struct cpumask *dest;
 	struct pci_bus *pbus;
 	struct pci_dev *pdev;
-	int cpu;
 	unsigned long flags;
+	u32 var_size = 0;
+	int cpu_vmbus;
+	int cpu;
+	u64 res;
 
 	dest = irq_data_get_affinity_mask(data);
 	pdev = msi_desc_to_pci_dev(msi_desc);
···
 	params = &hbus->retarget_msi_interrupt_params;
 	memset(params, 0, sizeof(*params));
 	params->partition_id = HV_PARTITION_ID_SELF;
-	params->source = 1; /* MSI(-X) */
-	params->address = msi_desc->msg.address_lo;
-	params->data = msi_desc->msg.data;
+	params->int_entry.source = 1; /* MSI(-X) */
+	params->int_entry.address = msi_desc->msg.address_lo;
+	params->int_entry.data = msi_desc->msg.data;
 	params->device_id = (hbus->hdev->dev_instance.b[5] << 24) |
 			   (hbus->hdev->dev_instance.b[4] << 16) |
 			   (hbus->hdev->dev_instance.b[7] << 8) |
 			   (hbus->hdev->dev_instance.b[6] & 0xf8) |
 			   PCI_FUNC(pdev->devfn);
-	params->vector = cfg->vector;
+	params->int_target.vector = cfg->vector;
 
-	for_each_cpu_and(cpu, dest, cpu_online_mask)
-		params->vp_mask |= (1ULL << vmbus_cpu_number_to_vp_number(cpu));
+	/*
+	 * Honoring apic->irq_delivery_mode set to dest_Fixed by
+	 * setting the HV_DEVICE_INTERRUPT_TARGET_MULTICAST flag results in a
+	 * spurious interrupt storm. Not doing so does not seem to have a
+	 * negative effect (yet?).
+	 */
 
-	hv_do_hypercall(HVCALL_RETARGET_INTERRUPT, params, NULL);
+	if (pci_protocol_version >= PCI_PROTOCOL_VERSION_1_2) {
+		/*
+		 * PCI_PROTOCOL_VERSION_1_2 supports the VP_SET version of the
+		 * HVCALL_RETARGET_INTERRUPT hypercall, which also coincides
+		 * with >64 VP support.
+		 * ms_hyperv.hints & HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED
+		 * is not sufficient for this hypercall.
+		 */
+		params->int_target.flags |=
+			HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET;
+		params->int_target.vp_set.valid_banks =
+			(1ull << HV_VP_SET_BANK_COUNT_MAX) - 1;
 
+		/*
+		 * var-sized hypercall, var-size starts after vp_mask (thus
+		 * vp_set.format does not count, but vp_set.valid_banks does).
+		 */
+		var_size = 1 + HV_VP_SET_BANK_COUNT_MAX;
+
+		for_each_cpu_and(cpu, dest, cpu_online_mask) {
+			cpu_vmbus = hv_tmp_cpu_nr_to_vp_nr(cpu);
+
+			if (cpu_vmbus >= HV_VP_SET_BANK_COUNT_MAX * 64) {
+				dev_err(&hbus->hdev->device,
+					"too high CPU %d", cpu_vmbus);
+				res = 1;
+				goto exit_unlock;
+			}
+
+			params->int_target.vp_set.masks[cpu_vmbus / 64] |=
+				(1ULL << (cpu_vmbus & 63));
+		}
+	} else {
+		for_each_cpu_and(cpu, dest, cpu_online_mask) {
+			params->int_target.vp_mask |=
+				(1ULL << hv_tmp_cpu_nr_to_vp_nr(cpu));
+		}
+	}
+
+	res = hv_do_hypercall(HVCALL_RETARGET_INTERRUPT | (var_size << 17),
+			      params, NULL);
+
+exit_unlock:
 	spin_unlock_irqrestore(&hbus->retarget_msi_interrupt_lock, flags);
+
+	if (res) {
+		dev_err(&hbus->hdev->device,
+			"%s() failed: %#llx", __func__, res);
+		return;
+	}
 
 	pci_msi_unmask_irq(data);
 }
···
 	complete(&comp_pkt->comp_pkt.host_event);
 }
 
+static u32 hv_compose_msi_req_v1(
+	struct pci_create_interrupt *int_pkt, struct cpumask *affinity,
+	u32 slot, u8 vector)
+{
+	int_pkt->message_type.type = PCI_CREATE_INTERRUPT_MESSAGE;
+	int_pkt->wslot.slot = slot;
+	int_pkt->int_desc.vector = vector;
+	int_pkt->int_desc.vector_count = 1;
+	int_pkt->int_desc.delivery_mode =
+		(apic->irq_delivery_mode == dest_LowestPrio) ?
+			dest_LowestPrio : dest_Fixed;
+
+	/*
+	 * Create MSI w/ dummy vCPU set, overwritten by subsequent retarget in
+	 * hv_irq_unmask().
+	 */
+	int_pkt->int_desc.cpu_mask = CPU_AFFINITY_ALL;
+
+	return sizeof(*int_pkt);
+}
+
+static u32 hv_compose_msi_req_v2(
+	struct pci_create_interrupt2 *int_pkt, struct cpumask *affinity,
+	u32 slot, u8 vector)
+{
+	int cpu;
+
+	int_pkt->message_type.type = PCI_CREATE_INTERRUPT_MESSAGE2;
+	int_pkt->wslot.slot = slot;
+	int_pkt->int_desc.vector = vector;
+	int_pkt->int_desc.vector_count = 1;
+	int_pkt->int_desc.delivery_mode =
+		(apic->irq_delivery_mode == dest_LowestPrio) ?
+			dest_LowestPrio : dest_Fixed;
+
+	/*
+	 * Create MSI w/ dummy vCPU set targeting just one vCPU, overwritten
+	 * by subsequent retarget in hv_irq_unmask().
+	 */
+	cpu = cpumask_first_and(affinity, cpu_online_mask);
+	int_pkt->int_desc.processor_array[0] =
+		hv_tmp_cpu_nr_to_vp_nr(cpu);
+	int_pkt->int_desc.processor_count = 1;
+
+	return sizeof(*int_pkt);
+}
+
 /**
  * hv_compose_msi_msg() - Supplies a valid MSI address/data
  * @data:	Everything about this MSI
···
 	struct hv_pci_dev *hpdev;
 	struct pci_bus *pbus;
 	struct pci_dev *pdev;
-	struct pci_create_interrupt *int_pkt;
 	struct compose_comp_ctxt comp;
 	struct tran_int_desc *int_desc;
-	struct cpumask *affinity;
 	struct {
-		struct pci_packet pkt;
-		u8 buffer[sizeof(struct pci_create_interrupt)];
-	} ctxt;
-	int cpu;
+		struct pci_packet pci_pkt;
+		union {
+			struct pci_create_interrupt v1;
+			struct pci_create_interrupt2 v2;
+		} int_pkts;
+	} __packed ctxt;
+
+	u32 size;
 	int ret;
 
 	pdev = msi_desc_to_pci_dev(irq_data_get_msi_desc(data));
···
 	memset(&ctxt, 0, sizeof(ctxt));
 	init_completion(&comp.comp_pkt.host_event);
-	ctxt.pkt.completion_func = hv_pci_compose_compl;
-	ctxt.pkt.compl_ctxt = &comp;
-	int_pkt = (struct pci_create_interrupt *)&ctxt.pkt.message;
-	int_pkt->message_type.type = PCI_CREATE_INTERRUPT_MESSAGE;
-	int_pkt->wslot.slot = hpdev->desc.win_slot.slot;
-	int_pkt->int_desc.vector = cfg->vector;
-	int_pkt->int_desc.vector_count = 1;
-	int_pkt->int_desc.delivery_mode =
-		(apic->irq_delivery_mode == dest_LowestPrio) ? 1 : 0;
+	ctxt.pci_pkt.completion_func = hv_pci_compose_compl;
+	ctxt.pci_pkt.compl_ctxt = &comp;
 
-	/*
-	 * This bit doesn't have to work on machines with more than 64
-	 * processors because Hyper-V only supports 64 in a guest.
-	 */
-	affinity = irq_data_get_affinity_mask(data);
-	if (cpumask_weight(affinity) >= 32) {
-		int_pkt->int_desc.cpu_mask = CPU_AFFINITY_ALL;
-	} else {
-		for_each_cpu_and(cpu, affinity, cpu_online_mask) {
-			int_pkt->int_desc.cpu_mask |=
-				(1ULL << vmbus_cpu_number_to_vp_number(cpu));
-		}
+	switch (pci_protocol_version) {
+	case PCI_PROTOCOL_VERSION_1_1:
+		size = hv_compose_msi_req_v1(&ctxt.int_pkts.v1,
+					irq_data_get_affinity_mask(data),
+					hpdev->desc.win_slot.slot,
+					cfg->vector);
+		break;
+
+	case PCI_PROTOCOL_VERSION_1_2:
+		size = hv_compose_msi_req_v2(&ctxt.int_pkts.v2,
+					irq_data_get_affinity_mask(data),
+					hpdev->desc.win_slot.slot,
+					cfg->vector);
+		break;
+
+	default:
+		/*
+		 * As we only negotiate protocol versions known to this driver,
+		 * this path should never hit. However, it is not a hot path so
+		 * we print a message to aid future updates.
+		 */
+		dev_err(&hbus->hdev->device,
+			"Unexpected vPCI protocol, update driver.");
+		goto free_int_desc;
 	}
 
-	ret = vmbus_sendpacket(hpdev->hbus->hdev->channel, int_pkt,
-			       sizeof(*int_pkt), (unsigned long)&ctxt.pkt,
+	ret = vmbus_sendpacket(hpdev->hbus->hdev->channel, &ctxt.int_pkts,
+			       size, (unsigned long)&ctxt.pci_pkt,
 			       VM_PKT_DATA_INBAND,
 			       VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
-	if (ret)
+	if (ret) {
+		dev_err(&hbus->hdev->device,
+			"Sending request for interrupt failed: 0x%x",
+			comp.comp_pkt.completion_status);
 		goto free_int_desc;
+	}
 
 	wait_for_completion(&comp.comp_pkt.host_event);
···
 		put_pcichild(hpdev, hv_pcidev_ref_initial);
 	}
 
-	switch(hbus->state) {
+	switch (hbus->state) {
 	case hv_pcibus_installed:
 		/*
-		* Tell the core to rescan bus
-		* because there may have been changes.
-		*/
+		 * Tell the core to rescan bus
+		 * because there may have been changes.
+		 */
 		pci_lock_rescan_remove();
 		pci_scan_child_bus(hbus->pci_bus);
 		pci_unlock_rescan_remove();
···
 	struct hv_pci_compl comp_pkt;
 	struct pci_packet *pkt;
 	int ret;
+	int i;
 
 	/*
 	 * Initiate the handshake with the host and negotiate
···
 	pkt->compl_ctxt = &comp_pkt;
 	version_req = (struct pci_version_request *)&pkt->message;
 	version_req->message_type.type = PCI_QUERY_PROTOCOL_VERSION;
-	version_req->protocol_version = PCI_PROTOCOL_VERSION_CURRENT;
 
-	ret = vmbus_sendpacket(hdev->channel, version_req,
-			       sizeof(struct pci_version_request),
-			       (unsigned long)pkt, VM_PKT_DATA_INBAND,
-			       VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
-	if (ret)
-		goto exit;
+	for (i = 0; i < ARRAY_SIZE(pci_protocol_versions); i++) {
+		version_req->protocol_version = pci_protocol_versions[i];
+		ret = vmbus_sendpacket(hdev->channel, version_req,
+				       sizeof(struct pci_version_request),
+				       (unsigned long)pkt, VM_PKT_DATA_INBAND,
+				       VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
+		if (ret) {
+			dev_err(&hdev->device,
+				"PCI Pass-through VSP failed sending version request: %#x",
+				ret);
+			goto exit;
+		}
 
-	wait_for_completion(&comp_pkt.host_event);
+		wait_for_completion(&comp_pkt.host_event);
 
-	if (comp_pkt.completion_status < 0) {
-		dev_err(&hdev->device,
-			"PCI Pass-through VSP failed version request %x\n",
-			comp_pkt.completion_status);
-		ret = -EPROTO;
-		goto exit;
+		if (comp_pkt.completion_status >= 0) {
+			pci_protocol_version = pci_protocol_versions[i];
+			dev_info(&hdev->device,
+				 "PCI VMBus probing: Using version %#x\n",
+				 pci_protocol_version);
+			goto exit;
+		}
+
+		if (comp_pkt.completion_status != STATUS_REVISION_MISMATCH) {
+			dev_err(&hdev->device,
+				"PCI Pass-through VSP failed version request: %#x",
+				comp_pkt.completion_status);
+			ret = -EPROTO;
+			goto exit;
+		}
+
+		reinit_completion(&comp_pkt.host_event);
 	}
 
-	ret = 0;
+	dev_err(&hdev->device,
+		"PCI pass-through VSP failed to find supported version");
+	ret = -EPROTO;
 
 exit:
 	kfree(pkt);
···
 {
 	struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);
 	struct pci_resources_assigned *res_assigned;
+	struct pci_resources_assigned2 *res_assigned2;
 	struct hv_pci_compl comp_pkt;
 	struct hv_pci_dev *hpdev;
 	struct pci_packet *pkt;
+	size_t size_res;
 	u32 wslot;
 	int ret;
 
-	pkt = kmalloc(sizeof(*pkt) + sizeof(*res_assigned), GFP_KERNEL);
+	size_res = (pci_protocol_version < PCI_PROTOCOL_VERSION_1_2)
+			? sizeof(*res_assigned) : sizeof(*res_assigned2);
+
+	pkt = kmalloc(sizeof(*pkt) + size_res, GFP_KERNEL);
 	if (!pkt)
 		return -ENOMEM;
···
 		if (!hpdev)
 			continue;
 
-		memset(pkt, 0, sizeof(*pkt) + sizeof(*res_assigned));
+		memset(pkt, 0, sizeof(*pkt) + size_res);
 		init_completion(&comp_pkt.host_event);
 		pkt->completion_func = hv_pci_generic_compl;
 		pkt->compl_ctxt = &comp_pkt;
-		res_assigned = (struct pci_resources_assigned *)&pkt->message;
-		res_assigned->message_type.type = PCI_RESOURCES_ASSIGNED;
-		res_assigned->wslot.slot = hpdev->desc.win_slot.slot;
 
+		if (pci_protocol_version < PCI_PROTOCOL_VERSION_1_2) {
+			res_assigned =
+				(struct pci_resources_assigned *)&pkt->message;
+			res_assigned->message_type.type =
+				PCI_RESOURCES_ASSIGNED;
+			res_assigned->wslot.slot = hpdev->desc.win_slot.slot;
+		} else {
+			res_assigned2 =
+				(struct pci_resources_assigned2 *)&pkt->message;
+			res_assigned2->message_type.type =
+				PCI_RESOURCES_ASSIGNED2;
+			res_assigned2->wslot.slot = hpdev->desc.win_slot.slot;
+		}
 		put_pcichild(hpdev, hv_pcidev_ref_by_slot);
 
-		ret = vmbus_sendpacket(
-			hdev->channel, &pkt->message,
-			sizeof(*res_assigned),
-			(unsigned long)pkt,
-			VM_PKT_DATA_INBAND,
-			VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
+		ret = vmbus_sendpacket(hdev->channel, &pkt->message,
+				size_res, (unsigned long)pkt,
+				VM_PKT_DATA_INBAND,
+				VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
 		if (ret)
 			break;
···
 	struct hv_pcibus_device *hbus;
 	int ret;
 
-	hbus = kzalloc(sizeof(*hbus), GFP_KERNEL);
+	/*
+	 * hv_pcibus_device contains the hypercall arguments for retargeting in
+	 * hv_irq_unmask(). Those must not cross a page boundary.
+	 */
+	BUILD_BUG_ON(sizeof(*hbus) > PAGE_SIZE);
+
+	hbus = (struct hv_pcibus_device *)get_zeroed_page(GFP_KERNEL);
 	if (!hbus)
 		return -ENOMEM;
 	hbus->state = hv_pcibus_init;
+
+	hv_tmpcpumap_init();
 
 	/*
 	 * The PCI bus "domain" is what is called "segment" in ACPI and
···
 close:
 	vmbus_close(hdev->channel);
 free_bus:
-	kfree(hbus);
+	free_page((unsigned long)hbus);
 	return ret;
 }
···
 	irq_domain_free_fwnode(hbus->sysdata.fwnode);
 	put_hvpcibus(hbus);
 	wait_for_completion(&hbus->remove_event);
-	kfree(hbus);
+	free_page((unsigned long)hbus);
 	return 0;
 }
+1 -1
drivers/pci/host/pci-rcar-gen2.c
 	return 0;
 }
 
-static struct of_device_id rcar_pci_of_match[] = {
+static const struct of_device_id rcar_pci_of_match[] = {
 	{ .compatible = "renesas,pci-r8a7790", },
 	{ .compatible = "renesas,pci-r8a7791", },
 	{ .compatible = "renesas,pci-r8a7794", },
+25 -17
drivers/pci/host/pci-tegra.c
 	struct msi_controller chip;
 	DECLARE_BITMAP(used, INT_PCI_MSI_NR);
 	struct irq_domain *domain;
-	unsigned long pages;
 	struct mutex lock;
+	u64 phys;
 	int irq;
 };
···
 	irq_set_msi_desc(irq, desc);
 
-	msg.address_lo = virt_to_phys((void *)msi->pages);
-	/* 32 bit address only */
-	msg.address_hi = 0;
+	msg.address_lo = lower_32_bits(msi->phys);
+	msg.address_hi = upper_32_bits(msi->phys);
 	msg.data = hwirq;
 
 	pci_write_msi_msg(irq, &msg);
···
 	const struct tegra_pcie_soc *soc = pcie->soc;
 	struct tegra_msi *msi = &pcie->msi;
 	struct device *dev = pcie->dev;
-	unsigned long base;
 	int err;
 	u32 reg;
···
 		goto err;
 	}
 
-	/* setup AFI/FPCI range */
-	msi->pages = __get_free_pages(GFP_KERNEL, 0);
-	base = virt_to_phys((void *)msi->pages);
+	/*
+	 * The PCI host bridge on Tegra contains some logic that intercepts
+	 * MSI writes, which means that the MSI target address doesn't have
+	 * to point to actual physical memory. Rather than allocating one 4
+	 * KiB page of system memory that's never used, we can simply pick
+	 * an arbitrary address within an area reserved for system memory
+	 * in the FPCI address map.
+	 *
+	 * However, in order to avoid confusion, we pick an address that
+	 * doesn't map to physical memory. The FPCI address map reserves a
+	 * 1012 GiB region for system memory and memory-mapped I/O. Since
+	 * none of the Tegra SoCs that contain this PCI host bridge can
+	 * address more than 16 GiB of system memory, the last 4 KiB of
+	 * these 1012 GiB is a good candidate.
+	 */
+	msi->phys = 0xfcfffff000;
 
-	afi_writel(pcie, base >> soc->msi_base_shift, AFI_MSI_FPCI_BAR_ST);
-	afi_writel(pcie, base, AFI_MSI_AXI_BAR_ST);
+	afi_writel(pcie, msi->phys >> soc->msi_base_shift, AFI_MSI_FPCI_BAR_ST);
+	afi_writel(pcie, msi->phys, AFI_MSI_AXI_BAR_ST);
 	/* this register is in 4K increments */
 	afi_writel(pcie, 1, AFI_MSI_BAR_SZ);
···
 	afi_writel(pcie, 0, AFI_MSI_EN_VEC5);
 	afi_writel(pcie, 0, AFI_MSI_EN_VEC6);
 	afi_writel(pcie, 0, AFI_MSI_EN_VEC7);
-
-	free_pages(msi->pages, 0);
 
 	if (msi->irq > 0)
 		free_irq(msi->irq, pcie);
···
 	struct pci_bus *child;
 	int err;
 
-	host = pci_alloc_host_bridge(sizeof(*pcie));
+	host = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
 	if (!host)
 		return -ENOMEM;
···
 	host->busnr = pcie->busn.start;
 	host->dev.parent = &pdev->dev;
 	host->ops = &tegra_pcie_ops;
+	host->map_irq = tegra_pcie_map_irq;
+	host->swizzle_irq = pci_common_swizzle;
 
-	err = pci_register_host_bridge(host);
+	err = pci_scan_root_bus_bridge(host);
 	if (err < 0) {
 		dev_err(dev, "failed to register host: %d\n", err);
 		goto disable_msi;
 	}
 
-	pci_scan_child_bus(host->bus);
-
-	pci_fixup_irqs(pci_common_swizzle, tegra_pcie_map_irq);
 	pci_bus_size_bridges(host->bus);
 	pci_bus_assign_resources(host->bus);
+25 -11
drivers/pci/host/pci-versatile.c
 
 static int versatile_pci_probe(struct platform_device *pdev)
 {
+	struct device *dev = &pdev->dev;
 	struct resource *res;
 	int ret, i, myslot = -1;
 	u32 val;
 	void __iomem *local_pci_cfg_base;
 	struct pci_bus *bus, *child;
+	struct pci_host_bridge *bridge;
 	LIST_HEAD(pci_res);
 
+	bridge = devm_pci_alloc_host_bridge(dev, 0);
+	if (!bridge)
+		return -ENOMEM;
+
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	versatile_pci_base = devm_ioremap_resource(&pdev->dev, res);
+	versatile_pci_base = devm_ioremap_resource(dev, res);
 	if (IS_ERR(versatile_pci_base))
 		return PTR_ERR(versatile_pci_base);
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-	versatile_cfg_base[0] = devm_ioremap_resource(&pdev->dev, res);
+	versatile_cfg_base[0] = devm_ioremap_resource(dev, res);
 	if (IS_ERR(versatile_cfg_base[0]))
 		return PTR_ERR(versatile_cfg_base[0]);
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
-	versatile_cfg_base[1] = devm_pci_remap_cfg_resource(&pdev->dev,
-							    res);
+	versatile_cfg_base[1] = devm_pci_remap_cfg_resource(dev, res);
 	if (IS_ERR(versatile_cfg_base[1]))
 		return PTR_ERR(versatile_cfg_base[1]);
 
-	ret = versatile_pci_parse_request_of_pci_ranges(&pdev->dev, &pci_res);
+	ret = versatile_pci_parse_request_of_pci_ranges(dev, &pci_res);
 	if (ret)
 		return ret;
···
 		}
 	}
 	if (myslot == -1) {
-		dev_err(&pdev->dev, "Cannot find PCI core!\n");
+		dev_err(dev, "Cannot find PCI core!\n");
 		return -EIO;
 	}
 	/*
···
 	 */
 	pci_slot_ignore |= (1 << myslot);
 
-	dev_info(&pdev->dev, "PCI core found (slot %d)\n", myslot);
+	dev_info(dev, "PCI core found (slot %d)\n", myslot);
 
 	writel(myslot, PCI_SELFID);
 	local_pci_cfg_base = versatile_cfg_base[1] + (myslot << 11);
···
 	pci_add_flags(PCI_ENABLE_PROC_DOMAINS);
 	pci_add_flags(PCI_REASSIGN_ALL_BUS | PCI_REASSIGN_ALL_RSRC);
 
-	bus = pci_scan_root_bus(&pdev->dev, 0, &pci_versatile_ops, NULL, &pci_res);
-	if (!bus)
-		return -ENOMEM;
+	list_splice_init(&pci_res, &bridge->windows);
+	bridge->dev.parent = dev;
+	bridge->sysdata = NULL;
+	bridge->busnr = 0;
+	bridge->ops = &pci_versatile_ops;
+	bridge->map_irq = of_irq_parse_and_map_pci;
+	bridge->swizzle_irq = pci_common_swizzle;
 
-	pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci);
+	ret = pci_scan_root_bus_bridge(bridge);
+	if (ret < 0)
+		return ret;
+
+	bus = bridge->bus;
+
 	pci_assign_unassigned_bus_resources(bus);
 	list_for_each_entry(child, &bus->children, node)
 		pcie_bus_configure_settings(child);
+17 -6
drivers/pci/host/pci-xgene.c
 	struct xgene_pcie_port *port;
 	resource_size_t iobase = 0;
 	struct pci_bus *bus, *child;
+	struct pci_host_bridge *bridge;
 	int ret;
 	LIST_HEAD(res);
 
-	port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
-	if (!port)
+	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*port));
+	if (!bridge)
 		return -ENOMEM;
+
+	port = pci_host_bridge_priv(bridge);
 
 	port->node = of_node_get(dn);
 	port->dev = dev;
···
 	if (ret)
 		goto error;
 
-	bus = pci_create_root_bus(dev, 0, &xgene_pcie_ops, port, &res);
-	if (!bus) {
-		ret = -ENOMEM;
+	list_splice_init(&res, &bridge->windows);
+	bridge->dev.parent = dev;
+	bridge->sysdata = port;
+	bridge->busnr = 0;
+	bridge->ops = &xgene_pcie_ops;
+	bridge->map_irq = of_irq_parse_and_map_pci;
+	bridge->swizzle_irq = pci_common_swizzle;
+
+	ret = pci_scan_root_bus_bridge(bridge);
+	if (ret < 0)
 		goto error;
-	}
+
+	bus = bridge->bus;
 
 	pci_scan_child_bus(bus);
 	pci_assign_unassigned_bus_resources(bus);
+17 -7
drivers/pci/host/pcie-altera.c
 	struct altera_pcie *pcie;
 	struct pci_bus *bus;
 	struct pci_bus *child;
+	struct pci_host_bridge *bridge;
 	int ret;
 
-	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
-	if (!pcie)
+	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
+	if (!bridge)
 		return -ENOMEM;
 
+	pcie = pci_host_bridge_priv(bridge);
 	pcie->pdev = pdev;
 
 	ret = altera_pcie_parse_dt(pcie);
···
 	cra_writel(pcie, P2A_INT_ENA_ALL, P2A_INT_ENABLE);
 	altera_pcie_host_init(pcie);
 
-	bus = pci_scan_root_bus(dev, pcie->root_bus_nr, &altera_pcie_ops,
-				pcie, &pcie->resources);
-	if (!bus)
-		return -ENOMEM;
+	list_splice_init(&pcie->resources, &bridge->windows);
+	bridge->dev.parent = dev;
+	bridge->sysdata = pcie;
+	bridge->busnr = pcie->root_bus_nr;
+	bridge->ops = &altera_pcie_ops;
+	bridge->map_irq = of_irq_parse_and_map_pci;
+	bridge->swizzle_irq = pci_common_swizzle;
 
-	pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci);
+	ret = pci_scan_root_bus_bridge(bridge);
+	if (ret < 0)
+		return ret;
+
+	bus = bridge->bus;
+
 	pci_assign_unassigned_bus_resources(bus);
 
 	/* Configure PCI Express setting. */
+5 -2
drivers/pci/host/pcie-iproc-bcma.c
 	struct device *dev = &bdev->dev;
 	struct iproc_pcie *pcie;
 	LIST_HEAD(resources);
+	struct pci_host_bridge *bridge;
 	int ret;
 
-	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
-	if (!pcie)
+	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
+	if (!bridge)
 		return -ENOMEM;
 
+	pcie = pci_host_bridge_priv(bridge);
+
 	pcie->dev = dev;
 
+5 -2
drivers/pci/host/pcie-iproc-platform.c
··· 52 52 struct resource reg; 53 53 resource_size_t iobase = 0; 54 54 LIST_HEAD(resources); 55 + struct pci_host_bridge *bridge; 55 56 int ret; 56 57 57 - pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 58 - if (!pcie) 58 + bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie)); 59 + if (!bridge) 59 60 return -ENOMEM; 61 + 62 + pcie = pci_host_bridge_priv(bridge); 60 63 61 64 pcie->dev = dev; 62 65 pcie->type = (enum iproc_pcie_type) of_device_get_match_data(dev);
+95 -40
drivers/pci/host/pcie-iproc.c
··· 452 452 * Note access to the configuration registers are protected at the higher layer 453 453 * by 'pci_lock' in drivers/pci/access.c 454 454 */ 455 - static void __iomem *iproc_pcie_map_cfg_bus(struct pci_bus *bus, 455 + static void __iomem *iproc_pcie_map_cfg_bus(struct iproc_pcie *pcie, 456 + int busno, 456 457 unsigned int devfn, 457 458 int where) 458 459 { 459 - struct iproc_pcie *pcie = iproc_data(bus); 460 460 unsigned slot = PCI_SLOT(devfn); 461 461 unsigned fn = PCI_FUNC(devfn); 462 - unsigned busno = bus->number; 463 462 u32 val; 464 463 u16 offset; 465 464 ··· 498 499 return (pcie->base + offset); 499 500 } 500 501 502 + static void __iomem *iproc_pcie_bus_map_cfg_bus(struct pci_bus *bus, 503 + unsigned int devfn, 504 + int where) 505 + { 506 + return iproc_pcie_map_cfg_bus(iproc_data(bus), bus->number, devfn, 507 + where); 508 + } 509 + 510 + static int iproc_pci_raw_config_read32(struct iproc_pcie *pcie, 511 + unsigned int devfn, int where, 512 + int size, u32 *val) 513 + { 514 + void __iomem *addr; 515 + 516 + addr = iproc_pcie_map_cfg_bus(pcie, 0, devfn, where & ~0x3); 517 + if (!addr) { 518 + *val = ~0; 519 + return PCIBIOS_DEVICE_NOT_FOUND; 520 + } 521 + 522 + *val = readl(addr); 523 + 524 + if (size <= 2) 525 + *val = (*val >> (8 * (where & 3))) & ((1 << (size * 8)) - 1); 526 + 527 + return PCIBIOS_SUCCESSFUL; 528 + } 529 + 530 + static int iproc_pci_raw_config_write32(struct iproc_pcie *pcie, 531 + unsigned int devfn, int where, 532 + int size, u32 val) 533 + { 534 + void __iomem *addr; 535 + u32 mask, tmp; 536 + 537 + addr = iproc_pcie_map_cfg_bus(pcie, 0, devfn, where & ~0x3); 538 + if (!addr) 539 + return PCIBIOS_DEVICE_NOT_FOUND; 540 + 541 + if (size == 4) { 542 + writel(val, addr); 543 + return PCIBIOS_SUCCESSFUL; 544 + } 545 + 546 + mask = ~(((1 << (size * 8)) - 1) << ((where & 0x3) * 8)); 547 + tmp = readl(addr) & mask; 548 + tmp |= val << ((where & 0x3) * 8); 549 + writel(tmp, addr); 550 + 551 + return PCIBIOS_SUCCESSFUL; 552 + } 
553 + 501 554 static int iproc_pcie_config_read32(struct pci_bus *bus, unsigned int devfn, 502 555 int where, int size, u32 *val) 503 556 { ··· 575 524 } 576 525 577 526 static struct pci_ops iproc_pcie_ops = { 578 - .map_bus = iproc_pcie_map_cfg_bus, 527 + .map_bus = iproc_pcie_bus_map_cfg_bus, 579 528 .read = iproc_pcie_config_read32, 580 529 .write = iproc_pcie_config_write32, 581 530 }; ··· 607 556 msleep(100); 608 557 } 609 558 610 - static int iproc_pcie_check_link(struct iproc_pcie *pcie, struct pci_bus *bus) 559 + static int iproc_pcie_check_link(struct iproc_pcie *pcie) 611 560 { 612 561 struct device *dev = pcie->dev; 613 - u8 hdr_type; 614 - u32 link_ctrl, class, val; 615 - u16 pos = PCI_EXP_CAP, link_status; 562 + u32 hdr_type, link_ctrl, link_status, class, val; 563 + u16 pos = PCI_EXP_CAP; 616 564 bool link_is_active = false; 617 565 618 566 /* ··· 628 578 } 629 579 630 580 /* make sure we are not in EP mode */ 631 - pci_bus_read_config_byte(bus, 0, PCI_HEADER_TYPE, &hdr_type); 581 + iproc_pci_raw_config_read32(pcie, 0, PCI_HEADER_TYPE, 1, &hdr_type); 632 582 if ((hdr_type & 0x7f) != PCI_HEADER_TYPE_BRIDGE) { 633 583 dev_err(dev, "in EP mode, hdr=%#02x\n", hdr_type); 634 584 return -EFAULT; ··· 638 588 #define PCI_BRIDGE_CTRL_REG_OFFSET 0x43c 639 589 #define PCI_CLASS_BRIDGE_MASK 0xffff00 640 590 #define PCI_CLASS_BRIDGE_SHIFT 8 641 - pci_bus_read_config_dword(bus, 0, PCI_BRIDGE_CTRL_REG_OFFSET, &class); 591 + iproc_pci_raw_config_read32(pcie, 0, PCI_BRIDGE_CTRL_REG_OFFSET, 592 + 4, &class); 642 593 class &= ~PCI_CLASS_BRIDGE_MASK; 643 594 class |= (PCI_CLASS_BRIDGE_PCI << PCI_CLASS_BRIDGE_SHIFT); 644 - pci_bus_write_config_dword(bus, 0, PCI_BRIDGE_CTRL_REG_OFFSET, class); 595 + iproc_pci_raw_config_write32(pcie, 0, PCI_BRIDGE_CTRL_REG_OFFSET, 596 + 4, class); 645 597 646 598 /* check link status to see if link is active */ 647 - pci_bus_read_config_word(bus, 0, pos + PCI_EXP_LNKSTA, &link_status); 599 + iproc_pci_raw_config_read32(pcie, 0, pos + 
PCI_EXP_LNKSTA, 600 + 2, &link_status); 648 601 if (link_status & PCI_EXP_LNKSTA_NLW) 649 602 link_is_active = true; 650 603 ··· 656 603 #define PCI_TARGET_LINK_SPEED_MASK 0xf 657 604 #define PCI_TARGET_LINK_SPEED_GEN2 0x2 658 605 #define PCI_TARGET_LINK_SPEED_GEN1 0x1 659 - pci_bus_read_config_dword(bus, 0, 660 - pos + PCI_EXP_LNKCTL2, 606 + iproc_pci_raw_config_read32(pcie, 0, 607 + pos + PCI_EXP_LNKCTL2, 4, 661 608 &link_ctrl); 662 609 if ((link_ctrl & PCI_TARGET_LINK_SPEED_MASK) == 663 610 PCI_TARGET_LINK_SPEED_GEN2) { 664 611 link_ctrl &= ~PCI_TARGET_LINK_SPEED_MASK; 665 612 link_ctrl |= PCI_TARGET_LINK_SPEED_GEN1; 666 - pci_bus_write_config_dword(bus, 0, 667 - pos + PCI_EXP_LNKCTL2, 668 - link_ctrl); 613 + iproc_pci_raw_config_write32(pcie, 0, 614 + pos + PCI_EXP_LNKCTL2, 615 + 4, link_ctrl); 669 616 msleep(100); 670 617 671 - pci_bus_read_config_word(bus, 0, pos + PCI_EXP_LNKSTA, 672 - &link_status); 618 + iproc_pci_raw_config_read32(pcie, 0, 619 + pos + PCI_EXP_LNKSTA, 620 + 2, &link_status); 673 621 if (link_status & PCI_EXP_LNKSTA_NLW) 674 622 link_is_active = true; 675 623 } ··· 1259 1205 struct device *dev; 1260 1206 int ret; 1261 1207 void *sysdata; 1262 - struct pci_bus *bus, *child; 1208 + struct pci_bus *child; 1209 + struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); 1263 1210 1264 1211 dev = pcie->dev; 1265 1212 ··· 1307 1252 sysdata = pcie; 1308 1253 #endif 1309 1254 1310 - bus = pci_create_root_bus(dev, 0, &iproc_pcie_ops, sysdata, res); 1311 - if (!bus) { 1312 - dev_err(dev, "unable to create PCI root bus\n"); 1313 - ret = -ENOMEM; 1314 - goto err_power_off_phy; 1315 - } 1316 - pcie->root_bus = bus; 1317 - 1318 - ret = iproc_pcie_check_link(pcie, bus); 1255 + ret = iproc_pcie_check_link(pcie); 1319 1256 if (ret) { 1320 1257 dev_err(dev, "no PCIe EP device detected\n"); 1321 - goto err_rm_root_bus; 1258 + goto err_power_off_phy; 1322 1259 } 1323 1260 1324 1261 iproc_pcie_enable(pcie); ··· 1319 1272 if (iproc_pcie_msi_enable(pcie)) 
1320 1273 dev_info(dev, "not using iProc MSI\n"); 1321 1274 1322 - pci_scan_child_bus(bus); 1323 - pci_assign_unassigned_bus_resources(bus); 1275 + list_splice_init(res, &host->windows); 1276 + host->busnr = 0; 1277 + host->dev.parent = dev; 1278 + host->ops = &iproc_pcie_ops; 1279 + host->sysdata = sysdata; 1280 + host->map_irq = pcie->map_irq; 1281 + host->swizzle_irq = pci_common_swizzle; 1324 1282 1325 - if (pcie->map_irq) 1326 - pci_fixup_irqs(pci_common_swizzle, pcie->map_irq); 1283 + ret = pci_scan_root_bus_bridge(host); 1284 + if (ret < 0) { 1285 + dev_err(dev, "failed to scan host: %d\n", ret); 1286 + goto err_power_off_phy; 1287 + } 1327 1288 1328 - list_for_each_entry(child, &bus->children, node) 1289 + pci_assign_unassigned_bus_resources(host->bus); 1290 + 1291 + pcie->root_bus = host->bus; 1292 + 1293 + list_for_each_entry(child, &host->bus->children, node) 1329 1294 pcie_bus_configure_settings(child); 1330 1295 1331 - pci_bus_add_devices(bus); 1296 + pci_bus_add_devices(host->bus); 1332 1297 1333 1298 return 0; 1334 - 1335 - err_rm_root_bus: 1336 - pci_stop_root_bus(bus); 1337 - pci_remove_root_bus(bus); 1338 1299 1339 1300 err_power_off_phy: 1340 1301 phy_power_off(pcie->phy);
+554
drivers/pci/host/pcie-mediatek.c
··· 1 + /* 2 + * MediaTek PCIe host controller driver. 3 + * 4 + * Copyright (c) 2017 MediaTek Inc. 5 + * Author: Ryder Lee <ryder.lee@mediatek.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + * 11 + * This program is distributed in the hope that it will be useful, 12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + * GNU General Public License for more details. 15 + */ 16 + 17 + #include <linux/clk.h> 18 + #include <linux/delay.h> 19 + #include <linux/kernel.h> 20 + #include <linux/of_address.h> 21 + #include <linux/of_pci.h> 22 + #include <linux/of_platform.h> 23 + #include <linux/pci.h> 24 + #include <linux/phy/phy.h> 25 + #include <linux/platform_device.h> 26 + #include <linux/pm_runtime.h> 27 + #include <linux/reset.h> 28 + 29 + /* PCIe shared registers */ 30 + #define PCIE_SYS_CFG 0x00 31 + #define PCIE_INT_ENABLE 0x0c 32 + #define PCIE_CFG_ADDR 0x20 33 + #define PCIE_CFG_DATA 0x24 34 + 35 + /* PCIe per port registers */ 36 + #define PCIE_BAR0_SETUP 0x10 37 + #define PCIE_CLASS 0x34 38 + #define PCIE_LINK_STATUS 0x50 39 + 40 + #define PCIE_PORT_INT_EN(x) BIT(20 + (x)) 41 + #define PCIE_PORT_PERST(x) BIT(1 + (x)) 42 + #define PCIE_PORT_LINKUP BIT(0) 43 + #define PCIE_BAR_MAP_MAX GENMASK(31, 16) 44 + 45 + #define PCIE_BAR_ENABLE BIT(0) 46 + #define PCIE_REVISION_ID BIT(0) 47 + #define PCIE_CLASS_CODE (0x60400 << 8) 48 + #define PCIE_CONF_REG(regn) (((regn) & GENMASK(7, 2)) | \ 49 + ((((regn) >> 8) & GENMASK(3, 0)) << 24)) 50 + #define PCIE_CONF_FUN(fun) (((fun) << 8) & GENMASK(10, 8)) 51 + #define PCIE_CONF_DEV(dev) (((dev) << 11) & GENMASK(15, 11)) 52 + #define PCIE_CONF_BUS(bus) (((bus) << 16) & GENMASK(23, 16)) 53 + #define PCIE_CONF_ADDR(regn, fun, dev, bus) \ 54 + (PCIE_CONF_REG(regn) | PCIE_CONF_FUN(fun) | 
\ 55 + PCIE_CONF_DEV(dev) | PCIE_CONF_BUS(bus)) 56 + 57 + /* MediaTek specific configuration registers */ 58 + #define PCIE_FTS_NUM 0x70c 59 + #define PCIE_FTS_NUM_MASK GENMASK(15, 8) 60 + #define PCIE_FTS_NUM_L0(x) ((x) & 0xff << 8) 61 + 62 + #define PCIE_FC_CREDIT 0x73c 63 + #define PCIE_FC_CREDIT_MASK (GENMASK(31, 31) | GENMASK(28, 16)) 64 + #define PCIE_FC_CREDIT_VAL(x) ((x) << 16) 65 + 66 + /** 67 + * struct mtk_pcie_port - PCIe port information 68 + * @base: IO mapped register base 69 + * @list: port list 70 + * @pcie: pointer to PCIe host info 71 + * @reset: pointer to port reset control 72 + * @sys_ck: pointer to bus clock 73 + * @phy: pointer to phy control block 74 + * @lane: lane count 75 + * @index: port index 76 + */ 77 + struct mtk_pcie_port { 78 + void __iomem *base; 79 + struct list_head list; 80 + struct mtk_pcie *pcie; 81 + struct reset_control *reset; 82 + struct clk *sys_ck; 83 + struct phy *phy; 84 + u32 lane; 85 + u32 index; 86 + }; 87 + 88 + /** 89 + * struct mtk_pcie - PCIe host information 90 + * @dev: pointer to PCIe device 91 + * @base: IO mapped register base 92 + * @free_ck: free-run reference clock 93 + * @io: IO resource 94 + * @pio: PIO resource 95 + * @mem: non-prefetchable memory resource 96 + * @busn: bus range 97 + * @offset: IO / Memory offset 98 + * @ports: pointer to PCIe port information 99 + */ 100 + struct mtk_pcie { 101 + struct device *dev; 102 + void __iomem *base; 103 + struct clk *free_ck; 104 + 105 + struct resource io; 106 + struct resource pio; 107 + struct resource mem; 108 + struct resource busn; 109 + struct { 110 + resource_size_t mem; 111 + resource_size_t io; 112 + } offset; 113 + struct list_head ports; 114 + }; 115 + 116 + static inline bool mtk_pcie_link_up(struct mtk_pcie_port *port) 117 + { 118 + return !!(readl(port->base + PCIE_LINK_STATUS) & PCIE_PORT_LINKUP); 119 + } 120 + 121 + static void mtk_pcie_subsys_powerdown(struct mtk_pcie *pcie) 122 + { 123 + struct device *dev = pcie->dev; 124 + 125 + 
clk_disable_unprepare(pcie->free_ck); 126 + 127 + if (dev->pm_domain) { 128 + pm_runtime_put_sync(dev); 129 + pm_runtime_disable(dev); 130 + } 131 + } 132 + 133 + static void mtk_pcie_port_free(struct mtk_pcie_port *port) 134 + { 135 + struct mtk_pcie *pcie = port->pcie; 136 + struct device *dev = pcie->dev; 137 + 138 + devm_iounmap(dev, port->base); 139 + list_del(&port->list); 140 + devm_kfree(dev, port); 141 + } 142 + 143 + static void mtk_pcie_put_resources(struct mtk_pcie *pcie) 144 + { 145 + struct mtk_pcie_port *port, *tmp; 146 + 147 + list_for_each_entry_safe(port, tmp, &pcie->ports, list) { 148 + phy_power_off(port->phy); 149 + clk_disable_unprepare(port->sys_ck); 150 + mtk_pcie_port_free(port); 151 + } 152 + 153 + mtk_pcie_subsys_powerdown(pcie); 154 + } 155 + 156 + static void __iomem *mtk_pcie_map_bus(struct pci_bus *bus, 157 + unsigned int devfn, int where) 158 + { 159 + struct pci_host_bridge *host = pci_find_host_bridge(bus); 160 + struct mtk_pcie *pcie = pci_host_bridge_priv(host); 161 + 162 + writel(PCIE_CONF_ADDR(where, PCI_FUNC(devfn), PCI_SLOT(devfn), 163 + bus->number), pcie->base + PCIE_CFG_ADDR); 164 + 165 + return pcie->base + PCIE_CFG_DATA + (where & 3); 166 + } 167 + 168 + static struct pci_ops mtk_pcie_ops = { 169 + .map_bus = mtk_pcie_map_bus, 170 + .read = pci_generic_config_read, 171 + .write = pci_generic_config_write, 172 + }; 173 + 174 + static void mtk_pcie_configure_rc(struct mtk_pcie_port *port) 175 + { 176 + struct mtk_pcie *pcie = port->pcie; 177 + u32 func = PCI_FUNC(port->index << 3); 178 + u32 slot = PCI_SLOT(port->index << 3); 179 + u32 val; 180 + 181 + /* enable interrupt */ 182 + val = readl(pcie->base + PCIE_INT_ENABLE); 183 + val |= PCIE_PORT_INT_EN(port->index); 184 + writel(val, pcie->base + PCIE_INT_ENABLE); 185 + 186 + /* map to all DDR region. We need to set it before cfg operation. 
*/ 187 + writel(PCIE_BAR_MAP_MAX | PCIE_BAR_ENABLE, 188 + port->base + PCIE_BAR0_SETUP); 189 + 190 + /* configure class code and revision ID */ 191 + writel(PCIE_CLASS_CODE | PCIE_REVISION_ID, port->base + PCIE_CLASS); 192 + 193 + /* configure FC credit */ 194 + writel(PCIE_CONF_ADDR(PCIE_FC_CREDIT, func, slot, 0), 195 + pcie->base + PCIE_CFG_ADDR); 196 + val = readl(pcie->base + PCIE_CFG_DATA); 197 + val &= ~PCIE_FC_CREDIT_MASK; 198 + val |= PCIE_FC_CREDIT_VAL(0x806c); 199 + writel(PCIE_CONF_ADDR(PCIE_FC_CREDIT, func, slot, 0), 200 + pcie->base + PCIE_CFG_ADDR); 201 + writel(val, pcie->base + PCIE_CFG_DATA); 202 + 203 + /* configure RC FTS number to 250 when it leaves L0s */ 204 + writel(PCIE_CONF_ADDR(PCIE_FTS_NUM, func, slot, 0), 205 + pcie->base + PCIE_CFG_ADDR); 206 + val = readl(pcie->base + PCIE_CFG_DATA); 207 + val &= ~PCIE_FTS_NUM_MASK; 208 + val |= PCIE_FTS_NUM_L0(0x50); 209 + writel(PCIE_CONF_ADDR(PCIE_FTS_NUM, func, slot, 0), 210 + pcie->base + PCIE_CFG_ADDR); 211 + writel(val, pcie->base + PCIE_CFG_DATA); 212 + } 213 + 214 + static void mtk_pcie_assert_ports(struct mtk_pcie_port *port) 215 + { 216 + struct mtk_pcie *pcie = port->pcie; 217 + u32 val; 218 + 219 + /* assert port PERST_N */ 220 + val = readl(pcie->base + PCIE_SYS_CFG); 221 + val |= PCIE_PORT_PERST(port->index); 222 + writel(val, pcie->base + PCIE_SYS_CFG); 223 + 224 + /* de-assert port PERST_N */ 225 + val = readl(pcie->base + PCIE_SYS_CFG); 226 + val &= ~PCIE_PORT_PERST(port->index); 227 + writel(val, pcie->base + PCIE_SYS_CFG); 228 + 229 + /* PCIe v2.0 need at least 100ms delay to train from Gen1 to Gen2 */ 230 + msleep(100); 231 + } 232 + 233 + static void mtk_pcie_enable_ports(struct mtk_pcie_port *port) 234 + { 235 + struct device *dev = port->pcie->dev; 236 + int err; 237 + 238 + err = clk_prepare_enable(port->sys_ck); 239 + if (err) { 240 + dev_err(dev, "failed to enable port%d clock\n", port->index); 241 + goto err_sys_clk; 242 + } 243 + 244 + reset_control_assert(port->reset); 245 
+ reset_control_deassert(port->reset); 246 + 247 + err = phy_power_on(port->phy); 248 + if (err) { 249 + dev_err(dev, "failed to power on port%d phy\n", port->index); 250 + goto err_phy_on; 251 + } 252 + 253 + mtk_pcie_assert_ports(port); 254 + 255 + /* if link up, then setup root port configuration space */ 256 + if (mtk_pcie_link_up(port)) { 257 + mtk_pcie_configure_rc(port); 258 + return; 259 + } 260 + 261 + dev_info(dev, "Port%d link down\n", port->index); 262 + 263 + phy_power_off(port->phy); 264 + err_phy_on: 265 + clk_disable_unprepare(port->sys_ck); 266 + err_sys_clk: 267 + mtk_pcie_port_free(port); 268 + } 269 + 270 + static int mtk_pcie_parse_ports(struct mtk_pcie *pcie, 271 + struct device_node *node, 272 + int index) 273 + { 274 + struct mtk_pcie_port *port; 275 + struct resource *regs; 276 + struct device *dev = pcie->dev; 277 + struct platform_device *pdev = to_platform_device(dev); 278 + char name[10]; 279 + int err; 280 + 281 + port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL); 282 + if (!port) 283 + return -ENOMEM; 284 + 285 + err = of_property_read_u32(node, "num-lanes", &port->lane); 286 + if (err) { 287 + dev_err(dev, "missing num-lanes property\n"); 288 + return err; 289 + } 290 + 291 + regs = platform_get_resource(pdev, IORESOURCE_MEM, index + 1); 292 + port->base = devm_ioremap_resource(dev, regs); 293 + if (IS_ERR(port->base)) { 294 + dev_err(dev, "failed to map port%d base\n", index); 295 + return PTR_ERR(port->base); 296 + } 297 + 298 + snprintf(name, sizeof(name), "sys_ck%d", index); 299 + port->sys_ck = devm_clk_get(dev, name); 300 + if (IS_ERR(port->sys_ck)) { 301 + dev_err(dev, "failed to get port%d clock\n", index); 302 + return PTR_ERR(port->sys_ck); 303 + } 304 + 305 + snprintf(name, sizeof(name), "pcie-rst%d", index); 306 + port->reset = devm_reset_control_get_optional(dev, name); 307 + if (PTR_ERR(port->reset) == -EPROBE_DEFER) 308 + return PTR_ERR(port->reset); 309 + 310 + /* some platforms may use default PHY setting */ 311 + 
snprintf(name, sizeof(name), "pcie-phy%d", index); 312 + port->phy = devm_phy_optional_get(dev, name); 313 + if (IS_ERR(port->phy)) 314 + return PTR_ERR(port->phy); 315 + 316 + port->index = index; 317 + port->pcie = pcie; 318 + 319 + INIT_LIST_HEAD(&port->list); 320 + list_add_tail(&port->list, &pcie->ports); 321 + 322 + return 0; 323 + } 324 + 325 + static int mtk_pcie_subsys_powerup(struct mtk_pcie *pcie) 326 + { 327 + struct device *dev = pcie->dev; 328 + struct platform_device *pdev = to_platform_device(dev); 329 + struct resource *regs; 330 + int err; 331 + 332 + /* get shared registers */ 333 + regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 334 + pcie->base = devm_ioremap_resource(dev, regs); 335 + if (IS_ERR(pcie->base)) { 336 + dev_err(dev, "failed to map shared register\n"); 337 + return PTR_ERR(pcie->base); 338 + } 339 + 340 + pcie->free_ck = devm_clk_get(dev, "free_ck"); 341 + if (IS_ERR(pcie->free_ck)) { 342 + if (PTR_ERR(pcie->free_ck) == -EPROBE_DEFER) 343 + return -EPROBE_DEFER; 344 + 345 + pcie->free_ck = NULL; 346 + } 347 + 348 + if (dev->pm_domain) { 349 + pm_runtime_enable(dev); 350 + pm_runtime_get_sync(dev); 351 + } 352 + 353 + /* enable top level clock */ 354 + err = clk_prepare_enable(pcie->free_ck); 355 + if (err) { 356 + dev_err(dev, "failed to enable free_ck\n"); 357 + goto err_free_ck; 358 + } 359 + 360 + return 0; 361 + 362 + err_free_ck: 363 + if (dev->pm_domain) { 364 + pm_runtime_put_sync(dev); 365 + pm_runtime_disable(dev); 366 + } 367 + 368 + return err; 369 + } 370 + 371 + static int mtk_pcie_setup(struct mtk_pcie *pcie) 372 + { 373 + struct device *dev = pcie->dev; 374 + struct device_node *node = dev->of_node, *child; 375 + struct of_pci_range_parser parser; 376 + struct of_pci_range range; 377 + struct resource res; 378 + struct mtk_pcie_port *port, *tmp; 379 + int err; 380 + 381 + if (of_pci_range_parser_init(&parser, node)) { 382 + dev_err(dev, "missing \"ranges\" property\n"); 383 + return -EINVAL; 384 + } 385 + 386 
+ for_each_of_pci_range(&parser, &range) { 387 + err = of_pci_range_to_resource(&range, node, &res); 388 + if (err < 0) 389 + return err; 390 + 391 + switch (res.flags & IORESOURCE_TYPE_BITS) { 392 + case IORESOURCE_IO: 393 + pcie->offset.io = res.start - range.pci_addr; 394 + 395 + memcpy(&pcie->pio, &res, sizeof(res)); 396 + pcie->pio.name = node->full_name; 397 + 398 + pcie->io.start = range.cpu_addr; 399 + pcie->io.end = range.cpu_addr + range.size - 1; 400 + pcie->io.flags = IORESOURCE_MEM; 401 + pcie->io.name = "I/O"; 402 + 403 + memcpy(&res, &pcie->io, sizeof(res)); 404 + break; 405 + 406 + case IORESOURCE_MEM: 407 + pcie->offset.mem = res.start - range.pci_addr; 408 + 409 + memcpy(&pcie->mem, &res, sizeof(res)); 410 + pcie->mem.name = "non-prefetchable"; 411 + break; 412 + } 413 + } 414 + 415 + err = of_pci_parse_bus_range(node, &pcie->busn); 416 + if (err < 0) { 417 + dev_err(dev, "failed to parse bus ranges property: %d\n", err); 418 + pcie->busn.name = node->name; 419 + pcie->busn.start = 0; 420 + pcie->busn.end = 0xff; 421 + pcie->busn.flags = IORESOURCE_BUS; 422 + } 423 + 424 + for_each_available_child_of_node(node, child) { 425 + int index; 426 + 427 + err = of_pci_get_devfn(child); 428 + if (err < 0) { 429 + dev_err(dev, "failed to parse devfn: %d\n", err); 430 + return err; 431 + } 432 + 433 + index = PCI_SLOT(err); 434 + 435 + err = mtk_pcie_parse_ports(pcie, child, index); 436 + if (err) 437 + return err; 438 + } 439 + 440 + err = mtk_pcie_subsys_powerup(pcie); 441 + if (err) 442 + return err; 443 + 444 + /* enable each port, and then check link status */ 445 + list_for_each_entry_safe(port, tmp, &pcie->ports, list) 446 + mtk_pcie_enable_ports(port); 447 + 448 + /* power down PCIe subsys if slots are all empty (link down) */ 449 + if (list_empty(&pcie->ports)) 450 + mtk_pcie_subsys_powerdown(pcie); 451 + 452 + return 0; 453 + } 454 + 455 + static int mtk_pcie_request_resources(struct mtk_pcie *pcie) 456 + { 457 + struct pci_host_bridge *host = 
pci_host_bridge_from_priv(pcie); 458 + struct list_head *windows = &host->windows; 459 + struct device *dev = pcie->dev; 460 + int err; 461 + 462 + pci_add_resource_offset(windows, &pcie->pio, pcie->offset.io); 463 + pci_add_resource_offset(windows, &pcie->mem, pcie->offset.mem); 464 + pci_add_resource(windows, &pcie->busn); 465 + 466 + err = devm_request_pci_bus_resources(dev, windows); 467 + if (err < 0) 468 + return err; 469 + 470 + pci_remap_iospace(&pcie->pio, pcie->io.start); 471 + 472 + return 0; 473 + } 474 + 475 + static int mtk_pcie_register_host(struct pci_host_bridge *host) 476 + { 477 + struct mtk_pcie *pcie = pci_host_bridge_priv(host); 478 + struct pci_bus *child; 479 + int err; 480 + 481 + host->busnr = pcie->busn.start; 482 + host->dev.parent = pcie->dev; 483 + host->ops = &mtk_pcie_ops; 484 + host->map_irq = of_irq_parse_and_map_pci; 485 + host->swizzle_irq = pci_common_swizzle; 486 + 487 + err = pci_scan_root_bus_bridge(host); 488 + if (err < 0) 489 + return err; 490 + 491 + pci_bus_size_bridges(host->bus); 492 + pci_bus_assign_resources(host->bus); 493 + 494 + list_for_each_entry(child, &host->bus->children, node) 495 + pcie_bus_configure_settings(child); 496 + 497 + pci_bus_add_devices(host->bus); 498 + 499 + return 0; 500 + } 501 + 502 + static int mtk_pcie_probe(struct platform_device *pdev) 503 + { 504 + struct device *dev = &pdev->dev; 505 + struct mtk_pcie *pcie; 506 + struct pci_host_bridge *host; 507 + int err; 508 + 509 + host = devm_pci_alloc_host_bridge(dev, sizeof(*pcie)); 510 + if (!host) 511 + return -ENOMEM; 512 + 513 + pcie = pci_host_bridge_priv(host); 514 + 515 + pcie->dev = dev; 516 + platform_set_drvdata(pdev, pcie); 517 + INIT_LIST_HEAD(&pcie->ports); 518 + 519 + err = mtk_pcie_setup(pcie); 520 + if (err) 521 + return err; 522 + 523 + err = mtk_pcie_request_resources(pcie); 524 + if (err) 525 + goto put_resources; 526 + 527 + err = mtk_pcie_register_host(host); 528 + if (err) 529 + goto put_resources; 530 + 531 + return 0; 
532 + 533 + put_resources: 534 + if (!list_empty(&pcie->ports)) 535 + mtk_pcie_put_resources(pcie); 536 + 537 + return err; 538 + } 539 + 540 + static const struct of_device_id mtk_pcie_ids[] = { 541 + { .compatible = "mediatek,mt7623-pcie"}, 542 + { .compatible = "mediatek,mt2701-pcie"}, 543 + {}, 544 + }; 545 + 546 + static struct platform_driver mtk_pcie_driver = { 547 + .probe = mtk_pcie_probe, 548 + .driver = { 549 + .name = "mtk-pcie", 550 + .of_match_table = mtk_pcie_ids, 551 + .suppress_bind_attrs = true, 552 + }, 553 + }; 554 + builtin_platform_driver(mtk_pcie_driver);
+25 -15
drivers/pci/host/pcie-rcar.c
··· 450 450 static int rcar_pcie_enable(struct rcar_pcie *pcie) 451 451 { 452 452 struct device *dev = pcie->dev; 453 + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); 453 454 struct pci_bus *bus, *child; 454 - LIST_HEAD(res); 455 + int ret; 455 456 456 457 /* Try setting 5 GT/s link speed */ 457 458 rcar_pcie_force_speedup(pcie); 458 459 459 - rcar_pcie_setup(&res, pcie); 460 + rcar_pcie_setup(&bridge->windows, pcie); 460 461 461 462 pci_add_flags(PCI_REASSIGN_ALL_RSRC | PCI_REASSIGN_ALL_BUS); 462 463 464 + bridge->dev.parent = dev; 465 + bridge->sysdata = pcie; 466 + bridge->busnr = pcie->root_bus_nr; 467 + bridge->ops = &rcar_pcie_ops; 468 + bridge->map_irq = of_irq_parse_and_map_pci; 469 + bridge->swizzle_irq = pci_common_swizzle; 463 470 if (IS_ENABLED(CONFIG_PCI_MSI)) 464 - bus = pci_scan_root_bus_msi(dev, pcie->root_bus_nr, 465 - &rcar_pcie_ops, pcie, &res, &pcie->msi.chip); 466 - else 467 - bus = pci_scan_root_bus(dev, pcie->root_bus_nr, 468 - &rcar_pcie_ops, pcie, &res); 471 + bridge->msi = &pcie->msi.chip; 469 472 470 - if (!bus) { 471 - dev_err(dev, "Scanning rootbus failed"); 472 - return -ENODEV; 473 + ret = pci_scan_root_bus_bridge(bridge); 474 + if (ret < 0) { 475 + kfree(bridge); 476 + return ret; 473 477 } 474 478 475 - pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci); 479 + bus = bridge->bus; 476 480 477 481 pci_bus_size_bridges(bus); 478 482 pci_bus_assign_resources(bus); ··· 1131 1127 unsigned int data; 1132 1128 int err; 1133 1129 int (*hw_init_fn)(struct rcar_pcie *); 1130 + struct pci_host_bridge *bridge; 1134 1131 1135 - pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 1136 - if (!pcie) 1132 + bridge = pci_alloc_host_bridge(sizeof(*pcie)); 1133 + if (!bridge) 1137 1134 return -ENOMEM; 1135 + 1136 + pcie = pci_host_bridge_priv(bridge); 1138 1137 1139 1138 pcie->dev = dev; 1140 1139 ··· 1148 1141 err = rcar_pcie_get_resources(pcie); 1149 1142 if (err < 0) { 1150 1143 dev_err(dev, "failed to request resources: 
%d\n", err); 1151 - return err; 1144 + goto err_free_bridge; 1152 1145 } 1153 1146 1154 1147 err = rcar_pcie_parse_map_dma_ranges(pcie, dev->of_node); 1155 1148 if (err) 1156 - return err; 1149 + goto err_free_bridge; 1157 1150 1158 1151 pm_runtime_enable(dev); 1159 1152 err = pm_runtime_get_sync(dev); ··· 1189 1182 goto err_pm_put; 1190 1183 1191 1184 return 0; 1185 + 1186 + err_free_bridge: 1187 + pci_free_host_bridge(bridge); 1192 1188 1193 1189 err_pm_put: 1194 1190 pm_runtime_put(dev);
+117 -30
drivers/pci/host/pcie-rockchip.c
··· 139 139 PCIE_CORE_INT_CT | PCIE_CORE_INT_UTC | \ 140 140 PCIE_CORE_INT_MMVC) 141 141 142 + #define PCIE_RC_CONFIG_NORMAL_BASE 0x800000 142 143 #define PCIE_RC_CONFIG_BASE 0xa00000 143 144 #define PCIE_RC_CONFIG_RID_CCR (PCIE_RC_CONFIG_BASE + 0x08) 144 145 #define PCIE_RC_CONFIG_SCC_SHIFT 16 ··· 147 146 #define PCIE_RC_CONFIG_DCR_CSPL_SHIFT 18 148 147 #define PCIE_RC_CONFIG_DCR_CSPL_LIMIT 0xff 149 148 #define PCIE_RC_CONFIG_DCR_CPLS_SHIFT 26 149 + #define PCIE_RC_CONFIG_DCSR (PCIE_RC_CONFIG_BASE + 0xc8) 150 + #define PCIE_RC_CONFIG_DCSR_MPS_MASK GENMASK(7, 5) 151 + #define PCIE_RC_CONFIG_DCSR_MPS_256 (0x1 << 5) 150 152 #define PCIE_RC_CONFIG_LINK_CAP (PCIE_RC_CONFIG_BASE + 0xcc) 151 153 #define PCIE_RC_CONFIG_LINK_CAP_L0S BIT(10) 152 154 #define PCIE_RC_CONFIG_LCS (PCIE_RC_CONFIG_BASE + 0xd0) ··· 179 175 #define IB_ROOT_PORT_REG_SIZE_SHIFT 3 180 176 #define AXI_WRAPPER_IO_WRITE 0x6 181 177 #define AXI_WRAPPER_MEM_WRITE 0x2 178 + #define AXI_WRAPPER_TYPE0_CFG 0xa 179 + #define AXI_WRAPPER_TYPE1_CFG 0xb 182 180 #define AXI_WRAPPER_NOR_MSG 0xc 183 181 184 182 #define MAX_AXI_IB_ROOTPORT_REGION_NUM 3 ··· 204 198 #define RC_REGION_0_ADDR_TRANS_H 0x00000000 205 199 #define RC_REGION_0_ADDR_TRANS_L 0x00000000 206 200 #define RC_REGION_0_PASS_BITS (25 - 1) 201 + #define RC_REGION_0_TYPE_MASK GENMASK(3, 0) 207 202 #define MAX_AXI_WRAPPER_REGION_NUM 33 208 203 209 204 struct rockchip_pcie { ··· 302 295 static int rockchip_pcie_rd_own_conf(struct rockchip_pcie *rockchip, 303 296 int where, int size, u32 *val) 304 297 { 305 - void __iomem *addr = rockchip->apb_base + PCIE_RC_CONFIG_BASE + where; 298 + void __iomem *addr; 299 + 300 + addr = rockchip->apb_base + PCIE_RC_CONFIG_NORMAL_BASE + where; 306 301 307 302 if (!IS_ALIGNED((uintptr_t)addr, size)) { 308 303 *val = 0; ··· 328 319 int where, int size, u32 val) 329 320 { 330 321 u32 mask, tmp, offset; 322 + void __iomem *addr; 331 323 332 324 offset = where & ~0x3; 325 + addr = rockchip->apb_base + 
PCIE_RC_CONFIG_NORMAL_BASE + offset; 333 326 334 327 if (size == 4) { 335 - writel(val, rockchip->apb_base + PCIE_RC_CONFIG_BASE + offset); 328 + writel(val, addr); 336 329 return PCIBIOS_SUCCESSFUL; 337 330 } 338 331 ··· 345 334 * corrupt RW1C bits in adjacent registers. But the hardware 346 335 * doesn't support smaller writes. 347 336 */ 348 - tmp = readl(rockchip->apb_base + PCIE_RC_CONFIG_BASE + offset) & mask; 337 + tmp = readl(addr) & mask; 349 338 tmp |= val << ((where & 0x3) * 8); 350 - writel(tmp, rockchip->apb_base + PCIE_RC_CONFIG_BASE + offset); 339 + writel(tmp, addr); 351 340 352 341 return PCIBIOS_SUCCESSFUL; 342 + } 343 + 344 + static void rockchip_pcie_cfg_configuration_accesses( 345 + struct rockchip_pcie *rockchip, u32 type) 346 + { 347 + u32 ob_desc_0; 348 + 349 + /* Configuration Accesses for region 0 */ 350 + rockchip_pcie_write(rockchip, 0x0, PCIE_RC_BAR_CONF); 351 + 352 + rockchip_pcie_write(rockchip, 353 + (RC_REGION_0_ADDR_TRANS_L + RC_REGION_0_PASS_BITS), 354 + PCIE_CORE_OB_REGION_ADDR0); 355 + rockchip_pcie_write(rockchip, RC_REGION_0_ADDR_TRANS_H, 356 + PCIE_CORE_OB_REGION_ADDR1); 357 + ob_desc_0 = rockchip_pcie_read(rockchip, PCIE_CORE_OB_REGION_DESC0); 358 + ob_desc_0 &= ~(RC_REGION_0_TYPE_MASK); 359 + ob_desc_0 |= (type | (0x1 << 23)); 360 + rockchip_pcie_write(rockchip, ob_desc_0, PCIE_CORE_OB_REGION_DESC0); 361 + rockchip_pcie_write(rockchip, 0x0, PCIE_CORE_OB_REGION_DESC1); 353 362 } 354 363 355 364 static int rockchip_pcie_rd_other_conf(struct rockchip_pcie *rockchip, ··· 385 354 *val = 0; 386 355 return PCIBIOS_BAD_REGISTER_NUMBER; 387 356 } 357 + 358 + if (bus->parent->number == rockchip->root_bus_nr) 359 + rockchip_pcie_cfg_configuration_accesses(rockchip, 360 + AXI_WRAPPER_TYPE0_CFG); 361 + else 362 + rockchip_pcie_cfg_configuration_accesses(rockchip, 363 + AXI_WRAPPER_TYPE1_CFG); 388 364 389 365 if (size == 4) { 390 366 *val = readl(rockchip->reg_base + busdev); ··· 416 378 PCI_FUNC(devfn), where); 417 379 if 
(!IS_ALIGNED(busdev, size)) 418 380 return PCIBIOS_BAD_REGISTER_NUMBER; 381 + 382 + if (bus->parent->number == rockchip->root_bus_nr) 383 + rockchip_pcie_cfg_configuration_accesses(rockchip, 384 + AXI_WRAPPER_TYPE0_CFG); 385 + else 386 + rockchip_pcie_cfg_configuration_accesses(rockchip, 387 + AXI_WRAPPER_TYPE1_CFG); 419 388 420 389 if (size == 4) 421 390 writel(val, rockchip->reg_base + busdev); ··· 709 664 rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_LINK_CAP); 710 665 } 711 666 712 - rockchip_pcie_write(rockchip, 0x0, PCIE_RC_BAR_CONF); 713 - 714 - rockchip_pcie_write(rockchip, 715 - (RC_REGION_0_ADDR_TRANS_L + RC_REGION_0_PASS_BITS), 716 - PCIE_CORE_OB_REGION_ADDR0); 717 - rockchip_pcie_write(rockchip, RC_REGION_0_ADDR_TRANS_H, 718 - PCIE_CORE_OB_REGION_ADDR1); 719 - rockchip_pcie_write(rockchip, 0x0080000a, PCIE_CORE_OB_REGION_DESC0); 720 - rockchip_pcie_write(rockchip, 0x0, PCIE_CORE_OB_REGION_DESC1); 667 + status = rockchip_pcie_read(rockchip, PCIE_RC_CONFIG_DCSR); 668 + status &= ~PCIE_RC_CONFIG_DCSR_MPS_MASK; 669 + status |= PCIE_RC_CONFIG_DCSR_MPS_256; 670 + rockchip_pcie_write(rockchip, status, PCIE_RC_CONFIG_DCSR); 721 671 722 672 return 0; 723 673 } ··· 1196 1156 return 0; 1197 1157 } 1198 1158 1199 - static int rockchip_cfg_atu(struct rockchip_pcie *rockchip) 1159 + static int rockchip_pcie_cfg_atu(struct rockchip_pcie *rockchip) 1200 1160 { 1201 1161 struct device *dev = rockchip->dev; 1202 1162 int offset; 1203 1163 int err; 1204 1164 int reg_no; 1165 + 1166 + rockchip_pcie_cfg_configuration_accesses(rockchip, 1167 + AXI_WRAPPER_TYPE0_CFG); 1205 1168 1206 1169 for (reg_no = 0; reg_no < (rockchip->mem_size >> 20); reg_no++) { 1207 1170 err = rockchip_pcie_prog_ob_atu(rockchip, reg_no + 1, ··· 1294 1251 clk_disable_unprepare(rockchip->aclk_perf_pcie); 1295 1252 clk_disable_unprepare(rockchip->aclk_pcie); 1296 1253 1254 + if (!IS_ERR(rockchip->vpcie0v9)) 1255 + regulator_disable(rockchip->vpcie0v9); 1256 + 1297 1257 return ret; 1298 1258 } 1299 
1259 ··· 1305 1259 struct rockchip_pcie *rockchip = dev_get_drvdata(dev); 1306 1260 int err; 1307 1261 1308 - clk_prepare_enable(rockchip->clk_pcie_pm); 1309 - clk_prepare_enable(rockchip->hclk_pcie); 1310 - clk_prepare_enable(rockchip->aclk_perf_pcie); 1311 - clk_prepare_enable(rockchip->aclk_pcie); 1262 + if (!IS_ERR(rockchip->vpcie0v9)) { 1263 + err = regulator_enable(rockchip->vpcie0v9); 1264 + if (err) { 1265 + dev_err(dev, "fail to enable vpcie0v9 regulator\n"); 1266 + return err; 1267 + } 1268 + } 1269 + 1270 + err = clk_prepare_enable(rockchip->clk_pcie_pm); 1271 + if (err) 1272 + goto err_pcie_pm; 1273 + 1274 + err = clk_prepare_enable(rockchip->hclk_pcie); 1275 + if (err) 1276 + goto err_hclk_pcie; 1277 + 1278 + err = clk_prepare_enable(rockchip->aclk_perf_pcie); 1279 + if (err) 1280 + goto err_aclk_perf_pcie; 1281 + 1282 + err = clk_prepare_enable(rockchip->aclk_pcie); 1283 + if (err) 1284 + goto err_aclk_pcie; 1312 1285 1313 1286 err = rockchip_pcie_init_port(rockchip); 1314 1287 if (err) 1315 - return err; 1288 + goto err_pcie_resume; 1316 1289 1317 - err = rockchip_cfg_atu(rockchip); 1290 + err = rockchip_pcie_cfg_atu(rockchip); 1318 1291 if (err) 1319 - return err; 1292 + goto err_pcie_resume; 1320 1293 1321 1294 /* Need this to enter L1 again */ 1322 1295 rockchip_pcie_update_txcredit_mui(rockchip); 1323 1296 rockchip_pcie_enable_interrupts(rockchip); 1324 1297 1325 1298 return 0; 1299 + 1300 + err_pcie_resume: 1301 + clk_disable_unprepare(rockchip->aclk_pcie); 1302 + err_aclk_pcie: 1303 + clk_disable_unprepare(rockchip->aclk_perf_pcie); 1304 + err_aclk_perf_pcie: 1305 + clk_disable_unprepare(rockchip->hclk_pcie); 1306 + err_hclk_pcie: 1307 + clk_disable_unprepare(rockchip->clk_pcie_pm); 1308 + err_pcie_pm: 1309 + return err; 1326 1310 } 1327 1311 1328 1312 static int rockchip_pcie_probe(struct platform_device *pdev) ··· 1360 1284 struct rockchip_pcie *rockchip; 1361 1285 struct device *dev = &pdev->dev; 1362 1286 struct pci_bus *bus, *child; 1287 + 
struct pci_host_bridge *bridge; 1363 1288 struct resource_entry *win; 1364 1289 resource_size_t io_base; 1365 1290 struct resource *mem; ··· 1372 1295 if (!dev->of_node) 1373 1296 return -ENODEV; 1374 1297 1375 - rockchip = devm_kzalloc(dev, sizeof(*rockchip), GFP_KERNEL); 1376 - if (!rockchip) 1298 + bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rockchip)); 1299 + if (!bridge) 1377 1300 return -ENOMEM; 1301 + 1302 + rockchip = pci_host_bridge_priv(bridge); 1378 1303 1379 1304 platform_set_drvdata(pdev, rockchip); 1380 1305 rockchip->dev = dev; ··· 1464 1385 } 1465 1386 } 1466 1387 1467 - err = rockchip_cfg_atu(rockchip); 1388 + err = rockchip_pcie_cfg_atu(rockchip); 1468 1389 if (err) 1469 1390 goto err_free_res; 1470 1391 1471 - rockchip->msg_region = devm_ioremap(rockchip->dev, 1472 - rockchip->msg_bus_addr, SZ_1M); 1392 + rockchip->msg_region = devm_ioremap(dev, rockchip->msg_bus_addr, SZ_1M); 1473 1393 if (!rockchip->msg_region) { 1474 1394 err = -ENOMEM; 1475 1395 goto err_free_res; 1476 1396 } 1477 1397 1478 - bus = pci_scan_root_bus(&pdev->dev, 0, &rockchip_pcie_ops, rockchip, &res); 1479 - if (!bus) { 1480 - err = -ENOMEM; 1398 + list_splice_init(&res, &bridge->windows); 1399 + bridge->dev.parent = dev; 1400 + bridge->sysdata = rockchip; 1401 + bridge->busnr = 0; 1402 + bridge->ops = &rockchip_pcie_ops; 1403 + bridge->map_irq = of_irq_parse_and_map_pci; 1404 + bridge->swizzle_irq = pci_common_swizzle; 1405 + 1406 + err = pci_scan_root_bus_bridge(bridge); 1407 + if (err) 1481 1408 goto err_free_res; 1482 - } 1409 + 1410 + bus = bridge->bus; 1411 + 1483 1412 rockchip->root_bus = bus; 1484 1413 1485 1414 pci_bus_size_bridges(bus);
+141
drivers/pci/host/pcie-tango.c
··· 1 + #include <linux/pci-ecam.h> 2 + #include <linux/delay.h> 3 + #include <linux/of.h> 4 + 5 + #define SMP8759_MUX 0x48 6 + #define SMP8759_TEST_OUT 0x74 7 + 8 + struct tango_pcie { 9 + void __iomem *base; 10 + }; 11 + 12 + static int smp8759_config_read(struct pci_bus *bus, unsigned int devfn, 13 + int where, int size, u32 *val) 14 + { 15 + struct pci_config_window *cfg = bus->sysdata; 16 + struct tango_pcie *pcie = dev_get_drvdata(cfg->parent); 17 + int ret; 18 + 19 + /* Reads in configuration space outside devfn 0 return garbage */ 20 + if (devfn != 0) 21 + return PCIBIOS_FUNC_NOT_SUPPORTED; 22 + 23 + /* 24 + * PCI config and MMIO accesses are muxed. Linux doesn't have a 25 + * mutual exclusion mechanism for config vs. MMIO accesses, so 26 + * concurrent accesses may cause corruption. 27 + */ 28 + writel_relaxed(1, pcie->base + SMP8759_MUX); 29 + ret = pci_generic_config_read(bus, devfn, where, size, val); 30 + writel_relaxed(0, pcie->base + SMP8759_MUX); 31 + 32 + return ret; 33 + } 34 + 35 + static int smp8759_config_write(struct pci_bus *bus, unsigned int devfn, 36 + int where, int size, u32 val) 37 + { 38 + struct pci_config_window *cfg = bus->sysdata; 39 + struct tango_pcie *pcie = dev_get_drvdata(cfg->parent); 40 + int ret; 41 + 42 + writel_relaxed(1, pcie->base + SMP8759_MUX); 43 + ret = pci_generic_config_write(bus, devfn, where, size, val); 44 + writel_relaxed(0, pcie->base + SMP8759_MUX); 45 + 46 + return ret; 47 + } 48 + 49 + static struct pci_ecam_ops smp8759_ecam_ops = { 50 + .bus_shift = 20, 51 + .pci_ops = { 52 + .map_bus = pci_ecam_map_bus, 53 + .read = smp8759_config_read, 54 + .write = smp8759_config_write, 55 + } 56 + }; 57 + 58 + static int tango_pcie_link_up(struct tango_pcie *pcie) 59 + { 60 + void __iomem *test_out = pcie->base + SMP8759_TEST_OUT; 61 + int i; 62 + 63 + writel_relaxed(16, test_out); 64 + for (i = 0; i < 10; ++i) { 65 + u32 ltssm_state = readl_relaxed(test_out) >> 8; 66 + if ((ltssm_state & 0x1f) == 0xf) /* L0 */ 67 + 
return 1; 68 + usleep_range(3000, 4000); 69 + } 70 + 71 + return 0; 72 + } 73 + 74 + static int tango_pcie_probe(struct platform_device *pdev) 75 + { 76 + struct device *dev = &pdev->dev; 77 + struct tango_pcie *pcie; 78 + struct resource *res; 79 + int ret; 80 + 81 + dev_warn(dev, "simultaneous PCI config and MMIO accesses may cause data corruption\n"); 82 + add_taint(TAINT_CRAP, LOCKDEP_STILL_OK); 83 + 84 + pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 85 + if (!pcie) 86 + return -ENOMEM; 87 + 88 + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 89 + pcie->base = devm_ioremap_resource(dev, res); 90 + if (IS_ERR(pcie->base)) 91 + return PTR_ERR(pcie->base); 92 + 93 + platform_set_drvdata(pdev, pcie); 94 + 95 + if (!tango_pcie_link_up(pcie)) 96 + return -ENODEV; 97 + 98 + return pci_host_common_probe(pdev, &smp8759_ecam_ops); 99 + } 100 + 101 + static const struct of_device_id tango_pcie_ids[] = { 102 + { .compatible = "sigma,smp8759-pcie" }, 103 + { }, 104 + }; 105 + 106 + static struct platform_driver tango_pcie_driver = { 107 + .probe = tango_pcie_probe, 108 + .driver = { 109 + .name = KBUILD_MODNAME, 110 + .of_match_table = tango_pcie_ids, 111 + .suppress_bind_attrs = true, 112 + }, 113 + }; 114 + builtin_platform_driver(tango_pcie_driver); 115 + 116 + /* 117 + * The root complex advertises the wrong device class. 118 + * Header Type 1 is for PCI-to-PCI bridges. 119 + */ 120 + static void tango_fixup_class(struct pci_dev *dev) 121 + { 122 + dev->class = PCI_CLASS_BRIDGE_PCI << 8; 123 + } 124 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0024, tango_fixup_class); 125 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0028, tango_fixup_class); 126 + 127 + /* 128 + * The root complex exposes a "fake" BAR, which is used to filter 129 + * bus-to-system accesses. Only accesses within the range defined by this 130 + * BAR are forwarded to the host, others are ignored. 
131 + * 132 + * By default, the DMA framework expects an identity mapping, and DRAM0 is 133 + * mapped at 0x80000000. 134 + */ 135 + static void tango_fixup_bar(struct pci_dev *dev) 136 + { 137 + dev->non_compliant_bars = true; 138 + pci_write_config_dword(dev, PCI_BASE_ADDRESS_0, 0x80000000); 139 + } 140 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0024, tango_fixup_bar); 141 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SIGMA, 0x0028, tango_fixup_bar);
+66 -13
drivers/pci/host/pcie-xilinx-nwl.c
··· 172 172 u8 root_busno; 173 173 struct nwl_msi msi; 174 174 struct irq_domain *legacy_irq_domain; 175 + raw_spinlock_t leg_mask_lock; 175 176 }; 176 177 177 178 static inline u32 nwl_bridge_readl(struct nwl_pcie *pcie, u32 off) ··· 384 383 chained_irq_exit(chip, desc); 385 384 } 386 385 386 + static void nwl_mask_leg_irq(struct irq_data *data) 387 + { 388 + struct irq_desc *desc = irq_to_desc(data->irq); 389 + struct nwl_pcie *pcie; 390 + unsigned long flags; 391 + u32 mask; 392 + u32 val; 393 + 394 + pcie = irq_desc_get_chip_data(desc); 395 + mask = 1 << (data->hwirq - 1); 396 + raw_spin_lock_irqsave(&pcie->leg_mask_lock, flags); 397 + val = nwl_bridge_readl(pcie, MSGF_LEG_MASK); 398 + nwl_bridge_writel(pcie, (val & (~mask)), MSGF_LEG_MASK); 399 + raw_spin_unlock_irqrestore(&pcie->leg_mask_lock, flags); 400 + } 401 + 402 + static void nwl_unmask_leg_irq(struct irq_data *data) 403 + { 404 + struct irq_desc *desc = irq_to_desc(data->irq); 405 + struct nwl_pcie *pcie; 406 + unsigned long flags; 407 + u32 mask; 408 + u32 val; 409 + 410 + pcie = irq_desc_get_chip_data(desc); 411 + mask = 1 << (data->hwirq - 1); 412 + raw_spin_lock_irqsave(&pcie->leg_mask_lock, flags); 413 + val = nwl_bridge_readl(pcie, MSGF_LEG_MASK); 414 + nwl_bridge_writel(pcie, (val | mask), MSGF_LEG_MASK); 415 + raw_spin_unlock_irqrestore(&pcie->leg_mask_lock, flags); 416 + } 417 + 418 + static struct irq_chip nwl_leg_irq_chip = { 419 + .name = "nwl_pcie:legacy", 420 + .irq_enable = nwl_unmask_leg_irq, 421 + .irq_disable = nwl_mask_leg_irq, 422 + .irq_mask = nwl_mask_leg_irq, 423 + .irq_unmask = nwl_unmask_leg_irq, 424 + }; 425 + 387 426 static int nwl_legacy_map(struct irq_domain *domain, unsigned int irq, 388 427 irq_hw_number_t hwirq) 389 428 { 390 - irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_simple_irq); 429 + irq_set_chip_and_handler(irq, &nwl_leg_irq_chip, handle_level_irq); 391 430 irq_set_chip_data(irq, domain->host_data); 431 + irq_set_status_flags(irq, IRQ_LEVEL); 392 432 
393 433 return 0; 394 434 } ··· 568 526 return -ENOMEM; 569 527 } 570 528 529 + raw_spin_lock_init(&pcie->leg_mask_lock); 571 530 nwl_pcie_init_msi_irq_domain(pcie); 572 531 return 0; 573 532 } 574 533 575 - static int nwl_pcie_enable_msi(struct nwl_pcie *pcie, struct pci_bus *bus) 534 + static int nwl_pcie_enable_msi(struct nwl_pcie *pcie) 576 535 { 577 536 struct device *dev = pcie->dev; 578 537 struct platform_device *pdev = to_platform_device(dev); ··· 834 791 struct nwl_pcie *pcie; 835 792 struct pci_bus *bus; 836 793 struct pci_bus *child; 794 + struct pci_host_bridge *bridge; 837 795 int err; 838 796 resource_size_t iobase = 0; 839 797 LIST_HEAD(res); 840 798 841 - pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 842 - if (!pcie) 843 - return -ENOMEM; 799 + bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie)); 800 + if (!bridge) 801 + return -ENODEV; 802 + 803 + pcie = pci_host_bridge_priv(bridge); 844 804 845 805 pcie->dev = dev; 846 806 pcie->ecam_value = NWL_ECAM_VALUE_DEFAULT; ··· 876 830 goto error; 877 831 } 878 832 879 - bus = pci_create_root_bus(dev, pcie->root_busno, 880 - &nwl_pcie_ops, pcie, &res); 881 - if (!bus) { 882 - err = -ENOMEM; 883 - goto error; 884 - } 833 + list_splice_init(&res, &bridge->windows); 834 + bridge->dev.parent = dev; 835 + bridge->sysdata = pcie; 836 + bridge->busnr = pcie->root_busno; 837 + bridge->ops = &nwl_pcie_ops; 838 + bridge->map_irq = of_irq_parse_and_map_pci; 839 + bridge->swizzle_irq = pci_common_swizzle; 885 840 886 841 if (IS_ENABLED(CONFIG_PCI_MSI)) { 887 - err = nwl_pcie_enable_msi(pcie, bus); 842 + err = nwl_pcie_enable_msi(pcie); 888 843 if (err < 0) { 889 844 dev_err(dev, "failed to enable MSI support: %d\n", err); 890 845 goto error; 891 846 } 892 847 } 893 - pci_scan_child_bus(bus); 848 + 849 + err = pci_scan_root_bus_bridge(bridge); 850 + if (err) 851 + goto error; 852 + 853 + bus = bridge->bus; 854 + 894 855 pci_assign_unassigned_bus_resources(bus); 895 856 list_for_each_entry(child, 
&bus->children, node) 896 857 pcie_bus_configure_settings(child);
+22 -14
drivers/pci/host/pcie-xilinx.c
··· 633 633 struct device *dev = &pdev->dev; 634 634 struct xilinx_pcie_port *port; 635 635 struct pci_bus *bus, *child; 636 + struct pci_host_bridge *bridge; 636 637 int err; 637 638 resource_size_t iobase = 0; 638 639 LIST_HEAD(res); ··· 641 640 if (!dev->of_node) 642 641 return -ENODEV; 643 642 644 - port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL); 645 - if (!port) 646 - return -ENOMEM; 643 + bridge = devm_pci_alloc_host_bridge(dev, sizeof(*port)); 644 + if (!bridge) 645 + return -ENODEV; 646 + 647 + port = pci_host_bridge_priv(bridge); 647 648 648 649 port->dev = dev; 649 650 ··· 674 671 if (err) 675 672 goto error; 676 673 677 - bus = pci_create_root_bus(dev, 0, &xilinx_pcie_ops, port, &res); 678 - if (!bus) { 679 - err = -ENOMEM; 680 - goto error; 681 - } 674 + 675 + list_splice_init(&res, &bridge->windows); 676 + bridge->dev.parent = dev; 677 + bridge->sysdata = port; 678 + bridge->busnr = 0; 679 + bridge->ops = &xilinx_pcie_ops; 680 + bridge->map_irq = of_irq_parse_and_map_pci; 681 + bridge->swizzle_irq = pci_common_swizzle; 682 682 683 683 #ifdef CONFIG_PCI_MSI 684 684 xilinx_pcie_msi_chip.dev = dev; 685 - bus->msi = &xilinx_pcie_msi_chip; 685 + bridge->msi = &xilinx_pcie_msi_chip; 686 686 #endif 687 - pci_scan_child_bus(bus); 687 + err = pci_scan_root_bus_bridge(bridge); 688 + if (err < 0) 689 + goto error; 690 + 691 + bus = bridge->bus; 692 + 688 693 pci_assign_unassigned_bus_resources(bus); 689 - #ifndef CONFIG_MICROBLAZE 690 - pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci); 691 - #endif 692 694 list_for_each_entry(child, &bus->children, node) 693 695 pcie_bus_configure_settings(child); 694 696 pci_bus_add_devices(bus); ··· 704 696 return err; 705 697 } 706 698 707 - static struct of_device_id xilinx_pcie_of_match[] = { 699 + static const struct of_device_id xilinx_pcie_of_match[] = { 708 700 { .compatible = "xlnx,axi-pcie-host-1.00.a", }, 709 701 {} 710 702 };
+7 -3
drivers/pci/host/vmd.c
··· 539 539 } 540 540 541 541 /* 542 - * VMD domains start at 0x1000 to not clash with ACPI _SEG domains. 542 + * VMD domains start at 0x10000 to not clash with ACPI _SEG domains. 543 + * Per ACPI r6.0, sec 6.5.6, _SEG returns an integer, of which the lower 544 + * 16 bits are the PCI Segment Group (domain) number. Other bits are 545 + * currently reserved. 543 546 */ 544 547 static int vmd_find_free_domain(void) 545 548 { ··· 713 710 714 711 INIT_LIST_HEAD(&vmd->irqs[i].irq_list); 715 712 err = devm_request_irq(&dev->dev, pci_irq_vector(dev, i), 716 - vmd_irq, 0, "vmd", &vmd->irqs[i]); 713 + vmd_irq, IRQF_NO_THREAD, 714 + "vmd", &vmd->irqs[i]); 717 715 if (err) 718 716 return err; 719 717 } ··· 743 739 struct vmd_dev *vmd = pci_get_drvdata(dev); 744 740 745 741 vmd_detach_resources(vmd); 746 - vmd_cleanup_srcu(vmd); 747 742 sysfs_remove_link(&vmd->dev->dev.kobj, "domain"); 748 743 pci_stop_root_bus(vmd->bus); 749 744 pci_remove_root_bus(vmd->bus); 745 + vmd_cleanup_srcu(vmd); 750 746 vmd_teardown_dma_ops(vmd); 751 747 irq_domain_remove(vmd->irq_domain); 752 748 }
-4
drivers/pci/iov.c
··· 461 461 else 462 462 iov->dev = dev; 463 463 464 - mutex_init(&iov->lock); 465 - 466 464 dev->sriov = iov; 467 465 dev->is_physfn = 1; 468 466 rc = compute_max_vf_buses(dev); ··· 488 490 489 491 if (dev != dev->sriov->dev) 490 492 pci_dev_put(dev->sriov->dev); 491 - 492 - mutex_destroy(&dev->sriov->lock); 493 493 494 494 kfree(dev->sriov); 495 495 dev->sriov = NULL;
+2 -12
drivers/pci/msi.c
··· 1058 1058 1059 1059 for (;;) { 1060 1060 if (affd) { 1061 - nvec = irq_calc_affinity_vectors(nvec, affd); 1061 + nvec = irq_calc_affinity_vectors(minvec, nvec, affd); 1062 1062 if (nvec < minvec) 1063 1063 return -ENOSPC; 1064 1064 } ··· 1097 1097 1098 1098 for (;;) { 1099 1099 if (affd) { 1100 - nvec = irq_calc_affinity_vectors(nvec, affd); 1100 + nvec = irq_calc_affinity_vectors(minvec, nvec, affd); 1101 1101 if (nvec < minvec) 1102 1102 return -ENOSPC; 1103 1103 } ··· 1165 1165 if (flags & PCI_IRQ_AFFINITY) { 1166 1166 if (!affd) 1167 1167 affd = &msi_default_affd; 1168 - 1169 - if (affd->pre_vectors + affd->post_vectors > min_vecs) 1170 - return -EINVAL; 1171 - 1172 - /* 1173 - * If there aren't any vectors left after applying the pre/post 1174 - * vectors don't bother with assigning affinity. 1175 - */ 1176 - if (affd->pre_vectors + affd->post_vectors == min_vecs) 1177 - affd = NULL; 1178 1168 } else { 1179 1169 if (WARN_ON(affd)) 1180 1170 affd = NULL;
+3
drivers/pci/pci-driver.c
··· 415 415 struct pci_dev *pci_dev = to_pci_dev(dev); 416 416 struct pci_driver *drv = to_pci_driver(dev->driver); 417 417 418 + pci_assign_irq(pci_dev); 419 + 418 420 error = pcibios_alloc_irq(pci_dev); 419 421 if (error < 0) 420 422 return error; ··· 969 967 return pci_legacy_resume_early(dev); 970 968 971 969 pci_update_current_state(pci_dev, PCI_D0); 970 + pci_restore_state(pci_dev); 972 971 973 972 if (drv && drv->pm && drv->pm->thaw_noirq) 974 973 error = drv->pm->thaw_noirq(dev);
+5 -2
drivers/pci/pci-label.c
··· 43 43 { 44 44 const struct dmi_device *dmi; 45 45 struct dmi_dev_onboard *donboard; 46 + int domain_nr; 46 47 int bus; 47 48 int devfn; 48 49 50 + domain_nr = pci_domain_nr(pdev->bus); 49 51 bus = pdev->bus->number; 50 52 devfn = pdev->devfn; 51 53 ··· 55 53 while ((dmi = dmi_find_device(DMI_DEV_TYPE_DEV_ONBOARD, 56 54 NULL, dmi)) != NULL) { 57 55 donboard = dmi->device_data; 58 - if (donboard && donboard->bus == bus && 59 - donboard->devfn == devfn) { 56 + if (donboard && donboard->segment == domain_nr && 57 + donboard->bus == bus && 58 + donboard->devfn == devfn) { 60 59 if (buf) { 61 60 if (attribute == SMBIOS_ATTR_INSTANCE_SHOW) 62 61 return scnprintf(buf, PAGE_SIZE,
+197 -7
drivers/pci/pci-sysfs.c
··· 154 154 } 155 155 static DEVICE_ATTR_RO(resource); 156 156 157 + static ssize_t max_link_speed_show(struct device *dev, 158 + struct device_attribute *attr, char *buf) 159 + { 160 + struct pci_dev *pci_dev = to_pci_dev(dev); 161 + u32 linkcap; 162 + int err; 163 + const char *speed; 164 + 165 + err = pcie_capability_read_dword(pci_dev, PCI_EXP_LNKCAP, &linkcap); 166 + if (err) 167 + return -EINVAL; 168 + 169 + switch (linkcap & PCI_EXP_LNKCAP_SLS) { 170 + case PCI_EXP_LNKCAP_SLS_8_0GB: 171 + speed = "8 GT/s"; 172 + break; 173 + case PCI_EXP_LNKCAP_SLS_5_0GB: 174 + speed = "5 GT/s"; 175 + break; 176 + case PCI_EXP_LNKCAP_SLS_2_5GB: 177 + speed = "2.5 GT/s"; 178 + break; 179 + default: 180 + speed = "Unknown speed"; 181 + } 182 + 183 + return sprintf(buf, "%s\n", speed); 184 + } 185 + static DEVICE_ATTR_RO(max_link_speed); 186 + 187 + static ssize_t max_link_width_show(struct device *dev, 188 + struct device_attribute *attr, char *buf) 189 + { 190 + struct pci_dev *pci_dev = to_pci_dev(dev); 191 + u32 linkcap; 192 + int err; 193 + 194 + err = pcie_capability_read_dword(pci_dev, PCI_EXP_LNKCAP, &linkcap); 195 + if (err) 196 + return -EINVAL; 197 + 198 + return sprintf(buf, "%u\n", (linkcap & PCI_EXP_LNKCAP_MLW) >> 4); 199 + } 200 + static DEVICE_ATTR_RO(max_link_width); 201 + 202 + static ssize_t current_link_speed_show(struct device *dev, 203 + struct device_attribute *attr, char *buf) 204 + { 205 + struct pci_dev *pci_dev = to_pci_dev(dev); 206 + u16 linkstat; 207 + int err; 208 + const char *speed; 209 + 210 + err = pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &linkstat); 211 + if (err) 212 + return -EINVAL; 213 + 214 + switch (linkstat & PCI_EXP_LNKSTA_CLS) { 215 + case PCI_EXP_LNKSTA_CLS_8_0GB: 216 + speed = "8 GT/s"; 217 + break; 218 + case PCI_EXP_LNKSTA_CLS_5_0GB: 219 + speed = "5 GT/s"; 220 + break; 221 + case PCI_EXP_LNKSTA_CLS_2_5GB: 222 + speed = "2.5 GT/s"; 223 + break; 224 + default: 225 + speed = "Unknown speed"; 226 + } 227 + 228 + return 
sprintf(buf, "%s\n", speed); 229 + } 230 + static DEVICE_ATTR_RO(current_link_speed); 231 + 232 + static ssize_t current_link_width_show(struct device *dev, 233 + struct device_attribute *attr, char *buf) 234 + { 235 + struct pci_dev *pci_dev = to_pci_dev(dev); 236 + u16 linkstat; 237 + int err; 238 + 239 + err = pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &linkstat); 240 + if (err) 241 + return -EINVAL; 242 + 243 + return sprintf(buf, "%u\n", 244 + (linkstat & PCI_EXP_LNKSTA_NLW) >> PCI_EXP_LNKSTA_NLW_SHIFT); 245 + } 246 + static DEVICE_ATTR_RO(current_link_width); 247 + 248 + static ssize_t secondary_bus_number_show(struct device *dev, 249 + struct device_attribute *attr, 250 + char *buf) 251 + { 252 + struct pci_dev *pci_dev = to_pci_dev(dev); 253 + u8 sec_bus; 254 + int err; 255 + 256 + err = pci_read_config_byte(pci_dev, PCI_SECONDARY_BUS, &sec_bus); 257 + if (err) 258 + return -EINVAL; 259 + 260 + return sprintf(buf, "%u\n", sec_bus); 261 + } 262 + static DEVICE_ATTR_RO(secondary_bus_number); 263 + 264 + static ssize_t subordinate_bus_number_show(struct device *dev, 265 + struct device_attribute *attr, 266 + char *buf) 267 + { 268 + struct pci_dev *pci_dev = to_pci_dev(dev); 269 + u8 sub_bus; 270 + int err; 271 + 272 + err = pci_read_config_byte(pci_dev, PCI_SUBORDINATE_BUS, &sub_bus); 273 + if (err) 274 + return -EINVAL; 275 + 276 + return sprintf(buf, "%u\n", sub_bus); 277 + } 278 + static DEVICE_ATTR_RO(subordinate_bus_number); 279 + 157 280 static ssize_t modalias_show(struct device *dev, struct device_attribute *attr, 158 281 char *buf) 159 282 { ··· 595 472 const char *buf, size_t count) 596 473 { 597 474 struct pci_dev *pdev = to_pci_dev(dev); 598 - struct pci_sriov *iov = pdev->sriov; 599 475 int ret; 600 476 u16 num_vfs; 601 477 ··· 605 483 if (num_vfs > pci_sriov_get_totalvfs(pdev)) 606 484 return -ERANGE; 607 485 608 - mutex_lock(&iov->dev->sriov->lock); 486 + device_lock(&pdev->dev); 609 487 610 488 if (num_vfs == pdev->sriov->num_VFs) 611 
489 goto exit; ··· 640 518 num_vfs, ret); 641 519 642 520 exit: 643 - mutex_unlock(&iov->dev->sriov->lock); 521 + device_unlock(&pdev->dev); 644 522 645 523 if (ret < 0) 646 524 return ret; ··· 751 629 NULL, 752 630 }; 753 631 754 - static const struct attribute_group pci_dev_group = { 755 - .attrs = pci_dev_attrs, 632 + static struct attribute *pci_bridge_attrs[] = { 633 + &dev_attr_subordinate_bus_number.attr, 634 + &dev_attr_secondary_bus_number.attr, 635 + NULL, 756 636 }; 757 637 758 - const struct attribute_group *pci_dev_groups[] = { 759 - &pci_dev_group, 638 + static struct attribute *pcie_dev_attrs[] = { 639 + &dev_attr_current_link_speed.attr, 640 + &dev_attr_current_link_width.attr, 641 + &dev_attr_max_link_width.attr, 642 + &dev_attr_max_link_speed.attr, 760 643 NULL, 761 644 }; 762 645 ··· 1684 1557 return a->mode; 1685 1558 } 1686 1559 1560 + static umode_t pci_bridge_attrs_are_visible(struct kobject *kobj, 1561 + struct attribute *a, int n) 1562 + { 1563 + struct device *dev = kobj_to_dev(kobj); 1564 + struct pci_dev *pdev = to_pci_dev(dev); 1565 + 1566 + if (pci_is_bridge(pdev)) 1567 + return a->mode; 1568 + 1569 + return 0; 1570 + } 1571 + 1572 + static umode_t pcie_dev_attrs_are_visible(struct kobject *kobj, 1573 + struct attribute *a, int n) 1574 + { 1575 + struct device *dev = kobj_to_dev(kobj); 1576 + struct pci_dev *pdev = to_pci_dev(dev); 1577 + 1578 + if (pci_is_pcie(pdev)) 1579 + return a->mode; 1580 + 1581 + return 0; 1582 + } 1583 + 1584 + static const struct attribute_group pci_dev_group = { 1585 + .attrs = pci_dev_attrs, 1586 + }; 1587 + 1588 + const struct attribute_group *pci_dev_groups[] = { 1589 + &pci_dev_group, 1590 + NULL, 1591 + }; 1592 + 1593 + static const struct attribute_group pci_bridge_group = { 1594 + .attrs = pci_bridge_attrs, 1595 + }; 1596 + 1597 + const struct attribute_group *pci_bridge_groups[] = { 1598 + &pci_bridge_group, 1599 + NULL, 1600 + }; 1601 + 1602 + static const struct attribute_group pcie_dev_group = { 
1603 + .attrs = pcie_dev_attrs, 1604 + }; 1605 + 1606 + const struct attribute_group *pcie_dev_groups[] = { 1607 + &pcie_dev_group, 1608 + NULL, 1609 + }; 1610 + 1687 1611 static struct attribute_group pci_dev_hp_attr_group = { 1688 1612 .attrs = pci_dev_hp_attrs, 1689 1613 .is_visible = pci_dev_hp_attrs_are_visible, ··· 1770 1592 .is_visible = pci_dev_attrs_are_visible, 1771 1593 }; 1772 1594 1595 + static struct attribute_group pci_bridge_attr_group = { 1596 + .attrs = pci_bridge_attrs, 1597 + .is_visible = pci_bridge_attrs_are_visible, 1598 + }; 1599 + 1600 + static struct attribute_group pcie_dev_attr_group = { 1601 + .attrs = pcie_dev_attrs, 1602 + .is_visible = pcie_dev_attrs_are_visible, 1603 + }; 1604 + 1773 1605 static const struct attribute_group *pci_dev_attr_groups[] = { 1774 1606 &pci_dev_attr_group, 1775 1607 &pci_dev_hp_attr_group, 1776 1608 #ifdef CONFIG_PCI_IOV 1777 1609 &sriov_dev_attr_group, 1778 1610 #endif 1611 + &pci_bridge_attr_group, 1612 + &pcie_dev_attr_group, 1779 1613 NULL, 1780 1614 }; 1781 1615
+98 -129
drivers/pci/pci.c
··· 28 28 #include <linux/pm_runtime.h> 29 29 #include <linux/pci_hotplug.h> 30 30 #include <linux/vmalloc.h> 31 + #include <linux/pci-ats.h> 31 32 #include <asm/setup.h> 32 33 #include <asm/dma.h> 33 34 #include <linux/aer.h> ··· 456 455 pci_bus_for_each_resource(bus, r, i) { 457 456 if (!r) 458 457 continue; 459 - if (res->start && resource_contains(r, res)) { 458 + if (resource_contains(r, res)) { 460 459 461 460 /* 462 461 * If the window is prefetchable but the BAR is ··· 1167 1166 1168 1167 /* PCI Express register must be restored first */ 1169 1168 pci_restore_pcie_state(dev); 1169 + pci_restore_pasid_state(dev); 1170 + pci_restore_pri_state(dev); 1170 1171 pci_restore_ats_state(dev); 1171 1172 pci_restore_vc_state(dev); 1172 1173 ··· 1969 1966 /** 1970 1967 * pci_target_state - find an appropriate low power state for a given PCI dev 1971 1968 * @dev: PCI device 1969 + * @wakeup: Whether or not wakeup functionality will be enabled for the device. 1972 1970 * 1973 1971 * Use underlying platform code to find a supported low power state for @dev. 1974 1972 * If the platform can't manage @dev, return the deepest state from which it 1975 1973 * can generate wake events, based on any available PME info. 
1976 1974 */ 1977 - static pci_power_t pci_target_state(struct pci_dev *dev) 1975 + static pci_power_t pci_target_state(struct pci_dev *dev, bool wakeup) 1978 1976 { 1979 1977 pci_power_t target_state = PCI_D3hot; 1980 1978 ··· 2012 2008 if (dev->current_state == PCI_D3cold) 2013 2009 target_state = PCI_D3cold; 2014 2010 2015 - if (device_may_wakeup(&dev->dev)) { 2011 + if (wakeup) { 2016 2012 /* 2017 2013 * Find the deepest state from which the device can generate 2018 2014 * wake-up events, make it the target state and enable device ··· 2038 2034 */ 2039 2035 int pci_prepare_to_sleep(struct pci_dev *dev) 2040 2036 { 2041 - pci_power_t target_state = pci_target_state(dev); 2037 + bool wakeup = device_may_wakeup(&dev->dev); 2038 + pci_power_t target_state = pci_target_state(dev, wakeup); 2042 2039 int error; 2043 2040 2044 2041 if (target_state == PCI_POWER_ERROR) 2045 2042 return -EIO; 2046 2043 2047 - pci_enable_wake(dev, target_state, device_may_wakeup(&dev->dev)); 2044 + pci_enable_wake(dev, target_state, wakeup); 2048 2045 2049 2046 error = pci_set_power_state(dev, target_state); 2050 2047 ··· 2078 2073 */ 2079 2074 int pci_finish_runtime_suspend(struct pci_dev *dev) 2080 2075 { 2081 - pci_power_t target_state = pci_target_state(dev); 2076 + pci_power_t target_state; 2082 2077 int error; 2083 2078 2079 + target_state = pci_target_state(dev, device_can_wakeup(&dev->dev)); 2084 2080 if (target_state == PCI_POWER_ERROR) 2085 2081 return -EIO; 2086 2082 ··· 2117 2111 if (!dev->pme_support) 2118 2112 return false; 2119 2113 2120 - /* PME-capable in principle, but not from the intended sleep state */ 2121 - if (!pci_pme_capable(dev, pci_target_state(dev))) 2114 + /* PME-capable in principle, but not from the target power state */ 2115 + if (!pci_pme_capable(dev, pci_target_state(dev, false))) 2122 2116 return false; 2123 2117 2124 2118 while (bus->parent) { ··· 2153 2147 bool pci_dev_keep_suspended(struct pci_dev *pci_dev) 2154 2148 { 2155 2149 struct device *dev = 
&pci_dev->dev; 2150 + bool wakeup = device_may_wakeup(dev); 2156 2151 2157 2152 if (!pm_runtime_suspended(dev) 2158 - || pci_target_state(pci_dev) != pci_dev->current_state 2153 + || pci_target_state(pci_dev, wakeup) != pci_dev->current_state 2159 2154 || platform_pci_need_resume(pci_dev) 2160 2155 || (pci_dev->dev_flags & PCI_DEV_FLAGS_NEEDS_RESUME)) 2161 2156 return false; ··· 2174 2167 spin_lock_irq(&dev->power.lock); 2175 2168 2176 2169 if (pm_runtime_suspended(dev) && pci_dev->current_state < PCI_D3cold && 2177 - !device_may_wakeup(dev)) 2170 + !wakeup) 2178 2171 __pci_pme_active(pci_dev, false); 2179 2172 2180 2173 spin_unlock_irq(&dev->power.lock); ··· 3722 3715 } 3723 3716 EXPORT_SYMBOL_GPL(pci_intx); 3724 3717 3725 - /** 3726 - * pci_intx_mask_supported - probe for INTx masking support 3727 - * @dev: the PCI device to operate on 3728 - * 3729 - * Check if the device dev support INTx masking via the config space 3730 - * command word. 3731 - */ 3732 - bool pci_intx_mask_supported(struct pci_dev *dev) 3733 - { 3734 - bool mask_supported = false; 3735 - u16 orig, new; 3736 - 3737 - if (dev->broken_intx_masking) 3738 - return false; 3739 - 3740 - pci_cfg_access_lock(dev); 3741 - 3742 - pci_read_config_word(dev, PCI_COMMAND, &orig); 3743 - pci_write_config_word(dev, PCI_COMMAND, 3744 - orig ^ PCI_COMMAND_INTX_DISABLE); 3745 - pci_read_config_word(dev, PCI_COMMAND, &new); 3746 - 3747 - /* 3748 - * There's no way to protect against hardware bugs or detect them 3749 - * reliably, but as long as we know what the value should be, let's 3750 - * go ahead and check it. 
3751 - */ 3752 - if ((new ^ orig) & ~PCI_COMMAND_INTX_DISABLE) { 3753 - dev_err(&dev->dev, "Command register changed from 0x%x to 0x%x: driver or hardware bug?\n", 3754 - orig, new); 3755 - } else if ((new ^ orig) & PCI_COMMAND_INTX_DISABLE) { 3756 - mask_supported = true; 3757 - pci_write_config_word(dev, PCI_COMMAND, orig); 3758 - } 3759 - 3760 - pci_cfg_access_unlock(dev); 3761 - return mask_supported; 3762 - } 3763 - EXPORT_SYMBOL_GPL(pci_intx_mask_supported); 3764 - 3765 3718 static bool pci_check_and_set_intx_mask(struct pci_dev *dev, bool mask) 3766 3719 { 3767 3720 struct pci_bus *bus = dev->bus; ··· 3772 3805 * @dev: the PCI device to operate on 3773 3806 * 3774 3807 * Check if the device dev has its INTx line asserted, mask it and 3775 - * return true in that case. False is returned if not interrupt was 3808 + * return true in that case. False is returned if no interrupt was 3776 3809 * pending. 3777 3810 */ 3778 3811 bool pci_check_and_mask_intx(struct pci_dev *dev) ··· 4042 4075 return pci_reset_hotplug_slot(dev->slot->hotplug, probe); 4043 4076 } 4044 4077 4045 - static int __pci_dev_reset(struct pci_dev *dev, int probe) 4046 - { 4047 - int rc; 4048 - 4049 - might_sleep(); 4050 - 4051 - rc = pci_dev_specific_reset(dev, probe); 4052 - if (rc != -ENOTTY) 4053 - goto done; 4054 - 4055 - if (pcie_has_flr(dev)) { 4056 - if (!probe) 4057 - pcie_flr(dev); 4058 - rc = 0; 4059 - goto done; 4060 - } 4061 - 4062 - rc = pci_af_flr(dev, probe); 4063 - if (rc != -ENOTTY) 4064 - goto done; 4065 - 4066 - rc = pci_pm_reset(dev, probe); 4067 - if (rc != -ENOTTY) 4068 - goto done; 4069 - 4070 - rc = pci_dev_reset_slot_function(dev, probe); 4071 - if (rc != -ENOTTY) 4072 - goto done; 4073 - 4074 - rc = pci_parent_bus_reset(dev, probe); 4075 - done: 4076 - return rc; 4077 - } 4078 - 4079 4078 static void pci_dev_lock(struct pci_dev *dev) 4080 4079 { 4081 4080 pci_cfg_access_lock(dev); ··· 4067 4134 pci_cfg_access_unlock(dev); 4068 4135 } 4069 4136 4070 - /** 4071 - * 
pci_reset_notify - notify device driver of reset 4072 - * @dev: device to be notified of reset 4073 - * @prepare: 'true' if device is about to be reset; 'false' if reset attempt 4074 - * completed 4075 - * 4076 - * Must be called prior to device access being disabled and after device 4077 - * access is restored. 4078 - */ 4079 - static void pci_reset_notify(struct pci_dev *dev, bool prepare) 4137 + static void pci_dev_save_and_disable(struct pci_dev *dev) 4080 4138 { 4081 4139 const struct pci_error_handlers *err_handler = 4082 4140 dev->driver ? dev->driver->err_handler : NULL; 4083 - if (err_handler && err_handler->reset_notify) 4084 - err_handler->reset_notify(dev, prepare); 4085 - } 4086 4141 4087 - static void pci_dev_save_and_disable(struct pci_dev *dev) 4088 - { 4089 - pci_reset_notify(dev, true); 4142 + /* 4143 + * dev->driver->err_handler->reset_prepare() is protected against 4144 + * races with ->remove() by the device lock, which must be held by 4145 + * the caller. 4146 + */ 4147 + if (err_handler && err_handler->reset_prepare) 4148 + err_handler->reset_prepare(dev); 4090 4149 4091 4150 /* 4092 4151 * Wake-up device prior to save. PM registers default to D0 after ··· 4100 4175 4101 4176 static void pci_dev_restore(struct pci_dev *dev) 4102 4177 { 4178 + const struct pci_error_handlers *err_handler = 4179 + dev->driver ? dev->driver->err_handler : NULL; 4180 + 4103 4181 pci_restore_state(dev); 4104 - pci_reset_notify(dev, false); 4105 - } 4106 4182 4107 - static int pci_dev_reset(struct pci_dev *dev, int probe) 4108 - { 4109 - int rc; 4110 - 4111 - if (!probe) 4112 - pci_dev_lock(dev); 4113 - 4114 - rc = __pci_dev_reset(dev, probe); 4115 - 4116 - if (!probe) 4117 - pci_dev_unlock(dev); 4118 - 4119 - return rc; 4183 + /* 4184 + * dev->driver->err_handler->reset_done() is protected against 4185 + * races with ->remove() by the device lock, which must be held by 4186 + * the caller. 
4187 + */ 4188 + if (err_handler && err_handler->reset_done) 4189 + err_handler->reset_done(dev); 4120 4190 } 4121 4191 4122 4192 /** ··· 4133 4213 */ 4134 4214 int __pci_reset_function(struct pci_dev *dev) 4135 4215 { 4136 - return pci_dev_reset(dev, 0); 4216 + int ret; 4217 + 4218 + pci_dev_lock(dev); 4219 + ret = __pci_reset_function_locked(dev); 4220 + pci_dev_unlock(dev); 4221 + 4222 + return ret; 4137 4223 } 4138 4224 EXPORT_SYMBOL_GPL(__pci_reset_function); 4139 4225 ··· 4164 4238 */ 4165 4239 int __pci_reset_function_locked(struct pci_dev *dev) 4166 4240 { 4167 - return __pci_dev_reset(dev, 0); 4241 + int rc; 4242 + 4243 + might_sleep(); 4244 + 4245 + rc = pci_dev_specific_reset(dev, 0); 4246 + if (rc != -ENOTTY) 4247 + return rc; 4248 + if (pcie_has_flr(dev)) { 4249 + pcie_flr(dev); 4250 + return 0; 4251 + } 4252 + rc = pci_af_flr(dev, 0); 4253 + if (rc != -ENOTTY) 4254 + return rc; 4255 + rc = pci_pm_reset(dev, 0); 4256 + if (rc != -ENOTTY) 4257 + return rc; 4258 + rc = pci_dev_reset_slot_function(dev, 0); 4259 + if (rc != -ENOTTY) 4260 + return rc; 4261 + return pci_parent_bus_reset(dev, 0); 4168 4262 } 4169 4263 EXPORT_SYMBOL_GPL(__pci_reset_function_locked); 4170 4264 ··· 4201 4255 */ 4202 4256 int pci_probe_reset_function(struct pci_dev *dev) 4203 4257 { 4204 - return pci_dev_reset(dev, 1); 4258 + int rc; 4259 + 4260 + might_sleep(); 4261 + 4262 + rc = pci_dev_specific_reset(dev, 1); 4263 + if (rc != -ENOTTY) 4264 + return rc; 4265 + if (pcie_has_flr(dev)) 4266 + return 0; 4267 + rc = pci_af_flr(dev, 1); 4268 + if (rc != -ENOTTY) 4269 + return rc; 4270 + rc = pci_pm_reset(dev, 1); 4271 + if (rc != -ENOTTY) 4272 + return rc; 4273 + rc = pci_dev_reset_slot_function(dev, 1); 4274 + if (rc != -ENOTTY) 4275 + return rc; 4276 + 4277 + return pci_parent_bus_reset(dev, 1); 4205 4278 } 4206 4279 4207 4280 /** ··· 4243 4278 { 4244 4279 int rc; 4245 4280 4246 - rc = pci_dev_reset(dev, 1); 4281 + rc = pci_probe_reset_function(dev); 4247 4282 if (rc) 4248 4283 
return rc; 4249 4284 4285 + pci_dev_lock(dev); 4250 4286 pci_dev_save_and_disable(dev); 4251 4287 4252 - rc = pci_dev_reset(dev, 0); 4288 + rc = __pci_reset_function_locked(dev); 4253 4289 4254 4290 pci_dev_restore(dev); 4291 + pci_dev_unlock(dev); 4255 4292 4256 4293 return rc; 4257 4294 } ··· 4269 4302 { 4270 4303 int rc; 4271 4304 4272 - rc = pci_dev_reset(dev, 1); 4305 + rc = pci_probe_reset_function(dev); 4273 4306 if (rc) 4274 4307 return rc; 4275 4308 4276 - pci_dev_save_and_disable(dev); 4309 + if (!pci_dev_trylock(dev)) 4310 + return -EAGAIN; 4277 4311 4278 - if (pci_dev_trylock(dev)) { 4279 - rc = __pci_dev_reset(dev, 0); 4280 - pci_dev_unlock(dev); 4281 - } else 4282 - rc = -EAGAIN; 4312 + pci_dev_save_and_disable(dev); 4313 + rc = __pci_reset_function_locked(dev); 4314 + pci_dev_unlock(dev); 4283 4315 4284 4316 pci_dev_restore(dev); 4285 - 4286 4317 return rc; 4287 4318 } 4288 4319 EXPORT_SYMBOL_GPL(pci_try_reset_function); ··· 4430 4465 struct pci_dev *dev; 4431 4466 4432 4467 list_for_each_entry(dev, &bus->devices, bus_list) { 4468 + pci_dev_lock(dev); 4433 4469 pci_dev_save_and_disable(dev); 4470 + pci_dev_unlock(dev); 4434 4471 if (dev->subordinate) 4435 4472 pci_bus_save_and_disable(dev->subordinate); 4436 4473 } ··· 4447 4480 struct pci_dev *dev; 4448 4481 4449 4482 list_for_each_entry(dev, &bus->devices, bus_list) { 4483 + pci_dev_lock(dev); 4450 4484 pci_dev_restore(dev); 4485 + pci_dev_unlock(dev); 4451 4486 if (dev->subordinate) 4452 4487 pci_bus_restore(dev->subordinate); 4453 4488 }
-1
drivers/pci/pci.h
··· 267 267 u16 driver_max_VFs; /* max num VFs driver supports */ 268 268 struct pci_dev *dev; /* lowest numbered PF */ 269 269 struct pci_dev *self; /* this PF */ 270 - struct mutex lock; /* lock for setting sriov_numvfs in sysfs */ 271 270 resource_size_t barsz[PCI_SRIOV_NUM_BARS]; /* VF BAR size */ 272 271 bool drivers_autoprobe; /* auto probing of VFs by driver */ 273 272 };
+2 -2
drivers/pci/pcie/pcie-dpc.c
··· 92 92 pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_STATUS, &status); 93 93 pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_SOURCE_ID, 94 94 &source); 95 - if (!status) 95 + if (!status || status == (u16)(~0)) 96 96 return IRQ_NONE; 97 97 98 98 dev_info(&dpc->dev->device, "DPC containment event, status:%#06x source:%#06x\n", ··· 144 144 145 145 dpc->rp = (cap & PCI_EXP_DPC_CAP_RP_EXT); 146 146 147 - ctl |= PCI_EXP_DPC_CTL_EN_NONFATAL | PCI_EXP_DPC_CTL_INT_EN; 147 + ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_NONFATAL | PCI_EXP_DPC_CTL_INT_EN; 148 148 pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl); 149 149 150 150 dev_info(&dev->device, "DPC error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n",
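The `status == (u16)(~0)` test added above guards against a device that has gone away: on PCIe, a config read from an absent device completes with all ones. As a plain-C sketch of that predicate (userspace illustration only; `dpc_event_pending` is a hypothetical name, not a kernel function):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Mirror of the check in dpc_irq(): no status bits set means no DPC
 * event, and an all-ones read means the device is no longer present,
 * so there is nothing to handle in either case.
 */
static bool dpc_event_pending(uint16_t status)
{
	return status != 0 && status != (uint16_t)~0;
}
```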
+4 -3
drivers/pci/pcie/portdrv.h
··· 13 13 14 14 #define PCIE_PORT_DEVICE_MAXSERVICES 5 15 15 /* 16 - * According to the PCI Express Base Specification 2.0, the indices of 17 - * the MSI-X table entries used by port services must not exceed 31 16 + * The PCIe Capability Interrupt Message Number (PCIe r3.1, sec 7.8.2) must 17 + * be one of the first 32 MSI-X entries. Per PCI r3.0, sec 6.8.3.1, MSI 18 + * supports a maximum of 32 vectors per function. 18 19 */ 19 - #define PCIE_PORT_MAX_MSIX_ENTRIES 32 20 + #define PCIE_PORT_MAX_MSI_ENTRIES 32 20 21 21 22 #define get_descriptor_id(type, service) (((type - 4) << 8) | service) 22 23
+72 -32
drivers/pci/pcie/portdrv_core.c
··· 44 44 } 45 45 46 46 /** 47 - * pcie_port_enable_msix - try to set up MSI-X as interrupt mode for given port 47 + * pcie_port_enable_irq_vec - try to set up MSI-X or MSI as interrupt mode 48 + * for given port 48 49 * @dev: PCI Express port to handle 49 50 * @irqs: Array of interrupt vectors to populate 50 51 * @mask: Bitmask of port capabilities returned by get_port_device_capability() 51 52 * 52 53 * Return value: 0 on success, error code on failure 53 54 */ 54 - static int pcie_port_enable_msix(struct pci_dev *dev, int *irqs, int mask) 55 + static int pcie_port_enable_irq_vec(struct pci_dev *dev, int *irqs, int mask) 55 56 { 56 57 int nr_entries, entry, nvec = 0; 57 58 ··· 62 61 * equal to the number of entries this port actually uses, we'll happily 63 62 * go through without any tricks. 64 63 */ 65 - nr_entries = pci_alloc_irq_vectors(dev, 1, PCIE_PORT_MAX_MSIX_ENTRIES, 66 - PCI_IRQ_MSIX); 64 + nr_entries = pci_alloc_irq_vectors(dev, 1, PCIE_PORT_MAX_MSI_ENTRIES, 65 + PCI_IRQ_MSIX | PCI_IRQ_MSI); 67 66 if (nr_entries < 0) 68 67 return nr_entries; 69 68 ··· 71 70 u16 reg16; 72 71 73 72 /* 74 - * The code below follows the PCI Express Base Specification 2.0 75 - * stating in Section 6.1.6 that "PME and Hot-Plug Event 76 - * interrupts (when both are implemented) always share the same 77 - * MSI or MSI-X vector, as indicated by the Interrupt Message 78 - * Number field in the PCI Express Capabilities register", where 79 - * according to Section 7.8.2 of the specification "For MSI-X, 80 - * the value in this field indicates which MSI-X Table entry is 81 - * used to generate the interrupt message." 73 + * Per PCIe r3.1, sec 6.1.6, "PME and Hot-Plug Event 74 + * interrupts (when both are implemented) always share the 75 + * same MSI or MSI-X vector, as indicated by the Interrupt 76 + * Message Number field in the PCI Express Capabilities 77 + * register". 
78 + * 79 + * Per sec 7.8.2, "For MSI, the [Interrupt Message Number] 80 + * indicates the offset between the base Message Data and 81 + * the interrupt message that is generated." 82 + * 83 + * "For MSI-X, the [Interrupt Message Number] indicates 84 + * which MSI-X Table entry is used to generate the 85 + * interrupt message." 82 86 */ 83 87 pcie_capability_read_word(dev, PCI_EXP_FLAGS, &reg16); 84 88 entry = (reg16 & PCI_EXP_FLAGS_IRQ) >> 9; ··· 100 94 u32 reg32, pos; 101 95 102 96 /* 103 - * The code below follows Section 7.10.10 of the PCI Express 104 - * Base Specification 2.0 stating that bits 31-27 of the Root 105 - * Error Status Register contain a value indicating which of the 106 - * MSI/MSI-X vectors assigned to the port is going to be used 107 - * for AER, where "For MSI-X, the value in this register 108 - * indicates which MSI-X Table entry is used to generate the 109 - * interrupt message." 97 + * Per PCIe r3.1, sec 7.10.10, the Advanced Error Interrupt 98 + * Message Number in the Root Error Status register 99 + * indicates which MSI/MSI-X vector is used for AER. 100 + * 101 + * "For MSI, the [Advanced Error Interrupt Message Number] 102 + * indicates the offset between the base Message Data and 103 + * the interrupt message that is generated." 104 + * 105 + * "For MSI-X, the [Advanced Error Interrupt Message 106 + * Number] indicates which MSI-X Table entry is used to 107 + * generate the interrupt message." 
110 108 */ 111 109 pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); 112 110 pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &reg32); ··· 119 109 goto out_free_irqs; 120 110 121 111 irqs[PCIE_PORT_SERVICE_AER_SHIFT] = pci_irq_vector(dev, entry); 112 + 113 + nvec = max(nvec, entry + 1); 114 + } 115 + 116 + if (mask & PCIE_PORT_SERVICE_DPC) { 117 + u16 reg16, pos; 118 + 119 + /* 120 + * Per PCIe r4.0 (v0.9), sec 7.9.15.2, the DPC Interrupt 121 + * Message Number in the DPC Capability register indicates 122 + * which MSI/MSI-X vector is used for DPC. 123 + * 124 + * "For MSI, the [DPC Interrupt Message Number] indicates 125 + * the offset between the base Message Data and the 126 + * interrupt message that is generated." 127 + * 128 + * "For MSI-X, the [DPC Interrupt Message Number] indicates 129 + * which MSI-X Table entry is used to generate the 130 + * interrupt message." 131 + */ 132 + pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_DPC); 133 + pci_read_config_word(dev, pos + PCI_EXP_DPC_CAP, &reg16); 134 + entry = reg16 & 0x1f; 135 + if (entry >= nr_entries) 136 + goto out_free_irqs; 137 + 138 + irqs[PCIE_PORT_SERVICE_DPC_SHIFT] = pci_irq_vector(dev, entry); 122 139 123 140 nvec = max(nvec, entry + 1); 124 141 } ··· 161 124 162 125 /* Now allocate the MSI-X vectors for real */ 163 126 nr_entries = pci_alloc_irq_vectors(dev, nvec, nvec, 164 - PCI_IRQ_MSIX); 127 + PCI_IRQ_MSIX | PCI_IRQ_MSI); 165 128 if (nr_entries < 0) 166 129 return nr_entries; 167 130 } ··· 183 146 */ 184 147 static int pcie_init_service_irqs(struct pci_dev *dev, int *irqs, int mask) 185 148 { 186 - unsigned flags = PCI_IRQ_LEGACY | PCI_IRQ_MSI; 187 149 int ret, i; 188 150 189 151 for (i = 0; i < PCIE_PORT_DEVICE_MAXSERVICES; i++) 190 152 irqs[i] = -1; 191 153 192 154 /* 193 - * If MSI cannot be used for PCIe PME or hotplug, we have to use 194 - * INTx or other interrupts, e.g. system shared interrupt. 
155 + * If we support PME or hotplug, but we can't use MSI/MSI-X for 156 + * them, we have to fall back to INTx or other interrupts, e.g., a 157 + * system shared interrupt. 195 158 */ 196 - if (((mask & PCIE_PORT_SERVICE_PME) && pcie_pme_no_msi()) || 197 - ((mask & PCIE_PORT_SERVICE_HP) && pciehp_no_msi())) { 198 - flags &= ~PCI_IRQ_MSI; 199 - } else { 200 - /* Try to use MSI-X if supported */ 201 - if (!pcie_port_enable_msix(dev, irqs, mask)) 202 - return 0; 203 - } 159 + if ((mask & PCIE_PORT_SERVICE_PME) && pcie_pme_no_msi()) 160 + goto legacy_irq; 204 161 205 - ret = pci_alloc_irq_vectors(dev, 1, 1, flags); 162 + if ((mask & PCIE_PORT_SERVICE_HP) && pciehp_no_msi()) 163 + goto legacy_irq; 164 + 165 + /* Try to use MSI-X or MSI if supported */ 166 + if (pcie_port_enable_irq_vec(dev, irqs, mask) == 0) 167 + return 0; 168 + 169 + legacy_irq: 170 + /* fall back to legacy IRQ */ 171 + ret = pci_alloc_irq_vectors(dev, 1, 1, PCI_IRQ_LEGACY); 206 172 if (ret < 0) 207 173 return -ENODEV; 208 174
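The Interrupt Message Number fields cited in the comments above are plain bit-fields, and the `nvec = max(nvec, entry + 1)` bookkeeping ensures the final vector allocation covers the highest message number any port service uses. A userspace sketch (`PCI_EXP_FLAGS_IRQ` is bits 13:9, mask `0x3e00`; the helper names here are illustrative, not kernel APIs):

```c
#include <stdint.h>

/* Interrupt Message Number from the PCIe Capabilities register,
 * bits 13:9 (PCI_EXP_FLAGS_IRQ). */
static int pcie_port_irq_msg_num(uint16_t exp_flags)
{
	return (exp_flags & 0x3e00) >> 9;
}

/* DPC Interrupt Message Number from the DPC Capability register,
 * bits 4:0, as read in the new PCIE_PORT_SERVICE_DPC branch. */
static int dpc_irq_msg_num(uint16_t dpc_cap)
{
	return dpc_cap & 0x1f;
}

/* Track the number of vectors needed: one past the highest entry. */
static int update_nvec(int nvec, int entry)
{
	return entry + 1 > nvec ? entry + 1 : nvec;
}
```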
+111 -29
drivers/pci/probe.c
··· 510 510 return b; 511 511 } 512 512 513 - static void pci_release_host_bridge_dev(struct device *dev) 513 + static void devm_pci_release_host_bridge_dev(struct device *dev) 514 514 { 515 515 struct pci_host_bridge *bridge = to_pci_host_bridge(dev); 516 516 517 517 if (bridge->release_fn) 518 518 bridge->release_fn(bridge); 519 + } 519 520 520 - pci_free_resource_list(&bridge->windows); 521 - 522 - kfree(bridge); 521 + static void pci_release_host_bridge_dev(struct device *dev) 522 + { 523 + devm_pci_release_host_bridge_dev(dev); 524 + pci_free_host_bridge(to_pci_host_bridge(dev)); 523 525 } 524 526 525 527 struct pci_host_bridge *pci_alloc_host_bridge(size_t priv) ··· 533 531 return NULL; 534 532 535 533 INIT_LIST_HEAD(&bridge->windows); 534 + bridge->dev.release = pci_release_host_bridge_dev; 536 535 537 536 return bridge; 538 537 } 539 538 EXPORT_SYMBOL(pci_alloc_host_bridge); 539 + 540 + struct pci_host_bridge *devm_pci_alloc_host_bridge(struct device *dev, 541 + size_t priv) 542 + { 543 + struct pci_host_bridge *bridge; 544 + 545 + bridge = devm_kzalloc(dev, sizeof(*bridge) + priv, GFP_KERNEL); 546 + if (!bridge) 547 + return NULL; 548 + 549 + INIT_LIST_HEAD(&bridge->windows); 550 + bridge->dev.release = devm_pci_release_host_bridge_dev; 551 + 552 + return bridge; 553 + } 554 + EXPORT_SYMBOL(devm_pci_alloc_host_bridge); 555 + 556 + void pci_free_host_bridge(struct pci_host_bridge *bridge) 557 + { 558 + pci_free_resource_list(&bridge->windows); 559 + 560 + kfree(bridge); 561 + } 562 + EXPORT_SYMBOL(pci_free_host_bridge); 540 563 541 564 static const unsigned char pcix_bus_speed[] = { 542 565 PCI_SPEED_UNKNOWN, /* 0 */ ··· 746 719 dev_set_msi_domain(&bus->dev, d); 747 720 } 748 721 749 - int pci_register_host_bridge(struct pci_host_bridge *bridge) 722 + static int pci_register_host_bridge(struct pci_host_bridge *bridge) 750 723 { 751 724 struct device *parent = bridge->dev.parent; 752 725 struct resource_entry *window, *n; ··· 861 834 kfree(bus); 862 835 
return err; 863 836 } 864 - EXPORT_SYMBOL(pci_register_host_bridge); 865 837 866 838 static struct pci_bus *pci_alloc_child_bus(struct pci_bus *parent, 867 839 struct pci_dev *bridge, int busnr) ··· 1356 1330 } 1357 1331 1358 1332 /** 1333 + * pci_intx_mask_broken - test PCI_COMMAND_INTX_DISABLE writability 1334 + * @dev: PCI device 1335 + * 1336 + * Test whether PCI_COMMAND_INTX_DISABLE is writable for @dev. Check this 1337 + * at enumeration-time to avoid modifying PCI_COMMAND at run-time. 1338 + */ 1339 + static int pci_intx_mask_broken(struct pci_dev *dev) 1340 + { 1341 + u16 orig, toggle, new; 1342 + 1343 + pci_read_config_word(dev, PCI_COMMAND, &orig); 1344 + toggle = orig ^ PCI_COMMAND_INTX_DISABLE; 1345 + pci_write_config_word(dev, PCI_COMMAND, toggle); 1346 + pci_read_config_word(dev, PCI_COMMAND, &new); 1347 + 1348 + pci_write_config_word(dev, PCI_COMMAND, orig); 1349 + 1350 + /* 1351 + * PCI_COMMAND_INTX_DISABLE was reserved and read-only prior to PCI 1352 + * r2.3, so strictly speaking, a device is not *broken* if it's not 1353 + * writable. But we'll live with the misnomer for now. 
1354 + */ 1355 + if (new != toggle) 1356 + return 1; 1357 + return 0; 1358 + } 1359 + 1360 + /** 1359 1361 * pci_setup_device - fill in class and map information of a device 1360 1362 * @dev: the device structure to fill 1361 1363 * ··· 1452 1398 pci_write_config_word(dev, PCI_COMMAND, cmd); 1453 1399 } 1454 1400 } 1401 + 1402 + dev->broken_intx_masking = pci_intx_mask_broken(dev); 1455 1403 1456 1404 switch (dev->hdr_type) { /* header type */ 1457 1405 case PCI_HEADER_TYPE_NORMAL: /* standard header */ ··· 1730 1674 /* Initialize Advanced Error Capabilities and Control Register */ 1731 1675 pci_read_config_dword(dev, pos + PCI_ERR_CAP, &reg32); 1732 1676 reg32 = (reg32 & hpp->adv_err_cap_and) | hpp->adv_err_cap_or; 1677 + /* Don't enable ECRC generation or checking if unsupported */ 1678 + if (!(reg32 & PCI_ERR_CAP_ECRC_GENC)) 1679 + reg32 &= ~PCI_ERR_CAP_ECRC_GENE; 1680 + if (!(reg32 & PCI_ERR_CAP_ECRC_CHKC)) 1681 + reg32 &= ~PCI_ERR_CAP_ECRC_CHKE; 1733 1682 pci_write_config_dword(dev, pos + PCI_ERR_CAP, reg32); 1734 1683 1735 1684 /* ··· 2359 2298 { 2360 2299 } 2361 2300 2362 - static struct pci_bus *pci_create_root_bus_msi(struct device *parent, 2363 - int bus, struct pci_ops *ops, void *sysdata, 2364 - struct list_head *resources, struct msi_controller *msi) 2301 + struct pci_bus *pci_create_root_bus(struct device *parent, int bus, 2302 + struct pci_ops *ops, void *sysdata, struct list_head *resources) 2365 2303 { 2366 2304 int error; 2367 2305 struct pci_host_bridge *bridge; ··· 2370 2310 return NULL; 2371 2311 2372 2312 bridge->dev.parent = parent; 2373 - bridge->dev.release = pci_release_host_bridge_dev; 2374 2313 2375 2314 list_splice_init(resources, &bridge->windows); 2376 2315 bridge->sysdata = sysdata; 2377 2316 bridge->busnr = bus; 2378 2317 bridge->ops = ops; 2379 - bridge->msi = msi; 2380 2318 2381 2319 error = pci_register_host_bridge(bridge); 2382 2320 if (error < 0) ··· 2385 2327 err_out: 2386 2328 kfree(bridge); 2387 2329 return NULL; 2388 - } 
2389 - 2390 - struct pci_bus *pci_create_root_bus(struct device *parent, int bus, 2391 - struct pci_ops *ops, void *sysdata, struct list_head *resources) 2392 - { 2393 - return pci_create_root_bus_msi(parent, bus, ops, sysdata, resources, 2394 - NULL); 2395 2330 } 2396 2331 EXPORT_SYMBOL_GPL(pci_create_root_bus); 2397 2332 ··· 2451 2400 res, ret ? "can not be" : "is"); 2452 2401 } 2453 2402 2454 - struct pci_bus *pci_scan_root_bus_msi(struct device *parent, int bus, 2455 - struct pci_ops *ops, void *sysdata, 2456 - struct list_head *resources, struct msi_controller *msi) 2403 + int pci_scan_root_bus_bridge(struct pci_host_bridge *bridge) 2404 + { 2405 + struct resource_entry *window; 2406 + bool found = false; 2407 + struct pci_bus *b; 2408 + int max, bus, ret; 2409 + 2410 + if (!bridge) 2411 + return -EINVAL; 2412 + 2413 + resource_list_for_each_entry(window, &bridge->windows) 2414 + if (window->res->flags & IORESOURCE_BUS) { 2415 + found = true; 2416 + break; 2417 + } 2418 + 2419 + ret = pci_register_host_bridge(bridge); 2420 + if (ret < 0) 2421 + return ret; 2422 + 2423 + b = bridge->bus; 2424 + bus = bridge->busnr; 2425 + 2426 + if (!found) { 2427 + dev_info(&b->dev, 2428 + "No busn resource found for root bus, will use [bus %02x-ff]\n", 2429 + bus); 2430 + pci_bus_insert_busn_res(b, bus, 255); 2431 + } 2432 + 2433 + max = pci_scan_child_bus(b); 2434 + 2435 + if (!found) 2436 + pci_bus_update_busn_res_end(b, max); 2437 + 2438 + return 0; 2439 + } 2440 + EXPORT_SYMBOL(pci_scan_root_bus_bridge); 2441 + 2442 + struct pci_bus *pci_scan_root_bus(struct device *parent, int bus, 2443 + struct pci_ops *ops, void *sysdata, struct list_head *resources) 2457 2444 { 2458 2445 struct resource_entry *window; 2459 2446 bool found = false; ··· 2504 2415 break; 2505 2416 } 2506 2417 2507 - b = pci_create_root_bus_msi(parent, bus, ops, sysdata, resources, msi); 2418 + b = pci_create_root_bus(parent, bus, ops, sysdata, resources); 2508 2419 if (!b) 2509 2420 return NULL; 2510 
2421 ··· 2521 2432 pci_bus_update_busn_res_end(b, max); 2522 2433 2523 2434 return b; 2524 - } 2525 - 2526 - struct pci_bus *pci_scan_root_bus(struct device *parent, int bus, 2527 - struct pci_ops *ops, void *sysdata, struct list_head *resources) 2528 - { 2529 - return pci_scan_root_bus_msi(parent, bus, ops, sysdata, resources, 2530 - NULL); 2531 2435 } 2532 2436 EXPORT_SYMBOL(pci_scan_root_bus); 2533 2437
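The new `pci_intx_mask_broken()` enumeration-time check boils down to a toggle-and-readback comparison on `PCI_COMMAND_INTX_DISABLE`. Modeled as a pure function over the original and read-back Command register values (userspace sketch; the read-back value is passed as a parameter here instead of doing real config accesses):

```c
#include <stdbool.h>
#include <stdint.h>

#define PCI_COMMAND_INTX_DISABLE 0x400	/* same value as the kernel header */

/*
 * 'readback' stands in for what a config read would return after
 * writing the toggled value.  If the INTx Disable bit did not stick,
 * masking is considered broken (pre-PCI-r2.3 devices treat it as
 * reserved/read-only, hence the "misnomer" note in the hunk above).
 */
static bool intx_mask_broken(uint16_t orig, uint16_t readback)
{
	uint16_t toggled = orig ^ PCI_COMMAND_INTX_DISABLE;

	return readback != toggled;
}
```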
+18 -1
drivers/pci/quirks.c
··· 304 304 { 305 305 int i; 306 306 307 - for (i = 0; i < PCI_STD_RESOURCE_END; i++) { 307 + for (i = 0; i <= PCI_STD_RESOURCE_END; i++) { 308 308 struct resource *r = &dev->resource[i]; 309 309 310 310 if (r->flags & IORESOURCE_MEM && resource_size(r) < PAGE_SIZE) { ··· 1683 1683 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2609, quirk_intel_pcie_pm); 1684 1684 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x260a, quirk_intel_pcie_pm); 1685 1685 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x260b, quirk_intel_pcie_pm); 1686 + 1687 + static void quirk_radeon_pm(struct pci_dev *dev) 1688 + { 1689 + if (dev->subsystem_vendor == PCI_VENDOR_ID_APPLE && 1690 + dev->subsystem_device == 0x00e2) { 1691 + if (dev->d3_delay < 20) { 1692 + dev->d3_delay = 20; 1693 + dev_info(&dev->dev, "extending delay after power-on from D3 to %d msec\n", 1694 + dev->d3_delay); 1695 + } 1696 + } 1697 + } 1698 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6741, quirk_radeon_pm); 1686 1699 1687 1700 #ifdef CONFIG_X86_IO_APIC 1688 1701 static int dmi_disable_ioapicreroute(const struct dmi_system_id *d) ··· 3248 3235 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1588, 3249 3236 quirk_broken_intx_masking); 3250 3237 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1589, 3238 + quirk_broken_intx_masking); 3239 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x158a, 3240 + quirk_broken_intx_masking); 3241 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x158b, 3251 3242 quirk_broken_intx_masking); 3252 3243 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x37d0, 3253 3244 quirk_broken_intx_masking);
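The Radeon poweroff quirk above only ever lengthens `d3_delay`, never shortens it. The same clamp as a standalone helper (illustrative name, not a kernel function):

```c
/* Extend the post-D3 power-on delay to at least min_ms, matching the
 * "if (dev->d3_delay < 20) dev->d3_delay = 20;" logic in the quirk. */
static int extend_d3_delay(int d3_delay, int min_ms)
{
	return d3_delay < min_ms ? min_ms : d3_delay;
}
```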
+35 -10
drivers/pci/setup-irq.c
··· 15 15 #include <linux/errno.h> 16 16 #include <linux/ioport.h> 17 17 #include <linux/cache.h> 18 + #include "pci.h" 18 19 19 20 void __weak pcibios_update_irq(struct pci_dev *dev, int irq) 20 21 { ··· 23 22 pci_write_config_byte(dev, PCI_INTERRUPT_LINE, irq); 24 23 } 25 24 26 - static void pdev_fixup_irq(struct pci_dev *dev, 27 - u8 (*swizzle)(struct pci_dev *, u8 *), 28 - int (*map_irq)(const struct pci_dev *, u8, u8)) 25 + void pci_assign_irq(struct pci_dev *dev) 29 26 { 30 - u8 pin, slot; 27 + u8 pin; 28 + u8 slot = -1; 31 29 int irq = 0; 30 + struct pci_host_bridge *hbrg = pci_find_host_bridge(dev->bus); 31 + 32 + if (!(hbrg->map_irq)) { 33 + dev_dbg(&dev->dev, "runtime IRQ mapping not provided by arch\n"); 34 + return; 35 + } 32 36 33 37 /* If this device is not on the primary bus, we need to figure out 34 38 which interrupt pin it will come in on. We know which slot it ··· 46 40 if (pin > 4) 47 41 pin = 1; 48 42 49 - if (pin != 0) { 43 + if (pin) { 50 44 /* Follow the chain of bridges, swizzling as we go. */ 51 - slot = (*swizzle)(dev, &pin); 45 + if (hbrg->swizzle_irq) 46 + slot = (*(hbrg->swizzle_irq))(dev, &pin); 52 47 53 - irq = (*map_irq)(dev, slot, pin); 48 + /* 49 + * If a swizzling function is not used map_irq must 50 + * ignore slot 51 + */ 52 + irq = (*(hbrg->map_irq))(dev, slot, pin); 54 53 if (irq == -1) 55 54 irq = 0; 56 55 } 57 56 dev->irq = irq; 58 57 59 - dev_dbg(&dev->dev, "fixup irq: got %d\n", dev->irq); 58 + dev_dbg(&dev->dev, "assign IRQ: got %d\n", dev->irq); 60 59 61 60 /* Always tell the device, so the driver knows what is 62 61 the real IRQ to use; the device does not use it. */ ··· 71 60 void pci_fixup_irqs(u8 (*swizzle)(struct pci_dev *, u8 *), 72 61 int (*map_irq)(const struct pci_dev *, u8, u8)) 73 62 { 63 + /* 64 + * Implement pci_fixup_irqs() through pci_assign_irq(). 
65 + * This code should be removed eventually, it is a wrapper 66 + * around pci_assign_irq() interface to keep current 67 + * pci_fixup_irqs() behaviour unchanged on architecture 68 + * code still relying on its interface. 69 + */ 74 70 struct pci_dev *dev = NULL; 71 + struct pci_host_bridge *hbrg = NULL; 75 72 76 - for_each_pci_dev(dev) 77 - pdev_fixup_irq(dev, swizzle, map_irq); 73 + for_each_pci_dev(dev) { 74 + hbrg = pci_find_host_bridge(dev->bus); 75 + hbrg->swizzle_irq = swizzle; 76 + hbrg->map_irq = map_irq; 77 + pci_assign_irq(dev); 78 + hbrg->swizzle_irq = NULL; 79 + hbrg->map_irq = NULL; 80 + } 78 81 79 82 EXPORT_SYMBOL_GPL(pci_fixup_irqs);
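The `swizzle_irq` callback installed above typically implements the conventional PCI-to-PCI bridge swizzle, in which each bridge rotates INTA-INTD by the device's slot number (cf. the kernel's `pci_swizzle_interrupt_pin()`). A sketch, assuming pins are numbered 1-4 with INTA == 1:

```c
#include <stdint.h>

/* Rotate the interrupt pin by the device's slot number, modulo the
 * four INTx lines, keeping the 1-based pin numbering. */
static uint8_t swizzle_pin(uint8_t slot, uint8_t pin)
{
	return (((pin - 1) + slot) % 4) + 1;
}
```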
+38 -2
drivers/pci/switch/switchtec.c
··· 120 120 u32 reserved16[4]; 121 121 } __packed; 122 122 123 + enum { 124 + SWITCHTEC_CFG0_RUNNING = 0x04, 125 + SWITCHTEC_CFG1_RUNNING = 0x05, 126 + SWITCHTEC_IMG0_RUNNING = 0x03, 127 + SWITCHTEC_IMG1_RUNNING = 0x07, 128 + }; 129 + 123 130 struct sys_info_regs { 124 131 u32 device_id; 125 132 u32 device_version; ··· 136 129 u32 table_format_version; 137 130 u32 partition_id; 138 131 u32 cfg_file_fmt_version; 139 - u32 reserved2[58]; 132 + u16 cfg_running; 133 + u16 img_running; 134 + u32 reserved2[57]; 140 135 char vendor_id[8]; 141 136 char product_id[16]; 142 137 char product_revision[4]; ··· 816 807 { 817 808 struct switchtec_ioctl_flash_part_info info = {0}; 818 809 struct flash_info_regs __iomem *fi = stdev->mmio_flash_info; 810 + struct sys_info_regs __iomem *si = stdev->mmio_sys_info; 819 811 u32 active_addr = -1; 820 812 821 813 if (copy_from_user(&info, uinfo, sizeof(info))) ··· 826 816 case SWITCHTEC_IOCTL_PART_CFG0: 827 817 active_addr = ioread32(&fi->active_cfg); 828 818 set_fw_info_part(&info, &fi->cfg0); 819 + if (ioread16(&si->cfg_running) == SWITCHTEC_CFG0_RUNNING) 820 + info.active |= SWITCHTEC_IOCTL_PART_RUNNING; 829 821 break; 830 822 case SWITCHTEC_IOCTL_PART_CFG1: 831 823 active_addr = ioread32(&fi->active_cfg); 832 824 set_fw_info_part(&info, &fi->cfg1); 825 + if (ioread16(&si->cfg_running) == SWITCHTEC_CFG1_RUNNING) 826 + info.active |= SWITCHTEC_IOCTL_PART_RUNNING; 833 827 break; 834 828 case SWITCHTEC_IOCTL_PART_IMG0: 835 829 active_addr = ioread32(&fi->active_img); 836 830 set_fw_info_part(&info, &fi->img0); 831 + if (ioread16(&si->img_running) == SWITCHTEC_IMG0_RUNNING) 832 + info.active |= SWITCHTEC_IOCTL_PART_RUNNING; 837 833 break; 838 834 case SWITCHTEC_IOCTL_PART_IMG1: 839 835 active_addr = ioread32(&fi->active_img); 840 836 set_fw_info_part(&info, &fi->img1); 837 + if (ioread16(&si->img_running) == SWITCHTEC_IMG1_RUNNING) 838 + info.active |= SWITCHTEC_IOCTL_PART_RUNNING; 841 839 break; 842 840 case SWITCHTEC_IOCTL_PART_NVLOG: 
843 841 set_fw_info_part(&info, &fi->nvlog); ··· 879 861 } 880 862 881 863 if (info.address == active_addr) 882 - info.active = 1; 864 + info.active |= SWITCHTEC_IOCTL_PART_ACTIVE; 883 865 884 866 if (copy_to_user(uinfo, &info, sizeof(info))) 885 867 return -EFAULT; ··· 1558 1540 SWITCHTEC_PCI_DEVICE(0x8544), //PSX 64xG3 1559 1541 SWITCHTEC_PCI_DEVICE(0x8545), //PSX 80xG3 1560 1542 SWITCHTEC_PCI_DEVICE(0x8546), //PSX 96xG3 1543 + SWITCHTEC_PCI_DEVICE(0x8551), //PAX 24XG3 1544 + SWITCHTEC_PCI_DEVICE(0x8552), //PAX 32XG3 1545 + SWITCHTEC_PCI_DEVICE(0x8553), //PAX 48XG3 1546 + SWITCHTEC_PCI_DEVICE(0x8554), //PAX 64XG3 1547 + SWITCHTEC_PCI_DEVICE(0x8555), //PAX 80XG3 1548 + SWITCHTEC_PCI_DEVICE(0x8556), //PAX 96XG3 1549 + SWITCHTEC_PCI_DEVICE(0x8561), //PFXL 24XG3 1550 + SWITCHTEC_PCI_DEVICE(0x8562), //PFXL 32XG3 1551 + SWITCHTEC_PCI_DEVICE(0x8563), //PFXL 48XG3 1552 + SWITCHTEC_PCI_DEVICE(0x8564), //PFXL 64XG3 1553 + SWITCHTEC_PCI_DEVICE(0x8565), //PFXL 80XG3 1554 + SWITCHTEC_PCI_DEVICE(0x8566), //PFXL 96XG3 1555 + SWITCHTEC_PCI_DEVICE(0x8571), //PFXI 24XG3 1556 + SWITCHTEC_PCI_DEVICE(0x8572), //PFXI 32XG3 1557 + SWITCHTEC_PCI_DEVICE(0x8573), //PFXI 48XG3 1558 + SWITCHTEC_PCI_DEVICE(0x8574), //PFXI 64XG3 1559 + SWITCHTEC_PCI_DEVICE(0x8575), //PFXI 80XG3 1560 + SWITCHTEC_PCI_DEVICE(0x8576), //PFXI 96XG3 1561 1561 {0} 1562 1562 }; 1563 1563 MODULE_DEVICE_TABLE(pci, switchtec_pci_tbl);
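With this change the ioctl reports two independent flags per partition: active (flash address match) and running (`cfg_running`/`img_running` match). A sketch of the CFG0 case, with illustrative flag bit values standing in for the real `SWITCHTEC_IOCTL_PART_ACTIVE`/`SWITCHTEC_IOCTL_PART_RUNNING` definitions from the uapi header:

```c
#include <stdint.h>

/* Running-state codes from the enum added to switchtec.c above. */
enum {
	SWITCHTEC_CFG0_RUNNING = 0x04,
	SWITCHTEC_CFG1_RUNNING = 0x05,
	SWITCHTEC_IMG0_RUNNING = 0x03,
	SWITCHTEC_IMG1_RUNNING = 0x07,
};

#define PART_ACTIVE	0x1	/* illustrative values; the real flag */
#define PART_RUNNING	0x2	/* bits live in the uapi ioctl header */

/* Compute the 'active' field for a CFG0 partition query. */
static uint32_t cfg0_flags(uint32_t part_addr, uint32_t active_addr,
			   uint16_t cfg_running)
{
	uint32_t flags = 0;

	if (part_addr == active_addr)
		flags |= PART_ACTIVE;
	if (cfg_running == SWITCHTEC_CFG0_RUNNING)
		flags |= PART_RUNNING;
	return flags;
}
```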
+1 -1
drivers/video/fbdev/efifb.c
··· 408 408 if (!base) 409 409 return; 410 410 411 - for (i = 0; i < PCI_STD_RESOURCE_END; i++) { 411 + for (i = 0; i <= PCI_STD_RESOURCE_END; i++) { 412 412 struct resource *res = &dev->resource[i]; 413 413 414 414 if (!(res->flags & IORESOURCE_MEM))
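Both this efifb hunk and the quirks.c hunk earlier fix the same off-by-one: `PCI_STD_RESOURCE_END` is 5 (BARs are numbered 0 through 5), so iterating with `<` silently skipped BAR 5. The corrected inclusive loop visits all six standard BARs:

```c
#define PCI_STD_RESOURCE_END 5	/* last standard BAR index, as in pci.h */

/* Count iterations of the corrected '<=' loop; the old '<' bound
 * would have returned 5 and missed the final BAR. */
static int visited_bars(void)
{
	int i, n = 0;

	for (i = 0; i <= PCI_STD_RESOURCE_END; i++)
		n++;
	return n;
}
```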
+2 -2
include/linux/interrupt.h
··· 291 291 irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify); 292 292 293 293 struct cpumask *irq_create_affinity_masks(int nvec, const struct irq_affinity *affd); 294 - int irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd); 294 + int irq_calc_affinity_vectors(int minvec, int maxvec, const struct irq_affinity *affd); 295 295 296 296 #else /* CONFIG_SMP */ 297 297 ··· 331 331 } 332 332 333 333 static inline int 334 - irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd) 334 + irq_calc_affinity_vectors(int minvec, int maxvec, const struct irq_affinity *affd) 335 335 { 336 336 return maxvec; 337 337 }
+10
include/linux/pci-ats.h
··· 7 7 8 8 int pci_enable_pri(struct pci_dev *pdev, u32 reqs); 9 9 void pci_disable_pri(struct pci_dev *pdev); 10 + void pci_restore_pri_state(struct pci_dev *pdev); 10 11 int pci_reset_pri(struct pci_dev *pdev); 11 12 12 13 #else /* CONFIG_PCI_PRI */ ··· 18 17 } 19 18 20 19 static inline void pci_disable_pri(struct pci_dev *pdev) 20 + { 21 + } 22 + 23 + static inline void pci_restore_pri_state(struct pci_dev *pdev) 21 24 { 22 25 } 23 26 ··· 36 31 37 32 int pci_enable_pasid(struct pci_dev *pdev, int features); 38 33 void pci_disable_pasid(struct pci_dev *pdev); 34 + void pci_restore_pasid_state(struct pci_dev *pdev); 39 35 int pci_pasid_features(struct pci_dev *pdev); 40 36 int pci_max_pasids(struct pci_dev *pdev); 41 37 ··· 48 42 } 49 43 50 44 static inline void pci_disable_pasid(struct pci_dev *pdev) 45 + { 46 + } 47 + 48 + static inline void pci_restore_pasid_state(struct pci_dev *pdev) 51 49 { 52 50 } 53 51
+27 -8
include/linux/pci.h
include/linux/pci.h
···
 	unsigned int	msix_enabled:1;
 	unsigned int	ari_enabled:1;		/* ARI forwarding */
 	unsigned int	ats_enabled:1;		/* Address Translation Service */
+	unsigned int	pasid_enabled:1;	/* Process Address Space ID */
+	unsigned int	pri_enabled:1;		/* Page Request Interface */
 	unsigned int	is_managed:1;
 	unsigned int	needs_freset:1;		/* Dev requires fundamental reset */
 	unsigned int	state_saved:1;
···
 	unsigned int	is_thunderbolt:1;	/* Thunderbolt controller */
 	unsigned int	__aer_firmware_first_valid:1;
 	unsigned int	__aer_firmware_first:1;
-	unsigned int	broken_intx_masking:1;
+	unsigned int	broken_intx_masking:1;	/* INTx masking can't be used */
 	unsigned int	io_window_1k:1;		/* Intel P2P bridge 1K I/O windows */
 	unsigned int	irq_managed:1;
 	unsigned int	has_secondary_link:1;
···
 	u16		ats_cap;	/* ATS Capability offset */
 	u8		ats_stu;	/* ATS Smallest Translation Unit */
 	atomic_t	ats_ref_cnt;	/* number of VFs with ATS enabled */
+#endif
+#ifdef CONFIG_PCI_PRI
+	u32		pri_reqs_alloc;	/* Number of PRI requests allocated */
+#endif
+#ifdef CONFIG_PCI_PASID
+	u16		pasid_features;
 #endif
 	phys_addr_t	rom;	/* Physical address of ROM if it's not from the BAR */
 	size_t		romlen;	/* Length of ROM if it's not from the BAR */
···
 	void		*sysdata;
 	int		busnr;
 	struct list_head windows;	/* resource_entry */
+	u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* platform IRQ swizzler */
+	int (*map_irq)(const struct pci_dev *, u8, u8);
 	void (*release_fn)(struct pci_host_bridge *);
 	void		*release_data;
 	struct msi_controller *msi;
···
 }

 struct pci_host_bridge *pci_alloc_host_bridge(size_t priv);
-int pci_register_host_bridge(struct pci_host_bridge *bridge);
+struct pci_host_bridge *devm_pci_alloc_host_bridge(struct device *dev,
+						   size_t priv);
+void pci_free_host_bridge(struct pci_host_bridge *bridge);
 struct pci_host_bridge *pci_find_host_bridge(struct pci_bus *bus);

 void pci_set_host_bridge_release(struct pci_host_bridge *bridge,
···
 	pci_ers_result_t (*slot_reset)(struct pci_dev *dev);

 	/* PCI function reset prepare or completed */
-	void (*reset_notify)(struct pci_dev *dev, bool prepare);
+	void (*reset_prepare)(struct pci_dev *dev);
+	void (*reset_done)(struct pci_dev *dev);

 	/* Device driver may resume normal operations */
 	void (*resume)(struct pci_dev *dev);
···
 int pci_bus_insert_busn_res(struct pci_bus *b, int bus, int busmax);
 int pci_bus_update_busn_res_end(struct pci_bus *b, int busmax);
 void pci_bus_release_busn_res(struct pci_bus *b);
-struct pci_bus *pci_scan_root_bus_msi(struct device *parent, int bus,
-				      struct pci_ops *ops, void *sysdata,
-				      struct list_head *resources,
-				      struct msi_controller *msi);
 struct pci_bus *pci_scan_root_bus(struct device *parent, int bus,
 				  struct pci_ops *ops, void *sysdata,
 				  struct list_head *resources);
+int pci_scan_root_bus_bridge(struct pci_host_bridge *bridge);
 struct pci_bus *pci_add_new_bus(struct pci_bus *parent, struct pci_dev *dev,
 				int busnr);
 void pcie_update_link_speed(struct pci_bus *bus, u16 link_status);
···
 int __must_check pcim_enable_device(struct pci_dev *pdev);
 void pcim_pin_device(struct pci_dev *pdev);

+static inline bool pci_intx_mask_supported(struct pci_dev *pdev)
+{
+	/*
+	 * INTx masking is supported if PCI_COMMAND_INTX_DISABLE is
+	 * writable and no quirk has marked the feature broken.
+	 */
+	return !pdev->broken_intx_masking;
+}
+
 static inline int pci_is_enabled(struct pci_dev *pdev)
 {
 	return (atomic_read(&pdev->enable_cnt) > 0);
···
 int pci_try_set_mwi(struct pci_dev *dev);
 void pci_clear_mwi(struct pci_dev *dev);
 void pci_intx(struct pci_dev *dev, int enable);
-bool pci_intx_mask_supported(struct pci_dev *dev);
 bool pci_check_and_mask_intx(struct pci_dev *dev);
 bool pci_check_and_unmask_intx(struct pci_dev *dev);
 int pci_wait_for_pending(struct pci_dev *dev, int pos, u16 mask);
···
 int pci_enable_resources(struct pci_dev *, int mask);
 void pci_fixup_irqs(u8 (*)(struct pci_dev *, u8 *),
 		    int (*)(const struct pci_dev *, u8, u8));
+void pci_assign_irq(struct pci_dev *dev);
 struct resource *pci_find_resource(struct pci_dev *dev, struct resource *res);
 #define HAVE_PCI_REQ_REGIONS	2
 int __must_check pci_request_regions(struct pci_dev *, const char *);
include/linux/pci_ids.h (+2)
···
 #define PCI_DEVICE_ID_TTI_HPT374	0x0008
 #define PCI_DEVICE_ID_TTI_HPT372N	0x0009	/* apparently a 372N variant? */

+#define PCI_VENDOR_ID_SIGMA		0x1105
+
 #define PCI_VENDOR_ID_VIA		0x1106
 #define PCI_DEVICE_ID_VIA_8763_0	0x0198
 #define PCI_DEVICE_ID_VIA_8380_0	0x0204
include/uapi/linux/pci_regs.h (+1)
···
 #define  PCI_EXP_LNKCAP_SLS	0x0000000f /* Supported Link Speeds */
 #define  PCI_EXP_LNKCAP_SLS_2_5GB 0x00000001 /* LNKCAP2 SLS Vector bit 0 */
 #define  PCI_EXP_LNKCAP_SLS_5_0GB 0x00000002 /* LNKCAP2 SLS Vector bit 1 */
+#define  PCI_EXP_LNKCAP_SLS_8_0GB 0x00000003 /* LNKCAP2 SLS Vector bit 2 */
 #define  PCI_EXP_LNKCAP_MLW	0x000003f0 /* Maximum Link Width */
 #define  PCI_EXP_LNKCAP_ASPMS	0x00000c00 /* ASPM Support */
 #define  PCI_EXP_LNKCAP_L0SEL	0x00007000 /* L0s Exit Latency */
include/uapi/linux/switchtec_ioctl.h (+3)
···
 	__u32 padding;
 };

+#define SWITCHTEC_IOCTL_PART_ACTIVE 1
+#define SWITCHTEC_IOCTL_PART_RUNNING 2
+
 struct switchtec_ioctl_flash_part_info {
 	__u32 flash_partition;
 	__u32 address;
kernel/irq/affinity.c (+12 -1)
···
 	struct cpumask *masks;
 	cpumask_var_t nmsk, *node_to_present_cpumask;

+	/*
+	 * If there aren't any vectors left after applying the pre/post
+	 * vectors don't bother with assigning affinity.
+	 */
+	if (!affv)
+		return NULL;
+
 	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
 		return NULL;
···

 /**
  * irq_calc_affinity_vectors - Calculate the optimal number of vectors
+ * @minvec:	The minimum number of vectors available
  * @maxvec:	The maximum number of vectors available
  * @affd:	Description of the affinity requirements
  */
-int irq_calc_affinity_vectors(int maxvec, const struct irq_affinity *affd)
+int irq_calc_affinity_vectors(int minvec, int maxvec, const struct irq_affinity *affd)
 {
 	int resv = affd->pre_vectors + affd->post_vectors;
 	int vecs = maxvec - resv;
 	int ret;
+
+	if (resv > minvec)
+		return 0;

 	get_online_cpus();
 	ret = min_t(int, cpumask_weight(cpu_present_mask), vecs) + resv;