Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pci-v4.6-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"PCI changes for v4.6:

Enumeration:
- Disable IO/MEM decoding for devices with non-compliant BARs (Bjorn Helgaas)
- Mark Broadwell-EP Home Agent & PCU as having non-compliant BARs (Bjorn Helgaas)

Resource management:
- Mark shadow copy of VGA ROM as IORESOURCE_PCI_FIXED (Bjorn Helgaas)
- Don't assign or reassign immutable resources (Bjorn Helgaas)
- Don't enable/disable ROM BAR if we're using a RAM shadow copy (Bjorn Helgaas)
- Set ROM shadow location in arch code, not in PCI core (Bjorn Helgaas)
- Remove arch-specific IORESOURCE_ROM_SHADOW size from sysfs (Bjorn Helgaas)
- ia64: Use ioremap() instead of open-coded equivalent (Bjorn Helgaas)
- ia64: Keep CPU physical (not virtual) addresses in shadow ROM resource (Bjorn Helgaas)
- MIPS: Keep CPU physical (not virtual) addresses in shadow ROM resource (Bjorn Helgaas)
- Remove unused IORESOURCE_ROM_COPY and IORESOURCE_ROM_BIOS_COPY (Bjorn Helgaas)
- Don't leak memory if sysfs_create_bin_file() fails (Bjorn Helgaas)
- rcar: Remove PCI_PROBE_ONLY handling (Lorenzo Pieralisi)
- designware: Remove PCI_PROBE_ONLY handling (Lorenzo Pieralisi)

Virtualization:
- Wait for up to 1000ms after FLR reset (Alex Williamson)
- Support SR-IOV on any function type (Kelly Zytaruk)
- Add ACS quirk for all Cavium devices (Manish Jaggi)

AER:
- Rename pci_ops_aer to aer_inj_pci_ops (Bjorn Helgaas)
- Restore pci_ops pointer while calling original pci_ops (David Daney)
- Fix aer_inject error codes (Jean Delvare)
- Use dev_warn() in aer_inject (Jean Delvare)
- Log actual error causes in aer_inject (Jean Delvare)
- Log aer_inject error injections (Jean Delvare)

VPD:
- Prevent VPD access for buggy devices (Babu Moger)
- Move pci_read_vpd() and pci_write_vpd() close to other VPD code (Bjorn Helgaas)
- Move pci_vpd_release() from header file to pci/access.c (Bjorn Helgaas)
- Remove struct pci_vpd_ops.release function pointer (Bjorn Helgaas)
- Rename VPD symbols to remove unnecessary "pci22" (Bjorn Helgaas)
- Fold struct pci_vpd_pci22 into struct pci_vpd (Bjorn Helgaas)
- Sleep rather than busy-wait for VPD access completion (Bjorn Helgaas)
- Update VPD definitions (Hannes Reinecke)
- Allow access to VPD attributes with size 0 (Hannes Reinecke)
- Determine actual VPD size on first access (Hannes Reinecke)

Generic host bridge driver:
- Move structure definitions to separate header file (David Daney)
- Add pci_host_common_probe(), based on gen_pci_probe() (David Daney)
- Expose pci_host_common_probe() for use by other drivers (David Daney)

Altera host bridge driver:
- Fix altera_pcie_link_is_up() (Ley Foon Tan)

Cavium ThunderX host bridge driver:
- Add PCIe host driver for ThunderX processors (David Daney)
- Add driver for ThunderX-pass{1,2} on-chip devices (David Daney)

Freescale i.MX6 host bridge driver:
- Add DT bindings to configure PHY Tx driver settings (Justin Waters)
- Move imx6_pcie_reset_phy() near other PHY handling functions (Lucas Stach)
- Move PHY reset into imx6_pcie_establish_link() (Lucas Stach)
- Remove broken Gen2 workaround (Lucas Stach)
- Move link up check into imx6_pcie_wait_for_link() (Lucas Stach)

Freescale Layerscape host bridge driver:
- Add "fsl,ls2085a-pcie" compatible ID (Yang Shi)

Intel VMD host bridge driver:
- Attach VMD resources to parent domain's resource tree (Jon Derrick)
- Set bus resource start to 0 (Keith Busch)

Microsoft Hyper-V host bridge driver:
- Add fwnode_handle to x86 pci_sysdata (Jake Oshins)
- Look up IRQ domain by fwnode_handle (Jake Oshins)
- Add paravirtual PCI front-end for Microsoft Hyper-V VMs (Jake Oshins)

NVIDIA Tegra host bridge driver:
- Add pci_ops.{add,remove}_bus() callbacks (Thierry Reding)
- Implement ->{add,remove}_bus() callbacks (Thierry Reding)
- Remove unused struct tegra_pcie.num_ports field (Thierry Reding)
- Track bus -> CPU mapping (Thierry Reding)
- Remove misleading PHYS_OFFSET (Thierry Reding)

Renesas R-Car host bridge driver:
- Depend on ARCH_RENESAS, not ARCH_SHMOBILE (Simon Horman)

Synopsys DesignWare host bridge driver:
- ARC: Add PCI support (Joao Pinto)
- Add generic dw_pcie_wait_for_link() (Joao Pinto)
- Add default link up check if sub-driver doesn't override (Joao Pinto)
- Add driver for prototyping kits based on ARC SDP (Joao Pinto)

TI Keystone host bridge driver:
- Defer probing if devm_phy_get() returns -EPROBE_DEFER (Shawn Lin)

Xilinx AXI host bridge driver:
- Use of_pci_get_host_bridge_resources() to parse DT (Bharat Kumar Gogada)
- Remove dependency on ARM-specific struct hw_pci (Bharat Kumar Gogada)
- Don't call pci_fixup_irqs() on Microblaze (Bharat Kumar Gogada)
- Update Zynq binding with Microblaze node (Bharat Kumar Gogada)
- microblaze: Support generic Xilinx AXI PCIe Host Bridge IP driver (Bharat Kumar Gogada)

Xilinx NWL host bridge driver:
- Add support for Xilinx NWL PCIe Host Controller (Bharat Kumar Gogada)

Miscellaneous:
- Check device_attach() return value always (Bjorn Helgaas)
- Move pci_set_flags() from asm-generic/pci-bridge.h to linux/pci.h (Bjorn Helgaas)
- Remove includes of empty asm-generic/pci-bridge.h (Bjorn Helgaas)
- ARM64: Remove generated include of asm-generic/pci-bridge.h (Bjorn Helgaas)
- Remove empty asm-generic/pci-bridge.h (Bjorn Helgaas)
- Remove includes of asm/pci-bridge.h (Bjorn Helgaas)
- Consolidate PCI DMA constants and interfaces in linux/pci-dma-compat.h (Bjorn Helgaas)
- unicore32: Remove unused HAVE_ARCH_PCI_SET_DMA_MASK definition (Bjorn Helgaas)
- Cleanup pci/pcie/Kconfig whitespace (Andreas Ziegler)
- Include pci/hotplug Kconfig directly from pci/Kconfig (Bjorn Helgaas)
- Include pci/pcie/Kconfig directly from pci/Kconfig (Bogicevic Sasa)
- frv: Remove stray pci_{alloc,free}_consistent() declaration (Christoph Hellwig)
- Move pci_dma_* helpers to common code (Christoph Hellwig)
- Add PCI_CLASS_SERIAL_USB_DEVICE definition (Heikki Krogerus)
- Add QEMU top-level IDs for (sub)vendor & device (Robin H. Johnson)
- Fix broken URL for Dell biosdevname (Naga Venkata Sai Indubhaskar Jupudi)"

* tag 'pci-v4.6-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (94 commits)
PCI: Add PCI_CLASS_SERIAL_USB_DEVICE definition
PCI: designware: Add driver for prototyping kits based on ARC SDP
PCI: designware: Add default link up check if sub-driver doesn't override
PCI: designware: Add generic dw_pcie_wait_for_link()
PCI: Cleanup pci/pcie/Kconfig whitespace
PCI: Simplify pci_create_attr() control flow
PCI: Don't leak memory if sysfs_create_bin_file() fails
PCI: Simplify sysfs ROM cleanup
PCI: Remove unused IORESOURCE_ROM_COPY and IORESOURCE_ROM_BIOS_COPY
MIPS: Loongson 3: Keep CPU physical (not virtual) addresses in shadow ROM resource
MIPS: Loongson 3: Use temporary struct resource * to avoid repetition
ia64/PCI: Keep CPU physical (not virtual) addresses in shadow ROM resource
ia64/PCI: Use ioremap() instead of open-coded equivalent
ia64/PCI: Use temporary struct resource * to avoid repetition
PCI: Clean up pci_map_rom() whitespace
PCI: Remove arch-specific IORESOURCE_ROM_SHADOW size from sysfs
PCI: thunder: Add driver for ThunderX-pass{1,2} on-chip devices
PCI: thunder: Add PCIe host driver for ThunderX processors
PCI: generic: Expose pci_host_common_probe() for use by other drivers
PCI: generic: Add pci_host_common_probe(), based on gen_pci_probe()
...

+5641 -1144
+17
Documentation/devicetree/bindings/pci/designware-pcie.txt
···
 - clock-names: Must include the following entries:
	- "pcie"
	- "pcie_bus"
+
+Example configuration:
+
+	pcie: pcie@0xdffff000 {
+		compatible = "snps,dw-pcie";
+		reg = <0xdffff000 0x1000>, /* Controller registers */
+		      <0xd0000000 0x2000>; /* PCI config space */
+		reg-names = "ctrlreg", "config";
+		#address-cells = <3>;
+		#size-cells = <2>;
+		device_type = "pci";
+		ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000
+			  0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
+		interrupts = <25>, <24>;
+		#interrupt-cells = <1>;
+		num-lanes = <1>;
+	};
+7
Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
···
 - clock-names: Must include the following additional entries:
	- "pcie_phy"
 
+Optional properties:
+- fsl,tx-deemph-gen1: Gen1 De-emphasis value. Default: 0
+- fsl,tx-deemph-gen2-3p5db: Gen2 (3.5db) De-emphasis value. Default: 0
+- fsl,tx-deemph-gen2-6db: Gen2 (6db) De-emphasis value. Default: 20
+- fsl,tx-swing-full: Gen2 TX SWING FULL value. Default: 127
+- fsl,tx-swing-low: TX launch amplitude swing_low value. Default: 127
+
 Example:
 
 	pcie@0x01000000 {
+30
Documentation/devicetree/bindings/pci/pci-thunder-ecam.txt
···
+* ThunderX PCI host controller for pass-1.x silicon
+
+Firmware-initialized PCI host controller to on-chip devices found on
+some Cavium ThunderX processors.  These devices have ECAM-based config
+access, but the BARs are all at fixed addresses.  We handle the fixed
+addresses by synthesizing Enhanced Allocation (EA) capabilities for
+these devices.
+
+The properties and their meanings are identical to those described in
+host-generic-pci.txt except as listed below.
+
+Properties of the host controller node that differ from
+host-generic-pci.txt:
+
+- compatible : Must be "cavium,pci-host-thunder-ecam"
+
+Example:
+
+	pcie@84b000000000 {
+		compatible = "cavium,pci-host-thunder-ecam";
+		device_type = "pci";
+		msi-parent = <&its>;
+		msi-map = <0 &its 0x30000 0x10000>;
+		bus-range = <0 31>;
+		#size-cells = <2>;
+		#address-cells = <3>;
+		#stream-id-cells = <1>;
+		reg = <0x84b0 0x00000000 0 0x02000000>;  /* Configuration space */
+		ranges = <0x03000000 0x8180 0x00000000 0x8180 0x00000000 0x80 0x00000000>; /* mem ranges */
+	};
+43
Documentation/devicetree/bindings/pci/pci-thunder-pem.txt
···
+* ThunderX PEM PCIe host controller
+
+Firmware-initialized PCI host controller found on some Cavium
+ThunderX processors.
+
+The properties and their meanings are identical to those described in
+host-generic-pci.txt except as listed below.
+
+Properties of the host controller node that differ from
+host-generic-pci.txt:
+
+- compatible : Must be "cavium,pci-host-thunder-pem"
+
+- reg        : Two entries: First the configuration space for down
+               stream devices base address and size, as accessed
+               from the parent bus.  Second, the register bank of
+               the PEM device PCIe bridge.
+
+Example:
+
+	pci@87e0,c2000000 {
+		compatible = "cavium,pci-host-thunder-pem";
+		device_type = "pci";
+		msi-parent = <&its>;
+		msi-map = <0 &its 0x10000 0x10000>;
+		bus-range = <0x8f 0xc7>;
+		#size-cells = <2>;
+		#address-cells = <3>;
+
+		reg = <0x8880 0x8f000000 0x0 0x39000000>,  /* Configuration space */
+		      <0x87e0 0xc2000000 0x0 0x00010000>; /* PEM space */
+		ranges = <0x01000000 0x00 0x00020000 0x88b0 0x00020000 0x00 0x00010000>, /* I/O */
+			 <0x03000000 0x00 0x10000000 0x8890 0x10000000 0x0f 0xf0000000>, /* mem64 */
+			 <0x43000000 0x10 0x00000000 0x88a0 0x00000000 0x10 0x00000000>, /* mem64-pref */
+			 <0x03000000 0x87e0 0xc2f00000 0x87e0 0xc2000000 0x00 0x00100000>; /* mem64 PEM BAR4 */
+
+		#interrupt-cells = <1>;
+		interrupt-map-mask = <0 0 0 7>;
+		interrupt-map = <0 0 0 1 &gic0 0 0 0 24 4>, /* INTA */
+				<0 0 0 2 &gic0 0 0 0 25 4>, /* INTB */
+				<0 0 0 3 &gic0 0 0 0 26 4>, /* INTC */
+				<0 0 0 4 &gic0 0 0 0 27 4>; /* INTD */
+	};
+68
Documentation/devicetree/bindings/pci/xilinx-nwl-pcie.txt
···
+* Xilinx NWL PCIe Root Port Bridge DT description
+
+Required properties:
+- compatible: Should contain "xlnx,nwl-pcie-2.11"
+- #address-cells: Address representation for root ports, set to <3>
+- #size-cells: Size representation for root ports, set to <2>
+- #interrupt-cells: specifies the number of cells needed to encode an
+	interrupt source. The value must be 1.
+- reg: Should contain Bridge, PCIe Controller registers location,
+	configuration space, and length
+- reg-names: Must include the following entries:
+	"breg": bridge registers
+	"pcireg": PCIe controller registers
+	"cfg": configuration space region
+- device_type: must be "pci"
+- interrupts: Should contain NWL PCIe interrupt
+- interrupt-names: Must include the following entries:
+	"msi1, msi0": interrupt asserted when MSI is received
+	"intx": interrupt asserted when a legacy interrupt is received
+	"misc": interrupt asserted when miscellaneous is received
+- interrupt-map-mask and interrupt-map: standard PCI properties to define the
+	mapping of the PCI interface to interrupt numbers.
+- ranges: ranges for the PCI memory regions (I/O space region is not
+	supported by hardware)
+	Please refer to the standard PCI bus binding document for a more
+	detailed explanation
+- msi-controller: indicates that this is MSI controller node
+- msi-parent: MSI parent of the root complex itself
+- legacy-interrupt-controller: Interrupt controller device node for Legacy interrupts
+	- interrupt-controller: identifies the node as an interrupt controller
+	- #interrupt-cells: should be set to 1
+	- #address-cells: specifies the number of cells needed to encode an
+		address. The value must be 0.
+
+
+Example:
+++++++++
+
+nwl_pcie: pcie@fd0e0000 {
+	#address-cells = <3>;
+	#size-cells = <2>;
+	compatible = "xlnx,nwl-pcie-2.11";
+	#interrupt-cells = <1>;
+	msi-controller;
+	device_type = "pci";
+	interrupt-parent = <&gic>;
+	interrupts = <0 114 4>, <0 115 4>, <0 116 4>, <0 117 4>, <0 118 4>;
+	interrupt-names = "msi0", "msi1", "intx", "dummy", "misc";
+	interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+	interrupt-map = <0x0 0x0 0x0 0x1 &pcie_intc 0x1>,
+			<0x0 0x0 0x0 0x2 &pcie_intc 0x2>,
+			<0x0 0x0 0x0 0x3 &pcie_intc 0x3>,
+			<0x0 0x0 0x0 0x4 &pcie_intc 0x4>;
+
+	msi-parent = <&nwl_pcie>;
+	reg = <0x0 0xfd0e0000 0x0 0x1000>,
+	      <0x0 0xfd480000 0x0 0x1000>,
+	      <0x0 0xe0000000 0x0 0x1000000>;
+	reg-names = "breg", "pcireg", "cfg";
+	ranges = <0x02000000 0x00000000 0xe1000000 0x00000000 0xe1000000 0 0x0f000000>;
+
+	pcie_intc: legacy-interrupt-controller {
+		interrupt-controller;
+		#address-cells = <0>;
+		#interrupt-cells = <1>;
+	};
+
+};
+29 -3
Documentation/devicetree/bindings/pci/xilinx-pcie.txt
···
 	Please refer to the standard PCI bus binding document for a more
 	detailed explanation
 
-Optional properties:
+Optional properties for Zynq/Microblaze:
 - bus-range: PCI bus numbers covered
 
 Interrupt controller child node
···
 
 Example:
 ++++++++
-
+Zynq:
 	pci_express: axi-pcie@50000000 {
 		#address-cells = <3>;
 		#size-cells = <2>;
 		#interrupt-cells = <1>;
 		compatible = "xlnx,axi-pcie-host-1.00.a";
-		reg = < 0x50000000 0x10000000 >;
+		reg = < 0x50000000 0x1000000 >;
 		device_type = "pci";
 		interrupts = < 0 52 4 >;
 		interrupt-map-mask = <0 0 0 7>;
···
 			#address-cells = <0>;
 			#interrupt-cells = <1>;
 		};
+	};
+
+
+Microblaze:
+	pci_express: axi-pcie@10000000 {
+		#address-cells = <3>;
+		#size-cells = <2>;
+		#interrupt-cells = <1>;
+		compatible = "xlnx,axi-pcie-host-1.00.a";
+		reg = <0x10000000 0x4000000>;
+		device_type = "pci";
+		interrupt-parent = <&microblaze_0_intc>;
+		interrupts = <1 2>;
+		interrupt-map-mask = <0 0 0 7>;
+		interrupt-map = <0 0 0 1 &pcie_intc 1>,
+				<0 0 0 2 &pcie_intc 2>,
+				<0 0 0 3 &pcie_intc 3>,
+				<0 0 0 4 &pcie_intc 4>;
+		ranges = <0x02000000 0x00000000 0x80000000 0x80000000 0x00000000 0x10000000>;
+
+		pcie_intc: interrupt-controller {
+			interrupt-controller;
+			#address-cells = <0>;
+			#interrupt-cells = <1>;
+		};
+
 	};
+17
MAINTAINERS
···
 F:	drivers/hid/hid-hyperv.c
 F:	drivers/hv/
 F:	drivers/input/serio/hyperv-keyboard.c
+F:	drivers/pci/host/pci-hyperv.c
 F:	drivers/net/hyperv/
 F:	drivers/scsi/storvsc_drv.c
 F:	drivers/video/fbdev/hyperv_fb.c
···
 S:	Maintained
 F:	drivers/pci/host/*designware*
 
+PCI DRIVER FOR SYNOPSYS PROTOTYPING DEVICE
+M:	Joao Pinto <jpinto@synopsys.com>
+L:	linux-pci@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/pci/designware-pcie.txt
+F:	drivers/pci/host/pcie-designware-plat.c
+
 PCI DRIVER FOR GENERIC OF HOSTS
 M:	Will Deacon <will.deacon@arm.com>
 L:	linux-pci@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	Documentation/devicetree/bindings/pci/host-generic-pci.txt
+F:	drivers/pci/host/pci-host-common.c
 F:	drivers/pci/host/pci-host-generic.c
 
 PCI DRIVER FOR INTEL VOLUME MANAGEMENT DEVICE (VMD)
···
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
 F:	drivers/pci/host/*qcom*
+
+PCIE DRIVER FOR CAVIUM THUNDERX
+M:	David Daney <david.daney@cavium.com>
+L:	linux-pci@vger.kernel.org
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Supported
+F:	Documentation/devicetree/bindings/pci/pci-thunder-*
+F:	drivers/pci/host/pci-thunder-*
 
 PCMCIA SUBSYSTEM
 P:	Linux PCMCIA Team
-8
arch/alpha/include/asm/pci.h
···
 #include <linux/dma-mapping.h>
 #include <linux/scatterlist.h>
 #include <asm/machvec.h>
-#include <asm-generic/pci-bridge.h>
 
 /*
  * The following structure is used to manage multiple PCI busses.
···
    The networking and block device layers use this boolean for bounce buffer
    decisions.  */
 #define PCI_DMA_BUS_IS_PHYS  0
-
-#ifdef CONFIG_PCI
-
-/* implement the pci_ DMA API in terms of the generic device dma_ one */
-#include <asm-generic/pci-dma-compat.h>
-
-#endif
 
 /* TODO: integrate with include/asm-generic/pci.h ? */
 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
+26
arch/arc/Kconfig
···
 	select GENERIC_FIND_FIRST_BIT
 	# for now, we don't need GENERIC_IRQ_PROBE, CONFIG_GENERIC_IRQ_CHIP
 	select GENERIC_IRQ_SHOW
+	select GENERIC_PCI_IOMAP
 	select GENERIC_PENDING_IRQ if SMP
 	select GENERIC_SMP_IDLE_THREAD
 	select HAVE_ARCH_KGDB
···
 	select OF_EARLY_FLATTREE
 	select PERF_USE_VMALLOC
 	select HAVE_DEBUG_STACKOVERFLOW
+
+config MIGHT_HAVE_PCI
+	bool
 
 config TRACE_IRQFLAGS_SUPPORT
 	def_bool y
···
 
 source "net/Kconfig"
 source "drivers/Kconfig"
+
+menu "Bus Support"
+
+config PCI
+	bool "PCI support" if MIGHT_HAVE_PCI
+	help
+	  PCI is the name of a bus system, i.e., the way the CPU talks to
+	  the other stuff inside your box.  Find out if your board/platform
+	  has PCI.
+
+	  Note: PCIe support for Synopsys Device will be available only
+	  when HAPS DX is configured with PCIe RC bitmap.  If you have PCI,
+	  say Y, otherwise N.
+
+config PCI_SYSCALL
+	def_bool PCI
+
+source "drivers/pci/Kconfig"
+source "drivers/pci/pcie/Kconfig"
+
+endmenu
+
 source "fs/Kconfig"
 source "arch/arc/Kconfig.debug"
 source "security/Kconfig"
+5
arch/arc/include/asm/dma.h
···
 #define ASM_ARC_DMA_H
 
 #define MAX_DMA_ADDRESS 0xC0000000
+#ifdef CONFIG_PCI
+extern int isa_dma_bridge_buggy;
+#else
+#define isa_dma_bridge_buggy	0
+#endif
 
 #endif
+9
arch/arc/include/asm/io.h
···
 extern void __iomem *ioremap(unsigned long physaddr, unsigned long size);
 extern void __iomem *ioremap_prot(phys_addr_t offset, unsigned long size,
 				  unsigned long flags);
+static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
+{
+	return (void __iomem *)port;
+}
+
+static inline void ioport_unmap(void __iomem *addr)
+{
+}
+
 extern void iounmap(const void __iomem *addr);
 
 #define ioremap_nocache(phy, sz)	ioremap(phy, sz)
+28
arch/arc/include/asm/pci.h
···
+/*
+ * Copyright (C) 2015-2016 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _ASM_ARC_PCI_H
+#define _ASM_ARC_PCI_H
+
+#ifdef __KERNEL__
+#include <linux/ioport.h>
+
+#define PCIBIOS_MIN_IO 0x100
+#define PCIBIOS_MIN_MEM 0x100000
+
+#define pcibios_assign_all_busses()	1
+/*
+ * The PCI address space does equal the physical memory address space.
+ * The networking and block device layers use this boolean for bounce
+ * buffer decisions.
+ */
+#define PCI_DMA_BUS_IS_PHYS	1
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_ARC_PCI_H */
+1
arch/arc/kernel/Makefile
···
 obj-y	+= signal.o traps.o sys.o troubleshoot.o stacktrace.o disasm.o clk.o
 obj-$(CONFIG_ISA_ARCOMPACT)	+= entry-compact.o intc-compact.o
 obj-$(CONFIG_ISA_ARCV2)		+= entry-arcv2.o intc-arcv2.o
+obj-$(CONFIG_PCI)		+= pcibios.o
 
 obj-$(CONFIG_MODULES)	+= arcksyms.o module.o
 obj-$(CONFIG_SMP)	+= smp.o
+22
arch/arc/kernel/pcibios.c
···
+/*
+ * Copyright (C) 2014-2015 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/pci.h>
+
+/*
+ * We don't have to worry about legacy ISA devices, so nothing to do here
+ */
+resource_size_t pcibios_align_resource(void *data, const struct resource *res,
+				       resource_size_t size, resource_size_t align)
+{
+	return res->start;
+}
+
+void pcibios_fixup_bus(struct pci_bus *bus)
+{
+}
+1
arch/arc/plat-axs10x/Kconfig
···
 	select DW_APB_ICTL
 	select GPIO_DWAPB
 	select OF_GPIO
+	select MIGHT_HAVE_PCI
 	select GENERIC_IRQ_CHIP
 	select ARCH_REQUIRE_GPIOLIB
 	help
-1
arch/arm/Kconfig
···
 	select DMABOUNCE
 
 source "drivers/pci/Kconfig"
-source "drivers/pci/pcie/Kconfig"
 
 source "drivers/pcmcia/Kconfig"
 
-4
arch/arm/include/asm/pci.h
···
 #define ASMARM_PCI_H
 
 #ifdef __KERNEL__
-#include <asm-generic/pci-dma-compat.h>
-#include <asm-generic/pci-bridge.h>
-
 #include <asm/mach/pci.h>	/* for pci_sys_data */
 
 extern unsigned long pcibios_min_io;
···
 }
 
 #endif /* __KERNEL__ */
-
 #endif
-2
arch/arm64/Kconfig
···
 	def_bool PCI
 
 source "drivers/pci/Kconfig"
-source "drivers/pci/pcie/Kconfig"
-source "drivers/pci/hotplug/Kconfig"
 
 endmenu
 
-3
arch/arm64/include/asm/Kbuild
···
-
-
 generic-y += bug.h
 generic-y += bugs.h
 generic-y += checksum.h
···
 generic-y += msi.h
 generic-y += mutex.h
 generic-y += pci.h
-generic-y += pci-bridge.h
 generic-y += poll.h
 generic-y += preempt.h
 generic-y += resource.h
-2
arch/arm64/include/asm/pci.h
···
 #include <linux/dma-mapping.h>
 
 #include <asm/io.h>
-#include <asm-generic/pci-bridge.h>
-#include <asm-generic/pci-dma-compat.h>
 
 #define PCIBIOS_MIN_IO		0x1000
 #define PCIBIOS_MIN_MEM		0
-2
arch/arm64/kernel/pci.c
···
 #include <linux/of_platform.h>
 #include <linux/slab.h>
 
-#include <asm/pci-bridge.h>
-
 /*
  * Called after each bus is probed, but before its children are examined
  */
-2
arch/avr32/include/asm/pci.h
···
 
 #define PCI_DMA_BUS_IS_PHYS	(1)
 
-#include <asm-generic/pci-dma-compat.h>
-
 #endif /* __ASM_AVR32_PCI_H__ */
-2
arch/blackfin/Kconfig
···
 
 source "drivers/pcmcia/Kconfig"
 
-source "drivers/pci/hotplug/Kconfig"
-
 endmenu
 
 menu "Executable file formats"
-1
arch/blackfin/include/asm/pci.h
···
 #define _ASM_BFIN_PCI_H
 
 #include <linux/scatterlist.h>
-#include <asm-generic/pci-dma-compat.h>
 #include <asm-generic/pci.h>
 
 #define PCIBIOS_MIN_IO 0x00001000
-3
arch/cris/include/asm/pci.h
···
 
 #endif /* __KERNEL__ */
 
-/* implement the pci_ DMA API in terms of the generic device dma_ one */
-#include <asm-generic/pci-dma-compat.h>
-
 /* generic pci stuff */
 #include <asm-generic/pci.h>
 
-7
arch/frv/include/asm/pci.h
···
 
 #include <linux/mm.h>
 #include <linux/scatterlist.h>
-#include <asm-generic/pci-dma-compat.h>
 #include <asm-generic/pci.h>
 
 struct pci_dev;
···
 extern void consistent_sync_page(struct page *page, unsigned long offset,
 				 size_t size, int direction);
 #endif
-
-extern void *pci_alloc_consistent(struct pci_dev *hwdev, size_t size,
-				  dma_addr_t *dma_handle);
-
-extern void pci_free_consistent(struct pci_dev *hwdev, size_t size,
-				void *vaddr, dma_addr_t dma_handle);
 
 /* Return the index of the PCI controller for device PDEV. */
 #define pci_controller_num(PDEV)	(0)
-4
arch/ia64/Kconfig
···
 config PCI_SYSCALL
 	def_bool PCI
 
-source "drivers/pci/pcie/Kconfig"
-
 source "drivers/pci/Kconfig"
-
-source "drivers/pci/hotplug/Kconfig"
 
 source "drivers/pcmcia/Kconfig"
 
-2
arch/ia64/include/asm/pci.h
···
 extern unsigned long ia64_max_iommu_merge_mask;
 #define PCI_DMA_BUS_IS_PHYS	(ia64_max_iommu_merge_mask == ~0UL)
 
-#include <asm-generic/pci-dma-compat.h>
-
 #define HAVE_PCI_MMAP
 extern int pci_mmap_page_range (struct pci_dev *dev, struct vm_area_struct *vma,
 				enum pci_mmap_state mmap_state, int write_combine);
+16 -5
arch/ia64/pci/fixup.c
···
  *
  * The standard boot ROM sequence for an x86 machine uses the BIOS
  * to select an initial video card for boot display. This boot video
- * card will have it's BIOS copied to C0000 in system RAM.
+ * card will have its BIOS copied to 0xC0000 in system RAM.
  * IORESOURCE_ROM_SHADOW is used to associate the boot video
  * card with this copy. On laptops this copy has to be used since
  * the main ROM may be compressed or combined with another image.
  * See pci_map_rom() for use of this flag. Before marking the device
  * with IORESOURCE_ROM_SHADOW check if a vga_default_device is already set
- * by either arch cde or vga-arbitration, if so only apply the fixup to this
- * already determined primary video card.
+ * by either arch code or vga-arbitration; if so only apply the fixup to this
+ * already-determined primary video card.
  */
···
 	struct pci_dev *bridge;
 	struct pci_bus *bus;
 	u16 config;
+	struct resource *res;
 
 	if ((strcmp(ia64_platform_name, "dig") != 0)
 	    && (strcmp(ia64_platform_name, "hpzx1") != 0))
···
 	if (!vga_default_device() || pdev == vga_default_device()) {
 		pci_read_config_word(pdev, PCI_COMMAND, &config);
 		if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
-			pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW;
-			dev_printk(KERN_DEBUG, &pdev->dev, "Video device with shadowed ROM\n");
+			res = &pdev->resource[PCI_ROM_RESOURCE];
+
+			pci_disable_rom(pdev);
+			if (res->parent)
+				release_resource(res);
+
+			res->start = 0xC0000;
+			res->end = res->start + 0x20000 - 1;
+			res->flags = IORESOURCE_MEM | IORESOURCE_ROM_SHADOW |
+				     IORESOURCE_PCI_FIXED;
+			dev_info(&pdev->dev, "Video device with shadowed ROM at %pR\n",
+				 res);
 		}
 	}
 }
+13 -9
arch/ia64/sn/kernel/io_acpi_init.c
···
 	void __iomem *addr;
 	struct pcidev_info *pcidev_info = NULL;
 	struct sn_irq_info *sn_irq_info = NULL;
-	size_t image_size, size;
+	struct resource *res;
+	size_t size;
 
 	if (sn_acpi_get_pcidev_info(dev, &pcidev_info, &sn_irq_info)) {
 		panic("%s: Failure obtaining pcidev_info for %s\n",
···
 		 * of the shadowed copy, and the actual length of the ROM image.
 		 */
 		size = pci_resource_len(dev, PCI_ROM_RESOURCE);
-		addr = ioremap(pcidev_info->pdi_pio_mapped_addr[PCI_ROM_RESOURCE],
-			       size);
-		image_size = pci_get_rom_size(dev, addr, size);
-		dev->resource[PCI_ROM_RESOURCE].start = (unsigned long) addr;
-		dev->resource[PCI_ROM_RESOURCE].end =
-					(unsigned long) addr + image_size - 1;
-		dev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_BIOS_COPY;
+
+		res = &dev->resource[PCI_ROM_RESOURCE];
+
+		pci_disable_rom(dev);
+		if (res->parent)
+			release_resource(res);
+
+		res->start = pcidev_info->pdi_pio_mapped_addr[PCI_ROM_RESOURCE];
+		res->end = res->start + size - 1;
+		res->flags = IORESOURCE_MEM | IORESOURCE_ROM_SHADOW |
+			     IORESOURCE_PCI_FIXED;
 	}
 	sn_pci_fixup_slot(dev, pcidev_info, sn_irq_info);
 }
-
 EXPORT_SYMBOL(sn_acpi_slot_fixup);
 
 
+19 -32
arch/ia64/sn/kernel/io_init.c
···
 sn_io_slot_fixup(struct pci_dev *dev)
 {
 	int idx;
-	unsigned long addr, end, size, start;
+	struct resource *res;
+	unsigned long addr, size;
 	struct pcidev_info *pcidev_info;
 	struct sn_irq_info *sn_irq_info;
 	int status;
···
 
 	/* Copy over PIO Mapped Addresses */
 	for (idx = 0; idx <= PCI_ROM_RESOURCE; idx++) {
-
-		if (!pcidev_info->pdi_pio_mapped_addr[idx]) {
+		if (!pcidev_info->pdi_pio_mapped_addr[idx])
 			continue;
-		}
 
-		start = dev->resource[idx].start;
-		end = dev->resource[idx].end;
-		size = end - start;
-		if (size == 0) {
+		res = &dev->resource[idx];
+
+		size = res->end - res->start;
+		if (size == 0)
 			continue;
-		}
-		addr = pcidev_info->pdi_pio_mapped_addr[idx];
-		addr = ((addr << 4) >> 4) | __IA64_UNCACHED_OFFSET;
-		dev->resource[idx].start = addr;
-		dev->resource[idx].end = addr + size;
+
+		res->start = pcidev_info->pdi_pio_mapped_addr[idx];
+		res->end = addr + size;
 
 		/*
 		 * if it's already in the device structure, remove it before
 		 * inserting
 		 */
-		if (dev->resource[idx].parent && dev->resource[idx].parent->child)
-			release_resource(&dev->resource[idx]);
+		if (res->parent && res->parent->child)
+			release_resource(res);
 
-		if (dev->resource[idx].flags & IORESOURCE_IO)
-			insert_resource(&ioport_resource, &dev->resource[idx]);
+		if (res->flags & IORESOURCE_IO)
+			insert_resource(&ioport_resource, res);
 		else
-			insert_resource(&iomem_resource, &dev->resource[idx]);
+			insert_resource(&iomem_resource, res);
 		/*
-		 * If ROM, set the actual ROM image size, and mark as
-		 * shadowed in PROM.
+		 * If ROM, mark as shadowed in PROM.
 		 */
 		if (idx == PCI_ROM_RESOURCE) {
-			size_t image_size;
-			void __iomem *rom;
-
-			rom = ioremap(pci_resource_start(dev, PCI_ROM_RESOURCE),
-				      size + 1);
-			image_size = pci_get_rom_size(dev, rom, size + 1);
-			dev->resource[PCI_ROM_RESOURCE].end =
-				dev->resource[PCI_ROM_RESOURCE].start +
-				image_size - 1;
-			dev->resource[PCI_ROM_RESOURCE].flags |=
-				IORESOURCE_ROM_BIOS_COPY;
+			pci_disable_rom(dev);
+			res->flags = IORESOURCE_MEM | IORESOURCE_ROM_SHADOW |
+				     IORESOURCE_PCI_FIXED;
 		}
 	}
 
 	sn_pci_fixup_slot(dev, pcidev_info, sn_irq_info);
 }
-
 EXPORT_SYMBOL(sn_io_slot_fixup);
 
 /*
-2
arch/m32r/Kconfig
··· 387 387 388 388 source "drivers/pcmcia/Kconfig" 389 389 390 - source "drivers/pci/hotplug/Kconfig" 391 - 392 390 endmenu 393 391 394 392
-1
arch/m68k/include/asm/pci.h
··· 1 1 #ifndef _ASM_M68K_PCI_H 2 2 #define _ASM_M68K_PCI_H 3 3 4 - #include <asm-generic/pci-dma-compat.h> 5 4 #include <asm-generic/pci.h> 6 5 7 6 /* The PCI address space does equal the physical memory
+3
arch/microblaze/Kconfig
··· 267 267 config PCI_DOMAINS 268 268 def_bool PCI 269 269 270 + config PCI_DOMAINS_GENERIC 271 + def_bool PCI_DOMAINS 272 + 270 273 config PCI_SYSCALL 271 274 def_bool PCI 272 275
-2
arch/microblaze/include/asm/pci.h
··· 22 22 #include <asm/prom.h> 23 23 #include <asm/pci-bridge.h> 24 24 25 - #include <asm-generic/pci-dma-compat.h> 26 - 27 25 #define PCIBIOS_MIN_IO 0x1000 28 26 #define PCIBIOS_MIN_MEM 0x10000000 29 27
+10 -46
arch/microblaze/pci/pci-common.c
··· 123 123 } 124 124 EXPORT_SYMBOL_GPL(pci_address_to_pio); 125 125 126 - /* 127 - * Return the domain number for this bus. 128 - */ 129 - int pci_domain_nr(struct pci_bus *bus) 130 - { 131 - struct pci_controller *hose = pci_bus_to_host(bus); 132 - 133 - return hose->global_number; 134 - } 135 - EXPORT_SYMBOL(pci_domain_nr); 136 - 137 126 /* This routine is meant to be used early during boot, when the 138 127 * PCI bus numbers have not yet been assigned, and you need to 139 128 * issue PCI config cycles to an OF device. ··· 852 863 853 864 void pcibios_fixup_bus(struct pci_bus *bus) 854 865 { 855 - /* When called from the generic PCI probe, read PCI<->PCI bridge 856 - * bases. This is -not- called when generating the PCI tree from 857 - * the OF device-tree. 858 - */ 859 - if (bus->self != NULL) 860 - pci_read_bridge_bases(bus); 861 - 862 - /* Now fixup the bus bus */ 863 - pcibios_setup_bus_self(bus); 864 - 865 - /* Now fixup devices on that bus */ 866 - pcibios_setup_bus_devices(bus); 866 + /* nothing to do */ 867 867 } 868 868 EXPORT_SYMBOL(pcibios_fixup_bus); 869 - 870 - static int skip_isa_ioresource_align(struct pci_dev *dev) 871 - { 872 - return 0; 873 - } 874 869 875 870 /* 876 871 * We need to avoid collisions with `mirrored' VGA ports ··· 872 899 resource_size_t pcibios_align_resource(void *data, const struct resource *res, 873 900 resource_size_t size, resource_size_t align) 874 901 { 875 - struct pci_dev *dev = data; 876 - resource_size_t start = res->start; 877 - 878 - if (res->flags & IORESOURCE_IO) { 879 - if (skip_isa_ioresource_align(dev)) 880 - return start; 881 - if (start & 0x300) 882 - start = (start + 0x3ff) & ~0x3ff; 883 - } 884 - 885 - return start; 902 + return res->start; 886 903 } 887 904 EXPORT_SYMBOL(pcibios_align_resource); 905 + 906 + int pcibios_add_device(struct pci_dev *dev) 907 + { 908 + dev->irq = of_irq_parse_and_map_pci(dev, 0, 0); 909 + 910 + return 0; 911 + } 912 + EXPORT_SYMBOL(pcibios_add_device); 888 913 889 914 /* 890 
915 * Reparent resource children of pr that conflict with res ··· 1302 1331 (unsigned long long)hose->pci_mem_offset); 1303 1332 pr_debug("PCI: PHB IO offset = %08lx\n", 1304 1333 (unsigned long)hose->io_base_virt - _IO_BASE); 1305 - } 1306 - 1307 - struct device_node *pcibios_get_phb_of_node(struct pci_bus *bus) 1308 - { 1309 - struct pci_controller *hose = bus->sysdata; 1310 - 1311 - return of_node_get(hose->dn); 1312 1334 } 1313 1335 1314 1336 static void pcibios_scan_phb(struct pci_controller *hose)
-4
arch/mips/Kconfig
··· 2871 2871 2872 2872 source "drivers/pci/Kconfig" 2873 2873 2874 - source "drivers/pci/pcie/Kconfig" 2875 - 2876 2874 # 2877 2875 # ISA support is now enabled via select. Too many systems still have the one 2878 2876 # or other ISA chip on the board that users don't know about so don't expect ··· 2929 2931 bool 2930 2932 2931 2933 source "drivers/pcmcia/Kconfig" 2932 - 2933 - source "drivers/pci/hotplug/Kconfig" 2934 2934 2935 2935 config RAPIDIO 2936 2936 tristate "RapidIO support"
-4
arch/mips/include/asm/pci.h
··· 102 102 #include <linux/scatterlist.h> 103 103 #include <linux/string.h> 104 104 #include <asm/io.h> 105 - #include <asm-generic/pci-bridge.h> 106 105 107 106 struct pci_dev; 108 107 ··· 123 124 #endif /* CONFIG_PCI_DOMAINS */ 124 125 125 126 #endif /* __KERNEL__ */ 126 - 127 - /* implement the pci_ DMA API in terms of the generic device dma_ one */ 128 - #include <asm-generic/pci-dma-compat.h> 129 127 130 128 /* Do platform specific device initialization at pci_enable_device() time */ 131 129 extern int pcibios_plat_dev_init(struct pci_dev *dev);
+12 -7
arch/mips/pci/fixup-loongson3.c
··· 40 40 41 41 static void pci_fixup_radeon(struct pci_dev *pdev) 42 42 { 43 - if (pdev->resource[PCI_ROM_RESOURCE].start) 43 + struct resource *res = &pdev->resource[PCI_ROM_RESOURCE]; 44 + 45 + if (res->start) 44 46 return; 45 47 46 48 if (!loongson_sysconf.vgabios_addr) 47 49 return; 48 50 49 - pdev->resource[PCI_ROM_RESOURCE].start = 50 - loongson_sysconf.vgabios_addr; 51 - pdev->resource[PCI_ROM_RESOURCE].end = 52 - loongson_sysconf.vgabios_addr + 256*1024 - 1; 53 - pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_COPY; 51 + pci_disable_rom(pdev); 52 + if (res->parent) 53 + release_resource(res); 54 + 55 + res->start = virt_to_phys((void *) loongson_sysconf.vgabios_addr); 56 + res->end = res->start + 256*1024 - 1; 57 + res->flags = IORESOURCE_MEM | IORESOURCE_ROM_SHADOW | 58 + IORESOURCE_PCI_FIXED; 54 59 55 60 dev_info(&pdev->dev, "BAR %d: assigned %pR for Radeon ROM\n", 56 - PCI_ROM_RESOURCE, &pdev->resource[PCI_ROM_RESOURCE]); 61 + PCI_ROM_RESOURCE, res); 57 62 } 58 63 59 64 DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_ATI, PCI_ANY_ID,
-3
arch/mn10300/include/asm/pci.h
··· 80 80 81 81 #endif /* __KERNEL__ */ 82 82 83 - /* implement the pci_ DMA API in terms of the generic device dma_ one */ 84 - #include <asm-generic/pci-dma-compat.h> 85 - 86 83 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel) 87 84 { 88 85 return channel ? 15 : 14;
-3
arch/parisc/include/asm/pci.h
··· 194 194 #define PCIBIOS_MIN_IO 0x10 195 195 #define PCIBIOS_MIN_MEM 0x1000 /* NBPG - but pci/setup-res.c dies */ 196 196 197 - /* export the pci_ DMA API in terms of the dma_ one */ 198 - #include <asm-generic/pci-dma-compat.h> 199 - 200 197 static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel) 201 198 { 202 199 return channel ? 15 : 14;
-4
arch/powerpc/Kconfig
··· 828 828 select PPC_INDIRECT_PCI 829 829 default y 830 830 831 - source "drivers/pci/pcie/Kconfig" 832 - 833 831 source "drivers/pci/Kconfig" 834 832 835 833 source "drivers/pcmcia/Kconfig" 836 - 837 - source "drivers/pci/hotplug/Kconfig" 838 834 839 835 config HAS_RAPIDIO 840 836 bool
-1
arch/powerpc/include/asm/pci-bridge.h
··· 10 10 #include <linux/pci.h> 11 11 #include <linux/list.h> 12 12 #include <linux/ioport.h> 13 - #include <asm-generic/pci-bridge.h> 14 13 15 14 struct device_node; 16 15
-2
arch/powerpc/include/asm/pci.h
··· 20 20 #include <asm/prom.h> 21 21 #include <asm/pci-bridge.h> 22 22 23 - #include <asm-generic/pci-dma-compat.h> 24 - 25 23 /* Return values for pci_controller_ops.probe_mode function */ 26 24 #define PCI_PROBE_NONE -1 /* Don't look at this bus at all */ 27 25 #define PCI_PROBE_NORMAL 0 /* Do normal PCI probing */
-2
arch/s390/Kconfig
··· 605 605 PCI devices. 606 606 607 607 source "drivers/pci/Kconfig" 608 - source "drivers/pci/pcie/Kconfig" 609 - source "drivers/pci/hotplug/Kconfig" 610 608 611 609 endif # PCI 612 610
-1
arch/s390/include/asm/pci.h
··· 9 9 #include <linux/pci.h> 10 10 #include <linux/mutex.h> 11 11 #include <asm-generic/pci.h> 12 - #include <asm-generic/pci-dma-compat.h> 13 12 #include <asm/pci_clp.h> 14 13 #include <asm/pci_debug.h> 15 14
-4
arch/sh/Kconfig
··· 847 847 config PCI_DOMAINS 848 848 bool 849 849 850 - source "drivers/pci/pcie/Kconfig" 851 - 852 850 source "drivers/pci/Kconfig" 853 851 854 852 source "drivers/pcmcia/Kconfig" 855 - 856 - source "drivers/pci/hotplug/Kconfig" 857 853 858 854 endmenu 859 855
-3
arch/sh/include/asm/pci.h
··· 105 105 return channel ? 15 : 14; 106 106 } 107 107 108 - /* generic DMA-mapping stuff */ 109 - #include <asm-generic/pci-dma-compat.h> 110 - 111 108 #endif /* __KERNEL__ */ 112 109 #endif /* __ASM_SH_PCI_H */ 113 110
-3
arch/sparc/include/asm/pci.h
··· 5 5 #else 6 6 #include <asm/pci_32.h> 7 7 #endif 8 - 9 - #include <asm-generic/pci-dma-compat.h> 10 - 11 8 #endif
-4
arch/tile/Kconfig
··· 455 455 456 456 source "drivers/pci/Kconfig" 457 457 458 - source "drivers/pci/pcie/Kconfig" 459 - 460 458 config TILE_USB 461 459 tristate "Tilera USB host adapter support" 462 460 default y ··· 464 466 ---help--- 465 467 Provides USB host adapter support for the built-in EHCI and OHCI 466 468 interfaces on TILE-Gx chips. 467 - 468 - source "drivers/pci/hotplug/Kconfig" 469 469 470 470 endmenu 471 471
-3
arch/tile/include/asm/pci.h
··· 226 226 /* Use any cpu for PCI. */ 227 227 #define cpumask_of_pcibus(bus) cpu_online_mask 228 228 229 - /* implement the pci_ DMA API in terms of the generic device dma_ one */ 230 - #include <asm-generic/pci-dma-compat.h> 231 - 232 229 #endif /* _ASM_TILE_PCI_H */
-3
arch/unicore32/include/asm/pci.h
··· 13 13 #define __UNICORE_PCI_H__ 14 14 15 15 #ifdef __KERNEL__ 16 - #include <asm-generic/pci-dma-compat.h> 17 - #include <asm-generic/pci-bridge.h> 18 16 #include <asm-generic/pci.h> 19 17 #include <mach/hardware.h> /* for PCIBIOS_MIN_* */ 20 18 ··· 21 23 enum pci_mmap_state mmap_state, int write_combine); 22 24 23 25 #endif /* __KERNEL__ */ 24 - 25 26 #endif
-5
arch/unicore32/include/mach/hardware.h
··· 28 28 #define PCIBIOS_MIN_IO 0x4000 /* should lower than 64KB */ 29 29 #define PCIBIOS_MIN_MEM io_v2p(PKUNITY_PCIMEM_BASE) 30 30 31 - /* 32 - * We override the standard dma-mask routines for bouncing. 33 - */ 34 - #define HAVE_ARCH_PCI_SET_DMA_MASK 35 - 36 31 #define pcibios_assign_all_busses() 1 37 32 38 33 #endif /* __MACH_PUV3_HARDWARE_H__ */
-4
arch/x86/Kconfig
··· 2435 2435 2436 2436 You should say N unless you know you need this. 2437 2437 2438 - source "drivers/pci/pcie/Kconfig" 2439 - 2440 2438 source "drivers/pci/Kconfig" 2441 2439 2442 2440 # x86_64 have no ISA slots, but can have ISA-style DMA. ··· 2589 2591 depends on CPU_SUP_AMD && PCI 2590 2592 2591 2593 source "drivers/pcmcia/Kconfig" 2592 - 2593 - source "drivers/pci/hotplug/Kconfig" 2594 2594 2595 2595 config RAPIDIO 2596 2596 tristate "RapidIO support"
+15 -3
arch/x86/include/asm/pci.h
··· 20 20 #ifdef CONFIG_X86_64 21 21 void *iommu; /* IOMMU private data */ 22 22 #endif 23 + #ifdef CONFIG_PCI_MSI_IRQ_DOMAIN 24 + void *fwnode; /* IRQ domain for MSI assignment */ 25 + #endif 23 26 }; 24 27 25 28 extern int pci_routeirq; ··· 35 32 static inline int pci_domain_nr(struct pci_bus *bus) 36 33 { 37 34 struct pci_sysdata *sd = bus->sysdata; 35 + 38 36 return sd->domain; 39 37 } 40 38 ··· 43 39 { 44 40 return pci_domain_nr(bus); 45 41 } 42 + #endif 43 + 44 + #ifdef CONFIG_PCI_MSI_IRQ_DOMAIN 45 + static inline void *_pci_root_bus_fwnode(struct pci_bus *bus) 46 + { 47 + struct pci_sysdata *sd = bus->sysdata; 48 + 49 + return sd->fwnode; 50 + } 51 + 52 + #define pci_root_bus_fwnode _pci_root_bus_fwnode 46 53 #endif 47 54 48 55 /* Can be used to override the logic in pci_scan_bus for skipping ··· 119 104 #ifdef CONFIG_X86_64 120 105 #include <asm/pci_64.h> 121 106 #endif 122 - 123 - /* implement the pci_ DMA API in terms of the generic device dma_ one */ 124 - #include <asm-generic/pci-dma-compat.h> 125 107 126 108 /* generic pci stuff */ 127 109 #include <asm-generic/pci.h>
-1
arch/x86/pci/common.c
··· 12 12 #include <linux/dmi.h> 13 13 #include <linux/slab.h> 14 14 15 - #include <asm-generic/pci-bridge.h> 16 15 #include <asm/acpi.h> 17 16 #include <asm/segment.h> 18 17 #include <asm/io.h>
+23 -5
arch/x86/pci/fixup.c
··· 297 297 * 298 298 * The standard boot ROM sequence for an x86 machine uses the BIOS 299 299 * to select an initial video card for boot display. This boot video 300 - * card will have it's BIOS copied to C0000 in system RAM. 300 + * card will have its BIOS copied to 0xC0000 in system RAM. 301 301 * IORESOURCE_ROM_SHADOW is used to associate the boot video 302 302 * card with this copy. On laptops this copy has to be used since 303 303 * the main ROM may be compressed or combined with another image. 304 304 * See pci_map_rom() for use of this flag. Before marking the device 305 305 * with IORESOURCE_ROM_SHADOW check if a vga_default_device is already set 306 - * by either arch cde or vga-arbitration, if so only apply the fixup to this 307 - * already determined primary video card. 306 + * by either arch code or vga-arbitration; if so only apply the fixup to this 307 + * already-determined primary video card. 308 308 */ 309 309 310 310 static void pci_fixup_video(struct pci_dev *pdev) ··· 312 312 struct pci_dev *bridge; 313 313 struct pci_bus *bus; 314 314 u16 config; 315 + struct resource *res; 315 316 316 317 /* Is VGA routed to us? 
*/ 317 318 bus = pdev->bus; ··· 337 336 if (!vga_default_device() || pdev == vga_default_device()) { 338 337 pci_read_config_word(pdev, PCI_COMMAND, &config); 339 338 if (config & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) { 340 - pdev->resource[PCI_ROM_RESOURCE].flags |= IORESOURCE_ROM_SHADOW; 341 - dev_printk(KERN_DEBUG, &pdev->dev, "Video device with shadowed ROM\n"); 339 + res = &pdev->resource[PCI_ROM_RESOURCE]; 340 + 341 + pci_disable_rom(pdev); 342 + if (res->parent) 343 + release_resource(res); 344 + 345 + res->start = 0xC0000; 346 + res->end = res->start + 0x20000 - 1; 347 + res->flags = IORESOURCE_MEM | IORESOURCE_ROM_SHADOW | 348 + IORESOURCE_PCI_FIXED; 349 + dev_info(&pdev->dev, "Video device with shadowed ROM at %pR\n", 350 + res); 342 351 } 343 352 } 344 353 } ··· 551 540 } 552 541 } 553 542 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x27B9, twinhead_reserve_killing_zone); 543 + 544 + static void pci_bdwep_bar(struct pci_dev *dev) 545 + { 546 + dev->non_compliant_bars = 1; 547 + } 548 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fa0, pci_bdwep_bar); 549 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fc0, pci_bdwep_bar);
+34 -1
arch/x86/pci/vmd.c
··· 503 503 .write = vmd_pci_write, 504 504 }; 505 505 506 + static void vmd_attach_resources(struct vmd_dev *vmd) 507 + { 508 + vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[1]; 509 + vmd->dev->resource[VMD_MEMBAR2].child = &vmd->resources[2]; 510 + } 511 + 512 + static void vmd_detach_resources(struct vmd_dev *vmd) 513 + { 514 + vmd->dev->resource[VMD_MEMBAR1].child = NULL; 515 + vmd->dev->resource[VMD_MEMBAR2].child = NULL; 516 + } 517 + 506 518 /* 507 519 * VMD domains start at 0x1000 to not clash with ACPI _SEG domains. 508 520 */ ··· 539 527 res = &vmd->dev->resource[VMD_CFGBAR]; 540 528 vmd->resources[0] = (struct resource) { 541 529 .name = "VMD CFGBAR", 542 - .start = res->start, 530 + .start = 0, 543 531 .end = (resource_size(res) >> 20) - 1, 544 532 .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED, 545 533 }; 546 534 535 + /* 536 + * If the window is below 4GB, clear IORESOURCE_MEM_64 so we can 537 + * put 32-bit resources in the window. 538 + * 539 + * There's no hardware reason why a 64-bit window *couldn't* 540 + * contain a 32-bit resource, but pbus_size_mem() computes the 541 + * bridge window size assuming a 64-bit window will contain no 542 + * 32-bit resources. __pci_assign_resource() enforces that 543 + * artificial restriction to make sure everything will fit. 544 + * 545 + * The only way we could use a 64-bit non-prefetchable MEMBAR is 546 + * if its address is <4GB so that we can convert it to a 32-bit 547 + * resource. To be visible to the host OS, all VMD endpoints must 548 + * be initially configured by platform BIOS, which includes setting 549 + * up these resources. We can assume the device is configured 550 + * according to the platform needs.
551 + */ 547 552 res = &vmd->dev->resource[VMD_MEMBAR1]; 548 553 upper_bits = upper_32_bits(res->end); 549 554 flags = res->flags & ~IORESOURCE_SIZEALIGN; ··· 571 542 .start = res->start, 572 543 .end = res->end, 573 544 .flags = flags, 545 + .parent = res, 574 546 }; 575 547 576 548 res = &vmd->dev->resource[VMD_MEMBAR2]; ··· 584 554 .start = res->start + 0x2000, 585 555 .end = res->end, 586 556 .flags = flags, 557 + .parent = res, 587 558 }; 588 559 589 560 sd->domain = vmd_find_free_domain(); ··· 609 578 return -ENODEV; 610 579 } 611 580 581 + vmd_attach_resources(vmd); 612 582 vmd_setup_dma_ops(vmd); 613 583 dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain); 614 584 pci_rescan_bus(vmd->bus); ··· 706 674 { 707 675 struct vmd_dev *vmd = pci_get_drvdata(dev); 708 676 677 + vmd_detach_resources(vmd); 709 678 pci_set_drvdata(dev, NULL); 710 679 sysfs_remove_link(&vmd->dev->dev.kobj, "domain"); 711 680 pci_stop_root_bus(vmd->bus);
-2
arch/xtensa/Kconfig
··· 413 413 414 414 source "drivers/pcmcia/Kconfig" 415 415 416 - source "drivers/pci/hotplug/Kconfig" 417 - 418 416 config PLATFORM_WANT_DEFAULT_MEM 419 417 def_bool n 420 418
-3
arch/xtensa/include/asm/pci.h
··· 55 55 56 56 #endif /* __KERNEL__ */ 57 57 58 - /* Implement the pci_ DMA API in terms of the generic device dma_ one */ 59 - #include <asm-generic/pci-dma-compat.h> 60 - 61 58 /* Generic PCI */ 62 59 #include <asm-generic/pci.h> 63 60
+1 -1
drivers/ata/pata_macio.c
··· 22 22 #include <linux/scatterlist.h> 23 23 #include <linux/of.h> 24 24 #include <linux/gfp.h> 25 + #include <linux/pci.h> 25 26 26 27 #include <scsi/scsi.h> 27 28 #include <scsi/scsi_host.h> ··· 31 30 #include <asm/macio.h> 32 31 #include <asm/io.h> 33 32 #include <asm/dbdma.h> 34 - #include <asm/pci-bridge.h> 35 33 #include <asm/machdep.h> 36 34 #include <asm/pmac_feature.h> 37 35 #include <asm/mediabay.h>
-1
drivers/char/agp/uninorth-agp.c
··· 10 10 #include <linux/delay.h> 11 11 #include <linux/vmalloc.h> 12 12 #include <asm/uninorth.h> 13 - #include <asm/pci-bridge.h> 14 13 #include <asm/prom.h> 15 14 #include <asm/pmac_feature.h> 16 15 #include "agp.h"
+2 -2
drivers/gpu/drm/bochs/bochs_drv.c
··· 182 182 { 183 183 .vendor = 0x1234, 184 184 .device = 0x1111, 185 - .subvendor = 0x1af4, 186 - .subdevice = 0x1100, 185 + .subvendor = PCI_SUBVENDOR_ID_REDHAT_QUMRANET, 186 + .subdevice = PCI_SUBDEVICE_ID_QEMU, 187 187 .driver_data = BOCHS_QEMU_STDVGA, 188 188 }, 189 189 {
+3 -2
drivers/gpu/drm/cirrus/cirrus_drv.c
··· 33 33 34 34 /* only bind to the cirrus chip in qemu */ 35 35 static const struct pci_device_id pciidlist[] = { 36 - { PCI_VENDOR_ID_CIRRUS, PCI_DEVICE_ID_CIRRUS_5446, 0x1af4, 0x1100, 0, 37 - 0, 0 }, 36 + { PCI_VENDOR_ID_CIRRUS, PCI_DEVICE_ID_CIRRUS_5446, 37 + PCI_SUBVENDOR_ID_REDHAT_QUMRANET, PCI_SUBDEVICE_ID_QEMU, 38 + 0, 0, 0 }, 38 39 { PCI_VENDOR_ID_CIRRUS, PCI_DEVICE_ID_CIRRUS_5446, PCI_VENDOR_ID_XEN, 39 40 0x0001, 0, 0, 0 }, 40 41 {0,}
-1
drivers/gpu/drm/radeon/radeon_combios.c
··· 34 34 #include <asm/machdep.h> 35 35 #include <asm/pmac_feature.h> 36 36 #include <asm/prom.h> 37 - #include <asm/pci-bridge.h> 38 37 #endif /* CONFIG_PPC_PMAC */ 39 38 40 39 /* from radeon_legacy_encoder.c */
-1
drivers/ide/pdc202xx_new.c
··· 28 28 29 29 #ifdef CONFIG_PPC_PMAC 30 30 #include <asm/prom.h> 31 - #include <asm/pci-bridge.h> 32 31 #endif 33 32 34 33 #define DRV_NAME "pdc202xx_new"
-1
drivers/ide/pmac.c
··· 40 40 #include <asm/io.h> 41 41 #include <asm/dbdma.h> 42 42 #include <asm/ide.h> 43 - #include <asm/pci-bridge.h> 44 43 #include <asm/machdep.h> 45 44 #include <asm/pmac_feature.h> 46 45 #include <asm/sections.h>
-1
drivers/macintosh/macio_asic.c
··· 31 31 #include <asm/macio.h> 32 32 #include <asm/pmac_feature.h> 33 33 #include <asm/prom.h> 34 - #include <asm/pci-bridge.h> 35 34 36 35 #undef DEBUG 37 36
-1
drivers/misc/cxl/pci.c
··· 19 19 #include <linux/delay.h> 20 20 #include <asm/opal.h> 21 21 #include <asm/msi_bitmap.h> 22 - #include <asm/pci-bridge.h> /* for struct pci_controller */ 23 22 #include <asm/pnv-pci.h> 24 23 #include <asm/io.h> 25 24
-1
drivers/net/ethernet/sun/sungem.c
··· 51 51 #endif 52 52 53 53 #ifdef CONFIG_PPC_PMAC 54 - #include <asm/pci-bridge.h> 55 54 #include <asm/prom.h> 56 55 #include <asm/machdep.h> 57 56 #include <asm/pmac_feature.h>
-1
drivers/net/ethernet/toshiba/spider_net.c
··· 48 48 #include <linux/wait.h> 49 49 #include <linux/workqueue.h> 50 50 #include <linux/bitops.h> 51 - #include <asm/pci-bridge.h> 52 51 #include <net/checksum.h> 53 52 54 53 #include "spider_net.h"
-1
drivers/of/of_pci.c
··· 5 5 #include <linux/of_device.h> 6 6 #include <linux/of_pci.h> 7 7 #include <linux/slab.h> 8 - #include <asm-generic/pci-bridge.h> 9 8 10 9 static inline int __of_pci_pci_compare(struct device_node *node, 11 10 unsigned int data)
-2
drivers/parisc/Kconfig
··· 110 110 111 111 source "drivers/pcmcia/Kconfig" 112 112 113 - source "drivers/pci/hotplug/Kconfig" 114 - 115 113 endmenu 116 114 117 115 menu "PA-RISC specific drivers"
+10
drivers/pci/Kconfig
··· 1 1 # 2 2 # PCI configuration 3 3 # 4 + 5 + source "drivers/pci/pcie/Kconfig" 6 + 4 7 config PCI_BUS_ADDR_T_64BIT 5 8 def_bool y if (ARCH_DMA_ADDR_T_64BIT || 64BIT) 6 9 depends on PCI ··· 120 117 config PCI_LABEL 121 118 def_bool y if (DMI || ACPI) 122 119 select NLS 120 + 121 + config PCI_HYPERV 122 + tristate "Hyper-V PCI Frontend" 123 + depends on PCI && X86 && HYPERV && PCI_MSI && PCI_MSI_IRQ_DOMAIN && X86_64 124 + help 125 + The PCI device frontend driver allows the kernel to import arbitrary 126 + PCI devices from a PCI backend to support PCI driver domains. 123 127 124 128 source "drivers/pci/host/Kconfig"
+1
drivers/pci/Makefile
··· 32 32 # Some architectures use the generic PCI setup functions 33 33 # 34 34 obj-$(CONFIG_ALPHA) += setup-irq.o 35 + obj-$(CONFIG_ARC) += setup-irq.o 35 36 obj-$(CONFIG_ARM) += setup-irq.o 36 37 obj-$(CONFIG_ARM64) += setup-irq.o 37 38 obj-$(CONFIG_UNICORE32) += setup-irq.o
+154 -85
drivers/pci/access.c
··· 174 174 } 175 175 EXPORT_SYMBOL(pci_bus_set_ops); 176 176 177 - /** 178 - * pci_read_vpd - Read one entry from Vital Product Data 179 - * @dev: pci device struct 180 - * @pos: offset in vpd space 181 - * @count: number of bytes to read 182 - * @buf: pointer to where to store result 183 - * 184 - */ 185 - ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf) 186 - { 187 - if (!dev->vpd || !dev->vpd->ops) 188 - return -ENODEV; 189 - return dev->vpd->ops->read(dev, pos, count, buf); 190 - } 191 - EXPORT_SYMBOL(pci_read_vpd); 192 - 193 - /** 194 - * pci_write_vpd - Write entry to Vital Product Data 195 - * @dev: pci device struct 196 - * @pos: offset in vpd space 197 - * @count: number of bytes to write 198 - * @buf: buffer containing write data 199 - * 200 - */ 201 - ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf) 202 - { 203 - if (!dev->vpd || !dev->vpd->ops) 204 - return -ENODEV; 205 - return dev->vpd->ops->write(dev, pos, count, buf); 206 - } 207 - EXPORT_SYMBOL(pci_write_vpd); 208 - 209 177 /* 210 178 * The following routines are to prevent the user from accessing PCI config 211 179 * space when it's unsafe to do so. 
Some devices require this during BIST and ··· 245 277 246 278 /* VPD access through PCI 2.2+ VPD capability */ 247 279 248 - #define PCI_VPD_PCI22_SIZE (PCI_VPD_ADDR_MASK + 1) 280 + /** 281 + * pci_read_vpd - Read one entry from Vital Product Data 282 + * @dev: pci device struct 283 + * @pos: offset in vpd space 284 + * @count: number of bytes to read 285 + * @buf: pointer to where to store result 286 + */ 287 + ssize_t pci_read_vpd(struct pci_dev *dev, loff_t pos, size_t count, void *buf) 288 + { 289 + if (!dev->vpd || !dev->vpd->ops) 290 + return -ENODEV; 291 + return dev->vpd->ops->read(dev, pos, count, buf); 292 + } 293 + EXPORT_SYMBOL(pci_read_vpd); 249 294 250 - struct pci_vpd_pci22 { 251 - struct pci_vpd base; 252 - struct mutex lock; 253 - u16 flag; 254 - bool busy; 255 - u8 cap; 256 - }; 295 + /** 296 + * pci_write_vpd - Write entry to Vital Product Data 297 + * @dev: pci device struct 298 + * @pos: offset in vpd space 299 + * @count: number of bytes to write 300 + * @buf: buffer containing write data 301 + */ 302 + ssize_t pci_write_vpd(struct pci_dev *dev, loff_t pos, size_t count, const void *buf) 303 + { 304 + if (!dev->vpd || !dev->vpd->ops) 305 + return -ENODEV; 306 + return dev->vpd->ops->write(dev, pos, count, buf); 307 + } 308 + EXPORT_SYMBOL(pci_write_vpd); 309 + 310 + #define PCI_VPD_MAX_SIZE (PCI_VPD_ADDR_MASK + 1) 311 + 312 + /** 313 + * pci_vpd_size - determine actual size of Vital Product Data 314 + * @dev: pci device struct 315 + * @old_size: current assumed size, also maximum allowed size 316 + */ 317 + static size_t pci_vpd_size(struct pci_dev *dev, size_t old_size) 318 + { 319 + size_t off = 0; 320 + unsigned char header[1+2]; /* 1 byte tag, 2 bytes length */ 321 + 322 + while (off < old_size && 323 + pci_read_vpd(dev, off, 1, header) == 1) { 324 + unsigned char tag; 325 + 326 + if (header[0] & PCI_VPD_LRDT) { 327 + /* Large Resource Data Type Tag */ 328 + tag = pci_vpd_lrdt_tag(header); 329 + /* Only read length from known tag items */ 
330 + if ((tag == PCI_VPD_LTIN_ID_STRING) || 331 + (tag == PCI_VPD_LTIN_RO_DATA) || 332 + (tag == PCI_VPD_LTIN_RW_DATA)) { 333 + if (pci_read_vpd(dev, off+1, 2, 334 + &header[1]) != 2) { 335 + dev_warn(&dev->dev, 336 + "invalid large VPD tag %02x size at offset %zu", 337 + tag, off + 1); 338 + return 0; 339 + } 340 + off += PCI_VPD_LRDT_TAG_SIZE + 341 + pci_vpd_lrdt_size(header); 342 + } 343 + } else { 344 + /* Short Resource Data Type Tag */ 345 + off += PCI_VPD_SRDT_TAG_SIZE + 346 + pci_vpd_srdt_size(header); 347 + tag = pci_vpd_srdt_tag(header); 348 + } 349 + 350 + if (tag == PCI_VPD_STIN_END) /* End tag descriptor */ 351 + return off; 352 + 353 + if ((tag != PCI_VPD_LTIN_ID_STRING) && 354 + (tag != PCI_VPD_LTIN_RO_DATA) && 355 + (tag != PCI_VPD_LTIN_RW_DATA)) { 356 + dev_warn(&dev->dev, 357 + "invalid %s VPD tag %02x at offset %zu", 358 + (header[0] & PCI_VPD_LRDT) ? "large" : "short", 359 + tag, off); 360 + return 0; 361 + } 362 + } 363 + return 0; 364 + } 257 365 258 366 /* 259 367 * Wait for last operation to complete. ··· 339 295 * 340 296 * Returns 0 on success, negative values indicate error. 
341 297 */ 342 - static int pci_vpd_pci22_wait(struct pci_dev *dev) 298 + static int pci_vpd_wait(struct pci_dev *dev) 343 299 { 344 - struct pci_vpd_pci22 *vpd = 345 - container_of(dev->vpd, struct pci_vpd_pci22, base); 346 - unsigned long timeout = jiffies + HZ/20 + 2; 300 + struct pci_vpd *vpd = dev->vpd; 301 + unsigned long timeout = jiffies + msecs_to_jiffies(50); 302 + unsigned long max_sleep = 16; 347 303 u16 status; 348 304 int ret; 349 305 350 306 if (!vpd->busy) 351 307 return 0; 352 308 353 - for (;;) { 309 + while (time_before(jiffies, timeout)) { 354 310 ret = pci_user_read_config_word(dev, vpd->cap + PCI_VPD_ADDR, 355 311 &status); 356 312 if (ret < 0) 357 313 return ret; 358 314 359 315 if ((status & PCI_VPD_ADDR_F) == vpd->flag) { 360 - vpd->busy = false; 316 + vpd->busy = 0; 361 317 return 0; 362 318 } 363 319 364 - if (time_after(jiffies, timeout)) { 365 - dev_printk(KERN_DEBUG, &dev->dev, "vpd r/w failed. This is likely a firmware bug on this device. Contact the card vendor for a firmware update\n"); 366 - return -ETIMEDOUT; 367 - } 368 320 if (fatal_signal_pending(current)) 369 321 return -EINTR; 370 - if (!cond_resched()) 371 - udelay(10); 322 + 323 + usleep_range(10, max_sleep); 324 + if (max_sleep < 1024) 325 + max_sleep *= 2; 372 326 } 327 + 328 + dev_warn(&dev->dev, "VPD access failed. This is likely a firmware bug on this device. 
Contact the card vendor for a firmware update\n"); 329 + return -ETIMEDOUT; 373 330 } 374 331 375 - static ssize_t pci_vpd_pci22_read(struct pci_dev *dev, loff_t pos, size_t count, 376 - void *arg) 332 + static ssize_t pci_vpd_read(struct pci_dev *dev, loff_t pos, size_t count, 333 + void *arg) 377 334 { 378 - struct pci_vpd_pci22 *vpd = 379 - container_of(dev->vpd, struct pci_vpd_pci22, base); 335 + struct pci_vpd *vpd = dev->vpd; 380 336 int ret; 381 337 loff_t end = pos + count; 382 338 u8 *buf = arg; 383 339 384 - if (pos < 0 || pos > vpd->base.len || end > vpd->base.len) 340 + if (pos < 0) 385 341 return -EINVAL; 342 + 343 + if (!vpd->valid) { 344 + vpd->valid = 1; 345 + vpd->len = pci_vpd_size(dev, vpd->len); 346 + } 347 + 348 + if (vpd->len == 0) 349 + return -EIO; 350 + 351 + if (pos > vpd->len) 352 + return 0; 353 + 354 + if (end > vpd->len) { 355 + end = vpd->len; 356 + count = end - pos; 357 + } 386 358 387 359 if (mutex_lock_killable(&vpd->lock)) 388 360 return -EINTR; 389 361 390 - ret = pci_vpd_pci22_wait(dev); 362 + ret = pci_vpd_wait(dev); 391 363 if (ret < 0) 392 364 goto out; 393 365 ··· 415 355 pos & ~3); 416 356 if (ret < 0) 417 357 break; 418 - vpd->busy = true; 358 + vpd->busy = 1; 419 359 vpd->flag = PCI_VPD_ADDR_F; 420 - ret = pci_vpd_pci22_wait(dev); 360 + ret = pci_vpd_wait(dev); 421 361 if (ret < 0) 422 362 break; 423 363 ··· 440 380 return ret ? 
ret : count; 441 381 } 442 382 443 - static ssize_t pci_vpd_pci22_write(struct pci_dev *dev, loff_t pos, size_t count, 444 - const void *arg) 383 + static ssize_t pci_vpd_write(struct pci_dev *dev, loff_t pos, size_t count, 384 + const void *arg) 445 385 { 446 - struct pci_vpd_pci22 *vpd = 447 - container_of(dev->vpd, struct pci_vpd_pci22, base); 386 + struct pci_vpd *vpd = dev->vpd; 448 387 const u8 *buf = arg; 449 388 loff_t end = pos + count; 450 389 int ret = 0; 451 390 452 - if (pos < 0 || (pos & 3) || (count & 3) || end > vpd->base.len) 391 + if (pos < 0 || (pos & 3) || (count & 3)) 392 + return -EINVAL; 393 + 394 + if (!vpd->valid) { 395 + vpd->valid = 1; 396 + vpd->len = pci_vpd_size(dev, vpd->len); 397 + } 398 + 399 + if (vpd->len == 0) 400 + return -EIO; 401 + 402 + if (end > vpd->len) 453 403 return -EINVAL; 454 404 455 405 if (mutex_lock_killable(&vpd->lock)) 456 406 return -EINTR; 457 407 458 - ret = pci_vpd_pci22_wait(dev); 408 + ret = pci_vpd_wait(dev); 459 409 if (ret < 0) 460 410 goto out; 461 411 ··· 485 415 if (ret < 0) 486 416 break; 487 417 488 - vpd->busy = true; 418 + vpd->busy = 1; 489 419 vpd->flag = 0; 490 - ret = pci_vpd_pci22_wait(dev); 420 + ret = pci_vpd_wait(dev); 491 421 if (ret < 0) 492 422 break; 493 423 ··· 498 428 return ret ? 
ret : count; 499 429 } 500 430 501 - static void pci_vpd_pci22_release(struct pci_dev *dev) 502 - { 503 - kfree(container_of(dev->vpd, struct pci_vpd_pci22, base)); 504 - } 505 - 506 - static const struct pci_vpd_ops pci_vpd_pci22_ops = { 507 - .read = pci_vpd_pci22_read, 508 - .write = pci_vpd_pci22_write, 509 - .release = pci_vpd_pci22_release, 431 + static const struct pci_vpd_ops pci_vpd_ops = { 432 + .read = pci_vpd_read, 433 + .write = pci_vpd_write, 510 434 }; 511 435 512 436 static ssize_t pci_vpd_f0_read(struct pci_dev *dev, loff_t pos, size_t count, ··· 536 472 static const struct pci_vpd_ops pci_vpd_f0_ops = { 537 473 .read = pci_vpd_f0_read, 538 474 .write = pci_vpd_f0_write, 539 - .release = pci_vpd_pci22_release, 540 475 }; 541 476 542 - int pci_vpd_pci22_init(struct pci_dev *dev) 477 + int pci_vpd_init(struct pci_dev *dev) 543 478 { 544 - struct pci_vpd_pci22 *vpd; 479 + struct pci_vpd *vpd; 545 480 u8 cap; 546 481 547 482 cap = pci_find_capability(dev, PCI_CAP_ID_VPD); ··· 551 488 if (!vpd) 552 489 return -ENOMEM; 553 490 554 - vpd->base.len = PCI_VPD_PCI22_SIZE; 491 + vpd->len = PCI_VPD_MAX_SIZE; 555 492 if (dev->dev_flags & PCI_DEV_FLAGS_VPD_REF_F0) 556 - vpd->base.ops = &pci_vpd_f0_ops; 493 + vpd->ops = &pci_vpd_f0_ops; 557 494 else 558 - vpd->base.ops = &pci_vpd_pci22_ops; 495 + vpd->ops = &pci_vpd_ops; 559 496 mutex_init(&vpd->lock); 560 497 vpd->cap = cap; 561 - vpd->busy = false; 562 - dev->vpd = &vpd->base; 498 + vpd->busy = 0; 499 + vpd->valid = 0; 500 + dev->vpd = vpd; 563 501 return 0; 502 + } 503 + 504 + void pci_vpd_release(struct pci_dev *dev) 505 + { 506 + kfree(dev->vpd); 564 507 } 565 508 566 509 /**
+6 -1
drivers/pci/bus.c
··· 291 291 292 292 dev->match_driver = true; 293 293 retval = device_attach(&dev->dev); 294 - WARN_ON(retval < 0); 294 + if (retval < 0) { 295 + dev_warn(&dev->dev, "device attach failed (%d)\n", retval); 296 + pci_proc_detach_device(dev); 297 + pci_remove_sysfs_dev_files(dev); 298 + return; 299 + } 295 300 296 301 dev->is_added = 1; 297 302 }
+43 -3
drivers/pci/host/Kconfig
··· 17 17 depends on ARM 18 18 depends on OF 19 19 20 + 21 + config PCIE_XILINX_NWL 22 + bool "NWL PCIe Core" 23 + depends on ARCH_ZYNQMP 24 + select PCI_MSI_IRQ_DOMAIN if PCI_MSI 25 + help 26 + Say 'Y' here if you want kernel support for Xilinx 27 + NWL PCIe controller. The controller can act as Root Port 28 + or End Point. The current option selection will only 29 + support root port enabling. 30 + 31 + config PCIE_DW_PLAT 32 + bool "Platform bus based DesignWare PCIe Controller" 33 + select PCIE_DW 34 + ---help--- 35 + This selects the DesignWare PCIe controller support. Select this if 36 + you have a PCIe controller on Platform bus. 37 + 38 + If you have a controller with this interface, say Y or M here. 39 + 40 + If unsure, say N. 41 + 20 42 config PCIE_DW 21 43 bool 22 44 ··· 64 42 config PCI_RCAR_GEN2 65 43 bool "Renesas R-Car Gen2 Internal PCI controller" 66 44 depends on ARM 67 - depends on ARCH_SHMOBILE || COMPILE_TEST 45 + depends on ARCH_RENESAS || COMPILE_TEST 68 46 help 69 47 Say Y here if you want internal PCI support on R-Car Gen2 SoC. 70 48 There are 3 internal PCI controllers available with a single ··· 72 50 73 51 config PCI_RCAR_GEN2_PCIE 74 52 bool "Renesas R-Car PCIe controller" 75 - depends on ARCH_SHMOBILE || (ARM && COMPILE_TEST) 53 + depends on ARCH_RENESAS || (ARM && COMPILE_TEST) 76 54 help 77 55 Say Y here if you want PCIe controller support on R-Car Gen2 SoCs. 56 + 57 + config PCI_HOST_COMMON 58 + bool 78 59 79 60 config PCI_HOST_GENERIC 80 61 bool "Generic PCI host controller" 81 62 depends on (ARM || ARM64) && OF 63 + select PCI_HOST_COMMON 82 64 help 83 65 Say Y here if you want to support a simple generic PCI host 84 66 controller, such as the one emulated by kvmtool. ··· 108 82 109 83 config PCIE_XILINX 110 84 bool "Xilinx AXI PCIe host bridge support" 111 - depends on ARCH_ZYNQ 85 + depends on ARCH_ZYNQ || MICROBLAZE 112 86 help 113 87 Say 'Y' here if you want kernel to support the Xilinx AXI PCIe 114 88 Host Bridge driver. 
··· 217 191 Say Y here to enable PCIe controller support on Qualcomm SoCs. The 218 192 PCIe controller uses the Designware core plus Qualcomm-specific 219 193 hardware wrappers. 194 + 195 + config PCI_HOST_THUNDER_PEM 196 + bool "Cavium Thunder PCIe controller to off-chip devices" 197 + depends on OF && ARM64 198 + select PCI_HOST_COMMON 199 + help 200 + Say Y here if you want PCIe support for CN88XX Cavium Thunder SoCs. 201 + 202 + config PCI_HOST_THUNDER_ECAM 203 + bool "Cavium Thunder ECAM controller to on-chip devices on pass-1.x silicon" 204 + depends on OF && ARM64 205 + select PCI_HOST_COMMON 206 + help 207 + Say Y here if you want ECAM support for CN88XX-Pass-1.x Cavium Thunder SoCs. 220 208 221 209 endmenu
+6
drivers/pci/host/Makefile
··· 1 1 obj-$(CONFIG_PCIE_DW) += pcie-designware.o 2 + obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o 2 3 obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o 3 4 obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o 4 5 obj-$(CONFIG_PCI_IMX6) += pci-imx6.o 6 + obj-$(CONFIG_PCI_HYPERV) += pci-hyperv.o 5 7 obj-$(CONFIG_PCI_MVEBU) += pci-mvebu.o 6 8 obj-$(CONFIG_PCI_TEGRA) += pci-tegra.o 7 9 obj-$(CONFIG_PCI_RCAR_GEN2) += pci-rcar-gen2.o 8 10 obj-$(CONFIG_PCI_RCAR_GEN2_PCIE) += pcie-rcar.o 11 + obj-$(CONFIG_PCI_HOST_COMMON) += pci-host-common.o 9 12 obj-$(CONFIG_PCI_HOST_GENERIC) += pci-host-generic.o 10 13 obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o 11 14 obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o 12 15 obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o 16 + obj-$(CONFIG_PCIE_XILINX_NWL) += pcie-xilinx-nwl.o 13 17 obj-$(CONFIG_PCI_XGENE) += pci-xgene.o 14 18 obj-$(CONFIG_PCI_XGENE_MSI) += pci-xgene-msi.o 15 19 obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o ··· 26 22 obj-$(CONFIG_PCIE_ALTERA_MSI) += pcie-altera-msi.o 27 23 obj-$(CONFIG_PCI_HISI) += pcie-hisi.o 28 24 obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o 25 + obj-$(CONFIG_PCI_HOST_THUNDER_ECAM) += pci-thunder-ecam.o 26 + obj-$(CONFIG_PCI_HOST_THUNDER_PEM) += pci-thunder-pem.o
+1 -10
drivers/pci/host/pci-dra7xx.c
··· 10 10 * published by the Free Software Foundation. 11 11 */ 12 12 13 - #include <linux/delay.h> 14 13 #include <linux/err.h> 15 14 #include <linux/interrupt.h> 16 15 #include <linux/irq.h> ··· 107 108 { 108 109 struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pp); 109 110 u32 reg; 110 - unsigned int retries; 111 111 112 112 if (dw_pcie_link_up(pp)) { 113 113 dev_err(pp->dev, "link is already up\n"); ··· 117 119 reg |= LTSSM_EN; 118 120 dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_DEVICE_CMD, reg); 119 121 120 - for (retries = 0; retries < 1000; retries++) { 121 - if (dw_pcie_link_up(pp)) 122 - return 0; 123 - usleep_range(10, 20); 124 - } 125 - 126 - dev_err(pp->dev, "link is not up\n"); 127 - return -EINVAL; 122 + return dw_pcie_wait_for_link(pp); 128 123 } 129 124 130 125 static void dra7xx_pcie_enable_interrupts(struct pcie_port *pp)
+3 -10
drivers/pci/host/pci-exynos.c
··· 318 318 { 319 319 struct exynos_pcie *exynos_pcie = to_exynos_pcie(pp); 320 320 u32 val; 321 - unsigned int retries; 322 321 323 322 if (dw_pcie_link_up(pp)) { 324 323 dev_err(pp->dev, "Link already up\n"); ··· 356 357 PCIE_APP_LTSSM_ENABLE); 357 358 358 359 /* check if the link is up or not */ 359 - for (retries = 0; retries < 10; retries++) { 360 - if (dw_pcie_link_up(pp)) { 361 - dev_info(pp->dev, "Link up\n"); 362 - return 0; 363 - } 364 - mdelay(100); 365 - } 360 + if (!dw_pcie_wait_for_link(pp)) 361 + return 0; 366 362 367 363 while (exynos_phy_readl(exynos_pcie, PCIE_PHY_PLL_LOCKED) == 0) { 368 364 val = exynos_blk_readl(exynos_pcie, PCIE_PHY_PLL_LOCKED); ··· 366 372 /* power off phy */ 367 373 exynos_pcie_power_off_phy(pp); 368 374 369 - dev_err(pp->dev, "PCIe Link Fail\n"); 370 - return -EINVAL; 375 + return -ETIMEDOUT; 371 376 } 372 377 373 378 static void exynos_pcie_clear_irq_pulse(struct pcie_port *pp)
+194
drivers/pci/host/pci-host-common.c
··· 1 + /* 2 + * This program is free software; you can redistribute it and/or modify 3 + * it under the terms of the GNU General Public License version 2 as 4 + * published by the Free Software Foundation. 5 + * 6 + * This program is distributed in the hope that it will be useful, 7 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 8 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 9 + * GNU General Public License for more details. 10 + * 11 + * You should have received a copy of the GNU General Public License 12 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 13 + * 14 + * Copyright (C) 2014 ARM Limited 15 + * 16 + * Author: Will Deacon <will.deacon@arm.com> 17 + */ 18 + 19 + #include <linux/kernel.h> 20 + #include <linux/module.h> 21 + #include <linux/of_address.h> 22 + #include <linux/of_pci.h> 23 + #include <linux/platform_device.h> 24 + 25 + #include "pci-host-common.h" 26 + 27 + static void gen_pci_release_of_pci_ranges(struct gen_pci *pci) 28 + { 29 + pci_free_resource_list(&pci->resources); 30 + } 31 + 32 + static int gen_pci_parse_request_of_pci_ranges(struct gen_pci *pci) 33 + { 34 + int err, res_valid = 0; 35 + struct device *dev = pci->host.dev.parent; 36 + struct device_node *np = dev->of_node; 37 + resource_size_t iobase; 38 + struct resource_entry *win; 39 + 40 + err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pci->resources, 41 + &iobase); 42 + if (err) 43 + return err; 44 + 45 + resource_list_for_each_entry(win, &pci->resources) { 46 + struct resource *parent, *res = win->res; 47 + 48 + switch (resource_type(res)) { 49 + case IORESOURCE_IO: 50 + parent = &ioport_resource; 51 + err = pci_remap_iospace(res, iobase); 52 + if (err) { 53 + dev_warn(dev, "error %d: failed to map resource %pR\n", 54 + err, res); 55 + continue; 56 + } 57 + break; 58 + case IORESOURCE_MEM: 59 + parent = &iomem_resource; 60 + res_valid |= !(res->flags & IORESOURCE_PREFETCH); 61 + break; 62 + case IORESOURCE_BUS: 
63 + pci->cfg.bus_range = res; 64 + default: 65 + continue; 66 + } 67 + 68 + err = devm_request_resource(dev, parent, res); 69 + if (err) 70 + goto out_release_res; 71 + } 72 + 73 + if (!res_valid) { 74 + dev_err(dev, "non-prefetchable memory resource required\n"); 75 + err = -EINVAL; 76 + goto out_release_res; 77 + } 78 + 79 + return 0; 80 + 81 + out_release_res: 82 + gen_pci_release_of_pci_ranges(pci); 83 + return err; 84 + } 85 + 86 + static int gen_pci_parse_map_cfg_windows(struct gen_pci *pci) 87 + { 88 + int err; 89 + u8 bus_max; 90 + resource_size_t busn; 91 + struct resource *bus_range; 92 + struct device *dev = pci->host.dev.parent; 93 + struct device_node *np = dev->of_node; 94 + u32 sz = 1 << pci->cfg.ops->bus_shift; 95 + 96 + err = of_address_to_resource(np, 0, &pci->cfg.res); 97 + if (err) { 98 + dev_err(dev, "missing \"reg\" property\n"); 99 + return err; 100 + } 101 + 102 + /* Limit the bus-range to fit within reg */ 103 + bus_max = pci->cfg.bus_range->start + 104 + (resource_size(&pci->cfg.res) >> pci->cfg.ops->bus_shift) - 1; 105 + pci->cfg.bus_range->end = min_t(resource_size_t, 106 + pci->cfg.bus_range->end, bus_max); 107 + 108 + pci->cfg.win = devm_kcalloc(dev, resource_size(pci->cfg.bus_range), 109 + sizeof(*pci->cfg.win), GFP_KERNEL); 110 + if (!pci->cfg.win) 111 + return -ENOMEM; 112 + 113 + /* Map our Configuration Space windows */ 114 + if (!devm_request_mem_region(dev, pci->cfg.res.start, 115 + resource_size(&pci->cfg.res), 116 + "Configuration Space")) 117 + return -ENOMEM; 118 + 119 + bus_range = pci->cfg.bus_range; 120 + for (busn = bus_range->start; busn <= bus_range->end; ++busn) { 121 + u32 idx = busn - bus_range->start; 122 + 123 + pci->cfg.win[idx] = devm_ioremap(dev, 124 + pci->cfg.res.start + idx * sz, 125 + sz); 126 + if (!pci->cfg.win[idx]) 127 + return -ENOMEM; 128 + } 129 + 130 + return 0; 131 + } 132 + 133 + int pci_host_common_probe(struct platform_device *pdev, 134 + struct gen_pci *pci) 135 + { 136 + int err; 137 + const 
char *type; 138 + struct device *dev = &pdev->dev; 139 + struct device_node *np = dev->of_node; 140 + struct pci_bus *bus, *child; 141 + 142 + type = of_get_property(np, "device_type", NULL); 143 + if (!type || strcmp(type, "pci")) { 144 + dev_err(dev, "invalid \"device_type\" %s\n", type); 145 + return -EINVAL; 146 + } 147 + 148 + of_pci_check_probe_only(); 149 + 150 + pci->host.dev.parent = dev; 151 + INIT_LIST_HEAD(&pci->host.windows); 152 + INIT_LIST_HEAD(&pci->resources); 153 + 154 + /* Parse our PCI ranges and request their resources */ 155 + err = gen_pci_parse_request_of_pci_ranges(pci); 156 + if (err) 157 + return err; 158 + 159 + /* Parse and map our Configuration Space windows */ 160 + err = gen_pci_parse_map_cfg_windows(pci); 161 + if (err) { 162 + gen_pci_release_of_pci_ranges(pci); 163 + return err; 164 + } 165 + 166 + /* Do not reassign resources if probe only */ 167 + if (!pci_has_flag(PCI_PROBE_ONLY)) 168 + pci_add_flags(PCI_REASSIGN_ALL_RSRC | PCI_REASSIGN_ALL_BUS); 169 + 170 + 171 + bus = pci_scan_root_bus(dev, pci->cfg.bus_range->start, 172 + &pci->cfg.ops->ops, pci, &pci->resources); 173 + if (!bus) { 174 + dev_err(dev, "Scanning rootbus failed"); 175 + return -ENODEV; 176 + } 177 + 178 + pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci); 179 + 180 + if (!pci_has_flag(PCI_PROBE_ONLY)) { 181 + pci_bus_size_bridges(bus); 182 + pci_bus_assign_resources(bus); 183 + 184 + list_for_each_entry(child, &bus->children, node) 185 + pcie_bus_configure_settings(child); 186 + } 187 + 188 + pci_bus_add_devices(bus); 189 + return 0; 190 + } 191 + 192 + MODULE_DESCRIPTION("Generic PCI host driver common code"); 193 + MODULE_AUTHOR("Will Deacon <will.deacon@arm.com>"); 194 + MODULE_LICENSE("GPL v2");
+47
drivers/pci/host/pci-host-common.h
··· 1 + /* 2 + * This program is free software; you can redistribute it and/or modify 3 + * it under the terms of the GNU General Public License version 2 as 4 + * published by the Free Software Foundation. 5 + * 6 + * This program is distributed in the hope that it will be useful, 7 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 8 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 9 + * GNU General Public License for more details. 10 + * 11 + * You should have received a copy of the GNU General Public License 12 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 13 + * 14 + * Copyright (C) 2014 ARM Limited 15 + * 16 + * Author: Will Deacon <will.deacon@arm.com> 17 + */ 18 + 19 + #ifndef _PCI_HOST_COMMON_H 20 + #define _PCI_HOST_COMMON_H 21 + 22 + #include <linux/kernel.h> 23 + #include <linux/platform_device.h> 24 + 25 + struct gen_pci_cfg_bus_ops { 26 + u32 bus_shift; 27 + struct pci_ops ops; 28 + }; 29 + 30 + struct gen_pci_cfg_windows { 31 + struct resource res; 32 + struct resource *bus_range; 33 + void __iomem **win; 34 + 35 + struct gen_pci_cfg_bus_ops *ops; 36 + }; 37 + 38 + struct gen_pci { 39 + struct pci_host_bridge host; 40 + struct gen_pci_cfg_windows cfg; 41 + struct list_head resources; 42 + }; 43 + 44 + int pci_host_common_probe(struct platform_device *pdev, 45 + struct gen_pci *pci); 46 + 47 + #endif /* _PCI_HOST_COMMON_H */
+4 -177
drivers/pci/host/pci-host-generic.c
··· 25 25 #include <linux/of_pci.h> 26 26 #include <linux/platform_device.h> 27 27 28 - struct gen_pci_cfg_bus_ops { 29 - u32 bus_shift; 30 - struct pci_ops ops; 31 - }; 32 - 33 - struct gen_pci_cfg_windows { 34 - struct resource res; 35 - struct resource *bus_range; 36 - void __iomem **win; 37 - 38 - struct gen_pci_cfg_bus_ops *ops; 39 - }; 40 - 41 - struct gen_pci { 42 - struct pci_host_bridge host; 43 - struct gen_pci_cfg_windows cfg; 44 - struct list_head resources; 45 - }; 28 + #include "pci-host-common.h" 46 29 47 30 static void __iomem *gen_pci_map_cfg_bus_cam(struct pci_bus *bus, 48 31 unsigned int devfn, ··· 76 93 }; 77 94 MODULE_DEVICE_TABLE(of, gen_pci_of_match); 78 95 79 - static void gen_pci_release_of_pci_ranges(struct gen_pci *pci) 80 - { 81 - pci_free_resource_list(&pci->resources); 82 - } 83 - 84 - static int gen_pci_parse_request_of_pci_ranges(struct gen_pci *pci) 85 - { 86 - int err, res_valid = 0; 87 - struct device *dev = pci->host.dev.parent; 88 - struct device_node *np = dev->of_node; 89 - resource_size_t iobase; 90 - struct resource_entry *win; 91 - 92 - err = of_pci_get_host_bridge_resources(np, 0, 0xff, &pci->resources, 93 - &iobase); 94 - if (err) 95 - return err; 96 - 97 - resource_list_for_each_entry(win, &pci->resources) { 98 - struct resource *parent, *res = win->res; 99 - 100 - switch (resource_type(res)) { 101 - case IORESOURCE_IO: 102 - parent = &ioport_resource; 103 - err = pci_remap_iospace(res, iobase); 104 - if (err) { 105 - dev_warn(dev, "error %d: failed to map resource %pR\n", 106 - err, res); 107 - continue; 108 - } 109 - break; 110 - case IORESOURCE_MEM: 111 - parent = &iomem_resource; 112 - res_valid |= !(res->flags & IORESOURCE_PREFETCH); 113 - break; 114 - case IORESOURCE_BUS: 115 - pci->cfg.bus_range = res; 116 - default: 117 - continue; 118 - } 119 - 120 - err = devm_request_resource(dev, parent, res); 121 - if (err) 122 - goto out_release_res; 123 - } 124 - 125 - if (!res_valid) { 126 - dev_err(dev, "non-prefetchable 
memory resource required\n"); 127 - err = -EINVAL; 128 - goto out_release_res; 129 - } 130 - 131 - return 0; 132 - 133 - out_release_res: 134 - gen_pci_release_of_pci_ranges(pci); 135 - return err; 136 - } 137 - 138 - static int gen_pci_parse_map_cfg_windows(struct gen_pci *pci) 139 - { 140 - int err; 141 - u8 bus_max; 142 - resource_size_t busn; 143 - struct resource *bus_range; 144 - struct device *dev = pci->host.dev.parent; 145 - struct device_node *np = dev->of_node; 146 - u32 sz = 1 << pci->cfg.ops->bus_shift; 147 - 148 - err = of_address_to_resource(np, 0, &pci->cfg.res); 149 - if (err) { 150 - dev_err(dev, "missing \"reg\" property\n"); 151 - return err; 152 - } 153 - 154 - /* Limit the bus-range to fit within reg */ 155 - bus_max = pci->cfg.bus_range->start + 156 - (resource_size(&pci->cfg.res) >> pci->cfg.ops->bus_shift) - 1; 157 - pci->cfg.bus_range->end = min_t(resource_size_t, 158 - pci->cfg.bus_range->end, bus_max); 159 - 160 - pci->cfg.win = devm_kcalloc(dev, resource_size(pci->cfg.bus_range), 161 - sizeof(*pci->cfg.win), GFP_KERNEL); 162 - if (!pci->cfg.win) 163 - return -ENOMEM; 164 - 165 - /* Map our Configuration Space windows */ 166 - if (!devm_request_mem_region(dev, pci->cfg.res.start, 167 - resource_size(&pci->cfg.res), 168 - "Configuration Space")) 169 - return -ENOMEM; 170 - 171 - bus_range = pci->cfg.bus_range; 172 - for (busn = bus_range->start; busn <= bus_range->end; ++busn) { 173 - u32 idx = busn - bus_range->start; 174 - 175 - pci->cfg.win[idx] = devm_ioremap(dev, 176 - pci->cfg.res.start + idx * sz, 177 - sz); 178 - if (!pci->cfg.win[idx]) 179 - return -ENOMEM; 180 - } 181 - 182 - return 0; 183 - } 184 - 185 96 static int gen_pci_probe(struct platform_device *pdev) 186 97 { 187 - int err; 188 - const char *type; 189 - const struct of_device_id *of_id; 190 98 struct device *dev = &pdev->dev; 191 - struct device_node *np = dev->of_node; 99 + const struct of_device_id *of_id; 192 100 struct gen_pci *pci = devm_kzalloc(dev, sizeof(*pci), 
GFP_KERNEL); 193 - struct pci_bus *bus, *child; 194 101 195 102 if (!pci) 196 103 return -ENOMEM; 197 104 198 - type = of_get_property(np, "device_type", NULL); 199 - if (!type || strcmp(type, "pci")) { 200 - dev_err(dev, "invalid \"device_type\" %s\n", type); 201 - return -EINVAL; 202 - } 203 - 204 - of_pci_check_probe_only(); 205 - 206 - of_id = of_match_node(gen_pci_of_match, np); 105 + of_id = of_match_node(gen_pci_of_match, dev->of_node); 207 106 pci->cfg.ops = (struct gen_pci_cfg_bus_ops *)of_id->data; 208 - pci->host.dev.parent = dev; 209 - INIT_LIST_HEAD(&pci->host.windows); 210 - INIT_LIST_HEAD(&pci->resources); 211 107 212 - /* Parse our PCI ranges and request their resources */ 213 - err = gen_pci_parse_request_of_pci_ranges(pci); 214 - if (err) 215 - return err; 216 - 217 - /* Parse and map our Configuration Space windows */ 218 - err = gen_pci_parse_map_cfg_windows(pci); 219 - if (err) { 220 - gen_pci_release_of_pci_ranges(pci); 221 - return err; 222 - } 223 - 224 - /* Do not reassign resources if probe only */ 225 - if (!pci_has_flag(PCI_PROBE_ONLY)) 226 - pci_add_flags(PCI_REASSIGN_ALL_RSRC | PCI_REASSIGN_ALL_BUS); 227 - 228 - 229 - bus = pci_scan_root_bus(dev, pci->cfg.bus_range->start, 230 - &pci->cfg.ops->ops, pci, &pci->resources); 231 - if (!bus) { 232 - dev_err(dev, "Scanning rootbus failed"); 233 - return -ENODEV; 234 - } 235 - 236 - pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci); 237 - 238 - if (!pci_has_flag(PCI_PROBE_ONLY)) { 239 - pci_bus_size_bridges(bus); 240 - pci_bus_assign_resources(bus); 241 - 242 - list_for_each_entry(child, &bus->children, node) 243 - pcie_bus_configure_settings(child); 244 - } 245 - 246 - pci_bus_add_devices(bus); 247 - return 0; 108 + return pci_host_common_probe(pdev, pci); 248 109 } 249 110 250 111 static struct platform_driver gen_pci_driver = {
+2346
drivers/pci/host/pci-hyperv.c
··· 1 + /* 2 + * Copyright (c) Microsoft Corporation. 3 + * 4 + * Author: 5 + * Jake Oshins <jakeo@microsoft.com> 6 + * 7 + * This driver acts as a paravirtual front-end for PCI Express root buses. 8 + * When a PCI Express function (either an entire device or an SR-IOV 9 + * Virtual Function) is being passed through to the VM, this driver exposes 10 + * a new bus to the guest VM. This is modeled as a root PCI bus because 11 + * no bridges are being exposed to the VM. In fact, with a "Generation 2" 12 + * VM within Hyper-V, there may seem to be no PCI bus at all in the VM 13 + * until a device has been exposed using this driver. 14 + * 15 + * Each root PCI bus has its own PCI domain, which is called "Segment" in 16 + * the PCI Firmware Specifications. Thus while each device passed through 17 + * to the VM using this front-end will appear at "device 0", the domain will 18 + * be unique. Typically, each bus will have one PCI function on it, though 19 + * this driver does support more than one. 20 + * 21 + * In order to map the interrupts from the device through to the guest VM, 22 + * this driver also implements an IRQ Domain, which handles interrupts (either 23 + * MSI or MSI-X) associated with the functions on the bus. As interrupts are 24 + * set up, torn down, or reaffined, this driver communicates with the 25 + * underlying hypervisor to adjust the mappings in the I/O MMU so that each 26 + * interrupt will be delivered to the correct virtual processor at the right 27 + * vector. This driver does not support level-triggered (line-based) 28 + * interrupts, and will report that the Interrupt Line register in the 29 + * function's configuration space is zero. 30 + * 31 + * The rest of this driver mostly maps PCI concepts onto underlying Hyper-V 32 + * facilities. 
For instance, the configuration space of a function exposed 33 + * by Hyper-V is mapped into a single page of memory space, and the 34 + * read and write handlers for config space must be aware of this mechanism. 35 + * Similarly, device setup and teardown involves messages sent to and from 36 + * the PCI back-end driver in Hyper-V. 37 + * 38 + * This program is free software; you can redistribute it and/or modify it 39 + * under the terms of the GNU General Public License version 2 as published 40 + * by the Free Software Foundation. 41 + * 42 + * This program is distributed in the hope that it will be useful, but 43 + * WITHOUT ANY WARRANTY; without even the implied warranty of 44 + * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or 45 + * NON INFRINGEMENT. See the GNU General Public License for more 46 + * details. 47 + * 48 + */ 49 + 50 + #include <linux/kernel.h> 51 + #include <linux/module.h> 52 + #include <linux/pci.h> 53 + #include <linux/semaphore.h> 54 + #include <linux/irqdomain.h> 55 + #include <asm/irqdomain.h> 56 + #include <asm/apic.h> 57 + #include <linux/msi.h> 58 + #include <linux/hyperv.h> 59 + #include <asm/mshyperv.h> 60 + 61 + /* 62 + * Protocol versions. The low word is the minor version, the high word the 63 + * major version. 
64 + */ 65 + 66 + #define PCI_MAKE_VERSION(major, minor) ((u32)(((major) << 16) | (minor))) 67 + #define PCI_MAJOR_VERSION(version) ((u32)(version) >> 16) 68 + #define PCI_MINOR_VERSION(version) ((u32)(version) & 0xff) 69 + 70 + enum { 71 + PCI_PROTOCOL_VERSION_1_1 = PCI_MAKE_VERSION(1, 1), 72 + PCI_PROTOCOL_VERSION_CURRENT = PCI_PROTOCOL_VERSION_1_1 73 + }; 74 + 75 + #define PCI_CONFIG_MMIO_LENGTH 0x2000 76 + #define CFG_PAGE_OFFSET 0x1000 77 + #define CFG_PAGE_SIZE (PCI_CONFIG_MMIO_LENGTH - CFG_PAGE_OFFSET) 78 + 79 + #define MAX_SUPPORTED_MSI_MESSAGES 0x400 80 + 81 + /* 82 + * Message Types 83 + */ 84 + 85 + enum pci_message_type { 86 + /* 87 + * Version 1.1 88 + */ 89 + PCI_MESSAGE_BASE = 0x42490000, 90 + PCI_BUS_RELATIONS = PCI_MESSAGE_BASE + 0, 91 + PCI_QUERY_BUS_RELATIONS = PCI_MESSAGE_BASE + 1, 92 + PCI_POWER_STATE_CHANGE = PCI_MESSAGE_BASE + 4, 93 + PCI_QUERY_RESOURCE_REQUIREMENTS = PCI_MESSAGE_BASE + 5, 94 + PCI_QUERY_RESOURCE_RESOURCES = PCI_MESSAGE_BASE + 6, 95 + PCI_BUS_D0ENTRY = PCI_MESSAGE_BASE + 7, 96 + PCI_BUS_D0EXIT = PCI_MESSAGE_BASE + 8, 97 + PCI_READ_BLOCK = PCI_MESSAGE_BASE + 9, 98 + PCI_WRITE_BLOCK = PCI_MESSAGE_BASE + 0xA, 99 + PCI_EJECT = PCI_MESSAGE_BASE + 0xB, 100 + PCI_QUERY_STOP = PCI_MESSAGE_BASE + 0xC, 101 + PCI_REENABLE = PCI_MESSAGE_BASE + 0xD, 102 + PCI_QUERY_STOP_FAILED = PCI_MESSAGE_BASE + 0xE, 103 + PCI_EJECTION_COMPLETE = PCI_MESSAGE_BASE + 0xF, 104 + PCI_RESOURCES_ASSIGNED = PCI_MESSAGE_BASE + 0x10, 105 + PCI_RESOURCES_RELEASED = PCI_MESSAGE_BASE + 0x11, 106 + PCI_INVALIDATE_BLOCK = PCI_MESSAGE_BASE + 0x12, 107 + PCI_QUERY_PROTOCOL_VERSION = PCI_MESSAGE_BASE + 0x13, 108 + PCI_CREATE_INTERRUPT_MESSAGE = PCI_MESSAGE_BASE + 0x14, 109 + PCI_DELETE_INTERRUPT_MESSAGE = PCI_MESSAGE_BASE + 0x15, 110 + PCI_MESSAGE_MAXIMUM 111 + }; 112 + 113 + /* 114 + * Structures defining the virtual PCI Express protocol. 
115 + */ 116 + 117 + union pci_version { 118 + struct { 119 + u16 minor_version; 120 + u16 major_version; 121 + } parts; 122 + u32 version; 123 + } __packed; 124 + 125 + /* 126 + * Function numbers are 8-bits wide on Express, as interpreted through ARI, 127 + * which is all this driver does. This representation is the one used in 128 + * Windows, which is what is expected when sending this back and forth with 129 + * the Hyper-V parent partition. 130 + */ 131 + union win_slot_encoding { 132 + struct { 133 + u32 func:8; 134 + u32 reserved:24; 135 + } bits; 136 + u32 slot; 137 + } __packed; 138 + 139 + /* 140 + * Pretty much as defined in the PCI Specifications. 141 + */ 142 + struct pci_function_description { 143 + u16 v_id; /* vendor ID */ 144 + u16 d_id; /* device ID */ 145 + u8 rev; 146 + u8 prog_intf; 147 + u8 subclass; 148 + u8 base_class; 149 + u32 subsystem_id; 150 + union win_slot_encoding win_slot; 151 + u32 ser; /* serial number */ 152 + } __packed; 153 + 154 + /** 155 + * struct hv_msi_desc 156 + * @vector: IDT entry 157 + * @delivery_mode: As defined in Intel's Programmer's 158 + * Reference Manual, Volume 3, Chapter 8. 159 + * @vector_count: Number of contiguous entries in the 160 + * Interrupt Descriptor Table that are 161 + * occupied by this Message-Signaled 162 + * Interrupt. For "MSI", as first defined 163 + * in PCI 2.2, this can be between 1 and 164 + * 32. For "MSI-X," as first defined in PCI 165 + * 3.0, this must be 1, as each MSI-X table 166 + * entry would have its own descriptor. 167 + * @reserved: Empty space 168 + * @cpu_mask: All the target virtual processors. 
169 + */ 170 + struct hv_msi_desc { 171 + u8 vector; 172 + u8 delivery_mode; 173 + u16 vector_count; 174 + u32 reserved; 175 + u64 cpu_mask; 176 + } __packed; 177 + 178 + /** 179 + * struct tran_int_desc 180 + * @reserved: unused, padding 181 + * @vector_count: same as in hv_msi_desc 182 + * @data: This is the "data payload" value that is 183 + * written by the device when it generates 184 + * a message-signaled interrupt, either MSI 185 + * or MSI-X. 186 + * @address: This is the address to which the data 187 + * payload is written on interrupt 188 + * generation. 189 + */ 190 + struct tran_int_desc { 191 + u16 reserved; 192 + u16 vector_count; 193 + u32 data; 194 + u64 address; 195 + } __packed; 196 + 197 + /* 198 + * A generic message format for virtual PCI. 199 + * Specific message formats are defined later in the file. 200 + */ 201 + 202 + struct pci_message { 203 + u32 message_type; 204 + } __packed; 205 + 206 + struct pci_child_message { 207 + u32 message_type; 208 + union win_slot_encoding wslot; 209 + } __packed; 210 + 211 + struct pci_incoming_message { 212 + struct vmpacket_descriptor hdr; 213 + struct pci_message message_type; 214 + } __packed; 215 + 216 + struct pci_response { 217 + struct vmpacket_descriptor hdr; 218 + s32 status; /* negative values are failures */ 219 + } __packed; 220 + 221 + struct pci_packet { 222 + void (*completion_func)(void *context, struct pci_response *resp, 223 + int resp_packet_size); 224 + void *compl_ctxt; 225 + struct pci_message message; 226 + }; 227 + 228 + /* 229 + * Specific message types supporting the PCI protocol. 230 + */ 231 + 232 + /* 233 + * Version negotiation message. Sent from the guest to the host. 234 + * The guest is free to try different versions until the host 235 + * accepts the version. 236 + * 237 + * pci_version: The protocol version requested. 238 + * is_last_attempt: If TRUE, this is the last version guest will request. 239 + * reservedz: Reserved field, set to zero. 
240 + */ 241 + 242 + struct pci_version_request { 243 + struct pci_message message_type; 244 + enum pci_message_type protocol_version; 245 + } __packed; 246 + 247 + /* 248 + * Bus D0 Entry. This is sent from the guest to the host when the virtual 249 + * bus (PCI Express port) is ready for action. 250 + */ 251 + 252 + struct pci_bus_d0_entry { 253 + struct pci_message message_type; 254 + u32 reserved; 255 + u64 mmio_base; 256 + } __packed; 257 + 258 + struct pci_bus_relations { 259 + struct pci_incoming_message incoming; 260 + u32 device_count; 261 + struct pci_function_description func[1]; 262 + } __packed; 263 + 264 + struct pci_q_res_req_response { 265 + struct vmpacket_descriptor hdr; 266 + s32 status; /* negative values are failures */ 267 + u32 probed_bar[6]; 268 + } __packed; 269 + 270 + struct pci_set_power { 271 + struct pci_message message_type; 272 + union win_slot_encoding wslot; 273 + u32 power_state; /* In Windows terms */ 274 + u32 reserved; 275 + } __packed; 276 + 277 + struct pci_set_power_response { 278 + struct vmpacket_descriptor hdr; 279 + s32 status; /* negative values are failures */ 280 + union win_slot_encoding wslot; 281 + u32 resultant_state; /* In Windows terms */ 282 + u32 reserved; 283 + } __packed; 284 + 285 + struct pci_resources_assigned { 286 + struct pci_message message_type; 287 + union win_slot_encoding wslot; 288 + u8 memory_range[0x14][6]; /* not used here */ 289 + u32 msi_descriptors; 290 + u32 reserved[4]; 291 + } __packed; 292 + 293 + struct pci_create_interrupt { 294 + struct pci_message message_type; 295 + union win_slot_encoding wslot; 296 + struct hv_msi_desc int_desc; 297 + } __packed; 298 + 299 + struct pci_create_int_response { 300 + struct pci_response response; 301 + u32 reserved; 302 + struct tran_int_desc int_desc; 303 + } __packed; 304 + 305 + struct pci_delete_interrupt { 306 + struct pci_message message_type; 307 + union win_slot_encoding wslot; 308 + struct tran_int_desc int_desc; 309 + } __packed; 310 + 311 
+ struct pci_dev_incoming { 312 + struct pci_incoming_message incoming; 313 + union win_slot_encoding wslot; 314 + } __packed; 315 + 316 + struct pci_eject_response { 317 + u32 message_type; 318 + union win_slot_encoding wslot; 319 + u32 status; 320 + } __packed; 321 + 322 + static int pci_ring_size = (4 * PAGE_SIZE); 323 + 324 + /* 325 + * Definitions of the interrupt steering hypercall. 326 + */ 327 + #define HV_PARTITION_ID_SELF ((u64)-1) 328 + #define HVCALL_RETARGET_INTERRUPT 0x7e 329 + 330 + struct retarget_msi_interrupt { 331 + u64 partition_id; /* use "self" */ 332 + u64 device_id; 333 + u32 source; /* 1 for MSI(-X) */ 334 + u32 reserved1; 335 + u32 address; 336 + u32 data; 337 + u64 reserved2; 338 + u32 vector; 339 + u32 flags; 340 + u64 vp_mask; 341 + } __packed; 342 + 343 + /* 344 + * Driver specific state. 345 + */ 346 + 347 + enum hv_pcibus_state { 348 + hv_pcibus_init = 0, 349 + hv_pcibus_probed, 350 + hv_pcibus_installed, 351 + hv_pcibus_maximum 352 + }; 353 + 354 + struct hv_pcibus_device { 355 + struct pci_sysdata sysdata; 356 + enum hv_pcibus_state state; 357 + atomic_t remove_lock; 358 + struct hv_device *hdev; 359 + resource_size_t low_mmio_space; 360 + resource_size_t high_mmio_space; 361 + struct resource *mem_config; 362 + struct resource *low_mmio_res; 363 + struct resource *high_mmio_res; 364 + struct completion *survey_event; 365 + struct completion remove_event; 366 + struct pci_bus *pci_bus; 367 + spinlock_t config_lock; /* Avoid two threads writing index page */ 368 + spinlock_t device_list_lock; /* Protect lists below */ 369 + void __iomem *cfg_addr; 370 + 371 + struct semaphore enum_sem; 372 + struct list_head resources_for_children; 373 + 374 + struct list_head children; 375 + struct list_head dr_list; 376 + struct work_struct wrk; 377 + 378 + struct msi_domain_info msi_info; 379 + struct msi_controller msi_chip; 380 + struct irq_domain *irq_domain; 381 + }; 382 + 383 + /* 384 + * Tracks "Device Relations" messages from the host, which 
must be both 385 + * processed in order and deferred so that they don't run in the context 386 + * of the incoming packet callback. 387 + */ 388 + struct hv_dr_work { 389 + struct work_struct wrk; 390 + struct hv_pcibus_device *bus; 391 + }; 392 + 393 + struct hv_dr_state { 394 + struct list_head list_entry; 395 + u32 device_count; 396 + struct pci_function_description func[1]; 397 + }; 398 + 399 + enum hv_pcichild_state { 400 + hv_pcichild_init = 0, 401 + hv_pcichild_requirements, 402 + hv_pcichild_resourced, 403 + hv_pcichild_ejecting, 404 + hv_pcichild_maximum 405 + }; 406 + 407 + enum hv_pcidev_ref_reason { 408 + hv_pcidev_ref_invalid = 0, 409 + hv_pcidev_ref_initial, 410 + hv_pcidev_ref_by_slot, 411 + hv_pcidev_ref_packet, 412 + hv_pcidev_ref_pnp, 413 + hv_pcidev_ref_childlist, 414 + hv_pcidev_irqdata, 415 + hv_pcidev_ref_max 416 + }; 417 + 418 + struct hv_pci_dev { 419 + /* List protected by pci_rescan_remove_lock */ 420 + struct list_head list_entry; 421 + atomic_t refs; 422 + enum hv_pcichild_state state; 423 + struct pci_function_description desc; 424 + bool reported_missing; 425 + struct hv_pcibus_device *hbus; 426 + struct work_struct wrk; 427 + 428 + /* 429 + * What would be observed if one wrote 0xFFFFFFFF to a BAR and then 430 + * read it back, for each of the BAR offsets within config space. 431 + */ 432 + u32 probed_bar[6]; 433 + }; 434 + 435 + struct hv_pci_compl { 436 + struct completion host_event; 437 + s32 completion_status; 438 + }; 439 + 440 + /** 441 + * hv_pci_generic_compl() - Invoked for a completion packet 442 + * @context: Set up by the sender of the packet. 443 + * @resp: The response packet 444 + * @resp_packet_size: Size in bytes of the packet 445 + * 446 + * This function is used to trigger an event and report status 447 + * for any message for which the completion packet contains a 448 + * status and nothing else. 
449 + */ 450 + static 451 + void 452 + hv_pci_generic_compl(void *context, struct pci_response *resp, 453 + int resp_packet_size) 454 + { 455 + struct hv_pci_compl *comp_pkt = context; 456 + 457 + if (resp_packet_size >= offsetofend(struct pci_response, status)) 458 + comp_pkt->completion_status = resp->status; 459 + complete(&comp_pkt->host_event); 460 + } 461 + 462 + static struct hv_pci_dev *get_pcichild_wslot(struct hv_pcibus_device *hbus, 463 + u32 wslot); 464 + static void get_pcichild(struct hv_pci_dev *hv_pcidev, 465 + enum hv_pcidev_ref_reason reason); 466 + static void put_pcichild(struct hv_pci_dev *hv_pcidev, 467 + enum hv_pcidev_ref_reason reason); 468 + 469 + static void get_hvpcibus(struct hv_pcibus_device *hv_pcibus); 470 + static void put_hvpcibus(struct hv_pcibus_device *hv_pcibus); 471 + 472 + /** 473 + * devfn_to_wslot() - Convert from Linux PCI slot to Windows 474 + * @devfn: The Linux representation of PCI slot 475 + * 476 + * Windows uses a slightly different representation of PCI slot. 477 + * 478 + * Return: The Windows representation 479 + */ 480 + static u32 devfn_to_wslot(int devfn) 481 + { 482 + union win_slot_encoding wslot; 483 + 484 + wslot.slot = 0; 485 + wslot.bits.func = PCI_SLOT(devfn) | (PCI_FUNC(devfn) << 5); 486 + 487 + return wslot.slot; 488 + } 489 + 490 + /** 491 + * wslot_to_devfn() - Convert from Windows PCI slot to Linux 492 + * @wslot: The Windows representation of PCI slot 493 + * 494 + * Windows uses a slightly different representation of PCI slot. 495 + * 496 + * Return: The Linux representation 497 + */ 498 + static int wslot_to_devfn(u32 wslot) 499 + { 500 + union win_slot_encoding slot_no; 501 + 502 + slot_no.slot = wslot; 503 + return PCI_DEVFN(0, slot_no.bits.func); 504 + } 505 + 506 + /* 507 + * PCI Configuration Space for these root PCI buses is implemented as a pair 508 + * of pages in memory-mapped I/O space. Writing to the first page chooses 509 + * the PCI function being written or read. 
Once the first page has been 510 + * written to, the following page maps in the entire configuration space of 511 + * the function. 512 + */ 513 + 514 + /** 515 + * _hv_pcifront_read_config() - Internal PCI config read 516 + * @hpdev: The PCI driver's representation of the device 517 + * @where: Offset within config space 518 + * @size: Size of the transfer 519 + * @val: Pointer to the buffer receiving the data 520 + */ 521 + static void _hv_pcifront_read_config(struct hv_pci_dev *hpdev, int where, 522 + int size, u32 *val) 523 + { 524 + unsigned long flags; 525 + void __iomem *addr = hpdev->hbus->cfg_addr + CFG_PAGE_OFFSET + where; 526 + 527 + /* 528 + * If the attempt is to read the IDs or the ROM BAR, simulate that. 529 + */ 530 + if (where + size <= PCI_COMMAND) { 531 + memcpy(val, ((u8 *)&hpdev->desc.v_id) + where, size); 532 + } else if (where >= PCI_CLASS_REVISION && where + size <= 533 + PCI_CACHE_LINE_SIZE) { 534 + memcpy(val, ((u8 *)&hpdev->desc.rev) + where - 535 + PCI_CLASS_REVISION, size); 536 + } else if (where >= PCI_SUBSYSTEM_VENDOR_ID && where + size <= 537 + PCI_ROM_ADDRESS) { 538 + memcpy(val, (u8 *)&hpdev->desc.subsystem_id + where - 539 + PCI_SUBSYSTEM_VENDOR_ID, size); 540 + } else if (where >= PCI_ROM_ADDRESS && where + size <= 541 + PCI_CAPABILITY_LIST) { 542 + /* ROM BARs are unimplemented */ 543 + *val = 0; 544 + } else if (where >= PCI_INTERRUPT_LINE && where + size <= 545 + PCI_INTERRUPT_PIN) { 546 + /* 547 + * Interrupt Line and Interrupt PIN are hard-wired to zero 548 + * because this front-end only supports message-signaled 549 + * interrupts. 550 + */ 551 + *val = 0; 552 + } else if (where + size <= CFG_PAGE_SIZE) { 553 + spin_lock_irqsave(&hpdev->hbus->config_lock, flags); 554 + /* Choose the function to be read. (See comment above) */ 555 + writel(hpdev->desc.win_slot.slot, hpdev->hbus->cfg_addr); 556 + /* Read from that function's config space. 
*/ 557 + switch (size) { 558 + case 1: 559 + *val = readb(addr); 560 + break; 561 + case 2: 562 + *val = readw(addr); 563 + break; 564 + default: 565 + *val = readl(addr); 566 + break; 567 + } 568 + spin_unlock_irqrestore(&hpdev->hbus->config_lock, flags); 569 + } else { 570 + dev_err(&hpdev->hbus->hdev->device, 571 + "Attempt to read beyond a function's config space.\n"); 572 + } 573 + } 574 + 575 + /** 576 + * _hv_pcifront_write_config() - Internal PCI config write 577 + * @hpdev: The PCI driver's representation of the device 578 + * @where: Offset within config space 579 + * @size: Size of the transfer 580 + * @val: The data being transferred 581 + */ 582 + static void _hv_pcifront_write_config(struct hv_pci_dev *hpdev, int where, 583 + int size, u32 val) 584 + { 585 + unsigned long flags; 586 + void __iomem *addr = hpdev->hbus->cfg_addr + CFG_PAGE_OFFSET + where; 587 + 588 + if (where >= PCI_SUBSYSTEM_VENDOR_ID && 589 + where + size <= PCI_CAPABILITY_LIST) { 590 + /* SSIDs and ROM BARs are read-only */ 591 + } else if (where >= PCI_COMMAND && where + size <= CFG_PAGE_SIZE) { 592 + spin_lock_irqsave(&hpdev->hbus->config_lock, flags); 593 + /* Choose the function to be written. (See comment above) */ 594 + writel(hpdev->desc.win_slot.slot, hpdev->hbus->cfg_addr); 595 + /* Write to that function's config space. 
*/ 596 + switch (size) { 597 + case 1: 598 + writeb(val, addr); 599 + break; 600 + case 2: 601 + writew(val, addr); 602 + break; 603 + default: 604 + writel(val, addr); 605 + break; 606 + } 607 + spin_unlock_irqrestore(&hpdev->hbus->config_lock, flags); 608 + } else { 609 + dev_err(&hpdev->hbus->hdev->device, 610 + "Attempt to write beyond a function's config space.\n"); 611 + } 612 + } 613 + 614 + /** 615 + * hv_pcifront_read_config() - Read configuration space 616 + * @bus: PCI Bus structure 617 + * @devfn: Device/function 618 + * @where: Offset from base 619 + * @size: Byte/word/dword 620 + * @val: Value to be read 621 + * 622 + * Return: PCIBIOS_SUCCESSFUL on success 623 + * PCIBIOS_DEVICE_NOT_FOUND on failure 624 + */ 625 + static int hv_pcifront_read_config(struct pci_bus *bus, unsigned int devfn, 626 + int where, int size, u32 *val) 627 + { 628 + struct hv_pcibus_device *hbus = 629 + container_of(bus->sysdata, struct hv_pcibus_device, sysdata); 630 + struct hv_pci_dev *hpdev; 631 + 632 + hpdev = get_pcichild_wslot(hbus, devfn_to_wslot(devfn)); 633 + if (!hpdev) 634 + return PCIBIOS_DEVICE_NOT_FOUND; 635 + 636 + _hv_pcifront_read_config(hpdev, where, size, val); 637 + 638 + put_pcichild(hpdev, hv_pcidev_ref_by_slot); 639 + return PCIBIOS_SUCCESSFUL; 640 + } 641 + 642 + /** 643 + * hv_pcifront_write_config() - Write configuration space 644 + * @bus: PCI Bus structure 645 + * @devfn: Device/function 646 + * @where: Offset from base 647 + * @size: Byte/word/dword 648 + * @val: Value to be written to device 649 + * 650 + * Return: PCIBIOS_SUCCESSFUL on success 651 + * PCIBIOS_DEVICE_NOT_FOUND on failure 652 + */ 653 + static int hv_pcifront_write_config(struct pci_bus *bus, unsigned int devfn, 654 + int where, int size, u32 val) 655 + { 656 + struct hv_pcibus_device *hbus = 657 + container_of(bus->sysdata, struct hv_pcibus_device, sysdata); 658 + struct hv_pci_dev *hpdev; 659 + 660 + hpdev = get_pcichild_wslot(hbus, devfn_to_wslot(devfn)); 661 + if (!hpdev) 662 + 
return PCIBIOS_DEVICE_NOT_FOUND; 663 + 664 + _hv_pcifront_write_config(hpdev, where, size, val); 665 + 666 + put_pcichild(hpdev, hv_pcidev_ref_by_slot); 667 + return PCIBIOS_SUCCESSFUL; 668 + } 669 + 670 + /* PCIe operations */ 671 + static struct pci_ops hv_pcifront_ops = { 672 + .read = hv_pcifront_read_config, 673 + .write = hv_pcifront_write_config, 674 + }; 675 + 676 + /* Interrupt management hooks */ 677 + static void hv_int_desc_free(struct hv_pci_dev *hpdev, 678 + struct tran_int_desc *int_desc) 679 + { 680 + struct pci_delete_interrupt *int_pkt; 681 + struct { 682 + struct pci_packet pkt; 683 + u8 buffer[sizeof(struct pci_delete_interrupt) - 684 + sizeof(struct pci_message)]; 685 + } ctxt; 686 + 687 + memset(&ctxt, 0, sizeof(ctxt)); 688 + int_pkt = (struct pci_delete_interrupt *)&ctxt.pkt.message; 689 + int_pkt->message_type.message_type = 690 + PCI_DELETE_INTERRUPT_MESSAGE; 691 + int_pkt->wslot.slot = hpdev->desc.win_slot.slot; 692 + int_pkt->int_desc = *int_desc; 693 + vmbus_sendpacket(hpdev->hbus->hdev->channel, int_pkt, sizeof(*int_pkt), 694 + (unsigned long)&ctxt.pkt, VM_PKT_DATA_INBAND, 0); 695 + kfree(int_desc); 696 + } 697 + 698 + /** 699 + * hv_msi_free() - Free the MSI. 700 + * @domain: The interrupt domain pointer 701 + * @info: Extra MSI-related context 702 + * @irq: Identifies the IRQ. 703 + * 704 + * The Hyper-V parent partition and hypervisor are tracking the 705 + * messages that are in use, keeping the interrupt redirection 706 + * table up to date. This callback sends a message that frees 707 + * the IRT entry and related tracking nonsense. 
708 + */ 709 + static void hv_msi_free(struct irq_domain *domain, struct msi_domain_info *info, 710 + unsigned int irq) 711 + { 712 + struct hv_pcibus_device *hbus; 713 + struct hv_pci_dev *hpdev; 714 + struct pci_dev *pdev; 715 + struct tran_int_desc *int_desc; 716 + struct irq_data *irq_data = irq_domain_get_irq_data(domain, irq); 717 + struct msi_desc *msi = irq_data_get_msi_desc(irq_data); 718 + 719 + pdev = msi_desc_to_pci_dev(msi); 720 + hbus = info->data; 721 + hpdev = get_pcichild_wslot(hbus, devfn_to_wslot(pdev->devfn)); 722 + if (!hpdev) 723 + return; 724 + 725 + int_desc = irq_data_get_irq_chip_data(irq_data); 726 + if (int_desc) { 727 + irq_data->chip_data = NULL; 728 + hv_int_desc_free(hpdev, int_desc); 729 + } 730 + 731 + put_pcichild(hpdev, hv_pcidev_ref_by_slot); 732 + } 733 + 734 + static int hv_set_affinity(struct irq_data *data, const struct cpumask *dest, 735 + bool force) 736 + { 737 + struct irq_data *parent = data->parent_data; 738 + 739 + return parent->chip->irq_set_affinity(parent, dest, force); 740 + } 741 + 742 + void hv_irq_mask(struct irq_data *data) 743 + { 744 + pci_msi_mask_irq(data); 745 + } 746 + 747 + /** 748 + * hv_irq_unmask() - "Unmask" the IRQ by setting its current 749 + * affinity. 750 + * @data: Describes the IRQ 751 + * 752 + * Build new a destination for the MSI and make a hypercall to 753 + * update the Interrupt Redirection Table. "Device Logical ID" 754 + * is built out of this PCI bus's instance GUID and the function 755 + * number of the device. 
756 + */ 757 + void hv_irq_unmask(struct irq_data *data) 758 + { 759 + struct msi_desc *msi_desc = irq_data_get_msi_desc(data); 760 + struct irq_cfg *cfg = irqd_cfg(data); 761 + struct retarget_msi_interrupt params; 762 + struct hv_pcibus_device *hbus; 763 + struct cpumask *dest; 764 + struct pci_bus *pbus; 765 + struct pci_dev *pdev; 766 + int cpu; 767 + 768 + dest = irq_data_get_affinity_mask(data); 769 + pdev = msi_desc_to_pci_dev(msi_desc); 770 + pbus = pdev->bus; 771 + hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata); 772 + 773 + memset(&params, 0, sizeof(params)); 774 + params.partition_id = HV_PARTITION_ID_SELF; 775 + params.source = 1; /* MSI(-X) */ 776 + params.address = msi_desc->msg.address_lo; 777 + params.data = msi_desc->msg.data; 778 + params.device_id = (hbus->hdev->dev_instance.b[5] << 24) | 779 + (hbus->hdev->dev_instance.b[4] << 16) | 780 + (hbus->hdev->dev_instance.b[7] << 8) | 781 + (hbus->hdev->dev_instance.b[6] & 0xf8) | 782 + PCI_FUNC(pdev->devfn); 783 + params.vector = cfg->vector; 784 + 785 + for_each_cpu_and(cpu, dest, cpu_online_mask) 786 + params.vp_mask |= (1ULL << vmbus_cpu_number_to_vp_number(cpu)); 787 + 788 + hv_do_hypercall(HVCALL_RETARGET_INTERRUPT, &params, NULL); 789 + 790 + pci_msi_unmask_irq(data); 791 + } 792 + 793 + struct compose_comp_ctxt { 794 + struct hv_pci_compl comp_pkt; 795 + struct tran_int_desc int_desc; 796 + }; 797 + 798 + static void hv_pci_compose_compl(void *context, struct pci_response *resp, 799 + int resp_packet_size) 800 + { 801 + struct compose_comp_ctxt *comp_pkt = context; 802 + struct pci_create_int_response *int_resp = 803 + (struct pci_create_int_response *)resp; 804 + 805 + comp_pkt->comp_pkt.completion_status = resp->status; 806 + comp_pkt->int_desc = int_resp->int_desc; 807 + complete(&comp_pkt->comp_pkt.host_event); 808 + } 809 + 810 + /** 811 + * hv_compose_msi_msg() - Supplies a valid MSI address/data 812 + * @data: Everything about this MSI 813 + * @msg: Buffer that is 
filled in by this function 814 + * 815 + * This function unpacks the IRQ looking for target CPU set, IDT 816 + * vector and mode and sends a message to the parent partition 817 + * asking for a mapping for that tuple in this partition. The 818 + * response supplies a data value and address to which that data 819 + * should be written to trigger that interrupt. 820 + */ 821 + static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) 822 + { 823 + struct irq_cfg *cfg = irqd_cfg(data); 824 + struct hv_pcibus_device *hbus; 825 + struct hv_pci_dev *hpdev; 826 + struct pci_bus *pbus; 827 + struct pci_dev *pdev; 828 + struct pci_create_interrupt *int_pkt; 829 + struct compose_comp_ctxt comp; 830 + struct tran_int_desc *int_desc; 831 + struct cpumask *affinity; 832 + struct { 833 + struct pci_packet pkt; 834 + u8 buffer[sizeof(struct pci_create_interrupt) - 835 + sizeof(struct pci_message)]; 836 + } ctxt; 837 + int cpu; 838 + int ret; 839 + 840 + pdev = msi_desc_to_pci_dev(irq_data_get_msi_desc(data)); 841 + pbus = pdev->bus; 842 + hbus = container_of(pbus->sysdata, struct hv_pcibus_device, sysdata); 843 + hpdev = get_pcichild_wslot(hbus, devfn_to_wslot(pdev->devfn)); 844 + if (!hpdev) 845 + goto return_null_message; 846 + 847 + /* Free any previous message that might have already been composed. 
*/ 848 + if (data->chip_data) { 849 + int_desc = data->chip_data; 850 + data->chip_data = NULL; 851 + hv_int_desc_free(hpdev, int_desc); 852 + } 853 + 854 + int_desc = kzalloc(sizeof(*int_desc), GFP_KERNEL); 855 + if (!int_desc) 856 + goto drop_reference; 857 + 858 + memset(&ctxt, 0, sizeof(ctxt)); 859 + init_completion(&comp.comp_pkt.host_event); 860 + ctxt.pkt.completion_func = hv_pci_compose_compl; 861 + ctxt.pkt.compl_ctxt = &comp; 862 + int_pkt = (struct pci_create_interrupt *)&ctxt.pkt.message; 863 + int_pkt->message_type.message_type = PCI_CREATE_INTERRUPT_MESSAGE; 864 + int_pkt->wslot.slot = hpdev->desc.win_slot.slot; 865 + int_pkt->int_desc.vector = cfg->vector; 866 + int_pkt->int_desc.vector_count = 1; 867 + int_pkt->int_desc.delivery_mode = 868 + (apic->irq_delivery_mode == dest_LowestPrio) ? 1 : 0; 869 + 870 + /* 871 + * This bit doesn't have to work on machines with more than 64 872 + * processors because Hyper-V only supports 64 in a guest. 873 + */ 874 + affinity = irq_data_get_affinity_mask(data); 875 + for_each_cpu_and(cpu, affinity, cpu_online_mask) { 876 + int_pkt->int_desc.cpu_mask |= 877 + (1ULL << vmbus_cpu_number_to_vp_number(cpu)); 878 + } 879 + 880 + ret = vmbus_sendpacket(hpdev->hbus->hdev->channel, int_pkt, 881 + sizeof(*int_pkt), (unsigned long)&ctxt.pkt, 882 + VM_PKT_DATA_INBAND, 883 + VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); 884 + if (!ret) 885 + wait_for_completion(&comp.comp_pkt.host_event); 886 + 887 + if (comp.comp_pkt.completion_status < 0) { 888 + dev_err(&hbus->hdev->device, 889 + "Request for interrupt failed: 0x%x", 890 + comp.comp_pkt.completion_status); 891 + goto free_int_desc; 892 + } 893 + 894 + /* 895 + * Record the assignment so that this can be unwound later. Using 896 + * irq_set_chip_data() here would be appropriate, but the lock it takes 897 + * is already held. 898 + */ 899 + *int_desc = comp.int_desc; 900 + data->chip_data = int_desc; 901 + 902 + /* Pass up the result. 
*/ 903 + msg->address_hi = comp.int_desc.address >> 32; 904 + msg->address_lo = comp.int_desc.address & 0xffffffff; 905 + msg->data = comp.int_desc.data; 906 + 907 + put_pcichild(hpdev, hv_pcidev_ref_by_slot); 908 + return; 909 + 910 + free_int_desc: 911 + kfree(int_desc); 912 + drop_reference: 913 + put_pcichild(hpdev, hv_pcidev_ref_by_slot); 914 + return_null_message: 915 + msg->address_hi = 0; 916 + msg->address_lo = 0; 917 + msg->data = 0; 918 + } 919 + 920 + /* HW Interrupt Chip Descriptor */ 921 + static struct irq_chip hv_msi_irq_chip = { 922 + .name = "Hyper-V PCIe MSI", 923 + .irq_compose_msi_msg = hv_compose_msi_msg, 924 + .irq_set_affinity = hv_set_affinity, 925 + .irq_ack = irq_chip_ack_parent, 926 + .irq_mask = hv_irq_mask, 927 + .irq_unmask = hv_irq_unmask, 928 + }; 929 + 930 + static irq_hw_number_t hv_msi_domain_ops_get_hwirq(struct msi_domain_info *info, 931 + msi_alloc_info_t *arg) 932 + { 933 + return arg->msi_hwirq; 934 + } 935 + 936 + static struct msi_domain_ops hv_msi_ops = { 937 + .get_hwirq = hv_msi_domain_ops_get_hwirq, 938 + .msi_prepare = pci_msi_prepare, 939 + .set_desc = pci_msi_set_desc, 940 + .msi_free = hv_msi_free, 941 + }; 942 + 943 + /** 944 + * hv_pcie_init_irq_domain() - Initialize IRQ domain 945 + * @hbus: The root PCI bus 946 + * 947 + * This function creates an IRQ domain which will be used for 948 + * interrupts from devices that have been passed through. These 949 + * devices only support MSI and MSI-X, not line-based interrupts 950 + * or simulations of line-based interrupts through PCIe's 951 + * fabric-layer messages. Because interrupts are remapped, we 952 + * can support multi-message MSI here. 
953 + * 954 + * Return: '0' on success and error value on failure 955 + */ 956 + static int hv_pcie_init_irq_domain(struct hv_pcibus_device *hbus) 957 + { 958 + hbus->msi_info.chip = &hv_msi_irq_chip; 959 + hbus->msi_info.ops = &hv_msi_ops; 960 + hbus->msi_info.flags = (MSI_FLAG_USE_DEF_DOM_OPS | 961 + MSI_FLAG_USE_DEF_CHIP_OPS | MSI_FLAG_MULTI_PCI_MSI | 962 + MSI_FLAG_PCI_MSIX); 963 + hbus->msi_info.handler = handle_edge_irq; 964 + hbus->msi_info.handler_name = "edge"; 965 + hbus->msi_info.data = hbus; 966 + hbus->irq_domain = pci_msi_create_irq_domain(hbus->sysdata.fwnode, 967 + &hbus->msi_info, 968 + x86_vector_domain); 969 + if (!hbus->irq_domain) { 970 + dev_err(&hbus->hdev->device, 971 + "Failed to build an MSI IRQ domain\n"); 972 + return -ENODEV; 973 + } 974 + 975 + return 0; 976 + } 977 + 978 + /** 979 + * get_bar_size() - Get the address space consumed by a BAR 980 + * @bar_val: Value that a BAR returned after -1 was written 981 + * to it. 982 + * 983 + * This function returns the size of the BAR, rounded up to 1 984 + * page. It has to be rounded up because the hypervisor's page 985 + * table entry that maps the BAR into the VM can't specify an 986 + * offset within a page. The invariant is that the hypervisor 987 + * must place any BARs of smaller than page length at the 988 + * beginning of a page. 989 + * 990 + * Return: Size in bytes of the consumed MMIO space. 
991 + */ 992 + static u64 get_bar_size(u64 bar_val) 993 + { 994 + return round_up((1 + ~(bar_val & PCI_BASE_ADDRESS_MEM_MASK)), 995 + PAGE_SIZE); 996 + } 997 + 998 + /** 999 + * survey_child_resources() - Total all MMIO requirements 1000 + * @hbus: Root PCI bus, as understood by this driver 1001 + */ 1002 + static void survey_child_resources(struct hv_pcibus_device *hbus) 1003 + { 1004 + struct list_head *iter; 1005 + struct hv_pci_dev *hpdev; 1006 + resource_size_t bar_size = 0; 1007 + unsigned long flags; 1008 + struct completion *event; 1009 + u64 bar_val; 1010 + int i; 1011 + 1012 + /* If nobody is waiting on the answer, don't compute it. */ 1013 + event = xchg(&hbus->survey_event, NULL); 1014 + if (!event) 1015 + return; 1016 + 1017 + /* If the answer has already been computed, go with it. */ 1018 + if (hbus->low_mmio_space || hbus->high_mmio_space) { 1019 + complete(event); 1020 + return; 1021 + } 1022 + 1023 + spin_lock_irqsave(&hbus->device_list_lock, flags); 1024 + 1025 + /* 1026 + * Due to an interesting quirk of the PCI spec, all memory regions 1027 + * for a child device are a power of 2 in size and aligned in memory, 1028 + * so it's sufficient to just add them up without tracking alignment. 1029 + */ 1030 + list_for_each(iter, &hbus->children) { 1031 + hpdev = container_of(iter, struct hv_pci_dev, list_entry); 1032 + for (i = 0; i < 6; i++) { 1033 + if (hpdev->probed_bar[i] & PCI_BASE_ADDRESS_SPACE_IO) 1034 + dev_err(&hbus->hdev->device, 1035 + "There's an I/O BAR in this list!\n"); 1036 + 1037 + if (hpdev->probed_bar[i] != 0) { 1038 + /* 1039 + * A probed BAR has all the upper bits set that 1040 + * can be changed. 
1041 + */ 1042 + 1043 + bar_val = hpdev->probed_bar[i]; 1044 + if (bar_val & PCI_BASE_ADDRESS_MEM_TYPE_64) 1045 + bar_val |= 1046 + ((u64)hpdev->probed_bar[++i] << 32); 1047 + else 1048 + bar_val |= 0xffffffff00000000ULL; 1049 + 1050 + bar_size = get_bar_size(bar_val); 1051 + 1052 + if (bar_val & PCI_BASE_ADDRESS_MEM_TYPE_64) 1053 + hbus->high_mmio_space += bar_size; 1054 + else 1055 + hbus->low_mmio_space += bar_size; 1056 + } 1057 + } 1058 + } 1059 + 1060 + spin_unlock_irqrestore(&hbus->device_list_lock, flags); 1061 + complete(event); 1062 + } 1063 + 1064 + /** 1065 + * prepopulate_bars() - Fill in BARs with defaults 1066 + * @hbus: Root PCI bus, as understood by this driver 1067 + * 1068 + * The core PCI driver code seems much, much happier if the BARs 1069 + * for a device have values upon first scan. So fill them in. 1070 + * The algorithm below works down from large sizes to small, 1071 + * attempting to pack the assignments optimally. The assumption, 1072 + * enforced in other parts of the code, is that the beginning of 1073 + * the memory-mapped I/O space will be aligned on the largest 1074 + * BAR size. 
1075 + */ 1076 + static void prepopulate_bars(struct hv_pcibus_device *hbus) 1077 + { 1078 + resource_size_t high_size = 0; 1079 + resource_size_t low_size = 0; 1080 + resource_size_t high_base = 0; 1081 + resource_size_t low_base = 0; 1082 + resource_size_t bar_size; 1083 + struct hv_pci_dev *hpdev; 1084 + struct list_head *iter; 1085 + unsigned long flags; 1086 + u64 bar_val; 1087 + u32 command; 1088 + bool high; 1089 + int i; 1090 + 1091 + if (hbus->low_mmio_space) { 1092 + low_size = 1ULL << (63 - __builtin_clzll(hbus->low_mmio_space)); 1093 + low_base = hbus->low_mmio_res->start; 1094 + } 1095 + 1096 + if (hbus->high_mmio_space) { 1097 + high_size = 1ULL << 1098 + (63 - __builtin_clzll(hbus->high_mmio_space)); 1099 + high_base = hbus->high_mmio_res->start; 1100 + } 1101 + 1102 + spin_lock_irqsave(&hbus->device_list_lock, flags); 1103 + 1104 + /* Pick addresses for the BARs. */ 1105 + do { 1106 + list_for_each(iter, &hbus->children) { 1107 + hpdev = container_of(iter, struct hv_pci_dev, 1108 + list_entry); 1109 + for (i = 0; i < 6; i++) { 1110 + bar_val = hpdev->probed_bar[i]; 1111 + if (bar_val == 0) 1112 + continue; 1113 + high = bar_val & PCI_BASE_ADDRESS_MEM_TYPE_64; 1114 + if (high) { 1115 + bar_val |= 1116 + ((u64)hpdev->probed_bar[i + 1] 1117 + << 32); 1118 + } else { 1119 + bar_val |= 0xffffffffULL << 32; 1120 + } 1121 + bar_size = get_bar_size(bar_val); 1122 + if (high) { 1123 + if (high_size != bar_size) { 1124 + i++; 1125 + continue; 1126 + } 1127 + _hv_pcifront_write_config(hpdev, 1128 + PCI_BASE_ADDRESS_0 + (4 * i), 1129 + 4, 1130 + (u32)(high_base & 0xffffff00)); 1131 + i++; 1132 + _hv_pcifront_write_config(hpdev, 1133 + PCI_BASE_ADDRESS_0 + (4 * i), 1134 + 4, (u32)(high_base >> 32)); 1135 + high_base += bar_size; 1136 + } else { 1137 + if (low_size != bar_size) 1138 + continue; 1139 + _hv_pcifront_write_config(hpdev, 1140 + PCI_BASE_ADDRESS_0 + (4 * i), 1141 + 4, 1142 + (u32)(low_base & 0xffffff00)); 1143 + low_base += bar_size; 1144 + } 1145 + } 
1146 + if (high_size <= 1 && low_size <= 1) { 1147 + /* Set the memory enable bit. */ 1148 + _hv_pcifront_read_config(hpdev, PCI_COMMAND, 2, 1149 + &command); 1150 + command |= PCI_COMMAND_MEMORY; 1151 + _hv_pcifront_write_config(hpdev, PCI_COMMAND, 2, 1152 + command); 1153 + break; 1154 + } 1155 + } 1156 + 1157 + high_size >>= 1; 1158 + low_size >>= 1; 1159 + } while (high_size || low_size); 1160 + 1161 + spin_unlock_irqrestore(&hbus->device_list_lock, flags); 1162 + } 1163 + 1164 + /** 1165 + * create_root_hv_pci_bus() - Expose a new root PCI bus 1166 + * @hbus: Root PCI bus, as understood by this driver 1167 + * 1168 + * Return: 0 on success, -errno on failure 1169 + */ 1170 + static int create_root_hv_pci_bus(struct hv_pcibus_device *hbus) 1171 + { 1172 + /* Register the device */ 1173 + hbus->pci_bus = pci_create_root_bus(&hbus->hdev->device, 1174 + 0, /* bus number is always zero */ 1175 + &hv_pcifront_ops, 1176 + &hbus->sysdata, 1177 + &hbus->resources_for_children); 1178 + if (!hbus->pci_bus) 1179 + return -ENODEV; 1180 + 1181 + hbus->pci_bus->msi = &hbus->msi_chip; 1182 + hbus->pci_bus->msi->dev = &hbus->hdev->device; 1183 + 1184 + pci_scan_child_bus(hbus->pci_bus); 1185 + pci_bus_assign_resources(hbus->pci_bus); 1186 + pci_bus_add_devices(hbus->pci_bus); 1187 + hbus->state = hv_pcibus_installed; 1188 + return 0; 1189 + } 1190 + 1191 + struct q_res_req_compl { 1192 + struct completion host_event; 1193 + struct hv_pci_dev *hpdev; 1194 + }; 1195 + 1196 + /** 1197 + * q_resource_requirements() - Query Resource Requirements 1198 + * @context: The completion context. 1199 + * @resp: The response that came from the host. 1200 + * @resp_packet_size: The size in bytes of resp. 1201 + * 1202 + * This function is invoked on completion of a Query Resource 1203 + * Requirements packet. 
1204 + */ 1205 + static void q_resource_requirements(void *context, struct pci_response *resp, 1206 + int resp_packet_size) 1207 + { 1208 + struct q_res_req_compl *completion = context; 1209 + struct pci_q_res_req_response *q_res_req = 1210 + (struct pci_q_res_req_response *)resp; 1211 + int i; 1212 + 1213 + if (resp->status < 0) { 1214 + dev_err(&completion->hpdev->hbus->hdev->device, 1215 + "query resource requirements failed: %x\n", 1216 + resp->status); 1217 + } else { 1218 + for (i = 0; i < 6; i++) { 1219 + completion->hpdev->probed_bar[i] = 1220 + q_res_req->probed_bar[i]; 1221 + } 1222 + } 1223 + 1224 + complete(&completion->host_event); 1225 + } 1226 + 1227 + static void get_pcichild(struct hv_pci_dev *hpdev, 1228 + enum hv_pcidev_ref_reason reason) 1229 + { 1230 + atomic_inc(&hpdev->refs); 1231 + } 1232 + 1233 + static void put_pcichild(struct hv_pci_dev *hpdev, 1234 + enum hv_pcidev_ref_reason reason) 1235 + { 1236 + if (atomic_dec_and_test(&hpdev->refs)) 1237 + kfree(hpdev); 1238 + } 1239 + 1240 + /** 1241 + * new_pcichild_device() - Create a new child device 1242 + * @hbus: The internal struct tracking this root PCI bus. 1243 + * @desc: The information supplied so far from the host 1244 + * about the device. 1245 + * 1246 + * This function creates the tracking structure for a new child 1247 + * device and kicks off the process of figuring out what it is. 
1248 + * 1249 + * Return: Pointer to the new tracking struct 1250 + */ 1251 + static struct hv_pci_dev *new_pcichild_device(struct hv_pcibus_device *hbus, 1252 + struct pci_function_description *desc) 1253 + { 1254 + struct hv_pci_dev *hpdev; 1255 + struct pci_child_message *res_req; 1256 + struct q_res_req_compl comp_pkt; 1257 + union { 1258 + struct pci_packet init_packet; 1259 + u8 buffer[0x100]; 1260 + } pkt; 1261 + unsigned long flags; 1262 + int ret; 1263 + 1264 + hpdev = kzalloc(sizeof(*hpdev), GFP_ATOMIC); 1265 + if (!hpdev) 1266 + return NULL; 1267 + 1268 + hpdev->hbus = hbus; 1269 + 1270 + memset(&pkt, 0, sizeof(pkt)); 1271 + init_completion(&comp_pkt.host_event); 1272 + comp_pkt.hpdev = hpdev; 1273 + pkt.init_packet.compl_ctxt = &comp_pkt; 1274 + pkt.init_packet.completion_func = q_resource_requirements; 1275 + res_req = (struct pci_child_message *)&pkt.init_packet.message; 1276 + res_req->message_type = PCI_QUERY_RESOURCE_REQUIREMENTS; 1277 + res_req->wslot.slot = desc->win_slot.slot; 1278 + 1279 + ret = vmbus_sendpacket(hbus->hdev->channel, res_req, 1280 + sizeof(struct pci_child_message), 1281 + (unsigned long)&pkt.init_packet, 1282 + VM_PKT_DATA_INBAND, 1283 + VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED); 1284 + if (ret) 1285 + goto error; 1286 + 1287 + wait_for_completion(&comp_pkt.host_event); 1288 + 1289 + hpdev->desc = *desc; 1290 + get_pcichild(hpdev, hv_pcidev_ref_initial); 1291 + get_pcichild(hpdev, hv_pcidev_ref_childlist); 1292 + spin_lock_irqsave(&hbus->device_list_lock, flags); 1293 + list_add_tail(&hpdev->list_entry, &hbus->children); 1294 + spin_unlock_irqrestore(&hbus->device_list_lock, flags); 1295 + return hpdev; 1296 + 1297 + error: 1298 + kfree(hpdev); 1299 + return NULL; 1300 + } 1301 + 1302 + /** 1303 + * get_pcichild_wslot() - Find device from slot 1304 + * @hbus: Root PCI bus, as understood by this driver 1305 + * @wslot: Location on the bus 1306 + * 1307 + * This function looks up a PCI device and returns the internal 1308 + * 
 * representation of it.  It acquires a reference on it, so that
 * the device won't be deleted while somebody is using it.  The
 * caller is responsible for calling put_pcichild() to release
 * this reference.
 *
 * Return: Internal representation of a PCI device
 */
static struct hv_pci_dev *get_pcichild_wslot(struct hv_pcibus_device *hbus,
					     u32 wslot)
{
	unsigned long flags;
	struct hv_pci_dev *iter, *hpdev = NULL;

	spin_lock_irqsave(&hbus->device_list_lock, flags);
	list_for_each_entry(iter, &hbus->children, list_entry) {
		if (iter->desc.win_slot.slot == wslot) {
			hpdev = iter;
			get_pcichild(hpdev, hv_pcidev_ref_by_slot);
			break;
		}
	}
	spin_unlock_irqrestore(&hbus->device_list_lock, flags);

	return hpdev;
}

/**
 * pci_devices_present_work() - Handle new list of child devices
 * @work:	Work struct embedded in struct hv_dr_work
 *
 * "Bus Relations" is the Windows term for "children of this
 * bus."  The terminology is preserved here for people trying to
 * debug the interaction between Hyper-V and Linux.  This
 * function is called when the parent partition reports a list
 * of functions that should be observed under this PCI Express
 * port (bus).
 *
 * This function updates the list, and must tolerate being
 * called multiple times with the same information.  The typical
 * number of child devices is one, with very atypical cases
 * involving three or four, so the algorithms used here can be
 * simple and inefficient.
 *
 * It must also treat the omission of a previously observed device as
 * notification that the device no longer exists.
 *
 * Note that this function is a work item, and it may not be
 * invoked in the order that it was queued.  Back to back
 * updates of the list of present devices may involve queuing
 * multiple work items, and this one may run before ones that
 * were sent later.  As such, this function only does something
 * if it is the last one in the queue.
 */
static void pci_devices_present_work(struct work_struct *work)
{
	u32 child_no;
	bool found;
	struct list_head *iter;
	struct pci_function_description *new_desc;
	struct hv_pci_dev *hpdev;
	struct hv_pcibus_device *hbus;
	struct list_head removed;
	struct hv_dr_work *dr_wrk;
	struct hv_dr_state *dr = NULL;
	unsigned long flags;

	dr_wrk = container_of(work, struct hv_dr_work, wrk);
	hbus = dr_wrk->bus;
	kfree(dr_wrk);

	INIT_LIST_HEAD(&removed);

	if (down_interruptible(&hbus->enum_sem)) {
		put_hvpcibus(hbus);
		return;
	}

	/* Pull this off the queue and process it if it was the last one. */
	spin_lock_irqsave(&hbus->device_list_lock, flags);
	while (!list_empty(&hbus->dr_list)) {
		dr = list_first_entry(&hbus->dr_list, struct hv_dr_state,
				      list_entry);
		list_del(&dr->list_entry);

		/* Throw this away if the list still has stuff in it. */
		if (!list_empty(&hbus->dr_list)) {
			kfree(dr);
			continue;
		}
	}
	spin_unlock_irqrestore(&hbus->device_list_lock, flags);

	if (!dr) {
		up(&hbus->enum_sem);
		put_hvpcibus(hbus);
		return;
	}

	/* First, mark all existing children as reported missing. */
	spin_lock_irqsave(&hbus->device_list_lock, flags);
	list_for_each(iter, &hbus->children) {
		hpdev = container_of(iter, struct hv_pci_dev,
				     list_entry);
		hpdev->reported_missing = true;
	}
	spin_unlock_irqrestore(&hbus->device_list_lock, flags);

	/* Next, add back any reported devices. */
	for (child_no = 0; child_no < dr->device_count; child_no++) {
		found = false;
		new_desc = &dr->func[child_no];

		spin_lock_irqsave(&hbus->device_list_lock, flags);
		list_for_each(iter, &hbus->children) {
			hpdev = container_of(iter, struct hv_pci_dev,
					     list_entry);
			if ((hpdev->desc.win_slot.slot ==
			     new_desc->win_slot.slot) &&
			    (hpdev->desc.v_id == new_desc->v_id) &&
			    (hpdev->desc.d_id == new_desc->d_id) &&
			    (hpdev->desc.ser == new_desc->ser)) {
				hpdev->reported_missing = false;
				found = true;
			}
		}
		spin_unlock_irqrestore(&hbus->device_list_lock, flags);

		if (!found) {
			hpdev = new_pcichild_device(hbus, new_desc);
			if (!hpdev)
				dev_err(&hbus->hdev->device,
					"couldn't record a child device.\n");
		}
	}

	/* Move missing children to a list on the stack. */
	spin_lock_irqsave(&hbus->device_list_lock, flags);
	do {
		found = false;
		list_for_each(iter, &hbus->children) {
			hpdev = container_of(iter, struct hv_pci_dev,
					     list_entry);
			if (hpdev->reported_missing) {
				found = true;
				put_pcichild(hpdev, hv_pcidev_ref_childlist);
				list_del(&hpdev->list_entry);
				list_add_tail(&hpdev->list_entry, &removed);
				break;
			}
		}
	} while (found);
	spin_unlock_irqrestore(&hbus->device_list_lock, flags);

	/* Delete everything that should no longer exist. */
	while (!list_empty(&removed)) {
		hpdev = list_first_entry(&removed, struct hv_pci_dev,
					 list_entry);
		list_del(&hpdev->list_entry);
		put_pcichild(hpdev, hv_pcidev_ref_initial);
	}

	/* Tell the core to rescan the bus because there may have been changes. */
	if (hbus->state == hv_pcibus_installed) {
		pci_lock_rescan_remove();
		pci_scan_child_bus(hbus->pci_bus);
		pci_unlock_rescan_remove();
	} else {
		survey_child_resources(hbus);
	}

	up(&hbus->enum_sem);
	put_hvpcibus(hbus);
	kfree(dr);
}

/**
 * hv_pci_devices_present() - Handles list of new children
 * @hbus:	Root PCI bus, as understood by this driver
 * @relations:	Packet from host listing children
 *
 * This function is invoked whenever a new list of devices for
 * this bus appears.
 */
static void hv_pci_devices_present(struct hv_pcibus_device *hbus,
				   struct pci_bus_relations *relations)
{
	struct hv_dr_state *dr;
	struct hv_dr_work *dr_wrk;
	unsigned long flags;

	dr_wrk = kzalloc(sizeof(*dr_wrk), GFP_NOWAIT);
	if (!dr_wrk)
		return;

	dr = kzalloc(offsetof(struct hv_dr_state, func) +
		     (sizeof(struct pci_function_description) *
		      (relations->device_count)), GFP_NOWAIT);
	if (!dr) {
		kfree(dr_wrk);
		return;
	}

	INIT_WORK(&dr_wrk->wrk, pci_devices_present_work);
	dr_wrk->bus = hbus;
	dr->device_count = relations->device_count;
	if (dr->device_count != 0) {
		memcpy(dr->func, relations->func,
		       sizeof(struct pci_function_description) *
		       dr->device_count);
	}

	spin_lock_irqsave(&hbus->device_list_lock, flags);
	list_add_tail(&dr->list_entry, &hbus->dr_list);
	spin_unlock_irqrestore(&hbus->device_list_lock, flags);

	get_hvpcibus(hbus);
	schedule_work(&dr_wrk->wrk);
}

/**
 * hv_eject_device_work() - Asynchronously handles ejection
 * @work:	Work struct embedded in internal device struct
 *
 * This function handles ejecting a device.
 * Windows will
 * attempt to gracefully eject a device, waiting 60 seconds to
 * hear back from the guest OS that this completed successfully.
 * If this timer expires, the device will be forcibly removed.
 */
static void hv_eject_device_work(struct work_struct *work)
{
	struct pci_eject_response *ejct_pkt;
	struct hv_pci_dev *hpdev;
	struct pci_dev *pdev;
	unsigned long flags;
	int wslot;
	struct {
		struct pci_packet pkt;
		u8 buffer[sizeof(struct pci_eject_response) -
			  sizeof(struct pci_message)];
	} ctxt;

	hpdev = container_of(work, struct hv_pci_dev, wrk);

	if (hpdev->state != hv_pcichild_ejecting) {
		put_pcichild(hpdev, hv_pcidev_ref_pnp);
		return;
	}

	/*
	 * Ejection can come before or after the PCI bus has been set up, so
	 * attempt to find it and tear down the bus state, if it exists.  This
	 * must be done without constructs like pci_domain_nr(hbus->pci_bus)
	 * because hbus->pci_bus may not exist yet.
	 */
	wslot = wslot_to_devfn(hpdev->desc.win_slot.slot);
	pdev = pci_get_domain_bus_and_slot(hpdev->hbus->sysdata.domain, 0,
					   wslot);
	if (pdev) {
		pci_stop_and_remove_bus_device(pdev);
		pci_dev_put(pdev);
	}

	memset(&ctxt, 0, sizeof(ctxt));
	ejct_pkt = (struct pci_eject_response *)&ctxt.pkt.message;
	ejct_pkt->message_type = PCI_EJECTION_COMPLETE;
	ejct_pkt->wslot.slot = hpdev->desc.win_slot.slot;
	vmbus_sendpacket(hpdev->hbus->hdev->channel, ejct_pkt,
			 sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt,
			 VM_PKT_DATA_INBAND, 0);

	spin_lock_irqsave(&hpdev->hbus->device_list_lock, flags);
	list_del(&hpdev->list_entry);
	spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags);

	put_pcichild(hpdev, hv_pcidev_ref_childlist);
	put_pcichild(hpdev, hv_pcidev_ref_pnp);
	put_hvpcibus(hpdev->hbus);
}

/**
 * hv_pci_eject_device() - Handles device ejection
 * @hpdev:	Internal device tracking struct
 *
 * This function is invoked when an ejection packet arrives.  It
 * just schedules work so that we don't re-enter the packet
 * delivery code handling the ejection.
 */
static void hv_pci_eject_device(struct hv_pci_dev *hpdev)
{
	hpdev->state = hv_pcichild_ejecting;
	get_pcichild(hpdev, hv_pcidev_ref_pnp);
	INIT_WORK(&hpdev->wrk, hv_eject_device_work);
	get_hvpcibus(hpdev->hbus);
	schedule_work(&hpdev->wrk);
}

/**
 * hv_pci_onchannelcallback() - Handles incoming packets
 * @context:	Internal bus tracking struct
 *
 * This function is invoked whenever the host sends a packet to
 * this channel (which is private to this root PCI bus).
 */
static void hv_pci_onchannelcallback(void *context)
{
	const int packet_size = 0x100;
	int ret;
	struct hv_pcibus_device *hbus = context;
	u32 bytes_recvd;
	u64 req_id;
	struct vmpacket_descriptor *desc;
	unsigned char *buffer;
	int bufferlen = packet_size;
	struct pci_packet *comp_packet;
	struct pci_response *response;
	struct pci_incoming_message *new_message;
	struct pci_bus_relations *bus_rel;
	struct pci_dev_incoming *dev_message;
	struct hv_pci_dev *hpdev;

	buffer = kmalloc(bufferlen, GFP_ATOMIC);
	if (!buffer)
		return;

	while (1) {
		ret = vmbus_recvpacket_raw(hbus->hdev->channel, buffer,
					   bufferlen, &bytes_recvd, &req_id);

		if (ret == -ENOBUFS) {
			kfree(buffer);
			/* Handle large packet */
			bufferlen = bytes_recvd;
			buffer = kmalloc(bytes_recvd, GFP_ATOMIC);
			if (!buffer)
				return;
			continue;
		}

		/*
		 * All incoming packets must be at least as large as a
		 * response.
		 */
		if (bytes_recvd <= sizeof(struct pci_response)) {
			kfree(buffer);
			return;
		}
		desc = (struct vmpacket_descriptor *)buffer;

		switch (desc->type) {
		case VM_PKT_COMP:

			/*
			 * The host is trusted, and thus it's safe to interpret
			 * this transaction ID as a pointer.
			 */
			comp_packet = (struct pci_packet *)req_id;
			response = (struct pci_response *)buffer;
			comp_packet->completion_func(comp_packet->compl_ctxt,
						     response,
						     bytes_recvd);
			kfree(buffer);
			return;

		case VM_PKT_DATA_INBAND:

			new_message = (struct pci_incoming_message *)buffer;
			switch (new_message->message_type.message_type) {
			case PCI_BUS_RELATIONS:

				bus_rel = (struct pci_bus_relations *)buffer;
				if (bytes_recvd <
				    offsetof(struct pci_bus_relations, func) +
				    (sizeof(struct pci_function_description) *
				     (bus_rel->device_count))) {
					dev_err(&hbus->hdev->device,
						"bus relations too small\n");
					break;
				}

				hv_pci_devices_present(hbus, bus_rel);
				break;

			case PCI_EJECT:

				dev_message = (struct pci_dev_incoming *)buffer;
				hpdev = get_pcichild_wslot(hbus,
							   dev_message->wslot.slot);
				if (hpdev) {
					hv_pci_eject_device(hpdev);
					put_pcichild(hpdev,
						     hv_pcidev_ref_by_slot);
				}
				break;

			default:
				dev_warn(&hbus->hdev->device,
					 "Unimplemented protocol message %x\n",
					 new_message->message_type.message_type);
				break;
			}
			break;

		default:
			dev_err(&hbus->hdev->device,
				"unhandled packet type %d, tid %llx len %d\n",
				desc->type, req_id, bytes_recvd);
			break;
		}
		break;
	}
}

/**
 * hv_pci_protocol_negotiation() - Set up protocol
 * @hdev:	VMBus's tracking struct for this root PCI bus
 *
 * This driver is intended to support running on Windows 10
 * (server) and later versions.  It will not run on earlier
 * versions, as they assume that many of the operations which
 * Linux needs accomplished with a spinlock held were done via
 * asynchronous messaging via VMBus.  Windows 10 increases the
 * surface area of PCI emulation so that these actions can take
 * place by suspending a virtual processor for their duration.
 *
 * This function negotiates the channel protocol version,
 * failing if the host doesn't support the necessary protocol
 * level.
 */
static int hv_pci_protocol_negotiation(struct hv_device *hdev)
{
	struct pci_version_request *version_req;
	struct hv_pci_compl comp_pkt;
	struct pci_packet *pkt;
	int ret;

	/*
	 * Initiate the handshake with the host and negotiate
	 * a version that the host can support.  We start with the
	 * highest version number and go down if the host cannot
	 * support it.
	 */
	pkt = kzalloc(sizeof(*pkt) + sizeof(*version_req), GFP_KERNEL);
	if (!pkt)
		return -ENOMEM;

	init_completion(&comp_pkt.host_event);
	pkt->completion_func = hv_pci_generic_compl;
	pkt->compl_ctxt = &comp_pkt;
	version_req = (struct pci_version_request *)&pkt->message;
	version_req->message_type.message_type = PCI_QUERY_PROTOCOL_VERSION;
	version_req->protocol_version = PCI_PROTOCOL_VERSION_CURRENT;

	ret = vmbus_sendpacket(hdev->channel, version_req,
			       sizeof(struct pci_version_request),
			       (unsigned long)pkt, VM_PKT_DATA_INBAND,
			       VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
	if (ret)
		goto exit;

	wait_for_completion(&comp_pkt.host_event);

	if (comp_pkt.completion_status < 0) {
		dev_err(&hdev->device,
			"PCI Pass-through VSP failed version request %x\n",
			comp_pkt.completion_status);
		ret = -EPROTO;
		goto exit;
	}

	ret = 0;

exit:
	kfree(pkt);
	return ret;
}

/**
 * hv_pci_free_bridge_windows() - Release memory regions for the
 * bus
 * @hbus:	Root PCI bus, as understood by this driver
 */
static void hv_pci_free_bridge_windows(struct hv_pcibus_device *hbus)
{
	/*
	 * Set the resources back to the way they looked when they
	 * were allocated by setting IORESOURCE_BUSY again.
	 */

	if (hbus->low_mmio_space && hbus->low_mmio_res) {
		hbus->low_mmio_res->flags |= IORESOURCE_BUSY;
		release_mem_region(hbus->low_mmio_res->start,
				   resource_size(hbus->low_mmio_res));
	}

	if (hbus->high_mmio_space && hbus->high_mmio_res) {
		hbus->high_mmio_res->flags |= IORESOURCE_BUSY;
		release_mem_region(hbus->high_mmio_res->start,
				   resource_size(hbus->high_mmio_res));
	}
}

/**
 * hv_pci_allocate_bridge_windows() - Allocate memory regions
 * for the bus
 * @hbus:	Root PCI bus, as understood by this driver
 *
 * This function calls vmbus_allocate_mmio(), which is itself a
 * bit of a compromise.  Ideally, we might change the pnp layer
 * in the kernel such that it comprehends either PCI devices
 * which are "grandchildren of ACPI," with some intermediate bus
 * node (in this case, VMBus) or change it such that it
 * understands VMBus.  The pnp layer, however, has been declared
 * deprecated, and not subject to change.
 *
 * The workaround, implemented here, is to ask VMBus to allocate
 * MMIO space for this bus.  VMBus itself knows which ranges are
 * appropriate by looking at its own ACPI objects.  Then, after
 * these ranges are claimed, they're modified to look like they
 * would have looked if the ACPI and pnp code had allocated
 * bridge windows.  These descriptors have to exist in this form
 * in order to satisfy the code which will get invoked when the
 * endpoint PCI function driver calls request_mem_region() or
 * request_mem_region_exclusive().
 *
 * Return: 0 on success, -errno on failure
 */
static int hv_pci_allocate_bridge_windows(struct hv_pcibus_device *hbus)
{
	resource_size_t align;
	int ret;

	if (hbus->low_mmio_space) {
		align = 1ULL << (63 - __builtin_clzll(hbus->low_mmio_space));
		ret = vmbus_allocate_mmio(&hbus->low_mmio_res, hbus->hdev, 0,
					  (u64)(u32)0xffffffff,
					  hbus->low_mmio_space,
					  align, false);
		if (ret) {
			dev_err(&hbus->hdev->device,
				"Need %#llx of low MMIO space. Consider reconfiguring the VM.\n",
				hbus->low_mmio_space);
			return ret;
		}

		/* Modify this resource to become a bridge window. */
		hbus->low_mmio_res->flags |= IORESOURCE_WINDOW;
		hbus->low_mmio_res->flags &= ~IORESOURCE_BUSY;
		pci_add_resource(&hbus->resources_for_children,
				 hbus->low_mmio_res);
	}

	if (hbus->high_mmio_space) {
		align = 1ULL << (63 - __builtin_clzll(hbus->high_mmio_space));
		ret = vmbus_allocate_mmio(&hbus->high_mmio_res, hbus->hdev,
					  0x100000000, -1,
					  hbus->high_mmio_space, align,
					  false);
		if (ret) {
			dev_err(&hbus->hdev->device,
				"Need %#llx of high MMIO space. Consider reconfiguring the VM.\n",
				hbus->high_mmio_space);
			goto release_low_mmio;
		}

		/* Modify this resource to become a bridge window. */
		hbus->high_mmio_res->flags |= IORESOURCE_WINDOW;
		hbus->high_mmio_res->flags &= ~IORESOURCE_BUSY;
		pci_add_resource(&hbus->resources_for_children,
				 hbus->high_mmio_res);
	}

	return 0;

release_low_mmio:
	if (hbus->low_mmio_res) {
		release_mem_region(hbus->low_mmio_res->start,
				   resource_size(hbus->low_mmio_res));
	}

	return ret;
}

/**
 * hv_allocate_config_window() - Find MMIO space for PCI Config
 * @hbus:	Root PCI bus, as understood by this driver
 *
 * This function claims memory-mapped I/O space for accessing
 * configuration space for the functions on this bus.
 *
 * Return: 0 on success, -errno on failure
 */
static int hv_allocate_config_window(struct hv_pcibus_device *hbus)
{
	int ret;

	/*
	 * Set up a region of MMIO space to use for accessing configuration
	 * space.
	 */
	ret = vmbus_allocate_mmio(&hbus->mem_config, hbus->hdev, 0, -1,
				  PCI_CONFIG_MMIO_LENGTH, 0x1000, false);
	if (ret)
		return ret;

	/*
	 * vmbus_allocate_mmio() gets used for allocating both device endpoint
	 * resource claims (those which cannot be overlapped) and the ranges
	 * which are valid for the children of this bus, which are intended
	 * to be overlapped by those children.  Set the flag on this claim
	 * meaning that this region can't be overlapped.
	 */
	hbus->mem_config->flags |= IORESOURCE_BUSY;

	return 0;
}

static void hv_free_config_window(struct hv_pcibus_device *hbus)
{
	release_mem_region(hbus->mem_config->start, PCI_CONFIG_MMIO_LENGTH);
}

/**
 * hv_pci_enter_d0() - Bring the "bus" into the D0 power state
 * @hdev:	VMBus's tracking struct for this root PCI bus
 *
 * Return: 0 on success, -errno on failure
 */
static int hv_pci_enter_d0(struct hv_device *hdev)
{
	struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);
	struct pci_bus_d0_entry *d0_entry;
	struct hv_pci_compl comp_pkt;
	struct pci_packet *pkt;
	int ret;

	/*
	 * Tell the host that the bus is ready to use, and moved into the
	 * powered-on state.  This includes telling the host which region
	 * of memory-mapped I/O space has been chosen for configuration space
	 * access.
	 */
	pkt = kzalloc(sizeof(*pkt) + sizeof(*d0_entry), GFP_KERNEL);
	if (!pkt)
		return -ENOMEM;

	init_completion(&comp_pkt.host_event);
	pkt->completion_func = hv_pci_generic_compl;
	pkt->compl_ctxt = &comp_pkt;
	d0_entry = (struct pci_bus_d0_entry *)&pkt->message;
	d0_entry->message_type.message_type = PCI_BUS_D0ENTRY;
	d0_entry->mmio_base = hbus->mem_config->start;

	ret = vmbus_sendpacket(hdev->channel, d0_entry, sizeof(*d0_entry),
			       (unsigned long)pkt, VM_PKT_DATA_INBAND,
			       VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
	if (ret)
		goto exit;

	wait_for_completion(&comp_pkt.host_event);

	if (comp_pkt.completion_status < 0) {
		dev_err(&hdev->device,
			"PCI Pass-through VSP failed D0 Entry with status %x\n",
			comp_pkt.completion_status);
		ret = -EPROTO;
		goto exit;
	}

	ret = 0;

exit:
	kfree(pkt);
	return ret;
}

/**
 * hv_pci_query_relations() - Ask host to send list of child
 * devices
 * @hdev:	VMBus's tracking struct for this root PCI bus
 *
 * Return: 0 on success, -errno on failure
 */
static int hv_pci_query_relations(struct hv_device *hdev)
{
	struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);
	struct pci_message message;
	struct completion comp;
	int ret;

	/* Ask the host to send along the list of child devices */
	init_completion(&comp);
	if (cmpxchg(&hbus->survey_event, NULL, &comp))
		return -ENOTEMPTY;

	memset(&message, 0, sizeof(message));
	message.message_type = PCI_QUERY_BUS_RELATIONS;

	ret = vmbus_sendpacket(hdev->channel, &message, sizeof(message),
			       0, VM_PKT_DATA_INBAND, 0);
	if (ret)
		return ret;

	wait_for_completion(&comp);
	return 0;
}

/**
 * hv_send_resources_allocated() - Report local resource choices
 * @hdev:	VMBus's tracking struct for this root PCI bus
 *
 * The host OS is expecting to be sent a request as a message
 * which contains all the resources that the device will use.
 * The response contains those same resources, "translated"
 * which is to say, the values which should be used by the
 * hardware, when it delivers an interrupt.  (MMIO resources are
 * used in local terms.)  This is nice for Windows, and lines up
 * with the FDO/PDO split, which doesn't exist in Linux.  Linux
 * is deeply expecting to scan an emulated PCI configuration
 * space.  So this message is sent here only to drive the state
 * machine on the host forward.
 *
 * Return: 0 on success, -errno on failure
 */
static int hv_send_resources_allocated(struct hv_device *hdev)
{
	struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);
	struct pci_resources_assigned *res_assigned;
	struct hv_pci_compl comp_pkt;
	struct hv_pci_dev *hpdev;
	struct pci_packet *pkt;
	u32 wslot;
	int ret;

	pkt = kmalloc(sizeof(*pkt) + sizeof(*res_assigned), GFP_KERNEL);
	if (!pkt)
		return -ENOMEM;

	ret = 0;

	for (wslot = 0; wslot < 256; wslot++) {
		hpdev = get_pcichild_wslot(hbus, wslot);
		if (!hpdev)
			continue;

		memset(pkt, 0, sizeof(*pkt) + sizeof(*res_assigned));
		init_completion(&comp_pkt.host_event);
		pkt->completion_func = hv_pci_generic_compl;
		pkt->compl_ctxt = &comp_pkt;
		pkt->message.message_type = PCI_RESOURCES_ASSIGNED;
		res_assigned = (struct pci_resources_assigned *)&pkt->message;
		res_assigned->wslot.slot = hpdev->desc.win_slot.slot;

		put_pcichild(hpdev, hv_pcidev_ref_by_slot);

		ret = vmbus_sendpacket(
			hdev->channel, &pkt->message,
			sizeof(*res_assigned),
			(unsigned long)pkt,
			VM_PKT_DATA_INBAND,
			VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
		if (ret)
			break;

		wait_for_completion(&comp_pkt.host_event);

		if (comp_pkt.completion_status < 0) {
			ret = -EPROTO;
			dev_err(&hdev->device,
				"resource allocated returned 0x%x",
				comp_pkt.completion_status);
			break;
		}
	}

	kfree(pkt);
	return ret;
}

/**
 * hv_send_resources_released() - Report local resources
 * released
 * @hdev:	VMBus's tracking struct for this root PCI bus
 *
 * Return: 0 on success, -errno on failure
 */
static int hv_send_resources_released(struct hv_device *hdev)
{
	struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);
	struct pci_child_message pkt;
	struct hv_pci_dev *hpdev;
	u32 wslot;
	int ret;

	for (wslot = 0; wslot < 256; wslot++) {
		hpdev = get_pcichild_wslot(hbus, wslot);
		if (!hpdev)
			continue;

		memset(&pkt, 0, sizeof(pkt));
		pkt.message_type = PCI_RESOURCES_RELEASED;
		pkt.wslot.slot = hpdev->desc.win_slot.slot;

		put_pcichild(hpdev, hv_pcidev_ref_by_slot);

		ret = vmbus_sendpacket(hdev->channel, &pkt, sizeof(pkt), 0,
				       VM_PKT_DATA_INBAND, 0);
		if (ret)
			return ret;
	}

	return 0;
}

static void get_hvpcibus(struct hv_pcibus_device *hbus)
{
	atomic_inc(&hbus->remove_lock);
}

static void put_hvpcibus(struct hv_pcibus_device *hbus)
{
	if (atomic_dec_and_test(&hbus->remove_lock))
		complete(&hbus->remove_event);
}

/**
 * hv_pci_probe() - New VMBus channel probe, for a root PCI bus
 * @hdev:	VMBus's tracking struct for this root PCI bus
 * @dev_id:	Identifies the device itself
 *
 * Return: 0 on success, -errno on failure
 */
static int hv_pci_probe(struct hv_device *hdev,
			const struct hv_vmbus_device_id *dev_id)
{
	struct hv_pcibus_device *hbus;
	int ret;

	hbus = kzalloc(sizeof(*hbus), GFP_KERNEL);
	if (!hbus)
		return -ENOMEM;

	/*
	 * The PCI bus "domain" is what is called "segment" in ACPI and
	 * other specs.  Pull it from the instance ID, to get something
	 * unique.  Bytes 8 and 9 are what is used in Windows guests, so
	 * do the same thing for consistency.
	 * Note that, since this code
	 * only runs in a Hyper-V VM, Hyper-V can (and does) guarantee
	 * that (1) the only domain in use for something that looks like
	 * a physical PCI bus (which is actually emulated by the
	 * hypervisor) is domain 0 and (2) there will be no overlap
	 * between domains derived from these instance IDs in the same
	 * VM.
	 */
	hbus->sysdata.domain = hdev->dev_instance.b[9] |
			       hdev->dev_instance.b[8] << 8;

	hbus->hdev = hdev;
	atomic_inc(&hbus->remove_lock);
	INIT_LIST_HEAD(&hbus->children);
	INIT_LIST_HEAD(&hbus->dr_list);
	INIT_LIST_HEAD(&hbus->resources_for_children);
	spin_lock_init(&hbus->config_lock);
	spin_lock_init(&hbus->device_list_lock);
	sema_init(&hbus->enum_sem, 1);
	init_completion(&hbus->remove_event);

	ret = vmbus_open(hdev->channel, pci_ring_size, pci_ring_size, NULL, 0,
			 hv_pci_onchannelcallback, hbus);
	if (ret)
		goto free_bus;

	hv_set_drvdata(hdev, hbus);

	ret = hv_pci_protocol_negotiation(hdev);
	if (ret)
		goto close;

	ret = hv_allocate_config_window(hbus);
	if (ret)
		goto close;

	hbus->cfg_addr = ioremap(hbus->mem_config->start,
				 PCI_CONFIG_MMIO_LENGTH);
	if (!hbus->cfg_addr) {
		dev_err(&hdev->device,
			"Unable to map a virtual address for config space\n");
		ret = -ENOMEM;
		goto free_config;
	}

	hbus->sysdata.fwnode = irq_domain_alloc_fwnode(hbus);
	if (!hbus->sysdata.fwnode) {
		ret = -ENOMEM;
		goto unmap;
	}

	ret = hv_pcie_init_irq_domain(hbus);
	if (ret)
		goto free_fwnode;

	ret = hv_pci_query_relations(hdev);
	if (ret)
		goto free_irq_domain;

	ret = hv_pci_enter_d0(hdev);
	if (ret)
		goto free_irq_domain;

	ret = hv_pci_allocate_bridge_windows(hbus);
	if (ret)
		goto free_irq_domain;

	ret = hv_send_resources_allocated(hdev);
	if (ret)
		goto free_windows;

	prepopulate_bars(hbus);

	hbus->state = hv_pcibus_probed;

	ret = create_root_hv_pci_bus(hbus);
	if (ret)
		goto free_windows;

	return 0;

free_windows:
	hv_pci_free_bridge_windows(hbus);
free_irq_domain:
	irq_domain_remove(hbus->irq_domain);
free_fwnode:
	irq_domain_free_fwnode(hbus->sysdata.fwnode);
unmap:
	iounmap(hbus->cfg_addr);
free_config:
	hv_free_config_window(hbus);
close:
	vmbus_close(hdev->channel);
free_bus:
	kfree(hbus);
	return ret;
}

/**
 * hv_pci_remove() - Remove routine for this VMBus channel
 * @hdev:	VMBus's tracking struct for this root PCI bus
 *
 * Return: 0 on success, -errno on failure
 */
static int hv_pci_remove(struct hv_device *hdev)
{
	int ret;
	struct hv_pcibus_device *hbus;
	union {
		struct pci_packet teardown_packet;
		u8 buffer[0x100];
	} pkt;
	struct pci_bus_relations relations;
	struct hv_pci_compl comp_pkt;

	hbus = hv_get_drvdata(hdev);

	ret = hv_send_resources_released(hdev);
	if (ret)
		dev_err(&hdev->device,
			"Couldn't send resources released packet(s)\n");

	memset(&pkt.teardown_packet, 0, sizeof(pkt.teardown_packet));
	init_completion(&comp_pkt.host_event);
	pkt.teardown_packet.completion_func = hv_pci_generic_compl;
	pkt.teardown_packet.compl_ctxt = &comp_pkt;
	pkt.teardown_packet.message.message_type = PCI_BUS_D0EXIT;

	ret = vmbus_sendpacket(hdev->channel, &pkt.teardown_packet.message,
			       sizeof(struct pci_message),
			       (unsigned long)&pkt.teardown_packet,
			       VM_PKT_DATA_INBAND,
			       VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);
	if (!ret)
		wait_for_completion_timeout(&comp_pkt.host_event, 10 * HZ);

	if (hbus->state == hv_pcibus_installed) {
		/* Remove the bus from PCI's point of view. */
		pci_lock_rescan_remove();
		pci_stop_root_bus(hbus->pci_bus);
		pci_remove_root_bus(hbus->pci_bus);
		pci_unlock_rescan_remove();
	}

	vmbus_close(hdev->channel);

	/* Delete any children which might still exist. */
	memset(&relations, 0, sizeof(relations));
	hv_pci_devices_present(hbus, &relations);

	iounmap(hbus->cfg_addr);
	hv_free_config_window(hbus);
	pci_free_resource_list(&hbus->resources_for_children);
	hv_pci_free_bridge_windows(hbus);
	irq_domain_remove(hbus->irq_domain);
	irq_domain_free_fwnode(hbus->sysdata.fwnode);
	put_hvpcibus(hbus);
	wait_for_completion(&hbus->remove_event);
	kfree(hbus);
	return 0;
}

static const struct hv_vmbus_device_id hv_pci_id_table[] = {
	/* PCI Pass-through Class ID */
	/* 44C4F61D-4444-4400-9D52-802E27EDE19F */
	{ HV_PCIE_GUID, },
	{ },
};

MODULE_DEVICE_TABLE(vmbus, hv_pci_id_table);

static struct hv_driver hv_pci_drv = {
	.name		= "hv_pci",
	.id_table	= hv_pci_id_table,
	.probe		= hv_pci_probe,
	.remove		= hv_pci_remove,
};

static void __exit exit_hv_pci_drv(void)
{
	vmbus_driver_unregister(&hv_pci_drv);
}

static int __init init_hv_pci_drv(void)
{
	return vmbus_driver_register(&hv_pci_drv);
}

module_init(init_hv_pci_drv);
module_exit(exit_hv_pci_drv);

MODULE_DESCRIPTION("Hyper-V PCI");
MODULE_LICENSE("GPL v2");
drivers/pci/host/pci-imx6.c
···
 	struct pcie_port	pp;
 	struct regmap		*iomuxc_gpr;
 	void __iomem		*mem_base;
+	u32			tx_deemph_gen1;
+	u32			tx_deemph_gen2_3p5db;
+	u32			tx_deemph_gen2_6db;
+	u32			tx_swing_full;
+	u32			tx_swing_low;
 };
 
 /* PCIe Root Complex registers (memory-mapped) */
···
 	return 0;
 }
 
+static void imx6_pcie_reset_phy(struct pcie_port *pp)
+{
+	u32 tmp;
+
+	pcie_phy_read(pp->dbi_base, PHY_RX_OVRD_IN_LO, &tmp);
+	tmp |= (PHY_RX_OVRD_IN_LO_RX_DATA_EN |
+		PHY_RX_OVRD_IN_LO_RX_PLL_EN);
+	pcie_phy_write(pp->dbi_base, PHY_RX_OVRD_IN_LO, tmp);
+
+	usleep_range(2000, 3000);
+
+	pcie_phy_read(pp->dbi_base, PHY_RX_OVRD_IN_LO, &tmp);
+	tmp &= ~(PHY_RX_OVRD_IN_LO_RX_DATA_EN |
+		 PHY_RX_OVRD_IN_LO_RX_PLL_EN);
+	pcie_phy_write(pp->dbi_base, PHY_RX_OVRD_IN_LO, tmp);
+}
+
 /* Added for PCI abort handling */
 static int imx6q_pcie_abort_handler(unsigned long addr,
 		unsigned int fsr, struct pt_regs *regs)
···
 			   IMX6Q_GPR12_LOS_LEVEL, 9 << 4);
 
 	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
-			   IMX6Q_GPR8_TX_DEEMPH_GEN1, 0 << 0);
+			   IMX6Q_GPR8_TX_DEEMPH_GEN1,
+			   imx6_pcie->tx_deemph_gen1 << 0);
 	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
-			   IMX6Q_GPR8_TX_DEEMPH_GEN2_3P5DB, 0 << 6);
+			   IMX6Q_GPR8_TX_DEEMPH_GEN2_3P5DB,
+			   imx6_pcie->tx_deemph_gen2_3p5db << 6);
 	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
-			   IMX6Q_GPR8_TX_DEEMPH_GEN2_6DB, 20 << 12);
+			   IMX6Q_GPR8_TX_DEEMPH_GEN2_6DB,
+			   imx6_pcie->tx_deemph_gen2_6db << 12);
 	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
-			   IMX6Q_GPR8_TX_SWING_FULL, 127 << 18);
+			   IMX6Q_GPR8_TX_SWING_FULL,
+			   imx6_pcie->tx_swing_full << 18);
 	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
-			   IMX6Q_GPR8_TX_SWING_LOW, 127 << 25);
+			   IMX6Q_GPR8_TX_SWING_LOW,
+			   imx6_pcie->tx_swing_low << 25);
 }
 
 static int imx6_pcie_wait_for_link(struct pcie_port *pp)
 {
-	unsigned int retries;
+	/* check if the link is up or not */
+	if (!dw_pcie_wait_for_link(pp))
+		return 0;
 
-	for (retries = 0; retries < 200; retries++) {
-		if (dw_pcie_link_up(pp))
-			return 0;
-		usleep_range(100, 1000);
-	}
-
-	dev_err(pp->dev, "phy link never came up\n");
 	dev_dbg(pp->dev, "DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n",
 		readl(pp->dbi_base + PCIE_PHY_DEBUG_R0),
 		readl(pp->dbi_base + PCIE_PHY_DEBUG_R1));
-	return -EINVAL;
+	return -ETIMEDOUT;
 }
 
 static int imx6_pcie_wait_for_speed_change(struct pcie_port *pp)
···
 			   IMX6Q_GPR12_PCIE_CTL_2, 1 << 10);
 
 	ret = imx6_pcie_wait_for_link(pp);
-	if (ret)
-		return ret;
+	if (ret) {
+		dev_info(pp->dev, "Link never came up\n");
+		goto err_reset_phy;
+	}
 
 	/* Allow Gen2 mode after the link is up. */
 	tmp = readl(pp->dbi_base + PCIE_RC_LCR);
···
 	ret = imx6_pcie_wait_for_speed_change(pp);
 	if (ret) {
 		dev_err(pp->dev, "Failed to bring link up!\n");
-		return ret;
+		goto err_reset_phy;
 	}
 
 	/* Make sure link training is finished as well! */
 	ret = imx6_pcie_wait_for_link(pp);
 	if (ret) {
 		dev_err(pp->dev, "Failed to bring link up!\n");
-		return ret;
+		goto err_reset_phy;
 	}
 
 	tmp = readl(pp->dbi_base + PCIE_RC_LCSR);
 	dev_dbg(pp->dev, "Link up, Gen=%i\n", (tmp >> 16) & 0xf);
+
 	return 0;
+
+err_reset_phy:
+	dev_dbg(pp->dev, "PHY DEBUG_R0=0x%08x DEBUG_R1=0x%08x\n",
+		readl(pp->dbi_base + PCIE_PHY_DEBUG_R0),
+		readl(pp->dbi_base + PCIE_PHY_DEBUG_R1));
+	imx6_pcie_reset_phy(pp);
+
+	return ret;
 }
 
 static void imx6_pcie_host_init(struct pcie_port *pp)
···
 	dw_pcie_msi_init(pp);
 }
 
-static void imx6_pcie_reset_phy(struct pcie_port *pp)
-{
-	u32 tmp;
-
-	pcie_phy_read(pp->dbi_base, PHY_RX_OVRD_IN_LO, &tmp);
-	tmp |= (PHY_RX_OVRD_IN_LO_RX_DATA_EN |
-		PHY_RX_OVRD_IN_LO_RX_PLL_EN);
-	pcie_phy_write(pp->dbi_base, PHY_RX_OVRD_IN_LO, tmp);
-
-	usleep_range(2000, 3000);
-
-	pcie_phy_read(pp->dbi_base, PHY_RX_OVRD_IN_LO, &tmp);
-	tmp &= ~(PHY_RX_OVRD_IN_LO_RX_DATA_EN |
-		PHY_RX_OVRD_IN_LO_RX_PLL_EN);
-	pcie_phy_write(pp->dbi_base, PHY_RX_OVRD_IN_LO, tmp);
-}
-
 static int imx6_pcie_link_up(struct pcie_port *pp)
 {
-	u32 rc, debug_r0, rx_valid;
-	int count = 5;
-
-	/*
-	 * Test if the PHY reports that the link is up and also that the LTSSM
-	 * training finished. There are three possible states of the link when
-	 * this code is called:
-	 * 1) The link is DOWN (unlikely)
-	 *    The link didn't come up yet for some reason. This usually means
-	 *    we have a real problem somewhere. Reset the PHY and exit. This
-	 *    state calls for inspection of the DEBUG registers.
-	 * 2) The link is UP, but still in LTSSM training
-	 *    Wait for the training to finish, which should take a very short
If the training does not finish, we have a problem and we 510 - * need to inspect the DEBUG registers. If the training does finish, 511 - * the link is up and operating correctly. 512 - * 3) The link is UP and no longer in LTSSM training 513 - * The link is up and operating correctly. 514 - */ 515 - while (1) { 516 - rc = readl(pp->dbi_base + PCIE_PHY_DEBUG_R1); 517 - if (!(rc & PCIE_PHY_DEBUG_R1_XMLH_LINK_UP)) 518 - break; 519 - if (!(rc & PCIE_PHY_DEBUG_R1_XMLH_LINK_IN_TRAINING)) 520 - return 1; 521 - if (!count--) 522 - break; 523 - dev_dbg(pp->dev, "Link is up, but still in training\n"); 524 - /* 525 - * Wait a little bit, then re-check if the link finished 526 - * the training. 527 - */ 528 - usleep_range(1000, 2000); 529 - } 530 - /* 531 - * From L0, initiate MAC entry to gen2 if EP/RC supports gen2. 532 - * Wait 2ms (LTSSM timeout is 24ms, PHY lock is ~5us in gen2). 533 - * If (MAC/LTSSM.state == Recovery.RcvrLock) 534 - * && (PHY/rx_valid==0) then pulse PHY/rx_reset. Transition 535 - * to gen2 is stuck 536 - */ 537 - pcie_phy_read(pp->dbi_base, PCIE_PHY_RX_ASIC_OUT, &rx_valid); 538 - debug_r0 = readl(pp->dbi_base + PCIE_PHY_DEBUG_R0); 539 - 540 - if (rx_valid & PCIE_PHY_RX_ASIC_OUT_VALID) 541 - return 0; 542 - 543 - if ((debug_r0 & 0x3f) != 0x0d) 544 - return 0; 545 - 546 - dev_err(pp->dev, "transition to gen2 is stuck, reset PHY!\n"); 547 - dev_dbg(pp->dev, "debug_r0=%08x debug_r1=%08x\n", debug_r0, rc); 548 - 549 - imx6_pcie_reset_phy(pp); 550 - 551 - return 0; 446 + return readl(pp->dbi_base + PCIE_PHY_DEBUG_R1) & 447 + PCIE_PHY_DEBUG_R1_XMLH_LINK_UP; 552 448 } 553 449 554 450 static struct pcie_host_ops imx6_pcie_host_ops = { ··· 524 562 struct imx6_pcie *imx6_pcie; 525 563 struct pcie_port *pp; 526 564 struct resource *dbi_base; 565 + struct device_node *node = pdev->dev.of_node; 527 566 int ret; 528 567 529 568 imx6_pcie = devm_kzalloc(&pdev->dev, sizeof(*imx6_pcie), GFP_KERNEL); ··· 576 613 dev_err(&pdev->dev, "unable to find iomuxc registers\n"); 
577 614 return PTR_ERR(imx6_pcie->iomuxc_gpr); 578 615 } 616 + 617 + /* Grab PCIe PHY Tx Settings */ 618 + if (of_property_read_u32(node, "fsl,tx-deemph-gen1", 619 + &imx6_pcie->tx_deemph_gen1)) 620 + imx6_pcie->tx_deemph_gen1 = 0; 621 + 622 + if (of_property_read_u32(node, "fsl,tx-deemph-gen2-3p5db", 623 + &imx6_pcie->tx_deemph_gen2_3p5db)) 624 + imx6_pcie->tx_deemph_gen2_3p5db = 0; 625 + 626 + if (of_property_read_u32(node, "fsl,tx-deemph-gen2-6db", 627 + &imx6_pcie->tx_deemph_gen2_6db)) 628 + imx6_pcie->tx_deemph_gen2_6db = 20; 629 + 630 + if (of_property_read_u32(node, "fsl,tx-swing-full", 631 + &imx6_pcie->tx_swing_full)) 632 + imx6_pcie->tx_swing_full = 127; 633 + 634 + if (of_property_read_u32(node, "fsl,tx-swing-low", 635 + &imx6_pcie->tx_swing_low)) 636 + imx6_pcie->tx_swing_low = 127; 579 637 580 638 ret = imx6_add_pcie_port(pp, pdev); 581 639 if (ret < 0)
+7 -6
drivers/pci/host/pci-keystone.c
··· 97 97 return 0; 98 98 } 99 99 100 - ks_dw_pcie_initiate_link_train(ks_pcie); 101 100 /* check if the link is up or not */ 102 - for (retries = 0; retries < 200; retries++) { 103 - if (dw_pcie_link_up(pp)) 104 - return 0; 105 - usleep_range(100, 1000); 101 + for (retries = 0; retries < 5; retries++) { 106 102 ks_dw_pcie_initiate_link_train(ks_pcie); 103 + if (!dw_pcie_wait_for_link(pp)) 104 + return 0; 107 105 } 108 106 109 107 dev_err(pp->dev, "phy link never came up\n"); 110 - return -EINVAL; 108 + return -ETIMEDOUT; 111 109 } 112 110 113 111 static void ks_pcie_msi_irq_handler(struct irq_desc *desc) ··· 357 359 358 360 /* initialize SerDes Phy if present */ 359 361 phy = devm_phy_get(dev, "pcie-phy"); 362 + if (PTR_ERR_OR_ZERO(phy) == -EPROBE_DEFER) 363 + return PTR_ERR(phy); 364 + 360 365 if (!IS_ERR_OR_NULL(phy)) { 361 366 ret = phy_init(phy); 362 367 if (ret < 0)
+1
drivers/pci/host/pci-layerscape.c
··· 208 208 { .compatible = "fsl,ls1021a-pcie", .data = &ls1021_drvdata }, 209 209 { .compatible = "fsl,ls1043a-pcie", .data = &ls1043_drvdata }, 210 210 { .compatible = "fsl,ls2080a-pcie", .data = &ls2080_drvdata }, 211 + { .compatible = "fsl,ls2085a-pcie", .data = &ls2080_drvdata }, 211 212 { }, 212 213 }; 213 214 MODULE_DEVICE_TABLE(of, ls_pcie_of_match);
+64 -23
drivers/pci/host/pci-tegra.c
··· 281 281 struct resource prefetch; 282 282 struct resource busn; 283 283 284 + struct { 285 + resource_size_t mem; 286 + resource_size_t io; 287 + } offset; 288 + 284 289 struct clk *pex_clk; 285 290 struct clk *afi_clk; 286 291 struct clk *pll_e; ··· 300 295 struct tegra_msi msi; 301 296 302 297 struct list_head ports; 303 - unsigned int num_ports; 304 298 u32 xbar_config; 305 299 306 300 struct regulator_bulk_data *supplies; ··· 430 426 return ERR_PTR(err); 431 427 } 432 428 433 - /* 434 - * Look up a virtual address mapping for the specified bus number. If no such 435 - * mapping exists, try to create one. 436 - */ 437 - static void __iomem *tegra_pcie_bus_map(struct tegra_pcie *pcie, 438 - unsigned int busnr) 429 + static int tegra_pcie_add_bus(struct pci_bus *bus) 439 430 { 440 - struct tegra_pcie_bus *bus; 431 + struct tegra_pcie *pcie = sys_to_pcie(bus->sysdata); 432 + struct tegra_pcie_bus *b; 441 433 442 - list_for_each_entry(bus, &pcie->buses, list) 443 - if (bus->nr == busnr) 444 - return (void __iomem *)bus->area->addr; 434 + b = tegra_pcie_bus_alloc(pcie, bus->number); 435 + if (IS_ERR(b)) 436 + return PTR_ERR(b); 445 437 446 - bus = tegra_pcie_bus_alloc(pcie, busnr); 447 - if (IS_ERR(bus)) 448 - return NULL; 438 + list_add_tail(&b->list, &pcie->buses); 449 439 450 - list_add_tail(&bus->list, &pcie->buses); 451 - 452 - return (void __iomem *)bus->area->addr; 440 + return 0; 453 441 } 454 442 455 - static void __iomem *tegra_pcie_conf_address(struct pci_bus *bus, 456 - unsigned int devfn, 457 - int where) 443 + static void tegra_pcie_remove_bus(struct pci_bus *child) 444 + { 445 + struct tegra_pcie *pcie = sys_to_pcie(child->sysdata); 446 + struct tegra_pcie_bus *bus, *tmp; 447 + 448 + list_for_each_entry_safe(bus, tmp, &pcie->buses, list) { 449 + if (bus->nr == child->number) { 450 + vunmap(bus->area->addr); 451 + list_del(&bus->list); 452 + kfree(bus); 453 + break; 454 + } 455 + } 456 + } 457 + 458 + static void __iomem *tegra_pcie_map_bus(struct 
pci_bus *bus, 459 + unsigned int devfn, 460 + int where) 458 461 { 459 462 struct tegra_pcie *pcie = sys_to_pcie(bus->sysdata); 460 463 void __iomem *addr = NULL; ··· 477 466 } 478 467 } 479 468 } else { 480 - addr = tegra_pcie_bus_map(pcie, bus->number); 469 + struct tegra_pcie_bus *b; 470 + 471 + list_for_each_entry(b, &pcie->buses, list) 472 + if (b->nr == bus->number) 473 + addr = (void __iomem *)b->area->addr; 474 + 481 475 if (!addr) { 482 476 dev_err(pcie->dev, 483 477 "failed to map cfg. space for bus %u\n", ··· 497 481 } 498 482 499 483 static struct pci_ops tegra_pcie_ops = { 500 - .map_bus = tegra_pcie_conf_address, 484 + .add_bus = tegra_pcie_add_bus, 485 + .remove_bus = tegra_pcie_remove_bus, 486 + .map_bus = tegra_pcie_map_bus, 501 487 .read = pci_generic_config_read32, 502 488 .write = pci_generic_config_write32, 503 489 }; ··· 616 598 struct tegra_pcie *pcie = sys_to_pcie(sys); 617 599 int err; 618 600 601 + sys->mem_offset = pcie->offset.mem; 602 + sys->io_offset = pcie->offset.io; 603 + 604 + err = devm_request_resource(pcie->dev, &pcie->all, &pcie->io); 605 + if (err < 0) 606 + return err; 607 + 608 + err = devm_request_resource(pcie->dev, &ioport_resource, &pcie->pio); 609 + if (err < 0) 610 + return err; 611 + 619 612 err = devm_request_resource(pcie->dev, &pcie->all, &pcie->mem); 620 613 if (err < 0) 621 614 return err; ··· 635 606 if (err) 636 607 return err; 637 608 609 + pci_add_resource_offset(&sys->resources, &pcie->pio, sys->io_offset); 638 610 pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset); 639 611 pci_add_resource_offset(&sys->resources, &pcie->prefetch, 640 612 sys->mem_offset); ··· 771 741 afi_writel(pcie, 0, AFI_FPCI_BAR5); 772 742 773 743 /* map all upstream transactions as uncached */ 774 - afi_writel(pcie, PHYS_OFFSET, AFI_CACHE_BAR0_ST); 744 + afi_writel(pcie, 0, AFI_CACHE_BAR0_ST); 775 745 afi_writel(pcie, 0, AFI_CACHE_BAR0_SZ); 776 746 afi_writel(pcie, 0, AFI_CACHE_BAR1_ST); 777 747 afi_writel(pcie, 0, 
AFI_CACHE_BAR1_SZ); ··· 1631 1601 1632 1602 switch (res.flags & IORESOURCE_TYPE_BITS) { 1633 1603 case IORESOURCE_IO: 1604 + /* Track the bus -> CPU I/O mapping offset. */ 1605 + pcie->offset.io = res.start - range.pci_addr; 1606 + 1634 1607 memcpy(&pcie->pio, &res, sizeof(res)); 1635 1608 pcie->pio.name = np->full_name; 1636 1609 ··· 1654 1621 break; 1655 1622 1656 1623 case IORESOURCE_MEM: 1624 + /* 1625 + * Track the bus -> CPU memory mapping offset. This 1626 + * assumes that the prefetchable and non-prefetchable 1627 + * regions will be the last of type IORESOURCE_MEM in 1628 + * the ranges property. 1629 + * */ 1630 + pcie->offset.mem = res.start - range.pci_addr; 1631 + 1657 1632 if (res.flags & IORESOURCE_PREFETCH) { 1658 1633 memcpy(&pcie->prefetch, &res, sizeof(res)); 1659 1634 pcie->prefetch.name = "prefetchable";
+403
drivers/pci/host/pci-thunder-ecam.c
··· 1 + /* 2 + * This file is subject to the terms and conditions of the GNU General Public 3 + * License. See the file "COPYING" in the main directory of this archive 4 + * for more details. 5 + * 6 + * Copyright (C) 2015, 2016 Cavium, Inc. 7 + */ 8 + 9 + #include <linux/kernel.h> 10 + #include <linux/module.h> 11 + #include <linux/ioport.h> 12 + #include <linux/of_pci.h> 13 + #include <linux/of.h> 14 + #include <linux/platform_device.h> 15 + 16 + #include "pci-host-common.h" 17 + 18 + /* Mapping is standard ECAM */ 19 + static void __iomem *thunder_ecam_map_bus(struct pci_bus *bus, 20 + unsigned int devfn, 21 + int where) 22 + { 23 + struct gen_pci *pci = bus->sysdata; 24 + resource_size_t idx = bus->number - pci->cfg.bus_range->start; 25 + 26 + return pci->cfg.win[idx] + ((devfn << 12) | where); 27 + } 28 + 29 + static void set_val(u32 v, int where, int size, u32 *val) 30 + { 31 + int shift = (where & 3) * 8; 32 + 33 + pr_debug("set_val %04x: %08x\n", (unsigned)(where & ~3), v); 34 + v >>= shift; 35 + if (size == 1) 36 + v &= 0xff; 37 + else if (size == 2) 38 + v &= 0xffff; 39 + *val = v; 40 + } 41 + 42 + static int handle_ea_bar(u32 e0, int bar, struct pci_bus *bus, 43 + unsigned int devfn, int where, int size, u32 *val) 44 + { 45 + void __iomem *addr; 46 + u32 v; 47 + 48 + /* Entries are 16-byte aligned; bits[2,3] select word in entry */ 49 + int where_a = where & 0xc; 50 + 51 + if (where_a == 0) { 52 + set_val(e0, where, size, val); 53 + return PCIBIOS_SUCCESSFUL; 54 + } 55 + if (where_a == 0x4) { 56 + addr = bus->ops->map_bus(bus, devfn, bar); /* BAR 0 */ 57 + if (!addr) { 58 + *val = ~0; 59 + return PCIBIOS_DEVICE_NOT_FOUND; 60 + } 61 + v = readl(addr); 62 + v &= ~0xf; 63 + v |= 2; /* EA entry-1. 
Base-L */ 64 + set_val(v, where, size, val); 65 + return PCIBIOS_SUCCESSFUL; 66 + } 67 + if (where_a == 0x8) { 68 + u32 barl_orig; 69 + u32 barl_rb; 70 + 71 + addr = bus->ops->map_bus(bus, devfn, bar); /* BAR 0 */ 72 + if (!addr) { 73 + *val = ~0; 74 + return PCIBIOS_DEVICE_NOT_FOUND; 75 + } 76 + barl_orig = readl(addr + 0); 77 + writel(0xffffffff, addr + 0); 78 + barl_rb = readl(addr + 0); 79 + writel(barl_orig, addr + 0); 80 + /* zeros in unsettable bits */ 81 + v = ~barl_rb & ~3; 82 + v |= 0xc; /* EA entry-2. Offset-L */ 83 + set_val(v, where, size, val); 84 + return PCIBIOS_SUCCESSFUL; 85 + } 86 + if (where_a == 0xc) { 87 + addr = bus->ops->map_bus(bus, devfn, bar + 4); /* BAR 1 */ 88 + if (!addr) { 89 + *val = ~0; 90 + return PCIBIOS_DEVICE_NOT_FOUND; 91 + } 92 + v = readl(addr); /* EA entry-3. Base-H */ 93 + set_val(v, where, size, val); 94 + return PCIBIOS_SUCCESSFUL; 95 + } 96 + return PCIBIOS_DEVICE_NOT_FOUND; 97 + } 98 + 99 + static int thunder_ecam_p2_config_read(struct pci_bus *bus, unsigned int devfn, 100 + int where, int size, u32 *val) 101 + { 102 + struct gen_pci *pci = bus->sysdata; 103 + int where_a = where & ~3; 104 + void __iomem *addr; 105 + u32 node_bits; 106 + u32 v; 107 + 108 + /* EA Base[63:32] may be missing some bits ... */ 109 + switch (where_a) { 110 + case 0xa8: 111 + case 0xbc: 112 + case 0xd0: 113 + case 0xe4: 114 + break; 115 + default: 116 + return pci_generic_config_read(bus, devfn, where, size, val); 117 + } 118 + 119 + addr = bus->ops->map_bus(bus, devfn, where_a); 120 + if (!addr) { 121 + *val = ~0; 122 + return PCIBIOS_DEVICE_NOT_FOUND; 123 + } 124 + 125 + v = readl(addr); 126 + 127 + /* 128 + * Bit 44 of the 64-bit Base must match the same bit in 129 + * the config space access window. Since we are working with 130 + * the high-order 32 bits, shift everything down by 32 bits. 
131 + */ 132 + node_bits = (pci->cfg.res.start >> 32) & (1 << 12); 133 + 134 + v |= node_bits; 135 + set_val(v, where, size, val); 136 + 137 + return PCIBIOS_SUCCESSFUL; 138 + } 139 + 140 + static int thunder_ecam_config_read(struct pci_bus *bus, unsigned int devfn, 141 + int where, int size, u32 *val) 142 + { 143 + u32 v; 144 + u32 vendor_device; 145 + u32 class_rev; 146 + void __iomem *addr; 147 + int cfg_type; 148 + int where_a = where & ~3; 149 + 150 + addr = bus->ops->map_bus(bus, devfn, 0xc); 151 + if (!addr) { 152 + *val = ~0; 153 + return PCIBIOS_DEVICE_NOT_FOUND; 154 + } 155 + 156 + v = readl(addr); 157 + 158 + /* Check for non type-00 header */ 159 + cfg_type = (v >> 16) & 0x7f; 160 + 161 + addr = bus->ops->map_bus(bus, devfn, 8); 162 + if (!addr) { 163 + *val = ~0; 164 + return PCIBIOS_DEVICE_NOT_FOUND; 165 + } 166 + 167 + class_rev = readl(addr); 168 + if (class_rev == 0xffffffff) 169 + goto no_emulation; 170 + 171 + if ((class_rev & 0xff) >= 8) { 172 + /* Pass-2 handling */ 173 + if (cfg_type) 174 + goto no_emulation; 175 + return thunder_ecam_p2_config_read(bus, devfn, where, 176 + size, val); 177 + } 178 + 179 + /* 180 + * All BARs have fixed addresses specified by the EA 181 + * capability; they must return zero on read. 
182 + */ 183 + if (cfg_type == 0 && 184 + ((where >= 0x10 && where < 0x2c) || 185 + (where >= 0x1a4 && where < 0x1bc))) { 186 + /* BAR or SR-IOV BAR */ 187 + *val = 0; 188 + return PCIBIOS_SUCCESSFUL; 189 + } 190 + 191 + addr = bus->ops->map_bus(bus, devfn, 0); 192 + if (!addr) { 193 + *val = ~0; 194 + return PCIBIOS_DEVICE_NOT_FOUND; 195 + } 196 + 197 + vendor_device = readl(addr); 198 + if (vendor_device == 0xffffffff) 199 + goto no_emulation; 200 + 201 + pr_debug("%04x:%04x - Fix pass#: %08x, where: %03x, devfn: %03x\n", 202 + vendor_device & 0xffff, vendor_device >> 16, class_rev, 203 + (unsigned) where, devfn); 204 + 205 + /* Check for non type-00 header */ 206 + if (cfg_type == 0) { 207 + bool has_msix; 208 + bool is_nic = (vendor_device == 0xa01e177d); 209 + bool is_tns = (vendor_device == 0xa01f177d); 210 + 211 + addr = bus->ops->map_bus(bus, devfn, 0x70); 212 + if (!addr) { 213 + *val = ~0; 214 + return PCIBIOS_DEVICE_NOT_FOUND; 215 + } 216 + /* E_CAP */ 217 + v = readl(addr); 218 + has_msix = (v & 0xff00) != 0; 219 + 220 + if (!has_msix && where_a == 0x70) { 221 + v |= 0xbc00; /* next capability is EA at 0xbc */ 222 + set_val(v, where, size, val); 223 + return PCIBIOS_SUCCESSFUL; 224 + } 225 + if (where_a == 0xb0) { 226 + addr = bus->ops->map_bus(bus, devfn, where_a); 227 + if (!addr) { 228 + *val = ~0; 229 + return PCIBIOS_DEVICE_NOT_FOUND; 230 + } 231 + v = readl(addr); 232 + if (v & 0xff00) 233 + pr_err("Bad MSIX cap header: %08x\n", v); 234 + v |= 0xbc00; /* next capability is EA at 0xbc */ 235 + set_val(v, where, size, val); 236 + return PCIBIOS_SUCCESSFUL; 237 + } 238 + if (where_a == 0xbc) { 239 + if (is_nic) 240 + v = 0x40014; /* EA last in chain, 4 entries */ 241 + else if (is_tns) 242 + v = 0x30014; /* EA last in chain, 3 entries */ 243 + else if (has_msix) 244 + v = 0x20014; /* EA last in chain, 2 entries */ 245 + else 246 + v = 0x10014; /* EA last in chain, 1 entry */ 247 + set_val(v, where, size, val); 248 + return PCIBIOS_SUCCESSFUL; 249 + } 
250 + if (where_a >= 0xc0 && where_a < 0xd0) 251 + /* EA entry-0. PP=0, BAR0 Size:3 */ 252 + return handle_ea_bar(0x80ff0003, 253 + 0x10, bus, devfn, where, 254 + size, val); 255 + if (where_a >= 0xd0 && where_a < 0xe0 && has_msix) 256 + /* EA entry-1. PP=0, BAR4 Size:3 */ 257 + return handle_ea_bar(0x80ff0043, 258 + 0x20, bus, devfn, where, 259 + size, val); 260 + if (where_a >= 0xe0 && where_a < 0xf0 && is_tns) 261 + /* EA entry-2. PP=0, BAR2, Size:3 */ 262 + return handle_ea_bar(0x80ff0023, 263 + 0x18, bus, devfn, where, 264 + size, val); 265 + if (where_a >= 0xe0 && where_a < 0xf0 && is_nic) 266 + /* EA entry-2. PP=4, VF_BAR0 (9), Size:3 */ 267 + return handle_ea_bar(0x80ff0493, 268 + 0x1a4, bus, devfn, where, 269 + size, val); 270 + if (where_a >= 0xf0 && where_a < 0x100 && is_nic) 271 + /* EA entry-3. PP=4, VF_BAR4 (d), Size:3 */ 272 + return handle_ea_bar(0x80ff04d3, 273 + 0x1b4, bus, devfn, where, 274 + size, val); 275 + } else if (cfg_type == 1) { 276 + bool is_rsl_bridge = devfn == 0x08; 277 + bool is_rad_bridge = devfn == 0xa0; 278 + bool is_zip_bridge = devfn == 0xa8; 279 + bool is_dfa_bridge = devfn == 0xb0; 280 + bool is_nic_bridge = devfn == 0x10; 281 + 282 + if (where_a == 0x70) { 283 + addr = bus->ops->map_bus(bus, devfn, where_a); 284 + if (!addr) { 285 + *val = ~0; 286 + return PCIBIOS_DEVICE_NOT_FOUND; 287 + } 288 + v = readl(addr); 289 + if (v & 0xff00) 290 + pr_err("Bad PCIe cap header: %08x\n", v); 291 + v |= 0xbc00; /* next capability is EA at 0xbc */ 292 + set_val(v, where, size, val); 293 + return PCIBIOS_SUCCESSFUL; 294 + } 295 + if (where_a == 0xbc) { 296 + if (is_nic_bridge) 297 + v = 0x10014; /* EA last in chain, 1 entry */ 298 + else 299 + v = 0x00014; /* EA last in chain, no entries */ 300 + set_val(v, where, size, val); 301 + return PCIBIOS_SUCCESSFUL; 302 + } 303 + if (where_a == 0xc0) { 304 + if (is_rsl_bridge || is_nic_bridge) 305 + v = 0x0101; /* subordinate:secondary = 1:1 */ 306 + else if (is_rad_bridge) 307 + v = 0x0202; /* 
subordinate:secondary = 2:2 */ 308 + else if (is_zip_bridge) 309 + v = 0x0303; /* subordinate:secondary = 3:3 */ 310 + else if (is_dfa_bridge) 311 + v = 0x0404; /* subordinate:secondary = 4:4 */ 312 + set_val(v, where, size, val); 313 + return PCIBIOS_SUCCESSFUL; 314 + } 315 + if (where_a == 0xc4 && is_nic_bridge) { 316 + /* Enabled, not-Write, SP=ff, PP=05, BEI=6, ES=4 */ 317 + v = 0x80ff0564; 318 + set_val(v, where, size, val); 319 + return PCIBIOS_SUCCESSFUL; 320 + } 321 + if (where_a == 0xc8 && is_nic_bridge) { 322 + v = 0x00000002; /* Base-L 64-bit */ 323 + set_val(v, where, size, val); 324 + return PCIBIOS_SUCCESSFUL; 325 + } 326 + if (where_a == 0xcc && is_nic_bridge) { 327 + v = 0xfffffffe; /* MaxOffset-L 64-bit */ 328 + set_val(v, where, size, val); 329 + return PCIBIOS_SUCCESSFUL; 330 + } 331 + if (where_a == 0xd0 && is_nic_bridge) { 332 + v = 0x00008430; /* NIC Base-H */ 333 + set_val(v, where, size, val); 334 + return PCIBIOS_SUCCESSFUL; 335 + } 336 + if (where_a == 0xd4 && is_nic_bridge) { 337 + v = 0x0000000f; /* MaxOffset-H */ 338 + set_val(v, where, size, val); 339 + return PCIBIOS_SUCCESSFUL; 340 + } 341 + } 342 + no_emulation: 343 + return pci_generic_config_read(bus, devfn, where, size, val); 344 + } 345 + 346 + static int thunder_ecam_config_write(struct pci_bus *bus, unsigned int devfn, 347 + int where, int size, u32 val) 348 + { 349 + /* 350 + * All BARs have fixed addresses; ignore BAR writes so they 351 + * don't get corrupted. 
352 + */ 353 + if ((where >= 0x10 && where < 0x2c) || 354 + (where >= 0x1a4 && where < 0x1bc)) 355 + /* BAR or SR-IOV BAR */ 356 + return PCIBIOS_SUCCESSFUL; 357 + 358 + return pci_generic_config_write(bus, devfn, where, size, val); 359 + } 360 + 361 + static struct gen_pci_cfg_bus_ops thunder_ecam_bus_ops = { 362 + .bus_shift = 20, 363 + .ops = { 364 + .map_bus = thunder_ecam_map_bus, 365 + .read = thunder_ecam_config_read, 366 + .write = thunder_ecam_config_write, 367 + } 368 + }; 369 + 370 + static const struct of_device_id thunder_ecam_of_match[] = { 371 + { .compatible = "cavium,pci-host-thunder-ecam", 372 + .data = &thunder_ecam_bus_ops }, 373 + 374 + { }, 375 + }; 376 + MODULE_DEVICE_TABLE(of, thunder_ecam_of_match); 377 + 378 + static int thunder_ecam_probe(struct platform_device *pdev) 379 + { 380 + struct device *dev = &pdev->dev; 381 + const struct of_device_id *of_id; 382 + struct gen_pci *pci = devm_kzalloc(dev, sizeof(*pci), GFP_KERNEL); 383 + 384 + if (!pci) 385 + return -ENOMEM; 386 + 387 + of_id = of_match_node(thunder_ecam_of_match, dev->of_node); 388 + pci->cfg.ops = (struct gen_pci_cfg_bus_ops *)of_id->data; 389 + 390 + return pci_host_common_probe(pdev, pci); 391 + } 392 + 393 + static struct platform_driver thunder_ecam_driver = { 394 + .driver = { 395 + .name = KBUILD_MODNAME, 396 + .of_match_table = thunder_ecam_of_match, 397 + }, 398 + .probe = thunder_ecam_probe, 399 + }; 400 + module_platform_driver(thunder_ecam_driver); 401 + 402 + MODULE_DESCRIPTION("Thunder ECAM PCI host driver"); 403 + MODULE_LICENSE("GPL v2");
+346
drivers/pci/host/pci-thunder-pem.c
··· 1 + /* 2 + * This program is free software; you can redistribute it and/or modify 3 + * it under the terms of the GNU General Public License version 2 as 4 + * published by the Free Software Foundation. 5 + * 6 + * This program is distributed in the hope that it will be useful, 7 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 8 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 9 + * GNU General Public License for more details. 10 + * 11 + * You should have received a copy of the GNU General Public License 12 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 13 + * 14 + * Copyright (C) 2015 - 2016 Cavium, Inc. 15 + */ 16 + 17 + #include <linux/kernel.h> 18 + #include <linux/module.h> 19 + #include <linux/of_address.h> 20 + #include <linux/of_pci.h> 21 + #include <linux/platform_device.h> 22 + 23 + #include "pci-host-common.h" 24 + 25 + #define PEM_CFG_WR 0x28 26 + #define PEM_CFG_RD 0x30 27 + 28 + struct thunder_pem_pci { 29 + struct gen_pci gen_pci; 30 + u32 ea_entry[3]; 31 + void __iomem *pem_reg_base; 32 + }; 33 + 34 + static void __iomem *thunder_pem_map_bus(struct pci_bus *bus, 35 + unsigned int devfn, int where) 36 + { 37 + struct gen_pci *pci = bus->sysdata; 38 + resource_size_t idx = bus->number - pci->cfg.bus_range->start; 39 + 40 + return pci->cfg.win[idx] + ((devfn << 16) | where); 41 + } 42 + 43 + static int thunder_pem_bridge_read(struct pci_bus *bus, unsigned int devfn, 44 + int where, int size, u32 *val) 45 + { 46 + u64 read_val; 47 + struct thunder_pem_pci *pem_pci; 48 + struct gen_pci *pci = bus->sysdata; 49 + 50 + pem_pci = container_of(pci, struct thunder_pem_pci, gen_pci); 51 + 52 + if (devfn != 0 || where >= 2048) { 53 + *val = ~0; 54 + return PCIBIOS_DEVICE_NOT_FOUND; 55 + } 56 + 57 + /* 58 + * 32-bit accesses only. Write the address to the low order 59 + * bits of PEM_CFG_RD, then trigger the read by reading back. 60 + * The config data lands in the upper 32-bits of PEM_CFG_RD. 
61 + */ 62 + read_val = where & ~3ull; 63 + writeq(read_val, pem_pci->pem_reg_base + PEM_CFG_RD); 64 + read_val = readq(pem_pci->pem_reg_base + PEM_CFG_RD); 65 + read_val >>= 32; 66 + 67 + /* 68 + * The config space contains some garbage, fix it up. Also 69 + * synthesize an EA capability for the BAR used by MSI-X. 70 + */ 71 + switch (where & ~3) { 72 + case 0x40: 73 + read_val &= 0xffff00ff; 74 + read_val |= 0x00007000; /* Skip MSI CAP */ 75 + break; 76 + case 0x70: /* Express Cap */ 77 + /* PME interrupt on vector 2*/ 78 + read_val |= (2u << 25); 79 + break; 80 + case 0xb0: /* MSI-X Cap */ 81 + /* TableSize=4, Next Cap is EA */ 82 + read_val &= 0xc00000ff; 83 + read_val |= 0x0003bc00; 84 + break; 85 + case 0xb4: 86 + /* Table offset=0, BIR=0 */ 87 + read_val = 0x00000000; 88 + break; 89 + case 0xb8: 90 + /* BPA offset=0xf0000, BIR=0 */ 91 + read_val = 0x000f0000; 92 + break; 93 + case 0xbc: 94 + /* EA, 1 entry, no next Cap */ 95 + read_val = 0x00010014; 96 + break; 97 + case 0xc0: 98 + /* DW2 for type-1 */ 99 + read_val = 0x00000000; 100 + break; 101 + case 0xc4: 102 + /* Entry BEI=0, PP=0x00, SP=0xff, ES=3 */ 103 + read_val = 0x80ff0003; 104 + break; 105 + case 0xc8: 106 + read_val = pem_pci->ea_entry[0]; 107 + break; 108 + case 0xcc: 109 + read_val = pem_pci->ea_entry[1]; 110 + break; 111 + case 0xd0: 112 + read_val = pem_pci->ea_entry[2]; 113 + break; 114 + default: 115 + break; 116 + } 117 + read_val >>= (8 * (where & 3)); 118 + switch (size) { 119 + case 1: 120 + read_val &= 0xff; 121 + break; 122 + case 2: 123 + read_val &= 0xffff; 124 + break; 125 + default: 126 + break; 127 + } 128 + *val = read_val; 129 + return PCIBIOS_SUCCESSFUL; 130 + } 131 + 132 + static int thunder_pem_config_read(struct pci_bus *bus, unsigned int devfn, 133 + int where, int size, u32 *val) 134 + { 135 + struct gen_pci *pci = bus->sysdata; 136 + 137 + if (bus->number < pci->cfg.bus_range->start || 138 + bus->number > pci->cfg.bus_range->end) 139 + return PCIBIOS_DEVICE_NOT_FOUND; 
140 + 141 + /* 142 + * The first device on the bus is the PEM PCIe bridge. 143 + * Special case its config access. 144 + */ 145 + if (bus->number == pci->cfg.bus_range->start) 146 + return thunder_pem_bridge_read(bus, devfn, where, size, val); 147 + 148 + return pci_generic_config_read(bus, devfn, where, size, val); 149 + } 150 + 151 + /* 152 + * Some of the w1c_bits below also include read-only or non-writable 153 + * reserved bits, this makes the code simpler and is OK as the bits 154 + * are not affected by writing zeros to them. 155 + */ 156 + static u32 thunder_pem_bridge_w1c_bits(int where) 157 + { 158 + u32 w1c_bits = 0; 159 + 160 + switch (where & ~3) { 161 + case 0x04: /* Command/Status */ 162 + case 0x1c: /* Base and I/O Limit/Secondary Status */ 163 + w1c_bits = 0xff000000; 164 + break; 165 + case 0x44: /* Power Management Control and Status */ 166 + w1c_bits = 0xfffffe00; 167 + break; 168 + case 0x78: /* Device Control/Device Status */ 169 + case 0x80: /* Link Control/Link Status */ 170 + case 0x88: /* Slot Control/Slot Status */ 171 + case 0x90: /* Root Status */ 172 + case 0xa0: /* Link Control 2 Registers/Link Status 2 */ 173 + w1c_bits = 0xffff0000; 174 + break; 175 + case 0x104: /* Uncorrectable Error Status */ 176 + case 0x110: /* Correctable Error Status */ 177 + case 0x130: /* Error Status */ 178 + case 0x160: /* Link Control 4 */ 179 + w1c_bits = 0xffffffff; 180 + break; 181 + default: 182 + break; 183 + } 184 + return w1c_bits; 185 + } 186 + 187 + static int thunder_pem_bridge_write(struct pci_bus *bus, unsigned int devfn, 188 + int where, int size, u32 val) 189 + { 190 + struct gen_pci *pci = bus->sysdata; 191 + struct thunder_pem_pci *pem_pci; 192 + u64 write_val, read_val; 193 + u32 mask = 0; 194 + 195 + pem_pci = container_of(pci, struct thunder_pem_pci, gen_pci); 196 + 197 + if (devfn != 0 || where >= 2048) 198 + return PCIBIOS_DEVICE_NOT_FOUND; 199 + 200 + /* 201 + * 32-bit accesses only. 
	 * If the write is for a size smaller
	 * than 32-bits, we must first read the 32-bit value and merge
	 * in the desired bits and then write the whole 32-bits back
	 * out.
	 */
	switch (size) {
	case 1:
		read_val = where & ~3ull;
		writeq(read_val, pem_pci->pem_reg_base + PEM_CFG_RD);
		read_val = readq(pem_pci->pem_reg_base + PEM_CFG_RD);
		read_val >>= 32;
		mask = ~(0xff << (8 * (where & 3)));
		read_val &= mask;
		val = (val & 0xff) << (8 * (where & 3));
		val |= (u32)read_val;
		break;
	case 2:
		read_val = where & ~3ull;
		writeq(read_val, pem_pci->pem_reg_base + PEM_CFG_RD);
		read_val = readq(pem_pci->pem_reg_base + PEM_CFG_RD);
		read_val >>= 32;
		mask = ~(0xffff << (8 * (where & 3)));
		read_val &= mask;
		val = (val & 0xffff) << (8 * (where & 3));
		val |= (u32)read_val;
		break;
	default:
		break;
	}

	/*
	 * By expanding the write width to 32 bits, we may
	 * inadvertently hit some W1C bits that were not intended to
	 * be written.  Calculate the mask that must be applied to the
	 * data to be written to avoid these cases.
	 */
	if (mask) {
		u32 w1c_bits = thunder_pem_bridge_w1c_bits(where);

		if (w1c_bits) {
			mask &= w1c_bits;
			val &= ~mask;
		}
	}

	/*
	 * Low order bits are the config address, the high order 32
	 * bits are the data to be written.
	 */
	write_val = where & ~3ull;
	write_val |= ((u64)val) << 32;
	writeq(write_val, pem_pci->pem_reg_base + PEM_CFG_WR);
	return PCIBIOS_SUCCESSFUL;
}

static int thunder_pem_config_write(struct pci_bus *bus, unsigned int devfn,
				    int where, int size, u32 val)
{
	struct gen_pci *pci = bus->sysdata;

	if (bus->number < pci->cfg.bus_range->start ||
	    bus->number > pci->cfg.bus_range->end)
		return PCIBIOS_DEVICE_NOT_FOUND;

	/*
	 * The first device on the bus is the PEM PCIe bridge.
	 * Special case its config access.
	 */
	if (bus->number == pci->cfg.bus_range->start)
		return thunder_pem_bridge_write(bus, devfn, where, size, val);

	return pci_generic_config_write(bus, devfn, where, size, val);
}

static struct gen_pci_cfg_bus_ops thunder_pem_bus_ops = {
	.bus_shift	= 24,
	.ops		= {
		.map_bus	= thunder_pem_map_bus,
		.read		= thunder_pem_config_read,
		.write		= thunder_pem_config_write,
	}
};

static const struct of_device_id thunder_pem_of_match[] = {
	{ .compatible = "cavium,pci-host-thunder-pem",
	  .data = &thunder_pem_bus_ops },
	{ },
};
MODULE_DEVICE_TABLE(of, thunder_pem_of_match);

static int thunder_pem_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	const struct of_device_id *of_id;
	resource_size_t bar4_start;
	struct resource *res_pem;
	struct thunder_pem_pci *pem_pci;

	pem_pci = devm_kzalloc(dev, sizeof(*pem_pci), GFP_KERNEL);
	if (!pem_pci)
		return -ENOMEM;

	of_id = of_match_node(thunder_pem_of_match, dev->of_node);
	pem_pci->gen_pci.cfg.ops = (struct gen_pci_cfg_bus_ops *)of_id->data;

	/*
	 * The second register range is the PEM bridge to the PCIe
	 * bus.  It has a different config access method than those
	 * devices behind the bridge.
	 */
	res_pem = platform_get_resource(pdev, IORESOURCE_MEM, 1);
	if (!res_pem) {
		dev_err(dev, "missing \"reg[1]\" property\n");
		return -EINVAL;
	}

	pem_pci->pem_reg_base = devm_ioremap(dev, res_pem->start, 0x10000);
	if (!pem_pci->pem_reg_base)
		return -ENOMEM;

	/*
	 * The MSI-X BAR for the PEM and AER interrupts is located at
	 * a fixed offset from the PEM register base.  Generate a
	 * fragment of the synthesized Enhanced Allocation capability
	 * structure here for the BAR.
	 */
	bar4_start = res_pem->start + 0xf00000;
	pem_pci->ea_entry[0] = (u32)bar4_start | 2;
	pem_pci->ea_entry[1] = (u32)(res_pem->end - bar4_start) & ~3u;
	pem_pci->ea_entry[2] = (u32)(bar4_start >> 32);

	return pci_host_common_probe(pdev, &pem_pci->gen_pci);
}

static struct platform_driver thunder_pem_driver = {
	.driver = {
		.name = KBUILD_MODNAME,
		.of_match_table = thunder_pem_of_match,
	},
	.probe = thunder_pem_probe,
};
module_platform_driver(thunder_pem_driver);

MODULE_DESCRIPTION("Thunder PEM PCIe host driver");
MODULE_LICENSE("GPL v2");
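The read-merge-write dance in thunder_pem_bridge_write() can be sketched in isolation. This is a minimal model, not the driver code: `merge_partial_write` is a hypothetical helper name, and `cur` stands in for the 32-bit value read back over PEM_CFG_RD.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch of the merge done by thunder_pem_bridge_write():
 * the hardware only accepts aligned 32-bit config writes, so a 1- or
 * 2-byte write must be folded into the current 32-bit value at the
 * aligned offset.  "where" is the config offset; its low 2 bits pick
 * the byte lane.
 */
static uint32_t merge_partial_write(uint32_t cur, uint32_t val,
				    int where, int size)
{
	uint32_t shift = 8 * (where & 3);
	uint32_t mask;

	switch (size) {
	case 1:
		mask = ~(0xffu << shift);	/* keep the other 3 bytes */
		return (cur & mask) | ((val & 0xff) << shift);
	case 2:
		mask = ~(0xffffu << shift);	/* keep the other 2 bytes */
		return (cur & mask) | ((val & 0xffff) << shift);
	default:
		return val;			/* full 32-bit write */
	}
}
```

The W1C masking that follows in the driver exists precisely because this widening can touch bytes the caller never asked to write.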
+2 -1
drivers/pci/host/pcie-altera.c
···
 #define P2A_INT_ENABLE		0x3070
 #define P2A_INT_ENA_ALL		0xf
 #define RP_LTSSM		0x3c64
+#define RP_LTSSM_MASK		0x1f
 #define LTSSM_L0		0xf

 /* TLP configuration type 0 and 1 */
···

 static bool altera_pcie_link_is_up(struct altera_pcie *pcie)
 {
-	return !!(cra_readl(pcie, RP_LTSSM) & LTSSM_L0);
+	return !!((cra_readl(pcie, RP_LTSSM) & RP_LTSSM_MASK) == LTSSM_L0);
 }

 static bool altera_pcie_valid_config(struct altera_pcie *pcie,
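The Altera fix above changes a bit-test into a masked equality compare. Since LTSSM_L0 is 0xf, the old `& LTSSM_L0` test reports "link up" for any LTSSM state value sharing a low bit with 0xf; only comparing the whole 5-bit state field against L0 is correct. A minimal sketch (the 0x0d test value is an arbitrary non-L0 state, chosen for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define RP_LTSSM_MASK	0x1f	/* 5-bit LTSSM state field */
#define LTSSM_L0	0xf	/* fully-up state */

/* Old check: any state sharing bits with 0xf looks "up". */
static int link_up_old(uint32_t ltssm)
{
	return !!(ltssm & LTSSM_L0);
}

/* New check: the masked state field must equal L0 exactly. */
static int link_up_new(uint32_t ltssm)
{
	return (ltssm & RP_LTSSM_MASK) == LTSSM_L0;
}
```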
+138
drivers/pci/host/pcie-designware-plat.c
···
+/*
+ * PCIe RC driver for Synopsys DesignWare Core
+ *
+ * Copyright (C) 2015-2016 Synopsys, Inc. (www.synopsys.com)
+ *
+ * Authors: Joao Pinto <jpinto@synopsys.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/gpio.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_gpio.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/resource.h>
+#include <linux/signal.h>
+#include <linux/types.h>
+
+#include "pcie-designware.h"
+
+struct dw_plat_pcie {
+	void __iomem		*mem_base;
+	struct pcie_port	pp;
+};
+
+static irqreturn_t dw_plat_pcie_msi_irq_handler(int irq, void *arg)
+{
+	struct pcie_port *pp = arg;
+
+	return dw_handle_msi_irq(pp);
+}
+
+static void dw_plat_pcie_host_init(struct pcie_port *pp)
+{
+	dw_pcie_setup_rc(pp);
+	dw_pcie_wait_for_link(pp);
+
+	if (IS_ENABLED(CONFIG_PCI_MSI))
+		dw_pcie_msi_init(pp);
+}
+
+static struct pcie_host_ops dw_plat_pcie_host_ops = {
+	.host_init = dw_plat_pcie_host_init,
+};
+
+static int dw_plat_add_pcie_port(struct pcie_port *pp,
+				 struct platform_device *pdev)
+{
+	int ret;
+
+	pp->irq = platform_get_irq(pdev, 1);
+	if (pp->irq < 0)
+		return pp->irq;
+
+	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+		pp->msi_irq = platform_get_irq(pdev, 0);
+		if (pp->msi_irq < 0)
+			return pp->msi_irq;
+
+		ret = devm_request_irq(&pdev->dev, pp->msi_irq,
+				       dw_plat_pcie_msi_irq_handler,
+				       IRQF_SHARED, "dw-plat-pcie-msi", pp);
+		if (ret) {
+			dev_err(&pdev->dev, "failed to request MSI IRQ\n");
+			return ret;
+		}
+	}
+
+	pp->root_bus_nr = -1;
+	pp->ops = &dw_plat_pcie_host_ops;
+
+	ret = dw_pcie_host_init(pp);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to initialize host\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int dw_plat_pcie_probe(struct platform_device *pdev)
+{
+	struct dw_plat_pcie *dw_plat_pcie;
+	struct pcie_port *pp;
+	struct resource *res;  /* Resource from DT */
+	int ret;
+
+	dw_plat_pcie = devm_kzalloc(&pdev->dev, sizeof(*dw_plat_pcie),
+				    GFP_KERNEL);
+	if (!dw_plat_pcie)
+		return -ENOMEM;
+
+	pp = &dw_plat_pcie->pp;
+	pp->dev = &pdev->dev;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	if (!res)
+		return -ENODEV;
+
+	dw_plat_pcie->mem_base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(dw_plat_pcie->mem_base))
+		return PTR_ERR(dw_plat_pcie->mem_base);
+
+	pp->dbi_base = dw_plat_pcie->mem_base;
+
+	ret = dw_plat_add_pcie_port(pp, pdev);
+	if (ret < 0)
+		return ret;
+
+	platform_set_drvdata(pdev, dw_plat_pcie);
+	return 0;
+}
+
+static const struct of_device_id dw_plat_pcie_of_match[] = {
+	{ .compatible = "snps,dw-pcie", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, dw_plat_pcie_of_match);
+
+static struct platform_driver dw_plat_pcie_driver = {
+	.driver = {
+		.name	= "dw-pcie",
+		.of_match_table = dw_plat_pcie_of_match,
+	},
+	.probe = dw_plat_pcie_probe,
+};
+
+module_platform_driver(dw_plat_pcie_driver);
+
+MODULE_AUTHOR("Joao Pinto <Joao.Pinto@synopsys.com>");
+MODULE_DESCRIPTION("Synopsys PCIe host controller glue platform driver");
+MODULE_LICENSE("GPL v2");
+37 -7
drivers/pci/host/pcie-designware.c
···
 #include <linux/pci_regs.h>
 #include <linux/platform_device.h>
 #include <linux/types.h>
+#include <linux/delay.h>

 #include "pcie-designware.h"
···
 #define PCIE_ATU_DEV(x)			(((x) & 0x1f) << 19)
 #define PCIE_ATU_FUNC(x)		(((x) & 0x7) << 16)
 #define PCIE_ATU_UPPER_TARGET		0x91C
+
+/* PCIe Port Logic registers */
+#define PLR_OFFSET			0x700
+#define PCIE_PHY_DEBUG_R1		(PLR_OFFSET + 0x2c)
+#define PCIE_PHY_DEBUG_R1_LINK_UP	0x00000010

 static struct pci_ops dw_pcie_ops;
···
 	.teardown_irq = dw_msi_teardown_irq,
 };

+int dw_pcie_wait_for_link(struct pcie_port *pp)
+{
+	int retries;
+
+	/* Check if the link is up or not */
+	for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
+		if (dw_pcie_link_up(pp)) {
+			dev_info(pp->dev, "link up\n");
+			return 0;
+		}
+		usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
+	}
+
+	dev_err(pp->dev, "phy link never came up\n");
+
+	return -ETIMEDOUT;
+}
+
 int dw_pcie_link_up(struct pcie_port *pp)
 {
+	u32 val;
+
 	if (pp->ops->link_up)
 		return pp->ops->link_up(pp);

-	return 0;
+	val = readl(pp->dbi_base + PCIE_PHY_DEBUG_R1);
+	return val & PCIE_PHY_DEBUG_R1_LINK_UP;
 }

 static int dw_pcie_msi_map(struct irq_domain *domain, unsigned int irq,
···
 	if (pp->ops->host_init)
 		pp->ops->host_init(pp);

+	/*
+	 * If the platform provides ->rd_other_conf, it means the
+	 * platform uses its own address translation component rather
+	 * than ATU, so we should not program the ATU here.
+	 */
 	if (!pp->ops->rd_other_conf)
 		dw_pcie_prog_outbound_atu(pp, PCIE_ATU_REGION_INDEX1,
 					  PCIE_ATU_TYPE_MEM, pp->mem_base,
···
 	pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci);
 #endif

-	if (!pci_has_flag(PCI_PROBE_ONLY)) {
-		pci_bus_size_bridges(bus);
-		pci_bus_assign_resources(bus);
+	pci_bus_size_bridges(bus);
+	pci_bus_assign_resources(bus);

-		list_for_each_entry(child, &bus->children, node)
-			pcie_bus_configure_settings(child);
-	}
+	list_for_each_entry(child, &bus->children, node)
+		pcie_bus_configure_settings(child);

 	pci_bus_add_devices(bus);
 	return 0;
+6
drivers/pci/host/pcie-designware.h
···
 #define MAX_MSI_IRQS			32
 #define MAX_MSI_CTRLS			(MAX_MSI_IRQS / 32)

+/* Parameters for the waiting for link up routine */
+#define LINK_WAIT_MAX_RETRIES		10
+#define LINK_WAIT_USLEEP_MIN		90000
+#define LINK_WAIT_USLEEP_MAX		100000
+
 struct pcie_port {
 	struct device		*dev;
 	u8			root_bus_nr;
···
 int dw_pcie_cfg_write(void __iomem *addr, int size, u32 val);
 irqreturn_t dw_handle_msi_irq(struct pcie_port *pp);
 void dw_pcie_msi_init(struct pcie_port *pp);
+int dw_pcie_wait_for_link(struct pcie_port *pp);
 int dw_pcie_link_up(struct pcie_port *pp);
 void dw_pcie_setup_rc(struct pcie_port *pp);
 int dw_pcie_host_init(struct pcie_port *pp);
+1 -11
drivers/pci/host/pcie-qcom.c
···

 static int qcom_pcie_establish_link(struct qcom_pcie *pcie)
 {
-	struct device *dev = pcie->dev;
-	unsigned int retries = 0;
 	u32 val;

 	if (dw_pcie_link_up(&pcie->pp))
···
 	val |= PCIE20_ELBI_SYS_CTRL_LT_ENABLE;
 	writel(val, pcie->elbi + PCIE20_ELBI_SYS_CTRL);

-	do {
-		if (dw_pcie_link_up(&pcie->pp))
-			return 0;
-		usleep_range(250, 1000);
-	} while (retries < 200);
-
-	dev_warn(dev, "phy link never came up\n");
-
-	return -ETIMEDOUT;
+	return dw_pcie_wait_for_link(&pcie->pp);
 }

 static int qcom_pcie_get_resources_v0(struct qcom_pcie *pcie)
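The qcom and spear13xx conversions both collapse an open-coded retry loop into the new dw_pcie_wait_for_link() helper. The underlying pattern is a bounded poll: try a predicate up to LINK_WAIT_MAX_RETRIES times, sleeping between attempts, and time out otherwise. A hedged, hardware-free sketch (wait_for_link, up_after_three, and run_demo are hypothetical names; the real helper sleeps with usleep_range() and returns -ETIMEDOUT):

```c
#include <assert.h>
#include <stddef.h>

#define LINK_WAIT_MAX_RETRIES	10

/*
 * Model of dw_pcie_wait_for_link(): poll a caller-supplied predicate
 * instead of real PHY state; the inter-poll sleep is elided here.
 */
static int wait_for_link(int (*link_up)(void *), void *ctx)
{
	int retries;

	for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
		if (link_up(ctx))
			return 0;
		/* real code: usleep_range(90000, 100000); */
	}
	return -1;	/* stands in for -ETIMEDOUT */
}

/* Predicate that reports "up" on the third poll. */
static int up_after_three(void *ctx)
{
	int *calls = ctx;

	return ++(*calls) >= 3;
}

/* Predicate that never reports "up", forcing a timeout. */
static int never_up(void *ctx)
{
	(void)ctx;
	return 0;
}

static int run_demo(void)
{
	int calls = 0;

	return wait_for_link(up_after_three, &calls);
}
```

With 10 retries and a 90-100 ms sleep per attempt, the helper gives the link roughly one second to come up, matching the per-driver loops it replaces.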
+5 -9
drivers/pci/host/pcie-rcar.c
···

 	rcar_pcie_setup(&res, pcie);

-	/* Do not reassign resources if probe only */
-	if (!pci_has_flag(PCI_PROBE_ONLY))
-		pci_add_flags(PCI_REASSIGN_ALL_RSRC | PCI_REASSIGN_ALL_BUS);
+	pci_add_flags(PCI_REASSIGN_ALL_RSRC | PCI_REASSIGN_ALL_BUS);

 	if (IS_ENABLED(CONFIG_PCI_MSI))
 		bus = pci_scan_root_bus_msi(pcie->dev, pcie->root_bus_nr,
···

 	pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci);

-	if (!pci_has_flag(PCI_PROBE_ONLY)) {
-		pci_bus_size_bridges(bus);
-		pci_bus_assign_resources(bus);
+	pci_bus_size_bridges(bus);
+	pci_bus_assign_resources(bus);

-		list_for_each_entry(child, &bus->children, node)
-			pcie_bus_configure_settings(child);
-	}
+	list_for_each_entry(child, &bus->children, node)
+		pcie_bus_configure_settings(child);

 	pci_bus_add_devices(bus);

+1 -13
drivers/pci/host/pcie-spear13xx.c
···
  */

 #include <linux/clk.h>
-#include <linux/delay.h>
 #include <linux/interrupt.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
···
 	struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pp);
 	struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;
 	u32 exp_cap_off = EXP_CAP_ID_OFFSET;
-	unsigned int retries;

 	if (dw_pcie_link_up(pp)) {
 		dev_err(pp->dev, "link already up\n");
···
 			| ((u32)1 << REG_TRANSLATION_ENABLE),
 			&app_reg->app_ctrl_0);

-	/* check if the link is up or not */
-	for (retries = 0; retries < 10; retries++) {
-		if (dw_pcie_link_up(pp)) {
-			dev_info(pp->dev, "link up\n");
-			return 0;
-		}
-		mdelay(100);
-	}
-
-	dev_err(pp->dev, "link Fail\n");
-	return -EINVAL;
+	return dw_pcie_wait_for_link(pp);
 }

 static irqreturn_t spear13xx_pcie_irq_handler(int irq, void *arg)
+881
drivers/pci/host/pcie-xilinx-nwl.c
··· 1 + /* 2 + * PCIe host controller driver for NWL PCIe Bridge 3 + * Based on pcie-xilinx.c, pci-tegra.c 4 + * 5 + * (C) Copyright 2014 - 2015, Xilinx, Inc. 6 + * 7 + * This program is free software: you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License as published by 9 + * the Free Software Foundation, either version 2 of the License, or 10 + * (at your option) any later version. 11 + */ 12 + 13 + #include <linux/delay.h> 14 + #include <linux/interrupt.h> 15 + #include <linux/irq.h> 16 + #include <linux/irqdomain.h> 17 + #include <linux/kernel.h> 18 + #include <linux/module.h> 19 + #include <linux/msi.h> 20 + #include <linux/of_address.h> 21 + #include <linux/of_pci.h> 22 + #include <linux/of_platform.h> 23 + #include <linux/of_irq.h> 24 + #include <linux/pci.h> 25 + #include <linux/platform_device.h> 26 + #include <linux/irqchip/chained_irq.h> 27 + 28 + /* Bridge core config registers */ 29 + #define BRCFG_PCIE_RX0 0x00000000 30 + #define BRCFG_INTERRUPT 0x00000010 31 + #define BRCFG_PCIE_RX_MSG_FILTER 0x00000020 32 + 33 + /* Egress - Bridge translation registers */ 34 + #define E_BREG_CAPABILITIES 0x00000200 35 + #define E_BREG_CONTROL 0x00000208 36 + #define E_BREG_BASE_LO 0x00000210 37 + #define E_BREG_BASE_HI 0x00000214 38 + #define E_ECAM_CAPABILITIES 0x00000220 39 + #define E_ECAM_CONTROL 0x00000228 40 + #define E_ECAM_BASE_LO 0x00000230 41 + #define E_ECAM_BASE_HI 0x00000234 42 + 43 + /* Ingress - address translations */ 44 + #define I_MSII_CAPABILITIES 0x00000300 45 + #define I_MSII_CONTROL 0x00000308 46 + #define I_MSII_BASE_LO 0x00000310 47 + #define I_MSII_BASE_HI 0x00000314 48 + 49 + #define I_ISUB_CONTROL 0x000003E8 50 + #define SET_ISUB_CONTROL BIT(0) 51 + /* Rxed msg fifo - Interrupt status registers */ 52 + #define MSGF_MISC_STATUS 0x00000400 53 + #define MSGF_MISC_MASK 0x00000404 54 + #define MSGF_LEG_STATUS 0x00000420 55 + #define MSGF_LEG_MASK 0x00000424 56 + #define MSGF_MSI_STATUS_LO 0x00000440 57 
+ #define MSGF_MSI_STATUS_HI 0x00000444 58 + #define MSGF_MSI_MASK_LO 0x00000448 59 + #define MSGF_MSI_MASK_HI 0x0000044C 60 + 61 + /* Msg filter mask bits */ 62 + #define CFG_ENABLE_PM_MSG_FWD BIT(1) 63 + #define CFG_ENABLE_INT_MSG_FWD BIT(2) 64 + #define CFG_ENABLE_ERR_MSG_FWD BIT(3) 65 + #define CFG_ENABLE_SLT_MSG_FWD BIT(5) 66 + #define CFG_ENABLE_VEN_MSG_FWD BIT(7) 67 + #define CFG_ENABLE_OTH_MSG_FWD BIT(13) 68 + #define CFG_ENABLE_VEN_MSG_EN BIT(14) 69 + #define CFG_ENABLE_VEN_MSG_VEN_INV BIT(15) 70 + #define CFG_ENABLE_VEN_MSG_VEN_ID GENMASK(31, 16) 71 + #define CFG_ENABLE_MSG_FILTER_MASK (CFG_ENABLE_PM_MSG_FWD | \ 72 + CFG_ENABLE_INT_MSG_FWD | \ 73 + CFG_ENABLE_ERR_MSG_FWD | \ 74 + CFG_ENABLE_SLT_MSG_FWD | \ 75 + CFG_ENABLE_VEN_MSG_FWD | \ 76 + CFG_ENABLE_OTH_MSG_FWD | \ 77 + CFG_ENABLE_VEN_MSG_EN | \ 78 + CFG_ENABLE_VEN_MSG_VEN_INV | \ 79 + CFG_ENABLE_VEN_MSG_VEN_ID) 80 + 81 + /* Misc interrupt status mask bits */ 82 + #define MSGF_MISC_SR_RXMSG_AVAIL BIT(0) 83 + #define MSGF_MISC_SR_RXMSG_OVER BIT(1) 84 + #define MSGF_MISC_SR_SLAVE_ERR BIT(4) 85 + #define MSGF_MISC_SR_MASTER_ERR BIT(5) 86 + #define MSGF_MISC_SR_I_ADDR_ERR BIT(6) 87 + #define MSGF_MISC_SR_E_ADDR_ERR BIT(7) 88 + #define MSGF_MISC_SR_UR_DETECT BIT(20) 89 + 90 + #define MSGF_MISC_SR_PCIE_CORE GENMASK(18, 16) 91 + #define MSGF_MISC_SR_PCIE_CORE_ERR GENMASK(31, 22) 92 + 93 + #define MSGF_MISC_SR_MASKALL (MSGF_MISC_SR_RXMSG_AVAIL | \ 94 + MSGF_MISC_SR_RXMSG_OVER | \ 95 + MSGF_MISC_SR_SLAVE_ERR | \ 96 + MSGF_MISC_SR_MASTER_ERR | \ 97 + MSGF_MISC_SR_I_ADDR_ERR | \ 98 + MSGF_MISC_SR_E_ADDR_ERR | \ 99 + MSGF_MISC_SR_UR_DETECT | \ 100 + MSGF_MISC_SR_PCIE_CORE | \ 101 + MSGF_MISC_SR_PCIE_CORE_ERR) 102 + 103 + /* Legacy interrupt status mask bits */ 104 + #define MSGF_LEG_SR_INTA BIT(0) 105 + #define MSGF_LEG_SR_INTB BIT(1) 106 + #define MSGF_LEG_SR_INTC BIT(2) 107 + #define MSGF_LEG_SR_INTD BIT(3) 108 + #define MSGF_LEG_SR_MASKALL (MSGF_LEG_SR_INTA | MSGF_LEG_SR_INTB | \ 109 + MSGF_LEG_SR_INTC | 
MSGF_LEG_SR_INTD)

+/* MSI interrupt status mask bits */
+#define MSGF_MSI_SR_LO_MASK		BIT(0)
+#define MSGF_MSI_SR_HI_MASK		BIT(0)
+
+#define MSII_PRESENT			BIT(0)
+#define MSII_ENABLE			BIT(0)
+#define MSII_STATUS_ENABLE		BIT(15)
+
+/* Bridge config interrupt mask */
+#define BRCFG_INTERRUPT_MASK		BIT(0)
+#define BREG_PRESENT			BIT(0)
+#define BREG_ENABLE			BIT(0)
+#define BREG_ENABLE_FORCE		BIT(1)
+
+/* E_ECAM status mask bits */
+#define E_ECAM_PRESENT			BIT(0)
+#define E_ECAM_CR_ENABLE		BIT(0)
+#define E_ECAM_SIZE_LOC			GENMASK(20, 16)
+#define E_ECAM_SIZE_SHIFT		16
+#define ECAM_BUS_LOC_SHIFT		20
+#define ECAM_DEV_LOC_SHIFT		12
+#define NWL_ECAM_VALUE_DEFAULT		12
+
+#define CFG_DMA_REG_BAR			GENMASK(2, 0)
+
+#define INT_PCI_MSI_NR			(2 * 32)
+#define INTX_NUM			4
+
+/* Reading the PS_LINKUP */
+#define PS_LINKUP_OFFSET		0x00000238
+#define PCIE_PHY_LINKUP_BIT		BIT(0)
+#define PHY_RDY_LINKUP_BIT		BIT(1)
+
+/* Parameters for the waiting for link up routine */
+#define LINK_WAIT_MAX_RETRIES		10
+#define LINK_WAIT_USLEEP_MIN		90000
+#define LINK_WAIT_USLEEP_MAX		100000
+
+struct nwl_msi {			/* MSI information */
+	struct irq_domain *msi_domain;
+	unsigned long *bitmap;
+	struct irq_domain *dev_domain;
+	struct mutex lock;		/* protect bitmap variable */
+	int irq_msi0;
+	int irq_msi1;
+};
+
+struct nwl_pcie {
+	struct device *dev;
+	void __iomem *breg_base;
+	void __iomem *pcireg_base;
+	void __iomem *ecam_base;
+	phys_addr_t phys_breg_base;	/* Physical Bridge Register Base */
+	phys_addr_t phys_pcie_reg_base;	/* Physical PCIe Controller Base */
+	phys_addr_t phys_ecam_base;	/* Physical Configuration Base */
+	u32 breg_size;
+	u32 pcie_reg_size;
+	u32 ecam_size;
+	int irq_intx;
+	int irq_misc;
+	u32 ecam_value;
+	u8
last_busno; 173 + u8 root_busno; 174 + struct nwl_msi msi; 175 + struct irq_domain *legacy_irq_domain; 176 + }; 177 + 178 + static inline u32 nwl_bridge_readl(struct nwl_pcie *pcie, u32 off) 179 + { 180 + return readl(pcie->breg_base + off); 181 + } 182 + 183 + static inline void nwl_bridge_writel(struct nwl_pcie *pcie, u32 val, u32 off) 184 + { 185 + writel(val, pcie->breg_base + off); 186 + } 187 + 188 + static bool nwl_pcie_link_up(struct nwl_pcie *pcie) 189 + { 190 + if (readl(pcie->pcireg_base + PS_LINKUP_OFFSET) & PCIE_PHY_LINKUP_BIT) 191 + return true; 192 + return false; 193 + } 194 + 195 + static bool nwl_phy_link_up(struct nwl_pcie *pcie) 196 + { 197 + if (readl(pcie->pcireg_base + PS_LINKUP_OFFSET) & PHY_RDY_LINKUP_BIT) 198 + return true; 199 + return false; 200 + } 201 + 202 + static int nwl_wait_for_link(struct nwl_pcie *pcie) 203 + { 204 + int retries; 205 + 206 + /* check if the link is up or not */ 207 + for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) { 208 + if (nwl_phy_link_up(pcie)) 209 + return 0; 210 + usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX); 211 + } 212 + 213 + dev_err(pcie->dev, "PHY link never came up\n"); 214 + return -ETIMEDOUT; 215 + } 216 + 217 + static bool nwl_pcie_valid_device(struct pci_bus *bus, unsigned int devfn) 218 + { 219 + struct nwl_pcie *pcie = bus->sysdata; 220 + 221 + /* Check link before accessing downstream ports */ 222 + if (bus->number != pcie->root_busno) { 223 + if (!nwl_pcie_link_up(pcie)) 224 + return false; 225 + } 226 + 227 + /* Only one device down on each root port */ 228 + if (bus->number == pcie->root_busno && devfn > 0) 229 + return false; 230 + 231 + return true; 232 + } 233 + 234 + /** 235 + * nwl_pcie_map_bus - Get configuration base 236 + * 237 + * @bus: Bus structure of current bus 238 + * @devfn: Device/function 239 + * @where: Offset from base 240 + * 241 + * Return: Base address of the configuration space needed to be 242 + * accessed. 
243 + */ 244 + static void __iomem *nwl_pcie_map_bus(struct pci_bus *bus, unsigned int devfn, 245 + int where) 246 + { 247 + struct nwl_pcie *pcie = bus->sysdata; 248 + int relbus; 249 + 250 + if (!nwl_pcie_valid_device(bus, devfn)) 251 + return NULL; 252 + 253 + relbus = (bus->number << ECAM_BUS_LOC_SHIFT) | 254 + (devfn << ECAM_DEV_LOC_SHIFT); 255 + 256 + return pcie->ecam_base + relbus + where; 257 + } 258 + 259 + /* PCIe operations */ 260 + static struct pci_ops nwl_pcie_ops = { 261 + .map_bus = nwl_pcie_map_bus, 262 + .read = pci_generic_config_read, 263 + .write = pci_generic_config_write, 264 + }; 265 + 266 + static irqreturn_t nwl_pcie_misc_handler(int irq, void *data) 267 + { 268 + struct nwl_pcie *pcie = data; 269 + u32 misc_stat; 270 + 271 + /* Checking for misc interrupts */ 272 + misc_stat = nwl_bridge_readl(pcie, MSGF_MISC_STATUS) & 273 + MSGF_MISC_SR_MASKALL; 274 + if (!misc_stat) 275 + return IRQ_NONE; 276 + 277 + if (misc_stat & MSGF_MISC_SR_RXMSG_OVER) 278 + dev_err(pcie->dev, "Received Message FIFO Overflow\n"); 279 + 280 + if (misc_stat & MSGF_MISC_SR_SLAVE_ERR) 281 + dev_err(pcie->dev, "Slave error\n"); 282 + 283 + if (misc_stat & MSGF_MISC_SR_MASTER_ERR) 284 + dev_err(pcie->dev, "Master error\n"); 285 + 286 + if (misc_stat & MSGF_MISC_SR_I_ADDR_ERR) 287 + dev_err(pcie->dev, 288 + "In Misc Ingress address translation error\n"); 289 + 290 + if (misc_stat & MSGF_MISC_SR_E_ADDR_ERR) 291 + dev_err(pcie->dev, 292 + "In Misc Egress address translation error\n"); 293 + 294 + if (misc_stat & MSGF_MISC_SR_PCIE_CORE_ERR) 295 + dev_err(pcie->dev, "PCIe Core error\n"); 296 + 297 + /* Clear misc interrupt status */ 298 + nwl_bridge_writel(pcie, misc_stat, MSGF_MISC_STATUS); 299 + 300 + return IRQ_HANDLED; 301 + } 302 + 303 + static void nwl_pcie_leg_handler(struct irq_desc *desc) 304 + { 305 + struct irq_chip *chip = irq_desc_get_chip(desc); 306 + struct nwl_pcie *pcie; 307 + unsigned long status; 308 + u32 bit; 309 + u32 virq; 310 + 311 + 
chained_irq_enter(chip, desc); 312 + pcie = irq_desc_get_handler_data(desc); 313 + 314 + while ((status = nwl_bridge_readl(pcie, MSGF_LEG_STATUS) & 315 + MSGF_LEG_SR_MASKALL) != 0) { 316 + for_each_set_bit(bit, &status, INTX_NUM) { 317 + virq = irq_find_mapping(pcie->legacy_irq_domain, 318 + bit + 1); 319 + if (virq) 320 + generic_handle_irq(virq); 321 + } 322 + } 323 + 324 + chained_irq_exit(chip, desc); 325 + } 326 + 327 + static void nwl_pcie_handle_msi_irq(struct nwl_pcie *pcie, u32 status_reg) 328 + { 329 + struct nwl_msi *msi; 330 + unsigned long status; 331 + u32 bit; 332 + u32 virq; 333 + 334 + msi = &pcie->msi; 335 + 336 + while ((status = nwl_bridge_readl(pcie, status_reg)) != 0) { 337 + for_each_set_bit(bit, &status, 32) { 338 + nwl_bridge_writel(pcie, 1 << bit, status_reg); 339 + virq = irq_find_mapping(msi->dev_domain, bit); 340 + if (virq) 341 + generic_handle_irq(virq); 342 + } 343 + } 344 + } 345 + 346 + static void nwl_pcie_msi_handler_high(struct irq_desc *desc) 347 + { 348 + struct irq_chip *chip = irq_desc_get_chip(desc); 349 + struct nwl_pcie *pcie = irq_desc_get_handler_data(desc); 350 + 351 + chained_irq_enter(chip, desc); 352 + nwl_pcie_handle_msi_irq(pcie, MSGF_MSI_STATUS_HI); 353 + chained_irq_exit(chip, desc); 354 + } 355 + 356 + static void nwl_pcie_msi_handler_low(struct irq_desc *desc) 357 + { 358 + struct irq_chip *chip = irq_desc_get_chip(desc); 359 + struct nwl_pcie *pcie = irq_desc_get_handler_data(desc); 360 + 361 + chained_irq_enter(chip, desc); 362 + nwl_pcie_handle_msi_irq(pcie, MSGF_MSI_STATUS_LO); 363 + chained_irq_exit(chip, desc); 364 + } 365 + 366 + static int nwl_legacy_map(struct irq_domain *domain, unsigned int irq, 367 + irq_hw_number_t hwirq) 368 + { 369 + irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_simple_irq); 370 + irq_set_chip_data(irq, domain->host_data); 371 + 372 + return 0; 373 + } 374 + 375 + static const struct irq_domain_ops legacy_domain_ops = { 376 + .map = nwl_legacy_map, 377 + }; 378 + 379 + 
#ifdef CONFIG_PCI_MSI 380 + static struct irq_chip nwl_msi_irq_chip = { 381 + .name = "nwl_pcie:msi", 382 + .irq_enable = unmask_msi_irq, 383 + .irq_disable = mask_msi_irq, 384 + .irq_mask = mask_msi_irq, 385 + .irq_unmask = unmask_msi_irq, 386 + 387 + }; 388 + 389 + static struct msi_domain_info nwl_msi_domain_info = { 390 + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS | 391 + MSI_FLAG_MULTI_PCI_MSI), 392 + .chip = &nwl_msi_irq_chip, 393 + }; 394 + #endif 395 + 396 + static void nwl_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) 397 + { 398 + struct nwl_pcie *pcie = irq_data_get_irq_chip_data(data); 399 + phys_addr_t msi_addr = pcie->phys_pcie_reg_base; 400 + 401 + msg->address_lo = lower_32_bits(msi_addr); 402 + msg->address_hi = upper_32_bits(msi_addr); 403 + msg->data = data->hwirq; 404 + } 405 + 406 + static int nwl_msi_set_affinity(struct irq_data *irq_data, 407 + const struct cpumask *mask, bool force) 408 + { 409 + return -EINVAL; 410 + } 411 + 412 + static struct irq_chip nwl_irq_chip = { 413 + .name = "Xilinx MSI", 414 + .irq_compose_msi_msg = nwl_compose_msi_msg, 415 + .irq_set_affinity = nwl_msi_set_affinity, 416 + }; 417 + 418 + static int nwl_irq_domain_alloc(struct irq_domain *domain, unsigned int virq, 419 + unsigned int nr_irqs, void *args) 420 + { 421 + struct nwl_pcie *pcie = domain->host_data; 422 + struct nwl_msi *msi = &pcie->msi; 423 + int bit; 424 + int i; 425 + 426 + mutex_lock(&msi->lock); 427 + bit = bitmap_find_next_zero_area(msi->bitmap, INT_PCI_MSI_NR, 0, 428 + nr_irqs, 0); 429 + if (bit >= INT_PCI_MSI_NR) { 430 + mutex_unlock(&msi->lock); 431 + return -ENOSPC; 432 + } 433 + 434 + bitmap_set(msi->bitmap, bit, nr_irqs); 435 + 436 + for (i = 0; i < nr_irqs; i++) { 437 + irq_domain_set_info(domain, virq + i, bit + i, &nwl_irq_chip, 438 + domain->host_data, handle_simple_irq, 439 + NULL, NULL); 440 + } 441 + mutex_unlock(&msi->lock); 442 + return 0; 443 + } 444 + 445 + static void nwl_irq_domain_free(struct 
irq_domain *domain, unsigned int virq, 446 + unsigned int nr_irqs) 447 + { 448 + struct irq_data *data = irq_domain_get_irq_data(domain, virq); 449 + struct nwl_pcie *pcie = irq_data_get_irq_chip_data(data); 450 + struct nwl_msi *msi = &pcie->msi; 451 + 452 + mutex_lock(&msi->lock); 453 + bitmap_clear(msi->bitmap, data->hwirq, nr_irqs); 454 + mutex_unlock(&msi->lock); 455 + } 456 + 457 + static const struct irq_domain_ops dev_msi_domain_ops = { 458 + .alloc = nwl_irq_domain_alloc, 459 + .free = nwl_irq_domain_free, 460 + }; 461 + 462 + static void nwl_msi_free_irq_domain(struct nwl_pcie *pcie) 463 + { 464 + struct nwl_msi *msi = &pcie->msi; 465 + 466 + if (msi->irq_msi0) 467 + irq_set_chained_handler_and_data(msi->irq_msi0, NULL, NULL); 468 + if (msi->irq_msi1) 469 + irq_set_chained_handler_and_data(msi->irq_msi1, NULL, NULL); 470 + 471 + if (msi->msi_domain) 472 + irq_domain_remove(msi->msi_domain); 473 + if (msi->dev_domain) 474 + irq_domain_remove(msi->dev_domain); 475 + 476 + kfree(msi->bitmap); 477 + msi->bitmap = NULL; 478 + } 479 + 480 + static void nwl_pcie_free_irq_domain(struct nwl_pcie *pcie) 481 + { 482 + int i; 483 + u32 irq; 484 + 485 + for (i = 0; i < INTX_NUM; i++) { 486 + irq = irq_find_mapping(pcie->legacy_irq_domain, i + 1); 487 + if (irq > 0) 488 + irq_dispose_mapping(irq); 489 + } 490 + if (pcie->legacy_irq_domain) 491 + irq_domain_remove(pcie->legacy_irq_domain); 492 + 493 + nwl_msi_free_irq_domain(pcie); 494 + } 495 + 496 + static int nwl_pcie_init_msi_irq_domain(struct nwl_pcie *pcie) 497 + { 498 + #ifdef CONFIG_PCI_MSI 499 + struct fwnode_handle *fwnode = of_node_to_fwnode(pcie->dev->of_node); 500 + struct nwl_msi *msi = &pcie->msi; 501 + 502 + msi->dev_domain = irq_domain_add_linear(NULL, INT_PCI_MSI_NR, 503 + &dev_msi_domain_ops, pcie); 504 + if (!msi->dev_domain) { 505 + dev_err(pcie->dev, "failed to create dev IRQ domain\n"); 506 + return -ENOMEM; 507 + } 508 + msi->msi_domain = pci_msi_create_irq_domain(fwnode, 509 + 
&nwl_msi_domain_info, 510 + msi->dev_domain); 511 + if (!msi->msi_domain) { 512 + dev_err(pcie->dev, "failed to create msi IRQ domain\n"); 513 + irq_domain_remove(msi->dev_domain); 514 + return -ENOMEM; 515 + } 516 + #endif 517 + return 0; 518 + } 519 + 520 + static int nwl_pcie_init_irq_domain(struct nwl_pcie *pcie) 521 + { 522 + struct device_node *node = pcie->dev->of_node; 523 + struct device_node *legacy_intc_node; 524 + 525 + legacy_intc_node = of_get_next_child(node, NULL); 526 + if (!legacy_intc_node) { 527 + dev_err(pcie->dev, "No legacy intc node found\n"); 528 + return -EINVAL; 529 + } 530 + 531 + pcie->legacy_irq_domain = irq_domain_add_linear(legacy_intc_node, 532 + INTX_NUM, 533 + &legacy_domain_ops, 534 + pcie); 535 + 536 + if (!pcie->legacy_irq_domain) { 537 + dev_err(pcie->dev, "failed to create IRQ domain\n"); 538 + return -ENOMEM; 539 + } 540 + 541 + nwl_pcie_init_msi_irq_domain(pcie); 542 + return 0; 543 + } 544 + 545 + static int nwl_pcie_enable_msi(struct nwl_pcie *pcie, struct pci_bus *bus) 546 + { 547 + struct platform_device *pdev = to_platform_device(pcie->dev); 548 + struct nwl_msi *msi = &pcie->msi; 549 + unsigned long base; 550 + int ret; 551 + int size = BITS_TO_LONGS(INT_PCI_MSI_NR) * sizeof(long); 552 + 553 + mutex_init(&msi->lock); 554 + 555 + msi->bitmap = kzalloc(size, GFP_KERNEL); 556 + if (!msi->bitmap) 557 + return -ENOMEM; 558 + 559 + /* Get msi_1 IRQ number */ 560 + msi->irq_msi1 = platform_get_irq_byname(pdev, "msi1"); 561 + if (msi->irq_msi1 < 0) { 562 + dev_err(&pdev->dev, "failed to get IRQ#%d\n", msi->irq_msi1); 563 + ret = -EINVAL; 564 + goto err; 565 + } 566 + 567 + irq_set_chained_handler_and_data(msi->irq_msi1, 568 + nwl_pcie_msi_handler_high, pcie); 569 + 570 + /* Get msi_0 IRQ number */ 571 + msi->irq_msi0 = platform_get_irq_byname(pdev, "msi0"); 572 + if (msi->irq_msi0 < 0) { 573 + dev_err(&pdev->dev, "failed to get IRQ#%d\n", msi->irq_msi0); 574 + ret = -EINVAL; 575 + goto err; 576 + } 577 + 578 + 
irq_set_chained_handler_and_data(msi->irq_msi0, 579 + nwl_pcie_msi_handler_low, pcie); 580 + 581 + /* Check for msii_present bit */ 582 + ret = nwl_bridge_readl(pcie, I_MSII_CAPABILITIES) & MSII_PRESENT; 583 + if (!ret) { 584 + dev_err(pcie->dev, "MSI not present\n"); 585 + ret = -EIO; 586 + goto err; 587 + } 588 + 589 + /* Enable MSII */ 590 + nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, I_MSII_CONTROL) | 591 + MSII_ENABLE, I_MSII_CONTROL); 592 + 593 + /* Enable MSII status */ 594 + nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, I_MSII_CONTROL) | 595 + MSII_STATUS_ENABLE, I_MSII_CONTROL); 596 + 597 + /* setup AFI/FPCI range */ 598 + base = pcie->phys_pcie_reg_base; 599 + nwl_bridge_writel(pcie, lower_32_bits(base), I_MSII_BASE_LO); 600 + nwl_bridge_writel(pcie, upper_32_bits(base), I_MSII_BASE_HI); 601 + 602 + /* 603 + * For high range MSI interrupts: disable, clear any pending, 604 + * and enable 605 + */ 606 + nwl_bridge_writel(pcie, (u32)~MSGF_MSI_SR_HI_MASK, MSGF_MSI_MASK_HI); 607 + 608 + nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, MSGF_MSI_STATUS_HI) & 609 + MSGF_MSI_SR_HI_MASK, MSGF_MSI_STATUS_HI); 610 + 611 + nwl_bridge_writel(pcie, MSGF_MSI_SR_HI_MASK, MSGF_MSI_MASK_HI); 612 + 613 + /* 614 + * For low range MSI interrupts: disable, clear any pending, 615 + * and enable 616 + */ 617 + nwl_bridge_writel(pcie, (u32)~MSGF_MSI_SR_LO_MASK, MSGF_MSI_MASK_LO); 618 + 619 + nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, MSGF_MSI_STATUS_LO) & 620 + MSGF_MSI_SR_LO_MASK, MSGF_MSI_STATUS_LO); 621 + 622 + nwl_bridge_writel(pcie, MSGF_MSI_SR_LO_MASK, MSGF_MSI_MASK_LO); 623 + 624 + return 0; 625 + err: 626 + kfree(msi->bitmap); 627 + msi->bitmap = NULL; 628 + return ret; 629 + } 630 + 631 + static int nwl_pcie_bridge_init(struct nwl_pcie *pcie) 632 + { 633 + struct platform_device *pdev = to_platform_device(pcie->dev); 634 + u32 breg_val, ecam_val, first_busno = 0; 635 + int err; 636 + 637 + breg_val = nwl_bridge_readl(pcie, E_BREG_CAPABILITIES) & BREG_PRESENT; 638 + 
if (!breg_val) { 639 + dev_err(pcie->dev, "BREG is not present\n"); 640 + return breg_val; 641 + } 642 + 643 + /* Write bridge_off to breg base */ 644 + nwl_bridge_writel(pcie, lower_32_bits(pcie->phys_breg_base), 645 + E_BREG_BASE_LO); 646 + nwl_bridge_writel(pcie, upper_32_bits(pcie->phys_breg_base), 647 + E_BREG_BASE_HI); 648 + 649 + /* Enable BREG */ 650 + nwl_bridge_writel(pcie, ~BREG_ENABLE_FORCE & BREG_ENABLE, 651 + E_BREG_CONTROL); 652 + 653 + /* Disable DMA channel registers */ 654 + nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, BRCFG_PCIE_RX0) | 655 + CFG_DMA_REG_BAR, BRCFG_PCIE_RX0); 656 + 657 + /* Enable Ingress subtractive decode translation */ 658 + nwl_bridge_writel(pcie, SET_ISUB_CONTROL, I_ISUB_CONTROL); 659 + 660 + /* Enable msg filtering details */ 661 + nwl_bridge_writel(pcie, CFG_ENABLE_MSG_FILTER_MASK, 662 + BRCFG_PCIE_RX_MSG_FILTER); 663 + 664 + err = nwl_wait_for_link(pcie); 665 + if (err) 666 + return err; 667 + 668 + ecam_val = nwl_bridge_readl(pcie, E_ECAM_CAPABILITIES) & E_ECAM_PRESENT; 669 + if (!ecam_val) { 670 + dev_err(pcie->dev, "ECAM is not present\n"); 671 + return ecam_val; 672 + } 673 + 674 + /* Enable ECAM */ 675 + nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, E_ECAM_CONTROL) | 676 + E_ECAM_CR_ENABLE, E_ECAM_CONTROL); 677 + 678 + nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, E_ECAM_CONTROL) | 679 + (pcie->ecam_value << E_ECAM_SIZE_SHIFT), 680 + E_ECAM_CONTROL); 681 + 682 + nwl_bridge_writel(pcie, lower_32_bits(pcie->phys_ecam_base), 683 + E_ECAM_BASE_LO); 684 + nwl_bridge_writel(pcie, upper_32_bits(pcie->phys_ecam_base), 685 + E_ECAM_BASE_HI); 686 + 687 + /* Get bus range */ 688 + ecam_val = nwl_bridge_readl(pcie, E_ECAM_CONTROL); 689 + pcie->last_busno = (ecam_val & E_ECAM_SIZE_LOC) >> E_ECAM_SIZE_SHIFT; 690 + /* Write primary, secondary and subordinate bus numbers */ 691 + ecam_val = first_busno; 692 + ecam_val |= (first_busno + 1) << 8; 693 + ecam_val |= (pcie->last_busno << E_ECAM_SIZE_SHIFT); 694 + writel(ecam_val, 
(pcie->ecam_base + PCI_PRIMARY_BUS)); 695 + 696 + if (nwl_pcie_link_up(pcie)) 697 + dev_info(pcie->dev, "Link is UP\n"); 698 + else 699 + dev_info(pcie->dev, "Link is DOWN\n"); 700 + 701 + /* Get misc IRQ number */ 702 + pcie->irq_misc = platform_get_irq_byname(pdev, "misc"); 703 + if (pcie->irq_misc < 0) { 704 + dev_err(&pdev->dev, "failed to get misc IRQ %d\n", 705 + pcie->irq_misc); 706 + return -EINVAL; 707 + } 708 + 709 + err = devm_request_irq(pcie->dev, pcie->irq_misc, 710 + nwl_pcie_misc_handler, IRQF_SHARED, 711 + "nwl_pcie:misc", pcie); 712 + if (err) { 713 + dev_err(pcie->dev, "fail to register misc IRQ#%d\n", 714 + pcie->irq_misc); 715 + return err; 716 + } 717 + 718 + /* Disable all misc interrupts */ 719 + nwl_bridge_writel(pcie, (u32)~MSGF_MISC_SR_MASKALL, MSGF_MISC_MASK); 720 + 721 + /* Clear pending misc interrupts */ 722 + nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, MSGF_MISC_STATUS) & 723 + MSGF_MISC_SR_MASKALL, MSGF_MISC_STATUS); 724 + 725 + /* Enable all misc interrupts */ 726 + nwl_bridge_writel(pcie, MSGF_MISC_SR_MASKALL, MSGF_MISC_MASK); 727 + 728 + 729 + /* Disable all legacy interrupts */ 730 + nwl_bridge_writel(pcie, (u32)~MSGF_LEG_SR_MASKALL, MSGF_LEG_MASK); 731 + 732 + /* Clear pending legacy interrupts */ 733 + nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, MSGF_LEG_STATUS) & 734 + MSGF_LEG_SR_MASKALL, MSGF_LEG_STATUS); 735 + 736 + /* Enable all legacy interrupts */ 737 + nwl_bridge_writel(pcie, MSGF_LEG_SR_MASKALL, MSGF_LEG_MASK); 738 + 739 + /* Enable the bridge config interrupt */ 740 + nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, BRCFG_INTERRUPT) | 741 + BRCFG_INTERRUPT_MASK, BRCFG_INTERRUPT); 742 + 743 + return 0; 744 + } 745 + 746 + static int nwl_pcie_parse_dt(struct nwl_pcie *pcie, 747 + struct platform_device *pdev) 748 + { 749 + struct device_node *node = pcie->dev->of_node; 750 + struct resource *res; 751 + const char *type; 752 + 753 + /* Check for device type */ 754 + type = of_get_property(node, "device_type", NULL); 
755 + if (!type || strcmp(type, "pci")) { 756 + dev_err(pcie->dev, "invalid \"device_type\" %s\n", type); 757 + return -EINVAL; 758 + } 759 + 760 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "breg"); 761 + pcie->breg_base = devm_ioremap_resource(pcie->dev, res); 762 + if (IS_ERR(pcie->breg_base)) 763 + return PTR_ERR(pcie->breg_base); 764 + pcie->phys_breg_base = res->start; 765 + 766 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "pcireg"); 767 + pcie->pcireg_base = devm_ioremap_resource(pcie->dev, res); 768 + if (IS_ERR(pcie->pcireg_base)) 769 + return PTR_ERR(pcie->pcireg_base); 770 + pcie->phys_pcie_reg_base = res->start; 771 + 772 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg"); 773 + pcie->ecam_base = devm_ioremap_resource(pcie->dev, res); 774 + if (IS_ERR(pcie->ecam_base)) 775 + return PTR_ERR(pcie->ecam_base); 776 + pcie->phys_ecam_base = res->start; 777 + 778 + /* Get intx IRQ number */ 779 + pcie->irq_intx = platform_get_irq_byname(pdev, "intx"); 780 + if (pcie->irq_intx < 0) { 781 + dev_err(&pdev->dev, "failed to get intx IRQ %d\n", 782 + pcie->irq_intx); 783 + return -EINVAL; 784 + } 785 + 786 + irq_set_chained_handler_and_data(pcie->irq_intx, 787 + nwl_pcie_leg_handler, pcie); 788 + 789 + return 0; 790 + } 791 + 792 + static const struct of_device_id nwl_pcie_of_match[] = { 793 + { .compatible = "xlnx,nwl-pcie-2.11", }, 794 + {} 795 + }; 796 + 797 + static int nwl_pcie_probe(struct platform_device *pdev) 798 + { 799 + struct device_node *node = pdev->dev.of_node; 800 + struct nwl_pcie *pcie; 801 + struct pci_bus *bus; 802 + struct pci_bus *child; 803 + int err; 804 + resource_size_t iobase = 0; 805 + LIST_HEAD(res); 806 + 807 + pcie = devm_kzalloc(&pdev->dev, sizeof(*pcie), GFP_KERNEL); 808 + if (!pcie) 809 + return -ENOMEM; 810 + 811 + pcie->dev = &pdev->dev; 812 + pcie->ecam_value = NWL_ECAM_VALUE_DEFAULT; 813 + 814 + err = nwl_pcie_parse_dt(pcie, pdev); 815 + if (err) { 816 + dev_err(pcie->dev, "Parsing DT 
failed\n"); 817 + return err; 818 + } 819 + 820 + err = nwl_pcie_bridge_init(pcie); 821 + if (err) { 822 + dev_err(pcie->dev, "HW Initalization failed\n"); 823 + return err; 824 + } 825 + 826 + err = of_pci_get_host_bridge_resources(node, 0, 0xff, &res, &iobase); 827 + if (err) { 828 + pr_err("Getting bridge resources failed\n"); 829 + return err; 830 + } 831 + 832 + err = nwl_pcie_init_irq_domain(pcie); 833 + if (err) { 834 + dev_err(pcie->dev, "Failed creating IRQ Domain\n"); 835 + return err; 836 + } 837 + 838 + bus = pci_create_root_bus(&pdev->dev, pcie->root_busno, 839 + &nwl_pcie_ops, pcie, &res); 840 + if (!bus) 841 + return -ENOMEM; 842 + 843 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 844 + err = nwl_pcie_enable_msi(pcie, bus); 845 + if (err < 0) { 846 + dev_err(&pdev->dev, 847 + "failed to enable MSI support: %d\n", err); 848 + return err; 849 + } 850 + } 851 + pci_scan_child_bus(bus); 852 + pci_assign_unassigned_bus_resources(bus); 853 + list_for_each_entry(child, &bus->children, node) 854 + pcie_bus_configure_settings(child); 855 + pci_bus_add_devices(bus); 856 + platform_set_drvdata(pdev, pcie); 857 + return 0; 858 + } 859 + 860 + static int nwl_pcie_remove(struct platform_device *pdev) 861 + { 862 + struct nwl_pcie *pcie = platform_get_drvdata(pdev); 863 + 864 + nwl_pcie_free_irq_domain(pcie); 865 + platform_set_drvdata(pdev, NULL); 866 + return 0; 867 + } 868 + 869 + static struct platform_driver nwl_pcie_driver = { 870 + .driver = { 871 + .name = "nwl-pcie", 872 + .of_match_table = nwl_pcie_of_match, 873 + }, 874 + .probe = nwl_pcie_probe, 875 + .remove = nwl_pcie_remove, 876 + }; 877 + module_platform_driver(nwl_pcie_driver); 878 + 879 + MODULE_AUTHOR("Xilinx, Inc"); 880 + MODULE_DESCRIPTION("NWL PCIe driver"); 881 + MODULE_LICENSE("GPL");
+23 -168
drivers/pci/host/pcie-xilinx.c
··· 94 94 /* Number of MSI IRQs */ 95 95 #define XILINX_NUM_MSI_IRQS 128 96 96 97 - /* Number of Memory Resources */ 98 - #define XILINX_MAX_NUM_RESOURCES 3 99 - 100 97 /** 101 98 * struct xilinx_pcie_port - PCIe port information 102 99 * @reg_base: IO Mapped Register Base ··· 102 105 * @root_busno: Root Bus number 103 106 * @dev: Device pointer 104 107 * @irq_domain: IRQ domain pointer 105 - * @bus_range: Bus range 106 108 * @resources: Bus Resources 107 109 */ 108 110 struct xilinx_pcie_port { ··· 111 115 u8 root_busno; 112 116 struct device *dev; 113 117 struct irq_domain *irq_domain; 114 - struct resource bus_range; 115 118 struct list_head resources; 116 119 }; 117 120 118 121 static DECLARE_BITMAP(msi_irq_in_use, XILINX_NUM_MSI_IRQS); 119 - 120 - static inline struct xilinx_pcie_port *sys_to_pcie(struct pci_sys_data *sys) 121 - { 122 - return sys->private_data; 123 - } 124 122 125 123 static inline u32 pcie_read(struct xilinx_pcie_port *port, u32 reg) 126 124 { ··· 157 167 */ 158 168 static bool xilinx_pcie_valid_device(struct pci_bus *bus, unsigned int devfn) 159 169 { 160 - struct xilinx_pcie_port *port = sys_to_pcie(bus->sysdata); 170 + struct xilinx_pcie_port *port = bus->sysdata; 161 171 162 172 /* Check if link is up when trying to access downstream ports */ 163 173 if (bus->number != port->root_busno) ··· 190 200 static void __iomem *xilinx_pcie_map_bus(struct pci_bus *bus, 191 201 unsigned int devfn, int where) 192 202 { 193 - struct xilinx_pcie_port *port = sys_to_pcie(bus->sysdata); 203 + struct xilinx_pcie_port *port = bus->sysdata; 194 204 int relbus; 195 205 196 206 if (!xilinx_pcie_valid_device(bus, devfn)) ··· 222 232 223 233 if (!test_bit(irq, msi_irq_in_use)) { 224 234 msi = irq_get_msi_desc(irq); 225 - port = sys_to_pcie(msi_desc_to_pci_sysdata(msi)); 235 + port = msi_desc_to_pci_sysdata(msi); 226 236 dev_err(port->dev, "Trying to free unused MSI#%d\n", irq); 227 237 } else { 228 238 clear_bit(irq, msi_irq_in_use); ··· 271 281 struct pci_dev 
*pdev, 272 282 struct msi_desc *desc) 273 283 { 274 - struct xilinx_pcie_port *port = sys_to_pcie(pdev->bus->sysdata); 284 + struct xilinx_pcie_port *port = pdev->bus->sysdata; 275 285 unsigned int irq; 276 286 int hwirq; 277 287 struct msi_msg msg; ··· 608 618 } 609 619 610 620 /** 611 - * xilinx_pcie_setup - Setup memory resources 612 - * @nr: Bus number 613 - * @sys: Per controller structure 614 - * 615 - * Return: '1' on success and error value on failure 616 - */ 617 - static int xilinx_pcie_setup(int nr, struct pci_sys_data *sys) 618 - { 619 - struct xilinx_pcie_port *port = sys_to_pcie(sys); 620 - 621 - list_splice_init(&port->resources, &sys->resources); 622 - 623 - return 1; 624 - } 625 - 626 - /** 627 - * xilinx_pcie_scan_bus - Scan PCIe bus for devices 628 - * @nr: Bus number 629 - * @sys: Per controller structure 630 - * 631 - * Return: Valid Bus pointer on success and NULL on failure 632 - */ 633 - static struct pci_bus *xilinx_pcie_scan_bus(int nr, struct pci_sys_data *sys) 634 - { 635 - struct xilinx_pcie_port *port = sys_to_pcie(sys); 636 - struct pci_bus *bus; 637 - 638 - port->root_busno = sys->busnr; 639 - 640 - if (IS_ENABLED(CONFIG_PCI_MSI)) 641 - bus = pci_scan_root_bus_msi(port->dev, sys->busnr, 642 - &xilinx_pcie_ops, sys, 643 - &sys->resources, 644 - &xilinx_pcie_msi_chip); 645 - else 646 - bus = pci_scan_root_bus(port->dev, sys->busnr, 647 - &xilinx_pcie_ops, sys, &sys->resources); 648 - return bus; 649 - } 650 - 651 - /** 652 - * xilinx_pcie_parse_and_add_res - Add resources by parsing ranges 653 - * @port: PCIe port information 654 - * 655 - * Return: '0' on success and error value on failure 656 - */ 657 - static int xilinx_pcie_parse_and_add_res(struct xilinx_pcie_port *port) 658 - { 659 - struct device *dev = port->dev; 660 - struct device_node *node = dev->of_node; 661 - struct resource *mem; 662 - resource_size_t offset; 663 - struct of_pci_range_parser parser; 664 - struct of_pci_range range; 665 - struct resource_entry *win; 666 - 
int err = 0, mem_resno = 0; 667 - 668 - /* Get the ranges */ 669 - if (of_pci_range_parser_init(&parser, node)) { 670 - dev_err(dev, "missing \"ranges\" property\n"); 671 - return -EINVAL; 672 - } 673 - 674 - /* Parse the ranges and add the resources found to the list */ 675 - for_each_of_pci_range(&parser, &range) { 676 - 677 - if (mem_resno >= XILINX_MAX_NUM_RESOURCES) { 678 - dev_err(dev, "Maximum memory resources exceeded\n"); 679 - return -EINVAL; 680 - } 681 - 682 - mem = devm_kmalloc(dev, sizeof(*mem), GFP_KERNEL); 683 - if (!mem) { 684 - err = -ENOMEM; 685 - goto free_resources; 686 - } 687 - 688 - of_pci_range_to_resource(&range, node, mem); 689 - 690 - switch (mem->flags & IORESOURCE_TYPE_BITS) { 691 - case IORESOURCE_MEM: 692 - offset = range.cpu_addr - range.pci_addr; 693 - mem_resno++; 694 - break; 695 - default: 696 - err = -EINVAL; 697 - break; 698 - } 699 - 700 - if (err < 0) { 701 - dev_warn(dev, "Invalid resource found %pR\n", mem); 702 - continue; 703 - } 704 - 705 - err = request_resource(&iomem_resource, mem); 706 - if (err) 707 - goto free_resources; 708 - 709 - pci_add_resource_offset(&port->resources, mem, offset); 710 - } 711 - 712 - /* Get the bus range */ 713 - if (of_pci_parse_bus_range(node, &port->bus_range)) { 714 - u32 val = pcie_read(port, XILINX_PCIE_REG_BIR); 715 - u8 last; 716 - 717 - last = (val & XILINX_PCIE_BIR_ECAM_SZ_MASK) >> 718 - XILINX_PCIE_BIR_ECAM_SZ_SHIFT; 719 - 720 - port->bus_range = (struct resource) { 721 - .name = node->name, 722 - .start = 0, 723 - .end = last, 724 - .flags = IORESOURCE_BUS, 725 - }; 726 - } 727 - 728 - /* Register bus resource */ 729 - pci_add_resource(&port->resources, &port->bus_range); 730 - 731 - return 0; 732 - 733 - free_resources: 734 - release_child_resources(&iomem_resource); 735 - resource_list_for_each_entry(win, &port->resources) 736 - devm_kfree(dev, win->res); 737 - pci_free_resource_list(&port->resources); 738 - 739 - return err; 740 - } 741 - 742 - /** 743 621 * 
xilinx_pcie_parse_dt - Parse Device tree 744 622 * @port: PCIe port information 745 623 * ··· 658 800 static int xilinx_pcie_probe(struct platform_device *pdev) 659 801 { 660 802 struct xilinx_pcie_port *port; 661 - struct hw_pci hw; 662 803 struct device *dev = &pdev->dev; 804 + struct pci_bus *bus; 805 + 663 806 int err; 807 + resource_size_t iobase = 0; 808 + LIST_HEAD(res); 664 809 665 810 if (!dev->of_node) 666 811 return -ENODEV; ··· 688 827 return err; 689 828 } 690 829 691 - /* 692 - * Parse PCI ranges, configuration bus range and 693 - * request their resources 694 - */ 695 - INIT_LIST_HEAD(&port->resources); 696 - err = xilinx_pcie_parse_and_add_res(port); 830 + err = of_pci_get_host_bridge_resources(dev->of_node, 0, 0xff, &res, 831 + &iobase); 697 832 if (err) { 698 - dev_err(dev, "Failed adding resources\n"); 833 + dev_err(dev, "Getting bridge resources failed\n"); 699 834 return err; 700 835 } 701 - 702 - platform_set_drvdata(pdev, port); 703 - 704 - /* Register the device */ 705 - memset(&hw, 0, sizeof(hw)); 706 - hw = (struct hw_pci) { 707 - .nr_controllers = 1, 708 - .private_data = (void **)&port, 709 - .setup = xilinx_pcie_setup, 710 - .map_irq = of_irq_parse_and_map_pci, 711 - .scan = xilinx_pcie_scan_bus, 712 - .ops = &xilinx_pcie_ops, 713 - }; 836 + bus = pci_create_root_bus(&pdev->dev, 0, 837 + &xilinx_pcie_ops, port, &res); 838 + if (!bus) 839 + return -ENOMEM; 714 840 715 841 #ifdef CONFIG_PCI_MSI 716 842 xilinx_pcie_msi_chip.dev = port->dev; 843 + bus->msi = &xilinx_pcie_msi_chip; 717 844 #endif 718 - pci_common_init_dev(dev, &hw); 845 + pci_scan_child_bus(bus); 846 + pci_assign_unassigned_bus_resources(bus); 847 + #ifndef CONFIG_MICROBLAZE 848 + pci_fixup_irqs(pci_common_swizzle, of_irq_parse_and_map_pci); 849 + #endif 850 + pci_bus_add_devices(bus); 851 + platform_set_drvdata(pdev, port); 719 852 720 853 return 0; 721 854 }
-4
drivers/pci/iov.c
··· 387 387 struct resource *res; 388 388 struct pci_dev *pdev; 389 389 390 - if (pci_pcie_type(dev) != PCI_EXP_TYPE_RC_END && 391 - pci_pcie_type(dev) != PCI_EXP_TYPE_ENDPOINT) 392 - return -ENODEV; 393 - 394 390 pci_read_config_word(dev, pos + PCI_SRIOV_CTRL, &ctrl); 395 391 if (ctrl & PCI_SRIOV_CTRL_VFE) { 396 392 pci_write_config_word(dev, pos + PCI_SRIOV_CTRL, 0);
+1 -1
drivers/pci/pci-label.c
··· 16 16 * the instance number and string from the type 41 record and exports 17 17 * it to sysfs. 18 18 * 19 - * Please see http://linux.dell.com/wiki/index.php/Oss/libnetdevname for more 19 + * Please see http://linux.dell.com/files/biosdevname/ for more 20 20 * information. 21 21 */ 22 22
+44 -48
drivers/pci/pci-sysfs.c
··· 769 769 { 770 770 struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj)); 771 771 772 - if (off > bin_attr->size) 773 - count = 0; 774 - else if (count > bin_attr->size - off) 775 - count = bin_attr->size - off; 772 + if (bin_attr->size > 0) { 773 + if (off > bin_attr->size) 774 + count = 0; 775 + else if (count > bin_attr->size - off) 776 + count = bin_attr->size - off; 777 + } 776 778 777 779 return pci_read_vpd(dev, off, count, buf); 778 780 } ··· 785 783 { 786 784 struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj)); 787 785 788 - if (off > bin_attr->size) 789 - count = 0; 790 - else if (count > bin_attr->size - off) 791 - count = bin_attr->size - off; 786 + if (bin_attr->size > 0) { 787 + if (off > bin_attr->size) 788 + count = 0; 789 + else if (count > bin_attr->size - off) 790 + count = bin_attr->size - off; 791 + } 792 792 793 793 return pci_write_vpd(dev, off, count, buf); 794 794 } ··· 1138 1134 /* allocate attribute structure, piggyback attribute name */ 1139 1135 int name_len = write_combine ? 
13 : 10; 1140 1136 struct bin_attribute *res_attr; 1137 + char *res_attr_name; 1141 1138 int retval; 1142 1139 1143 1140 res_attr = kzalloc(sizeof(*res_attr) + name_len, GFP_ATOMIC); 1144 - if (res_attr) { 1145 - char *res_attr_name = (char *)(res_attr + 1); 1141 + if (!res_attr) 1142 + return -ENOMEM; 1146 1143 1147 - sysfs_bin_attr_init(res_attr); 1148 - if (write_combine) { 1149 - pdev->res_attr_wc[num] = res_attr; 1150 - sprintf(res_attr_name, "resource%d_wc", num); 1151 - res_attr->mmap = pci_mmap_resource_wc; 1152 - } else { 1153 - pdev->res_attr[num] = res_attr; 1154 - sprintf(res_attr_name, "resource%d", num); 1155 - res_attr->mmap = pci_mmap_resource_uc; 1156 - } 1157 - if (pci_resource_flags(pdev, num) & IORESOURCE_IO) { 1158 - res_attr->read = pci_read_resource_io; 1159 - res_attr->write = pci_write_resource_io; 1160 - } 1161 - res_attr->attr.name = res_attr_name; 1162 - res_attr->attr.mode = S_IRUSR | S_IWUSR; 1163 - res_attr->size = pci_resource_len(pdev, num); 1164 - res_attr->private = &pdev->resource[num]; 1165 - retval = sysfs_create_bin_file(&pdev->dev.kobj, res_attr); 1166 - } else 1167 - retval = -ENOMEM; 1144 + res_attr_name = (char *)(res_attr + 1); 1145 + 1146 + sysfs_bin_attr_init(res_attr); 1147 + if (write_combine) { 1148 + pdev->res_attr_wc[num] = res_attr; 1149 + sprintf(res_attr_name, "resource%d_wc", num); 1150 + res_attr->mmap = pci_mmap_resource_wc; 1151 + } else { 1152 + pdev->res_attr[num] = res_attr; 1153 + sprintf(res_attr_name, "resource%d", num); 1154 + res_attr->mmap = pci_mmap_resource_uc; 1155 + } 1156 + if (pci_resource_flags(pdev, num) & IORESOURCE_IO) { 1157 + res_attr->read = pci_read_resource_io; 1158 + res_attr->write = pci_write_resource_io; 1159 + } 1160 + res_attr->attr.name = res_attr_name; 1161 + res_attr->attr.mode = S_IRUSR | S_IWUSR; 1162 + res_attr->size = pci_resource_len(pdev, num); 1163 + res_attr->private = &pdev->resource[num]; 1164 + retval = sysfs_create_bin_file(&pdev->dev.kobj, res_attr); 1165 + if 
(retval) 1166 + kfree(res_attr); 1168 1167 1169 1168 return retval; 1170 1169 } ··· 1326 1319 return -ENOMEM; 1327 1320 1328 1321 sysfs_bin_attr_init(attr); 1329 - attr->size = dev->vpd->len; 1322 + attr->size = 0; 1330 1323 attr->attr.name = "vpd"; 1331 1324 attr->attr.mode = S_IRUSR | S_IWUSR; 1332 1325 attr->read = read_vpd_attr; ··· 1363 1356 int __must_check pci_create_sysfs_dev_files(struct pci_dev *pdev) 1364 1357 { 1365 1358 int retval; 1366 - int rom_size = 0; 1359 + int rom_size; 1367 1360 struct bin_attribute *attr; 1368 1361 1369 1362 if (!sysfs_initialized) ··· 1380 1373 if (retval) 1381 1374 goto err_config_file; 1382 1375 1383 - if (pci_resource_len(pdev, PCI_ROM_RESOURCE)) 1384 - rom_size = pci_resource_len(pdev, PCI_ROM_RESOURCE); 1385 - else if (pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW) 1386 - rom_size = 0x20000; 1387 - 1388 1376 /* If the device has a ROM, try to expose it in sysfs. */ 1377 + rom_size = pci_resource_len(pdev, PCI_ROM_RESOURCE); 1389 1378 if (rom_size) { 1390 1379 attr = kzalloc(sizeof(*attr), GFP_ATOMIC); 1391 1380 if (!attr) { ··· 1412 1409 return 0; 1413 1410 1414 1411 err_rom_file: 1415 - if (rom_size) { 1412 + if (pdev->rom_attr) { 1416 1413 sysfs_remove_bin_file(&pdev->dev.kobj, pdev->rom_attr); 1417 1414 kfree(pdev->rom_attr); 1418 1415 pdev->rom_attr = NULL; ··· 1450 1447 */ 1451 1448 void pci_remove_sysfs_dev_files(struct pci_dev *pdev) 1452 1449 { 1453 - int rom_size = 0; 1454 - 1455 1450 if (!sysfs_initialized) 1456 1451 return; 1457 1452 ··· 1462 1461 1463 1462 pci_remove_resource_files(pdev); 1464 1463 1465 - if (pci_resource_len(pdev, PCI_ROM_RESOURCE)) 1466 - rom_size = pci_resource_len(pdev, PCI_ROM_RESOURCE); 1467 - else if (pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW) 1468 - rom_size = 0x20000; 1469 - 1470 - if (rom_size && pdev->rom_attr) { 1464 + if (pdev->rom_attr) { 1471 1465 sysfs_remove_bin_file(&pdev->dev.kobj, pdev->rom_attr); 1472 1466 kfree(pdev->rom_attr); 1467 
+ pdev->rom_attr = NULL; 1473 1468 } 1474 1469 1475 1470 pci_remove_firmware_label_files(pdev); 1476 - 1477 1471 } 1478 1472 1479 1473 static int __init pci_sysfs_init(void)
+25 -15
drivers/pci/pci.c
··· 25 25 #include <linux/device.h> 26 26 #include <linux/pm_runtime.h> 27 27 #include <linux/pci_hotplug.h> 28 - #include <asm-generic/pci-bridge.h> 29 28 #include <asm/setup.h> 30 29 #include <linux/aer.h> 31 30 #include "pci.h" ··· 3385 3386 } 3386 3387 EXPORT_SYMBOL_GPL(pci_check_and_unmask_intx); 3387 3388 3388 - int pci_set_dma_max_seg_size(struct pci_dev *dev, unsigned int size) 3389 - { 3390 - return dma_set_max_seg_size(&dev->dev, size); 3391 - } 3392 - EXPORT_SYMBOL(pci_set_dma_max_seg_size); 3393 - 3394 - int pci_set_dma_seg_boundary(struct pci_dev *dev, unsigned long mask) 3395 - { 3396 - return dma_set_seg_boundary(&dev->dev, mask); 3397 - } 3398 - EXPORT_SYMBOL(pci_set_dma_seg_boundary); 3399 - 3400 3389 /** 3401 3390 * pci_wait_for_pending_transaction - waits for pending transaction 3402 3391 * @dev: the PCI device to operate on ··· 3401 3414 } 3402 3415 EXPORT_SYMBOL(pci_wait_for_pending_transaction); 3403 3416 3417 + /* 3418 + * We should only need to wait 100ms after FLR, but some devices take longer. 3419 + * Wait for up to 1000ms for config space to return something other than -1. 3420 + * Intel IGD requires this when an LCD panel is attached. We read the 2nd 3421 + * dword because VFs don't implement the 1st dword. 
3422 + */ 3423 + static void pci_flr_wait(struct pci_dev *dev) 3424 + { 3425 + int i = 0; 3426 + u32 id; 3427 + 3428 + do { 3429 + msleep(100); 3430 + pci_read_config_dword(dev, PCI_COMMAND, &id); 3431 + } while (i++ < 10 && id == ~0); 3432 + 3433 + if (id == ~0) 3434 + dev_warn(&dev->dev, "Failed to return from FLR\n"); 3435 + else if (i > 1) 3436 + dev_info(&dev->dev, "Required additional %dms to return from FLR\n", 3437 + (i - 1) * 100); 3438 + } 3439 + 3404 3440 static int pcie_flr(struct pci_dev *dev, int probe) 3405 3441 { 3406 3442 u32 cap; ··· 3439 3429 dev_err(&dev->dev, "timed out waiting for pending transaction; performing function level reset anyway\n"); 3440 3430 3441 3431 pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_BCR_FLR); 3442 - msleep(100); 3432 + pci_flr_wait(dev); 3443 3433 return 0; 3444 3434 } 3445 3435 ··· 3469 3459 dev_err(&dev->dev, "timed out waiting for pending transaction; performing AF function level reset anyway\n"); 3470 3460 3471 3461 pci_write_config_byte(dev, pos + PCI_AF_CTRL, PCI_AF_CTRL_FLR); 3472 - msleep(100); 3462 + pci_flr_wait(dev); 3473 3463 return 0; 3474 3464 } 3475 3465
+8 -8
drivers/pci/pci.h
··· 97 97 struct pci_vpd_ops { 98 98 ssize_t (*read)(struct pci_dev *dev, loff_t pos, size_t count, void *buf); 99 99 ssize_t (*write)(struct pci_dev *dev, loff_t pos, size_t count, const void *buf); 100 - void (*release)(struct pci_dev *dev); 101 100 }; 102 101 103 102 struct pci_vpd { 104 - unsigned int len; 105 103 const struct pci_vpd_ops *ops; 106 104 struct bin_attribute *attr; /* descriptor for sysfs VPD entry */ 105 + struct mutex lock; 106 + unsigned int len; 107 + u16 flag; 108 + u8 cap; 109 + u8 busy:1; 110 + u8 valid:1; 107 111 }; 108 112 109 - int pci_vpd_pci22_init(struct pci_dev *dev); 110 - static inline void pci_vpd_release(struct pci_dev *dev) 111 - { 112 - if (dev->vpd) 113 - dev->vpd->ops->release(dev); 114 - } 113 + int pci_vpd_init(struct pci_dev *dev); 114 + void pci_vpd_release(struct pci_dev *dev); 115 115 116 116 /* PCI /proc functions */ 117 117 #ifdef CONFIG_PROC_FS
+4 -3
drivers/pci/pcie/Kconfig
··· 44 44 /sys/module/pcie_aspm/parameters/policy 45 45 46 46 When in doubt, say Y. 47 + 47 48 config PCIEASPM_DEBUG 48 49 bool "Debug PCI Express ASPM" 49 50 depends on PCIEASPM ··· 59 58 depends on PCIEASPM 60 59 61 60 config PCIEASPM_DEFAULT 62 - bool "BIOS default" 61 + bool "BIOS default" 63 62 depends on PCIEASPM 64 63 help 65 64 Use the BIOS defaults for PCI Express ASPM. 66 65 67 66 config PCIEASPM_POWERSAVE 68 - bool "Powersave" 67 + bool "Powersave" 69 68 depends on PCIEASPM 70 69 help 71 70 Enable PCI Express ASPM L0s and L1 where possible, even if the 72 71 BIOS did not. 73 72 74 73 config PCIEASPM_PERFORMANCE 75 - bool "Performance" 74 + bool "Performance" 76 75 depends on PCIEASPM 77 76 help 78 77 Disable PCI Express ASPM L0s and L1, even if the BIOS enabled them.
+63 -27
drivers/pci/pcie/aer/aer_inject.c
··· 25 25 #include <linux/fs.h> 26 26 #include <linux/uaccess.h> 27 27 #include <linux/stddef.h> 28 + #include <linux/device.h> 28 29 #include "aerdrv.h" 29 30 30 31 /* Override the existing corrected and uncorrected error masks */ ··· 125 124 static struct pci_bus_ops *pci_bus_ops_pop(void) 126 125 { 127 126 unsigned long flags; 128 - struct pci_bus_ops *bus_ops = NULL; 127 + struct pci_bus_ops *bus_ops; 129 128 130 129 spin_lock_irqsave(&inject_lock, flags); 131 - if (list_empty(&pci_bus_ops_list)) 132 - bus_ops = NULL; 133 - else { 134 - struct list_head *lh = pci_bus_ops_list.next; 135 - list_del(lh); 136 - bus_ops = list_entry(lh, struct pci_bus_ops, list); 137 - } 130 + bus_ops = list_first_entry_or_null(&pci_bus_ops_list, 131 + struct pci_bus_ops, list); 132 + if (bus_ops) 133 + list_del(&bus_ops->list); 138 134 spin_unlock_irqrestore(&inject_lock, flags); 139 135 return bus_ops; 140 136 } ··· 179 181 return target; 180 182 } 181 183 182 - static int pci_read_aer(struct pci_bus *bus, unsigned int devfn, int where, 183 - int size, u32 *val) 184 + static int aer_inj_read_config(struct pci_bus *bus, unsigned int devfn, 185 + int where, int size, u32 *val) 184 186 { 185 187 u32 *sim; 186 188 struct aer_error *err; 187 189 unsigned long flags; 188 190 struct pci_ops *ops; 191 + struct pci_ops *my_ops; 189 192 int domain; 193 + int rv; 190 194 191 195 spin_lock_irqsave(&inject_lock, flags); 192 196 if (size != sizeof(u32)) ··· 208 208 } 209 209 out: 210 210 ops = __find_pci_bus_ops(bus); 211 + /* 212 + * pci_lock must already be held, so we can directly 213 + * manipulate bus->ops. Many config access functions, 214 + * including pci_generic_config_read() require the original 215 + * bus->ops be installed to function, so temporarily put them 216 + * back. 
217 + */ 218 + my_ops = bus->ops; 219 + bus->ops = ops; 220 + rv = ops->read(bus, devfn, where, size, val); 221 + bus->ops = my_ops; 211 222 spin_unlock_irqrestore(&inject_lock, flags); 212 - return ops->read(bus, devfn, where, size, val); 223 + return rv; 213 224 } 214 225 215 - static int pci_write_aer(struct pci_bus *bus, unsigned int devfn, int where, 216 - int size, u32 val) 226 + static int aer_inj_write_config(struct pci_bus *bus, unsigned int devfn, 227 + int where, int size, u32 val) 217 228 { 218 229 u32 *sim; 219 230 struct aer_error *err; 220 231 unsigned long flags; 221 232 int rw1cs; 222 233 struct pci_ops *ops; 234 + struct pci_ops *my_ops; 223 235 int domain; 236 + int rv; 224 237 225 238 spin_lock_irqsave(&inject_lock, flags); 226 239 if (size != sizeof(u32)) ··· 256 243 } 257 244 out: 258 245 ops = __find_pci_bus_ops(bus); 246 + /* 247 + * pci_lock must already be held, so we can directly 248 + * manipulate bus->ops. Many config access functions, 249 + * including pci_generic_config_write() require the original 250 + * bus->ops be installed to function, so temporarily put them 251 + * back. 
252 + */ 253 + my_ops = bus->ops; 254 + bus->ops = ops; 255 + rv = ops->write(bus, devfn, where, size, val); 256 + bus->ops = my_ops; 259 257 spin_unlock_irqrestore(&inject_lock, flags); 260 - return ops->write(bus, devfn, where, size, val); 258 + return rv; 261 259 } 262 260 263 - static struct pci_ops pci_ops_aer = { 264 - .read = pci_read_aer, 265 - .write = pci_write_aer, 261 + static struct pci_ops aer_inj_pci_ops = { 262 + .read = aer_inj_read_config, 263 + .write = aer_inj_write_config, 266 264 }; 267 265 268 266 static void pci_bus_ops_init(struct pci_bus_ops *bus_ops, ··· 294 270 bus_ops = kmalloc(sizeof(*bus_ops), GFP_KERNEL); 295 271 if (!bus_ops) 296 272 return -ENOMEM; 297 - ops = pci_bus_set_ops(bus, &pci_ops_aer); 273 + ops = pci_bus_set_ops(bus, &aer_inj_pci_ops); 298 274 spin_lock_irqsave(&inject_lock, flags); 299 - if (ops == &pci_ops_aer) 275 + if (ops == &aer_inj_pci_ops) 300 276 goto out; 301 277 pci_bus_ops_init(bus_ops, bus, ops); 302 278 list_add(&bus_ops->list, &pci_bus_ops_list); ··· 358 334 return -ENODEV; 359 335 rpdev = pcie_find_root_port(dev); 360 336 if (!rpdev) { 337 + dev_err(&dev->dev, "aer_inject: Root port not found\n"); 361 338 ret = -ENODEV; 362 339 goto out_put; 363 340 } 364 341 365 342 pos_cap_err = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR); 366 343 if (!pos_cap_err) { 367 - ret = -EPERM; 344 + dev_err(&dev->dev, "aer_inject: Device doesn't support AER\n"); 345 + ret = -EPROTONOSUPPORT; 368 346 goto out_put; 369 347 } 370 348 pci_read_config_dword(dev, pos_cap_err + PCI_ERR_UNCOR_SEVER, &sever); ··· 376 350 377 351 rp_pos_cap_err = pci_find_ext_capability(rpdev, PCI_EXT_CAP_ID_ERR); 378 352 if (!rp_pos_cap_err) { 379 - ret = -EPERM; 353 + dev_err(&rpdev->dev, 354 + "aer_inject: Root port doesn't support AER\n"); 355 + ret = -EPROTONOSUPPORT; 380 356 goto out_put; 381 357 } 382 358 ··· 425 397 if (!aer_mask_override && einj->cor_status && 426 398 !(einj->cor_status & ~cor_mask)) { 427 399 ret = -EINVAL; 428 - 
printk(KERN_WARNING "The correctable error(s) is masked by device\n"); 400 + dev_warn(&dev->dev, 401 + "aer_inject: The correctable error(s) is masked by device\n"); 429 402 spin_unlock_irqrestore(&inject_lock, flags); 430 403 goto out_put; 431 404 } 432 405 if (!aer_mask_override && einj->uncor_status && 433 406 !(einj->uncor_status & ~uncor_mask)) { 434 407 ret = -EINVAL; 435 - printk(KERN_WARNING "The uncorrectable error(s) is masked by device\n"); 408 + dev_warn(&dev->dev, 409 + "aer_inject: The uncorrectable error(s) is masked by device\n"); 436 410 spin_unlock_irqrestore(&inject_lock, flags); 437 411 goto out_put; 438 412 } ··· 487 457 488 458 if (find_aer_device(rpdev, &edev)) { 489 459 if (!get_service_data(edev)) { 490 - printk(KERN_WARNING "AER service is not initialized\n"); 491 - ret = -EINVAL; 460 + dev_warn(&edev->device, 461 + "aer_inject: AER service is not initialized\n"); 462 + ret = -EPROTONOSUPPORT; 492 463 goto out_put; 493 464 } 465 + dev_info(&edev->device, 466 + "aer_inject: Injecting errors %08x/%08x into device %s\n", 467 + einj->cor_status, einj->uncor_status, pci_name(dev)); 494 468 aer_irq(-1, edev); 495 - } else 496 - ret = -EINVAL; 469 + } else { 470 + dev_err(&rpdev->dev, "aer_inject: AER device not found\n"); 471 + ret = -ENODEV; 472 + } 497 473 out_put: 498 474 kfree(err_alloc); 499 475 kfree(rperr_alloc);
+6 -5
drivers/pci/pcie/pme.c
··· 396 396 { 397 397 struct pcie_pme_service_data *data = get_service_data(srv); 398 398 struct pci_dev *port = srv->port; 399 - bool wakeup; 399 + bool wakeup, wake_irq_enabled = false; 400 400 int ret; 401 401 402 402 if (device_may_wakeup(&port->dev)) { ··· 409 409 spin_lock_irq(&data->lock); 410 410 if (wakeup) { 411 411 ret = enable_irq_wake(srv->irq); 412 - data->suspend_level = PME_SUSPEND_WAKEUP; 412 + if (ret == 0) { 413 + data->suspend_level = PME_SUSPEND_WAKEUP; 414 + wake_irq_enabled = true; 415 + } 413 416 } 414 - if (!wakeup || ret) { 415 - struct pci_dev *port = srv->port; 416 - 417 + if (!wake_irq_enabled) { 417 418 pcie_pme_interrupt_enable(port, false); 418 419 pcie_clear_root_pme_status(port); 419 420 data->suspend_level = PME_SUSPEND_NOIRQ;
+43 -2
drivers/pci/probe.c
···
 #include <linux/pci-aspm.h>
 #include <linux/aer.h>
 #include <linux/acpi.h>
-#include <asm-generic/pci-bridge.h>
+#include <linux/irqdomain.h>
 #include "pci.h"

 #define CARDBUS_LATENCY_TIMER	176	/* secondary latency timer */
···
 	u64 l64, sz64, mask64;
 	u16 orig_cmd;
 	struct pci_bus_region region, inverted_region;
+
+	if (dev->non_compliant_bars)
+		return 0;

 	mask = type ? PCI_ROM_ADDRESS_MASK : ~0;
···
 	if (!d)
 		d = pci_host_bridge_acpi_msi_domain(bus);

+#ifdef CONFIG_PCI_MSI_IRQ_DOMAIN
+	/*
+	 * If no IRQ domain was found via the OF tree, try looking it up
+	 * directly through the fwnode_handle.
+	 */
+	if (!d) {
+		struct fwnode_handle *fwnode = pci_root_bus_fwnode(bus);
+
+		if (fwnode)
+			d = irq_find_matching_fwnode(fwnode,
+						     DOMAIN_BUS_PCI_MSI);
+	}
+#endif
+
 	return d;
 }
···
 	WARN_ON(ret < 0);

 	pcibios_add_bus(child);
+
+	if (child->ops->add_bus) {
+		ret = child->ops->add_bus(child);
+		if (WARN_ON(ret < 0))
+			dev_err(&child->dev, "failed to add bus: %d\n", ret);
+	}

 	/* Create legacy_io and legacy_mem files for this bus */
 	pci_create_legacy_files(child);
···
 int pci_setup_device(struct pci_dev *dev)
 {
 	u32 class;
+	u16 cmd;
 	u8 hdr_type;
 	int pos = 0;
 	struct pci_bus_region region;
···
 	pci_fixup_device(pci_fixup_early, dev);
 	/* device class may be changed after fixup */
 	class = dev->class >> 8;
+
+	if (dev->non_compliant_bars) {
+		pci_read_config_word(dev, PCI_COMMAND, &cmd);
+		if (cmd & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
+			dev_info(&dev->dev, "device has non-compliant BARs; disabling IO/MEM decoding\n");
+			cmd &= ~PCI_COMMAND_IO;
+			cmd &= ~PCI_COMMAND_MEMORY;
+			pci_write_config_word(dev, PCI_COMMAND, cmd);
+		}
+	}

 	switch (dev->hdr_type) {		/* header type */
 	case PCI_HEADER_TYPE_NORMAL:		/* standard header */
···
 	pci_pm_init(dev);

 	/* Vital Product Data */
-	pci_vpd_pci22_init(dev);
+	pci_vpd_init(dev);

 	/* Alternative Routing-ID Forwarding */
 	pci_configure_ari(dev);
···
 		return 0;
 	if (pci_pcie_type(parent) == PCI_EXP_TYPE_ROOT_PORT)
 		return 1;
+
+	/*
+	 * PCIe downstream ports are bridges that normally lead to only a
+	 * device 0, but if PCI_SCAN_ALL_PCIE_DEVS is set, scan all
+	 * possible devices, not just device 0.  See PCIe spec r3.0,
+	 * sec 7.3.1.
+	 */
 	if (parent->has_secondary_link &&
 	    !pci_has_flag(PCI_SCAN_ALL_PCIE_DEVS))
 		return 1;
+45 -1
drivers/pci/quirks.c
···
 	u32 class = pdev->class;

 	/* Use "USB Device (not host controller)" class */
-	pdev->class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe;
+	pdev->class = PCI_CLASS_SERIAL_USB_DEVICE;
 	dev_info(&pdev->dev, "PCI class overridden (%#08x -> %#08x) so dwc3 driver can claim this instead of xhci\n",
 		 class, pdev->class);
 }
···
 	}
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, 0x324e, quirk_via_cx700_pci_parking_caching);
+
+/*
+ * If a device follows the VPD format spec, the PCI core will not read or
+ * write past the VPD End Tag.  But some vendors do not follow the VPD
+ * format spec, so we can't tell how much data is safe to access.  Devices
+ * may behave unpredictably if we access too much.  Blacklist these devices
+ * so we don't touch VPD at all.
+ */
+static void quirk_blacklist_vpd(struct pci_dev *dev)
+{
+	if (dev->vpd) {
+		dev->vpd->len = 0;
+		dev_warn(&dev->dev, FW_BUG "VPD access disabled\n");
+	}
+}
+
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0060, quirk_blacklist_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x007c, quirk_blacklist_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0413, quirk_blacklist_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0078, quirk_blacklist_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0079, quirk_blacklist_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0073, quirk_blacklist_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x0071, quirk_blacklist_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005b, quirk_blacklist_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x002f, quirk_blacklist_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005d, quirk_blacklist_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID,
+			quirk_blacklist_vpd);

 /*
  * For Broadcom 5706, 5708, 5709 rev. A nics, any read beyond the
···
 #endif
 }

+static int pci_quirk_cavium_acs(struct pci_dev *dev, u16 acs_flags)
+{
+	/*
+	 * Cavium devices matching this quirk do not perform peer-to-peer
+	 * with other functions, allowing masking out these bits as if they
+	 * were unimplemented in the ACS capability.
+	 */
+	acs_flags &= ~(PCI_ACS_SV | PCI_ACS_TB | PCI_ACS_RR |
+		       PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_DT);
+
+	return acs_flags ? 0 : 1;
+}
+
 /*
  * Many Intel PCH root ports do provide ACS-like features to disable peer
  * transactions and validate bus numbers in requests, but do not provide an
···
 	{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_pch_acs },
 	{ 0x19a2, 0x710, pci_quirk_mf_endpoint_acs }, /* Emulex BE3-R */
 	{ 0x10df, 0x720, pci_quirk_mf_endpoint_acs }, /* Emulex Skyhawk-R */
+	/* Cavium ThunderX */
+	{ PCI_VENDOR_ID_CAVIUM, PCI_ANY_ID, pci_quirk_cavium_acs },
 	{ 0 }
 };
+4 -1
drivers/pci/remove.c
···
 {
 	int i;

-	pci_cleanup_rom(dev);
 	for (i = 0; i < PCI_NUM_RESOURCES; i++) {
 		struct resource *res = dev->resource + i;
 		if (res->parent)
···
 	pci_bus_release_busn_res(bus);
 	up_write(&pci_bus_sem);
 	pci_remove_legacy_files(bus);
+
+	if (bus->ops->remove_bus)
+		bus->ops->remove_bus(bus);
+
 	pcibios_remove_bus(bus);
 	device_unregister(&bus->dev);
 }
+24 -57
drivers/pci/rom.c
···
  */
 int pci_enable_rom(struct pci_dev *pdev)
 {
-	struct resource *res = pdev->resource + PCI_ROM_RESOURCE;
+	struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
 	struct pci_bus_region region;
 	u32 rom_addr;

 	if (!res->flags)
 		return -1;
+
+	/* Nothing to enable if we're using a shadow copy in RAM */
+	if (res->flags & IORESOURCE_ROM_SHADOW)
+		return 0;

 	pcibios_resource_to_bus(pdev->bus, &region, res);
 	pci_read_config_dword(pdev, pdev->rom_base_reg, &rom_addr);
···
  */
 void pci_disable_rom(struct pci_dev *pdev)
 {
+	struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
 	u32 rom_addr;
+
+	if (res->flags & IORESOURCE_ROM_SHADOW)
+		return;
+
 	pci_read_config_dword(pdev, pdev->rom_base_reg, &rom_addr);
 	rom_addr &= ~PCI_ROM_ADDRESS_ENABLE;
 	pci_write_config_dword(pdev, pdev->rom_base_reg, rom_addr);
···
 	loff_t start;
 	void __iomem *rom;

-	/*
-	 * IORESOURCE_ROM_SHADOW set on x86, x86_64 and IA64 supports legacy
-	 * memory map if the VGA enable bit of the Bridge Control register is
-	 * set for embedded VGA.
-	 */
-	if (res->flags & IORESOURCE_ROM_SHADOW) {
-		/* primary video rom always starts here */
-		start = (loff_t)0xC0000;
-		*size = 0x20000; /* cover C000:0 through E000:0 */
-	} else {
-		if (res->flags &
-			(IORESOURCE_ROM_COPY | IORESOURCE_ROM_BIOS_COPY)) {
-			*size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
-			return (void __iomem *)(unsigned long)
-				pci_resource_start(pdev, PCI_ROM_RESOURCE);
-		} else {
-			/* assign the ROM an address if it doesn't have one */
-			if (res->parent == NULL &&
-			    pci_assign_resource(pdev, PCI_ROM_RESOURCE))
-				return NULL;
-			start = pci_resource_start(pdev, PCI_ROM_RESOURCE);
-			*size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
-			if (*size == 0)
-				return NULL;
+	/* assign the ROM an address if it doesn't have one */
+	if (res->parent == NULL && pci_assign_resource(pdev, PCI_ROM_RESOURCE))
+		return NULL;

-			/* Enable ROM space decodes */
-			if (pci_enable_rom(pdev))
-				return NULL;
-		}
-	}
+	start = pci_resource_start(pdev, PCI_ROM_RESOURCE);
+	*size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
+	if (*size == 0)
+		return NULL;
+
+	/* Enable ROM space decodes */
+	if (pci_enable_rom(pdev))
+		return NULL;

 	rom = ioremap(start, *size);
 	if (!rom) {
 		/* restore enable if ioremap fails */
-		if (!(res->flags & (IORESOURCE_ROM_ENABLE |
-				    IORESOURCE_ROM_SHADOW |
-				    IORESOURCE_ROM_COPY)))
+		if (!(res->flags & IORESOURCE_ROM_ENABLE))
 			pci_disable_rom(pdev);
 		return NULL;
 	}
···
 {
 	struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];

-	if (res->flags & (IORESOURCE_ROM_COPY | IORESOURCE_ROM_BIOS_COPY))
-		return;
-
 	iounmap(rom);

-	/* Disable again before continuing, leave enabled if pci=rom */
-	if (!(res->flags & (IORESOURCE_ROM_ENABLE | IORESOURCE_ROM_SHADOW)))
+	/* Disable again before continuing */
+	if (!(res->flags & IORESOURCE_ROM_ENABLE))
 		pci_disable_rom(pdev);
 }
 EXPORT_SYMBOL(pci_unmap_rom);
-
-/**
- * pci_cleanup_rom - free the ROM copy created by pci_map_rom_copy
- * @pdev: pointer to pci device struct
- *
- * Free the copied ROM if we allocated one.
- */
-void pci_cleanup_rom(struct pci_dev *pdev)
-{
-	struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
-
-	if (res->flags & IORESOURCE_ROM_COPY) {
-		kfree((void *)(unsigned long)res->start);
-		res->flags |= IORESOURCE_UNSET;
-		res->flags &= ~IORESOURCE_ROM_COPY;
-		res->start = 0;
-		res->end = 0;
-	}
-}

 /**
  * pci_platform_rom - provides a pointer to any ROM image provided by the
-1
drivers/pci/setup-bus.c
···
 #include <linux/ioport.h>
 #include <linux/cache.h>
 #include <linux/slab.h>
-#include <asm-generic/pci-bridge.h>
 #include "pci.h"

 unsigned int pci_flags;
+6
drivers/pci/setup-res.c
···
 	resource_size_t align, size;
 	int ret;

+	if (res->flags & IORESOURCE_PCI_FIXED)
+		return 0;
+
 	res->flags |= IORESOURCE_UNSET;
 	align = pci_resource_alignment(dev, res);
 	if (!align) {
···
 	unsigned long flags;
 	resource_size_t new_size;
 	int ret;
+
+	if (res->flags & IORESOURCE_PCI_FIXED)
+		return 0;

 	flags = res->flags;
 	res->flags |= IORESOURCE_UNSET;
+1 -1
drivers/scsi/mac53c94.c
···
 #include <linux/spinlock.h>
 #include <linux/interrupt.h>
 #include <linux/module.h>
+#include <linux/pci.h>
 #include <asm/dbdma.h>
 #include <asm/io.h>
 #include <asm/pgtable.h>
 #include <asm/prom.h>
-#include <asm/pci-bridge.h>
 #include <asm/macio.h>

 #include <scsi/scsi.h>
+1 -1
drivers/scsi/mesh.c
···
 #include <linux/interrupt.h>
 #include <linux/reboot.h>
 #include <linux/spinlock.h>
+#include <linux/pci.h>
 #include <asm/dbdma.h>
 #include <asm/io.h>
 #include <asm/pgtable.h>
···
 #include <asm/processor.h>
 #include <asm/machdep.h>
 #include <asm/pmac_feature.h>
-#include <asm/pci-bridge.h>
 #include <asm/macio.h>

 #include <scsi/scsi.h>
-1
drivers/usb/core/hcd-pci.c
···
 #ifdef CONFIG_PPC_PMAC
 #include <asm/machdep.h>
 #include <asm/pmac_feature.h>
-#include <asm/pci-bridge.h>
 #include <asm/prom.h>
 #endif
+1 -1
drivers/usb/gadget/udc/amd5536udc.c
···
 static const struct pci_device_id pci_id[] = {
 	{
 		PCI_DEVICE(PCI_VENDOR_ID_AMD, 0x2096),
-		.class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe,
+		.class = PCI_CLASS_SERIAL_USB_DEVICE,
 		.class_mask = 0xffffffff,
 	},
 	{},
+1 -1
drivers/usb/gadget/udc/goku_udc.c
···
 /*-------------------------------------------------------------------------*/

 static const struct pci_device_id pci_ids[] = { {
-	.class = ((PCI_CLASS_SERIAL_USB << 8) | 0xfe),
+	.class = PCI_CLASS_SERIAL_USB_DEVICE,
 	.class_mask = ~0,
 	.vendor = 0x102f,	/* Toshiba */
 	.device = 0x0107,	/* this UDC */
+4 -4
drivers/usb/gadget/udc/net2280.c
···
 /*-------------------------------------------------------------------------*/

 static const struct pci_device_id pci_ids[] = { {
-	.class = ((PCI_CLASS_SERIAL_USB << 8) | 0xfe),
+	.class = PCI_CLASS_SERIAL_USB_DEVICE,
 	.class_mask = ~0,
 	.vendor = PCI_VENDOR_ID_PLX_LEGACY,
 	.device = 0x2280,
···
 	.subdevice = PCI_ANY_ID,
 	.driver_data = PLX_LEGACY | PLX_2280,
 }, {
-	.class = ((PCI_CLASS_SERIAL_USB << 8) | 0xfe),
+	.class = PCI_CLASS_SERIAL_USB_DEVICE,
 	.class_mask = ~0,
 	.vendor = PCI_VENDOR_ID_PLX_LEGACY,
 	.device = 0x2282,
···
 	.driver_data = PLX_LEGACY,
 },
 {
-	.class = ((PCI_CLASS_SERIAL_USB << 8) | 0xfe),
+	.class = PCI_CLASS_SERIAL_USB_DEVICE,
 	.class_mask = ~0,
 	.vendor = PCI_VENDOR_ID_PLX,
 	.device = 0x3380,
···
 	.driver_data = PLX_SUPERSPEED,
 },
 {
-	.class = ((PCI_CLASS_SERIAL_USB << 8) | 0xfe),
+	.class = PCI_CLASS_SERIAL_USB_DEVICE,
 	.class_mask = ~0,
 	.vendor = PCI_VENDOR_ID_PLX,
 	.device = 0x3382,
+4 -4
drivers/usb/gadget/udc/pch_udc.c
···
 {
 	PCI_DEVICE(PCI_VENDOR_ID_INTEL,
 		   PCI_DEVICE_ID_INTEL_QUARK_X1000_UDC),
-	.class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe,
+	.class = PCI_CLASS_SERIAL_USB_DEVICE,
 	.class_mask = 0xffffffff,
 },
 {
 	PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EG20T_UDC),
-	.class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe,
+	.class = PCI_CLASS_SERIAL_USB_DEVICE,
 	.class_mask = 0xffffffff,
 },
 {
 	PCI_DEVICE(PCI_VENDOR_ID_ROHM, PCI_DEVICE_ID_ML7213_IOH_UDC),
-	.class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe,
+	.class = PCI_CLASS_SERIAL_USB_DEVICE,
 	.class_mask = 0xffffffff,
 },
 {
 	PCI_DEVICE(PCI_VENDOR_ID_ROHM, PCI_DEVICE_ID_ML7831_IOH_UDC),
-	.class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe,
+	.class = PCI_CLASS_SERIAL_USB_DEVICE,
 	.class_mask = 0xffffffff,
 },
 { 0 },
-1
drivers/video/fbdev/aty/aty128fb.c
···
 #include <asm/machdep.h>
 #include <asm/pmac_feature.h>
 #include <asm/prom.h>
-#include <asm/pci-bridge.h>
 #include "../macmodes.h"
 #endif
-1
drivers/video/fbdev/aty/radeon_base.c
···

 #ifdef CONFIG_PPC

-#include <asm/pci-bridge.h>
 #include "../macmodes.h"

 #ifdef CONFIG_BOOTX_TEXT
-1
drivers/video/fbdev/imsttfb.c
···
 #if defined(CONFIG_PPC)
 #include <linux/nvram.h>
 #include <asm/prom.h>
-#include <asm/pci-bridge.h>
 #include "macmodes.h"
 #endif
-1
drivers/video/fbdev/matrox/matroxfb_base.h
···

 #if defined(CONFIG_PPC_PMAC)
 #include <asm/prom.h>
-#include <asm/pci-bridge.h>
 #include "../macmodes.h"
 #endif
-4
drivers/video/fbdev/offb.c
···
 #include <linux/pci.h>
 #include <asm/io.h>

-#ifdef CONFIG_PPC64
-#include <asm/pci-bridge.h>
-#endif
-
 #ifdef CONFIG_PPC32
 #include <asm/bootx.h>
 #endif
+1 -1
drivers/virtio/virtio_pci_common.c
···

 /* Qumranet donated their vendor ID for devices 0x1000 thru 0x10FF. */
 static const struct pci_device_id virtio_pci_id_table[] = {
-	{ PCI_DEVICE(0x1af4, PCI_ANY_ID) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_REDHAT_QUMRANET, PCI_ANY_ID) },
 	{ 0 }
 };
-74
include/asm-generic/pci-bridge.h
-/*
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
- */
-#ifndef _ASM_GENERIC_PCI_BRIDGE_H
-#define _ASM_GENERIC_PCI_BRIDGE_H
-
-#ifdef __KERNEL__
-
-enum {
-	/* Force re-assigning all resources (ignore firmware
-	 * setup completely)
-	 */
-	PCI_REASSIGN_ALL_RSRC	= 0x00000001,
-
-	/* Re-assign all bus numbers */
-	PCI_REASSIGN_ALL_BUS	= 0x00000002,
-
-	/* Do not try to assign, just use existing setup */
-	PCI_PROBE_ONLY		= 0x00000004,
-
-	/* Don't bother with ISA alignment unless the bridge has
-	 * ISA forwarding enabled
-	 */
-	PCI_CAN_SKIP_ISA_ALIGN	= 0x00000008,
-
-	/* Enable domain numbers in /proc */
-	PCI_ENABLE_PROC_DOMAINS	= 0x00000010,
-	/* ... except for domain 0 */
-	PCI_COMPAT_DOMAIN_0	= 0x00000020,
-
-	/* PCIe downstream ports are bridges that normally lead to only a
-	 * device 0, but if this is set, we scan all possible devices, not
-	 * just device 0.
-	 */
-	PCI_SCAN_ALL_PCIE_DEVS	= 0x00000040,
-};
-
-#ifdef CONFIG_PCI
-extern unsigned int pci_flags;
-
-static inline void pci_set_flags(int flags)
-{
-	pci_flags = flags;
-}
-
-static inline void pci_add_flags(int flags)
-{
-	pci_flags |= flags;
-}
-
-static inline void pci_clear_flags(int flags)
-{
-	pci_flags &= ~flags;
-}
-
-static inline int pci_has_flag(int flag)
-{
-	return pci_flags & flag;
-}
-#else
-static inline void pci_set_flags(int flags) { }
-static inline void pci_add_flags(int flags) { }
-static inline void pci_clear_flags(int flags) { }
-static inline int pci_has_flag(int flag)
-{
-	return 0;
-}
-#endif	/* CONFIG_PCI */
-
-#endif	/* __KERNEL__ */
-#endif	/* _ASM_GENERIC_PCI_BRIDGE_H */
+29
include/asm-generic/pci-dma-compat.h → include/linux/pci-dma-compat.h
···

 #include <linux/dma-mapping.h>

+/* This defines the direction arg to the DMA mapping routines. */
+#define PCI_DMA_BIDIRECTIONAL	0
+#define PCI_DMA_TODEVICE	1
+#define PCI_DMA_FROMDEVICE	2
+#define PCI_DMA_NONE		3
+
 static inline void *
 pci_alloc_consistent(struct pci_dev *hwdev, size_t size,
 		     dma_addr_t *dma_handle)
···
 {
 	return dma_set_coherent_mask(&dev->dev, mask);
 }
+
+static inline int pci_set_dma_max_seg_size(struct pci_dev *dev,
+					   unsigned int size)
+{
+	return dma_set_max_seg_size(&dev->dev, size);
+}
+
+static inline int pci_set_dma_seg_boundary(struct pci_dev *dev,
+					   unsigned long mask)
+{
+	return dma_set_seg_boundary(&dev->dev, mask);
+}
+#else
+static inline int pci_set_dma_mask(struct pci_dev *dev, u64 mask)
+{ return -EIO; }
+static inline int pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask)
+{ return -EIO; }
+static inline int pci_set_dma_max_seg_size(struct pci_dev *dev,
+					   unsigned int size)
+{ return -EIO; }
+static inline int pci_set_dma_seg_boundary(struct pci_dev *dev,
+					   unsigned long mask)
+{ return -EIO; }
 #endif

 #endif
+1 -3
include/linux/ioport.h
···

 /* PCI ROM control bits (IORESOURCE_BITS) */
 #define IORESOURCE_ROM_ENABLE		(1<<0)	/* ROM is enabled, same as PCI_ROM_ADDRESS_ENABLE */
-#define IORESOURCE_ROM_SHADOW		(1<<1)	/* ROM is copy at C000:0 */
-#define IORESOURCE_ROM_COPY		(1<<2)	/* ROM is alloc'd copy, resource field overlaid */
-#define IORESOURCE_ROM_BIOS_COPY	(1<<3)	/* ROM is BIOS copy, resource field overlaid */
+#define IORESOURCE_ROM_SHADOW		(1<<1)	/* Use RAM image, not ROM BAR */

 /* PCI control bits.  Shares IORESOURCE_BITS with above PCI ROM. */
 #define IORESOURCE_PCI_FIXED		(1<<4)	/* Do not move resource */
+59 -20
include/linux/pci.h
···
 	pci_mmap_mem
 };

-/* This defines the direction arg to the DMA mapping routines. */
-#define PCI_DMA_BIDIRECTIONAL	0
-#define PCI_DMA_TODEVICE	1
-#define PCI_DMA_FROMDEVICE	2
-#define PCI_DMA_NONE		3
-
 /*
  * For PCI devices, the region numbers are assigned this way:
  */
···
 	unsigned int	io_window_1k:1;		/* Intel P2P bridge 1K I/O windows */
 	unsigned int	irq_managed:1;
 	unsigned int	has_secondary_link:1;
+	unsigned int	non_compliant_bars:1;	/* broken BARs; ignore them */
 	pci_dev_flags_t dev_flags;
 	atomic_t	enable_cnt;	/* pci_enable_device has been called */
···
 /* Low-level architecture-dependent routines */

 struct pci_ops {
+	int (*add_bus)(struct pci_bus *bus);
+	void (*remove_bus)(struct pci_bus *bus);
 	void __iomem *(*map_bus)(struct pci_bus *bus, unsigned int devfn, int where);
 	int (*read)(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 *val);
 	int (*write)(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 val);
···
 	.vendor = PCI_VENDOR_ID_##vend, .device = (dev), \
 	.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID, 0, 0

+enum {
+	PCI_REASSIGN_ALL_RSRC	= 0x00000001,	/* ignore firmware setup */
+	PCI_REASSIGN_ALL_BUS	= 0x00000002,	/* reassign all bus numbers */
+	PCI_PROBE_ONLY		= 0x00000004,	/* use existing setup */
+	PCI_CAN_SKIP_ISA_ALIGN	= 0x00000008,	/* don't do ISA alignment */
+	PCI_ENABLE_PROC_DOMAINS	= 0x00000010,	/* enable domains in /proc */
+	PCI_COMPAT_DOMAIN_0	= 0x00000020,	/* ... except domain 0 */
+	PCI_SCAN_ALL_PCIE_DEVS	= 0x00000040,	/* scan all, not just dev 0 */
+};
+
 /* these external functions are only available when PCI support is enabled */
 #ifdef CONFIG_PCI

+extern unsigned int pci_flags;
+
+static inline void pci_set_flags(int flags) { pci_flags = flags; }
+static inline void pci_add_flags(int flags) { pci_flags |= flags; }
+static inline void pci_clear_flags(int flags) { pci_flags &= ~flags; }
+static inline int pci_has_flag(int flag) { return pci_flags & flag; }

 void pcie_bus_configure_settings(struct pci_bus *bus);
···
 bool pci_intx_mask_supported(struct pci_dev *dev);
 bool pci_check_and_mask_intx(struct pci_dev *dev);
 bool pci_check_and_unmask_intx(struct pci_dev *dev);
-int pci_set_dma_max_seg_size(struct pci_dev *dev, unsigned int size);
-int pci_set_dma_seg_boundary(struct pci_dev *dev, unsigned long mask);
 int pci_wait_for_pending(struct pci_dev *dev, int pos, u16 mask);
 int pci_wait_for_pending_transaction(struct pci_dev *dev);
 int pcix_get_max_mmrbc(struct pci_dev *dev);
···
 int pci_set_vga_state(struct pci_dev *pdev, bool decode,
 		      unsigned int command_bits, u32 flags);
+
 /* kmem_cache style wrapper around pci_alloc_consistent() */

 #include <linux/pci-dma.h>
···
 #else /* CONFIG_PCI is not enabled */

+static inline void pci_set_flags(int flags) { }
+static inline void pci_add_flags(int flags) { }
+static inline void pci_clear_flags(int flags) { }
+static inline int pci_has_flag(int flag) { return 0; }
+
 /*
  * If the system does not have PCI, clearly these return errors.  Define
  * these as simple inline functions to avoid hair in drivers.
···
 static inline void pci_set_master(struct pci_dev *dev) { }
 static inline int pci_enable_device(struct pci_dev *dev) { return -EIO; }
 static inline void pci_disable_device(struct pci_dev *dev) { }
-static inline int pci_set_dma_mask(struct pci_dev *dev, u64 mask)
-{ return -EIO; }
-static inline int pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask)
-{ return -EIO; }
-static inline int pci_set_dma_max_seg_size(struct pci_dev *dev,
-					   unsigned int size)
-{ return -EIO; }
-static inline int pci_set_dma_seg_boundary(struct pci_dev *dev,
-					   unsigned long mask)
-{ return -EIO; }
 static inline int pci_assign_resource(struct pci_dev *dev, int i)
 { return -EBUSY; }
 static inline int __pci_register_driver(struct pci_driver *drv,
···
 /* Include architecture-dependent settings and functions */

 #include <asm/pci.h>
+
+#ifndef pci_root_bus_fwnode
+#define pci_root_bus_fwnode(bus)	NULL
+#endif

 /* these helpers provide future and backwards compatibility
  * for accessing popular PCI BAR info */
···
 #define PCI_VPD_LRDT_RW_DATA	PCI_VPD_LRDT_ID(PCI_VPD_LTIN_RW_DATA)

 /* Small Resource Data Type Tag Item Names */
-#define PCI_VPD_STIN_END	0x78	/* End */
+#define PCI_VPD_STIN_END	0x0f	/* End */

-#define PCI_VPD_SRDT_END	PCI_VPD_STIN_END
+#define PCI_VPD_SRDT_END	(PCI_VPD_STIN_END << 3)

 #define PCI_VPD_SRDT_TIN_MASK	0x78
 #define PCI_VPD_SRDT_LEN_MASK	0x07
+#define PCI_VPD_LRDT_TIN_MASK	0x7f

 #define PCI_VPD_LRDT_TAG_SIZE	3
 #define PCI_VPD_SRDT_TAG_SIZE	1
···
 }

 /**
+ * pci_vpd_lrdt_tag - Extracts the Large Resource Data Type Tag Item
+ * @lrdt: Pointer to the beginning of the Large Resource Data Type tag
+ *
+ * Returns the extracted Large Resource Data Type Tag item.
+ */
+static inline u16 pci_vpd_lrdt_tag(const u8 *lrdt)
+{
+	return (u16)(lrdt[0] & PCI_VPD_LRDT_TIN_MASK);
+}
+
+/**
  * pci_vpd_srdt_size - Extracts the Small Resource Data Type length
  * @srdt: Pointer to the beginning of the Small Resource Data Type tag
  *
···
 static inline u8 pci_vpd_srdt_size(const u8 *srdt)
 {
 	return (*srdt) & PCI_VPD_SRDT_LEN_MASK;
+}
+
+/**
+ * pci_vpd_srdt_tag - Extracts the Small Resource Data Type Tag Item
+ * @srdt: Pointer to the beginning of the Small Resource Data Type tag
+ *
+ * Returns the extracted Small Resource Data Type Tag Item.
+ */
+static inline u8 pci_vpd_srdt_tag(const u8 *srdt)
+{
+	return ((*srdt) & PCI_VPD_SRDT_TIN_MASK) >> 3;
 }
···
 {
 	return bus->self && bus->self->ari_enabled;
 }
+
+/* provide the legacy pci_dma_* API */
+#include <linux/pci-dma-compat.h>
+
 #endif /* LINUX_PCI_H */
+5
include/linux/pci_ids.h
···
 #define PCI_CLASS_SERIAL_USB_OHCI	0x0c0310
 #define PCI_CLASS_SERIAL_USB_EHCI	0x0c0320
 #define PCI_CLASS_SERIAL_USB_XHCI	0x0c0330
+#define PCI_CLASS_SERIAL_USB_DEVICE	0x0c03fe
 #define PCI_CLASS_SERIAL_FIBER		0x0c04
 #define PCI_CLASS_SERIAL_SMBUS		0x0c05
···
 #define PCI_VENDOR_ID_QMI		0x1a32

 #define PCI_VENDOR_ID_AZWAVE		0x1a3b
+
+#define PCI_VENDOR_ID_REDHAT_QUMRANET	0x1af4
+#define PCI_SUBVENDOR_ID_REDHAT_QUMRANET	0x1af4
+#define PCI_SUBDEVICE_ID_QEMU		0x1100

 #define PCI_VENDOR_ID_ASMEDIA		0x1b21
+2 -2
sound/pci/intel8x0.c
···
 		goto fini;

 	/* check for known (emulated) devices */
-	if (pci->subsystem_vendor == 0x1af4 &&
-	    pci->subsystem_device == 0x1100) {
+	if (pci->subsystem_vendor == PCI_SUBVENDOR_ID_REDHAT_QUMRANET &&
+	    pci->subsystem_device == PCI_SUBDEVICE_ID_QEMU) {
 		/* KVM emulated sound, PCI SSID: 1af4:1100 */
 		msg = "enable KVM";
 	} else if (pci->subsystem_vendor == 0x1ab8) {
-1
sound/ppc/pmac.c
···
 #include "pmac.h"
 #include <sound/pcm_params.h>
 #include <asm/pmac_feature.h>
-#include <asm/pci-bridge.h>


 /* fixed frequency table for awacs, screamer, burgundy, DACA (44100 max) */