
Merge tag 'pci-v5.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:
- Print IRQ number used by PCIe Link Bandwidth Notification (Dongdong
Liu)
- Add schedule point in pci_read_config() to reduce max latency
(Jiang Biao)
- Add Kconfig options for MPS/MRRS strategy (Jim Quinlan)

Resource management:
- Fix pci_iounmap() memory leak when !CONFIG_GENERIC_IOMAP (Lorenzo
Pieralisi)

PCIe native device hotplug:
- Reduce noisiness on hot removal (Lukas Wunner)

Power management:
- Revert "PCI/PM: Apply D2 delay as milliseconds, not microseconds"
that was done on the basis of spec typo (Bjorn Helgaas)
- Rename pci_dev.d3_delay to d3hot_delay to remove D3hot/D3cold
ambiguity (Krzysztof Wilczyński)
- Remove unused pcibios_pm_ops (Vaibhav Gupta)

IOMMU:
- Enable Translation Blocking for external devices to harden against
DMA attacks (Rajat Jain)

Error handling:
- Add an ACPI APEI notifier chain for vendor CPER records to enable
device-specific error handling (Shiju Jose)

ASPM:
- Remove struct aspm_register_info to simplify code (Saheed O.
Bolarinwa)

Amlogic Meson PCIe controller driver:
- Build as module by default (Kevin Hilman)

Ampere Altra PCIe controller driver:
- Add MCFG quirk to work around non-standard ECAM implementation
(Tuan Phan)

Broadcom iProc PCIe controller driver:
- Set affinity mask on MSI interrupts (Mark Tomlinson)

Broadcom STB PCIe controller driver:
- Make PCIE_BRCMSTB depend on ARCH_BRCMSTB (Jim Quinlan)
- Add DT bindings for more Brcmstb chips (Jim Quinlan)
- Add bcm7278 register info (Jim Quinlan)
- Add bcm7278 PERST# support (Jim Quinlan)
- Add suspend and resume pm_ops (Jim Quinlan)
- Add control of rescal reset (Jim Quinlan)
- Set additional internal memory DMA viewport sizes (Jim Quinlan)
- Accommodate MSI for older chips (Jim Quinlan)
- Set bus max burst size by chip type (Jim Quinlan)
- Add support for bcm7211, bcm7216, bcm7445, bcm7278 (Jim Quinlan)

Freescale i.MX6 PCIe controller driver:
- Use dev_err_probe() to reduce redundant messages (Anson Huang)

Freescale Layerscape PCIe controller driver:
- Enforce 4K DMA buffer alignment in endpoint test (Hou Zhiqiang)
- Add DT compatible strings for ls1088a, ls2088a (Xiaowei Bao)
- Add endpoint support for ls1088a, ls2088a (Xiaowei Bao)
- Add endpoint test support for ls1088a (Xiaowei Bao)
- Add MSI-X support for ls1088a (Xiaowei Bao)

HiSilicon HIP PCIe controller driver:
- Handle HIP-specific errors via ACPI APEI (Yicong Yang)

HiSilicon Kirin PCIe controller driver:
- Return -EPROBE_DEFER if the GPIO isn't ready (Bean Huo)

Intel VMD host bridge driver:
- Factor out physical offset, bus offset, IRQ domain, IRQ allocation
(Jon Derrick)
- Use generic PCI PM correctly (Jon Derrick)

Marvell Aardvark PCIe controller driver:
- Fix compilation on s390 (Pali Rohár)
- Implement driver 'remove' function and allow to build it as module
(Pali Rohár)
- Move PCIe reset card code to advk_pcie_train_link() (Pali Rohár)
- Convert mvebu a3700 internal SMCC firmware return codes to errno
(Pali Rohár)
- Fix initialization with old Marvell's Arm Trusted Firmware (Pali
Rohár)

Microsoft Hyper-V host bridge driver:
- Fix hibernation in case interrupts are not re-created (Dexuan Cui)

NVIDIA Tegra PCIe controller driver:
- Stop checking return value of debugfs_create() functions (Greg
Kroah-Hartman)
- Convert to use DEFINE_SEQ_ATTRIBUTE macro (Liu Shixin)

Qualcomm PCIe controller driver:
- Reset PCIe to work around Qsdk U-Boot issue (Ansuel Smith)

Renesas R-Car PCIe controller driver:
- Add DT documentation for r8a774a1, r8a774b1, r8a774e1 endpoints
(Lad Prabhakar)
- Add RZ/G2M, RZ/G2N, RZ/G2H IDs to endpoint test (Lad Prabhakar)
- Add DT support for r8a7742 (Lad Prabhakar)

Socionext UniPhier Pro5 controller driver:
- Add DT descriptions of iATU register (host and endpoint) (Kunihiko
Hayashi)

Synopsys DesignWare PCIe controller driver:
- Add link up check in dw_child_pcie_ops.map_bus() (racy, but seems
unavoidable) (Hou Zhiqiang)
- Fix endpoint Header Type check so multi-function devices work (Hou
Zhiqiang)
- Skip PCIE_MSI_INTR0* programming if MSI is disabled (Jisheng Zhang)
- Stop leaking MSI page in suspend/resume (Jisheng Zhang)
- Add common iATU register support instead of keystone-specific code
(Kunihiko Hayashi)
- Major config space access and other cleanups in dwc core and
drivers that use it (al, exynos, histb, imx6, intel-gw, keystone,
kirin, meson, qcom, tegra) (Rob Herring)
- Add multiple PFs support for endpoint (Xiaowei Bao)
- Add MSI-X doorbell mode in endpoint mode (Xiaowei Bao)

Miscellaneous:
- Use fallthrough pseudo-keyword (Gustavo A. R. Silva)
- Fix "0 used as NULL pointer" warnings (Gustavo Pimentel)
- Fix "cast truncates bits from constant value" warnings (Gustavo
Pimentel)
- Remove redundant zeroing for sg_init_table() (Julia Lawall)
- Use scnprintf(), not snprintf(), in sysfs "show" functions
(Krzysztof Wilczyński)
- Remove unused assignments (Krzysztof Wilczyński)
- Fix "0 used as NULL pointer" warning (Krzysztof Wilczyński)
- Simplify bool comparisons (Krzysztof Wilczyński)
- Use for_each_child_of_node() and for_each_node_by_name() (Qinglang
Miao)
- Simplify return expressions (Qinglang Miao)"

* tag 'pci-v5.10-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (147 commits)
PCI: vmd: Update VMD PM to correctly use generic PCI PM
PCI: vmd: Create IRQ allocation helper
PCI: vmd: Create IRQ Domain configuration helper
PCI: vmd: Create bus offset configuration helper
PCI: vmd: Create physical offset helper
PCI: v3-semi: Remove unneeded break
PCI: dwc: Add link up check in dw_child_pcie_ops.map_bus()
PCI/ASPM: Remove struct pcie_link_state.l1ss
PCI/ASPM: Remove struct aspm_register_info.l1ss_cap
PCI/ASPM: Pass L1SS Capabilities value, not struct aspm_register_info
PCI/ASPM: Remove struct aspm_register_info.l1ss_ctl1
PCI/ASPM: Remove struct aspm_register_info.l1ss_ctl2 (unused)
PCI/ASPM: Remove struct aspm_register_info.l1ss_cap_ptr
PCI/ASPM: Remove struct aspm_register_info.latency_encoding
PCI/ASPM: Remove struct aspm_register_info.enabled
PCI/ASPM: Remove struct aspm_register_info.support
PCI/ASPM: Use 'parent' and 'child' for readability
PCI/ASPM: Move LTR path check to where it's used
PCI/ASPM: Move pci_clear_and_set_dword() earlier
PCI: dwc: Fix MSI page leakage in suspend/resume
...

+2551 -1792
+49 -7
Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
···
 maintainers:
   - Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
 
-allOf:
-  - $ref: /schemas/pci/pci-bus.yaml#
-
 properties:
   compatible:
-    const: brcm,bcm2711-pcie # The Raspberry Pi 4
+    items:
+      - enum:
+          - brcm,bcm2711-pcie # The Raspberry Pi 4
+          - brcm,bcm7211-pcie # Broadcom STB version of RPi4
+          - brcm,bcm7278-pcie # Broadcom 7278 Arm
+          - brcm,bcm7216-pcie # Broadcom 7216 Arm
+          - brcm,bcm7445-pcie # Broadcom 7445 Arm
 
   reg:
     maxItems: 1
···
       - const: msi
 
   ranges:
-    maxItems: 1
+    minItems: 1
+    maxItems: 4
 
   dma-ranges:
-    maxItems: 1
+    minItems: 1
+    maxItems: 6
 
   clocks:
     maxItems: 1
···
 
   aspm-no-l0s: true
 
+  resets:
+    description: for "brcm,bcm7216-pcie", must be a valid reset
+      phandle pointing to the RESCAL reset controller provider node.
+    $ref: "/schemas/types.yaml#/definitions/phandle"
+
+  reset-names:
+    items:
+      - const: rescal
+
+  brcm,scb-sizes:
+    description: u64 giving the 64bit PCIe memory
+      viewport size of a memory controller.  There may be up to
+      three controllers, and each size must be a power of two
+      with a size greater or equal to the amount of memory the
+      controller supports.  Note that each memory controller
+      may have two component regions -- base and extended -- so
+      this information cannot be deduced from the dma-ranges.
+    $ref: /schemas/types.yaml#/definitions/uint64-array
+    items:
+      minItems: 1
+      maxItems: 3
+
 required:
   - reg
+  - ranges
   - dma-ranges
   - "#interrupt-cells"
   - interrupts
···
   - interrupt-map-mask
   - interrupt-map
   - msi-controller
+
+allOf:
+  - $ref: /schemas/pci/pci-bus.yaml#
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: brcm,bcm7216-pcie
+    then:
+      required:
+        - resets
+        - reset-names
 
 unevaluatedProperties: false
···
       msi-parent = <&pcie0>;
       msi-controller;
       ranges = <0x02000000 0x0 0xf8000000 0x6 0x00000000 0x0 0x04000000>;
-      dma-ranges = <0x02000000 0x0 0x00000000 0x0 0x00000000 0x0 0x80000000>;
+      dma-ranges = <0x42000000 0x1 0x00000000 0x0 0x40000000 0x0 0x80000000>,
+                   <0x42000000 0x1 0x80000000 0x3 0x00000000 0x0 0x80000000>;
       brcm,enable-ssc;
+      brcm,scb-sizes = <0x0000000080000000 0x0000000080000000>;
     };
   };
+2
Documentation/devicetree/bindings/pci/layerscape-pci.txt
···
         "fsl,ls1028a-pcie"
   EP mode:
 	"fsl,ls1046a-pcie-ep", "fsl,ls-pcie-ep"
+	"fsl,ls1088a-pcie-ep", "fsl,ls-pcie-ep"
+	"fsl,ls2088a-pcie-ep", "fsl,ls-pcie-ep"
 - reg: base addresses and lengths of the PCIe controller register blocks.
 - interrupts: A list of interrupt outputs of the controller. Must contain an
   entry for each entry in the interrupt-names property.
+6 -2
Documentation/devicetree/bindings/pci/rcar-pci-ep.yaml
···
 properties:
   compatible:
     items:
-      - const: renesas,r8a774c0-pcie-ep
-      - const: renesas,rcar-gen3-pcie-ep
+      - enum:
+          - renesas,r8a774a1-pcie-ep  # RZ/G2M
+          - renesas,r8a774b1-pcie-ep  # RZ/G2N
+          - renesas,r8a774c0-pcie-ep  # RZ/G2E
+          - renesas,r8a774e1-pcie-ep  # RZ/G2H
+      - const: renesas,rcar-gen3-pcie-ep # R-Car Gen3 and RZ/G2
 
   reg:
     maxItems: 5
+2 -1
Documentation/devicetree/bindings/pci/rcar-pci.txt
···
 * Renesas R-Car PCIe interface
 
 Required properties:
-compatible: "renesas,pcie-r8a7743" for the R8A7743 SoC;
+compatible: "renesas,pcie-r8a7742" for the R8A7742 SoC;
+	    "renesas,pcie-r8a7743" for the R8A7743 SoC;
 	    "renesas,pcie-r8a7744" for the R8A7744 SoC;
 	    "renesas,pcie-r8a774a1" for the R8A774A1 SoC;
 	    "renesas,pcie-r8a774b1" for the R8A774B1 SoC;
+14 -6
Documentation/devicetree/bindings/pci/socionext,uniphier-pcie-ep.yaml
···
     const: socionext,uniphier-pro5-pcie-ep
 
   reg:
-    maxItems: 4
+    minItems: 4
+    maxItems: 5
 
   reg-names:
-    items:
-      - const: dbi
-      - const: dbi2
-      - const: link
-      - const: addr_space
+    oneOf:
+      - items:
+          - const: dbi
+          - const: dbi2
+          - const: link
+          - const: addr_space
+      - items:
+          - const: dbi
+          - const: dbi2
+          - const: link
+          - const: addr_space
+          - const: atu
 
   clocks:
     maxItems: 2
+1
Documentation/devicetree/bindings/pci/uniphier-pcie.txt
···
 	"dbi"    - controller configuration registers
 	"link"   - SoC-specific glue layer registers
 	"config" - PCIe configuration space
+	"atu"    - iATU registers for DWC version 4.80 or later
 - clocks: A phandle to the clock gate for PCIe glue layer including
 	the host controller.
 - resets: A phandle to the reset line for PCIe glue layer including
+1 -1
Documentation/power/pci.rst
···
 	unsigned int	d2_support:1;	 /* Low power state D2 is supported */
 	unsigned int	no_d1d2:1;	 /* D1 and D2 are forbidden */
 	unsigned int	wakeup_prepared:1; /* Device prepared for wake up */
-	unsigned int	d3_delay;	 /* D3->D0 transition time in ms */
+	unsigned int	d3hot_delay;	 /* D3hot->D0 transition time in ms */
 	...
 };
-7
arch/arm/include/asm/mach/pci.h
···
 struct device;
 
 struct hw_pci {
-	struct msi_controller *msi_ctrl;
 	struct pci_ops	*ops;
 	int		nr_controllers;
-	unsigned int	io_optional:1;
 	void		**private_data;
 	int		(*setup)(int nr, struct pci_sys_data *);
 	int		(*scan)(int nr, struct pci_host_bridge *);
···
 	void		(*postinit)(void);
 	u8		(*swizzle)(struct pci_dev *dev, u8 *pin);
 	int		(*map_irq)(const struct pci_dev *dev, u8 slot, u8 pin);
-	resource_size_t (*align_resource)(struct pci_dev *dev,
-					  const struct resource *res,
-					  resource_size_t start,
-					  resource_size_t size,
-					  resource_size_t align);
 };
 
 /*
+2 -14
arch/arm/kernel/bios32.c
···
 	return irq;
 }
 
-static int pcibios_init_resource(int busnr, struct pci_sys_data *sys,
-				 int io_optional)
+static int pcibios_init_resource(int busnr, struct pci_sys_data *sys)
 {
 	int ret;
 	struct resource_entry *window;
···
 		pci_add_resource_offset(&sys->resources,
 					&iomem_resource, sys->mem_offset);
 	}
-
-	/*
-	 * If a platform says I/O port support is optional, we don't add
-	 * the default I/O space.  The platform is responsible for adding
-	 * any I/O space it needs.
-	 */
-	if (io_optional)
-		return 0;
 
 	resource_list_for_each_entry(window, &sys->resources)
 		if (resource_type(window->res) == IORESOURCE_IO)
···
 
 		if (ret > 0) {
 
-			ret = pcibios_init_resource(nr, sys, hw->io_optional);
+			ret = pcibios_init_resource(nr, sys);
 			if (ret) {
 				pci_free_host_bridge(bridge);
 				break;
···
 			bridge->sysdata = sys;
 			bridge->busnr = sys->busnr;
 			bridge->ops = hw->ops;
-			bridge->msi = hw->msi_ctrl;
-			bridge->align_resource =
-					hw->align_resource;
 
 			ret = pci_scan_root_bus_bridge(bridge);
 		}
+7 -10
arch/sparc/include/asm/io_32.h
···
 #define memcpy_fromio(d,s,sz)	_memcpy_fromio(d,s,sz)
 #define memcpy_toio(d,s,sz)	_memcpy_toio(d,s,sz)
 
+/*
+ * Bus number may be embedded in the higher bits of the physical address.
+ * This is why we have no bus number argument to ioremap().
+ */
+void __iomem *ioremap(phys_addr_t offset, size_t size);
+void iounmap(volatile void __iomem *addr);
+
 #include <asm-generic/io.h>
 
 static inline void _memset_io(volatile void __iomem *dst,
···
 	}
 }
 
-#ifdef __KERNEL__
-
-/*
- * Bus number may be embedded in the higher bits of the physical address.
- * This is why we have no bus number argument to ioremap().
- */
-void __iomem *ioremap(phys_addr_t offset, size_t size);
-void iounmap(volatile void __iomem *addr);
 /* Create a virtual mapping cookie for an IO port range */
 void __iomem *ioport_map(unsigned long port, unsigned int nr);
 void ioport_unmap(void __iomem *);
···
 }
 struct device;
 void sbus_set_sbus64(struct device *, int);
-
-#endif
 
 #define __ARCH_HAS_NO_PAGE_ZERO_MAPPED		1
+1 -1
arch/x86/pci/fixup.c
···
 static void pci_fixup_amd_ehci_pme(struct pci_dev *dev)
 {
 	dev_info(&dev->dev, "PME# does not work under D3, disabling it\n");
-	dev->pme_support &= ~((PCI_PM_CAP_PME_D3 | PCI_PM_CAP_PME_D3cold)
+	dev->pme_support &= ~((PCI_PM_CAP_PME_D3hot | PCI_PM_CAP_PME_D3cold)
 		>> PCI_PM_CAP_PME_SHIFT);
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x7808, pci_fixup_amd_ehci_pme);
+2 -1
arch/x86/pci/intel_mid_pci.c
···
 #include <asm/hw_irq.h>
 #include <asm/io_apic.h>
 #include <asm/intel-mid.h>
+#include <asm/acpi.h>
 
 #define PCIE_CAP_OFFSET	0x100
···
 	 */
 	if (type1_access_ok(dev->bus->number, dev->devfn, PCI_DEVICE_ID))
 		return;
-	dev->d3_delay = 0;
+	dev->d3hot_delay = 0;
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_d3delay_fixup);
+63
drivers/acpi/apei/ghes.c
···
 	((struct acpi_hest_generic_status *)				\
 	 ((struct ghes_estatus_node *)(estatus_node) + 1))
 
+#define GHES_VENDOR_ENTRY_LEN(gdata_len)			       \
+	(sizeof(struct ghes_vendor_record_entry) + (gdata_len))
+#define GHES_GDATA_FROM_VENDOR_ENTRY(vendor_entry)		       \
+	((struct acpi_hest_generic_data *)			       \
+	 ((struct ghes_vendor_record_entry *)(vendor_entry) + 1))
+
 /*
  * NMI-like notifications vary by architecture, before the compiler can prune
  * unused static functions it needs a value for these enums.
···
  * simultaneously.
  */
 static DEFINE_SPINLOCK(ghes_notify_lock_irq);
+
+struct ghes_vendor_record_entry {
+	struct work_struct work;
+	int error_severity;
+	char vendor_record[];
+};
 
 static struct gen_pool *ghes_estatus_pool;
 static unsigned long ghes_estatus_pool_size_request;
···
 #endif
 }
 
+static BLOCKING_NOTIFIER_HEAD(vendor_record_notify_list);
+
+int ghes_register_vendor_record_notifier(struct notifier_block *nb)
+{
+	return blocking_notifier_chain_register(&vendor_record_notify_list, nb);
+}
+EXPORT_SYMBOL_GPL(ghes_register_vendor_record_notifier);
+
+void ghes_unregister_vendor_record_notifier(struct notifier_block *nb)
+{
+	blocking_notifier_chain_unregister(&vendor_record_notify_list, nb);
+}
+EXPORT_SYMBOL_GPL(ghes_unregister_vendor_record_notifier);
+
+static void ghes_vendor_record_work_func(struct work_struct *work)
+{
+	struct ghes_vendor_record_entry *entry;
+	struct acpi_hest_generic_data *gdata;
+	u32 len;
+
+	entry = container_of(work, struct ghes_vendor_record_entry, work);
+	gdata = GHES_GDATA_FROM_VENDOR_ENTRY(entry);
+
+	blocking_notifier_call_chain(&vendor_record_notify_list,
+				     entry->error_severity, gdata);
+
+	len = GHES_VENDOR_ENTRY_LEN(acpi_hest_get_record_size(gdata));
+	gen_pool_free(ghes_estatus_pool, (unsigned long)entry, len);
+}
+
+static void ghes_defer_non_standard_event(struct acpi_hest_generic_data *gdata,
+					  int sev)
+{
+	struct acpi_hest_generic_data *copied_gdata;
+	struct ghes_vendor_record_entry *entry;
+	u32 len;
+
+	len = GHES_VENDOR_ENTRY_LEN(acpi_hest_get_record_size(gdata));
+	entry = (void *)gen_pool_alloc(ghes_estatus_pool, len);
+	if (!entry)
+		return;
+
+	copied_gdata = GHES_GDATA_FROM_VENDOR_ENTRY(entry);
+	memcpy(copied_gdata, gdata, acpi_hest_get_record_size(gdata));
+	entry->error_severity = sev;
+
+	INIT_WORK(&entry->work, ghes_vendor_record_work_func);
+	schedule_work(&entry->work);
+}
+
 static bool ghes_do_proc(struct ghes *ghes,
 			 const struct acpi_hest_generic_status *estatus)
 {
···
 		} else {
 			void *err = acpi_hest_get_payload(gdata);
 
+			ghes_defer_non_standard_event(gdata, sev);
 			log_non_standard_event(sec_type, fru_id, fru_text,
 					       sec_sev, err,
 					       gdata->error_data_length);
+21 -1
drivers/acpi/pci_mcfg.c
···
 	XGENE_V2_ECAM_MCFG(4, 0),
 	XGENE_V2_ECAM_MCFG(4, 1),
 	XGENE_V2_ECAM_MCFG(4, 2),
+
+#define ALTRA_ECAM_QUIRK(rev, seg) \
+	{ "Ampere", "Altra   ", rev, seg, MCFG_BUS_ANY, &pci_32b_read_ops }
+
+	ALTRA_ECAM_QUIRK(1, 0),
+	ALTRA_ECAM_QUIRK(1, 1),
+	ALTRA_ECAM_QUIRK(1, 2),
+	ALTRA_ECAM_QUIRK(1, 3),
+	ALTRA_ECAM_QUIRK(1, 4),
+	ALTRA_ECAM_QUIRK(1, 5),
+	ALTRA_ECAM_QUIRK(1, 6),
+	ALTRA_ECAM_QUIRK(1, 7),
+	ALTRA_ECAM_QUIRK(1, 8),
+	ALTRA_ECAM_QUIRK(1, 9),
+	ALTRA_ECAM_QUIRK(1, 10),
+	ALTRA_ECAM_QUIRK(1, 11),
+	ALTRA_ECAM_QUIRK(1, 12),
+	ALTRA_ECAM_QUIRK(1, 13),
+	ALTRA_ECAM_QUIRK(1, 14),
+	ALTRA_ECAM_QUIRK(1, 15),
 };
 
 static char mcfg_oem_id[ACPI_OEM_ID_SIZE];
···
 {
 	int err = acpi_table_parse(ACPI_SIG_MCFG, pci_mcfg_parse);
 	if (err)
-		pr_err("Failed to parse MCFG (%d)\n", err);
+		pr_debug("Failed to parse MCFG (%d)\n", err);
 }
+1 -1
drivers/hid/intel-ish-hid/ipc/ipc.c
···
 	csr |= PCI_D3hot;
 	pci_write_config_word(pdev, pdev->pm_cap + PCI_PM_CTRL, csr);
 
-	mdelay(pdev->d3_delay);
+	mdelay(pdev->d3hot_delay);
 
 	csr &= ~PCI_PM_CTRL_STATE_MASK;
 	csr |= PCI_D0;
+14 -3
drivers/misc/pci_endpoint_test.c
···
 
 #define PCI_DEVICE_ID_TI_J721E		0xb00d
 #define PCI_DEVICE_ID_TI_AM654		0xb00c
+#define PCI_DEVICE_ID_LS1088A		0x80c0
 
 #define is_am654_pci_dev(pdev)		\
 		((pdev)->device == PCI_DEVICE_ID_TI_AM654)
 
+#define PCI_DEVICE_ID_RENESAS_R8A774A1		0x0028
+#define PCI_DEVICE_ID_RENESAS_R8A774B1		0x002b
 #define PCI_DEVICE_ID_RENESAS_R8A774C0		0x002d
+#define PCI_DEVICE_ID_RENESAS_R8A774E1		0x0025
 
 static DEFINE_IDA(pci_endpoint_test_ida);
···
 	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA72x),
 	  .driver_data = (kernel_ulong_t)&default_data,
 	},
-	{ PCI_DEVICE(PCI_VENDOR_ID_FREESCALE, 0x81c0) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_FREESCALE, 0x81c0),
+	  .driver_data = (kernel_ulong_t)&default_data,
+	},
+	{ PCI_DEVICE(PCI_VENDOR_ID_FREESCALE, PCI_DEVICE_ID_LS1088A),
+	  .driver_data = (kernel_ulong_t)&default_data,
+	},
 	{ PCI_DEVICE_DATA(SYNOPSYS, EDDA, NULL) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM654),
 	  .driver_data = (kernel_ulong_t)&am654_data
 	},
-	{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774C0),
-	},
+	{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774A1),},
+	{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774B1),},
+	{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774C0),},
+	{ PCI_DEVICE(PCI_VENDOR_ID_RENESAS, PCI_DEVICE_ID_RENESAS_R8A774E1),},
 	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721E),
 	  .driver_data = (kernel_ulong_t)&j721e_data,
 	},
+1 -1
drivers/net/ethernet/marvell/sky2.c
···
 	INIT_WORK(&hw->restart_work, sky2_restart);
 
 	pci_set_drvdata(pdev, hw);
-	pdev->d3_delay = 300;
+	pdev->d3hot_delay = 300;
 
 	return 0;
+62
drivers/pci/Kconfig
···
 	  The PCI device frontend driver allows the kernel to import arbitrary
 	  PCI devices from a PCI backend to support PCI driver domains.
 
+choice
+	prompt "PCI Express hierarchy optimization setting"
+	default PCIE_BUS_DEFAULT
+	depends on PCI && EXPERT
+	help
+	  MPS (Max Payload Size) and MRRS (Max Read Request Size) are PCIe
+	  device parameters that affect performance and the ability to
+	  support hotplug and peer-to-peer DMA.
+
+	  The following choices set the MPS and MRRS optimization strategy
+	  at compile-time.  The choices are the same as those offered for
+	  the kernel command-line parameter 'pci', i.e.,
+	  'pci=pcie_bus_tune_off', 'pci=pcie_bus_safe',
+	  'pci=pcie_bus_perf', and 'pci=pcie_bus_peer2peer'.
+
+	  This is a compile-time setting and can be overridden by the above
+	  command-line parameters.  If unsure, choose PCIE_BUS_DEFAULT.
+
+config PCIE_BUS_TUNE_OFF
+	bool "Tune Off"
+	depends on PCI
+	help
+	  Use the BIOS defaults; don't touch MPS at all.  This is the same
+	  as booting with 'pci=pcie_bus_tune_off'.
+
+config PCIE_BUS_DEFAULT
+	bool "Default"
+	depends on PCI
+	help
+	  Default choice; ensure that the MPS matches upstream bridge.
+
+config PCIE_BUS_SAFE
+	bool "Safe"
+	depends on PCI
+	help
+	  Use largest MPS that boot-time devices support.  If you have a
+	  closed system with no possibility of adding new devices, this
+	  will use the largest MPS that's supported by all devices.  This
+	  is the same as booting with 'pci=pcie_bus_safe'.
+
+config PCIE_BUS_PERFORMANCE
+	bool "Performance"
+	depends on PCI
+	help
+	  Use MPS and MRRS for best performance.  Ensure that a given
+	  device's MPS is no larger than its parent MPS, which allows us to
+	  keep all switches/bridges to the max MPS supported by their
+	  parent.  This is the same as booting with 'pci=pcie_bus_perf'.
+
+config PCIE_BUS_PEER2PEER
+	bool "Peer2peer"
+	depends on PCI
+	help
+	  Set MPS = 128 for all devices.  MPS configuration effected by the
+	  other options could cause the MPS on one root port to be
+	  different than that of the MPS on another, which may cause
+	  hot-added devices or peer-to-peer DMA to fail.  Set MPS to the
+	  smallest possible value (128B) system-wide to avoid these issues.
+	  This is the same as booting with 'pci=pcie_bus_peer2peer'.
+
+endchoice
+
 source "drivers/pci/hotplug/Kconfig"
 source "drivers/pci/controller/Kconfig"
 source "drivers/pci/endpoint/Kconfig"
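As a usage sketch of the new options (symbol names as defined by the hunk above; the exact fragment depends on your base configuration), selecting the "safe" strategy at build time looks like this, and the matching 'pci=' boot parameter still overrides whatever was chosen at compile time:

```
# .config fragment: the choice symbols are mutually exclusive,
# so exactly one of them is =y
# CONFIG_PCIE_BUS_TUNE_OFF is not set
# CONFIG_PCIE_BUS_DEFAULT is not set
CONFIG_PCIE_BUS_SAFE=y
# CONFIG_PCIE_BUS_PERFORMANCE is not set
# CONFIG_PCIE_BUS_PEER2PEER is not set

# equivalent kernel command line, which overrides the build-time choice:
#   ... pci=pcie_bus_safe
```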
+10 -2
drivers/pci/controller/Kconfig
···
 	select PCI_BRIDGE_EMUL
 
 config PCI_AARDVARK
-	bool "Aardvark PCIe controller"
+	tristate "Aardvark PCIe controller"
 	depends on (ARCH_MVEBU && ARM64) || COMPILE_TEST
 	depends on OF
 	depends on PCI_MSI_IRQ_DOMAIN
···
 
 config PCIE_BRCMSTB
 	tristate "Broadcom Brcmstb PCIe host controller"
-	depends on ARCH_BCM2835 || COMPILE_TEST
+	depends on ARCH_BRCMSTB || ARCH_BCM2835 || COMPILE_TEST
 	depends on OF
 	depends on PCI_MSI_IRQ_DOMAIN
+	default ARCH_BRCMSTB
 	help
 	  Say Y here to enable PCIe host controller support for
 	  Broadcom STB based SoCs, like the Raspberry Pi 4.
···
 	help
 	  Say Y here if you want to enable PCI controller support on
 	  Loongson systems.
+
+config PCIE_HISI_ERR
+	depends on ACPI_APEI_GHES && (ARM64 || COMPILE_TEST)
+	bool "HiSilicon HIP PCIe controller error handling driver"
+	help
+	  Say Y here if you want error handling support
+	  for the PCIe controller's errors on HiSilicon HIP SoCs
 
 source "drivers/pci/controller/dwc/Kconfig"
 source "drivers/pci/controller/mobiveil/Kconfig"
+1
drivers/pci/controller/Makefile
···
 obj-$(CONFIG_VMD) += vmd.o
 obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o
 obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o
+obj-$(CONFIG_PCIE_HISI_ERR) += pcie-hisi-error.o
 # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
 obj-y				+= dwc/
 obj-y				+= mobiveil/
-1
drivers/pci/controller/cadence/pcie-cadence-ep.c
···
 	cdns_pcie_ep_assert_intx(ep, fn, intx, true);
 	/*
 	 * The mdelay() value was taken from dra7xx_pcie_raise_legacy_irq()
-	 * from drivers/pci/dwc/pci-dra7xx.c
 	 */
 	mdelay(1);
 	cdns_pcie_ep_assert_intx(ep, fn, intx, false);
+2 -6
drivers/pci/controller/cadence/pcie-cadence-host.c
···
 	struct resource_entry *entry;
 	u64 cpu_addr = cfg_res->start;
 	u32 addr0, addr1, desc1;
-	int r, err, busnr = 0;
+	int r, busnr = 0;
 
 	entry = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
 	if (entry)
···
 		r++;
 	}
 
-	err = cdns_pcie_host_map_dma_ranges(rc);
-	if (err)
-		return err;
-
-	return 0;
+	return cdns_pcie_host_map_dma_ranges(rc);
 }
 
 static int cdns_pcie_host_init(struct device *dev,
+2 -1
drivers/pci/controller/dwc/Kconfig
···
 	  Say Y here if you want PCIe controller support on HiSilicon STB SoCs
 
 config PCI_MESON
-	bool "MESON PCIe controller"
+	tristate "MESON PCIe controller"
 	depends on PCI_MSI_IRQ_DOMAIN
+	default m if ARCH_MESON
 	select PCIE_DW_HOST
 	help
 	  Say Y here if you want to enable PCI controller support on Amlogic
+17 -29
drivers/pci/controller/dwc/pci-dra7xx.c
···
 #define	LINK_UP		BIT(16)
 #define	DRA7XX_CPU_TO_BUS_ADDR	0x0FFFFFFF
 
-#define EXP_CAP_ID_OFFSET	0x70
-
 #define	PCIECTRL_TI_CONF_INTX_ASSERT	0x0124
 #define	PCIECTRL_TI_CONF_INTX_DEASSERT	0x0128
···
 	void __iomem		*base;		/* DT ti_conf */
 	int			phy_count;	/* DT phy-names count */
 	struct phy		**phy;
-	int			link_gen;
 	struct irq_domain	*irq_domain;
 	enum dw_pcie_device_mode mode;
 };
···
 	struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci);
 	struct device *dev = pci->dev;
 	u32 reg;
-	u32 exp_cap_off = EXP_CAP_ID_OFFSET;
 
 	if (dw_pcie_link_up(pci)) {
 		dev_err(dev, "link is already up\n");
 		return 0;
-	}
-
-	if (dra7xx->link_gen == 1) {
-		dw_pcie_read(pci->dbi_base + exp_cap_off + PCI_EXP_LNKCAP,
-			     4, &reg);
-		if ((reg & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
-			reg &= ~((u32)PCI_EXP_LNKCAP_SLS);
-			reg |= PCI_EXP_LNKCAP_SLS_2_5GB;
-			dw_pcie_write(pci->dbi_base + exp_cap_off +
-				      PCI_EXP_LNKCAP, 4, reg);
-		}
-
-		dw_pcie_read(pci->dbi_base + exp_cap_off + PCI_EXP_LNKCTL2,
-			     2, &reg);
-		if ((reg & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
-			reg &= ~((u32)PCI_EXP_LNKCAP_SLS);
-			reg |= PCI_EXP_LNKCAP_SLS_2_5GB;
-			dw_pcie_write(pci->dbi_base + exp_cap_off +
-				      PCI_EXP_LNKCTL2, 2, reg);
-		}
 	}
 
 	reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_DEVICE_CMD);
···
 static int dra7xx_pcie_msi_host_init(struct pcie_port *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct device *dev = pci->dev;
 	u32 ctrl, num_ctrls;
+	int ret;
 
 	pp->msi_irq_chip = &dra7xx_pci_msi_bottom_irq_chip;
 
···
 				    ~0);
 	}
 
-	return dw_pcie_allocate_domains(pp);
+	ret = dw_pcie_allocate_domains(pp);
+	if (ret)
+		return ret;
+
+	pp->msi_data = dma_map_single_attrs(dev, &pp->msi_msg,
+					    sizeof(pp->msi_msg),
+					    DMA_FROM_DEVICE,
+					    DMA_ATTR_SKIP_CPU_SYNC);
+	ret = dma_mapping_error(dev, pp->msi_data);
+	if (ret) {
+		dev_err(dev, "Failed to map MSI data\n");
+		pp->msi_data = 0;
+		dw_pcie_free_msi(pp);
+	}
+	return ret;
 }
 
 static const struct dw_pcie_host_ops dra7xx_pcie_host_ops = {
···
 	reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_DEVICE_CMD);
 	reg &= ~LTSSM_EN;
 	dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_DEVICE_CMD, reg);
-
-	dra7xx->link_gen = of_pci_get_max_link_speed(np);
-	if (dra7xx->link_gen < 0 || dra7xx->link_gen > 2)
-		dra7xx->link_gen = 2;
 
 	switch (mode) {
 	case DW_PCIE_RC_TYPE:
+25 -20
drivers/pci/controller/dwc/pci-exynos.c
···
 	exynos_pcie_sideband_dbi_w_mode(ep, false);
 }
 
-static int exynos_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
-				   u32 *val)
+static int exynos_pcie_rd_own_conf(struct pci_bus *bus, unsigned int devfn,
+				   int where, int size, u32 *val)
 {
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct exynos_pcie *ep = to_exynos_pcie(pci);
-	int ret;
+	struct dw_pcie *pci = to_dw_pcie_from_pp(bus->sysdata);
 
-	exynos_pcie_sideband_dbi_r_mode(ep, true);
-	ret = dw_pcie_read(pci->dbi_base + where, size, val);
-	exynos_pcie_sideband_dbi_r_mode(ep, false);
-	return ret;
+	if (PCI_SLOT(devfn)) {
+		*val = ~0;
+		return PCIBIOS_DEVICE_NOT_FOUND;
+	}
+
+	*val = dw_pcie_read_dbi(pci, where, size);
+	return PCIBIOS_SUCCESSFUL;
 }
 
-static int exynos_pcie_wr_own_conf(struct pcie_port *pp, int where, int size,
-				   u32 val)
+static int exynos_pcie_wr_own_conf(struct pci_bus *bus, unsigned int devfn,
+				   int where, int size, u32 val)
 {
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct exynos_pcie *ep = to_exynos_pcie(pci);
-	int ret;
+	struct dw_pcie *pci = to_dw_pcie_from_pp(bus->sysdata);
 
-	exynos_pcie_sideband_dbi_w_mode(ep, true);
-	ret = dw_pcie_write(pci->dbi_base + where, size, val);
-	exynos_pcie_sideband_dbi_w_mode(ep, false);
-	return ret;
+	if (PCI_SLOT(devfn))
+		return PCIBIOS_DEVICE_NOT_FOUND;
+
+	dw_pcie_write_dbi(pci, where, size, val);
+	return PCIBIOS_SUCCESSFUL;
 }
+
+static struct pci_ops exynos_pci_ops = {
+	.read = exynos_pcie_rd_own_conf,
+	.write = exynos_pcie_wr_own_conf,
+};
 
 static int exynos_pcie_link_up(struct dw_pcie *pci)
 {
···
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct exynos_pcie *ep = to_exynos_pcie(pci);
 
+	pp->bridge->ops = &exynos_pci_ops;
+
 	exynos_pcie_establish_link(ep);
 	exynos_pcie_enable_interrupts(ep);
···
 }
 
 static const struct dw_pcie_host_ops exynos_pcie_host_ops = {
-	.rd_own_conf = exynos_pcie_rd_own_conf,
-	.wr_own_conf = exynos_pcie_wr_own_conf,
 	.host_init = exynos_pcie_host_init,
 };
+33 -54
drivers/pci/controller/dwc/pci-imx6.c
··· 79 79 u32 tx_deemph_gen2_6db; 80 80 u32 tx_swing_full; 81 81 u32 tx_swing_low; 82 - int link_gen; 83 82 struct regulator *vpcie; 84 83 void __iomem *phy_base; 85 84 ··· 93 94 #define PHY_PLL_LOCK_WAIT_USLEEP_MAX 200 94 95 #define PHY_PLL_LOCK_WAIT_TIMEOUT (2000 * PHY_PLL_LOCK_WAIT_USLEEP_MAX) 95 96 96 - /* PCIe Root Complex registers (memory-mapped) */ 97 - #define PCIE_RC_IMX6_MSI_CAP 0x50 98 - #define PCIE_RC_LCR 0x7c 99 - #define PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN1 0x1 100 - #define PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN2 0x2 101 - #define PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK 0xf 102 - 103 - #define PCIE_RC_LCSR 0x80 104 - 105 97 /* PCIe Port Logic registers (memory-mapped) */ 106 98 #define PL_OFFSET 0x700 107 99 ··· 105 115 106 116 #define PCIE_PHY_STAT (PL_OFFSET + 0x110) 107 117 #define PCIE_PHY_STAT_ACK BIT(16) 108 - 109 - #define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C 110 118 111 119 /* PHY registers (not memory-mapped) */ 112 120 #define PCIE_PHY_ATEOVRD 0x10 ··· 749 761 { 750 762 struct dw_pcie *pci = imx6_pcie->pci; 751 763 struct device *dev = pci->dev; 764 + u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 752 765 u32 tmp; 753 766 int ret; 754 767 ··· 758 769 * started in Gen2 mode, there is a possibility the devices on the 759 770 * bus will not be detected at all. This happens with PCIe switches. 760 771 */ 761 - tmp = dw_pcie_readl_dbi(pci, PCIE_RC_LCR); 762 - tmp &= ~PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK; 763 - tmp |= PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN1; 764 - dw_pcie_writel_dbi(pci, PCIE_RC_LCR, tmp); 772 + tmp = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP); 773 + tmp &= ~PCI_EXP_LNKCAP_SLS; 774 + tmp |= PCI_EXP_LNKCAP_SLS_2_5GB; 775 + dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, tmp); 765 776 766 777 /* Start LTSSM. */ 767 778 imx6_pcie_ltssm_enable(dev); ··· 770 781 if (ret) 771 782 goto err_reset_phy; 772 783 773 - if (imx6_pcie->link_gen == 2) { 784 + if (pci->link_gen == 2) { 774 785 /* Allow Gen2 mode after the link is up. 
*/ 775 - tmp = dw_pcie_readl_dbi(pci, PCIE_RC_LCR); 776 - tmp &= ~PCIE_RC_LCR_MAX_LINK_SPEEDS_MASK; 777 - tmp |= PCIE_RC_LCR_MAX_LINK_SPEEDS_GEN2; 778 - dw_pcie_writel_dbi(pci, PCIE_RC_LCR, tmp); 786 + tmp = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP); 787 + tmp &= ~PCI_EXP_LNKCAP_SLS; 788 + tmp |= PCI_EXP_LNKCAP_SLS_5_0GB; 789 + dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, tmp); 779 790 780 791 /* 781 792 * Start Directed Speed Change so the best possible ··· 813 824 dev_info(dev, "Link: Gen2 disabled\n"); 814 825 } 815 826 816 - tmp = dw_pcie_readl_dbi(pci, PCIE_RC_LCSR); 817 - dev_info(dev, "Link up, Gen%i\n", (tmp >> 16) & 0xf); 827 + tmp = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA); 828 + dev_info(dev, "Link up, Gen%i\n", tmp & PCI_EXP_LNKSTA_CLS); 818 829 return 0; 819 830 820 831 err_reset_phy: ··· 836 847 imx6_setup_phy_mpll(imx6_pcie); 837 848 dw_pcie_setup_rc(pp); 838 849 imx6_pcie_establish_link(imx6_pcie); 839 - 840 - if (IS_ENABLED(CONFIG_PCI_MSI)) 841 - dw_pcie_msi_init(pp); 850 + dw_pcie_msi_init(pp); 842 851 843 852 return 0; 844 853 } ··· 1060 1073 1061 1074 /* Fetch clocks */ 1062 1075 imx6_pcie->pcie_phy = devm_clk_get(dev, "pcie_phy"); 1063 - if (IS_ERR(imx6_pcie->pcie_phy)) { 1064 - dev_err(dev, "pcie_phy clock source missing or invalid\n"); 1065 - return PTR_ERR(imx6_pcie->pcie_phy); 1066 - } 1076 + if (IS_ERR(imx6_pcie->pcie_phy)) 1077 + return dev_err_probe(dev, PTR_ERR(imx6_pcie->pcie_phy), 1078 + "pcie_phy clock source missing or invalid\n"); 1067 1079 1068 1080 imx6_pcie->pcie_bus = devm_clk_get(dev, "pcie_bus"); 1069 - if (IS_ERR(imx6_pcie->pcie_bus)) { 1070 - dev_err(dev, "pcie_bus clock source missing or invalid\n"); 1071 - return PTR_ERR(imx6_pcie->pcie_bus); 1072 - } 1081 + if (IS_ERR(imx6_pcie->pcie_bus)) 1082 + return dev_err_probe(dev, PTR_ERR(imx6_pcie->pcie_bus), 1083 + "pcie_bus clock source missing or invalid\n"); 1073 1084 1074 1085 imx6_pcie->pcie = devm_clk_get(dev, "pcie"); 1075 - if (IS_ERR(imx6_pcie->pcie)) 
{ 1076 - dev_err(dev, "pcie clock source missing or invalid\n"); 1077 - return PTR_ERR(imx6_pcie->pcie); 1078 - } 1086 + if (IS_ERR(imx6_pcie->pcie)) 1087 + return dev_err_probe(dev, PTR_ERR(imx6_pcie->pcie), 1088 + "pcie clock source missing or invalid\n"); 1079 1089 1080 1090 switch (imx6_pcie->drvdata->variant) { 1081 1091 case IMX6SX: 1082 1092 imx6_pcie->pcie_inbound_axi = devm_clk_get(dev, 1083 1093 "pcie_inbound_axi"); 1084 - if (IS_ERR(imx6_pcie->pcie_inbound_axi)) { 1085 - dev_err(dev, "pcie_inbound_axi clock missing or invalid\n"); 1086 - return PTR_ERR(imx6_pcie->pcie_inbound_axi); 1087 - } 1094 + if (IS_ERR(imx6_pcie->pcie_inbound_axi)) 1095 + return dev_err_probe(dev, PTR_ERR(imx6_pcie->pcie_inbound_axi), 1096 + "pcie_inbound_axi clock missing or invalid\n"); 1088 1097 break; 1089 1098 case IMX8MQ: 1090 1099 imx6_pcie->pcie_aux = devm_clk_get(dev, "pcie_aux"); 1091 - if (IS_ERR(imx6_pcie->pcie_aux)) { 1092 - dev_err(dev, "pcie_aux clock source missing or invalid\n"); 1093 - return PTR_ERR(imx6_pcie->pcie_aux); 1094 - } 1100 + if (IS_ERR(imx6_pcie->pcie_aux)) 1101 + return dev_err_probe(dev, PTR_ERR(imx6_pcie->pcie_aux), 1102 + "pcie_aux clock source missing or invalid\n"); 1095 1103 fallthrough; 1096 1104 case IMX7D: 1097 1105 if (dbi_base->start == IMX8MQ_PCIE2_BASE_ADDR) ··· 1147 1165 imx6_pcie->tx_swing_low = 127; 1148 1166 1149 1167 /* Limit link speed */ 1150 - ret = of_property_read_u32(node, "fsl,max-link-speed", 1151 - &imx6_pcie->link_gen); 1152 - if (ret) 1153 - imx6_pcie->link_gen = 1; 1168 + pci->link_gen = 1; 1169 + ret = of_property_read_u32(node, "fsl,max-link-speed", &pci->link_gen); 1154 1170 1155 1171 imx6_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie"); 1156 1172 if (IS_ERR(imx6_pcie->vpcie)) { ··· 1168 1188 return ret; 1169 1189 1170 1190 if (pci_msi_enabled()) { 1171 - val = dw_pcie_readw_dbi(pci, PCIE_RC_IMX6_MSI_CAP + 1172 - PCI_MSI_FLAGS); 1191 + u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI); 1192 + val 
= dw_pcie_readw_dbi(pci, offset + PCI_MSI_FLAGS); 1173 1193 val |= PCI_MSI_FLAGS_ENABLE; 1174 - dw_pcie_writew_dbi(pci, PCIE_RC_IMX6_MSI_CAP + PCI_MSI_FLAGS, 1175 - val); 1194 + dw_pcie_writew_dbi(pci, offset + PCI_MSI_FLAGS, val); 1176 1195 } 1177 1196 1178 1197 return 0;
+38 -108
drivers/pci/controller/dwc/pci-keystone.c
··· 96 96 #define LEG_EP 0x1 97 97 #define RC 0x2 98 98 99 - #define EXP_CAP_ID_OFFSET 0x70 100 - 101 99 #define KS_PCIE_SYSCLOCKOUTEN BIT(0) 102 100 103 101 #define AM654_PCIE_DEV_TYPE_MASK 0x3 ··· 121 123 122 124 int msi_host_irq; 123 125 int num_lanes; 124 - u32 num_viewport; 125 126 struct phy **phy; 126 127 struct device_link **link; 127 128 struct device_node *msi_intc_np; ··· 394 397 static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie) 395 398 { 396 399 u32 val; 397 - u32 num_viewport = ks_pcie->num_viewport; 398 400 struct dw_pcie *pci = ks_pcie->pci; 399 401 struct pcie_port *pp = &pci->pp; 400 - u64 start = pp->mem->start; 401 - u64 end = pp->mem->end; 402 + u32 num_viewport = pci->num_viewport; 403 + u64 start, end; 404 + struct resource *mem; 402 405 int i; 406 + 407 + mem = resource_list_first_type(&pp->bridge->windows, IORESOURCE_MEM)->res; 408 + start = mem->start; 409 + end = mem->end; 403 410 404 411 /* Disable BARs for inbound access */ 405 412 ks_pcie_set_dbi_mode(ks_pcie); ··· 431 430 ks_pcie_app_writel(ks_pcie, CMD_STATUS, val); 432 431 } 433 432 434 - static int ks_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, 435 - unsigned int devfn, int where, int size, 436 - u32 *val) 433 + static void __iomem *ks_pcie_other_map_bus(struct pci_bus *bus, 434 + unsigned int devfn, int where) 437 435 { 436 + struct pcie_port *pp = bus->sysdata; 438 437 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 439 438 struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 440 439 u32 reg; ··· 445 444 reg |= CFG_TYPE1; 446 445 ks_pcie_app_writel(ks_pcie, CFG_SETUP, reg); 447 446 448 - return dw_pcie_read(pp->va_cfg0_base + where, size, val); 447 + return pp->va_cfg0_base + where; 449 448 } 450 449 451 - static int ks_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, 452 - unsigned int devfn, int where, int size, 453 - u32 val) 454 - { 455 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 456 - struct keystone_pcie *ks_pcie = 
to_keystone_pcie(pci); 457 - u32 reg; 458 - 459 - reg = CFG_BUS(bus->number) | CFG_DEVICE(PCI_SLOT(devfn)) | 460 - CFG_FUNC(PCI_FUNC(devfn)); 461 - if (!pci_is_root_bus(bus->parent)) 462 - reg |= CFG_TYPE1; 463 - ks_pcie_app_writel(ks_pcie, CFG_SETUP, reg); 464 - 465 - return dw_pcie_write(pp->va_cfg0_base + where, size, val); 466 - } 450 + static struct pci_ops ks_child_pcie_ops = { 451 + .map_bus = ks_pcie_other_map_bus, 452 + .read = pci_generic_config_read, 453 + .write = pci_generic_config_write, 454 + }; 467 455 468 456 /** 469 - * ks_pcie_v3_65_scan_bus() - keystone scan_bus post initialization 457 + * ks_pcie_v3_65_add_bus() - keystone add_bus post initialization 470 458 * 471 459 * This sets BAR0 to enable inbound access for MSI_IRQ register 472 460 */ 473 - static void ks_pcie_v3_65_scan_bus(struct pcie_port *pp) 461 + static int ks_pcie_v3_65_add_bus(struct pci_bus *bus) 474 462 { 463 + struct pcie_port *pp = bus->sysdata; 475 464 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 476 465 struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 466 + 467 + if (!pci_is_root_bus(bus)) 468 + return 0; 477 469 478 470 /* Configure and set up BAR0 */ 479 471 ks_pcie_set_dbi_mode(ks_pcie); ··· 482 488 * be sufficient. Use physical address to avoid any conflicts. 
483 489 */ 484 490 dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start); 491 + 492 + return 0; 485 493 } 494 + 495 + static struct pci_ops ks_pcie_ops = { 496 + .map_bus = dw_pcie_own_conf_map_bus, 497 + .read = pci_generic_config_read, 498 + .write = pci_generic_config_write, 499 + .add_bus = ks_pcie_v3_65_add_bus, 500 + }; 486 501 487 502 /** 488 503 * ks_pcie_link_up() - Check if link up ··· 810 807 struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 811 808 int ret; 812 809 810 + pp->bridge->ops = &ks_pcie_ops; 811 + pp->bridge->child_ops = &ks_child_pcie_ops; 812 + 813 813 ret = ks_pcie_config_legacy_irq(ks_pcie); 814 814 if (ret) 815 815 return ret; ··· 848 842 } 849 843 850 844 static const struct dw_pcie_host_ops ks_pcie_host_ops = { 851 - .rd_other_conf = ks_pcie_rd_other_conf, 852 - .wr_other_conf = ks_pcie_wr_other_conf, 853 845 .host_init = ks_pcie_host_init, 854 846 .msi_host_init = ks_pcie_msi_host_init, 855 - .scan_bus = ks_pcie_v3_65_scan_bus, 856 847 }; 857 848 858 849 static const struct dw_pcie_host_ops ks_pcie_am654_host_ops = { ··· 870 867 struct dw_pcie *pci = ks_pcie->pci; 871 868 struct pcie_port *pp = &pci->pp; 872 869 struct device *dev = &pdev->dev; 873 - struct resource *res; 874 870 int ret; 875 - 876 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config"); 877 - pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res); 878 - if (IS_ERR(pp->va_cfg0_base)) 879 - return PTR_ERR(pp->va_cfg0_base); 880 - 881 - pp->va_cfg1_base = pp->va_cfg0_base; 882 871 883 872 ret = dw_pcie_host_init(pp); 884 873 if (ret) { ··· 879 884 } 880 885 881 886 return 0; 882 - } 883 - 884 - static u32 ks_pcie_am654_read_dbi2(struct dw_pcie *pci, void __iomem *base, 885 - u32 reg, size_t size) 886 - { 887 - struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 888 - u32 val; 889 - 890 - ks_pcie_set_dbi_mode(ks_pcie); 891 - dw_pcie_read(base + reg, size, &val); 892 - ks_pcie_clear_dbi_mode(ks_pcie); 893 - return val; 894 887 } 895 888 
896 889 static void ks_pcie_am654_write_dbi2(struct dw_pcie *pci, void __iomem *base, ··· 895 912 .start_link = ks_pcie_start_link, 896 913 .stop_link = ks_pcie_stop_link, 897 914 .link_up = ks_pcie_link_up, 898 - .read_dbi2 = ks_pcie_am654_read_dbi2, 899 915 .write_dbi2 = ks_pcie_am654_write_dbi2, 900 916 }; 901 917 ··· 1107 1125 return 0; 1108 1126 } 1109 1127 1110 - static void ks_pcie_set_link_speed(struct dw_pcie *pci, int link_speed) 1111 - { 1112 - u32 val; 1113 - 1114 - dw_pcie_dbi_ro_wr_en(pci); 1115 - 1116 - val = dw_pcie_readl_dbi(pci, EXP_CAP_ID_OFFSET + PCI_EXP_LNKCAP); 1117 - if ((val & PCI_EXP_LNKCAP_SLS) != link_speed) { 1118 - val &= ~((u32)PCI_EXP_LNKCAP_SLS); 1119 - val |= link_speed; 1120 - dw_pcie_writel_dbi(pci, EXP_CAP_ID_OFFSET + PCI_EXP_LNKCAP, 1121 - val); 1122 - } 1123 - 1124 - val = dw_pcie_readl_dbi(pci, EXP_CAP_ID_OFFSET + PCI_EXP_LNKCTL2); 1125 - if ((val & PCI_EXP_LNKCAP_SLS) != link_speed) { 1126 - val &= ~((u32)PCI_EXP_LNKCAP_SLS); 1127 - val |= link_speed; 1128 - dw_pcie_writel_dbi(pci, EXP_CAP_ID_OFFSET + PCI_EXP_LNKCTL2, 1129 - val); 1130 - } 1131 - 1132 - dw_pcie_dbi_ro_wr_dis(pci); 1133 - } 1134 - 1135 1128 static const struct ks_pcie_of_data ks_pcie_rc_of_data = { 1136 1129 .host_ops = &ks_pcie_host_ops, 1137 1130 .version = 0x365A, ··· 1154 1197 struct keystone_pcie *ks_pcie; 1155 1198 struct device_link **link; 1156 1199 struct gpio_desc *gpiod; 1157 - void __iomem *atu_base; 1158 1200 struct resource *res; 1159 1201 unsigned int version; 1160 1202 void __iomem *base; 1161 - u32 num_viewport; 1162 1203 struct phy **phy; 1163 - int link_speed; 1164 1204 u32 num_lanes; 1165 1205 char name[10]; 1166 1206 int ret; ··· 1274 1320 goto err_get_sync; 1275 1321 } 1276 1322 1277 - if (pci->version >= 0x480A) { 1278 - atu_base = devm_platform_ioremap_resource_byname(pdev, "atu"); 1279 - if (IS_ERR(atu_base)) { 1280 - ret = PTR_ERR(atu_base); 1281 - goto err_get_sync; 1282 - } 1283 - 1284 - pci->atu_base = atu_base; 1285 - 1323 + if 
(pci->version >= 0x480A) 1286 1324 ret = ks_pcie_am654_set_mode(dev, mode); 1287 - if (ret < 0) 1288 - goto err_get_sync; 1289 - } else { 1325 + else 1290 1326 ret = ks_pcie_set_mode(dev); 1291 - if (ret < 0) 1292 - goto err_get_sync; 1293 - } 1294 - 1295 - link_speed = of_pci_get_max_link_speed(np); 1296 - if (link_speed < 0) 1297 - link_speed = 2; 1298 - 1299 - ks_pcie_set_link_speed(pci, link_speed); 1327 + if (ret < 0) 1328 + goto err_get_sync; 1300 1329 1301 1330 switch (mode) { 1302 1331 case DW_PCIE_RC_TYPE: 1303 1332 if (!IS_ENABLED(CONFIG_PCI_KEYSTONE_HOST)) { 1304 1333 ret = -ENODEV; 1305 - goto err_get_sync; 1306 - } 1307 - 1308 - ret = of_property_read_u32(np, "num-viewport", &num_viewport); 1309 - if (ret < 0) { 1310 - dev_err(dev, "unable to read *num-viewport* property\n"); 1311 1334 goto err_get_sync; 1312 1335 } 1313 1336 ··· 1301 1370 gpiod_set_value_cansleep(gpiod, 1); 1302 1371 } 1303 1372 1304 - ks_pcie->num_viewport = num_viewport; 1305 1373 pci->pp.ops = host_ops; 1306 1374 ret = ks_pcie_add_pcie_port(ks_pcie, pdev); 1307 1375 if (ret < 0)
+75 -25
drivers/pci/controller/dwc/pci-layerscape-ep.c
··· 20 20 21 21 #define PCIE_DBI2_OFFSET 0x1000 /* DBI2 base address*/ 22 22 23 - struct ls_pcie_ep { 24 - struct dw_pcie *pci; 23 + #define to_ls_pcie_ep(x) dev_get_drvdata((x)->dev) 24 + 25 + struct ls_pcie_ep_drvdata { 26 + u32 func_offset; 27 + const struct dw_pcie_ep_ops *ops; 28 + const struct dw_pcie_ops *dw_pcie_ops; 25 29 }; 26 30 27 - #define to_ls_pcie_ep(x) dev_get_drvdata((x)->dev) 31 + struct ls_pcie_ep { 32 + struct dw_pcie *pci; 33 + struct pci_epc_features *ls_epc; 34 + const struct ls_pcie_ep_drvdata *drvdata; 35 + }; 28 36 29 37 static int ls_pcie_establish_link(struct dw_pcie *pci) 30 38 { 31 39 return 0; 32 40 } 33 41 34 - static const struct dw_pcie_ops ls_pcie_ep_ops = { 42 + static const struct dw_pcie_ops dw_ls_pcie_ep_ops = { 35 43 .start_link = ls_pcie_establish_link, 36 - }; 37 - 38 - static const struct of_device_id ls_pcie_ep_of_match[] = { 39 - { .compatible = "fsl,ls-pcie-ep",}, 40 - { }, 41 - }; 42 - 43 - static const struct pci_epc_features ls_pcie_epc_features = { 44 - .linkup_notifier = false, 45 - .msi_capable = true, 46 - .msix_capable = false, 47 - .bar_fixed_64bit = (1 << BAR_2) | (1 << BAR_4), 48 44 }; 49 45 50 46 static const struct pci_epc_features* 51 47 ls_pcie_ep_get_features(struct dw_pcie_ep *ep) 52 48 { 53 - return &ls_pcie_epc_features; 49 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 50 + struct ls_pcie_ep *pcie = to_ls_pcie_ep(pci); 51 + 52 + return pcie->ls_epc; 54 53 } 55 54 56 55 static void ls_pcie_ep_init(struct dw_pcie_ep *ep) 57 56 { 58 57 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 58 + struct ls_pcie_ep *pcie = to_ls_pcie_ep(pci); 59 + struct dw_pcie_ep_func *ep_func; 59 60 enum pci_barno bar; 61 + 62 + ep_func = dw_pcie_ep_get_func_from_ep(ep, 0); 63 + if (!ep_func) 64 + return; 60 65 61 66 for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 62 67 dw_pcie_ep_reset_bar(pci, bar); 68 + 69 + pcie->ls_epc->msi_capable = ep_func->msi_cap ? true : false; 70 + pcie->ls_epc->msix_capable = ep_func->msix_cap ? 
true : false; 63 71 } 64 72 65 73 static int ls_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no, 66 - enum pci_epc_irq_type type, u16 interrupt_num) 74 + enum pci_epc_irq_type type, u16 interrupt_num) 67 75 { 68 76 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 69 77 ··· 81 73 case PCI_EPC_IRQ_MSI: 82 74 return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num); 83 75 case PCI_EPC_IRQ_MSIX: 84 - return dw_pcie_ep_raise_msix_irq(ep, func_no, interrupt_num); 76 + return dw_pcie_ep_raise_msix_irq_doorbell(ep, func_no, 77 + interrupt_num); 85 78 default: 86 79 dev_err(pci->dev, "UNKNOWN IRQ type\n"); 87 80 return -EINVAL; 88 81 } 89 82 } 90 83 91 - static const struct dw_pcie_ep_ops pcie_ep_ops = { 84 + static unsigned int ls_pcie_ep_func_conf_select(struct dw_pcie_ep *ep, 85 + u8 func_no) 86 + { 87 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 88 + struct ls_pcie_ep *pcie = to_ls_pcie_ep(pci); 89 + 90 + WARN_ON(func_no && !pcie->drvdata->func_offset); 91 + return pcie->drvdata->func_offset * func_no; 92 + } 93 + 94 + static const struct dw_pcie_ep_ops ls_pcie_ep_ops = { 92 95 .ep_init = ls_pcie_ep_init, 93 96 .raise_irq = ls_pcie_ep_raise_irq, 94 97 .get_features = ls_pcie_ep_get_features, 98 + .func_conf_select = ls_pcie_ep_func_conf_select, 99 + }; 100 + 101 + static const struct ls_pcie_ep_drvdata ls1_ep_drvdata = { 102 + .ops = &ls_pcie_ep_ops, 103 + .dw_pcie_ops = &dw_ls_pcie_ep_ops, 104 + }; 105 + 106 + static const struct ls_pcie_ep_drvdata ls2_ep_drvdata = { 107 + .func_offset = 0x20000, 108 + .ops = &ls_pcie_ep_ops, 109 + .dw_pcie_ops = &dw_ls_pcie_ep_ops, 110 + }; 111 + 112 + static const struct of_device_id ls_pcie_ep_of_match[] = { 113 + { .compatible = "fsl,ls1046a-pcie-ep", .data = &ls1_ep_drvdata }, 114 + { .compatible = "fsl,ls1088a-pcie-ep", .data = &ls2_ep_drvdata }, 115 + { .compatible = "fsl,ls2088a-pcie-ep", .data = &ls2_ep_drvdata }, 116 + { }, 95 117 }; 96 118 97 119 static int __init ls_add_pcie_ep(struct ls_pcie_ep *pcie, 98 - struct 
platform_device *pdev) 120 + struct platform_device *pdev) 99 121 { 100 122 struct dw_pcie *pci = pcie->pci; 101 123 struct device *dev = pci->dev; ··· 134 96 int ret; 135 97 136 98 ep = &pci->ep; 137 - ep->ops = &pcie_ep_ops; 99 + ep->ops = pcie->drvdata->ops; 138 100 139 101 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space"); 140 102 if (!res) ··· 157 119 struct device *dev = &pdev->dev; 158 120 struct dw_pcie *pci; 159 121 struct ls_pcie_ep *pcie; 122 + struct pci_epc_features *ls_epc; 160 123 struct resource *dbi_base; 161 124 int ret; 162 125 ··· 169 130 if (!pci) 170 131 return -ENOMEM; 171 132 133 + ls_epc = devm_kzalloc(dev, sizeof(*ls_epc), GFP_KERNEL); 134 + if (!ls_epc) 135 + return -ENOMEM; 136 + 137 + pcie->drvdata = of_device_get_match_data(dev); 138 + 139 + pci->dev = dev; 140 + pci->ops = pcie->drvdata->dw_pcie_ops; 141 + 142 + ls_epc->bar_fixed_64bit = (1 << BAR_2) | (1 << BAR_4), 143 + 144 + pcie->pci = pci; 145 + pcie->ls_epc = ls_epc; 146 + 172 147 dbi_base = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs"); 173 148 pci->dbi_base = devm_pci_remap_cfg_resource(dev, dbi_base); 174 149 if (IS_ERR(pci->dbi_base)) 175 150 return PTR_ERR(pci->dbi_base); 176 151 177 152 pci->dbi_base2 = pci->dbi_base + PCIE_DBI2_OFFSET; 178 - pci->dev = dev; 179 - pci->ops = &ls_pcie_ep_ops; 180 - pcie->pci = pci; 181 153 182 154 platform_set_drvdata(pdev, pcie); 183 155
+44 -120
drivers/pci/controller/dwc/pci-meson.c
··· 17 17 #include <linux/resource.h> 18 18 #include <linux/types.h> 19 19 #include <linux/phy/phy.h> 20 + #include <linux/module.h> 20 21 21 22 #include "pcie-designware.h" 22 23 23 24 #define to_meson_pcie(x) dev_get_drvdata((x)->dev) 24 25 25 - /* External local bus interface registers */ 26 - #define PLR_OFFSET 0x700 27 - #define PCIE_PORT_LINK_CTRL_OFF (PLR_OFFSET + 0x10) 28 - #define FAST_LINK_MODE BIT(7) 29 - #define LINK_CAPABLE_MASK GENMASK(21, 16) 30 - #define LINK_CAPABLE_X1 BIT(16) 31 - 32 - #define PCIE_GEN2_CTRL_OFF (PLR_OFFSET + 0x10c) 33 - #define NUM_OF_LANES_MASK GENMASK(12, 8) 34 - #define NUM_OF_LANES_X1 BIT(8) 35 - #define DIRECT_SPEED_CHANGE BIT(17) 36 - 37 - #define TYPE1_HDR_OFFSET 0x0 38 - #define PCIE_STATUS_COMMAND (TYPE1_HDR_OFFSET + 0x04) 39 - #define PCI_IO_EN BIT(0) 40 - #define PCI_MEM_SPACE_EN BIT(1) 41 - #define PCI_BUS_MASTER_EN BIT(2) 42 - 43 - #define PCIE_BASE_ADDR0 (TYPE1_HDR_OFFSET + 0x10) 44 - #define PCIE_BASE_ADDR1 (TYPE1_HDR_OFFSET + 0x14) 45 - 46 - #define PCIE_CAP_OFFSET 0x70 47 - #define PCIE_DEV_CTRL_DEV_STUS (PCIE_CAP_OFFSET + 0x08) 48 - #define PCIE_CAP_MAX_PAYLOAD_MASK GENMASK(7, 5) 49 26 #define PCIE_CAP_MAX_PAYLOAD_SIZE(x) ((x) << 5) 50 - #define PCIE_CAP_MAX_READ_REQ_MASK GENMASK(14, 12) 51 27 #define PCIE_CAP_MAX_READ_REQ_SIZE(x) ((x) << 12) 52 28 53 29 /* PCIe specific config registers */ ··· 53 77 PCIE_GEN4 54 78 }; 55 79 56 - struct meson_pcie_mem_res { 57 - void __iomem *elbi_base; 58 - void __iomem *cfg_base; 59 - }; 60 - 61 80 struct meson_pcie_clk_res { 62 81 struct clk *clk; 63 82 struct clk *port_clk; ··· 66 95 67 96 struct meson_pcie { 68 97 struct dw_pcie pci; 69 - struct meson_pcie_mem_res mem_res; 98 + void __iomem *cfg_base; 70 99 struct meson_pcie_clk_res clk_res; 71 100 struct meson_pcie_rc_reset mrst; 72 101 struct gpio_desc *reset_gpio; ··· 105 134 return 0; 106 135 } 107 136 108 - static void __iomem *meson_pcie_get_mem(struct platform_device *pdev, 109 - struct meson_pcie *mp, 110 - const 
char *id) 111 - { 112 - struct device *dev = mp->pci.dev; 113 - struct resource *res; 114 - 115 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, id); 116 - 117 - return devm_ioremap_resource(dev, res); 118 - } 119 - 120 137 static int meson_pcie_get_mems(struct platform_device *pdev, 121 138 struct meson_pcie *mp) 122 139 { 123 - mp->mem_res.elbi_base = meson_pcie_get_mem(pdev, mp, "elbi"); 124 - if (IS_ERR(mp->mem_res.elbi_base)) 125 - return PTR_ERR(mp->mem_res.elbi_base); 140 + struct dw_pcie *pci = &mp->pci; 126 141 127 - mp->mem_res.cfg_base = meson_pcie_get_mem(pdev, mp, "cfg"); 128 - if (IS_ERR(mp->mem_res.cfg_base)) 129 - return PTR_ERR(mp->mem_res.cfg_base); 142 + pci->dbi_base = devm_platform_ioremap_resource_byname(pdev, "elbi"); 143 + if (IS_ERR(pci->dbi_base)) 144 + return PTR_ERR(pci->dbi_base); 145 + 146 + mp->cfg_base = devm_platform_ioremap_resource_byname(pdev, "cfg"); 147 + if (IS_ERR(mp->cfg_base)) 148 + return PTR_ERR(mp->cfg_base); 130 149 131 150 return 0; 132 151 } ··· 214 253 return 0; 215 254 } 216 255 217 - static inline void meson_elb_writel(struct meson_pcie *mp, u32 val, u32 reg) 218 - { 219 - writel(val, mp->mem_res.elbi_base + reg); 220 - } 221 - 222 - static inline u32 meson_elb_readl(struct meson_pcie *mp, u32 reg) 223 - { 224 - return readl(mp->mem_res.elbi_base + reg); 225 - } 226 - 227 256 static inline u32 meson_cfg_readl(struct meson_pcie *mp, u32 reg) 228 257 { 229 - return readl(mp->mem_res.cfg_base + reg); 258 + return readl(mp->cfg_base + reg); 230 259 } 231 260 232 261 static inline void meson_cfg_writel(struct meson_pcie *mp, u32 val, u32 reg) 233 262 { 234 - writel(val, mp->mem_res.cfg_base + reg); 263 + writel(val, mp->cfg_base + reg); 235 264 } 236 265 237 266 static void meson_pcie_assert_reset(struct meson_pcie *mp) ··· 238 287 val = meson_cfg_readl(mp, PCIE_CFG0); 239 288 val |= APP_LTSSM_ENABLE; 240 289 meson_cfg_writel(mp, val, PCIE_CFG0); 241 - 242 - val = meson_elb_readl(mp, PCIE_PORT_LINK_CTRL_OFF); 
243 - val &= ~(LINK_CAPABLE_MASK | FAST_LINK_MODE); 244 - meson_elb_writel(mp, val, PCIE_PORT_LINK_CTRL_OFF); 245 - 246 - val = meson_elb_readl(mp, PCIE_PORT_LINK_CTRL_OFF); 247 - val |= LINK_CAPABLE_X1; 248 - meson_elb_writel(mp, val, PCIE_PORT_LINK_CTRL_OFF); 249 - 250 - val = meson_elb_readl(mp, PCIE_GEN2_CTRL_OFF); 251 - val &= ~NUM_OF_LANES_MASK; 252 - meson_elb_writel(mp, val, PCIE_GEN2_CTRL_OFF); 253 - 254 - val = meson_elb_readl(mp, PCIE_GEN2_CTRL_OFF); 255 - val |= NUM_OF_LANES_X1 | DIRECT_SPEED_CHANGE; 256 - meson_elb_writel(mp, val, PCIE_GEN2_CTRL_OFF); 257 - 258 - meson_elb_writel(mp, 0x0, PCIE_BASE_ADDR0); 259 - meson_elb_writel(mp, 0x0, PCIE_BASE_ADDR1); 260 290 } 261 291 262 292 static int meson_size_to_payload(struct meson_pcie *mp, int size) ··· 259 327 260 328 static void meson_set_max_payload(struct meson_pcie *mp, int size) 261 329 { 330 + struct dw_pcie *pci = &mp->pci; 262 331 u32 val; 332 + u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 263 333 int max_payload_size = meson_size_to_payload(mp, size); 264 334 265 - val = meson_elb_readl(mp, PCIE_DEV_CTRL_DEV_STUS); 266 - val &= ~PCIE_CAP_MAX_PAYLOAD_MASK; 267 - meson_elb_writel(mp, val, PCIE_DEV_CTRL_DEV_STUS); 335 + val = dw_pcie_readl_dbi(pci, offset + PCI_EXP_DEVCTL); 336 + val &= ~PCI_EXP_DEVCTL_PAYLOAD; 337 + dw_pcie_writel_dbi(pci, offset + PCI_EXP_DEVCTL, val); 268 338 269 - val = meson_elb_readl(mp, PCIE_DEV_CTRL_DEV_STUS); 339 + val = dw_pcie_readl_dbi(pci, offset + PCI_EXP_DEVCTL); 270 340 val |= PCIE_CAP_MAX_PAYLOAD_SIZE(max_payload_size); 271 - meson_elb_writel(mp, val, PCIE_DEV_CTRL_DEV_STUS); 341 + dw_pcie_writel_dbi(pci, offset + PCI_EXP_DEVCTL, val); 272 342 } 273 343 274 344 static void meson_set_max_rd_req_size(struct meson_pcie *mp, int size) 275 345 { 346 + struct dw_pcie *pci = &mp->pci; 276 347 u32 val; 348 + u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 277 349 int max_rd_req_size = meson_size_to_payload(mp, size); 278 350 279 - val = 
meson_elb_readl(mp, PCIE_DEV_CTRL_DEV_STUS); 280 - val &= ~PCIE_CAP_MAX_READ_REQ_MASK; 281 - meson_elb_writel(mp, val, PCIE_DEV_CTRL_DEV_STUS); 351 + val = dw_pcie_readl_dbi(pci, offset + PCI_EXP_DEVCTL); 352 + val &= ~PCI_EXP_DEVCTL_READRQ; 353 + dw_pcie_writel_dbi(pci, offset + PCI_EXP_DEVCTL, val); 282 354 283 - val = meson_elb_readl(mp, PCIE_DEV_CTRL_DEV_STUS); 355 + val = dw_pcie_readl_dbi(pci, offset + PCI_EXP_DEVCTL); 284 356 val |= PCIE_CAP_MAX_READ_REQ_SIZE(max_rd_req_size); 285 - meson_elb_writel(mp, val, PCIE_DEV_CTRL_DEV_STUS); 286 - } 287 - 288 - static inline void meson_enable_memory_space(struct meson_pcie *mp) 289 - { 290 - /* Set the RC Bus Master, Memory Space and I/O Space enables */ 291 - meson_elb_writel(mp, PCI_IO_EN | PCI_MEM_SPACE_EN | PCI_BUS_MASTER_EN, 292 - PCIE_STATUS_COMMAND); 357 + dw_pcie_writel_dbi(pci, offset + PCI_EXP_DEVCTL, val); 293 358 } 294 359 295 360 static int meson_pcie_establish_link(struct meson_pcie *mp) ··· 299 370 meson_set_max_rd_req_size(mp, MAX_READ_REQ_SIZE); 300 371 301 372 dw_pcie_setup_rc(pp); 302 - meson_enable_memory_space(mp); 303 373 304 374 meson_pcie_assert_reset(mp); 305 375 306 376 return dw_pcie_wait_for_link(pci); 307 377 } 308 378 309 - static void meson_pcie_enable_interrupts(struct meson_pcie *mp) 379 + static int meson_pcie_rd_own_conf(struct pci_bus *bus, u32 devfn, 380 + int where, int size, u32 *val) 310 381 { 311 - if (IS_ENABLED(CONFIG_PCI_MSI)) 312 - dw_pcie_msi_init(&mp->pci.pp); 313 - } 314 - 315 - static int meson_pcie_rd_own_conf(struct pcie_port *pp, int where, int size, 316 - u32 *val) 317 - { 318 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 319 382 int ret; 320 383 321 - ret = dw_pcie_read(pci->dbi_base + where, size, val); 384 + ret = pci_generic_config_read(bus, devfn, where, size, val); 322 385 if (ret != PCIBIOS_SUCCESSFUL) 323 386 return ret; 324 387 ··· 331 410 return PCIBIOS_SUCCESSFUL; 332 411 } 333 412 334 - static int meson_pcie_wr_own_conf(struct pcie_port *pp, int 
where, 335 - int size, u32 val) 336 - { 337 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 338 - 339 - return dw_pcie_write(pci->dbi_base + where, size, val); 340 - } 413 + static struct pci_ops meson_pci_ops = { 414 + .map_bus = dw_pcie_own_conf_map_bus, 415 + .read = meson_pcie_rd_own_conf, 416 + .write = pci_generic_config_write, 417 + }; 341 418 342 419 static int meson_pcie_link_up(struct dw_pcie *pci) 343 420 { ··· 382 463 struct meson_pcie *mp = to_meson_pcie(pci); 383 464 int ret; 384 465 466 + pp->bridge->ops = &meson_pci_ops; 467 + 385 468 ret = meson_pcie_establish_link(mp); 386 469 if (ret) 387 470 return ret; 388 471 389 - meson_pcie_enable_interrupts(mp); 472 + dw_pcie_msi_init(pp); 390 473 391 474 return 0; 392 475 } 393 476 394 477 static const struct dw_pcie_host_ops meson_pcie_host_ops = { 395 - .rd_own_conf = meson_pcie_rd_own_conf, 396 - .wr_own_conf = meson_pcie_wr_own_conf, 397 478 .host_init = meson_pcie_host_init, 398 479 }; 399 480 ··· 412 493 } 413 494 414 495 pp->ops = &meson_pcie_host_ops; 415 - pci->dbi_base = mp->mem_res.elbi_base; 416 496 417 497 ret = dw_pcie_host_init(pp); 418 498 if (ret) { ··· 440 522 pci = &mp->pci; 441 523 pci->dev = dev; 442 524 pci->ops = &dw_pcie_ops; 525 + pci->num_lanes = 1; 443 526 444 527 mp->phy = devm_phy_get(dev, "pcie"); 445 528 if (IS_ERR(mp->phy)) { ··· 508 589 }, 509 590 {}, 510 591 }; 592 + MODULE_DEVICE_TABLE(of, meson_pcie_of_match); 511 593 512 594 static struct platform_driver meson_pcie_driver = { 513 595 .probe = meson_pcie_probe, ··· 518 598 }, 519 599 }; 520 600 521 - builtin_platform_driver(meson_pcie_driver); 601 + module_platform_driver(meson_pcie_driver); 602 + 603 + MODULE_AUTHOR("Yue Wang <yue.wang@amlogic.com>"); 604 + MODULE_DESCRIPTION("Amlogic PCIe Controller driver"); 605 + MODULE_LICENSE("GPL v2");
+17 -53
drivers/pci/controller/dwc/pcie-al.c
···
 			reg);
 }
 
-static void __iomem *al_pcie_conf_addr_map(struct al_pcie *pcie,
-					   unsigned int busnr,
-					   unsigned int devfn)
+static void __iomem *al_pcie_conf_addr_map_bus(struct pci_bus *bus,
+					       unsigned int devfn, int where)
 {
+	struct pcie_port *pp = bus->sysdata;
+	struct al_pcie *pcie = to_al_pcie(to_dw_pcie_from_pp(pp));
+	unsigned int busnr = bus->number;
 	struct al_pcie_target_bus_cfg *target_bus_cfg = &pcie->target_bus_cfg;
 	unsigned int busnr_ecam = busnr & target_bus_cfg->ecam_mask;
 	unsigned int busnr_reg = busnr & target_bus_cfg->reg_mask;
-	struct pcie_port *pp = &pcie->pci->pp;
 	void __iomem *pci_base_addr;
 
 	pci_base_addr = (void __iomem *)((uintptr_t)pp->va_cfg0_base +
···
 				       target_bus_cfg->reg_mask);
 	}
 
-	return pci_base_addr;
+	return pci_base_addr + where;
 }
 
-static int al_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
-				 unsigned int devfn, int where, int size,
-				 u32 *val)
-{
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct al_pcie *pcie = to_al_pcie(pci);
-	unsigned int busnr = bus->number;
-	void __iomem *pci_addr;
-	int rc;
-
-	pci_addr = al_pcie_conf_addr_map(pcie, busnr, devfn);
-
-	rc = dw_pcie_read(pci_addr + where, size, val);
-
-	dev_dbg(pci->dev, "%d-byte config read from %04x:%02x:%02x.%d offset 0x%x (pci_addr: 0x%px) - val:0x%x\n",
-		size, pci_domain_nr(bus), bus->number,
-		PCI_SLOT(devfn), PCI_FUNC(devfn), where,
-		(pci_addr + where), *val);
-
-	return rc;
-}
-
-static int al_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
-				 unsigned int devfn, int where, int size,
-				 u32 val)
-{
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct al_pcie *pcie = to_al_pcie(pci);
-	unsigned int busnr = bus->number;
-	void __iomem *pci_addr;
-	int rc;
-
-	pci_addr = al_pcie_conf_addr_map(pcie, busnr, devfn);
-
-	rc = dw_pcie_write(pci_addr + where, size, val);
-
-	dev_dbg(pci->dev, "%d-byte config write to %04x:%02x:%02x.%d offset 0x%x (pci_addr: 0x%px) - val:0x%x\n",
-		size, pci_domain_nr(bus), bus->number,
-		PCI_SLOT(devfn), PCI_FUNC(devfn), where,
-		(pci_addr + where), val);
-
-	return rc;
-}
+static struct pci_ops al_child_pci_ops = {
+	.map_bus = al_pcie_conf_addr_map_bus,
+	.read = pci_generic_config_read,
+	.write = pci_generic_config_write,
+};
 
 static void al_pcie_config_prepare(struct al_pcie *pcie)
 {
···
 	u8 secondary_bus;
 	u32 cfg_control;
 	u32 reg;
+	struct resource *bus = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS)->res;
 
 	target_bus_cfg = &pcie->target_bus_cfg;
 
···
 	target_bus_cfg->ecam_mask = ecam_bus_mask;
 	/* This portion is taken from the cfg_target_bus reg */
 	target_bus_cfg->reg_mask = ~target_bus_cfg->ecam_mask;
-	target_bus_cfg->reg_val = pp->busn->start & target_bus_cfg->reg_mask;
+	target_bus_cfg->reg_val = bus->start & target_bus_cfg->reg_mask;
 
 	al_pcie_target_bus_set(pcie, target_bus_cfg->reg_val,
 			       target_bus_cfg->reg_mask);
 
-	secondary_bus = pp->busn->start + 1;
-	subordinate_bus = pp->busn->end;
+	secondary_bus = bus->start + 1;
+	subordinate_bus = bus->end;
 
 	/* Set the valid values of secondary and subordinate buses */
 	cfg_control_offset = AXI_BASE_OFFSET + pcie->reg_offsets.ob_ctrl +
···
 	struct al_pcie *pcie = to_al_pcie(pci);
 	int rc;
 
+	pp->bridge->child_ops = &al_child_pci_ops;
+
 	rc = al_pcie_rev_id_get(pcie, &pcie->controller_rev_id);
 	if (rc)
 		return rc;
···
 }
 
 static const struct dw_pcie_host_ops al_pcie_host_ops = {
-	.rd_other_conf = al_pcie_rd_other_conf,
-	.wr_other_conf = al_pcie_wr_other_conf,
 	.host_init = al_pcie_host_init,
 };
 
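The conversion above drops the driver's own rd_other_conf/wr_other_conf accessors in favor of a struct pci_ops whose .map_bus hook only computes the config-space address, leaving the actual access to pci_generic_config_read()/write(). A minimal user-space model of that split (the `fake_*` names and structs here are illustrative, not kernel API):

```c
#include <stdint.h>
#include <stddef.h>

/* Model of the pci_ops split: map_bus() computes an address (or NULL
 * when the device is unreachable), and a generic accessor does the load. */
struct fake_bus {
	uint32_t *cfg_base;	/* pretend config space */
	int link_up;
};

static uint32_t *fake_map_bus(struct fake_bus *bus, unsigned int devfn,
			      int where)
{
	if (!bus->link_up)
		return NULL;	/* no link: nothing to map */
	return (uint32_t *)((uint8_t *)bus->cfg_base + where);
}

/* Generic read: all-ones and an error when map_bus() returns NULL,
 * mirroring what pci_generic_config_read() does for a missing device. */
static int fake_config_read(struct fake_bus *bus, unsigned int devfn,
			    int where, uint32_t *val)
{
	uint32_t *addr = fake_map_bus(bus, devfn, where);

	if (!addr) {
		*val = 0xffffffff;
		return -1;
	}
	*val = *addr;
	return 0;
}
```

The design win is that every driver stops duplicating the read/write boilerplate and only supplies the address-translation step.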
+5 -43
drivers/pci/controller/dwc/pcie-artpec6.c
···
 
 static const struct of_device_id artpec6_pcie_of_match[];
 
-/* PCIe Port Logic registers (memory-mapped) */
-#define PL_OFFSET			0x700
-
-#define ACK_F_ASPM_CTRL_OFF		(PL_OFFSET + 0xc)
-#define ACK_N_FTS_MASK			GENMASK(15, 8)
-#define ACK_N_FTS(x)			(((x) << 8) & ACK_N_FTS_MASK)
-
 /* ARTPEC-6 specific registers */
 #define PCIECFG				0x18
 #define PCIECFG_DBG_OEN			BIT(24)
···
 	}
 }
 
-static void artpec6_pcie_set_nfts(struct artpec6_pcie *artpec6_pcie)
-{
-	struct dw_pcie *pci = artpec6_pcie->pci;
-	u32 val;
-
-	if (artpec6_pcie->variant != ARTPEC7)
-		return;
-
-	/*
-	 * Increase the N_FTS (Number of Fast Training Sequences)
-	 * to be transmitted when transitioning from L0s to L0.
-	 */
-	val = dw_pcie_readl_dbi(pci, ACK_F_ASPM_CTRL_OFF);
-	val &= ~ACK_N_FTS_MASK;
-	val |= ACK_N_FTS(180);
-	dw_pcie_writel_dbi(pci, ACK_F_ASPM_CTRL_OFF, val);
-
-	/*
-	 * Set the Number of Fast Training Sequences that the core
-	 * advertises as its N_FTS during Gen2 or Gen3 link training.
-	 */
-	dw_pcie_link_set_n_fts(pci, 180);
-}
-
 static void artpec6_pcie_assert_core_reset(struct artpec6_pcie *artpec6_pcie)
 {
 	u32 val;
···
 	usleep_range(100, 200);
 }
 
-static void artpec6_pcie_enable_interrupts(struct artpec6_pcie *artpec6_pcie)
-{
-	struct dw_pcie *pci = artpec6_pcie->pci;
-	struct pcie_port *pp = &pci->pp;
-
-	if (IS_ENABLED(CONFIG_PCI_MSI))
-		dw_pcie_msi_init(pp);
-}
-
 static int artpec6_pcie_host_init(struct pcie_port *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pci);
 
+	if (artpec6_pcie->variant == ARTPEC7) {
+		pci->n_fts[0] = 180;
+		pci->n_fts[1] = 180;
+	}
 	artpec6_pcie_assert_core_reset(artpec6_pcie);
 	artpec6_pcie_init_phy(artpec6_pcie);
 	artpec6_pcie_deassert_core_reset(artpec6_pcie);
 	artpec6_pcie_wait_for_phy(artpec6_pcie);
-	artpec6_pcie_set_nfts(artpec6_pcie);
 	dw_pcie_setup_rc(pp);
 	artpec6_pcie_establish_link(pci);
 	dw_pcie_wait_for_link(pci);
-	artpec6_pcie_enable_interrupts(artpec6_pcie);
+	dw_pcie_msi_init(pp);
 
 	return 0;
 }
···
 	artpec6_pcie_init_phy(artpec6_pcie);
 	artpec6_pcie_deassert_core_reset(artpec6_pcie);
 	artpec6_pcie_wait_for_phy(artpec6_pcie);
-	artpec6_pcie_set_nfts(artpec6_pcie);
 
 	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
 		dw_pcie_ep_reset_bar(pci, bar);
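The removed artpec6_pcie_set_nfts() is a standard masked read-modify-write of the ACK_N_FTS field in bits [15:8]; the driver now just fills pci->n_fts[] and lets the core program it. The same bit manipulation in isolation (plain C, with the removed GENMASK-based macros expanded by hand):

```c
#include <stdint.h>

/* ACK_N_FTS occupies bits [15:8] of ACK_F_ASPM_CTRL_OFF, per the
 * #defines removed above: GENMASK(15, 8) == 0x0000ff00. */
#define ACK_N_FTS_MASK	0x0000ff00u
#define ACK_N_FTS(x)	(((uint32_t)(x) << 8) & ACK_N_FTS_MASK)

/* Read-modify-write of the N_FTS field, leaving all other bits intact. */
static uint32_t set_n_fts(uint32_t reg, unsigned int n_fts)
{
	reg &= ~ACK_N_FTS_MASK;
	reg |= ACK_N_FTS(n_fts);
	return reg;
}
```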
+209 -52
drivers/pci/controller/dwc/pcie-designware-ep.c
···
 #include <linux/pci-epc.h>
 #include <linux/pci-epf.h>
 
+#include "../../pci.h"
+
 void dw_pcie_ep_linkup(struct dw_pcie_ep *ep)
 {
 	struct pci_epc *epc = ep->epc;
···
 }
 EXPORT_SYMBOL_GPL(dw_pcie_ep_init_notify);
 
-static void __dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar,
-				   int flags)
+struct dw_pcie_ep_func *
+dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no)
+{
+	struct dw_pcie_ep_func *ep_func;
+
+	list_for_each_entry(ep_func, &ep->func_list, list) {
+		if (ep_func->func_no == func_no)
+			return ep_func;
+	}
+
+	return NULL;
+}
+
+static unsigned int dw_pcie_ep_func_select(struct dw_pcie_ep *ep, u8 func_no)
+{
+	unsigned int func_offset = 0;
+
+	if (ep->ops->func_conf_select)
+		func_offset = ep->ops->func_conf_select(ep, func_no);
+
+	return func_offset;
+}
+
+static void __dw_pcie_ep_reset_bar(struct dw_pcie *pci, u8 func_no,
+				   enum pci_barno bar, int flags)
 {
 	u32 reg;
+	unsigned int func_offset = 0;
+	struct dw_pcie_ep *ep = &pci->ep;
 
-	reg = PCI_BASE_ADDRESS_0 + (4 * bar);
+	func_offset = dw_pcie_ep_func_select(ep, func_no);
+
+	reg = func_offset + PCI_BASE_ADDRESS_0 + (4 * bar);
 	dw_pcie_dbi_ro_wr_en(pci);
 	dw_pcie_writel_dbi2(pci, reg, 0x0);
 	dw_pcie_writel_dbi(pci, reg, 0x0);
···
 
 void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
 {
-	__dw_pcie_ep_reset_bar(pci, bar, 0);
+	u8 func_no, funcs;
+
+	funcs = pci->ep.epc->max_functions;
+
+	for (func_no = 0; func_no < funcs; func_no++)
+		__dw_pcie_ep_reset_bar(pci, func_no, bar, 0);
+}
+
+static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie_ep *ep, u8 func_no,
+				     u8 cap_ptr, u8 cap)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	unsigned int func_offset = 0;
+	u8 cap_id, next_cap_ptr;
+	u16 reg;
+
+	if (!cap_ptr)
+		return 0;
+
+	func_offset = dw_pcie_ep_func_select(ep, func_no);
+
+	reg = dw_pcie_readw_dbi(pci, func_offset + cap_ptr);
+	cap_id = (reg & 0x00ff);
+
+	if (cap_id > PCI_CAP_ID_MAX)
+		return 0;
+
+	if (cap_id == cap)
+		return cap_ptr;
+
+	next_cap_ptr = (reg & 0xff00) >> 8;
+	return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap);
+}
+
+static u8 dw_pcie_ep_find_capability(struct dw_pcie_ep *ep, u8 func_no, u8 cap)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	unsigned int func_offset = 0;
+	u8 next_cap_ptr;
+	u16 reg;
+
+	func_offset = dw_pcie_ep_func_select(ep, func_no);
+
+	reg = dw_pcie_readw_dbi(pci, func_offset + PCI_CAPABILITY_LIST);
+	next_cap_ptr = (reg & 0x00ff);
+
+	return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap);
 }
 
 static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no,
···
 {
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	unsigned int func_offset = 0;
+
+	func_offset = dw_pcie_ep_func_select(ep, func_no);
 
 	dw_pcie_dbi_ro_wr_en(pci);
-	dw_pcie_writew_dbi(pci, PCI_VENDOR_ID, hdr->vendorid);
-	dw_pcie_writew_dbi(pci, PCI_DEVICE_ID, hdr->deviceid);
-	dw_pcie_writeb_dbi(pci, PCI_REVISION_ID, hdr->revid);
-	dw_pcie_writeb_dbi(pci, PCI_CLASS_PROG, hdr->progif_code);
-	dw_pcie_writew_dbi(pci, PCI_CLASS_DEVICE,
+	dw_pcie_writew_dbi(pci, func_offset + PCI_VENDOR_ID, hdr->vendorid);
+	dw_pcie_writew_dbi(pci, func_offset + PCI_DEVICE_ID, hdr->deviceid);
+	dw_pcie_writeb_dbi(pci, func_offset + PCI_REVISION_ID, hdr->revid);
+	dw_pcie_writeb_dbi(pci, func_offset + PCI_CLASS_PROG, hdr->progif_code);
+	dw_pcie_writew_dbi(pci, func_offset + PCI_CLASS_DEVICE,
 			   hdr->subclass_code | hdr->baseclass_code << 8);
-	dw_pcie_writeb_dbi(pci, PCI_CACHE_LINE_SIZE,
+	dw_pcie_writeb_dbi(pci, func_offset + PCI_CACHE_LINE_SIZE,
 			   hdr->cache_line_size);
-	dw_pcie_writew_dbi(pci, PCI_SUBSYSTEM_VENDOR_ID,
+	dw_pcie_writew_dbi(pci, func_offset + PCI_SUBSYSTEM_VENDOR_ID,
 			   hdr->subsys_vendor_id);
-	dw_pcie_writew_dbi(pci, PCI_SUBSYSTEM_ID, hdr->subsys_id);
-	dw_pcie_writeb_dbi(pci, PCI_INTERRUPT_PIN,
+	dw_pcie_writew_dbi(pci, func_offset + PCI_SUBSYSTEM_ID, hdr->subsys_id);
+	dw_pcie_writeb_dbi(pci, func_offset + PCI_INTERRUPT_PIN,
 			   hdr->interrupt_pin);
 	dw_pcie_dbi_ro_wr_dis(pci);
 
 	return 0;
 }
 
-static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, enum pci_barno bar,
-				  dma_addr_t cpu_addr,
+static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no,
+				  enum pci_barno bar, dma_addr_t cpu_addr,
 				  enum dw_pcie_as_type as_type)
 {
 	int ret;
···
 		return -EINVAL;
 	}
 
-	ret = dw_pcie_prog_inbound_atu(pci, free_win, bar, cpu_addr,
+	ret = dw_pcie_prog_inbound_atu(pci, func_no, free_win, bar, cpu_addr,
 				       as_type);
 	if (ret < 0) {
 		dev_err(pci->dev, "Failed to program IB window\n");
···
 	return 0;
 }
 
-static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, phys_addr_t phys_addr,
+static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, u8 func_no,
+				   phys_addr_t phys_addr,
 				   u64 pci_addr, size_t size)
 {
 	u32 free_win;
···
 		return -EINVAL;
 	}
 
-	dw_pcie_prog_outbound_atu(pci, free_win, PCIE_ATU_TYPE_MEM,
-				  phys_addr, pci_addr, size);
+	dw_pcie_prog_ep_outbound_atu(pci, func_no, free_win, PCIE_ATU_TYPE_MEM,
+				     phys_addr, pci_addr, size);
 
 	set_bit(free_win, ep->ob_window_map);
 	ep->outbound_addr[free_win] = phys_addr;
···
 	enum pci_barno bar = epf_bar->barno;
 	u32 atu_index = ep->bar_to_atu[bar];
 
-	__dw_pcie_ep_reset_bar(pci, bar, epf_bar->flags);
+	__dw_pcie_ep_reset_bar(pci, func_no, bar, epf_bar->flags);
 
 	dw_pcie_disable_atu(pci, atu_index, DW_PCIE_REGION_INBOUND);
 	clear_bit(atu_index, ep->ib_window_map);
···
 	size_t size = epf_bar->size;
 	int flags = epf_bar->flags;
 	enum dw_pcie_as_type as_type;
-	u32 reg = PCI_BASE_ADDRESS_0 + (4 * bar);
+	u32 reg;
+	unsigned int func_offset = 0;
+
+	func_offset = dw_pcie_ep_func_select(ep, func_no);
+
+	reg = PCI_BASE_ADDRESS_0 + (4 * bar) + func_offset;
 
 	if (!(flags & PCI_BASE_ADDRESS_SPACE))
 		as_type = DW_PCIE_AS_MEM;
 	else
 		as_type = DW_PCIE_AS_IO;
 
-	ret = dw_pcie_ep_inbound_atu(ep, bar, epf_bar->phys_addr, as_type);
+	ret = dw_pcie_ep_inbound_atu(ep, func_no, bar,
+				     epf_bar->phys_addr, as_type);
 	if (ret)
 		return ret;
···
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 
-	ret = dw_pcie_ep_outbound_atu(ep, addr, pci_addr, size);
+	ret = dw_pcie_ep_outbound_atu(ep, func_no, addr, pci_addr, size);
 	if (ret) {
 		dev_err(pci->dev, "Failed to enable address\n");
 		return ret;
···
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	u32 val, reg;
+	unsigned int func_offset = 0;
+	struct dw_pcie_ep_func *ep_func;
 
-	if (!ep->msi_cap)
+	ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
+	if (!ep_func || !ep_func->msi_cap)
 		return -EINVAL;
 
-	reg = ep->msi_cap + PCI_MSI_FLAGS;
+	func_offset = dw_pcie_ep_func_select(ep, func_no);
+
+	reg = ep_func->msi_cap + func_offset + PCI_MSI_FLAGS;
 	val = dw_pcie_readw_dbi(pci, reg);
 	if (!(val & PCI_MSI_FLAGS_ENABLE))
 		return -EINVAL;
···
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	u32 val, reg;
+	unsigned int func_offset = 0;
+	struct dw_pcie_ep_func *ep_func;
 
-	if (!ep->msi_cap)
+	ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
+	if (!ep_func || !ep_func->msi_cap)
 		return -EINVAL;
 
-	reg = ep->msi_cap + PCI_MSI_FLAGS;
+	func_offset = dw_pcie_ep_func_select(ep, func_no);
+
+	reg = ep_func->msi_cap + func_offset + PCI_MSI_FLAGS;
 	val = dw_pcie_readw_dbi(pci, reg);
 	val &= ~PCI_MSI_FLAGS_QMASK;
 	val |= (interrupts << 1) & PCI_MSI_FLAGS_QMASK;
···
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	u32 val, reg;
+	unsigned int func_offset = 0;
+	struct dw_pcie_ep_func *ep_func;
 
-	if (!ep->msix_cap)
+	ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
+	if (!ep_func || !ep_func->msix_cap)
 		return -EINVAL;
 
-	reg = ep->msix_cap + PCI_MSIX_FLAGS;
+	func_offset = dw_pcie_ep_func_select(ep, func_no);
+
+	reg = ep_func->msix_cap + func_offset + PCI_MSIX_FLAGS;
 	val = dw_pcie_readw_dbi(pci, reg);
 	if (!(val & PCI_MSIX_FLAGS_ENABLE))
 		return -EINVAL;
···
 	struct dw_pcie_ep *ep = epc_get_drvdata(epc);
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	u32 val, reg;
+	unsigned int func_offset = 0;
+	struct dw_pcie_ep_func *ep_func;
 
-	if (!ep->msix_cap)
+	ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
+	if (!ep_func || !ep_func->msix_cap)
 		return -EINVAL;
 
 	dw_pcie_dbi_ro_wr_en(pci);
 
-	reg = ep->msix_cap + PCI_MSIX_FLAGS;
+	func_offset = dw_pcie_ep_func_select(ep, func_no);
+
+	reg = ep_func->msix_cap + func_offset + PCI_MSIX_FLAGS;
 	val = dw_pcie_readw_dbi(pci, reg);
 	val &= ~PCI_MSIX_FLAGS_QSIZE;
 	val |= interrupts;
 	dw_pcie_writew_dbi(pci, reg, val);
 
-	reg = ep->msix_cap + PCI_MSIX_TABLE;
+	reg = ep_func->msix_cap + func_offset + PCI_MSIX_TABLE;
 	val = offset | bir;
 	dw_pcie_writel_dbi(pci, reg, val);
 
-	reg = ep->msix_cap + PCI_MSIX_PBA;
+	reg = ep_func->msix_cap + func_offset + PCI_MSIX_PBA;
 	val = (offset + (interrupts * PCI_MSIX_ENTRY_SIZE)) | bir;
 	dw_pcie_writel_dbi(pci, reg, val);
 
···
 			     u8 interrupt_num)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	struct dw_pcie_ep_func *ep_func;
 	struct pci_epc *epc = ep->epc;
 	unsigned int aligned_offset;
+	unsigned int func_offset = 0;
 	u16 msg_ctrl, msg_data;
 	u32 msg_addr_lower, msg_addr_upper, reg;
 	u64 msg_addr;
 	bool has_upper;
 	int ret;
 
-	if (!ep->msi_cap)
+	ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
+	if (!ep_func || !ep_func->msi_cap)
 		return -EINVAL;
 
+	func_offset = dw_pcie_ep_func_select(ep, func_no);
+
 	/* Raise MSI per the PCI Local Bus Specification Revision 3.0, 6.8.1. */
-	reg = ep->msi_cap + PCI_MSI_FLAGS;
+	reg = ep_func->msi_cap + func_offset + PCI_MSI_FLAGS;
 	msg_ctrl = dw_pcie_readw_dbi(pci, reg);
 	has_upper = !!(msg_ctrl & PCI_MSI_FLAGS_64BIT);
-	reg = ep->msi_cap + PCI_MSI_ADDRESS_LO;
+	reg = ep_func->msi_cap + func_offset + PCI_MSI_ADDRESS_LO;
 	msg_addr_lower = dw_pcie_readl_dbi(pci, reg);
 	if (has_upper) {
-		reg = ep->msi_cap + PCI_MSI_ADDRESS_HI;
+		reg = ep_func->msi_cap + func_offset + PCI_MSI_ADDRESS_HI;
 		msg_addr_upper = dw_pcie_readl_dbi(pci, reg);
-		reg = ep->msi_cap + PCI_MSI_DATA_64;
+		reg = ep_func->msi_cap + func_offset + PCI_MSI_DATA_64;
 		msg_data = dw_pcie_readw_dbi(pci, reg);
 	} else {
 		msg_addr_upper = 0;
-		reg = ep->msi_cap + PCI_MSI_DATA_32;
+		reg = ep_func->msi_cap + func_offset + PCI_MSI_DATA_32;
 		msg_data = dw_pcie_readw_dbi(pci, reg);
 	}
 	aligned_offset = msg_addr_lower & (epc->mem->window.page_size - 1);
···
 	return 0;
 }
 
-int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
-			      u16 interrupt_num)
+int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no,
+				       u16 interrupt_num)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	struct dw_pcie_ep_func *ep_func;
+	u32 msg_data;
+
+	ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
+	if (!ep_func || !ep_func->msix_cap)
+		return -EINVAL;
+
+	msg_data = (func_no << PCIE_MSIX_DOORBELL_PF_SHIFT) |
+		   (interrupt_num - 1);
+
+	dw_pcie_writel_dbi(pci, PCIE_MSIX_DOORBELL, msg_data);
+
+	return 0;
+}
+
+int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
+			      u16 interrupt_num)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	struct dw_pcie_ep_func *ep_func;
 	struct pci_epf_msix_tbl *msix_tbl;
 	struct pci_epc *epc = ep->epc;
+	unsigned int func_offset = 0;
 	u32 reg, msg_data, vec_ctrl;
 	unsigned int aligned_offset;
 	u32 tbl_offset;
···
 	int ret;
 	u8 bir;
 
-	reg = ep->msix_cap + PCI_MSIX_TABLE;
+	ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
+	if (!ep_func || !ep_func->msix_cap)
+		return -EINVAL;
+
+	func_offset = dw_pcie_ep_func_select(ep, func_no);
+
+	reg = ep_func->msix_cap + func_offset + PCI_MSIX_TABLE;
 	tbl_offset = dw_pcie_readl_dbi(pci, reg);
 	bir = (tbl_offset & PCI_MSIX_TABLE_BIR);
 	tbl_offset &= PCI_MSIX_TABLE_OFFSET;
···
 	u32 reg;
 	int i;
 
-	hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE);
+	hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE) &
+		   PCI_HEADER_TYPE_MASK;
 	if (hdr_type != PCI_HEADER_TYPE_NORMAL) {
 		dev_err(pci->dev,
 			"PCIe controller is not set to EP mode (hdr_type:0x%x)!\n",
···
 		return -EIO;
 	}
 
-	ep->msi_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);
-
-	ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX);
-
 	offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
+
+	dw_pcie_dbi_ro_wr_en(pci);
+
 	if (offset) {
 		reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
 		nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >>
 			PCI_REBAR_CTRL_NBAR_SHIFT;
 
-		dw_pcie_dbi_ro_wr_en(pci);
 		for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
 			dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
-		dw_pcie_dbi_ro_wr_dis(pci);
 	}
 
 	dw_pcie_setup(pci);
+	dw_pcie_dbi_ro_wr_dis(pci);
 
 	return 0;
 }
···
 {
 	int ret;
 	void *addr;
+	u8 func_no;
 	struct pci_epc *epc;
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
 	struct device *dev = pci->dev;
 	struct device_node *np = dev->of_node;
 	const struct pci_epc_features *epc_features;
+	struct dw_pcie_ep_func *ep_func;
+
+	INIT_LIST_HEAD(&ep->func_list);
 
 	if (!pci->dbi_base || !pci->dbi_base2) {
 		dev_err(dev, "dbi_base/dbi_base2 is not populated\n");
···
 		return -ENOMEM;
 	ep->outbound_addr = addr;
 
+	if (pci->link_gen < 1)
+		pci->link_gen = of_pci_get_max_link_speed(np);
+
 	epc = devm_pci_epc_create(dev, &epc_ops);
 	if (IS_ERR(epc)) {
 		dev_err(dev, "Failed to create epc device\n");
···
 	ep->epc = epc;
 	epc_set_drvdata(epc, ep);
 
-	if (ep->ops->ep_init)
-		ep->ops->ep_init(ep);
-
 	ret = of_property_read_u8(np, "max-functions", &epc->max_functions);
 	if (ret < 0)
 		epc->max_functions = 1;
+
+	for (func_no = 0; func_no < epc->max_functions; func_no++) {
+		ep_func = devm_kzalloc(dev, sizeof(*ep_func), GFP_KERNEL);
+		if (!ep_func)
+			return -ENOMEM;
+
+		ep_func->func_no = func_no;
+		ep_func->msi_cap = dw_pcie_ep_find_capability(ep, func_no,
+							      PCI_CAP_ID_MSI);
+		ep_func->msix_cap = dw_pcie_ep_find_capability(ep, func_no,
+							       PCI_CAP_ID_MSIX);
+
+		list_add_tail(&ep_func->list, &ep->func_list);
+	}
+
+	if (ep->ops->ep_init)
+		ep->ops->ep_init(ep);
 
 	ret = pci_epc_mem_init(epc, ep->phys_base, ep->addr_size,
 			       ep->page_size);
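The new dw_pcie_ep_find_capability() walks the standard PCI capability linked list: a head pointer at PCI_CAPABILITY_LIST (offset 0x34), then entries holding an 8-bit capability ID and an 8-bit next pointer. A self-contained model of that walk over a fake config space (the `fake_*` names are illustrative; the ID upper bound mirrors PCI_CAP_ID_MAX in current kernel headers, assumed here to be 0x14):

```c
#include <stdint.h>
#include <string.h>

#define FAKE_CAP_PTR	0x34	/* PCI_CAPABILITY_LIST */
#define FAKE_CAP_ID_MAX	0x14	/* assumed PCI_CAP_ID_MAX */

/* Follow the ID/next-pointer chain, as the recursive helper above does
 * (iteratively here), bailing out on an implausible capability ID. */
static uint8_t fake_find_capability(const uint8_t *cfg, uint8_t cap)
{
	uint8_t ptr = cfg[FAKE_CAP_PTR];

	while (ptr) {
		uint8_t id = cfg[ptr];

		if (id > FAKE_CAP_ID_MAX)	/* corrupt list: give up */
			return 0;
		if (id == cap)
			return ptr;
		ptr = cfg[ptr + 1];		/* next capability pointer */
	}
	return 0;
}
```

The per-function version in the driver merely adds func_offset so the same walk works on each physical function's shadow of config space.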
+140 -226
drivers/pci/controller/dwc/pcie-designware-host.c
···
 #include "pcie-designware.h"
 
 static struct pci_ops dw_pcie_ops;
-
-static int dw_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
-			       u32 *val)
-{
-	struct dw_pcie *pci;
-
-	if (pp->ops->rd_own_conf)
-		return pp->ops->rd_own_conf(pp, where, size, val);
-
-	pci = to_dw_pcie_from_pp(pp);
-	return dw_pcie_read(pci->dbi_base + where, size, val);
-}
-
-static int dw_pcie_wr_own_conf(struct pcie_port *pp, int where, int size,
-			       u32 val)
-{
-	struct dw_pcie *pci;
-
-	if (pp->ops->wr_own_conf)
-		return pp->ops->wr_own_conf(pp, where, size, val);
-
-	pci = to_dw_pcie_from_pp(pp);
-	return dw_pcie_write(pci->dbi_base + where, size, val);
-}
+static struct pci_ops dw_child_pcie_ops;
 
 static void dw_msi_ack_irq(struct irq_data *d)
 {
···
 	unsigned long val;
 	u32 status, num_ctrls;
 	irqreturn_t ret = IRQ_NONE;
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 
 	num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
 
 	for (i = 0; i < num_ctrls; i++) {
-		dw_pcie_rd_own_conf(pp, PCIE_MSI_INTR0_STATUS +
-					(i * MSI_REG_CTRL_BLOCK_SIZE),
-				    4, &status);
+		status = dw_pcie_readl_dbi(pci, PCIE_MSI_INTR0_STATUS +
+					   (i * MSI_REG_CTRL_BLOCK_SIZE));
 		if (!status)
 			continue;
···
 static void dw_pci_bottom_mask(struct irq_data *d)
 {
 	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	unsigned int res, bit, ctrl;
 	unsigned long flags;
 
···
 	bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
 
 	pp->irq_mask[ctrl] |= BIT(bit);
-	dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4,
-			    pp->irq_mask[ctrl]);
+	dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK + res, pp->irq_mask[ctrl]);
 
 	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }
···
 static void dw_pci_bottom_unmask(struct irq_data *d)
 {
 	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	unsigned int res, bit, ctrl;
 	unsigned long flags;
 
···
 	bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
 
 	pp->irq_mask[ctrl] &= ~BIT(bit);
-	dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4,
-			    pp->irq_mask[ctrl]);
+	dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK + res, pp->irq_mask[ctrl]);
 
 	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }
···
 static void dw_pci_bottom_ack(struct irq_data *d)
 {
 	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	unsigned int res, bit, ctrl;
 
 	ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
 	res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
 	bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
 
-	dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + res, 4, BIT(bit));
+	dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_STATUS + res, BIT(bit));
 }
 
 static struct irq_chip dw_pci_msi_bottom_irq_chip = {
···
 	irq_domain_remove(pp->msi_domain);
 	irq_domain_remove(pp->irq_domain);
 
-	if (pp->msi_page)
-		__free_page(pp->msi_page);
+	if (pp->msi_data) {
+		struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+		struct device *dev = pci->dev;
+
+		dma_unmap_single_attrs(dev, pp->msi_data, sizeof(pp->msi_msg),
+				       DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
+	}
 }
 
 void dw_pcie_msi_init(struct pcie_port *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct device *dev = pci->dev;
-	u64 msi_target;
+	u64 msi_target = (u64)pp->msi_data;
 
-	pp->msi_page = alloc_page(GFP_KERNEL);
-	pp->msi_data = dma_map_page(dev, pp->msi_page, 0, PAGE_SIZE,
-				    DMA_FROM_DEVICE);
-	if (dma_mapping_error(dev, pp->msi_data)) {
-		dev_err(dev, "Failed to map MSI data\n");
-		__free_page(pp->msi_page);
-		pp->msi_page = NULL;
+	if (!IS_ENABLED(CONFIG_PCI_MSI))
 		return;
-	}
-	msi_target = (u64)pp->msi_data;
 
 	/* Program the msi_data */
-	dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_LO, 4,
-			    lower_32_bits(msi_target));
-	dw_pcie_wr_own_conf(pp, PCIE_MSI_ADDR_HI, 4,
-			    upper_32_bits(msi_target));
+	dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_LO, lower_32_bits(msi_target));
+	dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_HI, upper_32_bits(msi_target));
 }
 EXPORT_SYMBOL_GPL(dw_pcie_msi_init);
···
 	struct device_node *np = dev->of_node;
 	struct platform_device *pdev = to_platform_device(dev);
 	struct resource_entry *win;
-	struct pci_bus *child;
 	struct pci_host_bridge *bridge;
 	struct resource *cfg_res;
-	u32 hdr_type;
 	int ret;
 
 	raw_spin_lock_init(&pci->pp.lock);
 
 	cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config");
 	if (cfg_res) {
-		pp->cfg0_size = resource_size(cfg_res) >> 1;
-		pp->cfg1_size = resource_size(cfg_res) >> 1;
+		pp->cfg0_size = resource_size(cfg_res);
 		pp->cfg0_base = cfg_res->start;
-		pp->cfg1_base = cfg_res->start + pp->cfg0_size;
 	} else if (!pp->va_cfg0_base) {
 		dev_err(dev, "Missing *config* reg space\n");
 	}
···
 	if (!bridge)
 		return -ENOMEM;
 
+	pp->bridge = bridge;
+
 	/* Get the I/O and memory ranges from DT */
 	resource_list_for_each_entry(win, &bridge->windows) {
 		switch (resource_type(win->res)) {
 		case IORESOURCE_IO:
-			pp->io = win->res;
-			pp->io->name = "I/O";
-			pp->io_size = resource_size(pp->io);
-			pp->io_bus_addr = pp->io->start - win->offset;
-			pp->io_base = pci_pio_to_address(pp->io->start);
-			break;
-		case IORESOURCE_MEM:
-			pp->mem = win->res;
-			pp->mem->name = "MEM";
-			pp->mem_size = resource_size(pp->mem);
-			pp->mem_bus_addr = pp->mem->start - win->offset;
+			pp->io_size = resource_size(win->res);
+			pp->io_bus_addr = win->res->start - win->offset;
+			pp->io_base = pci_pio_to_address(win->res->start);
 			break;
 		case 0:
-			pp->cfg = win->res;
-			pp->cfg0_size = resource_size(pp->cfg) >> 1;
-			pp->cfg1_size = resource_size(pp->cfg) >> 1;
-			pp->cfg0_base = pp->cfg->start;
-			pp->cfg1_base = pp->cfg->start + pp->cfg0_size;
-			break;
-		case IORESOURCE_BUS:
-			pp->busn = win->res;
+			dev_err(dev, "Missing *config* reg space\n");
+			pp->cfg0_size = resource_size(win->res);
+			pp->cfg0_base = win->res->start;
+			if (!pci->dbi_base) {
+				pci->dbi_base = devm_pci_remap_cfgspace(dev,
+							pp->cfg0_base,
+							pp->cfg0_size);
+				if (!pci->dbi_base) {
+					dev_err(dev, "Error with ioremap\n");
+					return -ENOMEM;
+				}
+			}
 			break;
 		}
 	}
-
-	if (!pci->dbi_base) {
-		pci->dbi_base = devm_pci_remap_cfgspace(dev,
-						pp->cfg->start,
-						resource_size(pp->cfg));
-		if (!pci->dbi_base) {
-			dev_err(dev, "Error with ioremap\n");
-			return -ENOMEM;
-		}
-	}
-
-	pp->mem_base = pp->mem->start;
 
 	if (!pp->va_cfg0_base) {
 		pp->va_cfg0_base = devm_pci_remap_cfgspace(dev,
···
 		}
 	}
 
-	if (!pp->va_cfg1_base) {
-		pp->va_cfg1_base = devm_pci_remap_cfgspace(dev,
-						pp->cfg1_base,
-						pp->cfg1_size);
-		if (!pp->va_cfg1_base) {
-			dev_err(dev, "Error with ioremap\n");
-			return -ENOMEM;
-		}
-	}
-
 	ret = of_property_read_u32(np, "num-viewport", &pci->num_viewport);
 	if (ret)
 		pci->num_viewport = 2;
+
+	if (pci->link_gen < 1)
+		pci->link_gen = of_pci_get_max_link_speed(np);
 
 	if (pci_msi_enabled()) {
 		/*
···
 			irq_set_chained_handler_and_data(pp->msi_irq,
 							 dw_chained_msi_isr,
 							 pp);
+
+			pp->msi_data = dma_map_single_attrs(pci->dev, &pp->msi_msg,
+						      sizeof(pp->msi_msg),
+						      DMA_FROM_DEVICE,
+						      DMA_ATTR_SKIP_CPU_SYNC);
+			if (dma_mapping_error(pci->dev, pp->msi_data)) {
+				dev_err(pci->dev, "Failed to map MSI data\n");
+				pp->msi_data = 0;
+				goto err_free_msi;
+			}
 		} else {
 			ret = pp->ops->msi_host_init(pp);
 			if (ret < 0)
···
 		}
 	}
 
+	/* Set default bus ops */
+	bridge->ops = &dw_pcie_ops;
+	bridge->child_ops = &dw_child_pcie_ops;
+
 	if (pp->ops->host_init) {
 		ret = pp->ops->host_init(pp);
 		if (ret)
 			goto err_free_msi;
 	}
 
-	ret = dw_pcie_rd_own_conf(pp, PCI_HEADER_TYPE, 1, &hdr_type);
-	if (ret != PCIBIOS_SUCCESSFUL) {
-		dev_err(pci->dev, "Failed reading PCI_HEADER_TYPE cfg space reg (ret: 0x%x)\n",
-			ret);
-		ret = pcibios_err_to_errno(ret);
-		goto err_free_msi;
-	}
-	if (hdr_type != PCI_HEADER_TYPE_BRIDGE) {
-		dev_err(pci->dev,
-			"PCIe controller is not set to bridge type (hdr_type: 0x%x)!\n",
-			hdr_type);
-		ret = -EIO;
-		goto err_free_msi;
-	}
-
 	bridge->sysdata = pp;
-	bridge->ops = &dw_pcie_ops;
 
-	ret = pci_scan_root_bus_bridge(bridge);
-	if (ret)
-		goto err_free_msi;
-
-	pp->root_bus = bridge->bus;
-
-	if (pp->ops->scan_bus)
-		pp->ops->scan_bus(pp);
-
-	pci_bus_size_bridges(pp->root_bus);
-	pci_bus_assign_resources(pp->root_bus);
-
-	list_for_each_entry(child, &pp->root_bus->children, node)
-		pcie_bus_configure_settings(child);
-
-	pci_bus_add_devices(pp->root_bus);
-	return 0;
+	ret = pci_host_probe(bridge);
+	if (!ret)
+		return 0;
 
 err_free_msi:
 	if (pci_msi_enabled() && !pp->ops->msi_host_init)
···
 
 void dw_pcie_host_deinit(struct pcie_port *pp)
 {
-	pci_stop_root_bus(pp->root_bus);
-	pci_remove_root_bus(pp->root_bus);
+	pci_stop_root_bus(pp->bridge->bus);
+	pci_remove_root_bus(pp->bridge->bus);
 	if (pci_msi_enabled() && !pp->ops->msi_host_init)
 		dw_pcie_free_msi(pp);
 }
 EXPORT_SYMBOL_GPL(dw_pcie_host_deinit);
 
-static int dw_pcie_access_other_conf(struct pcie_port *pp, struct pci_bus *bus,
-				     u32 devfn, int where, int size, u32 *val,
-				     bool write)
+static void __iomem *dw_pcie_other_conf_map_bus(struct pci_bus *bus,
+						unsigned int devfn, int where)
 {
-	int ret, type;
-	u32 busdev, cfg_size;
-	u64 cpu_addr;
-	void __iomem *va_cfg_base;
+	int type;
+	u32 busdev;
+	struct pcie_port *pp = bus->sysdata;
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+
+	/*
+	 * Checking whether the link is up here is a last line of defense
+	 * against platforms that forward errors on the system bus as
+	 * SError upon PCI configuration transactions issued when the link
+	 * is down. This check is racy by definition and does not stop
+	 * the system from triggering an SError if the link goes down
+	 * after this check is performed.
+	 */
+	if (!dw_pcie_link_up(pci))
+		return NULL;
 
 	busdev = PCIE_ATU_BUS(bus->number) | PCIE_ATU_DEV(PCI_SLOT(devfn)) |
 		 PCIE_ATU_FUNC(PCI_FUNC(devfn));
 
-	if (pci_is_root_bus(bus->parent)) {
+	if (pci_is_root_bus(bus->parent))
 		type = PCIE_ATU_TYPE_CFG0;
-		cpu_addr = pp->cfg0_base;
-		cfg_size = pp->cfg0_size;
-		va_cfg_base = pp->va_cfg0_base;
-	} else {
+	else
 		type = PCIE_ATU_TYPE_CFG1;
-		cpu_addr = pp->cfg1_base;
-		cfg_size = pp->cfg1_size;
-		va_cfg_base = pp->va_cfg1_base;
-	}
+
 
 	dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX1,
-				  type, cpu_addr,
-				  busdev, cfg_size);
-	if (write)
-		ret = dw_pcie_write(va_cfg_base + where, size, *val);
-	else
-		ret = dw_pcie_read(va_cfg_base + where, size, val);
+				  type, pp->cfg0_base,
+				  busdev, pp->cfg0_size);
 
-	if (pci->num_viewport <= 2)
+	return pp->va_cfg0_base + where;
+}
+
+static int dw_pcie_rd_other_conf(struct pci_bus *bus, unsigned int devfn,
+				 int where, int size, u32 *val)
+{
+	int ret;
+	struct pcie_port *pp = bus->sysdata;
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+
+	ret = pci_generic_config_read(bus, devfn, where, size, val);
+
+	if (!ret && pci->num_viewport <= 2)
 		dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX1,
 					  PCIE_ATU_TYPE_IO, pp->io_base,
 					  pp->io_bus_addr, pp->io_size);
···
 	return ret;
 }
 
-static int dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus,
-				 u32 devfn, int where, int size, u32 *val)
+static int dw_pcie_wr_other_conf(struct pci_bus *bus, unsigned int devfn,
+				 int where, int size, u32 val)
 {
-	if (pp->ops->rd_other_conf)
-		return pp->ops->rd_other_conf(pp, bus, devfn, where,
-					      size, val);
-
-	return dw_pcie_access_other_conf(pp, bus, devfn, where, size, val,
-					 false);
-}
-
-static int dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus,
-				 u32 devfn, int where, int size, u32 val)
-{
-	if (pp->ops->wr_other_conf)
-		return pp->ops->wr_other_conf(pp, bus, devfn, where,
-					      size, val);
-
-	return dw_pcie_access_other_conf(pp, bus, devfn, where, size, &val,
-					 true);
-}
-
-static int dw_pcie_valid_device(struct pcie_port *pp, struct pci_bus *bus,
-				int dev)
-{
+	int ret;
+	struct pcie_port *pp = bus->sysdata;
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 
-	/* If there is no link, then there is no device */
-	if (!pci_is_root_bus(bus)) {
-		if (!dw_pcie_link_up(pci))
-			return 0;
-	} else if (dev > 0)
-		/* Access only one slot on each root port */
-		return 0;
+	ret = pci_generic_config_write(bus, devfn, where, size, val);
 
-	return 1;
+	if (!ret && pci->num_viewport <= 2)
+		dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX1,
+					  PCIE_ATU_TYPE_IO, pp->io_base,
+					  pp->io_bus_addr, pp->io_size);
+
+	return ret;
 }
 
-static int dw_pcie_rd_conf(struct pci_bus *bus, u32 devfn, int where,
-			   int size, u32 *val)
+static struct pci_ops dw_child_pcie_ops = {
+	.map_bus = dw_pcie_other_conf_map_bus,
+	.read = dw_pcie_rd_other_conf,
+	.write = dw_pcie_wr_other_conf,
+};
+
+void __iomem *dw_pcie_own_conf_map_bus(struct pci_bus *bus, unsigned int devfn, int where)
 {
 	struct pcie_port *pp = bus->sysdata;
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 
-	if (!dw_pcie_valid_device(pp, bus, PCI_SLOT(devfn))) {
-		*val = 0xffffffff;
-		return PCIBIOS_DEVICE_NOT_FOUND;
-	}
+	if (PCI_SLOT(devfn) > 0)
+		return NULL;
 
-	if (pci_is_root_bus(bus))
-		return dw_pcie_rd_own_conf(pp, where, size, val);
-
-	return dw_pcie_rd_other_conf(pp, bus, devfn, where, size, val);
580 + return pci->dbi_base + where; 543 581 } 544 - 545 - static int dw_pcie_wr_conf(struct pci_bus *bus, u32 devfn, 546 - int where, int size, u32 val) 547 - { 548 - struct pcie_port *pp = bus->sysdata; 549 - 550 - if (!dw_pcie_valid_device(pp, bus, PCI_SLOT(devfn))) 551 - return PCIBIOS_DEVICE_NOT_FOUND; 552 - 553 - if (pci_is_root_bus(bus)) 554 - return dw_pcie_wr_own_conf(pp, where, size, val); 555 - 556 - return dw_pcie_wr_other_conf(pp, bus, devfn, where, size, val); 557 - } 582 + EXPORT_SYMBOL_GPL(dw_pcie_own_conf_map_bus); 558 583 559 584 static struct pci_ops dw_pcie_ops = { 560 - .read = dw_pcie_rd_conf, 561 - .write = dw_pcie_wr_conf, 585 + .map_bus = dw_pcie_own_conf_map_bus, 586 + .read = pci_generic_config_read, 587 + .write = pci_generic_config_write, 562 588 }; 563 589 564 590 void dw_pcie_setup_rc(struct pcie_port *pp) ··· 542 632 543 633 dw_pcie_setup(pci); 544 634 545 - if (!pp->ops->msi_host_init) { 635 + if (pci_msi_enabled() && !pp->ops->msi_host_init) { 546 636 num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; 547 637 548 638 /* Initialize IRQ Status array */ 549 639 for (ctrl = 0; ctrl < num_ctrls; ctrl++) { 550 640 pp->irq_mask[ctrl] = ~0; 551 - dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + 641 + dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK + 552 642 (ctrl * MSI_REG_CTRL_BLOCK_SIZE), 553 - 4, pp->irq_mask[ctrl]); 554 - dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + 643 + pp->irq_mask[ctrl]); 644 + dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_ENABLE + 555 645 (ctrl * MSI_REG_CTRL_BLOCK_SIZE), 556 - 4, ~0); 646 + ~0); 557 647 } 558 648 } 559 649 ··· 581 671 dw_pcie_writel_dbi(pci, PCI_COMMAND, val); 582 672 583 673 /* 584 - * If the platform provides ->rd_other_conf, it means the platform 585 - * uses its own address translation component rather than ATU, so 586 - * we should not program the ATU here. 
674 + * If the platform provides its own child bus config accesses, it means 675 + * the platform uses its own address translation component rather than 676 + * ATU, so we should not program the ATU here. 587 677 */ 588 - if (!pp->ops->rd_other_conf) { 678 + if (pp->bridge->child_ops == &dw_child_pcie_ops) { 679 + struct resource_entry *entry = 680 + resource_list_first_type(&pp->bridge->windows, IORESOURCE_MEM); 681 + 589 682 dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX0, 590 - PCIE_ATU_TYPE_MEM, pp->mem_base, 591 - pp->mem_bus_addr, pp->mem_size); 683 + PCIE_ATU_TYPE_MEM, entry->res->start, 684 + entry->res->start - entry->offset, 685 + resource_size(entry->res)); 592 686 if (pci->num_viewport > 2) 593 687 dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX2, 594 688 PCIE_ATU_TYPE_IO, pp->io_base, 595 689 pp->io_bus_addr, pp->io_size); 596 690 } 597 691 598 - dw_pcie_wr_own_conf(pp, PCI_BASE_ADDRESS_0, 4, 0); 692 + dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0); 599 693 600 694 /* Program correct class for RC */ 601 - dw_pcie_wr_own_conf(pp, PCI_CLASS_DEVICE, 2, PCI_CLASS_BRIDGE_PCI); 695 + dw_pcie_writew_dbi(pci, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI); 602 696 603 - dw_pcie_rd_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, &val); 697 + val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL); 604 698 val |= PORT_LOGIC_SPEED_CHANGE; 605 - dw_pcie_wr_own_conf(pp, PCIE_LINK_WIDTH_SPEED_CONTROL, 4, val); 699 + dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val); 606 700 607 701 dw_pcie_dbi_ro_wr_dis(pci); 608 702 }
+1 -3
drivers/pci/controller/dwc/pcie-designware-plat.c
···
 
 	dw_pcie_setup_rc(pp);
 	dw_pcie_wait_for_link(pci);
-
-	if (IS_ENABLED(CONFIG_PCI_MSI))
-		dw_pcie_msi_init(pp);
+	dw_pcie_msi_init(pp);
 
 	return 0;
 }
+98 -72
drivers/pci/controller/dwc/pcie-designware.c
···
 
 #include <linux/delay.h>
 #include <linux/of.h>
+#include <linux/of_platform.h>
 #include <linux/types.h>
 
 #include "../../pci.h"
···
 }
 EXPORT_SYMBOL_GPL(dw_pcie_write_dbi);
 
-u32 dw_pcie_read_dbi2(struct dw_pcie *pci, u32 reg, size_t size)
-{
-	int ret;
-	u32 val;
-
-	if (pci->ops->read_dbi2)
-		return pci->ops->read_dbi2(pci, pci->dbi_base2, reg, size);
-
-	ret = dw_pcie_read(pci->dbi_base2 + reg, size, &val);
-	if (ret)
-		dev_err(pci->dev, "read DBI address failed\n");
-
-	return val;
-}
-
 void dw_pcie_write_dbi2(struct dw_pcie *pci, u32 reg, size_t size, u32 val)
 {
 	int ret;
···
 		dev_err(pci->dev, "write DBI address failed\n");
 }
 
-u32 dw_pcie_read_atu(struct dw_pcie *pci, u32 reg, size_t size)
+static u32 dw_pcie_readl_atu(struct dw_pcie *pci, u32 reg)
 {
 	int ret;
 	u32 val;
 
 	if (pci->ops->read_dbi)
-		return pci->ops->read_dbi(pci, pci->atu_base, reg, size);
+		return pci->ops->read_dbi(pci, pci->atu_base, reg, 4);
 
-	ret = dw_pcie_read(pci->atu_base + reg, size, &val);
+	ret = dw_pcie_read(pci->atu_base + reg, 4, &val);
 	if (ret)
 		dev_err(pci->dev, "Read ATU address failed\n");
 
 	return val;
 }
 
-void dw_pcie_write_atu(struct dw_pcie *pci, u32 reg, size_t size, u32 val)
+static void dw_pcie_writel_atu(struct dw_pcie *pci, u32 reg, u32 val)
 {
 	int ret;
 
 	if (pci->ops->write_dbi) {
-		pci->ops->write_dbi(pci, pci->atu_base, reg, size, val);
+		pci->ops->write_dbi(pci, pci->atu_base, reg, 4, val);
 		return;
 	}
 
-	ret = dw_pcie_write(pci->atu_base + reg, size, val);
+	ret = dw_pcie_write(pci->atu_base + reg, 4, val);
 	if (ret)
 		dev_err(pci->dev, "Write ATU address failed\n");
 }
···
 	dw_pcie_writel_atu(pci, offset + reg, val);
 }
 
-static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, int index,
-					     int type, u64 cpu_addr,
-					     u64 pci_addr, u32 size)
+static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, u8 func_no,
+					     int index, int type,
+					     u64 cpu_addr, u64 pci_addr,
+					     u32 size)
 {
 	u32 retries, val;
 	u64 limit_addr = cpu_addr + size - 1;
···
 	dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_TARGET,
 				 upper_32_bits(pci_addr));
 	dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL1,
-				 type);
+				 type | PCIE_ATU_FUNC_NUM(func_no));
 	dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2,
 				 PCIE_ATU_ENABLE);
···
 	dev_err(pci->dev, "Outbound iATU is not being enabled\n");
 }
 
-void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
-			       u64 cpu_addr, u64 pci_addr, u32 size)
+static void __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no,
+					int index, int type, u64 cpu_addr,
+					u64 pci_addr, u32 size)
 {
 	u32 retries, val;
···
 		cpu_addr = pci->ops->cpu_addr_fixup(pci, cpu_addr);
 
 	if (pci->iatu_unroll_enabled) {
-		dw_pcie_prog_outbound_atu_unroll(pci, index, type, cpu_addr,
-						 pci_addr, size);
+		dw_pcie_prog_outbound_atu_unroll(pci, func_no, index, type,
+						 cpu_addr, pci_addr, size);
 		return;
 	}
···
 			   lower_32_bits(pci_addr));
 	dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_TARGET,
 			   upper_32_bits(pci_addr));
-	dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, type);
+	dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, type |
+			   PCIE_ATU_FUNC_NUM(func_no));
 	dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, PCIE_ATU_ENABLE);
 
 	/*
···
 		mdelay(LINK_WAIT_IATU);
 	}
 	dev_err(pci->dev, "Outbound iATU is not being enabled\n");
+}
+
+void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type,
+			       u64 cpu_addr, u64 pci_addr, u32 size)
+{
+	__dw_pcie_prog_outbound_atu(pci, 0, index, type,
+				    cpu_addr, pci_addr, size);
+}
+
+void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index,
+				  int type, u64 cpu_addr, u64 pci_addr,
+				  u32 size)
+{
+	__dw_pcie_prog_outbound_atu(pci, func_no, index, type,
+				    cpu_addr, pci_addr, size);
 }
 
 static u32 dw_pcie_readl_ib_unroll(struct dw_pcie *pci, u32 index, u32 reg)
···
 	dw_pcie_writel_atu(pci, offset + reg, val);
 }
 
-static int dw_pcie_prog_inbound_atu_unroll(struct dw_pcie *pci, int index,
-					   int bar, u64 cpu_addr,
+static int dw_pcie_prog_inbound_atu_unroll(struct dw_pcie *pci, u8 func_no,
+					   int index, int bar, u64 cpu_addr,
 					   enum dw_pcie_as_type as_type)
 {
 	int type;
···
 		return -EINVAL;
 	}
 
-	dw_pcie_writel_ib_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL1, type);
+	dw_pcie_writel_ib_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL1, type |
+				 PCIE_ATU_FUNC_NUM(func_no));
 	dw_pcie_writel_ib_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2,
+				 PCIE_ATU_FUNC_NUM_MATCH_EN |
 				 PCIE_ATU_ENABLE |
 				 PCIE_ATU_BAR_MODE_ENABLE | (bar << 8));
···
 	return -EBUSY;
 }
 
-int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int bar,
-			     u64 cpu_addr, enum dw_pcie_as_type as_type)
+int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
+			     int bar, u64 cpu_addr,
+			     enum dw_pcie_as_type as_type)
 {
 	int type;
 	u32 retries, val;
 
 	if (pci->iatu_unroll_enabled)
-		return dw_pcie_prog_inbound_atu_unroll(pci, index, bar,
+		return dw_pcie_prog_inbound_atu_unroll(pci, func_no, index, bar,
 						       cpu_addr, as_type);
 
 	dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, PCIE_ATU_REGION_INBOUND |
···
 		return -EINVAL;
 	}
 
-	dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, type);
-	dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, PCIE_ATU_ENABLE
-			   | PCIE_ATU_BAR_MODE_ENABLE | (bar << 8));
+	dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, type |
+			   PCIE_ATU_FUNC_NUM(func_no));
+	dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, PCIE_ATU_ENABLE |
+			   PCIE_ATU_FUNC_NUM_MATCH_EN |
+			   PCIE_ATU_BAR_MODE_ENABLE | (bar << 8));
 
 	/*
 	 * Make sure ATU enable takes effect before any subsequent config
···
 	}
 
 	dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, region | index);
-	dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, (u32)~PCIE_ATU_ENABLE);
+	dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, ~(u32)PCIE_ATU_ENABLE);
 }
 
 int dw_pcie_wait_for_link(struct dw_pcie *pci)
···
 }
 EXPORT_SYMBOL_GPL(dw_pcie_upconfig_setup);
 
-void dw_pcie_link_set_max_speed(struct dw_pcie *pci, u32 link_gen)
+static void dw_pcie_link_set_max_speed(struct dw_pcie *pci, u32 link_gen)
 {
-	u32 reg, val;
+	u32 cap, ctrl2, link_speed;
 	u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
 
-	reg = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCTL2);
-	reg &= ~PCI_EXP_LNKCTL2_TLS;
+	cap = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
+	ctrl2 = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCTL2);
+	ctrl2 &= ~PCI_EXP_LNKCTL2_TLS;
 
 	switch (pcie_link_speed[link_gen]) {
 	case PCIE_SPEED_2_5GT:
-		reg |= PCI_EXP_LNKCTL2_TLS_2_5GT;
+		link_speed = PCI_EXP_LNKCTL2_TLS_2_5GT;
 		break;
 	case PCIE_SPEED_5_0GT:
-		reg |= PCI_EXP_LNKCTL2_TLS_5_0GT;
+		link_speed = PCI_EXP_LNKCTL2_TLS_5_0GT;
 		break;
 	case PCIE_SPEED_8_0GT:
-		reg |= PCI_EXP_LNKCTL2_TLS_8_0GT;
+		link_speed = PCI_EXP_LNKCTL2_TLS_8_0GT;
 		break;
 	case PCIE_SPEED_16_0GT:
-		reg |= PCI_EXP_LNKCTL2_TLS_16_0GT;
+		link_speed = PCI_EXP_LNKCTL2_TLS_16_0GT;
 		break;
 	default:
 		/* Use hardware capability */
-		val = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
-		val = FIELD_GET(PCI_EXP_LNKCAP_SLS, val);
-		reg &= ~PCI_EXP_LNKCTL2_HASD;
-		reg |= FIELD_PREP(PCI_EXP_LNKCTL2_TLS, val);
+		link_speed = FIELD_GET(PCI_EXP_LNKCAP_SLS, cap);
+		ctrl2 &= ~PCI_EXP_LNKCTL2_HASD;
 		break;
 	}
 
-	dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCTL2, reg);
-}
-EXPORT_SYMBOL_GPL(dw_pcie_link_set_max_speed);
+	dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCTL2, ctrl2 | link_speed);
 
-void dw_pcie_link_set_n_fts(struct dw_pcie *pci, u32 n_fts)
-{
-	u32 val;
+	cap &= ~((u32)PCI_EXP_LNKCAP_SLS);
+	dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, cap | link_speed);
 
-	val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
-	val &= ~PORT_LOGIC_N_FTS_MASK;
-	val |= n_fts & PORT_LOGIC_N_FTS_MASK;
-	dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
 }
-EXPORT_SYMBOL_GPL(dw_pcie_link_set_n_fts);
 
 static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci)
 {
···
 
 void dw_pcie_setup(struct dw_pcie *pci)
 {
-	int ret;
 	u32 val;
-	u32 lanes;
 	struct device *dev = pci->dev;
 	struct device_node *np = dev->of_node;
+	struct platform_device *pdev = to_platform_device(dev);
 
 	if (pci->version >= 0x480A || (!pci->version &&
 				       dw_pcie_iatu_unroll_enabled(pci))) {
 		pci->iatu_unroll_enabled = true;
 		if (!pci->atu_base)
+			pci->atu_base =
+			    devm_platform_ioremap_resource_byname(pdev, "atu");
+		if (IS_ERR(pci->atu_base))
 			pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET;
 	}
 	dev_dbg(pci->dev, "iATU unroll: %s\n", pci->iatu_unroll_enabled ?
 		"enabled" : "disabled");
 
+	if (pci->link_gen > 0)
+		dw_pcie_link_set_max_speed(pci, pci->link_gen);
 
-	ret = of_property_read_u32(np, "num-lanes", &lanes);
-	if (ret) {
-		dev_dbg(pci->dev, "property num-lanes isn't found\n");
+	/* Configure Gen1 N_FTS */
+	if (pci->n_fts[0]) {
+		val = dw_pcie_readl_dbi(pci, PCIE_PORT_AFR);
+		val &= ~(PORT_AFR_N_FTS_MASK | PORT_AFR_CC_N_FTS_MASK);
+		val |= PORT_AFR_N_FTS(pci->n_fts[0]);
+		val |= PORT_AFR_CC_N_FTS(pci->n_fts[0]);
+		dw_pcie_writel_dbi(pci, PCIE_PORT_AFR, val);
+	}
+
+	/* Configure Gen2+ N_FTS */
+	if (pci->n_fts[1]) {
+		val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
+		val &= ~PORT_LOGIC_N_FTS_MASK;
+		val |= pci->n_fts[pci->link_gen - 1];
+		dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
+	}
+
+	val = dw_pcie_readl_dbi(pci, PCIE_PORT_LINK_CONTROL);
+	val &= ~PORT_LINK_FAST_LINK_MODE;
+	val |= PORT_LINK_DLL_LINK_EN;
+	dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val);
+
+	of_property_read_u32(np, "num-lanes", &pci->num_lanes);
+	if (!pci->num_lanes) {
+		dev_dbg(pci->dev, "Using h/w default number of lanes\n");
 		return;
 	}
 
 	/* Set the number of lanes */
-	val = dw_pcie_readl_dbi(pci, PCIE_PORT_LINK_CONTROL);
+	val &= ~PORT_LINK_FAST_LINK_MODE;
 	val &= ~PORT_LINK_MODE_MASK;
-	switch (lanes) {
+	switch (pci->num_lanes) {
 	case 1:
 		val |= PORT_LINK_MODE_1_LANES;
 		break;
···
 		val |= PORT_LINK_MODE_8_LANES;
 		break;
 	default:
-		dev_err(pci->dev, "num-lanes %u: invalid value\n", lanes);
+		dev_err(pci->dev, "num-lanes %u: invalid value\n", pci->num_lanes);
 		return;
 	}
 	dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val);
···
 	/* Set link width speed control register */
 	val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
 	val &= ~PORT_LOGIC_LINK_WIDTH_MASK;
-	switch (lanes) {
+	switch (pci->num_lanes) {
 	case 1:
 		val |= PORT_LOGIC_LINK_WIDTH_1_LANES;
 		break;
+65 -45
drivers/pci/controller/dwc/pcie-designware.h
···
 /* Synopsys-specific PCIe configuration registers */
 #define PCIE_PORT_AFR			0x70C
 #define PORT_AFR_N_FTS_MASK		GENMASK(15, 8)
+#define PORT_AFR_N_FTS(n)		FIELD_PREP(PORT_AFR_N_FTS_MASK, n)
 #define PORT_AFR_CC_N_FTS_MASK		GENMASK(23, 16)
+#define PORT_AFR_CC_N_FTS(n)		FIELD_PREP(PORT_AFR_CC_N_FTS_MASK, n)
+#define PORT_AFR_ENTER_ASPM		BIT(30)
+#define PORT_AFR_L0S_ENTRANCE_LAT_SHIFT	24
+#define PORT_AFR_L0S_ENTRANCE_LAT_MASK	GENMASK(26, 24)
+#define PORT_AFR_L1_ENTRANCE_LAT_SHIFT	27
+#define PORT_AFR_L1_ENTRANCE_LAT_MASK	GENMASK(29, 27)
 
 #define PCIE_PORT_LINK_CONTROL		0x710
 #define PORT_LINK_DLL_LINK_EN		BIT(5)
+#define PORT_LINK_FAST_LINK_MODE	BIT(7)
 #define PORT_LINK_MODE_MASK		GENMASK(21, 16)
 #define PORT_LINK_MODE(n)		FIELD_PREP(PORT_LINK_MODE_MASK, n)
 #define PORT_LINK_MODE_1_LANES		PORT_LINK_MODE(0x1)
···
 #define PCIE_ATU_TYPE_IO		0x2
 #define PCIE_ATU_TYPE_CFG0		0x4
 #define PCIE_ATU_TYPE_CFG1		0x5
+#define PCIE_ATU_FUNC_NUM(pf)		((pf) << 20)
 #define PCIE_ATU_CR2			0x908
 #define PCIE_ATU_ENABLE			BIT(31)
 #define PCIE_ATU_BAR_MODE_ENABLE	BIT(30)
+#define PCIE_ATU_FUNC_NUM_MATCH_EN	BIT(19)
 #define PCIE_ATU_LOWER_BASE		0x90C
 #define PCIE_ATU_UPPER_BASE		0x910
 #define PCIE_ATU_LIMIT			0x914
···
 
 #define PCIE_MISC_CONTROL_1_OFF		0x8BC
 #define PCIE_DBI_RO_WR_EN		BIT(0)
+
+#define PCIE_MSIX_DOORBELL		0x948
+#define PCIE_MSIX_DOORBELL_PF_SHIFT	24
 
 #define PCIE_PL_CHK_REG_CONTROL_STATUS			0xB20
 #define PCIE_PL_CHK_REG_CHK_REG_START			BIT(0)
···
 };
 
 struct dw_pcie_host_ops {
-	int (*rd_own_conf)(struct pcie_port *pp, int where, int size, u32 *val);
-	int (*wr_own_conf)(struct pcie_port *pp, int where, int size, u32 val);
-	int (*rd_other_conf)(struct pcie_port *pp, struct pci_bus *bus,
-			     unsigned int devfn, int where, int size, u32 *val);
-	int (*wr_other_conf)(struct pcie_port *pp, struct pci_bus *bus,
-			     unsigned int devfn, int where, int size, u32 val);
 	int (*host_init)(struct pcie_port *pp);
-	void (*scan_bus)(struct pcie_port *pp);
 	void (*set_num_vectors)(struct pcie_port *pp);
 	int (*msi_host_init)(struct pcie_port *pp);
 };
···
 	u64			cfg0_base;
 	void __iomem		*va_cfg0_base;
 	u32			cfg0_size;
-	u64			cfg1_base;
-	void __iomem		*va_cfg1_base;
-	u32			cfg1_size;
 	resource_size_t		io_base;
 	phys_addr_t		io_bus_addr;
 	u32			io_size;
-	u64			mem_base;
-	phys_addr_t		mem_bus_addr;
-	u32			mem_size;
-	struct resource		*cfg;
-	struct resource		*io;
-	struct resource		*mem;
-	struct resource		*busn;
 	int			irq;
 	const struct dw_pcie_host_ops *ops;
 	int			msi_irq;
 	struct irq_domain	*irq_domain;
 	struct irq_domain	*msi_domain;
+	u16			msi_msg;
 	dma_addr_t		msi_data;
-	struct page		*msi_page;
 	struct irq_chip		*msi_irq_chip;
 	u32			num_vectors;
 	u32			irq_mask[MAX_MSI_CTRLS];
-	struct pci_bus		*root_bus;
+	struct pci_host_bridge  *bridge;
 	raw_spinlock_t		lock;
 	DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
 };
···
 	int	(*raise_irq)(struct dw_pcie_ep *ep, u8 func_no,
 			     enum pci_epc_irq_type type, u16 interrupt_num);
 	const struct pci_epc_features* (*get_features)(struct dw_pcie_ep *ep);
+	/*
+	 * Provide a method to implement the different func config space
+	 * access for different platform, if different func have different
+	 * offset, return the offset of func. if use write a register way
+	 * return a 0, and implement code in callback function of platform
+	 * driver.
+	 */
+	unsigned int (*func_conf_select)(struct dw_pcie_ep *ep, u8 func_no);
+};
+
+struct dw_pcie_ep_func {
+	struct list_head	list;
+	u8			func_no;
+	u8			msi_cap;	/* MSI capability offset */
+	u8			msix_cap;	/* MSI-X capability offset */
 };
 
 struct dw_pcie_ep {
 	struct pci_epc		*epc;
+	struct list_head	func_list;
 	const struct dw_pcie_ep_ops *ops;
 	phys_addr_t		phys_base;
 	size_t			addr_size;
···
 	u32			num_ob_windows;
 	void __iomem		*msi_mem;
 	phys_addr_t		msi_mem_phys;
-	u8			msi_cap;	/* MSI capability offset */
-	u8			msix_cap;	/* MSI-X capability offset */
 	struct pci_epf_bar	*epf_bar[PCI_STD_NUM_BARS];
 };
···
 			    size_t size);
 	void	(*write_dbi)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
 			     size_t size, u32 val);
-	u32	(*read_dbi2)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
-			     size_t size);
 	void	(*write_dbi2)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
 			      size_t size, u32 val);
 	int	(*link_up)(struct dw_pcie *pcie);
···
 	struct dw_pcie_ep	ep;
 	const struct dw_pcie_ops *ops;
 	unsigned int		version;
+	int			num_lanes;
+	int			link_gen;
+	u8			n_fts[2];
 };
 
 #define to_dw_pcie_from_pp(port) container_of((port), struct dw_pcie, pp)
···
 
 u32 dw_pcie_read_dbi(struct dw_pcie *pci, u32 reg, size_t size);
 void dw_pcie_write_dbi(struct dw_pcie *pci, u32 reg, size_t size, u32 val);
-u32 dw_pcie_read_dbi2(struct dw_pcie *pci, u32 reg, size_t size);
 void dw_pcie_write_dbi2(struct dw_pcie *pci, u32 reg, size_t size, u32 val);
-u32 dw_pcie_read_atu(struct dw_pcie *pci, u32 reg, size_t size);
-void dw_pcie_write_atu(struct dw_pcie *pci, u32 reg, size_t size, u32 val);
 int dw_pcie_link_up(struct dw_pcie *pci);
 void dw_pcie_upconfig_setup(struct dw_pcie *pci);
-void dw_pcie_link_set_max_speed(struct dw_pcie *pci, u32 link_gen);
-void dw_pcie_link_set_n_fts(struct dw_pcie *pci, u32 n_fts);
 int dw_pcie_wait_for_link(struct dw_pcie *pci);
 void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index,
 			       int type, u64 cpu_addr, u64 pci_addr,
 			       u32 size);
-int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int bar,
-			     u64 cpu_addr, enum dw_pcie_as_type as_type);
+void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index,
+				  int type, u64 cpu_addr, u64 pci_addr,
+				  u32 size);
+int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index,
+			     int bar, u64 cpu_addr,
+			     enum dw_pcie_as_type as_type);
 void dw_pcie_disable_atu(struct dw_pcie *pci, int index,
 			 enum dw_pcie_region_type type);
 void dw_pcie_setup(struct dw_pcie *pci);
···
 	dw_pcie_write_dbi2(pci, reg, 0x4, val);
 }
 
-static inline u32 dw_pcie_readl_dbi2(struct dw_pcie *pci, u32 reg)
-{
-	return dw_pcie_read_dbi2(pci, reg, 0x4);
-}
-
-static inline void dw_pcie_writel_atu(struct dw_pcie *pci, u32 reg, u32 val)
-{
-	dw_pcie_write_atu(pci, reg, 0x4, val);
-}
-
-static inline u32 dw_pcie_readl_atu(struct dw_pcie *pci, u32 reg)
-{
-	return dw_pcie_read_atu(pci, reg, 0x4);
-}
-
 static inline void dw_pcie_dbi_ro_wr_en(struct dw_pcie *pci)
 {
 	u32 reg;
···
 int dw_pcie_host_init(struct pcie_port *pp);
 void dw_pcie_host_deinit(struct pcie_port *pp);
 int dw_pcie_allocate_domains(struct pcie_port *pp);
+void __iomem *dw_pcie_own_conf_map_bus(struct pci_bus *bus, unsigned int devfn,
+				       int where);
 #else
 static inline irqreturn_t dw_handle_msi_irq(struct pcie_port *pp)
 {
···
 {
 	return 0;
 }
+static inline void __iomem *dw_pcie_own_conf_map_bus(struct pci_bus *bus,
+						     unsigned int devfn,
+						     int where)
+{
+	return NULL;
+}
 #endif
···
 		     u8 interrupt_num);
 int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
 			      u16 interrupt_num);
+int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no,
+				       u16 interrupt_num);
 void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar);
+struct dw_pcie_ep_func *
+dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no);
 #else
 static inline void dw_pcie_ep_linkup(struct dw_pcie_ep *ep)
 {
···
 	return 0;
 }
 
+static inline int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep,
+						     u8 func_no,
+						     u16 interrupt_num)
+{
+	return 0;
+}
+
 static inline void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar)
 {
+}
+
+static inline struct dw_pcie_ep_func *
+dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no)
+{
+	return NULL;
 }
 #endif
 #endif /* _PCIE_DESIGNWARE_H */
+24 -21
drivers/pci/controller/dwc/pcie-histb.c
···
 	histb_pcie_dbi_w_mode(&pci->pp, false);
 }
 
-static int histb_pcie_rd_own_conf(struct pcie_port *pp, int where,
-				  int size, u32 *val)
+static int histb_pcie_rd_own_conf(struct pci_bus *bus, unsigned int devfn,
+				  int where, int size, u32 *val)
 {
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	int ret;
+	struct dw_pcie *pci = to_dw_pcie_from_pp(bus->sysdata);
 
-	histb_pcie_dbi_r_mode(pp, true);
-	ret = dw_pcie_read(pci->dbi_base + where, size, val);
-	histb_pcie_dbi_r_mode(pp, false);
+	if (PCI_SLOT(devfn)) {
+		*val = ~0;
+		return PCIBIOS_DEVICE_NOT_FOUND;
+	}
 
-	return ret;
+	*val = dw_pcie_read_dbi(pci, where, size);
+	return PCIBIOS_SUCCESSFUL;
 }
 
-static int histb_pcie_wr_own_conf(struct pcie_port *pp, int where,
-				  int size, u32 val)
+static int histb_pcie_wr_own_conf(struct pci_bus *bus, unsigned int devfn,
+				  int where, int size, u32 val)
 {
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	int ret;
+	struct dw_pcie *pci = to_dw_pcie_from_pp(bus->sysdata);
 
-	histb_pcie_dbi_w_mode(pp, true);
-	ret = dw_pcie_write(pci->dbi_base + where, size, val);
-	histb_pcie_dbi_w_mode(pp, false);
+	if (PCI_SLOT(devfn))
+		return PCIBIOS_DEVICE_NOT_FOUND;
 
-	return ret;
+	dw_pcie_write_dbi(pci, where, size, val);
+	return PCIBIOS_SUCCESSFUL;
 }
+
+static struct pci_ops histb_pci_ops = {
+	.read = histb_pcie_rd_own_conf,
+	.write = histb_pcie_wr_own_conf,
+};
 
 static int histb_pcie_link_up(struct dw_pcie *pci)
 {
···
 
 static int histb_pcie_host_init(struct pcie_port *pp)
 {
-	histb_pcie_establish_link(pp);
+	pp->bridge->ops = &histb_pci_ops;
 
-	if (IS_ENABLED(CONFIG_PCI_MSI))
-		dw_pcie_msi_init(pp);
+	histb_pcie_establish_link(pp);
+	dw_pcie_msi_init(pp);
 
 	return 0;
 }
 
 static const struct dw_pcie_host_ops histb_pcie_host_ops = {
-	.rd_own_conf = histb_pcie_rd_own_conf,
-	.wr_own_conf = histb_pcie_wr_own_conf,
 	.host_init = histb_pcie_host_init,
 };
+12 -53
drivers/pci/controller/dwc/pcie-intel-gw.c
···
 	void __iomem		*app_base;
 	struct gpio_desc	*reset_gpio;
 	u32			rst_intrvl;
-	u32			max_speed;
-	u32			link_gen;
-	u32			max_width;
-	u32			n_fts;
 	struct clk		*core_clk;
 	struct reset_control	*core_rst;
 	struct phy		*phy;
-	u8			pcie_cap_ofst;
 };

 static void pcie_update_bits(void __iomem *base, u32 ofs, u32 mask, u32 val)
···
 static void intel_pcie_link_setup(struct intel_pcie_port *lpp)
 {
 	u32 val;
-	u8 offset = lpp->pcie_cap_ofst;
-
-	val = pcie_rc_cfg_rd(lpp, offset + PCI_EXP_LNKCAP);
-	lpp->max_speed = FIELD_GET(PCI_EXP_LNKCAP_SLS, val);
-	lpp->max_width = FIELD_GET(PCI_EXP_LNKCAP_MLW, val);
+	u8 offset = dw_pcie_find_capability(&lpp->pci, PCI_CAP_ID_EXP);

 	val = pcie_rc_cfg_rd(lpp, offset + PCI_EXP_LNKCTL);
···
 	pcie_rc_cfg_wr(lpp, offset + PCI_EXP_LNKCTL, val);
 }

-static void intel_pcie_port_logic_setup(struct intel_pcie_port *lpp)
+static void intel_pcie_init_n_fts(struct dw_pcie *pci)
 {
-	u32 val, mask;
-
-	switch (pcie_link_speed[lpp->max_speed]) {
-	case PCIE_SPEED_8_0GT:
-		lpp->n_fts = PORT_AFR_N_FTS_GEN3;
+	switch (pci->link_gen) {
+	case 3:
+		pci->n_fts[1] = PORT_AFR_N_FTS_GEN3;
 		break;
-	case PCIE_SPEED_16_0GT:
-		lpp->n_fts = PORT_AFR_N_FTS_GEN4;
+	case 4:
+		pci->n_fts[1] = PORT_AFR_N_FTS_GEN4;
 		break;
 	default:
-		lpp->n_fts = PORT_AFR_N_FTS_GEN12_DFT;
+		pci->n_fts[1] = PORT_AFR_N_FTS_GEN12_DFT;
 		break;
 	}
-
-	mask = PORT_AFR_N_FTS_MASK | PORT_AFR_CC_N_FTS_MASK;
-	val = FIELD_PREP(PORT_AFR_N_FTS_MASK, lpp->n_fts) |
-	      FIELD_PREP(PORT_AFR_CC_N_FTS_MASK, lpp->n_fts);
-	pcie_rc_cfg_wr_mask(lpp, PCIE_PORT_AFR, mask, val);
-
-	/* Port Link Control Register */
-	pcie_rc_cfg_wr_mask(lpp, PCIE_PORT_LINK_CONTROL, PORT_LINK_DLL_LINK_EN,
-			    PORT_LINK_DLL_LINK_EN);
+
+	pci->n_fts[0] = PORT_AFR_N_FTS_GEN12_DFT;
 }

 static void intel_pcie_rc_setup(struct intel_pcie_port *lpp)
 {
 	intel_pcie_ltssm_disable(lpp);
 	intel_pcie_link_setup(lpp);
+	intel_pcie_init_n_fts(&lpp->pci);
 	dw_pcie_setup_rc(&lpp->pci.pp);
 	dw_pcie_upconfig_setup(&lpp->pci);
-	intel_pcie_port_logic_setup(lpp);
-	dw_pcie_link_set_max_speed(&lpp->pci, lpp->link_gen);
-	dw_pcie_link_set_n_fts(&lpp->pci, lpp->n_fts);
 }

 static int intel_pcie_ep_rst_init(struct intel_pcie_port *lpp)
···
 		return ret;
 	}

-	ret = device_property_match_string(dev, "device_type", "pci");
-	if (ret) {
-		dev_err(dev, "Failed to find pci device type: %d\n", ret);
-		return ret;
-	}
-
 	ret = device_property_read_u32(dev, "reset-assert-ms",
 				       &lpp->rst_intrvl);
 	if (ret)
 		lpp->rst_intrvl = RESET_INTERVAL_MS;
-
-	ret = of_pci_get_max_link_speed(dev->of_node);
-	lpp->link_gen = ret < 0 ? 0 : ret;

 	lpp->app_base = devm_platform_ioremap_resource_byname(pdev, "app");
 	if (IS_ERR(lpp->app_base))
···
 {
 	u32 value;
 	int ret;
+	struct dw_pcie *pci = &lpp->pci;

-	if (pcie_link_speed[lpp->max_speed] < PCIE_SPEED_8_0GT)
+	if (pci->link_gen < 3)
 		return 0;

 	/* Send PME_TURN_OFF message */
···

 static int intel_pcie_host_setup(struct intel_pcie_port *lpp)
 {
-	struct device *dev = lpp->pci.dev;
 	int ret;

 	intel_pcie_core_rst_assert(lpp);
···
 	if (ret) {
 		dev_err(lpp->pci.dev, "Core clock enable failed: %d\n", ret);
 		goto clk_err;
-	}
-
-	if (!lpp->pcie_cap_ofst) {
-		ret = dw_pcie_find_capability(&lpp->pci, PCI_CAP_ID_EXP);
-		if (!ret) {
-			ret = -ENXIO;
-			dev_err(dev, "Invalid PCIe capability offset\n");
-			goto app_init_err;
-		}
-
-		lpp->pcie_cap_ofst = ret;
 	}

 	intel_pcie_rc_setup(lpp);
drivers/pci/controller/dwc/pcie-kirin.c (+27 -22)
···
 	kirin_apb_ctrl_writel(kirin_pcie, val, SOC_PCIECTRL_CTRL1_ADDR);
 }

-static int kirin_pcie_rd_own_conf(struct pcie_port *pp,
+static int kirin_pcie_rd_own_conf(struct pci_bus *bus, unsigned int devfn,
 				  int where, int size, u32 *val)
 {
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci);
-	int ret;
+	struct dw_pcie *pci = to_dw_pcie_from_pp(bus->sysdata);

-	kirin_pcie_sideband_dbi_r_mode(kirin_pcie, true);
-	ret = dw_pcie_read(pci->dbi_base + where, size, val);
-	kirin_pcie_sideband_dbi_r_mode(kirin_pcie, false);
+	if (PCI_SLOT(devfn)) {
+		*val = ~0;
+		return PCIBIOS_DEVICE_NOT_FOUND;
+	}

-	return ret;
+	*val = dw_pcie_read_dbi(pci, where, size);
+	return PCIBIOS_SUCCESSFUL;
 }

-static int kirin_pcie_wr_own_conf(struct pcie_port *pp,
+static int kirin_pcie_wr_own_conf(struct pci_bus *bus, unsigned int devfn,
 				  int where, int size, u32 val)
 {
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci);
-	int ret;
+	struct dw_pcie *pci = to_dw_pcie_from_pp(bus->sysdata);

-	kirin_pcie_sideband_dbi_w_mode(kirin_pcie, true);
-	ret = dw_pcie_write(pci->dbi_base + where, size, val);
-	kirin_pcie_sideband_dbi_w_mode(kirin_pcie, false);
+	if (PCI_SLOT(devfn))
+		return PCIBIOS_DEVICE_NOT_FOUND;

-	return ret;
+	dw_pcie_write_dbi(pci, where, size, val);
+	return PCIBIOS_SUCCESSFUL;
 }
+
+static struct pci_ops kirin_pci_ops = {
+	.read = kirin_pcie_rd_own_conf,
+	.write = kirin_pcie_wr_own_conf,
+};

 static u32 kirin_pcie_read_dbi(struct dw_pcie *pci, void __iomem *base,
 			       u32 reg, size_t size)
···

 static int kirin_pcie_host_init(struct pcie_port *pp)
 {
-	kirin_pcie_establish_link(pp);
+	pp->bridge->ops = &kirin_pci_ops;

-	if (IS_ENABLED(CONFIG_PCI_MSI))
-		dw_pcie_msi_init(pp);
+	kirin_pcie_establish_link(pp);
+	dw_pcie_msi_init(pp);

 	return 0;
 }
···
 };

 static const struct dw_pcie_host_ops kirin_pcie_host_ops = {
-	.rd_own_conf = kirin_pcie_rd_own_conf,
-	.wr_own_conf = kirin_pcie_wr_own_conf,
 	.host_init = kirin_pcie_host_init,
 };
···

 	kirin_pcie->gpio_id_reset = of_get_named_gpio(dev->of_node,
 						      "reset-gpios", 0);
-	if (kirin_pcie->gpio_id_reset < 0)
+	if (kirin_pcie->gpio_id_reset == -EPROBE_DEFER) {
+		return -EPROBE_DEFER;
+	} else if (!gpio_is_valid(kirin_pcie->gpio_id_reset)) {
+		dev_err(dev, "unable to get a valid gpio pin\n");
 		return -ENODEV;
+	}

 	ret = kirin_pcie_power_on(kirin_pcie);
 	if (ret)
drivers/pci/controller/dwc/pcie-qcom.c (+22 -24)
···
 #define PCIE20_AXI_MSTR_RESP_COMP_CTRL1	0x81c
 #define CFG_BRIDGE_SB_INIT		BIT(0)

-#define PCIE20_CAP			0x70
-#define PCIE20_DEVICE_CONTROL2_STATUS2	(PCIE20_CAP + PCI_EXP_DEVCTL2)
-#define PCIE20_CAP_LINK_CAPABILITIES	(PCIE20_CAP + PCI_EXP_LNKCAP)
-#define PCIE20_CAP_LINK_1		(PCIE20_CAP + 0x14)
 #define PCIE_CAP_LINK1_VAL		0x2FD7F

 #define PCIE20_PARF_Q2A_FLUSH		0x1AC
···
 	struct phy *phy;
 	struct gpio_desc *reset;
 	const struct qcom_pcie_ops *ops;
-	int gen;
 };

 #define to_qcom_pcie(x)	dev_get_drvdata((x)->dev)
···
 	reset_control_assert(res->por_reset);
 	reset_control_assert(res->ext_reset);
 	reset_control_assert(res->phy_reset);
+
+	writel(1, pcie->parf + PCIE20_PARF_PHY_CTRL);
+
 	regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
 }
···
 	struct device_node *node = dev->of_node;
 	u32 val;
 	int ret;
+
+	/* reset the PCIe interface as uboot can leave it undefined state */
+	reset_control_assert(res->pci_reset);
+	reset_control_assert(res->axi_reset);
+	reset_control_assert(res->ahb_reset);
+	reset_control_assert(res->por_reset);
+	reset_control_assert(res->ext_reset);
+	reset_control_assert(res->phy_reset);
+
+	writel(1, pcie->parf + PCIE20_PARF_PHY_CTRL);

 	ret = regulator_bulk_enable(ARRAY_SIZE(res->supplies), res->supplies);
 	if (ret < 0) {
···

 	/* wait for clock acquisition */
 	usleep_range(1000, 1500);
-
-	if (pcie->gen == 1) {
-		val = readl(pci->dbi_base + PCIE20_LNK_CONTROL2_LINK_STATUS2);
-		val |= PCI_EXP_LNKSTA_CLS_2_5GB;
-		writel(val, pci->dbi_base + PCIE20_LNK_CONTROL2_LINK_STATUS2);
-	}

 	/* Set the Max TLP size to 2K, instead of using default of 4K */
 	writel(CFG_REMOTE_RD_REQ_BRIDGE_SIZE_2K,
···
 	struct qcom_pcie_resources_2_3_3 *res = &pcie->res.v2_3_3;
 	struct dw_pcie *pci = pcie->pci;
 	struct device *dev = pci->dev;
+	u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
 	int i, ret;
 	u32 val;
···

 	writel(PCI_COMMAND_MASTER, pci->dbi_base + PCI_COMMAND);
 	writel(DBI_RO_WR_EN, pci->dbi_base + PCIE20_MISC_CONTROL_1_REG);
-	writel(PCIE_CAP_LINK1_VAL, pci->dbi_base + PCIE20_CAP_LINK_1);
+	writel(PCIE_CAP_LINK1_VAL, pci->dbi_base + offset + PCI_EXP_SLTCAP);

-	val = readl(pci->dbi_base + PCIE20_CAP_LINK_CAPABILITIES);
+	val = readl(pci->dbi_base + offset + PCI_EXP_LNKCAP);
 	val &= ~PCI_EXP_LNKCAP_ASPMS;
-	writel(val, pci->dbi_base + PCIE20_CAP_LINK_CAPABILITIES);
+	writel(val, pci->dbi_base + offset + PCI_EXP_LNKCAP);

-	writel(PCI_EXP_DEVCTL2_COMP_TMOUT_DIS, pci->dbi_base +
-		PCIE20_DEVICE_CONTROL2_STATUS2);
+	writel(PCI_EXP_DEVCTL2_COMP_TMOUT_DIS, pci->dbi_base + offset +
+		PCI_EXP_DEVCTL2);

 	return 0;
···

 static int qcom_pcie_link_up(struct dw_pcie *pci)
 {
-	u16 val = readw(pci->dbi_base + PCIE20_CAP + PCI_EXP_LNKSTA);
+	u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+	u16 val = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA);

 	return !!(val & PCI_EXP_LNKSTA_DLLLA);
 }
···
 	}

 	dw_pcie_setup_rc(pp);
-
-	if (IS_ENABLED(CONFIG_PCI_MSI))
-		dw_pcie_msi_init(pp);
+	dw_pcie_msi_init(pp);

 	qcom_ep_reset_deassert(pcie);
···
 		ret = PTR_ERR(pcie->reset);
 		goto err_pm_runtime_put;
 	}
-
-	pcie->gen = of_pci_get_max_link_speed(pdev->dev.of_node);
-	if (pcie->gen < 0)
-		pcie->gen = 2;

 	pcie->parf = devm_platform_ioremap_resource_byname(pdev, "parf");
 	if (IS_ERR(pcie->parf)) {
drivers/pci/controller/dwc/pcie-spear13xx.c (+6 -33)
···
 	void __iomem		*app_base;
 	struct phy		*phy;
 	struct clk		*clk;
-	bool			is_gen1;
 };

 struct pcie_app_reg {
···
 /* CR6 */
 #define MSI_CTRL_INT			(1 << 26)

-#define EXP_CAP_ID_OFFSET		0x70
-
 #define to_spear13xx_pcie(x)	dev_get_drvdata((x)->dev)

 static int spear13xx_pcie_establish_link(struct spear13xx_pcie *spear13xx_pcie)
···
 	struct pcie_port *pp = &pci->pp;
 	struct pcie_app_reg *app_reg = spear13xx_pcie->app_base;
 	u32 val;
-	u32 exp_cap_off = EXP_CAP_ID_OFFSET;
+	u32 exp_cap_off = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);

 	if (dw_pcie_link_up(pci)) {
 		dev_err(pci->dev, "link already up\n");
···
 	 * default value in capability register is 512 bytes. So force
 	 * it to 128 here.
 	 */
-	dw_pcie_read(pci->dbi_base + exp_cap_off + PCI_EXP_DEVCTL, 2, &val);
+	val = dw_pcie_readw_dbi(pci, exp_cap_off + PCI_EXP_DEVCTL);
 	val &= ~PCI_EXP_DEVCTL_READRQ;
-	dw_pcie_write(pci->dbi_base + exp_cap_off + PCI_EXP_DEVCTL, 2, val);
+	dw_pcie_writew_dbi(pci, exp_cap_off + PCI_EXP_DEVCTL, val);

-	dw_pcie_write(pci->dbi_base + PCI_VENDOR_ID, 2, 0x104A);
-	dw_pcie_write(pci->dbi_base + PCI_DEVICE_ID, 2, 0xCD80);
-
-	/*
-	 * if is_gen1 is set then handle it, so that some buggy card
-	 * also works
-	 */
-	if (spear13xx_pcie->is_gen1) {
-		dw_pcie_read(pci->dbi_base + exp_cap_off + PCI_EXP_LNKCAP,
-			     4, &val);
-		if ((val & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
-			val &= ~((u32)PCI_EXP_LNKCAP_SLS);
-			val |= PCI_EXP_LNKCAP_SLS_2_5GB;
-			dw_pcie_write(pci->dbi_base + exp_cap_off +
-				      PCI_EXP_LNKCAP, 4, val);
-		}
-
-		dw_pcie_read(pci->dbi_base + exp_cap_off + PCI_EXP_LNKCTL2,
-			     2, &val);
-		if ((val & PCI_EXP_LNKCAP_SLS) != PCI_EXP_LNKCAP_SLS_2_5GB) {
-			val &= ~((u32)PCI_EXP_LNKCAP_SLS);
-			val |= PCI_EXP_LNKCAP_SLS_2_5GB;
-			dw_pcie_write(pci->dbi_base + exp_cap_off +
-				      PCI_EXP_LNKCTL2, 2, val);
-		}
-	}
+	dw_pcie_writew_dbi(pci, PCI_VENDOR_ID, 0x104A);
+	dw_pcie_writew_dbi(pci, PCI_DEVICE_ID, 0xCD80);

 	/* enable ltssm */
 	writel(DEVICE_TYPE_RC | (1 << MISCTRL_EN_ID)
···
 	spear13xx_pcie->app_base = pci->dbi_base + 0x2000;

 	if (of_property_read_bool(np, "st,pcie-is-gen1"))
-		spear13xx_pcie->is_gen1 = true;
+		pci->link_gen = 1;

 	platform_set_drvdata(pdev, spear13xx_pcie);
drivers/pci/controller/dwc/pcie-tegra194.c (+38 -102)
···
 #define EVENT_COUNTER_GROUP_SEL_SHIFT	24
 #define EVENT_COUNTER_GROUP_5		0x5

-#define PORT_LOGIC_ACK_F_ASPM_CTRL			0x70C
-#define ENTER_ASPM					BIT(30)
-#define L0S_ENTRANCE_LAT_SHIFT				24
-#define L0S_ENTRANCE_LAT_MASK				GENMASK(26, 24)
-#define L1_ENTRANCE_LAT_SHIFT				27
-#define L1_ENTRANCE_LAT_MASK				GENMASK(29, 27)
-#define N_FTS_SHIFT					8
-#define N_FTS_MASK					GENMASK(7, 0)
 #define N_FTS_VAL					52
-
-#define PORT_LOGIC_GEN2_CTRL				0x80C
-#define PORT_LOGIC_GEN2_CTRL_DIRECT_SPEED_CHANGE	BIT(17)
-#define FTS_MASK					GENMASK(7, 0)
 #define FTS_VAL						52

 #define PORT_LOGIC_MSI_CTRL_INT_0_EN		0x828
···
 	u8 init_link_width;
 	u32 msi_ctrl_int;
 	u32 num_lanes;
-	u32 max_speed;
 	u32 cid;
 	u32 cfg_link_cap_l1sub;
 	u32 pcie_cap_base;
···
 		val |= APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N;
 		appl_writel(pcie, val, APPL_CAR_RESET_OVRD);

-		val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
-		val |= PORT_LOGIC_GEN2_CTRL_DIRECT_SPEED_CHANGE;
-		dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);
+		val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
+		val |= PORT_LOGIC_SPEED_CHANGE;
+		dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val);
 	}
 }
···
 	return IRQ_HANDLED;
 }

-static int tegra_pcie_dw_rd_own_conf(struct pcie_port *pp, int where, int size,
-				     u32 *val)
+static int tegra_pcie_dw_rd_own_conf(struct pci_bus *bus, u32 devfn, int where,
+				     int size, u32 *val)
 {
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-
 	/*
 	 * This is an endpoint mode specific register happen to appear even
 	 * when controller is operating in root port mode and system hangs
 	 * when it is accessed with link being in ASPM-L1 state.
 	 * So skip accessing it altogether
 	 */
-	if (where == PORT_LOGIC_MSIX_DOORBELL) {
+	if (!PCI_SLOT(devfn) && where == PORT_LOGIC_MSIX_DOORBELL) {
 		*val = 0x00000000;
 		return PCIBIOS_SUCCESSFUL;
 	}

-	return dw_pcie_read(pci->dbi_base + where, size, val);
+	return pci_generic_config_read(bus, devfn, where, size, val);
 }

-static int tegra_pcie_dw_wr_own_conf(struct pcie_port *pp, int where, int size,
-				     u32 val)
+static int tegra_pcie_dw_wr_own_conf(struct pci_bus *bus, u32 devfn, int where,
+				     int size, u32 val)
 {
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-
 	/*
 	 * This is an endpoint mode specific register happen to appear even
 	 * when controller is operating in root port mode and system hangs
 	 * when it is accessed with link being in ASPM-L1 state.
 	 * So skip accessing it altogether
 	 */
-	if (where == PORT_LOGIC_MSIX_DOORBELL)
+	if (!PCI_SLOT(devfn) && where == PORT_LOGIC_MSIX_DOORBELL)
 		return PCIBIOS_SUCCESSFUL;

-	return dw_pcie_write(pci->dbi_base + where, size, val);
+	return pci_generic_config_write(bus, devfn, where, size, val);
 }
+
+static struct pci_ops tegra_pci_ops = {
+	.map_bus = dw_pcie_own_conf_map_bus,
+	.read = tegra_pcie_dw_rd_own_conf,
+	.write = tegra_pcie_dw_wr_own_conf,
+};

 #if defined(CONFIG_PCIEASPM)
 static void disable_aspm_l11(struct tegra_pcie_dw *pcie)
···
 	dw_pcie_writel_dbi(pci, pcie->cfg_link_cap_l1sub, val);

 	/* Program L0s and L1 entrance latencies */
-	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
-	val &= ~L0S_ENTRANCE_LAT_MASK;
-	val |= (pcie->aspm_l0s_enter_lat << L0S_ENTRANCE_LAT_SHIFT);
-	val |= ENTER_ASPM;
-	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
+	val = dw_pcie_readl_dbi(pci, PCIE_PORT_AFR);
+	val &= ~PORT_AFR_L0S_ENTRANCE_LAT_MASK;
+	val |= (pcie->aspm_l0s_enter_lat << PORT_AFR_L0S_ENTRANCE_LAT_SHIFT);
+	val |= PORT_AFR_ENTER_ASPM;
+	dw_pcie_writel_dbi(pci, PCIE_PORT_AFR, val);
 }

-static int init_debugfs(struct tegra_pcie_dw *pcie)
+static void init_debugfs(struct tegra_pcie_dw *pcie)
 {
-	struct dentry *d;
-
-	d = debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt",
-					pcie->debugfs, aspm_state_cnt);
-	if (IS_ERR_OR_NULL(d))
-		dev_err(pcie->dev,
-			"Failed to create debugfs file \"aspm_state_cnt\"\n");
-
-	return 0;
+	debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt", pcie->debugfs,
+				    aspm_state_cnt);
 }
 #else
 static inline void disable_aspm_l12(struct tegra_pcie_dw *pcie) { return; }
 static inline void disable_aspm_l11(struct tegra_pcie_dw *pcie) { return; }
 static inline void init_host_aspm(struct tegra_pcie_dw *pcie) { return; }
-static inline int init_debugfs(struct tegra_pcie_dw *pcie) { return 0; }
+static inline void init_debugfs(struct tegra_pcie_dw *pcie) { return; }
 #endif

 static void tegra_pcie_enable_system_interrupts(struct pcie_port *pp)
···

 	/* Program init preset */
 	for (i = 0; i < pcie->num_lanes; i++) {
-		dw_pcie_read(pci->dbi_base + CAP_SPCIE_CAP_OFF
-				 + (i * 2), 2, &val);
+		val = dw_pcie_readw_dbi(pci, CAP_SPCIE_CAP_OFF + (i * 2));
 		val &= ~CAP_SPCIE_CAP_OFF_DSP_TX_PRESET0_MASK;
 		val |= GEN3_GEN4_EQ_PRESET_INIT;
 		val &= ~CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK;
 		val |= (GEN3_GEN4_EQ_PRESET_INIT <<
			CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT);
-		dw_pcie_write(pci->dbi_base + CAP_SPCIE_CAP_OFF
-				 + (i * 2), 2, val);
+		dw_pcie_writew_dbi(pci, CAP_SPCIE_CAP_OFF + (i * 2), val);

 		offset = dw_pcie_find_ext_capability(pci,
						     PCI_EXT_CAP_ID_PL_16GT) +
			 PCI_PL_16GT_LE_CTRL;
-		dw_pcie_read(pci->dbi_base + offset + i, 1, &val);
+		val = dw_pcie_readb_dbi(pci, offset + i);
 		val &= ~PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK;
 		val |= GEN3_GEN4_EQ_PRESET_INIT;
 		val &= ~PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK;
 		val |= (GEN3_GEN4_EQ_PRESET_INIT <<
			PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT);
-		dw_pcie_write(pci->dbi_base + offset + i, 1, val);
+		dw_pcie_writeb_dbi(pci, offset + i, val);
 	}

 	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
···

 	dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0);

-	/* Configure FTS */
-	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
-	val &= ~(N_FTS_MASK << N_FTS_SHIFT);
-	val |= N_FTS_VAL << N_FTS_SHIFT;
-	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
-
-	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
-	val &= ~FTS_MASK;
-	val |= FTS_VAL;
-	dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);
-
 	/* Enable as 0xFFFF0001 response for CRS */
 	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT);
 	val &= ~(AMBA_ERROR_RESPONSE_CRS_MASK << AMBA_ERROR_RESPONSE_CRS_SHIFT);
 	val |= (AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 <<
		AMBA_ERROR_RESPONSE_CRS_SHIFT);
 	dw_pcie_writel_dbi(pci, PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT, val);
-
-	/* Configure Max Speed from DT */
-	if (pcie->max_speed && pcie->max_speed != -EINVAL) {
-		val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base +
-					PCI_EXP_LNKCAP);
-		val &= ~PCI_EXP_LNKCAP_SLS;
-		val |= pcie->max_speed;
-		dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP,
-				   val);
-	}

 	/* Configure Max lane width from DT */
 	val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP);
···
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
 	u32 val, tmp, offset, speed;
+
+	pp->bridge->ops = &tegra_pci_ops;

 	tegra_pcie_prepare_host(pp);
···
 };

 static struct dw_pcie_host_ops tegra_pcie_dw_host_ops = {
-	.rd_own_conf = tegra_pcie_dw_rd_own_conf,
-	.wr_own_conf = tegra_pcie_dw_wr_own_conf,
 	.host_init = tegra_pcie_dw_host_init,
 	.set_num_vectors = tegra_pcie_set_msi_vec_num,
 };
···
 		dev_err(pcie->dev, "Failed to read num-lanes: %d\n", ret);
 		return ret;
 	}
-
-	pcie->max_speed = of_pci_get_max_link_speed(np);

 	ret = of_property_read_u32_index(np, "nvidia,bpmp", 1, &pcie->cid);
 	if (ret) {
···
 	 * 5.2 Link State Power Management (Page #428).
 	 */

-	list_for_each_entry(child, &pp->root_bus->children, node) {
+	list_for_each_entry(child, &pp->bridge->bus->children, node) {
 		/* Bring downstream devices to D0 if they are not already in */
-		if (child->parent == pp->root_bus) {
+		if (child->parent == pp->bridge->bus) {
 			root_bus = child;
 			break;
 		}
···
 	}

 	pcie->debugfs = debugfs_create_dir(name, NULL);
-	if (!pcie->debugfs)
-		dev_err(dev, "Failed to create debugfs\n");
-	else
-		init_debugfs(pcie);
+	init_debugfs(pcie);

 	return ret;
···
 	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
 	val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
 	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
-
-	/* Configure N_FTS & FTS */
-	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL);
-	val &= ~(N_FTS_MASK << N_FTS_SHIFT);
-	val |= N_FTS_VAL << N_FTS_SHIFT;
-	dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val);
-
-	val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL);
-	val &= ~FTS_MASK;
-	val |= FTS_VAL;
-	dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val);
-
-	/* Configure Max Speed from DT */
-	if (pcie->max_speed && pcie->max_speed != -EINVAL) {
-		val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base +
-					PCI_EXP_LNKCAP);
-		val &= ~PCI_EXP_LNKCAP_SLS;
-		val |= pcie->max_speed;
-		dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP,
-				   val);
-	}

 	pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci,
						      PCI_CAP_ID_EXP);
···
 	pci = &pcie->pci;
 	pci->dev = &pdev->dev;
 	pci->ops = &tegra_dw_pcie_ops;
+	pci->n_fts[0] = N_FTS_VAL;
+	pci->n_fts[1] = FTS_VAL;
+
 	pp = &pci->pp;
 	pcie->dev = &pdev->dev;
 	pcie->mode = (enum dw_pcie_device_mode)data->mode;
drivers/pci/controller/dwc/pcie-uniphier.c (+1 -2)
···
 	if (ret)
 		return ret;

-	if (IS_ENABLED(CONFIG_PCI_MSI))
-		dw_pcie_msi_init(pp);
+	dw_pcie_msi_init(pp);

 	return 0;
 }
drivers/pci/controller/mobiveil/pcie-mobiveil-host.c (+1 -6)
···
 	struct device *dev = &pcie->pdev->dev;
 	struct device_node *node = dev->of_node;
 	struct mobiveil_root_port *rp = &pcie->rp;
-	int ret;

 	/* setup INTx */
 	rp->intx_domain = irq_domain_add_linear(node, PCI_NUM_INTX,
···
 	raw_spin_lock_init(&rp->intx_mask_lock);

 	/* setup MSI */
-	ret = mobiveil_allocate_msi_domains(pcie);
-	if (ret)
-		return ret;
-
-	return 0;
+	return mobiveil_allocate_msi_domains(pcie);
 }

 static int mobiveil_pcie_integrated_interrupt_init(struct mobiveil_pcie *pcie)
drivers/pci/controller/pci-aardvark.c (+69 -39)
···
  */

 #include <linux/delay.h>
-#include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
 #include <linux/interrupt.h>
 #include <linux/irq.h>
 #include <linux/irqdomain.h>
 #include <linux/kernel.h>
+#include <linux/module.h>
 #include <linux/pci.h>
 #include <linux/init.h>
 #include <linux/phy/phy.h>
···
 	}
 }

+static void advk_pcie_issue_perst(struct advk_pcie *pcie)
+{
+	u32 reg;
+
+	if (!pcie->reset_gpio)
+		return;
+
+	/* PERST does not work for some cards when link training is enabled */
+	reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
+	reg &= ~LINK_TRAINING_EN;
+	advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
+
+	/* 10ms delay is needed for some cards */
+	dev_info(&pcie->pdev->dev, "issuing PERST via reset GPIO for 10ms\n");
+	gpiod_set_value_cansleep(pcie->reset_gpio, 1);
+	usleep_range(10000, 11000);
+	gpiod_set_value_cansleep(pcie->reset_gpio, 0);
+}
+
 static int advk_pcie_train_at_gen(struct advk_pcie *pcie, int gen)
 {
 	int ret, neg_gen;
···
 	int neg_gen = -1, gen;

 	/*
+	 * Reset PCIe card via PERST# signal. Some cards are not detected
+	 * during link training when they are in some non-initial state.
+	 */
+	advk_pcie_issue_perst(pcie);
+
+	/*
+	 * PERST# signal could have been asserted by pinctrl subsystem before
+	 * probe() callback has been called or issued explicitly by reset gpio
+	 * function advk_pcie_issue_perst(), making the endpoint going into
+	 * fundamental reset. As required by PCI Express spec a delay for at
+	 * least 100ms after such a reset before link training is needed.
+	 */
+	msleep(PCI_PM_D3COLD_WAIT);
+
+	/*
 	 * Try link training at link gen specified by device tree property
 	 * 'max-link-speed'. If this fails, iteratively train at lower gen.
	 */
···
 		dev_err(dev, "link never came up\n");
 }

-static void advk_pcie_issue_perst(struct advk_pcie *pcie)
-{
-	u32 reg;
-
-	if (!pcie->reset_gpio)
-		return;
-
-	/* PERST does not work for some cards when link training is enabled */
-	reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG);
-	reg &= ~LINK_TRAINING_EN;
-	advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG);
-
-	/* 10ms delay is needed for some cards */
-	dev_info(&pcie->pdev->dev, "issuing PERST via reset GPIO for 10ms\n");
-	gpiod_set_value_cansleep(pcie->reset_gpio, 1);
-	usleep_range(10000, 11000);
-	gpiod_set_value_cansleep(pcie->reset_gpio, 0);
-}
-
 static void advk_pcie_setup_hw(struct advk_pcie *pcie)
 {
 	u32 reg;
-
-	advk_pcie_issue_perst(pcie);

 	/* Enable TX */
 	reg = advk_readl(pcie, PCIE_CORE_REF_CLK_REG);
···
 	reg = advk_readl(pcie, PIO_CTRL);
 	reg |= PIO_CTRL_ADDR_WIN_DISABLE;
 	advk_writel(pcie, reg, PIO_CTRL);
-
-	/*
-	 * PERST# signal could have been asserted by pinctrl subsystem before
-	 * probe() callback has been called or issued explicitly by reset gpio
-	 * function advk_pcie_issue_perst(), making the endpoint going into
-	 * fundamental reset. As required by PCI Express spec a delay for at
-	 * least 100ms after such a reset before link training is needed.
-	 */
-	msleep(PCI_PM_D3COLD_WAIT);

 	advk_pcie_train_link(pcie);
···
  * Initialize the configuration space of the PCI-to-PCI bridge
  * associated with the given PCIe interface.
  */
-static void advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+static int advk_sw_pci_bridge_init(struct advk_pcie *pcie)
 {
 	struct pci_bridge_emul *bridge = &pcie->bridge;
···
 	bridge->data = pcie;
 	bridge->ops = &advk_pci_bridge_emul_ops;

-	pci_bridge_emul_init(bridge, 0);
-
+	return pci_bridge_emul_init(bridge, 0);
 }

 static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus,
···
 	}

 	ret = phy_power_on(pcie->phy);
-	if (ret) {
+	if (ret == -EOPNOTSUPP) {
+		dev_warn(&pcie->pdev->dev, "PHY unsupported by firmware\n");
+	} else if (ret) {
 		phy_exit(pcie->phy);
 		return ret;
 	}
···

 	pcie = pci_host_bridge_priv(bridge);
 	pcie->pdev = pdev;
+	platform_set_drvdata(pdev, pcie);

 	pcie->base = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(pcie->base))
···

 	advk_pcie_setup_hw(pcie);

-	advk_sw_pci_bridge_init(pcie);
+	ret = advk_sw_pci_bridge_init(pcie);
+	if (ret) {
+		dev_err(dev, "Failed to register emulated root PCI bridge\n");
+		return ret;
+	}

 	ret = advk_pcie_init_irq_domain(pcie);
 	if (ret) {
···
 	return 0;
 }

+static int advk_pcie_remove(struct platform_device *pdev)
+{
+	struct advk_pcie *pcie = platform_get_drvdata(pdev);
+	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
+
+	pci_lock_rescan_remove();
+	pci_stop_root_bus(bridge->bus);
+	pci_remove_root_bus(bridge->bus);
+	pci_unlock_rescan_remove();
+
+	advk_pcie_remove_msi_irq_domain(pcie);
+	advk_pcie_remove_irq_domain(pcie);
+
+	return 0;
+}
+
 static const struct of_device_id advk_pcie_of_match_table[] = {
 	{ .compatible = "marvell,armada-3700-pcie", },
 	{},
 };
+MODULE_DEVICE_TABLE(of, advk_pcie_of_match_table);

 static struct platform_driver advk_pcie_driver = {
 	.driver = {
 		.name = "advk-pcie",
 		.of_match_table = advk_pcie_of_match_table,
-		/* Driver unloading/unbinding currently not supported */
-		.suppress_bind_attrs = true,
 	},
 	.probe = advk_pcie_probe,
+	.remove = advk_pcie_remove,
 };
-builtin_platform_driver(advk_pcie_driver);
+module_platform_driver(advk_pcie_driver);
+
+MODULE_DESCRIPTION("Aardvark PCIe controller");
+MODULE_LICENSE("GPL v2");
drivers/pci/controller/pci-hyperv.c (+47 -3)
···
 exit_unlock:
 	spin_unlock_irqrestore(&hbus->retarget_msi_interrupt_lock, flags);

-	if (res) {
+	/*
+	 * During hibernation, when a CPU is offlined, the kernel tries
+	 * to move the interrupt to the remaining CPUs that haven't
+	 * been offlined yet. In this case, the below hv_do_hypercall()
+	 * always fails since the vmbus channel has been closed:
+	 * refer to cpu_disable_common() -> fixup_irqs() ->
+	 * irq_migrate_all_off_this_cpu() -> migrate_one_irq().
+	 *
+	 * Suppress the error message for hibernation because the failure
+	 * during hibernation does not matter (at this time all the devices
+	 * have been frozen). Note: the correct affinity info is still updated
+	 * into the irqdata data structure in migrate_one_irq() ->
+	 * irq_do_set_affinity() -> hv_set_affinity(), so later when the VM
+	 * resumes, hv_pci_restore_msi_state() is able to correctly restore
+	 * the interrupt with the correct affinity.
+	 */
+	if (res && hbus->state != hv_pcibus_removing)
 		dev_err(&hbus->hdev->device,
 			"%s() failed: %#llx", __func__, res);
-		return;
-	}

 	pci_msi_unmask_irq(data);
 }
···
 	return 0;
 }

+static int hv_pci_restore_msi_msg(struct pci_dev *pdev, void *arg)
+{
+	struct msi_desc *entry;
+	struct irq_data *irq_data;
+
+	for_each_pci_msi_entry(entry, pdev) {
+		irq_data = irq_get_irq_data(entry->irq);
+		if (WARN_ON_ONCE(!irq_data))
+			return -EINVAL;
+
+		hv_compose_msi_msg(irq_data, &entry->msg);
+	}
+
+	return 0;
+}
+
+/*
+ * Upon resume, pci_restore_msi_state() -> ... -> __pci_write_msi_msg()
+ * directly writes the MSI/MSI-X registers via MMIO, but since Hyper-V
+ * doesn't trap and emulate the MMIO accesses, here hv_compose_msi_msg()
+ * must be used to ask Hyper-V to re-create the IOMMU Interrupt Remapping
+ * Table entries.
+ */
+static void hv_pci_restore_msi_state(struct hv_pcibus_device *hbus)
+{
+	pci_walk_bus(hbus->pci_bus, hv_pci_restore_msi_msg, NULL);
+}
+
 static int hv_pci_resume(struct hv_device *hdev)
 {
 	struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);
···
 		goto out;

 	prepopulate_bars(hbus);
+
+	hv_pci_restore_msi_state(hbus);

 	hbus->state = hv_pcibus_installed;
 	return 0;
+1 -6
drivers/pci/controller/pci-loongson.c
··· 183 183 struct device_node *node = dev->of_node; 184 184 struct pci_host_bridge *bridge; 185 185 struct resource *regs; 186 - int err; 187 186 188 187 if (!node) 189 188 return -ENODEV; ··· 221 222 bridge->ops = &loongson_pci_ops; 222 223 bridge->map_irq = loongson_map_irq; 223 224 224 - err = pci_host_probe(bridge); 225 - if (err) 226 - return err; 227 - 228 - return 0; 225 + return pci_host_probe(bridge); 229 226 } 230 227 231 228 static struct platform_driver loongson_pci_driver = {
-3
drivers/pci/controller/pci-mvebu.c
··· 12 12 #include <linux/gpio.h> 13 13 #include <linux/init.h> 14 14 #include <linux/mbus.h> 15 - #include <linux/msi.h> 16 15 #include <linux/slab.h> 17 16 #include <linux/platform_device.h> 18 17 #include <linux/of_address.h> ··· 69 70 struct mvebu_pcie { 70 71 struct platform_device *pdev; 71 72 struct mvebu_pcie_port *ports; 72 - struct msi_controller *msi; 73 73 struct resource io; 74 74 struct resource realio; 75 75 struct resource mem; ··· 1125 1127 bridge->sysdata = pcie; 1126 1128 bridge->ops = &mvebu_pcie_ops; 1127 1129 bridge->align_resource = mvebu_pcie_align_resource; 1128 - bridge->msi = pcie->msi; 1129 1130 1130 1131 return mvebu_pci_host_probe(bridge); 1131 1132 }
+7 -44
drivers/pci/controller/pci-tegra.c
··· 2564 2564 return 0; 2565 2565 } 2566 2566 2567 - static const struct seq_operations tegra_pcie_ports_seq_ops = { 2567 + static const struct seq_operations tegra_pcie_ports_sops = { 2568 2568 .start = tegra_pcie_ports_seq_start, 2569 2569 .next = tegra_pcie_ports_seq_next, 2570 2570 .stop = tegra_pcie_ports_seq_stop, 2571 2571 .show = tegra_pcie_ports_seq_show, 2572 2572 }; 2573 2573 2574 - static int tegra_pcie_ports_open(struct inode *inode, struct file *file) 2575 - { 2576 - struct tegra_pcie *pcie = inode->i_private; 2577 - struct seq_file *s; 2578 - int err; 2579 - 2580 - err = seq_open(file, &tegra_pcie_ports_seq_ops); 2581 - if (err) 2582 - return err; 2583 - 2584 - s = file->private_data; 2585 - s->private = pcie; 2586 - 2587 - return 0; 2588 - } 2589 - 2590 - static const struct file_operations tegra_pcie_ports_ops = { 2591 - .owner = THIS_MODULE, 2592 - .open = tegra_pcie_ports_open, 2593 - .read = seq_read, 2594 - .llseek = seq_lseek, 2595 - .release = seq_release, 2596 - }; 2574 + DEFINE_SEQ_ATTRIBUTE(tegra_pcie_ports); 2597 2575 2598 2576 static void tegra_pcie_debugfs_exit(struct tegra_pcie *pcie) 2599 2577 { ··· 2579 2601 pcie->debugfs = NULL; 2580 2602 } 2581 2603 2582 - static int tegra_pcie_debugfs_init(struct tegra_pcie *pcie) 2604 + static void tegra_pcie_debugfs_init(struct tegra_pcie *pcie) 2583 2605 { 2584 - struct dentry *file; 2585 - 2586 2606 pcie->debugfs = debugfs_create_dir("pcie", NULL); 2587 - if (!pcie->debugfs) 2588 - return -ENOMEM; 2589 2607 2590 - file = debugfs_create_file("ports", S_IFREG | S_IRUGO, pcie->debugfs, 2591 - pcie, &tegra_pcie_ports_ops); 2592 - if (!file) 2593 - goto remove; 2594 - 2595 - return 0; 2596 - 2597 - remove: 2598 - tegra_pcie_debugfs_exit(pcie); 2599 - return -ENOMEM; 2608 + debugfs_create_file("ports", S_IFREG | S_IRUGO, pcie->debugfs, pcie, 2609 + &tegra_pcie_ports_fops); 2600 2610 } 2601 2611 2602 2612 static int tegra_pcie_probe(struct platform_device *pdev) ··· 2638 2672 goto pm_runtime_put; 
2639 2673 } 2640 2674 2641 - if (IS_ENABLED(CONFIG_DEBUG_FS)) { 2642 - err = tegra_pcie_debugfs_init(pcie); 2643 - if (err < 0) 2644 - dev_err(dev, "failed to setup debugfs: %d\n", err); 2645 - } 2675 + if (IS_ENABLED(CONFIG_DEBUG_FS)) 2676 + tegra_pcie_debugfs_init(pcie); 2646 2677 2647 2678 return 0; 2648 2679
-1
drivers/pci/controller/pci-v3-semi.c
··· 658 658 default: 659 659 dev_err(v3->dev, "illegal dma memory chunk size\n"); 660 660 return -EINVAL; 661 - break; 662 661 } 663 662 val |= V3_PCI_MAP_M_REG_EN | V3_PCI_MAP_M_ENABLE; 664 663 *pci_map = val;
+2 -2
drivers/pci/controller/pci-xgene-msi.c
··· 493 493 */ 494 494 for (irq_index = 0; irq_index < NR_HW_IRQS; irq_index++) { 495 495 for (msi_idx = 0; msi_idx < IDX_PER_GROUP; msi_idx++) 496 - msi_val = xgene_msi_ir_read(xgene_msi, irq_index, 497 - msi_idx); 496 + xgene_msi_ir_read(xgene_msi, irq_index, msi_idx); 497 + 498 498 /* Read MSIINTn to confirm */ 499 499 msi_val = xgene_msi_int_read(xgene_msi, irq_index); 500 500 if (msi_val) {
+382 -64
drivers/pci/controller/pcie-brcmstb.c
··· 23 23 #include <linux/of_platform.h> 24 24 #include <linux/pci.h> 25 25 #include <linux/printk.h> 26 + #include <linux/reset.h> 26 27 #include <linux/sizes.h> 27 28 #include <linux/slab.h> 28 29 #include <linux/string.h> ··· 53 52 #define PCIE_MISC_MISC_CTRL_SCB_ACCESS_EN_MASK 0x1000 54 53 #define PCIE_MISC_MISC_CTRL_CFG_READ_UR_MODE_MASK 0x2000 55 54 #define PCIE_MISC_MISC_CTRL_MAX_BURST_SIZE_MASK 0x300000 56 - #define PCIE_MISC_MISC_CTRL_MAX_BURST_SIZE_128 0x0 55 + 57 56 #define PCIE_MISC_MISC_CTRL_SCB0_SIZE_MASK 0xf8000000 57 + #define PCIE_MISC_MISC_CTRL_SCB1_SIZE_MASK 0x07c00000 58 + #define PCIE_MISC_MISC_CTRL_SCB2_SIZE_MASK 0x0000001f 59 + #define SCB_SIZE_MASK(x) PCIE_MISC_MISC_CTRL_SCB ## x ## _SIZE_MASK 58 60 59 61 #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO 0x400c 60 62 #define PCIE_MEM_WIN0_LO(win) \ ··· 81 77 #define PCIE_MISC_MSI_BAR_CONFIG_HI 0x4048 82 78 83 79 #define PCIE_MISC_MSI_DATA_CONFIG 0x404c 84 - #define PCIE_MISC_MSI_DATA_CONFIG_VAL 0xffe06540 80 + #define PCIE_MISC_MSI_DATA_CONFIG_VAL_32 0xffe06540 81 + #define PCIE_MISC_MSI_DATA_CONFIG_VAL_8 0xfff86540 85 82 86 83 #define PCIE_MISC_PCIE_CTRL 0x4064 87 84 #define PCIE_MISC_PCIE_CTRL_PCIE_L23_REQUEST_MASK 0x1 85 + #define PCIE_MISC_PCIE_CTRL_PCIE_PERSTB_MASK 0x4 88 86 89 87 #define PCIE_MISC_PCIE_STATUS 0x4068 90 88 #define PCIE_MISC_PCIE_STATUS_PCIE_PORT_MASK 0x80 91 89 #define PCIE_MISC_PCIE_STATUS_PCIE_DL_ACTIVE_MASK 0x20 92 90 #define PCIE_MISC_PCIE_STATUS_PCIE_PHYLINKUP_MASK 0x10 93 91 #define PCIE_MISC_PCIE_STATUS_PCIE_LINK_IN_L23_MASK 0x40 92 + 93 + #define PCIE_MISC_REVISION 0x406c 94 + #define BRCM_PCIE_HW_REV_33 0x0303 94 95 95 96 #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_BASE_LIMIT 0x4070 96 97 #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_BASE_LIMIT_LIMIT_MASK 0xfff00000 ··· 117 108 #define PCIE_MISC_HARD_PCIE_HARD_DEBUG_CLKREQ_DEBUG_ENABLE_MASK 0x2 118 109 #define PCIE_MISC_HARD_PCIE_HARD_DEBUG_SERDES_IDDQ_MASK 0x08000000 119 110 120 - #define PCIE_MSI_INTR2_STATUS 0x4500 121 - #define 
PCIE_MSI_INTR2_CLR 0x4508 122 - #define PCIE_MSI_INTR2_MASK_SET 0x4510 123 - #define PCIE_MSI_INTR2_MASK_CLR 0x4514 111 + 112 + #define PCIE_INTR2_CPU_BASE 0x4300 113 + #define PCIE_MSI_INTR2_BASE 0x4500 114 + /* Offsets from PCIE_INTR2_CPU_BASE and PCIE_MSI_INTR2_BASE */ 115 + #define MSI_INT_STATUS 0x0 116 + #define MSI_INT_CLR 0x8 117 + #define MSI_INT_MASK_SET 0x10 118 + #define MSI_INT_MASK_CLR 0x14 124 119 125 120 #define PCIE_EXT_CFG_DATA 0x8000 126 121 ··· 133 120 #define PCIE_EXT_SLOT_SHIFT 15 134 121 #define PCIE_EXT_FUNC_SHIFT 12 135 122 136 - #define PCIE_RGR1_SW_INIT_1 0x9210 137 123 #define PCIE_RGR1_SW_INIT_1_PERST_MASK 0x1 138 - #define PCIE_RGR1_SW_INIT_1_INIT_MASK 0x2 124 + #define PCIE_RGR1_SW_INIT_1_PERST_SHIFT 0x0 125 + 126 + #define RGR1_SW_INIT_1_INIT_GENERIC_MASK 0x2 127 + #define RGR1_SW_INIT_1_INIT_GENERIC_SHIFT 0x1 128 + #define RGR1_SW_INIT_1_INIT_7278_MASK 0x1 129 + #define RGR1_SW_INIT_1_INIT_7278_SHIFT 0x0 139 130 140 131 /* PCIe parameters */ 141 132 #define BRCM_NUM_PCIE_OUT_WINS 0x4 142 133 #define BRCM_INT_PCI_MSI_NR 32 134 + #define BRCM_INT_PCI_MSI_LEGACY_NR 8 135 + #define BRCM_INT_PCI_MSI_SHIFT 0 143 136 144 137 /* MSI target adresses */ 145 138 #define BRCM_MSI_TARGET_ADDR_LT_4GB 0x0fffffffcULL ··· 170 151 #define SSC_STATUS_OFFSET 0x1 171 152 #define SSC_STATUS_SSC_MASK 0x400 172 153 #define SSC_STATUS_PLL_LOCK_MASK 0x800 154 + #define PCIE_BRCM_MAX_MEMC 3 155 + 156 + #define IDX_ADDR(pcie) (pcie->reg_offsets[EXT_CFG_INDEX]) 157 + #define DATA_ADDR(pcie) (pcie->reg_offsets[EXT_CFG_DATA]) 158 + #define PCIE_RGR1_SW_INIT_1(pcie) (pcie->reg_offsets[RGR1_SW_INIT_1]) 159 + 160 + /* Rescal registers */ 161 + #define PCIE_DVT_PMU_PCIE_PHY_CTRL 0xc700 162 + #define PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_NFLDS 0x3 163 + #define PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_DIG_RESET_MASK 0x4 164 + #define PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_DIG_RESET_SHIFT 0x2 165 + #define PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_RESET_MASK 0x2 166 + #define 
PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_RESET_SHIFT 0x1 167 + #define PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_PWRDN_MASK 0x1 168 + #define PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_PWRDN_SHIFT 0x0 169 + 170 + /* Forward declarations */ 171 + struct brcm_pcie; 172 + static inline void brcm_pcie_bridge_sw_init_set_7278(struct brcm_pcie *pcie, u32 val); 173 + static inline void brcm_pcie_bridge_sw_init_set_generic(struct brcm_pcie *pcie, u32 val); 174 + static inline void brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val); 175 + static inline void brcm_pcie_perst_set_generic(struct brcm_pcie *pcie, u32 val); 176 + 177 + enum { 178 + RGR1_SW_INIT_1, 179 + EXT_CFG_INDEX, 180 + EXT_CFG_DATA, 181 + }; 182 + 183 + enum { 184 + RGR1_SW_INIT_1_INIT_MASK, 185 + RGR1_SW_INIT_1_INIT_SHIFT, 186 + }; 187 + 188 + enum pcie_type { 189 + GENERIC, 190 + BCM7278, 191 + BCM2711, 192 + }; 193 + 194 + struct pcie_cfg_data { 195 + const int *offsets; 196 + const enum pcie_type type; 197 + void (*perst_set)(struct brcm_pcie *pcie, u32 val); 198 + void (*bridge_sw_init_set)(struct brcm_pcie *pcie, u32 val); 199 + }; 200 + 201 + static const int pcie_offsets[] = { 202 + [RGR1_SW_INIT_1] = 0x9210, 203 + [EXT_CFG_INDEX] = 0x9000, 204 + [EXT_CFG_DATA] = 0x9004, 205 + }; 206 + 207 + static const struct pcie_cfg_data generic_cfg = { 208 + .offsets = pcie_offsets, 209 + .type = GENERIC, 210 + .perst_set = brcm_pcie_perst_set_generic, 211 + .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic, 212 + }; 213 + 214 + static const int pcie_offset_bcm7278[] = { 215 + [RGR1_SW_INIT_1] = 0xc010, 216 + [EXT_CFG_INDEX] = 0x9000, 217 + [EXT_CFG_DATA] = 0x9004, 218 + }; 219 + 220 + static const struct pcie_cfg_data bcm7278_cfg = { 221 + .offsets = pcie_offset_bcm7278, 222 + .type = BCM7278, 223 + .perst_set = brcm_pcie_perst_set_7278, 224 + .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_7278, 225 + }; 226 + 227 + static const struct pcie_cfg_data bcm2711_cfg = { 228 + .offsets = pcie_offsets, 229 + .type = BCM2711, 
230 + .perst_set = brcm_pcie_perst_set_generic, 231 + .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic, 232 + }; 173 233 174 234 struct brcm_msi { 175 235 struct device *dev; ··· 261 163 int irq; 262 164 /* used indicates which MSI interrupts have been alloc'd */ 263 165 unsigned long used; 166 + bool legacy; 167 + /* Some chips have MSIs in bits [31..24] of a shared register. */ 168 + int legacy_shift; 169 + int nr; /* No. of MSI available, depends on chip */ 170 + /* This is the base pointer for interrupt status/set/clr regs */ 171 + void __iomem *intr_base; 264 172 }; 265 173 266 174 /* Internal PCIe Host Controller Information.*/ ··· 279 175 int gen; 280 176 u64 msi_target_addr; 281 177 struct brcm_msi *msi; 178 + const int *reg_offsets; 179 + enum pcie_type type; 180 + struct reset_control *rescal; 181 + int num_memc; 182 + u64 memc_size[PCIE_BRCM_MAX_MEMC]; 183 + u32 hw_rev; 184 + void (*perst_set)(struct brcm_pcie *pcie, u32 val); 185 + void (*bridge_sw_init_set)(struct brcm_pcie *pcie, u32 val); 282 186 }; 283 187 284 188 /* ··· 477 365 msi = irq_desc_get_handler_data(desc); 478 366 dev = msi->dev; 479 367 480 - status = readl(msi->base + PCIE_MSI_INTR2_STATUS); 481 - for_each_set_bit(bit, &status, BRCM_INT_PCI_MSI_NR) { 368 + status = readl(msi->intr_base + MSI_INT_STATUS); 369 + status >>= msi->legacy_shift; 370 + 371 + for_each_set_bit(bit, &status, msi->nr) { 482 372 virq = irq_find_mapping(msi->inner_domain, bit); 483 373 if (virq) 484 374 generic_handle_irq(virq); ··· 497 383 498 384 msg->address_lo = lower_32_bits(msi->target_addr); 499 385 msg->address_hi = upper_32_bits(msi->target_addr); 500 - msg->data = (0xffff & PCIE_MISC_MSI_DATA_CONFIG_VAL) | data->hwirq; 386 + msg->data = (0xffff & PCIE_MISC_MSI_DATA_CONFIG_VAL_32) | data->hwirq; 501 387 } 502 388 503 389 static int brcm_msi_set_affinity(struct irq_data *irq_data, ··· 509 395 static void brcm_msi_ack_irq(struct irq_data *data) 510 396 { 511 397 struct brcm_msi *msi = 
irq_data_get_irq_chip_data(data); 398 + const int shift_amt = data->hwirq + msi->legacy_shift; 512 399 513 - writel(1 << data->hwirq, msi->base + PCIE_MSI_INTR2_CLR); 400 + writel(1 << shift_amt, msi->intr_base + MSI_INT_CLR); 514 401 } 515 402 516 403 ··· 527 412 int hwirq; 528 413 529 414 mutex_lock(&msi->lock); 530 - hwirq = bitmap_find_free_region(&msi->used, BRCM_INT_PCI_MSI_NR, 0); 415 + hwirq = bitmap_find_free_region(&msi->used, msi->nr, 0); 531 416 mutex_unlock(&msi->lock); 532 417 533 418 return hwirq; ··· 576 461 struct fwnode_handle *fwnode = of_node_to_fwnode(msi->np); 577 462 struct device *dev = msi->dev; 578 463 579 - msi->inner_domain = irq_domain_add_linear(NULL, BRCM_INT_PCI_MSI_NR, 580 - &msi_domain_ops, msi); 464 + msi->inner_domain = irq_domain_add_linear(NULL, msi->nr, &msi_domain_ops, msi); 581 465 if (!msi->inner_domain) { 582 466 dev_err(dev, "failed to create IRQ domain\n"); 583 467 return -ENOMEM; ··· 613 499 614 500 static void brcm_msi_set_regs(struct brcm_msi *msi) 615 501 { 616 - writel(0xffffffff, msi->base + PCIE_MSI_INTR2_MASK_CLR); 502 + u32 val = __GENMASK(31, msi->legacy_shift); 503 + 504 + writel(val, msi->intr_base + MSI_INT_MASK_CLR); 505 + writel(val, msi->intr_base + MSI_INT_CLR); 617 506 618 507 /* 619 508 * The 0 bit of PCIE_MISC_MSI_BAR_CONFIG_LO is repurposed to MSI ··· 627 510 writel(upper_32_bits(msi->target_addr), 628 511 msi->base + PCIE_MISC_MSI_BAR_CONFIG_HI); 629 512 630 - writel(PCIE_MISC_MSI_DATA_CONFIG_VAL, 631 - msi->base + PCIE_MISC_MSI_DATA_CONFIG); 513 + val = msi->legacy ? 
PCIE_MISC_MSI_DATA_CONFIG_VAL_8 : PCIE_MISC_MSI_DATA_CONFIG_VAL_32; 514 + writel(val, msi->base + PCIE_MISC_MSI_DATA_CONFIG); 632 515 } 633 516 634 517 static int brcm_pcie_enable_msi(struct brcm_pcie *pcie) ··· 653 536 msi->np = pcie->np; 654 537 msi->target_addr = pcie->msi_target_addr; 655 538 msi->irq = irq; 539 + msi->legacy = pcie->hw_rev < BRCM_PCIE_HW_REV_33; 540 + 541 + if (msi->legacy) { 542 + msi->intr_base = msi->base + PCIE_INTR2_CPU_BASE; 543 + msi->nr = BRCM_INT_PCI_MSI_LEGACY_NR; 544 + msi->legacy_shift = 24; 545 + } else { 546 + msi->intr_base = msi->base + PCIE_MSI_INTR2_BASE; 547 + msi->nr = BRCM_INT_PCI_MSI_NR; 548 + msi->legacy_shift = 0; 549 + } 656 550 657 551 ret = brcm_allocate_domains(msi); 658 552 if (ret) ··· 727 599 .write = pci_generic_config_write, 728 600 }; 729 601 730 - static inline void brcm_pcie_bridge_sw_init_set(struct brcm_pcie *pcie, u32 val) 602 + static inline void brcm_pcie_bridge_sw_init_set_generic(struct brcm_pcie *pcie, u32 val) 731 603 { 732 - u32 tmp; 604 + u32 tmp, mask = RGR1_SW_INIT_1_INIT_GENERIC_MASK; 605 + u32 shift = RGR1_SW_INIT_1_INIT_GENERIC_SHIFT; 733 606 734 - tmp = readl(pcie->base + PCIE_RGR1_SW_INIT_1); 735 - u32p_replace_bits(&tmp, val, PCIE_RGR1_SW_INIT_1_INIT_MASK); 736 - writel(tmp, pcie->base + PCIE_RGR1_SW_INIT_1); 607 + tmp = readl(pcie->base + PCIE_RGR1_SW_INIT_1(pcie)); 608 + tmp = (tmp & ~mask) | ((val << shift) & mask); 609 + writel(tmp, pcie->base + PCIE_RGR1_SW_INIT_1(pcie)); 737 610 } 738 611 739 - static inline void brcm_pcie_perst_set(struct brcm_pcie *pcie, u32 val) 612 + static inline void brcm_pcie_bridge_sw_init_set_7278(struct brcm_pcie *pcie, u32 val) 613 + { 614 + u32 tmp, mask = RGR1_SW_INIT_1_INIT_7278_MASK; 615 + u32 shift = RGR1_SW_INIT_1_INIT_7278_SHIFT; 616 + 617 + tmp = readl(pcie->base + PCIE_RGR1_SW_INIT_1(pcie)); 618 + tmp = (tmp & ~mask) | ((val << shift) & mask); 619 + writel(tmp, pcie->base + PCIE_RGR1_SW_INIT_1(pcie)); 620 + } 621 + 622 + static inline void 
brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val) 740 623 { 741 624 u32 tmp; 742 625 743 - tmp = readl(pcie->base + PCIE_RGR1_SW_INIT_1); 626 + /* Perst bit has moved and assert value is 0 */ 627 + tmp = readl(pcie->base + PCIE_MISC_PCIE_CTRL); 628 + u32p_replace_bits(&tmp, !val, PCIE_MISC_PCIE_CTRL_PCIE_PERSTB_MASK); 629 + writel(tmp, pcie->base + PCIE_MISC_PCIE_CTRL); 630 + } 631 + 632 + static inline void brcm_pcie_perst_set_generic(struct brcm_pcie *pcie, u32 val) 633 + { 634 + u32 tmp; 635 + 636 + tmp = readl(pcie->base + PCIE_RGR1_SW_INIT_1(pcie)); 744 637 u32p_replace_bits(&tmp, val, PCIE_RGR1_SW_INIT_1_PERST_MASK); 745 - writel(tmp, pcie->base + PCIE_RGR1_SW_INIT_1); 638 + writel(tmp, pcie->base + PCIE_RGR1_SW_INIT_1(pcie)); 746 639 } 747 640 748 641 static inline int brcm_pcie_get_rc_bar2_size_and_offset(struct brcm_pcie *pcie, ··· 771 622 u64 *rc_bar2_offset) 772 623 { 773 624 struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); 774 - struct device *dev = pcie->dev; 775 625 struct resource_entry *entry; 626 + struct device *dev = pcie->dev; 627 + u64 lowest_pcie_addr = ~(u64)0; 628 + int ret, i = 0; 629 + u64 size = 0; 776 630 777 - entry = resource_list_first_type(&bridge->dma_ranges, IORESOURCE_MEM); 778 - if (!entry) 779 - return -ENODEV; 631 + resource_list_for_each_entry(entry, &bridge->dma_ranges) { 632 + u64 pcie_beg = entry->res->start - entry->offset; 780 633 634 + size += entry->res->end - entry->res->start + 1; 635 + if (pcie_beg < lowest_pcie_addr) 636 + lowest_pcie_addr = pcie_beg; 637 + } 781 638 782 - /* 783 - * The controller expects the inbound window offset to be calculated as 784 - * the difference between PCIe's address space and CPU's. The offset 785 - * provided by the firmware is calculated the opposite way, so we 786 - * negate it. 
787 - */ 788 - *rc_bar2_offset = -entry->offset; 789 - *rc_bar2_size = 1ULL << fls64(entry->res->end - entry->res->start); 639 + if (lowest_pcie_addr == ~(u64)0) { 640 + dev_err(dev, "DT node has no dma-ranges\n"); 641 + return -EINVAL; 642 + } 643 + 644 + ret = of_property_read_variable_u64_array(pcie->np, "brcm,scb-sizes", pcie->memc_size, 1, 645 + PCIE_BRCM_MAX_MEMC); 646 + 647 + if (ret <= 0) { 648 + /* Make an educated guess */ 649 + pcie->num_memc = 1; 650 + pcie->memc_size[0] = 1ULL << fls64(size - 1); 651 + } else { 652 + pcie->num_memc = ret; 653 + } 654 + 655 + /* Each memc is viewed through a "port" that is a power of 2 */ 656 + for (i = 0, size = 0; i < pcie->num_memc; i++) 657 + size += pcie->memc_size[i]; 658 + 659 + /* System memory starts at this address in PCIe-space */ 660 + *rc_bar2_offset = lowest_pcie_addr; 661 + /* The sum of all memc views must also be a power of 2 */ 662 + *rc_bar2_size = 1ULL << fls64(size - 1); 790 663 791 664 /* 792 665 * We validate the inbound memory view even though we should trust ··· 860 689 void __iomem *base = pcie->base; 861 690 struct device *dev = pcie->dev; 862 691 struct resource_entry *entry; 863 - unsigned int scb_size_val; 864 692 bool ssc_good = false; 865 693 struct resource *res; 866 694 int num_out_wins = 0; 867 695 u16 nlw, cls, lnksta; 868 - int i, ret; 869 - u32 tmp, aspm_support; 696 + int i, ret, memc; 697 + u32 tmp, burst, aspm_support; 870 698 871 699 /* Reset the bridge */ 872 - brcm_pcie_bridge_sw_init_set(pcie, 1); 873 - brcm_pcie_perst_set(pcie, 1); 874 - 700 + pcie->bridge_sw_init_set(pcie, 1); 875 701 usleep_range(100, 200); 876 702 877 703 /* Take the bridge out of reset */ 878 - brcm_pcie_bridge_sw_init_set(pcie, 0); 704 + pcie->bridge_sw_init_set(pcie, 0); 879 705 880 706 tmp = readl(base + PCIE_MISC_HARD_PCIE_HARD_DEBUG); 881 707 tmp &= ~PCIE_MISC_HARD_PCIE_HARD_DEBUG_SERDES_IDDQ_MASK; ··· 880 712 /* Wait for SerDes to be stable */ 881 713 usleep_range(100, 200); 882 714 715 + /* 716 + 
* SCB_MAX_BURST_SIZE is a two bit field. For GENERIC chips it 717 + * is encoded as 0=128, 1=256, 2=512, 3=Rsvd, for BCM7278 it 718 + * is encoded as 0=Rsvd, 1=128, 2=256, 3=512. 719 + */ 720 + if (pcie->type == BCM2711) 721 + burst = 0x0; /* 128B */ 722 + else if (pcie->type == BCM7278) 723 + burst = 0x3; /* 512 bytes */ 724 + else 725 + burst = 0x2; /* 512 bytes */ 726 + 883 727 /* Set SCB_MAX_BURST_SIZE, CFG_READ_UR_MODE, SCB_ACCESS_EN */ 884 728 u32p_replace_bits(&tmp, 1, PCIE_MISC_MISC_CTRL_SCB_ACCESS_EN_MASK); 885 729 u32p_replace_bits(&tmp, 1, PCIE_MISC_MISC_CTRL_CFG_READ_UR_MODE_MASK); 886 - u32p_replace_bits(&tmp, PCIE_MISC_MISC_CTRL_MAX_BURST_SIZE_128, 887 - PCIE_MISC_MISC_CTRL_MAX_BURST_SIZE_MASK); 730 + u32p_replace_bits(&tmp, burst, PCIE_MISC_MISC_CTRL_MAX_BURST_SIZE_MASK); 888 731 writel(tmp, base + PCIE_MISC_MISC_CTRL); 889 732 890 733 ret = brcm_pcie_get_rc_bar2_size_and_offset(pcie, &rc_bar2_size, ··· 910 731 writel(upper_32_bits(rc_bar2_offset), 911 732 base + PCIE_MISC_RC_BAR2_CONFIG_HI); 912 733 913 - scb_size_val = rc_bar2_size ? 
914 - ilog2(rc_bar2_size) - 15 : 0xf; /* 0xf is 1GB */ 915 734 tmp = readl(base + PCIE_MISC_MISC_CTRL); 916 - u32p_replace_bits(&tmp, scb_size_val, 917 - PCIE_MISC_MISC_CTRL_SCB0_SIZE_MASK); 735 + for (memc = 0; memc < pcie->num_memc; memc++) { 736 + u32 scb_size_val = ilog2(pcie->memc_size[memc]) - 15; 737 + 738 + if (memc == 0) 739 + u32p_replace_bits(&tmp, scb_size_val, SCB_SIZE_MASK(0)); 740 + else if (memc == 1) 741 + u32p_replace_bits(&tmp, scb_size_val, SCB_SIZE_MASK(1)); 742 + else if (memc == 2) 743 + u32p_replace_bits(&tmp, scb_size_val, SCB_SIZE_MASK(2)); 744 + } 918 745 writel(tmp, base + PCIE_MISC_MISC_CTRL); 919 746 920 747 /* ··· 945 760 tmp &= ~PCIE_MISC_RC_BAR3_CONFIG_LO_SIZE_MASK; 946 761 writel(tmp, base + PCIE_MISC_RC_BAR3_CONFIG_LO); 947 762 948 - /* Mask all interrupts since we are not handling any yet */ 949 - writel(0xffffffff, pcie->base + PCIE_MSI_INTR2_MASK_SET); 950 - 951 - /* clear any interrupts we find on boot */ 952 - writel(0xffffffff, pcie->base + PCIE_MSI_INTR2_CLR); 953 - 954 763 if (pcie->gen) 955 764 brcm_pcie_set_gen(pcie, pcie->gen); 956 765 957 766 /* Unassert the fundamental reset */ 958 - brcm_pcie_perst_set(pcie, 0); 767 + pcie->perst_set(pcie, 0); 959 768 960 769 /* 961 770 * Give the RC/EP time to wake up, before trying to configure RC. ··· 1061 882 dev_err(pcie->dev, "failed to enter low-power link state\n"); 1062 883 } 1063 884 885 + static int brcm_phy_cntl(struct brcm_pcie *pcie, const int start) 886 + { 887 + static const u32 shifts[PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_NFLDS] = { 888 + PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_PWRDN_SHIFT, 889 + PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_RESET_SHIFT, 890 + PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_DIG_RESET_SHIFT,}; 891 + static const u32 masks[PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_NFLDS] = { 892 + PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_PWRDN_MASK, 893 + PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_RESET_MASK, 894 + PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_DIG_RESET_MASK,}; 895 + const int beg = start ? 
0 : PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_NFLDS - 1; 896 + const int end = start ? PCIE_DVT_PMU_PCIE_PHY_CTRL_DAST_NFLDS : -1; 897 + u32 tmp, combined_mask = 0; 898 + u32 val; 899 + void __iomem *base = pcie->base; 900 + int i, ret; 901 + 902 + for (i = beg; i != end; start ? i++ : i--) { 903 + val = start ? BIT_MASK(shifts[i]) : 0; 904 + tmp = readl(base + PCIE_DVT_PMU_PCIE_PHY_CTRL); 905 + tmp = (tmp & ~masks[i]) | (val & masks[i]); 906 + writel(tmp, base + PCIE_DVT_PMU_PCIE_PHY_CTRL); 907 + usleep_range(50, 200); 908 + combined_mask |= masks[i]; 909 + } 910 + 911 + tmp = readl(base + PCIE_DVT_PMU_PCIE_PHY_CTRL); 912 + val = start ? combined_mask : 0; 913 + 914 + ret = (tmp & combined_mask) == val ? 0 : -EIO; 915 + if (ret) 916 + dev_err(pcie->dev, "failed to %s phy\n", (start ? "start" : "stop")); 917 + 918 + return ret; 919 + } 920 + 921 + static inline int brcm_phy_start(struct brcm_pcie *pcie) 922 + { 923 + return pcie->rescal ? brcm_phy_cntl(pcie, 1) : 0; 924 + } 925 + 926 + static inline int brcm_phy_stop(struct brcm_pcie *pcie) 927 + { 928 + return pcie->rescal ? 
brcm_phy_cntl(pcie, 0) : 0; 929 + } 930 + 1064 931 static void brcm_pcie_turn_off(struct brcm_pcie *pcie) 1065 932 { 1066 933 void __iomem *base = pcie->base; ··· 1115 890 if (brcm_pcie_link_up(pcie)) 1116 891 brcm_pcie_enter_l23(pcie); 1117 892 /* Assert fundamental reset */ 1118 - brcm_pcie_perst_set(pcie, 1); 893 + pcie->perst_set(pcie, 1); 1119 894 1120 895 /* Deassert request for L23 in case it was asserted */ 1121 896 tmp = readl(base + PCIE_MISC_PCIE_CTRL); ··· 1128 903 writel(tmp, base + PCIE_MISC_HARD_PCIE_HARD_DEBUG); 1129 904 1130 905 /* Shutdown PCIe bridge */ 1131 - brcm_pcie_bridge_sw_init_set(pcie, 1); 906 + pcie->bridge_sw_init_set(pcie, 1); 907 + } 908 + 909 + static int brcm_pcie_suspend(struct device *dev) 910 + { 911 + struct brcm_pcie *pcie = dev_get_drvdata(dev); 912 + int ret; 913 + 914 + brcm_pcie_turn_off(pcie); 915 + ret = brcm_phy_stop(pcie); 916 + clk_disable_unprepare(pcie->clk); 917 + 918 + return ret; 919 + } 920 + 921 + static int brcm_pcie_resume(struct device *dev) 922 + { 923 + struct brcm_pcie *pcie = dev_get_drvdata(dev); 924 + void __iomem *base; 925 + u32 tmp; 926 + int ret; 927 + 928 + base = pcie->base; 929 + clk_prepare_enable(pcie->clk); 930 + 931 + ret = brcm_phy_start(pcie); 932 + if (ret) 933 + goto err; 934 + 935 + /* Take bridge out of reset so we can access the SERDES reg */ 936 + pcie->bridge_sw_init_set(pcie, 0); 937 + 938 + /* SERDES_IDDQ = 0 */ 939 + tmp = readl(base + PCIE_MISC_HARD_PCIE_HARD_DEBUG); 940 + u32p_replace_bits(&tmp, 0, PCIE_MISC_HARD_PCIE_HARD_DEBUG_SERDES_IDDQ_MASK); 941 + writel(tmp, base + PCIE_MISC_HARD_PCIE_HARD_DEBUG); 942 + 943 + /* wait for serdes to be stable */ 944 + udelay(100); 945 + 946 + ret = brcm_pcie_setup(pcie); 947 + if (ret) 948 + goto err; 949 + 950 + if (pcie->msi) 951 + brcm_msi_set_regs(pcie->msi); 952 + 953 + return 0; 954 + 955 + err: 956 + clk_disable_unprepare(pcie->clk); 957 + return ret; 1132 958 } 1133 959 1134 960 static void __brcm_pcie_remove(struct brcm_pcie 
*pcie) 1135 961 { 1136 962 brcm_msi_remove(pcie); 1137 963 brcm_pcie_turn_off(pcie); 964 + brcm_phy_stop(pcie); 965 + reset_control_assert(pcie->rescal); 1138 966 clk_disable_unprepare(pcie->clk); 1139 967 } 1140 968 ··· 1203 925 return 0; 1204 926 } 1205 927 928 + static const struct of_device_id brcm_pcie_match[] = { 929 + { .compatible = "brcm,bcm2711-pcie", .data = &bcm2711_cfg }, 930 + { .compatible = "brcm,bcm7211-pcie", .data = &generic_cfg }, 931 + { .compatible = "brcm,bcm7278-pcie", .data = &bcm7278_cfg }, 932 + { .compatible = "brcm,bcm7216-pcie", .data = &bcm7278_cfg }, 933 + { .compatible = "brcm,bcm7445-pcie", .data = &generic_cfg }, 934 + {}, 935 + }; 936 + 1206 937 static int brcm_pcie_probe(struct platform_device *pdev) 1207 938 { 1208 939 struct device_node *np = pdev->dev.of_node, *msi_np; 1209 940 struct pci_host_bridge *bridge; 941 + const struct pcie_cfg_data *data; 1210 942 struct brcm_pcie *pcie; 1211 943 int ret; 1212 944 ··· 1224 936 if (!bridge) 1225 937 return -ENOMEM; 1226 938 939 + data = of_device_get_match_data(&pdev->dev); 940 + if (!data) { 941 + pr_err("failed to look up compatible string\n"); 942 + return -EINVAL; 943 + } 944 + 1227 945 pcie = pci_host_bridge_priv(bridge); 1228 946 pcie->dev = &pdev->dev; 1229 947 pcie->np = np; 948 + pcie->reg_offsets = data->offsets; 949 + pcie->type = data->type; 950 + pcie->perst_set = data->perst_set; 951 + pcie->bridge_sw_init_set = data->bridge_sw_init_set; 1230 952 1231 953 pcie->base = devm_platform_ioremap_resource(pdev, 0); 1232 954 if (IS_ERR(pcie->base)) ··· 1256 958 dev_err(&pdev->dev, "could not enable clock\n"); 1257 959 return ret; 1258 960 } 961 + pcie->rescal = devm_reset_control_get_optional_shared(&pdev->dev, "rescal"); 962 + if (IS_ERR(pcie->rescal)) { 963 + clk_disable_unprepare(pcie->clk); 964 + return PTR_ERR(pcie->rescal); 965 + } 966 + 967 + ret = reset_control_deassert(pcie->rescal); 968 + if (ret) 969 + dev_err(&pdev->dev, "failed to deassert 'rescal'\n"); 970 + 971 + 
ret = brcm_phy_start(pcie); 972 + if (ret) { 973 + reset_control_assert(pcie->rescal); 974 + clk_disable_unprepare(pcie->clk); 975 + return ret; 976 + } 1259 977 1260 978 ret = brcm_pcie_setup(pcie); 1261 979 if (ret) 1262 980 goto fail; 981 + 982 + pcie->hw_rev = readl(pcie->base + PCIE_MISC_REVISION); 1263 983 1264 984 msi_np = of_parse_phandle(pcie->np, "msi-parent", 0); 1265 985 if (pci_msi_enabled() && msi_np == pcie->np) { ··· 1299 983 return ret; 1300 984 } 1301 985 1302 - static const struct of_device_id brcm_pcie_match[] = { 1303 - { .compatible = "brcm,bcm2711-pcie" }, 1304 - {}, 1305 - }; 1306 986 MODULE_DEVICE_TABLE(of, brcm_pcie_match); 987 + 988 + static const struct dev_pm_ops brcm_pcie_pm_ops = { 989 + .suspend = brcm_pcie_suspend, 990 + .resume = brcm_pcie_resume, 991 + }; 1307 992 1308 993 static struct platform_driver brcm_pcie_driver = { 1309 994 .probe = brcm_pcie_probe, ··· 1312 995 .driver = { 1313 996 .name = "brcm-pcie", 1314 997 .of_match_table = brcm_pcie_match, 998 + .pm = &brcm_pcie_pm_ops, 1315 999 }, 1316 1000 }; 1317 1001 module_platform_driver(brcm_pcie_driver);
+327
drivers/pci/controller/pcie-hisi-error.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Driver for handling the PCIe controller errors on 4 + * HiSilicon HIP SoCs. 5 + * 6 + * Copyright (c) 2020 HiSilicon Limited. 7 + */ 8 + 9 + #include <linux/acpi.h> 10 + #include <acpi/ghes.h> 11 + #include <linux/bitops.h> 12 + #include <linux/delay.h> 13 + #include <linux/pci.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/kfifo.h> 16 + #include <linux/spinlock.h> 17 + 18 + /* HISI PCIe controller error definitions */ 19 + #define HISI_PCIE_ERR_MISC_REGS 33 20 + 21 + #define HISI_PCIE_LOCAL_VALID_VERSION BIT(0) 22 + #define HISI_PCIE_LOCAL_VALID_SOC_ID BIT(1) 23 + #define HISI_PCIE_LOCAL_VALID_SOCKET_ID BIT(2) 24 + #define HISI_PCIE_LOCAL_VALID_NIMBUS_ID BIT(3) 25 + #define HISI_PCIE_LOCAL_VALID_SUB_MODULE_ID BIT(4) 26 + #define HISI_PCIE_LOCAL_VALID_CORE_ID BIT(5) 27 + #define HISI_PCIE_LOCAL_VALID_PORT_ID BIT(6) 28 + #define HISI_PCIE_LOCAL_VALID_ERR_TYPE BIT(7) 29 + #define HISI_PCIE_LOCAL_VALID_ERR_SEVERITY BIT(8) 30 + #define HISI_PCIE_LOCAL_VALID_ERR_MISC 9 31 + 32 + static guid_t hisi_pcie_sec_guid = 33 + GUID_INIT(0xB2889FC9, 0xE7D7, 0x4F9D, 34 + 0xA8, 0x67, 0xAF, 0x42, 0xE9, 0x8B, 0xE7, 0x72); 35 + 36 + /* 37 + * Firmware reports the socket port ID where the error occurred. These 38 + * macros convert that to the core ID and core port ID required by the 39 + * ACPI reset method. 
40 + */ 41 + #define HISI_PCIE_PORT_ID(core, v) (((v) >> 1) + ((core) << 3)) 42 + #define HISI_PCIE_CORE_ID(v) ((v) >> 3) 43 + #define HISI_PCIE_CORE_PORT_ID(v) (((v) & 7) << 1) 44 + 45 + struct hisi_pcie_error_data { 46 + u64 val_bits; 47 + u8 version; 48 + u8 soc_id; 49 + u8 socket_id; 50 + u8 nimbus_id; 51 + u8 sub_module_id; 52 + u8 core_id; 53 + u8 port_id; 54 + u8 err_severity; 55 + u16 err_type; 56 + u8 reserv[2]; 57 + u32 err_misc[HISI_PCIE_ERR_MISC_REGS]; 58 + }; 59 + 60 + struct hisi_pcie_error_private { 61 + struct notifier_block nb; 62 + struct device *dev; 63 + }; 64 + 65 + enum hisi_pcie_submodule_id { 66 + HISI_PCIE_SUB_MODULE_ID_AP, 67 + HISI_PCIE_SUB_MODULE_ID_TL, 68 + HISI_PCIE_SUB_MODULE_ID_MAC, 69 + HISI_PCIE_SUB_MODULE_ID_DL, 70 + HISI_PCIE_SUB_MODULE_ID_SDI, 71 + }; 72 + 73 + static const char * const hisi_pcie_sub_module[] = { 74 + [HISI_PCIE_SUB_MODULE_ID_AP] = "AP Layer", 75 + [HISI_PCIE_SUB_MODULE_ID_TL] = "TL Layer", 76 + [HISI_PCIE_SUB_MODULE_ID_MAC] = "MAC Layer", 77 + [HISI_PCIE_SUB_MODULE_ID_DL] = "DL Layer", 78 + [HISI_PCIE_SUB_MODULE_ID_SDI] = "SDI Layer", 79 + }; 80 + 81 + enum hisi_pcie_err_severity { 82 + HISI_PCIE_ERR_SEV_RECOVERABLE, 83 + HISI_PCIE_ERR_SEV_FATAL, 84 + HISI_PCIE_ERR_SEV_CORRECTED, 85 + HISI_PCIE_ERR_SEV_NONE, 86 + }; 87 + 88 + static const char * const hisi_pcie_error_sev[] = { 89 + [HISI_PCIE_ERR_SEV_RECOVERABLE] = "recoverable", 90 + [HISI_PCIE_ERR_SEV_FATAL] = "fatal", 91 + [HISI_PCIE_ERR_SEV_CORRECTED] = "corrected", 92 + [HISI_PCIE_ERR_SEV_NONE] = "none", 93 + }; 94 + 95 + static const char *hisi_pcie_get_string(const char * const *array, 96 + size_t n, u32 id) 97 + { 98 + u32 index; 99 + 100 + for (index = 0; index < n; index++) { 101 + if (index == id && array[index]) 102 + return array[index]; 103 + } 104 + 105 + return "unknown"; 106 + } 107 + 108 + static int hisi_pcie_port_reset(struct platform_device *pdev, 109 + u32 chip_id, u32 port_id) 110 + { 111 + struct device *dev = &pdev->dev; 112 + 
acpi_handle handle = ACPI_HANDLE(dev); 113 + union acpi_object arg[3]; 114 + struct acpi_object_list arg_list; 115 + acpi_status s; 116 + unsigned long long data = 0; 117 + 118 + arg[0].type = ACPI_TYPE_INTEGER; 119 + arg[0].integer.value = chip_id; 120 + arg[1].type = ACPI_TYPE_INTEGER; 121 + arg[1].integer.value = HISI_PCIE_CORE_ID(port_id); 122 + arg[2].type = ACPI_TYPE_INTEGER; 123 + arg[2].integer.value = HISI_PCIE_CORE_PORT_ID(port_id); 124 + 125 + arg_list.count = 3; 126 + arg_list.pointer = arg; 127 + 128 + s = acpi_evaluate_integer(handle, "RST", &arg_list, &data); 129 + if (ACPI_FAILURE(s)) { 130 + dev_err(dev, "No RST method\n"); 131 + return -EIO; 132 + } 133 + 134 + if (data) { 135 + dev_err(dev, "Failed to Reset\n"); 136 + return -EIO; 137 + } 138 + 139 + return 0; 140 + } 141 + 142 + static int hisi_pcie_port_do_recovery(struct platform_device *dev, 143 + u32 chip_id, u32 port_id) 144 + { 145 + acpi_status s; 146 + struct device *device = &dev->dev; 147 + acpi_handle root_handle = ACPI_HANDLE(device); 148 + struct acpi_pci_root *pci_root; 149 + struct pci_bus *root_bus; 150 + struct pci_dev *pdev; 151 + u32 domain, busnr, devfn; 152 + 153 + s = acpi_get_parent(root_handle, &root_handle); 154 + if (ACPI_FAILURE(s)) 155 + return -ENODEV; 156 + pci_root = acpi_pci_find_root(root_handle); 157 + if (!pci_root) 158 + return -ENODEV; 159 + root_bus = pci_root->bus; 160 + domain = pci_root->segment; 161 + 162 + busnr = root_bus->number; 163 + devfn = PCI_DEVFN(port_id, 0); 164 + pdev = pci_get_domain_bus_and_slot(domain, busnr, devfn); 165 + if (!pdev) { 166 + dev_info(device, "Fail to get root port %04x:%02x:%02x.%d device\n", 167 + domain, busnr, PCI_SLOT(devfn), PCI_FUNC(devfn)); 168 + return -ENODEV; 169 + } 170 + 171 + pci_stop_and_remove_bus_device_locked(pdev); 172 + pci_dev_put(pdev); 173 + 174 + if (hisi_pcie_port_reset(dev, chip_id, port_id)) 175 + return -EIO; 176 + 177 + /* 178 + * The initialization time of subordinate devices after 179 + * hot 
reset is no more than 1s, which is required by 180 + * the PCI spec v5.0 sec 6.6.1. The time will shorten 181 + * if Readiness Notifications mechanisms are used. But 182 + * wait 1s here to adapt any conditions. 183 + */ 184 + ssleep(1UL); 185 + 186 + /* add root port and downstream devices */ 187 + pci_lock_rescan_remove(); 188 + pci_rescan_bus(root_bus); 189 + pci_unlock_rescan_remove(); 190 + 191 + return 0; 192 + } 193 + 194 + static void hisi_pcie_handle_error(struct platform_device *pdev, 195 + const struct hisi_pcie_error_data *edata) 196 + { 197 + struct device *dev = &pdev->dev; 198 + int idx, rc; 199 + const unsigned long valid_bits[] = {BITMAP_FROM_U64(edata->val_bits)}; 200 + 201 + if (edata->val_bits == 0) { 202 + dev_warn(dev, "%s: no valid error information\n", __func__); 203 + return; 204 + } 205 + 206 + dev_info(dev, "\nHISI : HIP : PCIe controller error\n"); 207 + if (edata->val_bits & HISI_PCIE_LOCAL_VALID_SOC_ID) 208 + dev_info(dev, "Table version = %d\n", edata->version); 209 + if (edata->val_bits & HISI_PCIE_LOCAL_VALID_SOCKET_ID) 210 + dev_info(dev, "Socket ID = %d\n", edata->socket_id); 211 + if (edata->val_bits & HISI_PCIE_LOCAL_VALID_NIMBUS_ID) 212 + dev_info(dev, "Nimbus ID = %d\n", edata->nimbus_id); 213 + if (edata->val_bits & HISI_PCIE_LOCAL_VALID_SUB_MODULE_ID) 214 + dev_info(dev, "Sub Module = %s\n", 215 + hisi_pcie_get_string(hisi_pcie_sub_module, 216 + ARRAY_SIZE(hisi_pcie_sub_module), 217 + edata->sub_module_id)); 218 + if (edata->val_bits & HISI_PCIE_LOCAL_VALID_CORE_ID) 219 + dev_info(dev, "Core ID = core%d\n", edata->core_id); 220 + if (edata->val_bits & HISI_PCIE_LOCAL_VALID_PORT_ID) 221 + dev_info(dev, "Port ID = port%d\n", edata->port_id); 222 + if (edata->val_bits & HISI_PCIE_LOCAL_VALID_ERR_SEVERITY) 223 + dev_info(dev, "Error severity = %s\n", 224 + hisi_pcie_get_string(hisi_pcie_error_sev, 225 + ARRAY_SIZE(hisi_pcie_error_sev), 226 + edata->err_severity)); 227 + if (edata->val_bits & HISI_PCIE_LOCAL_VALID_ERR_TYPE) 228 + 
dev_info(dev, "Error type = 0x%x\n", edata->err_type); 229 + 230 + dev_info(dev, "Reg Dump:\n"); 231 + idx = HISI_PCIE_LOCAL_VALID_ERR_MISC; 232 + for_each_set_bit_from(idx, valid_bits, 233 + HISI_PCIE_LOCAL_VALID_ERR_MISC + HISI_PCIE_ERR_MISC_REGS) 234 + dev_info(dev, "ERR_MISC_%d = 0x%x\n", idx - HISI_PCIE_LOCAL_VALID_ERR_MISC, 235 + edata->err_misc[idx - HISI_PCIE_LOCAL_VALID_ERR_MISC]); 236 + 237 + if (edata->err_severity != HISI_PCIE_ERR_SEV_RECOVERABLE) 238 + return; 239 + 240 + /* Recovery for the PCIe controller errors, try reset 241 + * PCI port for the error recovery 242 + */ 243 + rc = hisi_pcie_port_do_recovery(pdev, edata->socket_id, 244 + HISI_PCIE_PORT_ID(edata->core_id, edata->port_id)); 245 + if (rc) 246 + dev_info(dev, "fail to do hisi pcie port reset\n"); 247 + } 248 + 249 + static int hisi_pcie_notify_error(struct notifier_block *nb, 250 + unsigned long event, void *data) 251 + { 252 + struct acpi_hest_generic_data *gdata = data; 253 + const struct hisi_pcie_error_data *error_data = acpi_hest_get_payload(gdata); 254 + struct hisi_pcie_error_private *priv; 255 + struct device *dev; 256 + struct platform_device *pdev; 257 + guid_t err_sec_guid; 258 + u8 socket; 259 + 260 + import_guid(&err_sec_guid, gdata->section_type); 261 + if (!guid_equal(&err_sec_guid, &hisi_pcie_sec_guid)) 262 + return NOTIFY_DONE; 263 + 264 + priv = container_of(nb, struct hisi_pcie_error_private, nb); 265 + dev = priv->dev; 266 + 267 + if (device_property_read_u8(dev, "socket", &socket)) 268 + return NOTIFY_DONE; 269 + 270 + if (error_data->socket_id != socket) 271 + return NOTIFY_DONE; 272 + 273 + pdev = container_of(dev, struct platform_device, dev); 274 + hisi_pcie_handle_error(pdev, error_data); 275 + 276 + return NOTIFY_OK; 277 + } 278 + 279 + static int hisi_pcie_error_handler_probe(struct platform_device *pdev) 280 + { 281 + struct hisi_pcie_error_private *priv; 282 + int ret; 283 + 284 + priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); 285 + if (!priv) 
286 + return -ENOMEM; 287 + 288 + priv->nb.notifier_call = hisi_pcie_notify_error; 289 + priv->dev = &pdev->dev; 290 + ret = ghes_register_vendor_record_notifier(&priv->nb); 291 + if (ret) { 292 + dev_err(&pdev->dev, 293 + "Failed to register hisi pcie controller error handler with apei\n"); 294 + return ret; 295 + } 296 + 297 + platform_set_drvdata(pdev, priv); 298 + 299 + return 0; 300 + } 301 + 302 + static int hisi_pcie_error_handler_remove(struct platform_device *pdev) 303 + { 304 + struct hisi_pcie_error_private *priv = platform_get_drvdata(pdev); 305 + 306 + ghes_unregister_vendor_record_notifier(&priv->nb); 307 + 308 + return 0; 309 + } 310 + 311 + static const struct acpi_device_id hisi_pcie_acpi_match[] = { 312 + { "HISI0361", 0 }, 313 + { } 314 + }; 315 + 316 + static struct platform_driver hisi_pcie_error_handler_driver = { 317 + .driver = { 318 + .name = "hisi-pcie-error-handler", 319 + .acpi_match_table = hisi_pcie_acpi_match, 320 + }, 321 + .probe = hisi_pcie_error_handler_probe, 322 + .remove = hisi_pcie_error_handler_remove, 323 + }; 324 + module_platform_driver(hisi_pcie_error_handler_driver); 325 + 326 + MODULE_DESCRIPTION("HiSilicon HIP PCIe controller error handling driver"); 327 + MODULE_LICENSE("GPL v2");
+1 -12
drivers/pci/controller/pcie-iproc-bcma.c
··· 94 94 .probe = iproc_pcie_bcma_probe, 95 95 .remove = iproc_pcie_bcma_remove, 96 96 }; 97 - 98 - static int __init iproc_pcie_bcma_init(void) 99 - { 100 - return bcma_driver_register(&iproc_pcie_bcma_driver); 101 - } 102 - module_init(iproc_pcie_bcma_init); 103 - 104 - static void __exit iproc_pcie_bcma_exit(void) 105 - { 106 - bcma_driver_unregister(&iproc_pcie_bcma_driver); 107 - } 108 - module_exit(iproc_pcie_bcma_exit); 97 + module_bcma_driver(iproc_pcie_bcma_driver); 109 98 110 99 MODULE_AUTHOR("Hauke Mehrtens"); 111 100 MODULE_DESCRIPTION("Broadcom iProc PCIe BCMA driver");
+9 -4
drivers/pci/controller/pcie-iproc-msi.c
··· 209 209 struct iproc_msi *msi = irq_data_get_irq_chip_data(data); 210 210 int target_cpu = cpumask_first(mask); 211 211 int curr_cpu; 212 + int ret; 212 213 213 214 curr_cpu = hwirq_to_cpu(msi, data->hwirq); 214 215 if (curr_cpu == target_cpu) 215 - return IRQ_SET_MASK_OK_DONE; 216 + ret = IRQ_SET_MASK_OK_DONE; 217 + else { 218 + /* steer MSI to the target CPU */ 219 + data->hwirq = hwirq_to_canonical_hwirq(msi, data->hwirq) + target_cpu; 220 + ret = IRQ_SET_MASK_OK; 221 + } 216 222 217 - /* steer MSI to the target CPU */ 218 - data->hwirq = hwirq_to_canonical_hwirq(msi, data->hwirq) + target_cpu; 223 + irq_data_update_effective_affinity(data, cpumask_of(target_cpu)); 219 224 220 - return IRQ_SET_MASK_OK; 225 + return ret; 221 226 } 222 227 223 228 static void iproc_msi_irq_compose_msi_msg(struct irq_data *data,
+1 -1
drivers/pci/controller/pcie-iproc-platform.c
··· 99 99 switch (pcie->type) { 100 100 case IPROC_PCIE_PAXC: 101 101 case IPROC_PCIE_PAXC_V2: 102 - pcie->map_irq = 0; 102 + pcie->map_irq = NULL; 103 103 break; 104 104 default: 105 105 break;
-4
drivers/pci/controller/pcie-xilinx-cpm.c
··· 572 572 goto err_setup_irq; 573 573 } 574 574 575 - bridge->dev.parent = dev; 576 575 bridge->sysdata = port->cfg; 577 - bridge->busnr = port->cfg->busr.start; 578 576 bridge->ops = (struct pci_ops *)&pci_generic_ecam_ops.pci_ops; 579 - bridge->map_irq = of_irq_parse_and_map_pci; 580 - bridge->swizzle_irq = pci_common_swizzle; 581 577 582 578 err = pci_host_probe(bridge); 583 579 if (err < 0)
+181 -125
drivers/pci/controller/vmd.c
··· 298 298 .chip = &vmd_msi_controller, 299 299 }; 300 300 301 + static int vmd_create_irq_domain(struct vmd_dev *vmd) 302 + { 303 + struct fwnode_handle *fn; 304 + 305 + fn = irq_domain_alloc_named_id_fwnode("VMD-MSI", vmd->sysdata.domain); 306 + if (!fn) 307 + return -ENODEV; 308 + 309 + vmd->irq_domain = pci_msi_create_irq_domain(fn, &vmd_msi_domain_info, NULL); 310 + if (!vmd->irq_domain) { 311 + irq_domain_free_fwnode(fn); 312 + return -ENODEV; 313 + } 314 + 315 + return 0; 316 + } 317 + 318 + static void vmd_remove_irq_domain(struct vmd_dev *vmd) 319 + { 320 + if (vmd->irq_domain) { 321 + struct fwnode_handle *fn = vmd->irq_domain->fwnode; 322 + 323 + irq_domain_remove(vmd->irq_domain); 324 + irq_domain_free_fwnode(fn); 325 + } 326 + } 327 + 301 328 static char __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus, 302 329 unsigned int devfn, int reg, int len) 303 330 { ··· 444 417 return domain + 1; 445 418 } 446 419 420 + static int vmd_get_phys_offsets(struct vmd_dev *vmd, bool native_hint, 421 + resource_size_t *offset1, 422 + resource_size_t *offset2) 423 + { 424 + struct pci_dev *dev = vmd->dev; 425 + u64 phys1, phys2; 426 + 427 + if (native_hint) { 428 + u32 vmlock; 429 + int ret; 430 + 431 + ret = pci_read_config_dword(dev, PCI_REG_VMLOCK, &vmlock); 432 + if (ret || vmlock == ~0) 433 + return -ENODEV; 434 + 435 + if (MB2_SHADOW_EN(vmlock)) { 436 + void __iomem *membar2; 437 + 438 + membar2 = pci_iomap(dev, VMD_MEMBAR2, 0); 439 + if (!membar2) 440 + return -ENOMEM; 441 + phys1 = readq(membar2 + MB2_SHADOW_OFFSET); 442 + phys2 = readq(membar2 + MB2_SHADOW_OFFSET + 8); 443 + pci_iounmap(dev, membar2); 444 + } else 445 + return 0; 446 + } else { 447 + /* Hypervisor-Emulated Vendor-Specific Capability */ 448 + int pos = pci_find_capability(dev, PCI_CAP_ID_VNDR); 449 + u32 reg, regu; 450 + 451 + pci_read_config_dword(dev, pos + 4, &reg); 452 + 453 + /* "SHDW" */ 454 + if (pos && reg == 0x53484457) { 455 + pci_read_config_dword(dev, pos + 8, &reg); 
456 + pci_read_config_dword(dev, pos + 12, &regu); 457 + phys1 = (u64) regu << 32 | reg; 458 + 459 + pci_read_config_dword(dev, pos + 16, &reg); 460 + pci_read_config_dword(dev, pos + 20, &regu); 461 + phys2 = (u64) regu << 32 | reg; 462 + } else 463 + return 0; 464 + } 465 + 466 + *offset1 = dev->resource[VMD_MEMBAR1].start - 467 + (phys1 & PCI_BASE_ADDRESS_MEM_MASK); 468 + *offset2 = dev->resource[VMD_MEMBAR2].start - 469 + (phys2 & PCI_BASE_ADDRESS_MEM_MASK); 470 + 471 + return 0; 472 + } 473 + 474 + static int vmd_get_bus_number_start(struct vmd_dev *vmd) 475 + { 476 + struct pci_dev *dev = vmd->dev; 477 + u16 reg; 478 + 479 + pci_read_config_word(dev, PCI_REG_VMCAP, &reg); 480 + if (BUS_RESTRICT_CAP(reg)) { 481 + pci_read_config_word(dev, PCI_REG_VMCONFIG, &reg); 482 + 483 + switch (BUS_RESTRICT_CFG(reg)) { 484 + case 0: 485 + vmd->busn_start = 0; 486 + break; 487 + case 1: 488 + vmd->busn_start = 128; 489 + break; 490 + case 2: 491 + vmd->busn_start = 224; 492 + break; 493 + default: 494 + pci_err(dev, "Unknown Bus Offset Setting (%d)\n", 495 + BUS_RESTRICT_CFG(reg)); 496 + return -ENODEV; 497 + } 498 + } 499 + 500 + return 0; 501 + } 502 + 503 + static irqreturn_t vmd_irq(int irq, void *data) 504 + { 505 + struct vmd_irq_list *irqs = data; 506 + struct vmd_irq *vmdirq; 507 + int idx; 508 + 509 + idx = srcu_read_lock(&irqs->srcu); 510 + list_for_each_entry_rcu(vmdirq, &irqs->irq_list, node) 511 + generic_handle_irq(vmdirq->virq); 512 + srcu_read_unlock(&irqs->srcu, idx); 513 + 514 + return IRQ_HANDLED; 515 + } 516 + 517 + static int vmd_alloc_irqs(struct vmd_dev *vmd) 518 + { 519 + struct pci_dev *dev = vmd->dev; 520 + int i, err; 521 + 522 + vmd->msix_count = pci_msix_vec_count(dev); 523 + if (vmd->msix_count < 0) 524 + return -ENODEV; 525 + 526 + vmd->msix_count = pci_alloc_irq_vectors(dev, 1, vmd->msix_count, 527 + PCI_IRQ_MSIX); 528 + if (vmd->msix_count < 0) 529 + return vmd->msix_count; 530 + 531 + vmd->irqs = devm_kcalloc(&dev->dev, vmd->msix_count, 
sizeof(*vmd->irqs), 532 + GFP_KERNEL); 533 + if (!vmd->irqs) 534 + return -ENOMEM; 535 + 536 + for (i = 0; i < vmd->msix_count; i++) { 537 + err = init_srcu_struct(&vmd->irqs[i].srcu); 538 + if (err) 539 + return err; 540 + 541 + INIT_LIST_HEAD(&vmd->irqs[i].irq_list); 542 + err = devm_request_irq(&dev->dev, pci_irq_vector(dev, i), 543 + vmd_irq, IRQF_NO_THREAD, 544 + "vmd", &vmd->irqs[i]); 545 + if (err) 546 + return err; 547 + } 548 + 549 + return 0; 550 + } 551 + 447 552 static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) 448 553 { 449 554 struct pci_sysdata *sd = &vmd->sysdata; 450 - struct fwnode_handle *fn; 451 555 struct resource *res; 452 556 u32 upper_bits; 453 557 unsigned long flags; ··· 586 428 resource_size_t offset[2] = {0}; 587 429 resource_size_t membar2_offset = 0x2000; 588 430 struct pci_bus *child; 431 + int ret; 589 432 590 433 /* 591 434 * Shadow registers may exist in certain VMD device ids which allow ··· 595 436 * or 0, depending on an enable bit in the VMD device. 
596 437 */ 597 438 if (features & VMD_FEAT_HAS_MEMBAR_SHADOW) { 598 - u32 vmlock; 599 - int ret; 600 - 601 439 membar2_offset = MB2_SHADOW_OFFSET + MB2_SHADOW_SIZE; 602 - ret = pci_read_config_dword(vmd->dev, PCI_REG_VMLOCK, &vmlock); 603 - if (ret || vmlock == ~0) 604 - return -ENODEV; 605 - 606 - if (MB2_SHADOW_EN(vmlock)) { 607 - void __iomem *membar2; 608 - 609 - membar2 = pci_iomap(vmd->dev, VMD_MEMBAR2, 0); 610 - if (!membar2) 611 - return -ENOMEM; 612 - offset[0] = vmd->dev->resource[VMD_MEMBAR1].start - 613 - (readq(membar2 + MB2_SHADOW_OFFSET) & 614 - PCI_BASE_ADDRESS_MEM_MASK); 615 - offset[1] = vmd->dev->resource[VMD_MEMBAR2].start - 616 - (readq(membar2 + MB2_SHADOW_OFFSET + 8) & 617 - PCI_BASE_ADDRESS_MEM_MASK); 618 - pci_iounmap(vmd->dev, membar2); 619 - } 620 - } 621 - 622 - if (features & VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP) { 623 - int pos = pci_find_capability(vmd->dev, PCI_CAP_ID_VNDR); 624 - u32 reg, regu; 625 - 626 - pci_read_config_dword(vmd->dev, pos + 4, &reg); 627 - 628 - /* "SHDW" */ 629 - if (pos && reg == 0x53484457) { 630 - pci_read_config_dword(vmd->dev, pos + 8, &reg); 631 - pci_read_config_dword(vmd->dev, pos + 12, &regu); 632 - offset[0] = vmd->dev->resource[VMD_MEMBAR1].start - 633 - (((u64) regu << 32 | reg) & 634 - PCI_BASE_ADDRESS_MEM_MASK); 635 - 636 - pci_read_config_dword(vmd->dev, pos + 16, &reg); 637 - pci_read_config_dword(vmd->dev, pos + 20, &regu); 638 - offset[1] = vmd->dev->resource[VMD_MEMBAR2].start - 639 - (((u64) regu << 32 | reg) & 640 - PCI_BASE_ADDRESS_MEM_MASK); 641 - } 440 + ret = vmd_get_phys_offsets(vmd, true, &offset[0], &offset[1]); 441 + if (ret) 442 + return ret; 443 + } else if (features & VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP) { 444 + ret = vmd_get_phys_offsets(vmd, false, &offset[0], &offset[1]); 445 + if (ret) 446 + return ret; 642 447 } 643 448 644 449 /* ··· 610 487 * limits the bus range to between 0-127, 128-255, or 224-255 611 488 */ 612 489 if (features & VMD_FEAT_HAS_BUS_RESTRICTIONS) { 613 - u16 
reg16; 614 - 615 - pci_read_config_word(vmd->dev, PCI_REG_VMCAP, &reg16); 616 - if (BUS_RESTRICT_CAP(reg16)) { 617 - pci_read_config_word(vmd->dev, PCI_REG_VMCONFIG, 618 - &reg16); 619 - 620 - switch (BUS_RESTRICT_CFG(reg16)) { 621 - case 1: 622 - vmd->busn_start = 128; 623 - break; 624 - case 2: 625 - vmd->busn_start = 224; 626 - break; 627 - case 3: 628 - pci_err(vmd->dev, "Unknown Bus Offset Setting\n"); 629 - return -ENODEV; 630 - default: 631 - break; 632 - } 633 - } 490 + ret = vmd_get_bus_number_start(vmd); 491 + if (ret) 492 + return ret; 634 493 } 635 494 636 495 res = &vmd->dev->resource[VMD_CFGBAR]; ··· 673 568 674 569 sd->node = pcibus_to_node(vmd->dev->bus); 675 570 676 - fn = irq_domain_alloc_named_id_fwnode("VMD-MSI", vmd->sysdata.domain); 677 - if (!fn) 678 - return -ENODEV; 679 - 680 - vmd->irq_domain = pci_msi_create_irq_domain(fn, &vmd_msi_domain_info, 681 - NULL); 682 - 683 - if (!vmd->irq_domain) { 684 - irq_domain_free_fwnode(fn); 685 - return -ENODEV; 686 - } 571 + ret = vmd_create_irq_domain(vmd); 572 + if (ret) 573 + return ret; 687 574 688 575 /* 689 576 * Override the irq domain bus token so the domain can be distinguished ··· 691 594 &vmd_ops, sd, &resources); 692 595 if (!vmd->bus) { 693 596 pci_free_resource_list(&resources); 694 - irq_domain_remove(vmd->irq_domain); 695 - irq_domain_free_fwnode(fn); 597 + vmd_remove_irq_domain(vmd); 696 598 return -ENODEV; 697 599 } 698 600 699 601 vmd_attach_resources(vmd); 700 - dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain); 602 + if (vmd->irq_domain) 603 + dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain); 701 604 702 605 pci_scan_child_bus(vmd->bus); 703 606 pci_assign_unassigned_bus_resources(vmd->bus); ··· 717 620 return 0; 718 621 } 719 622 720 - static irqreturn_t vmd_irq(int irq, void *data) 721 - { 722 - struct vmd_irq_list *irqs = data; 723 - struct vmd_irq *vmdirq; 724 - int idx; 725 - 726 - idx = srcu_read_lock(&irqs->srcu); 727 - list_for_each_entry_rcu(vmdirq, &irqs->irq_list, 
node) 728 - generic_handle_irq(vmdirq->virq); 729 - srcu_read_unlock(&irqs->srcu, idx); 730 - 731 - return IRQ_HANDLED; 732 - } 733 - 734 623 static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id) 735 624 { 736 625 struct vmd_dev *vmd; 737 - int i, err; 626 + int err; 738 627 739 628 if (resource_size(&dev->resource[VMD_CFGBAR]) < (1 << 20)) 740 629 return -ENOMEM; ··· 743 660 dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32))) 744 661 return -ENODEV; 745 662 746 - vmd->msix_count = pci_msix_vec_count(dev); 747 - if (vmd->msix_count < 0) 748 - return -ENODEV; 749 - 750 - vmd->msix_count = pci_alloc_irq_vectors(dev, 1, vmd->msix_count, 751 - PCI_IRQ_MSIX); 752 - if (vmd->msix_count < 0) 753 - return vmd->msix_count; 754 - 755 - vmd->irqs = devm_kcalloc(&dev->dev, vmd->msix_count, sizeof(*vmd->irqs), 756 - GFP_KERNEL); 757 - if (!vmd->irqs) 758 - return -ENOMEM; 759 - 760 - for (i = 0; i < vmd->msix_count; i++) { 761 - err = init_srcu_struct(&vmd->irqs[i].srcu); 762 - if (err) 763 - return err; 764 - 765 - INIT_LIST_HEAD(&vmd->irqs[i].irq_list); 766 - err = devm_request_irq(&dev->dev, pci_irq_vector(dev, i), 767 - vmd_irq, IRQF_NO_THREAD, 768 - "vmd", &vmd->irqs[i]); 769 - if (err) 770 - return err; 771 - } 663 + err = vmd_alloc_irqs(vmd); 664 + if (err) 665 + return err; 772 666 773 667 spin_lock_init(&vmd->cfg_lock); 774 668 pci_set_drvdata(dev, vmd); ··· 769 709 static void vmd_remove(struct pci_dev *dev) 770 710 { 771 711 struct vmd_dev *vmd = pci_get_drvdata(dev); 772 - struct fwnode_handle *fn = vmd->irq_domain->fwnode; 773 712 774 713 sysfs_remove_link(&vmd->dev->dev.kobj, "domain"); 775 714 pci_stop_root_bus(vmd->bus); 776 715 pci_remove_root_bus(vmd->bus); 777 716 vmd_cleanup_srcu(vmd); 778 717 vmd_detach_resources(vmd); 779 - irq_domain_remove(vmd->irq_domain); 780 - irq_domain_free_fwnode(fn); 718 + vmd_remove_irq_domain(vmd); 781 719 } 782 720 783 721 #ifdef CONFIG_PM_SLEEP ··· 788 730 for (i = 0; i < vmd->msix_count; i++) 789 731 
devm_free_irq(dev, pci_irq_vector(pdev, i), &vmd->irqs[i]); 790 732 791 - pci_save_state(pdev); 792 733 return 0; 793 734 } 794 735 ··· 805 748 return err; 806 749 } 807 750 808 - pci_restore_state(pdev); 809 751 return 0; 810 752 } 811 753 #endif
+10
drivers/pci/ecam.c
··· 168 168 .write = pci_generic_config_write32, 169 169 } 170 170 }; 171 + 172 + /* ECAM ops for 32-bit read only (non-compliant) */ 173 + const struct pci_ecam_ops pci_32b_read_ops = { 174 + .bus_shift = 20, 175 + .pci_ops = { 176 + .map_bus = pci_ecam_map_bus, 177 + .read = pci_generic_config_read32, 178 + .write = pci_generic_config_write, 179 + } 180 + }; 171 181 #endif
+1 -3
drivers/pci/hotplug/pciehp_ctrl.c
··· 73 73 74 74 /* Check link training status */ 75 75 retval = pciehp_check_link_status(ctrl); 76 - if (retval) { 77 - ctrl_err(ctrl, "Failed to check link status\n"); 76 + if (retval) 78 77 goto err_exit; 79 - } 80 78 81 79 /* Check for a power fault */ 82 80 if (ctrl->power_fault_detected || pciehp_query_power_fault(ctrl)) {
+9 -6
drivers/pci/hotplug/pciehp_hpc.c
··· 283 283 msleep(10); 284 284 timeout -= 10; 285 285 } while (timeout > 0); 286 - 287 - pci_info(pdev, "Timeout waiting for Presence Detect\n"); 288 286 } 289 287 290 288 int pciehp_check_link_status(struct controller *ctrl) ··· 291 293 bool found; 292 294 u16 lnk_status; 293 295 294 - if (!pcie_wait_for_link(pdev, true)) 296 + if (!pcie_wait_for_link(pdev, true)) { 297 + ctrl_info(ctrl, "Slot(%s): No link\n", slot_name(ctrl)); 295 298 return -1; 299 + } 296 300 297 301 if (ctrl->inband_presence_disabled) 298 302 pcie_wait_for_presence(pdev); ··· 311 311 ctrl_dbg(ctrl, "%s: lnk_status = %x\n", __func__, lnk_status); 312 312 if ((lnk_status & PCI_EXP_LNKSTA_LT) || 313 313 !(lnk_status & PCI_EXP_LNKSTA_NLW)) { 314 - ctrl_err(ctrl, "link training error: status %#06x\n", 315 - lnk_status); 314 + ctrl_info(ctrl, "Slot(%s): Cannot train link: status %#06x\n", 315 + slot_name(ctrl), lnk_status); 316 316 return -1; 317 317 } 318 318 319 319 pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status); 320 320 321 - if (!found) 321 + if (!found) { 322 + ctrl_info(ctrl, "Slot(%s): No device found\n", 323 + slot_name(ctrl)); 322 324 return -1; 325 + } 323 326 324 327 return 0; 325 328 }
+4 -4
drivers/pci/hotplug/rpadlpar_core.c
··· 40 40 static struct device_node *find_vio_slot_node(char *drc_name) 41 41 { 42 42 struct device_node *parent = of_find_node_by_name(NULL, "vdevice"); 43 - struct device_node *dn = NULL; 43 + struct device_node *dn; 44 44 int rc; 45 45 46 46 if (!parent) 47 47 return NULL; 48 48 49 - while ((dn = of_get_next_child(parent, dn))) { 49 + for_each_child_of_node(parent, dn) { 50 50 rc = rpaphp_check_drc_props(dn, drc_name, NULL); 51 51 if (rc == 0) 52 52 break; ··· 60 60 static struct device_node *find_php_slot_pci_node(char *drc_name, 61 61 char *drc_type) 62 62 { 63 - struct device_node *np = NULL; 63 + struct device_node *np; 64 64 int rc; 65 65 66 - while ((np = of_find_node_by_name(np, "pci"))) { 66 + for_each_node_by_name(np, "pci") { 67 67 rc = rpaphp_check_drc_props(np, drc_name, drc_type); 68 68 if (rc == 0) 69 69 break;
-1
drivers/pci/hotplug/shpchp_ctrl.c
··· 299 299 if (p_slot->status == 0xFF) { 300 300 /* power fault occurred, but it was benign */ 301 301 ctrl_dbg(ctrl, "%s: Power fault\n", __func__); 302 - rc = POWER_FAILURE; 303 302 p_slot->status = 0; 304 303 goto err_exit; 305 304 }
+5 -5
drivers/pci/p2pdma.c
··· 53 53 if (pdev->p2pdma->pool) 54 54 size = gen_pool_size(pdev->p2pdma->pool); 55 55 56 - return snprintf(buf, PAGE_SIZE, "%zd\n", size); 56 + return scnprintf(buf, PAGE_SIZE, "%zd\n", size); 57 57 } 58 58 static DEVICE_ATTR_RO(size); 59 59 ··· 66 66 if (pdev->p2pdma->pool) 67 67 avail = gen_pool_avail(pdev->p2pdma->pool); 68 68 69 - return snprintf(buf, PAGE_SIZE, "%zd\n", avail); 69 + return scnprintf(buf, PAGE_SIZE, "%zd\n", avail); 70 70 } 71 71 static DEVICE_ATTR_RO(available); 72 72 ··· 75 75 { 76 76 struct pci_dev *pdev = to_pci_dev(dev); 77 77 78 - return snprintf(buf, PAGE_SIZE, "%d\n", 79 - pdev->p2pdma->p2pmem_published); 78 + return scnprintf(buf, PAGE_SIZE, "%d\n", 79 + pdev->p2pdma->p2pmem_published); 80 80 } 81 81 static DEVICE_ATTR_RO(published); 82 82 ··· 762 762 struct scatterlist *sg; 763 763 void *addr; 764 764 765 - sg = kzalloc(sizeof(*sg), GFP_KERNEL); 765 + sg = kmalloc(sizeof(*sg), GFP_KERNEL); 766 766 if (!sg) 767 767 return NULL; 768 768
+3 -3
drivers/pci/pci-acpi.c
··· 1177 1177 * @pdev: the PCI device whose delay is to be updated 1178 1178 * @handle: ACPI handle of this device 1179 1179 * 1180 - * Update the d3_delay and d3cold_delay of a PCI device from the ACPI _DSM 1180 + * Update the d3hot_delay and d3cold_delay of a PCI device from the ACPI _DSM 1181 1181 * control method of either the device itself or the PCI host bridge. 1182 1182 * 1183 1183 * Function 8, "Reset Delay," applies to the entire hierarchy below a PCI ··· 1216 1216 } 1217 1217 if (elements[3].type == ACPI_TYPE_INTEGER) { 1218 1218 value = (int)elements[3].integer.value / 1000; 1219 - if (value < PCI_PM_D3_WAIT) 1220 - pdev->d3_delay = value; 1219 + if (value < PCI_PM_D3HOT_WAIT) 1220 + pdev->d3hot_delay = value; 1221 1221 } 1222 1222 } 1223 1223 ACPI_FREE(obj);
+4
drivers/pci/pci-bridge-emul.c
··· 294 294 295 295 return 0; 296 296 } 297 + EXPORT_SYMBOL_GPL(pci_bridge_emul_init); 297 298 298 299 /* 299 300 * Cleanup a pci_bridge_emul structure that was previously initialized ··· 306 305 kfree(bridge->pcie_cap_regs_behavior); 307 306 kfree(bridge->pci_regs_behavior); 308 307 } 308 + EXPORT_SYMBOL_GPL(pci_bridge_emul_cleanup); 309 309 310 310 /* 311 311 * Should be called by the PCI controller driver when reading the PCI ··· 368 366 369 367 return PCIBIOS_SUCCESSFUL; 370 368 } 369 + EXPORT_SYMBOL_GPL(pci_bridge_emul_conf_read); 371 370 372 371 /* 373 372 * Should be called by the PCI controller driver when writing the PCI ··· 433 430 434 431 return PCIBIOS_SUCCESSFUL; 435 432 } 433 + EXPORT_SYMBOL_GPL(pci_bridge_emul_conf_write);
-26
drivers/pci/pci-driver.c
··· 970 970 971 971 #ifdef CONFIG_HIBERNATE_CALLBACKS 972 972 973 - /* 974 - * pcibios_pm_ops - provide arch-specific hooks when a PCI device is doing 975 - * a hibernate transition 976 - */ 977 - struct dev_pm_ops __weak pcibios_pm_ops; 978 - 979 973 static int pci_pm_freeze(struct device *dev) 980 974 { 981 975 struct pci_dev *pci_dev = to_pci_dev(dev); ··· 1028 1034 1029 1035 pci_pm_set_unknown_state(pci_dev); 1030 1036 1031 - if (pcibios_pm_ops.freeze_noirq) 1032 - return pcibios_pm_ops.freeze_noirq(dev); 1033 - 1034 1037 return 0; 1035 1038 } 1036 1039 ··· 1035 1044 { 1036 1045 struct pci_dev *pci_dev = to_pci_dev(dev); 1037 1046 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 1038 - int error; 1039 - 1040 - if (pcibios_pm_ops.thaw_noirq) { 1041 - error = pcibios_pm_ops.thaw_noirq(dev); 1042 - if (error) 1043 - return error; 1044 - } 1045 1047 1046 1048 /* 1047 1049 * The pm->thaw_noirq() callback assumes the device has been ··· 1159 1175 1160 1176 pci_fixup_device(pci_fixup_suspend_late, pci_dev); 1161 1177 1162 - if (pcibios_pm_ops.poweroff_noirq) 1163 - return pcibios_pm_ops.poweroff_noirq(dev); 1164 - 1165 1178 return 0; 1166 1179 } 1167 1180 ··· 1166 1185 { 1167 1186 struct pci_dev *pci_dev = to_pci_dev(dev); 1168 1187 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 1169 - int error; 1170 - 1171 - if (pcibios_pm_ops.restore_noirq) { 1172 - error = pcibios_pm_ops.restore_noirq(dev); 1173 - if (error) 1174 - return error; 1175 - } 1176 1188 1177 1189 pci_pm_default_resume_early(pci_dev); 1178 1190 pci_fixup_device(pci_fixup_resume_early, pci_dev);
+1 -13
drivers/pci/pci-pf-stub.c
··· 37 37 .probe = pci_pf_stub_probe, 38 38 .sriov_configure = pci_sriov_configure_simple, 39 39 }; 40 - 41 - static int __init pci_pf_stub_init(void) 42 - { 43 - return pci_register_driver(&pf_stub_driver); 44 - } 45 - 46 - static void __exit pci_pf_stub_exit(void) 47 - { 48 - pci_unregister_driver(&pf_stub_driver); 49 - } 50 - 51 - module_init(pci_pf_stub_init); 52 - module_exit(pci_pf_stub_exit); 40 + module_pci_driver(pf_stub_driver); 53 41 54 42 MODULE_LICENSE("GPL");
+4 -3
drivers/pci/pci-sysfs.c
··· 574 574 ssize_t len; 575 575 576 576 device_lock(dev); 577 - len = snprintf(buf, PAGE_SIZE, "%s\n", pdev->driver_override); 577 + len = scnprintf(buf, PAGE_SIZE, "%s\n", pdev->driver_override); 578 578 device_unlock(dev); 579 579 return len; 580 580 } ··· 708 708 data[off - init_off + 3] = (val >> 24) & 0xff; 709 709 off += 4; 710 710 size -= 4; 711 + cond_resched(); 711 712 } 712 713 713 714 if (size >= 2) { ··· 1197 1196 } 1198 1197 return 0; 1199 1198 } 1200 - #else /* !HAVE_PCI_MMAP */ 1199 + #else /* !(defined(HAVE_PCI_MMAP) || defined(ARCH_GENERIC_PCI_MMAP_RESOURCE)) */ 1201 1200 int __weak pci_create_resource_files(struct pci_dev *dev) { return 0; } 1202 1201 void __weak pci_remove_resource_files(struct pci_dev *dev) { return; } 1203 - #endif /* HAVE_PCI_MMAP */ 1202 + #endif 1204 1203 1205 1204 /** 1206 1205 * pci_write_rom - used to enable access to the PCI ROM display
+30 -24
drivers/pci/pci.c
··· 15 15 #include <linux/init.h> 16 16 #include <linux/msi.h> 17 17 #include <linux/of.h> 18 - #include <linux/of_pci.h> 19 18 #include <linux/pci.h> 20 19 #include <linux/pm.h> 21 20 #include <linux/slab.h> ··· 29 30 #include <linux/pm_runtime.h> 30 31 #include <linux/pci_hotplug.h> 31 32 #include <linux/vmalloc.h> 32 - #include <linux/pci-ats.h> 33 - #include <asm/setup.h> 34 33 #include <asm/dma.h> 35 34 #include <linux/aer.h> 36 35 #include "pci.h" ··· 46 49 int pci_pci_problems; 47 50 EXPORT_SYMBOL(pci_pci_problems); 48 51 49 - unsigned int pci_pm_d3_delay; 52 + unsigned int pci_pm_d3hot_delay; 50 53 51 54 static void pci_pme_list_scan(struct work_struct *work); 52 55 ··· 63 66 64 67 static void pci_dev_d3_sleep(struct pci_dev *dev) 65 68 { 66 - unsigned int delay = dev->d3_delay; 69 + unsigned int delay = dev->d3hot_delay; 67 70 68 - if (delay < pci_pm_d3_delay) 69 - delay = pci_pm_d3_delay; 71 + if (delay < pci_pm_d3hot_delay) 72 + delay = pci_pm_d3hot_delay; 70 73 71 74 if (delay) 72 75 msleep(delay); ··· 98 101 #define DEFAULT_HOTPLUG_BUS_SIZE 1 99 102 unsigned long pci_hotplug_bus_size = DEFAULT_HOTPLUG_BUS_SIZE; 100 103 104 + 105 + /* PCIe MPS/MRRS strategy; can be overridden by kernel command-line param */ 106 + #ifdef CONFIG_PCIE_BUS_TUNE_OFF 107 + enum pcie_bus_config_types pcie_bus_config = PCIE_BUS_TUNE_OFF; 108 + #elif defined CONFIG_PCIE_BUS_SAFE 109 + enum pcie_bus_config_types pcie_bus_config = PCIE_BUS_SAFE; 110 + #elif defined CONFIG_PCIE_BUS_PERFORMANCE 111 + enum pcie_bus_config_types pcie_bus_config = PCIE_BUS_PERFORMANCE; 112 + #elif defined CONFIG_PCIE_BUS_PEER2PEER 113 + enum pcie_bus_config_types pcie_bus_config = PCIE_BUS_PEER2PEER; 114 + #else 101 115 enum pcie_bus_config_types pcie_bus_config = PCIE_BUS_DEFAULT; 116 + #endif 102 117 103 118 /* 104 119 * The default CLS is used if arch didn't set CLS explicitly and not ··· 885 876 /* Upstream Forwarding */ 886 877 ctrl |= (cap & PCI_ACS_UF); 887 878 879 + /* Enable Translation 
Blocking for external devices */ 880 + if (dev->external_facing || dev->untrusted) 881 + ctrl |= (cap & PCI_ACS_TB); 882 + 888 883 pci_write_config_word(dev, pos + PCI_ACS_CTRL, ctrl); 889 884 } 890 885 ··· 1078 1065 if (state == PCI_D3hot || dev->current_state == PCI_D3hot) 1079 1066 pci_dev_d3_sleep(dev); 1080 1067 else if (state == PCI_D2 || dev->current_state == PCI_D2) 1081 - msleep(PCI_PM_D2_DELAY); 1068 + udelay(PCI_PM_D2_DELAY); 1082 1069 1083 1070 pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); 1084 1071 dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK); ··· 3026 3013 } 3027 3014 3028 3015 dev->pm_cap = pm; 3029 - dev->d3_delay = PCI_PM_D3_WAIT; 3016 + dev->d3hot_delay = PCI_PM_D3HOT_WAIT; 3030 3017 dev->d3cold_delay = PCI_PM_D3COLD_WAIT; 3031 3018 dev->bridge_d3 = pci_bridge_d3_possible(dev); 3032 3019 dev->d3cold_allowed = true; ··· 3051 3038 (pmc & PCI_PM_CAP_PME_D0) ? " D0" : "", 3052 3039 (pmc & PCI_PM_CAP_PME_D1) ? " D1" : "", 3053 3040 (pmc & PCI_PM_CAP_PME_D2) ? " D2" : "", 3054 - (pmc & PCI_PM_CAP_PME_D3) ? " D3hot" : "", 3041 + (pmc & PCI_PM_CAP_PME_D3hot) ? " D3hot" : "", 3055 3042 (pmc & PCI_PM_CAP_PME_D3cold) ? " D3cold" : ""); 3056 3043 dev->pme_support = pmc >> PCI_PM_CAP_PME_SHIFT; 3057 3044 dev->pme_poll = true; ··· 4634 4621 * 4635 4622 * NOTE: This causes the caller to sleep for twice the device power transition 4636 4623 * cooldown period, which for the D0->D3hot and D3hot->D0 transitions is 10 ms 4637 - * by default (i.e. unless the @dev's d3_delay field has a different value). 4624 + * by default (i.e. unless the @dev's d3hot_delay field has a different value). 4638 4625 * Moreover, only devices in D0 can be reset by this function. 4639 4626 */ 4640 4627 static int pci_pm_reset(struct pci_dev *dev, int probe) ··· 4714 4701 } 4715 4702 if (active && ret) 4716 4703 msleep(delay); 4717 - else if (ret != active) 4718 - pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n", 4719 - active ? 
"set" : "cleared"); 4704 + 4720 4705 return ret == active; 4721 4706 } 4722 4707 ··· 4839 4828 delay); 4840 4829 if (!pcie_wait_for_link_delay(dev, true, delay)) { 4841 4830 /* Did not train, no need to wait any further */ 4831 + pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n"); 4842 4832 return; 4843 4833 } 4844 4834 } ··· 4932 4920 4933 4921 static int pci_dev_reset_slot_function(struct pci_dev *dev, int probe) 4934 4922 { 4935 - struct pci_dev *pdev; 4936 - 4937 - if (dev->subordinate || !dev->slot || 4923 + if (dev->multifunction || dev->subordinate || !dev->slot || 4938 4924 dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET) 4939 4925 return -ENOTTY; 4940 - 4941 - list_for_each_entry(pdev, &dev->bus->devices, bus_list) 4942 - if (pdev != dev && pdev->slot == dev->slot) 4943 - return -ENOTTY; 4944 4926 4945 4927 return pci_reset_hotplug_slot(dev->slot->hotplug, probe); 4946 4928 } ··· 6011 6005 6012 6006 if (flags & PCI_VGA_STATE_CHANGE_DECODES) { 6013 6007 pci_read_config_word(dev, PCI_COMMAND, &cmd); 6014 - if (decode == true) 6008 + if (decode) 6015 6009 cmd |= command_bits; 6016 6010 else 6017 6011 cmd &= ~command_bits; ··· 6027 6021 if (bridge) { 6028 6022 pci_read_config_word(bridge, PCI_BRIDGE_CONTROL, 6029 6023 &cmd); 6030 - if (decode == true) 6024 + if (decode) 6031 6025 cmd |= PCI_BRIDGE_CTL_VGA; 6032 6026 else 6033 6027 cmd &= ~PCI_BRIDGE_CTL_VGA; ··· 6356 6350 6357 6351 spin_lock(&resource_alignment_lock); 6358 6352 if (resource_alignment_param) 6359 - count = snprintf(buf, PAGE_SIZE, "%s", resource_alignment_param); 6353 + count = scnprintf(buf, PAGE_SIZE, "%s", resource_alignment_param); 6360 6354 spin_unlock(&resource_alignment_lock); 6361 6355 6362 6356 /*
+4 -5
drivers/pci/pci.h
··· 43 43 int pci_bridge_secondary_bus_reset(struct pci_dev *dev); 44 44 int pci_bus_error_reset(struct pci_dev *dev); 45 45 46 - #define PCI_PM_D2_DELAY 200 47 - #define PCI_PM_D3_WAIT 10 48 - #define PCI_PM_D3COLD_WAIT 100 49 - #define PCI_PM_BUS_WAIT 50 46 + #define PCI_PM_D2_DELAY 200 /* usec; see PCIe r4.0, sec 5.9.1 */ 47 + #define PCI_PM_D3HOT_WAIT 10 /* msec */ 48 + #define PCI_PM_D3COLD_WAIT 100 /* msec */ 50 49 51 50 /** 52 51 * struct pci_platform_pm_ops - Firmware PM callbacks ··· 177 178 178 179 extern raw_spinlock_t pci_lock; 179 180 180 - extern unsigned int pci_pm_d3_delay; 181 + extern unsigned int pci_pm_d3hot_delay; 181 182 182 183 #ifdef CONFIG_PCI_MSI 183 184 void pci_no_msi(void);
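The `/* usec */` annotation on `PCI_PM_D2_DELAY` matters: the revert above switches the D2 wait back from `msleep()` to `udelay()` because the spec gives the delay in microseconds, and the reverted patch (based on a spec typo) had been sleeping a thousand times too long. A sketch of the magnitude of that mistake (helper names hypothetical):

```c
#include <assert.h>

#define PCI_PM_D2_DELAY 200	/* usec; see PCIe r4.0, sec 5.9.1 */

/* What udelay(PCI_PM_D2_DELAY) waits, in usec */
static unsigned long d2_wait_correct_us(void)
{
	return PCI_PM_D2_DELAY;
}

/* What the reverted msleep(PCI_PM_D2_DELAY) waited, in usec */
static unsigned long d2_wait_buggy_us(void)
{
	return PCI_PM_D2_DELAY * 1000UL;
}
```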
+142 -150
drivers/pci/pcie/aspm.c
··· 74 74 * has one slot under it, so at most there are 8 functions. 75 75 */ 76 76 struct aspm_latency acceptable[8]; 77 - 78 - /* L1 PM Substate info */ 79 - struct { 80 - u32 up_cap_ptr; /* L1SS cap ptr in upstream dev */ 81 - u32 dw_cap_ptr; /* L1SS cap ptr in downstream dev */ 82 - u32 ctl1; /* value to be programmed in ctl1 */ 83 - u32 ctl2; /* value to be programmed in ctl2 */ 84 - } l1ss; 85 77 }; 86 78 87 79 static int aspm_disabled, aspm_force; ··· 300 308 } 301 309 302 310 /* Convert L0s latency encoding to ns */ 303 - static u32 calc_l0s_latency(u32 encoding) 311 + static u32 calc_l0s_latency(u32 lnkcap) 304 312 { 313 + u32 encoding = (lnkcap & PCI_EXP_LNKCAP_L0SEL) >> 12; 314 + 305 315 if (encoding == 0x7) 306 316 return (5 * 1000); /* > 4us */ 307 317 return (64 << encoding); ··· 318 324 } 319 325 320 326 /* Convert L1 latency encoding to ns */ 321 - static u32 calc_l1_latency(u32 encoding) 327 + static u32 calc_l1_latency(u32 lnkcap) 322 328 { 329 + u32 encoding = (lnkcap & PCI_EXP_LNKCAP_L1EL) >> 15; 330 + 323 331 if (encoding == 0x7) 324 332 return (65 * 1000); /* > 64us */ 325 333 return (1000 << encoding); ··· 374 378 *scale = 5; 375 379 *value = threshold_ns >> 25; 376 380 } 377 - } 378 - 379 - struct aspm_register_info { 380 - u32 support:2; 381 - u32 enabled:2; 382 - u32 latency_encoding_l0s; 383 - u32 latency_encoding_l1; 384 - 385 - /* L1 substates */ 386 - u32 l1ss_cap_ptr; 387 - u32 l1ss_cap; 388 - u32 l1ss_ctl1; 389 - u32 l1ss_ctl2; 390 - }; 391 - 392 - static void pcie_get_aspm_reg(struct pci_dev *pdev, 393 - struct aspm_register_info *info) 394 - { 395 - u16 reg16; 396 - u32 reg32; 397 - 398 - pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &reg32); 399 - info->support = (reg32 & PCI_EXP_LNKCAP_ASPMS) >> 10; 400 - info->latency_encoding_l0s = (reg32 & PCI_EXP_LNKCAP_L0SEL) >> 12; 401 - info->latency_encoding_l1 = (reg32 & PCI_EXP_LNKCAP_L1EL) >> 15; 402 - pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &reg16); 403 - info->enabled = 
reg16 & PCI_EXP_LNKCTL_ASPMC; 404 - 405 - /* Read L1 PM substate capabilities */ 406 - info->l1ss_cap = info->l1ss_ctl1 = info->l1ss_ctl2 = 0; 407 - info->l1ss_cap_ptr = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS); 408 - if (!info->l1ss_cap_ptr) 409 - return; 410 - pci_read_config_dword(pdev, info->l1ss_cap_ptr + PCI_L1SS_CAP, 411 - &info->l1ss_cap); 412 - if (!(info->l1ss_cap & PCI_L1SS_CAP_L1_PM_SS)) { 413 - info->l1ss_cap = 0; 414 - return; 415 - } 416 - 417 - /* 418 - * If we don't have LTR for the entire path from the Root Complex 419 - * to this device, we can't use ASPM L1.2 because it relies on the 420 - * LTR_L1.2_THRESHOLD. See PCIe r4.0, secs 5.5.4, 6.18. 421 - */ 422 - if (!pdev->ltr_path) 423 - info->l1ss_cap &= ~PCI_L1SS_CAP_ASPM_L1_2; 424 - 425 - pci_read_config_dword(pdev, info->l1ss_cap_ptr + PCI_L1SS_CTL1, 426 - &info->l1ss_ctl1); 427 - pci_read_config_dword(pdev, info->l1ss_cap_ptr + PCI_L1SS_CTL2, 428 - &info->l1ss_ctl2); 429 381 } 430 382 431 383 static void pcie_aspm_check_latency(struct pci_dev *endpoint) ··· 437 493 return NULL; 438 494 } 439 495 496 + static void pci_clear_and_set_dword(struct pci_dev *pdev, int pos, 497 + u32 clear, u32 set) 498 + { 499 + u32 val; 500 + 501 + pci_read_config_dword(pdev, pos, &val); 502 + val &= ~clear; 503 + val |= set; 504 + pci_write_config_dword(pdev, pos, val); 505 + } 506 + 440 507 /* Calculate L1.2 PM substate timing parameters */ 441 508 static void aspm_calc_l1ss_info(struct pcie_link_state *link, 442 - struct aspm_register_info *upreg, 443 - struct aspm_register_info *dwreg) 509 + u32 parent_l1ss_cap, u32 child_l1ss_cap) 444 510 { 511 + struct pci_dev *child = link->downstream, *parent = link->pdev; 445 512 u32 val1, val2, scale1, scale2; 446 513 u32 t_common_mode, t_power_on, l1_2_threshold, scale, value; 447 - 448 - link->l1ss.up_cap_ptr = upreg->l1ss_cap_ptr; 449 - link->l1ss.dw_cap_ptr = dwreg->l1ss_cap_ptr; 450 - link->l1ss.ctl1 = link->l1ss.ctl2 = 0; 514 + u32 ctl1 = 0, ctl2 = 0; 515 
+ u32 pctl1, pctl2, cctl1, cctl2; 516 + u32 pl1_2_enables, cl1_2_enables; 451 517 452 518 if (!(link->aspm_support & ASPM_STATE_L1_2_MASK)) 453 519 return; 454 520 455 521 /* Choose the greater of the two Port Common_Mode_Restore_Times */ 456 - val1 = (upreg->l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8; 457 - val2 = (dwreg->l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8; 522 + val1 = (parent_l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8; 523 + val2 = (child_l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8; 458 524 t_common_mode = max(val1, val2); 459 525 460 526 /* Choose the greater of the two Port T_POWER_ON times */ 461 - val1 = (upreg->l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19; 462 - scale1 = (upreg->l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16; 463 - val2 = (dwreg->l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19; 464 - scale2 = (dwreg->l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16; 527 + val1 = (parent_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19; 528 + scale1 = (parent_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16; 529 + val2 = (child_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19; 530 + scale2 = (child_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16; 465 531 466 - if (calc_l1ss_pwron(link->pdev, scale1, val1) > 467 - calc_l1ss_pwron(link->downstream, scale2, val2)) { 468 - link->l1ss.ctl2 |= scale1 | (val1 << 3); 469 - t_power_on = calc_l1ss_pwron(link->pdev, scale1, val1); 532 + if (calc_l1ss_pwron(parent, scale1, val1) > 533 + calc_l1ss_pwron(child, scale2, val2)) { 534 + ctl2 |= scale1 | (val1 << 3); 535 + t_power_on = calc_l1ss_pwron(parent, scale1, val1); 470 536 } else { 471 - link->l1ss.ctl2 |= scale2 | (val2 << 3); 472 - t_power_on = calc_l1ss_pwron(link->downstream, scale2, val2); 537 + ctl2 |= scale2 | (val2 << 3); 538 + t_power_on = calc_l1ss_pwron(child, scale2, val2); 473 539 } 474 540 475 541 /* ··· 494 540 */ 495 541 l1_2_threshold = 2 + 4 + t_common_mode + t_power_on; 496 542 encode_l12_threshold(l1_2_threshold, &scale, &value); 497 - 
link->l1ss.ctl1 |= t_common_mode << 8 | scale << 29 | value << 16; 543 + ctl1 |= t_common_mode << 8 | scale << 29 | value << 16; 544 + 545 + pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CTL1, &pctl1); 546 + pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CTL2, &pctl2); 547 + pci_read_config_dword(child, child->l1ss + PCI_L1SS_CTL1, &cctl1); 548 + pci_read_config_dword(child, child->l1ss + PCI_L1SS_CTL2, &cctl2); 549 + 550 + if (ctl1 == pctl1 && ctl1 == cctl1 && 551 + ctl2 == pctl2 && ctl2 == cctl2) 552 + return; 553 + 554 + /* Disable L1.2 while updating. See PCIe r5.0, sec 5.5.4, 7.8.3.3 */ 555 + pl1_2_enables = pctl1 & PCI_L1SS_CTL1_L1_2_MASK; 556 + cl1_2_enables = cctl1 & PCI_L1SS_CTL1_L1_2_MASK; 557 + 558 + if (pl1_2_enables || cl1_2_enables) { 559 + pci_clear_and_set_dword(child, child->l1ss + PCI_L1SS_CTL1, 560 + PCI_L1SS_CTL1_L1_2_MASK, 0); 561 + pci_clear_and_set_dword(parent, parent->l1ss + PCI_L1SS_CTL1, 562 + PCI_L1SS_CTL1_L1_2_MASK, 0); 563 + } 564 + 565 + /* Program T_POWER_ON times in both ports */ 566 + pci_write_config_dword(parent, parent->l1ss + PCI_L1SS_CTL2, ctl2); 567 + pci_write_config_dword(child, child->l1ss + PCI_L1SS_CTL2, ctl2); 568 + 569 + /* Program Common_Mode_Restore_Time in upstream device */ 570 + pci_clear_and_set_dword(parent, parent->l1ss + PCI_L1SS_CTL1, 571 + PCI_L1SS_CTL1_CM_RESTORE_TIME, ctl1); 572 + 573 + /* Program LTR_L1.2_THRESHOLD time in both ports */ 574 + pci_clear_and_set_dword(parent, parent->l1ss + PCI_L1SS_CTL1, 575 + PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 576 + PCI_L1SS_CTL1_LTR_L12_TH_SCALE, ctl1); 577 + pci_clear_and_set_dword(child, child->l1ss + PCI_L1SS_CTL1, 578 + PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 579 + PCI_L1SS_CTL1_LTR_L12_TH_SCALE, ctl1); 580 + 581 + if (pl1_2_enables || cl1_2_enables) { 582 + pci_clear_and_set_dword(parent, parent->l1ss + PCI_L1SS_CTL1, 0, 583 + pl1_2_enables); 584 + pci_clear_and_set_dword(child, child->l1ss + PCI_L1SS_CTL1, 0, 585 + cl1_2_enables); 586 + } 498 587 } 499 588 
500 589 static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist) 501 590 { 502 591 struct pci_dev *child = link->downstream, *parent = link->pdev; 592 + u32 parent_lnkcap, child_lnkcap; 593 + u16 parent_lnkctl, child_lnkctl; 594 + u32 parent_l1ss_cap, child_l1ss_cap; 595 + u32 parent_l1ss_ctl1 = 0, child_l1ss_ctl1 = 0; 503 596 struct pci_bus *linkbus = parent->subordinate; 504 - struct aspm_register_info upreg, dwreg; 505 597 506 598 if (blacklist) { 507 599 /* Set enabled/disable so that we will disable ASPM later */ ··· 556 556 return; 557 557 } 558 558 559 - /* Get upstream/downstream components' register state */ 560 - pcie_get_aspm_reg(parent, &upreg); 561 - pcie_get_aspm_reg(child, &dwreg); 562 - 563 559 /* 564 560 * If ASPM not supported, don't mess with the clocks and link, 565 561 * bail out now. 566 562 */ 567 - if (!(upreg.support & dwreg.support)) 563 + pcie_capability_read_dword(parent, PCI_EXP_LNKCAP, &parent_lnkcap); 564 + pcie_capability_read_dword(child, PCI_EXP_LNKCAP, &child_lnkcap); 565 + if (!(parent_lnkcap & child_lnkcap & PCI_EXP_LNKCAP_ASPMS)) 568 566 return; 569 567 570 568 /* Configure common clock before checking latencies */ 571 569 pcie_aspm_configure_common_clock(link); 572 570 573 571 /* 574 - * Re-read upstream/downstream components' register state 575 - * after clock configuration 572 + * Re-read upstream/downstream components' register state after 573 + * clock configuration. L0s & L1 exit latencies in the otherwise 574 + * read-only Link Capabilities may change depending on common clock 575 + * configuration (PCIe r5.0, sec 7.5.3.6). 
576 576 */ 577 - pcie_get_aspm_reg(parent, &upreg); 578 - pcie_get_aspm_reg(child, &dwreg); 577 + pcie_capability_read_dword(parent, PCI_EXP_LNKCAP, &parent_lnkcap); 578 + pcie_capability_read_dword(child, PCI_EXP_LNKCAP, &child_lnkcap); 579 + pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &parent_lnkctl); 580 + pcie_capability_read_word(child, PCI_EXP_LNKCTL, &child_lnkctl); 579 581 580 582 /* 581 583 * Setup L0s state ··· 586 584 * given link unless components on both sides of the link each 587 585 * support L0s. 588 586 */ 589 - if (dwreg.support & upreg.support & PCIE_LINK_STATE_L0S) 587 + if (parent_lnkcap & child_lnkcap & PCI_EXP_LNKCAP_ASPM_L0S) 590 588 link->aspm_support |= ASPM_STATE_L0S; 591 - if (dwreg.enabled & PCIE_LINK_STATE_L0S) 589 + 590 + if (child_lnkctl & PCI_EXP_LNKCTL_ASPM_L0S) 592 591 link->aspm_enabled |= ASPM_STATE_L0S_UP; 593 - if (upreg.enabled & PCIE_LINK_STATE_L0S) 592 + if (parent_lnkctl & PCI_EXP_LNKCTL_ASPM_L0S) 594 593 link->aspm_enabled |= ASPM_STATE_L0S_DW; 595 - link->latency_up.l0s = calc_l0s_latency(upreg.latency_encoding_l0s); 596 - link->latency_dw.l0s = calc_l0s_latency(dwreg.latency_encoding_l0s); 594 + link->latency_up.l0s = calc_l0s_latency(parent_lnkcap); 595 + link->latency_dw.l0s = calc_l0s_latency(child_lnkcap); 597 596 598 597 /* Setup L1 state */ 599 - if (upreg.support & dwreg.support & PCIE_LINK_STATE_L1) 598 + if (parent_lnkcap & child_lnkcap & PCI_EXP_LNKCAP_ASPM_L1) 600 599 link->aspm_support |= ASPM_STATE_L1; 601 - if (upreg.enabled & dwreg.enabled & PCIE_LINK_STATE_L1) 600 + 601 + if (parent_lnkctl & child_lnkctl & PCI_EXP_LNKCTL_ASPM_L1) 602 602 link->aspm_enabled |= ASPM_STATE_L1; 603 - link->latency_up.l1 = calc_l1_latency(upreg.latency_encoding_l1); 604 - link->latency_dw.l1 = calc_l1_latency(dwreg.latency_encoding_l1); 603 + link->latency_up.l1 = calc_l1_latency(parent_lnkcap); 604 + link->latency_dw.l1 = calc_l1_latency(child_lnkcap); 605 605 606 606 /* Setup L1 substate */ 607 - if (upreg.l1ss_cap & 
dwreg.l1ss_cap & PCI_L1SS_CAP_ASPM_L1_1) 607 + pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CAP, 608 + &parent_l1ss_cap); 609 + pci_read_config_dword(child, child->l1ss + PCI_L1SS_CAP, 610 + &child_l1ss_cap); 611 + 612 + if (!(parent_l1ss_cap & PCI_L1SS_CAP_L1_PM_SS)) 613 + parent_l1ss_cap = 0; 614 + if (!(child_l1ss_cap & PCI_L1SS_CAP_L1_PM_SS)) 615 + child_l1ss_cap = 0; 616 + 617 + /* 618 + * If we don't have LTR for the entire path from the Root Complex 619 + * to this device, we can't use ASPM L1.2 because it relies on the 620 + * LTR_L1.2_THRESHOLD. See PCIe r4.0, secs 5.5.4, 6.18. 621 + */ 622 + if (!child->ltr_path) 623 + child_l1ss_cap &= ~PCI_L1SS_CAP_ASPM_L1_2; 624 + 625 + if (parent_l1ss_cap & child_l1ss_cap & PCI_L1SS_CAP_ASPM_L1_1) 608 626 link->aspm_support |= ASPM_STATE_L1_1; 609 - if (upreg.l1ss_cap & dwreg.l1ss_cap & PCI_L1SS_CAP_ASPM_L1_2) 627 + if (parent_l1ss_cap & child_l1ss_cap & PCI_L1SS_CAP_ASPM_L1_2) 610 628 link->aspm_support |= ASPM_STATE_L1_2; 611 - if (upreg.l1ss_cap & dwreg.l1ss_cap & PCI_L1SS_CAP_PCIPM_L1_1) 629 + if (parent_l1ss_cap & child_l1ss_cap & PCI_L1SS_CAP_PCIPM_L1_1) 612 630 link->aspm_support |= ASPM_STATE_L1_1_PCIPM; 613 - if (upreg.l1ss_cap & dwreg.l1ss_cap & PCI_L1SS_CAP_PCIPM_L1_2) 631 + if (parent_l1ss_cap & child_l1ss_cap & PCI_L1SS_CAP_PCIPM_L1_2) 614 632 link->aspm_support |= ASPM_STATE_L1_2_PCIPM; 615 633 616 - if (upreg.l1ss_ctl1 & dwreg.l1ss_ctl1 & PCI_L1SS_CTL1_ASPM_L1_1) 634 + if (parent_l1ss_cap) 635 + pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CTL1, 636 + &parent_l1ss_ctl1); 637 + if (child_l1ss_cap) 638 + pci_read_config_dword(child, child->l1ss + PCI_L1SS_CTL1, 639 + &child_l1ss_ctl1); 640 + 641 + if (parent_l1ss_ctl1 & child_l1ss_ctl1 & PCI_L1SS_CTL1_ASPM_L1_1) 617 642 link->aspm_enabled |= ASPM_STATE_L1_1; 618 - if (upreg.l1ss_ctl1 & dwreg.l1ss_ctl1 & PCI_L1SS_CTL1_ASPM_L1_2) 643 + if (parent_l1ss_ctl1 & child_l1ss_ctl1 & PCI_L1SS_CTL1_ASPM_L1_2) 619 644 link->aspm_enabled |= 
ASPM_STATE_L1_2; 620 - if (upreg.l1ss_ctl1 & dwreg.l1ss_ctl1 & PCI_L1SS_CTL1_PCIPM_L1_1) 645 + if (parent_l1ss_ctl1 & child_l1ss_ctl1 & PCI_L1SS_CTL1_PCIPM_L1_1) 621 646 link->aspm_enabled |= ASPM_STATE_L1_1_PCIPM; 622 - if (upreg.l1ss_ctl1 & dwreg.l1ss_ctl1 & PCI_L1SS_CTL1_PCIPM_L1_2) 647 + if (parent_l1ss_ctl1 & child_l1ss_ctl1 & PCI_L1SS_CTL1_PCIPM_L1_2) 623 648 link->aspm_enabled |= ASPM_STATE_L1_2_PCIPM; 624 649 625 650 if (link->aspm_support & ASPM_STATE_L1SS) 626 - aspm_calc_l1ss_info(link, &upreg, &dwreg); 651 + aspm_calc_l1ss_info(link, parent_l1ss_cap, child_l1ss_cap); 627 652 628 653 /* Save default state */ 629 654 link->aspm_default = link->aspm_enabled; ··· 680 651 } 681 652 } 682 653 683 - static void pci_clear_and_set_dword(struct pci_dev *pdev, int pos, 684 - u32 clear, u32 set) 685 - { 686 - u32 val; 687 - 688 - pci_read_config_dword(pdev, pos, &val); 689 - val &= ~clear; 690 - val |= set; 691 - pci_write_config_dword(pdev, pos, val); 692 - } 693 - 694 654 /* Configure the ASPM L1 substates */ 695 655 static void pcie_config_aspm_l1ss(struct pcie_link_state *link, u32 state) 696 656 { 697 657 u32 val, enable_req; 698 658 struct pci_dev *child = link->downstream, *parent = link->pdev; 699 - u32 up_cap_ptr = link->l1ss.up_cap_ptr; 700 - u32 dw_cap_ptr = link->l1ss.dw_cap_ptr; 701 659 702 660 enable_req = (link->aspm_enabled ^ state) & state; 703 661 ··· 702 686 */ 703 687 704 688 /* Disable all L1 substates */ 705 - pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1, 689 + pci_clear_and_set_dword(child, child->l1ss + PCI_L1SS_CTL1, 706 690 PCI_L1SS_CTL1_L1SS_MASK, 0); 707 - pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1, 691 + pci_clear_and_set_dword(parent, parent->l1ss + PCI_L1SS_CTL1, 708 692 PCI_L1SS_CTL1_L1SS_MASK, 0); 709 693 /* 710 694 * If needed, disable L1, and it gets enabled later ··· 715 699 PCI_EXP_LNKCTL_ASPM_L1, 0); 716 700 pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL, 717 701 PCI_EXP_LNKCTL_ASPM_L1, 
0); 718 - } 719 - 720 - if (enable_req & ASPM_STATE_L1_2_MASK) { 721 - 722 - /* Program T_POWER_ON times in both ports */ 723 - pci_write_config_dword(parent, up_cap_ptr + PCI_L1SS_CTL2, 724 - link->l1ss.ctl2); 725 - pci_write_config_dword(child, dw_cap_ptr + PCI_L1SS_CTL2, 726 - link->l1ss.ctl2); 727 - 728 - /* Program Common_Mode_Restore_Time in upstream device */ 729 - pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1, 730 - PCI_L1SS_CTL1_CM_RESTORE_TIME, 731 - link->l1ss.ctl1); 732 - 733 - /* Program LTR_L1.2_THRESHOLD time in both ports */ 734 - pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1, 735 - PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 736 - PCI_L1SS_CTL1_LTR_L12_TH_SCALE, 737 - link->l1ss.ctl1); 738 - pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1, 739 - PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 740 - PCI_L1SS_CTL1_LTR_L12_TH_SCALE, 741 - link->l1ss.ctl1); 742 702 } 743 703 744 704 val = 0; ··· 728 736 val |= PCI_L1SS_CTL1_PCIPM_L1_2; 729 737 730 738 /* Enable what we need to enable */ 731 - pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1, 739 + pci_clear_and_set_dword(parent, parent->l1ss + PCI_L1SS_CTL1, 732 740 PCI_L1SS_CTL1_L1SS_MASK, val); 733 - pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1, 741 + pci_clear_and_set_dword(child, child->l1ss + PCI_L1SS_CTL1, 734 742 PCI_L1SS_CTL1_L1SS_MASK, val); 735 743 } 736 744
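The `aspm_calc_l1ss_info()` rework above still feeds `encode_l12_threshold()` a threshold in nanoseconds and gets back the LTR_L1.2_THRESHOLD scale/value pair, where each scale step multiplies the unit by 32 (the visible tail of the helper, `*value = threshold_ns >> 25`, is the scale-5 case). A loop-based userspace approximation of that encoding — the kernel uses an explicit if-ladder, so treat this as a sketch:

```c
#include <assert.h>

/* Encode a threshold in ns as an LTR_L1.2_THRESHOLD scale/value pair.
 * The Value field is 10 bits; each Scale step is a 32x larger unit,
 * so the decoded threshold is value << (5 * scale) ns. */
static void encode_l12_threshold(unsigned int threshold_ns,
				 unsigned int *scale, unsigned int *value)
{
	unsigned int s = 0;

	while (threshold_ns > 0x3ff && s < 5) {
		threshold_ns >>= 5;	/* move to the next 32x unit */
		s++;
	}
	*scale = s;
	*value = threshold_ns;
}
```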
+3
drivers/pci/pcie/bw_notification.c
··· 14 14 * and warns when links become degraded in operation. 15 15 */ 16 16 17 + #define dev_fmt(fmt) "bw_notification: " fmt 18 + 17 19 #include "../pci.h" 18 20 #include "portdrv.h" 19 21 ··· 99 97 return ret; 100 98 101 99 pcie_enable_link_bandwidth_notification(srv->port); 100 + pci_info(srv->port, "enabled with IRQ %d\n", srv->irq); 102 101 103 102 return 0; 104 103 }
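The `#define dev_fmt(fmt)` added above works by string-literal concatenation: the logging macros paste the prefix onto the format string at compile time, so every message from this file gains a `"bw_notification: "` tag. A simplified userspace model (the real `pci_info()` expands through `dev_printk()`, not `snprintf()`):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

#define dev_fmt(fmt) "bw_notification: " fmt

/* Model of pci_info(port, "enabled with IRQ %d\n", irq) with the
 * dev_fmt() prefix applied; the device name stands in for dev_name(). */
static int format_msg(char *buf, size_t len, const char *port, int irq)
{
	return snprintf(buf, len, "%s: " dev_fmt("enabled with IRQ %d\n"),
			port, irq);
}
```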
+5 -2
drivers/pci/pcie/dpc.c
··· 103 103 * Wait until the Link is inactive, then clear DPC Trigger Status 104 104 * to allow the Port to leave DPC. 105 105 */ 106 - pcie_wait_for_link(pdev, false); 106 + if (!pcie_wait_for_link(pdev, false)) 107 + pci_info(pdev, "Data Link Layer Link Active not cleared in 1000 msec\n"); 107 108 108 109 if (pdev->dpc_rp_extensions && dpc_wait_rp_inactive(pdev)) 109 110 return PCI_ERS_RESULT_DISCONNECT; ··· 112 111 pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS, 113 112 PCI_EXP_DPC_STATUS_TRIGGER); 114 113 115 - if (!pcie_wait_for_link(pdev, true)) 114 + if (!pcie_wait_for_link(pdev, true)) { 115 + pci_info(pdev, "Data Link Layer Link Active not set in 1000 msec\n"); 116 116 return PCI_ERS_RESULT_DISCONNECT; 117 + } 117 118 118 119 return PCI_ERS_RESULT_RECOVERED; 119 120 }
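The DPC change above relies on `pcie_wait_for_link()` returning whether the link reached the requested state, with the timeout message now logged by the caller rather than the helper. A simplified sketch of that contract (no real sleeping; the fake predicate below is purely for illustration):

```c
#include <stdbool.h>
#include <stddef.h>
#include <assert.h>

/* Poll a "link active" predicate until it matches the requested state
 * or the poll budget runs out; the caller decides what to log. */
static bool wait_for_state(bool (*link_active)(void *ctx), void *ctx,
			   bool active, int max_polls)
{
	while (max_polls--) {
		if (link_active(ctx) == active)
			return true;
	}
	return false;
}

/* Fake link that comes up after a configurable number of polls */
static int fake_polls_left;
static bool fake_link_up(void *ctx)
{
	(void)ctx;
	return --fake_polls_left <= 0;
}
```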
+16 -1
drivers/pci/probe.c
··· 941 941 942 942 pcibios_add_bus(bus); 943 943 944 + if (bus->ops->add_bus) { 945 + err = bus->ops->add_bus(bus); 946 + if (WARN_ON(err < 0)) 947 + dev_err(&bus->dev, "failed to add bus: %d\n", err); 948 + } 949 + 944 950 /* Create legacy_io and legacy_mem files for this bus */ 945 951 pci_create_legacy_files(bus); 946 952 ··· 1042 1036 struct pci_dev *bridge, int busnr) 1043 1037 { 1044 1038 struct pci_bus *child; 1039 + struct pci_host_bridge *host; 1045 1040 int i; 1046 1041 int ret; 1047 1042 ··· 1052 1045 return NULL; 1053 1046 1054 1047 child->parent = parent; 1055 - child->ops = parent->ops; 1056 1048 child->msi = parent->msi; 1057 1049 child->sysdata = parent->sysdata; 1058 1050 child->bus_flags = parent->bus_flags; 1051 + 1052 + host = pci_find_host_bridge(parent); 1053 + if (host->child_ops) 1054 + child->ops = host->child_ops; 1055 + else 1056 + child->ops = parent->ops; 1059 1057 1060 1058 /* 1061 1059 * Initialize some portions of the bus device, but don't register ··· 2117 2105 2118 2106 if (!pci_is_pcie(dev)) 2119 2107 return; 2108 + 2109 + /* Read L1 PM substate capabilities */ 2110 + dev->l1ss = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_L1SS); 2120 2111 2121 2112 pcie_capability_read_dword(dev, PCI_EXP_DEVCAP2, &cap); 2122 2113 if (!(cap & PCI_EXP_DEVCAP2_LTR))
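The `child_ops` change in `pci_alloc_child_bus()` above is a simple fallback: if the host bridge supplies a dedicated config-access `pci_ops` for child buses, children use it; otherwise they inherit the parent bus's ops, as before. A sketch with the structs trimmed to the relevant field:

```c
#include <stddef.h>
#include <assert.h>

struct pci_ops { int id; };	/* stand-in for the real ops table */

/* Mirror of: child->ops = host->child_ops ? host->child_ops
 *                                         : parent->ops; */
static struct pci_ops *pick_child_ops(struct pci_ops *host_child_ops,
				      struct pci_ops *parent_ops)
{
	return host_child_ops ? host_child_ops : parent_ops;
}
```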
+42 -32
drivers/pci/quirks.c
··· 1846 1846 */ 1847 1847 static void quirk_intel_pcie_pm(struct pci_dev *dev) 1848 1848 { 1849 - pci_pm_d3_delay = 120; 1849 + pci_pm_d3hot_delay = 120; 1850 1850 dev->no_d1d2 = 1; 1851 1851 } 1852 1852 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x25e2, quirk_intel_pcie_pm); ··· 1873 1873 1874 1874 static void quirk_d3hot_delay(struct pci_dev *dev, unsigned int delay) 1875 1875 { 1876 - if (dev->d3_delay >= delay) 1876 + if (dev->d3hot_delay >= delay) 1877 1877 return; 1878 1878 1879 - dev->d3_delay = delay; 1879 + dev->d3hot_delay = delay; 1880 1880 pci_info(dev, "extending delay after power-on from D3hot to %d msec\n", 1881 - dev->d3_delay); 1881 + dev->d3hot_delay); 1882 1882 } 1883 1883 1884 1884 static void quirk_radeon_pm(struct pci_dev *dev) ··· 3387 3387 * PCI devices which are on Intel chips can skip the 10ms delay 3388 3388 * before entering D3 mode. 3389 3389 */ 3390 - static void quirk_remove_d3_delay(struct pci_dev *dev) 3390 + static void quirk_remove_d3hot_delay(struct pci_dev *dev) 3391 3391 { 3392 - dev->d3_delay = 0; 3392 + dev->d3hot_delay = 0; 3393 3393 } 3394 - /* C600 Series devices do not need 10ms d3_delay */ 3395 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0412, quirk_remove_d3_delay); 3396 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0c00, quirk_remove_d3_delay); 3397 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0c0c, quirk_remove_d3_delay); 3398 - /* Lynxpoint-H PCH devices do not need 10ms d3_delay */ 3399 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c02, quirk_remove_d3_delay); 3400 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c18, quirk_remove_d3_delay); 3401 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c1c, quirk_remove_d3_delay); 3402 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c20, quirk_remove_d3_delay); 3403 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c22, quirk_remove_d3_delay); 3404 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c26, quirk_remove_d3_delay); 3405 - 
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c2d, quirk_remove_d3_delay); 3406 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c31, quirk_remove_d3_delay); 3407 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c3a, quirk_remove_d3_delay); 3408 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c3d, quirk_remove_d3_delay); 3409 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c4e, quirk_remove_d3_delay); 3410 - /* Intel Cherrytrail devices do not need 10ms d3_delay */ 3411 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2280, quirk_remove_d3_delay); 3412 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2298, quirk_remove_d3_delay); 3413 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x229c, quirk_remove_d3_delay); 3414 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b0, quirk_remove_d3_delay); 3415 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b5, quirk_remove_d3_delay); 3416 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b7, quirk_remove_d3_delay); 3417 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b8, quirk_remove_d3_delay); 3418 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22d8, quirk_remove_d3_delay); 3419 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22dc, quirk_remove_d3_delay); 3394 + /* C600 Series devices do not need 10ms d3hot_delay */ 3395 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0412, quirk_remove_d3hot_delay); 3396 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0c00, quirk_remove_d3hot_delay); 3397 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0c0c, quirk_remove_d3hot_delay); 3398 + /* Lynxpoint-H PCH devices do not need 10ms d3hot_delay */ 3399 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c02, quirk_remove_d3hot_delay); 3400 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c18, quirk_remove_d3hot_delay); 3401 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c1c, quirk_remove_d3hot_delay); 3402 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c20, quirk_remove_d3hot_delay); 3403 + 
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c22, quirk_remove_d3hot_delay); 3404 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c26, quirk_remove_d3hot_delay); 3405 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c2d, quirk_remove_d3hot_delay); 3406 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c31, quirk_remove_d3hot_delay); 3407 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c3a, quirk_remove_d3hot_delay); 3408 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c3d, quirk_remove_d3hot_delay); 3409 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c4e, quirk_remove_d3hot_delay); 3410 + /* Intel Cherrytrail devices do not need 10ms d3hot_delay */ 3411 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2280, quirk_remove_d3hot_delay); 3412 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2298, quirk_remove_d3hot_delay); 3413 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x229c, quirk_remove_d3hot_delay); 3414 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b0, quirk_remove_d3hot_delay); 3415 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b5, quirk_remove_d3hot_delay); 3416 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b7, quirk_remove_d3hot_delay); 3417 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b8, quirk_remove_d3hot_delay); 3418 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22d8, quirk_remove_d3hot_delay); 3419 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22dc, quirk_remove_d3hot_delay); 3420 3420 3421 3421 /* 3422 3422 * Some devices may pass our check in pci_intx_mask_supported() if ··· 4892 4892 } 4893 4893 } 4894 4894 4895 + /* 4896 + * Currently this quirk does the equivalent of 4897 + * PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF 4898 + * 4899 + * TODO: This quirk also needs to do equivalent of PCI_ACS_TB, 4900 + * if dev->external_facing || dev->untrusted 4901 + */ 4895 4902 static int pci_quirk_enable_intel_pch_acs(struct pci_dev *dev) 4896 4903 { 4897 4904 if (!pci_quirk_intel_pch_acs_match(dev)) ··· 4937 
4930 ctrl |= (cap & PCI_ACS_RR); 4938 4931 ctrl |= (cap & PCI_ACS_CR); 4939 4932 ctrl |= (cap & PCI_ACS_UF); 4933 + 4934 + if (dev->external_facing || dev->untrusted) 4935 + ctrl |= (cap & PCI_ACS_TB); 4940 4936 4941 4937 pci_write_config_dword(dev, pos + INTEL_SPT_ACS_CTRL, ctrl); 4942 4938
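The ACS hardening above (both the generic path and the Intel SPT quirk) enables Translation Blocking only when the capability advertises it and the device is external-facing or untrusted. A sketch of that gate — the bit value matches the spec's TB bit, but verify against the real `PCI_ACS_TB` definition:

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

#define ACS_TB 0x0002	/* ACS Translation Blocking */

/* TB is set only if the capability supports it AND the device is one
 * we want to harden against DMA attacks. */
static uint16_t acs_tb_ctrl(uint16_t cap, bool external, bool untrusted)
{
	uint16_t ctrl = 0;

	if (external || untrusted)
		ctrl |= (cap & ACS_TB);
	return ctrl;
}
```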
+11 -3
drivers/phy/marvell/phy-mvebu-a3700-comphy.c
··· 26 26 #define COMPHY_SIP_POWER_ON 0x82000001 27 27 #define COMPHY_SIP_POWER_OFF 0x82000002 28 28 #define COMPHY_SIP_PLL_LOCK 0x82000003 29 - #define COMPHY_FW_NOT_SUPPORTED (-1) 30 29 31 30 #define COMPHY_FW_MODE_SATA 0x1 32 31 #define COMPHY_FW_MODE_SGMII 0x2 ··· 111 112 unsigned long mode) 112 113 { 113 114 struct arm_smccc_res res; 115 + s32 ret; 114 116 115 117 arm_smccc_smc(function, lane, mode, 0, 0, 0, 0, 0, &res); 118 + ret = res.a0; 116 119 117 - return res.a0; 120 + switch (ret) { 121 + case SMCCC_RET_SUCCESS: 122 + return 0; 123 + case SMCCC_RET_NOT_SUPPORTED: 124 + return -EOPNOTSUPP; 125 + default: 126 + return -EINVAL; 127 + } 118 128 } 119 129 120 130 static int mvebu_a3700_comphy_get_fw_mode(int lane, int port, ··· 228 220 } 229 221 230 222 ret = mvebu_a3700_comphy_smc(COMPHY_SIP_POWER_ON, lane->id, fw_param); 231 - if (ret == COMPHY_FW_NOT_SUPPORTED) 223 + if (ret == -EOPNOTSUPP) 232 224 dev_err(lane->dev, 233 225 "unsupported SMC call, try updating your firmware\n"); 234 226
+11 -3
drivers/phy/marvell/phy-mvebu-cp110-comphy.c
··· 123 123 124 124 #define COMPHY_SIP_POWER_ON 0x82000001 125 125 #define COMPHY_SIP_POWER_OFF 0x82000002 126 - #define COMPHY_FW_NOT_SUPPORTED (-1) 127 126 128 127 /* 129 128 * A lane is described by the following bitfields: ··· 272 273 unsigned long lane, unsigned long mode) 273 274 { 274 275 struct arm_smccc_res res; 276 + s32 ret; 275 277 276 278 arm_smccc_smc(function, phys, lane, mode, 0, 0, 0, 0, &res); 279 + ret = res.a0; 277 280 278 - return res.a0; 281 + switch (ret) { 282 + case SMCCC_RET_SUCCESS: 283 + return 0; 284 + case SMCCC_RET_NOT_SUPPORTED: 285 + return -EOPNOTSUPP; 286 + default: 287 + return -EINVAL; 288 + } 279 289 } 280 290 281 291 static int mvebu_comphy_get_mode(bool fw_mode, int lane, int port, ··· 827 819 if (!ret) 828 820 return ret; 829 821 830 - if (ret == COMPHY_FW_NOT_SUPPORTED) 822 + if (ret == -EOPNOTSUPP) 831 823 dev_err(priv->dev, 832 824 "unsupported SMC call, try updating your firmware\n"); 833 825
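Both comphy drivers above drop the driver-private `COMPHY_FW_NOT_SUPPORTED` constant and translate raw SMCCC return codes into standard errno values at the call site. A userspace sketch of that mapping (`SMCCC_RET_SUCCESS` is 0 and `SMCCC_RET_NOT_SUPPORTED` is -1 per the Arm SMCCC spec):

```c
#include <errno.h>
#include <assert.h>

#define SMCCC_RET_SUCCESS	 0
#define SMCCC_RET_NOT_SUPPORTED	(-1)

/* Translate a firmware SMC return value (res.a0) into an errno the
 * rest of the driver can test with standard idioms. */
static int smccc_to_errno(long a0)
{
	switch (a0) {
	case SMCCC_RET_SUCCESS:
		return 0;
	case SMCCC_RET_NOT_SUPPORTED:
		return -EOPNOTSUPP;
	default:
		return -EINVAL;
	}
}
```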
+1 -1
drivers/staging/media/atomisp/pci/atomisp_v4l2.c
··· 1570 1570 spin_lock_init(&isp->lock); 1571 1571 1572 1572 /* This is not a true PCI device on SoC, so the delay is not needed. */ 1573 - pdev->d3_delay = 0; 1573 + pdev->d3hot_delay = 0; 1574 1574 1575 1575 pci_set_drvdata(pdev, isp); 1576 1576
+18
include/acpi/ghes.h
··· 53 53 GHES_SEV_PANIC = 0x3, 54 54 }; 55 55 56 + #ifdef CONFIG_ACPI_APEI_GHES 57 + /** 58 + * ghes_register_vendor_record_notifier - register a notifier for vendor 59 + * records that the kernel would otherwise ignore. 60 + * @nb: pointer to the notifier_block structure of the event handler. 61 + * 62 + * return 0 : SUCCESS, non-zero : FAIL 63 + */ 64 + int ghes_register_vendor_record_notifier(struct notifier_block *nb); 65 + 66 + /** 67 + * ghes_unregister_vendor_record_notifier - unregister the previously 68 + * registered vendor record notifier. 69 + * @nb: pointer to the notifier_block structure of the vendor record handler. 70 + */ 71 + void ghes_unregister_vendor_record_notifier(struct notifier_block *nb); 72 + #endif 73 + 56 74 int ghes_estatus_pool_init(int num_ghes); 57 75 58 76 /* From drivers/edac/ghes_edac.c */
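The `ghes_register_vendor_record_notifier()` API above follows the kernel's notifier-chain pattern: handlers register callbacks on a list, and each vendor CPER record is offered to every registered handler in turn. A minimal singly-linked userspace model of the pattern — not the kernel's locked implementation:

```c
#include <stddef.h>
#include <assert.h>

struct notifier_block {
	int (*notifier_call)(struct notifier_block *nb, void *data);
	struct notifier_block *next;
};

static struct notifier_block *vendor_chain;

static void chain_register(struct notifier_block *nb)
{
	nb->next = vendor_chain;
	vendor_chain = nb;
}

/* Offer a record to every handler; return how many were called */
static int chain_call(void *record)
{
	struct notifier_block *nb;
	int calls = 0;

	for (nb = vendor_chain; nb; nb = nb->next) {
		nb->notifier_call(nb, record);
		calls++;
	}
	return calls;
}

/* Example handler: count the records it sees */
static int seen;
static int count_record(struct notifier_block *nb, void *data)
{
	(void)nb; (void)data;
	seen++;
	return 0;
}
```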
+27 -12
include/asm-generic/io.h
··· 911 911 #include <linux/vmalloc.h> 912 912 #define __io_virt(x) ((void __force *)(x)) 913 913 914 - #ifndef CONFIG_GENERIC_IOMAP 915 - struct pci_dev; 916 - extern void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long max); 917 - 918 - #ifndef pci_iounmap 919 - #define pci_iounmap pci_iounmap 920 - static inline void pci_iounmap(struct pci_dev *dev, void __iomem *p) 921 - { 922 - } 923 - #endif 924 - #endif /* CONFIG_GENERIC_IOMAP */ 925 - 926 914 /* 927 915 * Change virtual addresses to physical addresses and vv. 928 916 * These are pretty trivial ··· 1004 1016 port &= IO_SPACE_LIMIT; 1005 1017 return (port > MMIO_UPPER_LIMIT) ? NULL : PCI_IOBASE + port; 1006 1018 } 1019 + #define __pci_ioport_unmap __pci_ioport_unmap 1020 + static inline void __pci_ioport_unmap(void __iomem *p) 1021 + { 1022 + uintptr_t start = (uintptr_t) PCI_IOBASE; 1023 + uintptr_t addr = (uintptr_t) p; 1024 + 1025 + if (addr >= start && addr < start + IO_SPACE_LIMIT) 1026 + return; 1027 + iounmap(p); 1028 + } 1007 1029 #endif 1008 1030 1009 1031 #ifndef ioport_unmap ··· 1027 1029 extern void ioport_unmap(void __iomem *p); 1028 1030 #endif /* CONFIG_GENERIC_IOMAP */ 1029 1031 #endif /* CONFIG_HAS_IOPORT_MAP */ 1032 + 1033 + #ifndef CONFIG_GENERIC_IOMAP 1034 + struct pci_dev; 1035 + extern void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long max); 1036 + 1037 + #ifndef __pci_ioport_unmap 1038 + static inline void __pci_ioport_unmap(void __iomem *p) {} 1039 + #endif 1040 + 1041 + #ifndef pci_iounmap 1042 + #define pci_iounmap pci_iounmap 1043 + static inline void pci_iounmap(struct pci_dev *dev, void __iomem *p) 1044 + { 1045 + __pci_ioport_unmap(p); 1046 + } 1047 + #endif 1048 + #endif /* CONFIG_GENERIC_IOMAP */ 1030 1049 1031 1050 /* 1032 1051 * Convert a virtual cached pointer to an uncached pointer
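The `__pci_ioport_unmap()` fix above closes a `pci_iounmap()` leak: "pointers" inside the fixed PCI I/O window were never vmapped and must not be passed to `iounmap()`, but anything else is a real `pci_iomap()` mapping and must be unmapped. A sketch of the range check with illustrative stand-ins for `PCI_IOBASE` and `IO_SPACE_LIMIT`:

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

#define FAKE_PCI_IOBASE		0x1000000UL	/* illustrative */
#define FAKE_IO_SPACE_LIMIT	0xffffUL	/* illustrative */

/* Return true if the cookie is a real mapping that iounmap() should
 * release; false if it lies inside the fixed I/O-port window. */
static bool needs_iounmap(uintptr_t addr)
{
	uintptr_t start = FAKE_PCI_IOBASE;

	return !(addr >= start && addr < start + FAKE_IO_SPACE_LIMIT);
}
```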
+1
include/linux/pci-ecam.h
···
 
 #if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
 extern const struct pci_ecam_ops pci_32b_ops;	/* 32-bit accesses only */
+extern const struct pci_ecam_ops pci_32b_read_ops; /* 32-bit read only */
 extern const struct pci_ecam_ops hisi_pcie_ops;	/* HiSilicon */
 extern const struct pci_ecam_ops thunder_pem_ecam_ops; /* Cavium ThunderX 1.x & 2.x */
 extern const struct pci_ecam_ops pci_thunder_ecam_ops; /* Cavium ThunderX 1.x */
+2 -2
include/linux/pci-ep-cfs.h
···
 #else
 static inline struct config_group *pci_ep_cfs_add_epc_group(const char *name)
 {
-	return 0;
+	return NULL;
 }
 
 static inline void pci_ep_cfs_remove_epc_group(struct config_group *group)
···
 
 static inline struct config_group *pci_ep_cfs_add_epf_group(const char *name)
 {
-	return 0;
+	return NULL;
 }
 
 static inline void pci_ep_cfs_remove_epf_group(struct config_group *group)
+3 -5
include/linux/pci.h
···
 						   user sysfs */
 	unsigned int	clear_retrain_link:1;	/* Need to clear Retrain Link
 						   bit manually */
-	unsigned int	d3_delay;	/* D3->D0 transition time in ms */
+	unsigned int	d3hot_delay;	/* D3hot->D0 transition time in ms */
 	unsigned int	d3cold_delay;	/* D3cold->D0 transition time in ms */
 
 #ifdef CONFIG_PCIEASPM
 	struct pcie_link_state	*link_state;	/* ASPM link state */
 	unsigned int	ltr_path:1;	/* Latency Tolerance Reporting
 					   supported from root to here */
+	int		l1ss;		/* L1SS Capability pointer */
 #endif
 	unsigned int	eetlp_prefix_path:1;	/* End-to-End TLP Prefix */
 
···
 	struct device	dev;
 	struct pci_bus	*bus;		/* Root bus */
 	struct pci_ops	*ops;
+	struct pci_ops	*child_ops;
 	void		*sysdata;
 	int		busnr;
 	struct list_head windows;	/* resource_entry */
···
 int pcibios_alloc_irq(struct pci_dev *dev);
 void pcibios_free_irq(struct pci_dev *dev);
 resource_size_t pcibios_default_alignment(void);
-
-#ifdef CONFIG_HIBERNATE_CALLBACKS
-extern struct dev_pm_ops pcibios_pm_ops;
-#endif
 
 #if defined(CONFIG_PCI_MMCONFIG) || defined(CONFIG_ACPI_MCFG)
 void __init pci_mmcfg_early_init(void);
+5 -1
include/uapi/linux/pci_regs.h
···
 #define PCI_CACHE_LINE_SIZE	0x0c	/* 8 bits */
 #define PCI_LATENCY_TIMER	0x0d	/* 8 bits */
 #define PCI_HEADER_TYPE		0x0e	/* 8 bits */
+#define  PCI_HEADER_TYPE_MASK		0x7f
 #define  PCI_HEADER_TYPE_NORMAL		0
 #define  PCI_HEADER_TYPE_BRIDGE		1
 #define  PCI_HEADER_TYPE_CARDBUS	2
···
 #define  PCI_PM_CAP_PME_D0	0x0800	/* PME# from D0 */
 #define  PCI_PM_CAP_PME_D1	0x1000	/* PME# from D1 */
 #define  PCI_PM_CAP_PME_D2	0x2000	/* PME# from D2 */
-#define  PCI_PM_CAP_PME_D3	0x4000	/* PME# from D3 (hot) */
+#define  PCI_PM_CAP_PME_D3hot	0x4000	/* PME# from D3 (hot) */
 #define  PCI_PM_CAP_PME_D3cold	0x8000	/* PME# from D3 (cold) */
 #define  PCI_PM_CAP_PME_SHIFT	11	/* Start of the PME Mask in PMC */
 #define PCI_PM_CTRL		4	/* PM control and status register */
···
 #define  PCI_EXP_LNKCAP_SLS_32_0GB 0x00000005 /* LNKCAP2 SLS Vector bit 4 */
 #define  PCI_EXP_LNKCAP_MLW	0x000003f0 /* Maximum Link Width */
 #define  PCI_EXP_LNKCAP_ASPMS	0x00000c00 /* ASPM Support */
+#define  PCI_EXP_LNKCAP_ASPM_L0S 0x00000400 /* ASPM L0s Support */
+#define  PCI_EXP_LNKCAP_ASPM_L1	0x00000800 /* ASPM L1 Support */
 #define  PCI_EXP_LNKCAP_L0SEL	0x00007000 /* L0s Exit Latency */
 #define  PCI_EXP_LNKCAP_L1EL	0x00038000 /* L1 Exit Latency */
 #define  PCI_EXP_LNKCAP_CLKPM	0x00040000 /* Clock Power Management */
···
 #define  PCI_L1SS_CTL1_PCIPM_L1_1	0x00000002 /* PCI-PM L1.1 Enable */
 #define  PCI_L1SS_CTL1_ASPM_L1_2	0x00000004 /* ASPM L1.2 Enable */
 #define  PCI_L1SS_CTL1_ASPM_L1_1	0x00000008 /* ASPM L1.1 Enable */
+#define  PCI_L1SS_CTL1_L1_2_MASK	0x00000005
 #define  PCI_L1SS_CTL1_L1SS_MASK	0x0000000f
 #define  PCI_L1SS_CTL1_CM_RESTORE_TIME	0x0000ff00 /* Common_Mode_Restore_Time */
 #define  PCI_L1SS_CTL1_LTR_L12_TH_VALUE	0x03ff0000 /* LTR_L1.2_THRESHOLD_Value */