
Merge tag 'pci-v5.8-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:

- Program MPS for RCiEP devices (Ashok Raj)

- Fix pci_register_host_bridge() device_register() error handling
(Rob Herring)

- Fix pci_host_bridge struct device release/free handling (Rob
Herring)

Resource management:

- Allow resizing BARs for devices on root bus (Ard Biesheuvel)

Power management:

- Reduce Thunderbolt resume time by working around devices that don't
support DLL Link Active reporting (Mika Westerberg)

- Work around a Pericom USB controller OHCI/EHCI PME# defect
(Kai-Heng Feng)

Virtualization:

- Add ACS quirk for Intel Root Complex Integrated Endpoints (Ashok
Raj)

- Avoid FLR for AMD Starship USB 3.0 (Kevin Buettner)

- Avoid FLR for AMD Matisse HD Audio & USB 3.0 (Marcos Scriven)

Error handling:

- Use only _OSC (not HEST FIRMWARE_FIRST) to determine AER ownership
(Alexandru Gagniuc, Kuppuswamy Sathyanarayanan)

- Reduce verbosity by logging only ACPI_NOTIFY_DISCONNECT_RECOVER
events (Kuppuswamy Sathyanarayanan)

- Don't enable AER by default in Kconfig (Bjorn Helgaas)

Peer-to-peer DMA:

- Add AMD Zen Raven and Renoir Root Ports to whitelist (Alex Deucher)

ASPM:

- Allow ASPM on links to PCIe-to-PCI/PCI-X Bridges (Kai-Heng Feng)

Endpoint framework:

- Fix DMA channel release in test (Kunihiko Hayashi)

- Add page size as argument to pci_epc_mem_init() (Lad Prabhakar)

- Add support to handle multiple base for mapping outbound memory
(Lad Prabhakar)

Generic host bridge driver:

- Support building as module (Rob Herring)

- Eliminate pci_host_common_probe wrappers (Rob Herring)

Amlogic Meson PCIe controller driver:

- Don't use FAST_LINK_MODE to set up link (Marc Zyngier)

Broadcom STB PCIe controller driver:

- Disable ASPM L0s if 'aspm-no-l0s' in DT (Jim Quinlan)

- Fix clk_put() error (Jim Quinlan)

- Fix window register offset (Jim Quinlan)

- Assert fundamental reset on initialization (Nicolas Saenz Julienne)

- Add notify xHCI reset property (Nicolas Saenz Julienne)

- Add init routine for Raspberry Pi 4 VL805 USB controller (Nicolas
Saenz Julienne)

- Sync with Raspberry Pi 4 firmware for VL805 initialization (Nicolas
Saenz Julienne)

Cadence PCIe controller driver:

- Remove "cdns,max-outbound-regions" DT property (replaced by
"ranges") (Kishon Vijay Abraham I)

- Read 32-bit (not 16-bit) Vendor ID/Device ID property from DT
(Kishon Vijay Abraham I)

Marvell Aardvark PCIe controller driver:

- Improve link training (Marek Behún)

- Add PHY support (Marek Behún)

- Add "phys", "max-link-speed", "reset-gpios" to dt-binding (Marek
Behún)

- Train link immediately after enabling training to work around
detection issues with some cards (Pali Rohár)

- Issue PERST via GPIO to work around detection issues (Pali Rohár)

- Don't blindly enable ASPM L0s (Pali Rohár)

- Replace custom macros by standard linux/pci_regs.h macros (Pali
Rohár)

Microsoft Hyper-V host bridge driver:

- Fix probe failure path to release resource (Wei Hu)

- Retry PCI bus D0 entry on invalid device state for kdump (Wei Hu)

Renesas R-Car PCIe controller driver:

- Fix incorrect programming of OB windows (Andrew Murray)

- Add suspend/resume (Kazufumi Ikeda)

- Rename pcie-rcar.c to pcie-rcar-host.c (Lad Prabhakar)

- Add endpoint controller driver (Lad Prabhakar)

- Fix PCIEPAMR mask calculation (Lad Prabhakar)

- Add r8a77961 to DT binding (Yoshihiro Shimoda)

Socionext UniPhier Pro5 controller driver:

- Add endpoint controller driver (Kunihiko Hayashi)

Synopsys DesignWare PCIe controller driver:

- Program outbound ATU upper limit register (Alan Mikhak)

- Fix inner MSI IRQ domain registration (Marc Zyngier)

Miscellaneous:

- Check for platform_get_irq() failure consistently (negative return
means failure) (Aman Sharma)

- Fix several runtime PM get/put imbalances (Dinghao Liu)

- Use flexible-array and struct_size() helpers for code cleanup
(Gustavo A. R. Silva)

- Update & fix issues in bridge emulation of PCIe registers (Jon
Derrick)

- Add macros for bridge window names (PCI_BRIDGE_IO_WINDOW, etc)
(Krzysztof Wilczyński)

- Work around Intel PCH MROMs that have invalid BARs (Xiaochun Lee)"

* tag 'pci-v5.8-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (100 commits)
PCI: uniphier: Add Socionext UniPhier Pro5 PCIe endpoint controller driver
PCI: Add ACS quirk for Intel Root Complex Integrated Endpoints
PCI/DPC: Print IRQ number used by port
PCI/AER: Use "aer" variable for capability offset
PCI/AER: Remove redundant dev->aer_cap checks
PCI/AER: Remove redundant pci_is_pcie() checks
PCI/AER: Remove HEST/FIRMWARE_FIRST parsing for AER ownership
PCI: tegra: Fix runtime PM imbalance on error
PCI: vmd: Filter resource type bits from shadow register
PCI: tegra194: Fix runtime PM imbalance on error
dt-bindings: PCI: Add UniPhier PCIe endpoint controller description
PCI: hv: Use struct_size() helper
PCI: Rename _DSM constants to align with spec
PCI: Avoid FLR for AMD Starship USB 3.0
PCI: Avoid FLR for AMD Matisse HD Audio & USB 3.0
x86/PCI: Drop unused xen_register_pirq() gsi_override parameter
PCI: dwc: Use private data pointer of "struct irq_domain" to get pcie_port
PCI: amlogic: meson: Don't use FAST_LINK_MODE to set up link
PCI: dwc: Fix inner MSI IRQ domain registration
PCI: dwc: pci-dra7xx: Use devm_platform_ioremap_resource_byname()
...

+3693 -2017
+8 -8
Documentation/PCI/endpoint/pci-endpoint.rst
··· 78 78 Cleanup the pci_epc_mem structure allocated during pci_epc_mem_init(). 79 79 80 80 81 - APIs for the PCI Endpoint Function Driver 82 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 81 + EPC APIs for the PCI Endpoint Function Driver 82 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 83 83 84 84 This section lists the APIs that the PCI Endpoint core provides to be used 85 85 by the PCI endpoint function driver. ··· 117 117 The PCI endpoint function driver should use pci_epc_mem_free_addr() to 118 118 free the memory space allocated using pci_epc_mem_alloc_addr(). 119 119 120 - Other APIs 121 - ~~~~~~~~~~ 120 + Other EPC APIs 121 + ~~~~~~~~~~~~~~ 122 122 123 123 There are other APIs provided by the EPC library. These are used for binding 124 124 the EPF device with EPC device. pci-ep-cfs.c can be used as reference for ··· 160 160 The EPF library provides APIs to be used by the function driver and the EPC 161 161 library to provide endpoint mode functionality. 162 162 163 - APIs for the PCI Endpoint Function Driver 164 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 163 + EPF APIs for the PCI Endpoint Function Driver 164 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 165 165 166 166 This section lists the APIs that the PCI Endpoint core provides to be used 167 167 by the PCI endpoint function driver. ··· 204 204 The PCI endpoint controller library invokes pci_epf_linkup() when the 205 205 EPC device has established the connection to the host. 206 206 207 - Other APIs 208 - ~~~~~~~~~~ 207 + Other EPF APIs 208 + ~~~~~~~~~~~~~~ 209 209 210 210 There are other APIs provided by the EPF library. These are used to notify 211 211 the function driver when the EPF device is bound to the EPC device.
+4
Documentation/devicetree/bindings/pci/aardvark-pci.txt
··· 19 19 - interrupt-map-mask and interrupt-map: standard PCI properties to 20 20 define the mapping of the PCIe interface to interrupt numbers. 21 21 - bus-range: PCI bus numbers covered 22 + - phys: the PCIe PHY handle 23 + - max-link-speed: see pci.txt 24 + - reset-gpios: see pci.txt 22 25 23 26 In addition, the Device Tree describing an Aardvark PCIe controller 24 27 must include a sub-node that describes the legacy interrupt controller ··· 51 48 <0 0 0 2 &pcie_intc 1>, 52 49 <0 0 0 3 &pcie_intc 2>, 53 50 <0 0 0 4 &pcie_intc 3>; 51 + phys = <&comphy1 0>; 54 52 pcie_intc: interrupt-controller { 55 53 interrupt-controller; 56 54 #interrupt-cells = <1>;
+2
Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
··· 56 56 description: Indicates usage of spread-spectrum clocking. 57 57 type: boolean 58 58 59 + aspm-no-l0s: true 60 + 59 61 required: 60 62 - reg 61 63 - dma-ranges
+1 -1
Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
··· 10 10 - Tom Joseph <tjoseph@cadence.com> 11 11 12 12 allOf: 13 - - $ref: "cdns-pcie.yaml#" 13 + - $ref: "cdns-pcie-ep.yaml#" 14 14 - $ref: "pci-ep.yaml#" 15 15 16 16 properties:
+1 -2
Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
··· 45 45 #size-cells = <2>; 46 46 bus-range = <0x0 0xff>; 47 47 linux,pci-domain = <0>; 48 - cdns,max-outbound-regions = <16>; 49 - cdns,no-bar-match-nbits = <32>; 50 48 vendor-id = <0x17cd>; 51 49 device-id = <0x0200>; 52 50 ··· 55 57 56 58 ranges = <0x02000000 0x0 0x42000000 0x0 0x42000000 0x0 0x1000000>, 57 59 <0x01000000 0x0 0x43000000 0x0 0x43000000 0x0 0x0010000>; 60 + dma-ranges = <0x02000000 0x0 0x0 0x0 0x0 0x1 0x00000000>; 58 61 59 62 #interrupt-cells = <0x1>; 60 63
+25
Documentation/devicetree/bindings/pci/cdns-pcie-ep.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: "http://devicetree.org/schemas/pci/cdns-pcie-ep.yaml#" 5 + $schema: "http://devicetree.org/meta-schemas/core.yaml#" 6 + 7 + title: Cadence PCIe Device 8 + 9 + maintainers: 10 + - Tom Joseph <tjoseph@cadence.com> 11 + 12 + allOf: 13 + - $ref: "cdns-pcie.yaml#" 14 + 15 + properties: 16 + cdns,max-outbound-regions: 17 + description: maximum number of outbound regions 18 + allOf: 19 + - $ref: /schemas/types.yaml#/definitions/uint32 20 + minimum: 1 21 + maximum: 32 22 + default: 32 23 + 24 + required: 25 + - cdns,max-outbound-regions
+10
Documentation/devicetree/bindings/pci/cdns-pcie-host.yaml
··· 14 14 - $ref: "cdns-pcie.yaml#" 15 15 16 16 properties: 17 + cdns,max-outbound-regions: 18 + description: maximum number of outbound regions 19 + allOf: 20 + - $ref: /schemas/types.yaml#/definitions/uint32 21 + minimum: 1 22 + maximum: 32 23 + default: 32 24 + deprecated: true 25 + 17 26 cdns,no-bar-match-nbits: 18 27 description: 19 28 Set into the no BAR match register to configure the number of least ··· 31 22 minimum: 0 32 23 maximum: 64 33 24 default: 32 25 + deprecated: true 34 26 35 27 msi-parent: true
-7
Documentation/devicetree/bindings/pci/cdns-pcie.yaml
··· 10 10 - Tom Joseph <tjoseph@cadence.com> 11 11 12 12 properties: 13 - cdns,max-outbound-regions: 14 - description: maximum number of outbound regions 15 - $ref: /schemas/types.yaml#/definitions/uint32 16 - minimum: 1 17 - maximum: 32 18 - default: 32 19 - 20 13 phys: 21 14 description: 22 15 One per lane if more than one in the list. If only one PHY listed it must
+77
Documentation/devicetree/bindings/pci/rcar-pci-ep.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + # Copyright (C) 2020 Renesas Electronics Europe GmbH - https://www.renesas.com/eu/en/ 3 + %YAML 1.2 4 + --- 5 + $id: http://devicetree.org/schemas/pci/rcar-pci-ep.yaml# 6 + $schema: http://devicetree.org/meta-schemas/core.yaml# 7 + 8 + title: Renesas R-Car PCIe Endpoint 9 + 10 + maintainers: 11 + - Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> 12 + - Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> 13 + 14 + properties: 15 + compatible: 16 + items: 17 + - const: renesas,r8a774c0-pcie-ep 18 + - const: renesas,rcar-gen3-pcie-ep 19 + 20 + reg: 21 + maxItems: 5 22 + 23 + reg-names: 24 + items: 25 + - const: apb-base 26 + - const: memory0 27 + - const: memory1 28 + - const: memory2 29 + - const: memory3 30 + 31 + power-domains: 32 + maxItems: 1 33 + 34 + resets: 35 + maxItems: 1 36 + 37 + clocks: 38 + maxItems: 1 39 + 40 + clock-names: 41 + items: 42 + - const: pcie 43 + 44 + max-functions: 45 + minimum: 1 46 + maximum: 1 47 + 48 + required: 49 + - compatible 50 + - reg 51 + - reg-names 52 + - resets 53 + - power-domains 54 + - clocks 55 + - clock-names 56 + - max-functions 57 + 58 + examples: 59 + - | 60 + #include <dt-bindings/clock/r8a774c0-cpg-mssr.h> 61 + #include <dt-bindings/power/r8a774c0-sysc.h> 62 + 63 + pcie0_ep: pcie-ep@fe000000 { 64 + compatible = "renesas,r8a774c0-pcie-ep", 65 + "renesas,rcar-gen3-pcie-ep"; 66 + reg = <0xfe000000 0x80000>, 67 + <0xfe100000 0x100000>, 68 + <0xfe200000 0x200000>, 69 + <0x30000000 0x8000000>, 70 + <0x38000000 0x8000000>; 71 + reg-names = "apb-base", "memory0", "memory1", "memory2", "memory3"; 72 + resets = <&cpg 319>; 73 + power-domains = <&sysc R8A774C0_PD_ALWAYS_ON>; 74 + clocks = <&cpg CPG_MOD 319>; 75 + clock-names = "pcie"; 76 + max-functions = /bits/ 8 <1>; 77 + };
+2 -1
Documentation/devicetree/bindings/pci/rcar-pci.txt
··· 11 11 "renesas,pcie-r8a7791" for the R8A7791 SoC; 12 12 "renesas,pcie-r8a7793" for the R8A7793 SoC; 13 13 "renesas,pcie-r8a7795" for the R8A7795 SoC; 14 - "renesas,pcie-r8a7796" for the R8A7796 SoC; 14 + "renesas,pcie-r8a7796" for the R8A77960 SoC; 15 + "renesas,pcie-r8a77961" for the R8A77961 SoC; 15 16 "renesas,pcie-r8a77980" for the R8A77980 SoC; 16 17 "renesas,pcie-r8a77990" for the R8A77990 SoC; 17 18 "renesas,pcie-rcar-gen2" for a generic R-Car Gen2 or
+92
Documentation/devicetree/bindings/pci/socionext,uniphier-pcie-ep.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/socionext,uniphier-pcie-ep.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Socionext UniPhier PCIe endpoint controller 8 + 9 + description: | 10 + UniPhier PCIe endpoint controller is based on the Synopsys DesignWare 11 + PCI core. It shares common features with the PCIe DesignWare core and 12 + inherits common properties defined in 13 + Documentation/devicetree/bindings/pci/designware-pcie.txt. 14 + 15 + maintainers: 16 + - Kunihiko Hayashi <hayashi.kunihiko@socionext.com> 17 + 18 + allOf: 19 + - $ref: "pci-ep.yaml#" 20 + 21 + properties: 22 + compatible: 23 + const: socionext,uniphier-pro5-pcie-ep 24 + 25 + reg: 26 + maxItems: 4 27 + 28 + reg-names: 29 + items: 30 + - const: dbi 31 + - const: dbi2 32 + - const: link 33 + - const: addr_space 34 + 35 + clocks: 36 + maxItems: 2 37 + 38 + clock-names: 39 + items: 40 + - const: gio 41 + - const: link 42 + 43 + resets: 44 + maxItems: 2 45 + 46 + reset-names: 47 + items: 48 + - const: gio 49 + - const: link 50 + 51 + num-ib-windows: 52 + const: 16 53 + 54 + num-ob-windows: 55 + const: 16 56 + 57 + num-lanes: true 58 + 59 + phys: 60 + maxItems: 1 61 + 62 + phy-names: 63 + const: pcie-phy 64 + 65 + required: 66 + - compatible 67 + - reg 68 + - reg-names 69 + - clocks 70 + - clock-names 71 + - resets 72 + - reset-names 73 + 74 + additionalProperties: false 75 + 76 + examples: 77 + - | 78 + pcie_ep: pcie-ep@66000000 { 79 + compatible = "socionext,uniphier-pro5-pcie-ep"; 80 + reg-names = "dbi", "dbi2", "link", "addr_space"; 81 + reg = <0x66000000 0x1000>, <0x66001000 0x1000>, 82 + <0x66010000 0x10000>, <0x67000000 0x400000>; 83 + clock-names = "gio", "link"; 84 + clocks = <&sys_clk 12>, <&sys_clk 24>; 85 + reset-names = "gio", "link"; 86 + resets = <&sys_rst 12>, <&sys_rst 24>; 87 + num-ib-windows = <16>; 88 + num-ob-windows = <16>; 89 + num-lanes = <4>; 90 + phy-names = "pcie-phy"; 91 + phys = <&pcie_phy>; 92 + };
+4 -3
MAINTAINERS
··· 13074 13074 L: linux-arm-kernel@lists.infradead.org 13075 13075 S: Maintained 13076 13076 F: Documentation/devicetree/bindings/pci/layerscape-pcie-gen4.txt 13077 - F: drivers/pci/controller/mobibeil/pcie-layerscape-gen4.c 13077 + F: drivers/pci/controller/mobiveil/pcie-layerscape-gen4.c 13078 13078 13079 13079 PCI DRIVER FOR RENESAS R-CAR 13080 13080 M: Marek Vasut <marek.vasut+renesas@gmail.com> ··· 13082 13082 L: linux-pci@vger.kernel.org 13083 13083 L: linux-renesas-soc@vger.kernel.org 13084 13084 S: Maintained 13085 + F: Documentation/devicetree/bindings/pci/*rcar* 13085 13086 F: drivers/pci/controller/*rcar* 13086 13087 13087 13088 PCI DRIVER FOR SAMSUNG EXYNOS ··· 13276 13275 M: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> 13277 13276 L: linux-pci@vger.kernel.org 13278 13277 S: Maintained 13279 - F: Documentation/devicetree/bindings/pci/uniphier-pcie.txt 13280 - F: drivers/pci/controller/dwc/pcie-uniphier.c 13278 + F: Documentation/devicetree/bindings/pci/uniphier-pcie* 13279 + F: drivers/pci/controller/dwc/pcie-uniphier* 13281 13280 13282 13281 PCIE DRIVER FOR ST SPEAR13XX 13283 13282 M: Pratyush Anand <pratyush.anand@gmail.com>
+2 -2
arch/arm64/kernel/pci.c
··· 117 117 struct device *dev = &root->device->dev; 118 118 struct resource *bus_res = &root->secondary; 119 119 u16 seg = root->segment; 120 - struct pci_ecam_ops *ecam_ops; 120 + const struct pci_ecam_ops *ecam_ops; 121 121 struct resource cfgres; 122 122 struct acpi_device *adev; 123 123 struct pci_config_window *cfg; ··· 185 185 186 186 root_ops->release_info = pci_acpi_generic_release_info; 187 187 root_ops->prepare_resources = pci_acpi_root_prepare_resources; 188 - root_ops->pci_ops = &ri->cfg->ops->pci_ops; 188 + root_ops->pci_ops = (struct pci_ops *)&ri->cfg->ops->pci_ops; 189 189 bus = acpi_pci_root_create(root, root_ops, &ri->common, ri->cfg); 190 190 if (!bus) 191 191 return NULL;
+4
arch/x86/pci/fixup.c
··· 572 572 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6f60, pci_invalid_bar); 573 573 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fa0, pci_invalid_bar); 574 574 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fc0, pci_invalid_bar); 575 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0xa1ec, pci_invalid_bar); 576 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0xa1ed, pci_invalid_bar); 577 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0xa26c, pci_invalid_bar); 578 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0xa26d, pci_invalid_bar); 575 579 576 580 /* 577 581 * Device [1022:7808]
+6 -10
arch/x86/pci/xen.c
··· 60 60 } 61 61 62 62 #ifdef CONFIG_ACPI 63 - static int xen_register_pirq(u32 gsi, int gsi_override, int triggering, 64 - bool set_pirq) 63 + static int xen_register_pirq(u32 gsi, int triggering, bool set_pirq) 65 64 { 66 65 int rc, pirq = -1, irq = -1; 67 66 struct physdev_map_pirq map_irq; ··· 93 94 name = "ioapic-level"; 94 95 } 95 96 96 - if (gsi_override >= 0) 97 - gsi = gsi_override; 98 - 99 97 irq = xen_bind_pirq_gsi_to_irq(gsi, map_irq.pirq, shareable, name); 100 98 if (irq < 0) 101 99 goto out; ··· 108 112 if (!xen_hvm_domain()) 109 113 return -1; 110 114 111 - return xen_register_pirq(gsi, -1 /* no GSI override */, trigger, 115 + return xen_register_pirq(gsi, trigger, 112 116 false /* no mapping of GSI to PIRQ */); 113 117 } 114 118 115 119 #ifdef CONFIG_XEN_DOM0 116 - static int xen_register_gsi(u32 gsi, int gsi_override, int triggering, int polarity) 120 + static int xen_register_gsi(u32 gsi, int triggering, int polarity) 117 121 { 118 122 int rc, irq; 119 123 struct physdev_setup_gsi setup_gsi; ··· 124 128 printk(KERN_DEBUG "xen: registering gsi %u triggering %d polarity %d\n", 125 129 gsi, triggering, polarity); 126 130 127 - irq = xen_register_pirq(gsi, gsi_override, triggering, true); 131 + irq = xen_register_pirq(gsi, triggering, true); 128 132 129 133 setup_gsi.gsi = gsi; 130 134 setup_gsi.triggering = (triggering == ACPI_EDGE_SENSITIVE ? 0 : 1); ··· 144 148 static int acpi_register_gsi_xen(struct device *dev, u32 gsi, 145 149 int trigger, int polarity) 146 150 { 147 - return xen_register_gsi(gsi, -1 /* no GSI override */, trigger, polarity); 151 + return xen_register_gsi(gsi, trigger, polarity); 148 152 } 149 153 #endif 150 154 #endif ··· 487 491 if (acpi_get_override_irq(irq, &trigger, &polarity) == -1) 488 492 continue; 489 493 490 - xen_register_pirq(irq, -1 /* no GSI override */, 494 + xen_register_pirq(irq, 491 495 trigger ? ACPI_LEVEL_SENSITIVE : ACPI_EDGE_SENSITIVE, 492 496 true /* Map GSI to PIRQ */); 493 497 }
+4 -4
drivers/acpi/pci_mcfg.c
··· 29 29 u32 oem_revision; 30 30 u16 segment; 31 31 struct resource bus_range; 32 - struct pci_ecam_ops *ops; 32 + const struct pci_ecam_ops *ops; 33 33 struct resource cfgres; 34 34 }; 35 35 ··· 165 165 166 166 static void pci_mcfg_apply_quirks(struct acpi_pci_root *root, 167 167 struct resource *cfgres, 168 - struct pci_ecam_ops **ecam_ops) 168 + const struct pci_ecam_ops **ecam_ops) 169 169 { 170 170 #ifdef CONFIG_PCI_QUIRKS 171 171 u16 segment = root->segment; ··· 191 191 static LIST_HEAD(pci_mcfg_list); 192 192 193 193 int pci_mcfg_lookup(struct acpi_pci_root *root, struct resource *cfgres, 194 - struct pci_ecam_ops **ecam_ops) 194 + const struct pci_ecam_ops **ecam_ops) 195 195 { 196 - struct pci_ecam_ops *ops = &pci_generic_ecam_ops; 196 + const struct pci_ecam_ops *ops = &pci_generic_ecam_ops; 197 197 struct resource *bus_res = &root->secondary; 198 198 u16 seg = root->segment; 199 199 struct mcfg_entry *e;
+3 -8
drivers/acpi/pci_root.c
··· 483 483 if (IS_ENABLED(CONFIG_HOTPLUG_PCI_SHPC)) 484 484 control |= OSC_PCI_SHPC_NATIVE_HP_CONTROL; 485 485 486 - if (pci_aer_available()) { 487 - if (aer_acpi_firmware_first()) 488 - dev_info(&device->dev, 489 - "PCIe AER handled by firmware\n"); 490 - else 491 - control |= OSC_PCI_EXPRESS_AER_CONTROL; 492 - } 486 + if (pci_aer_available()) 487 + control |= OSC_PCI_EXPRESS_AER_CONTROL; 493 488 494 489 /* 495 490 * Per the Downstream Port Containment Related Enhancements ECN to ··· 933 938 * assignments made by firmware for this host bridge. 934 939 */ 935 940 obj = acpi_evaluate_dsm(ACPI_HANDLE(bus->bridge), &pci_acpi_dsm_guid, 1, 936 - IGNORE_PCI_BOOT_CONFIG_DSM, NULL); 941 + DSM_PCI_PRESERVE_BOOT_CONFIG, NULL); 937 942 if (obj && obj->type == ACPI_TYPE_INTEGER && obj->integer.value == 0) 938 943 host_bridge->preserve_config = 1; 939 944 ACPI_FREE(obj);
+25 -15
drivers/base/platform.c
··· 153 153 * if (irq < 0) 154 154 * return irq; 155 155 * 156 - * Return: IRQ number on success, negative error number on failure. 156 + * Return: non-zero IRQ number on success, negative error number on failure. 157 157 */ 158 158 int platform_get_irq_optional(struct platform_device *dev, unsigned int num) 159 159 { 160 + int ret; 160 161 #ifdef CONFIG_SPARC 161 162 /* sparc does not have irqs represented as IORESOURCE_IRQ resources */ 162 163 if (!dev || num >= dev->archdata.num_irqs) 163 164 return -ENXIO; 164 - return dev->archdata.irqs[num]; 165 + ret = dev->archdata.irqs[num]; 166 + goto out; 165 167 #else 166 168 struct resource *r; 167 - int ret; 168 169 169 170 if (IS_ENABLED(CONFIG_OF_IRQ) && dev->dev.of_node) { 170 171 ret = of_irq_get(dev->dev.of_node, num); 171 172 if (ret > 0 || ret == -EPROBE_DEFER) 172 - return ret; 173 + goto out; 173 174 } 174 175 175 176 r = platform_get_resource(dev, IORESOURCE_IRQ, num); ··· 178 177 if (r && r->flags & IORESOURCE_DISABLED) { 179 178 ret = acpi_irq_get(ACPI_HANDLE(&dev->dev), num, r); 180 179 if (ret) 181 - return ret; 180 + goto out; 182 181 } 183 182 } 184 183 ··· 192 191 struct irq_data *irqd; 193 192 194 193 irqd = irq_get_irq_data(r->start); 195 - if (!irqd) 196 - return -ENXIO; 194 + if (!irqd) { 195 + ret = -ENXIO; 196 + goto out; 197 + } 197 198 irqd_set_trigger_type(irqd, r->flags & IORESOURCE_BITS); 198 199 } 199 200 200 - if (r) 201 - return r->start; 201 + if (r) { 202 + ret = r->start; 203 + goto out; 204 + } 202 205 203 206 /* 204 207 * For the index 0 interrupt, allow falling back to GpioInt ··· 215 210 ret = acpi_dev_gpio_irq_get(ACPI_COMPANION(&dev->dev), num); 216 211 /* Our callers expect -ENXIO for missing IRQs. */ 217 212 if (ret >= 0 || ret == -EPROBE_DEFER) 218 - return ret; 213 + goto out; 219 214 } 220 215 221 - return -ENXIO; 216 + ret = -ENXIO; 222 217 #endif 218 + out: 219 + WARN(ret == 0, "0 is an invalid IRQ number\n"); 220 + return ret; 223 221 } 224 222 EXPORT_SYMBOL_GPL(platform_get_irq_optional); 225 223 ··· 241 233 * if (irq < 0) 242 234 * return irq; 243 235 * 244 - * Return: IRQ number on success, negative error number on failure. 236 + * Return: non-zero IRQ number on success, negative error number on failure. 245 237 */ 246 238 int platform_get_irq(struct platform_device *dev, unsigned int num) 247 239 { ··· 313 305 } 314 306 315 307 r = platform_get_resource_byname(dev, IORESOURCE_IRQ, name); 316 - if (r) 308 + if (r) { 309 + WARN(r->start == 0, "0 is an invalid IRQ number\n"); 317 310 return r->start; 311 + } 318 312 319 313 return -ENXIO; 320 314 } ··· 328 318 * 329 319 * Get an IRQ like platform_get_irq(), but then by name rather then by index. 330 320 * 331 - * Return: IRQ number on success, negative error number on failure. 321 + * Return: non-zero IRQ number on success, negative error number on failure. 332 322 */ 333 323 int platform_get_irq_byname(struct platform_device *dev, const char *name) 334 324 { ··· 350 340 * Get an optional IRQ by name like platform_get_irq_byname(). Except that it 351 341 * does not print an error message if an IRQ can not be obtained. 352 342 * 353 - * Return: IRQ number on success, negative error number on failure. 343 + * Return: non-zero IRQ number on success, negative error number on failure. 354 344 */ 355 345 int platform_get_irq_byname_optional(struct platform_device *dev, 356 346 const char *name)
+2 -1
drivers/firmware/Kconfig
··· 178 178 Otherwise, say N. 179 179 180 180 config RASPBERRYPI_FIRMWARE 181 - tristate "Raspberry Pi Firmware Driver" 181 + bool "Raspberry Pi Firmware Driver" 182 182 depends on BCM2835_MBOX 183 + default USB_PCI 183 184 help 184 185 This option enables support for communicating with the firmware on the 185 186 Raspberry Pi.
+61
drivers/firmware/raspberrypi.c
··· 12 12 #include <linux/of_platform.h> 13 13 #include <linux/platform_device.h> 14 14 #include <linux/slab.h> 15 + #include <linux/pci.h> 16 + #include <linux/delay.h> 15 17 #include <soc/bcm2835/raspberrypi-firmware.h> 16 18 17 19 #define MBOX_MSG(chan, data28) (((data28) & ~0xf) | ((chan) & 0xf)) 18 20 #define MBOX_CHAN(msg) ((msg) & 0xf) 19 21 #define MBOX_DATA28(msg) ((msg) & ~0xf) 20 22 #define MBOX_CHAN_PROPERTY 8 23 + 24 + #define VL805_PCI_CONFIG_VERSION_OFFSET 0x50 21 25 22 26 static struct platform_device *rpi_hwmon; 23 27 static struct platform_device *rpi_clk; ··· 283 279 return platform_get_drvdata(pdev); 284 280 } 285 281 EXPORT_SYMBOL_GPL(rpi_firmware_get); 282 + 283 + /* 284 + * The Raspberry Pi 4 gets its USB functionality from VL805, a PCIe chip that 285 + * implements xHCI. After a PCI reset, VL805's firmware may either be loaded 286 + * directly from an EEPROM or, if not present, by the SoC's co-processor, 287 + * VideoCore. RPi4's VideoCore OS contains both the non public firmware load 288 + * logic and the VL805 firmware blob. This function triggers the aforementioned 289 + * process. 290 + */ 291 + int rpi_firmware_init_vl805(struct pci_dev *pdev) 292 + { 293 + struct device_node *fw_np; 294 + struct rpi_firmware *fw; 295 + u32 dev_addr, version; 296 + int ret; 297 + 298 + fw_np = of_find_compatible_node(NULL, NULL, 299 + "raspberrypi,bcm2835-firmware"); 300 + if (!fw_np) 301 + return 0; 302 + 303 + fw = rpi_firmware_get(fw_np); 304 + of_node_put(fw_np); 305 + if (!fw) 306 + return -ENODEV; 307 + 308 + /* 309 + * Make sure we don't trigger a firmware load unnecessarily. 310 + * 311 + * If something went wrong with PCI, this whole exercise would be 312 + * futile as VideoCore expects from us a configured PCI bus. Just take 313 + * the faulty version (likely ~0) and let xHCI's registration fail 314 + * further down the line. 315 + */ 316 + pci_read_config_dword(pdev, VL805_PCI_CONFIG_VERSION_OFFSET, &version); 317 + if (version) 318 + goto exit; 319 + 320 + dev_addr = pdev->bus->number << 20 | PCI_SLOT(pdev->devfn) << 15 | 321 + PCI_FUNC(pdev->devfn) << 12; 322 + 323 + ret = rpi_firmware_property(fw, RPI_FIRMWARE_NOTIFY_XHCI_RESET, 324 + &dev_addr, sizeof(dev_addr)); 325 + if (ret) 326 + return ret; 327 + 328 + /* Wait for vl805 to startup */ 329 + usleep_range(200, 1000); 330 + 331 + pci_read_config_dword(pdev, VL805_PCI_CONFIG_VERSION_OFFSET, 332 + &version); 333 + exit: 334 + pci_info(pdev, "VL805 firmware version %08x\n", version); 335 + 336 + return 0; 337 + } 338 + EXPORT_SYMBOL_GPL(rpi_firmware_init_vl805); 286 339 287 340 static const struct of_device_id rpi_firmware_of_match[] = { 288 341 { .compatible = "raspberrypi,bcm2835-firmware", },
+20 -2
drivers/pci/controller/Kconfig
··· 58 58 bool "Renesas R-Car PCIe controller" 59 59 depends on ARCH_RENESAS || COMPILE_TEST 60 60 depends on PCI_MSI_IRQ_DOMAIN 61 + select PCIE_RCAR_HOST 61 62 help 62 63 Say Y here if you want PCIe controller support on R-Car SoCs. 64 + This option will be removed after arm64 defconfig is updated. 65 + 66 + config PCIE_RCAR_HOST 67 + bool "Renesas R-Car PCIe host controller" 68 + depends on ARCH_RENESAS || COMPILE_TEST 69 + depends on PCI_MSI_IRQ_DOMAIN 70 + help 71 + Say Y here if you want PCIe controller support on R-Car SoCs in host 72 + mode. 73 + 74 + config PCIE_RCAR_EP 75 + bool "Renesas R-Car PCIe endpoint controller" 76 + depends on ARCH_RENESAS || COMPILE_TEST 77 + depends on PCI_ENDPOINT 78 + help 79 + Say Y here if you want PCIe controller support on R-Car SoCs in 80 + endpoint mode. 63 81 64 82 config PCI_HOST_COMMON 65 - bool 83 + tristate 66 84 select PCI_ECAM 67 85 68 86 config PCI_HOST_GENERIC 69 - bool "Generic PCI host controller" 87 + tristate "Generic PCI host controller" 70 88 depends on OF 71 89 select PCI_HOST_COMMON 72 90 select IRQ_DOMAIN
+2 -1
drivers/pci/controller/Makefile
··· 7 7 obj-$(CONFIG_PCI_AARDVARK) += pci-aardvark.o 8 8 obj-$(CONFIG_PCI_TEGRA) += pci-tegra.o 9 9 obj-$(CONFIG_PCI_RCAR_GEN2) += pci-rcar-gen2.o 10 - obj-$(CONFIG_PCIE_RCAR) += pcie-rcar.o 10 + obj-$(CONFIG_PCIE_RCAR_HOST) += pcie-rcar.o pcie-rcar-host.o 11 + obj-$(CONFIG_PCIE_RCAR_EP) += pcie-rcar.o pcie-rcar-ep.o 11 12 obj-$(CONFIG_PCI_HOST_COMMON) += pci-host-common.o 12 13 obj-$(CONFIG_PCI_HOST_GENERIC) += pci-host-generic.o 13 14 obj-$(CONFIG_PCIE_XILINX) += pcie-xilinx.o
+1 -1
drivers/pci/controller/cadence/pcie-cadence-ep.c
··· 450 450 epc->max_functions = 1; 451 451 452 452 ret = pci_epc_mem_init(epc, pcie->mem_res->start, 453 - resource_size(pcie->mem_res)); 453 + resource_size(pcie->mem_res), PAGE_SIZE); 454 454 if (ret < 0) { 455 455 dev_err(dev, "failed to initialize the memory space\n"); 456 456 goto err_init;
+2 -8
drivers/pci/controller/cadence/pcie-cadence-host.c
··· 140 140 for_each_of_pci_range(&parser, &range) { 141 141 bool is_io; 142 142 143 - if (r >= rc->max_regions) 144 - break; 145 - 146 143 if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_MEM) 147 144 is_io = false; 148 145 else if ((range.flags & IORESOURCE_TYPE_BITS) == IORESOURCE_IO) ··· 216 219 pcie = &rc->pcie; 217 220 pcie->is_rc = true; 218 221 219 - rc->max_regions = 32; 220 - of_property_read_u32(np, "cdns,max-outbound-regions", &rc->max_regions); 221 - 222 222 rc->no_bar_nbits = 32; 223 223 of_property_read_u32(np, "cdns,no-bar-match-nbits", &rc->no_bar_nbits); 224 224 225 225 rc->vendor_id = 0xffff; 226 - of_property_read_u16(np, "vendor-id", &rc->vendor_id); 226 + of_property_read_u32(np, "vendor-id", &rc->vendor_id); 227 227 228 228 rc->device_id = 0xffff; 229 - of_property_read_u16(np, "device-id", &rc->device_id); 229 + of_property_read_u32(np, "device-id", &rc->device_id); 230 230 231 231 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg"); 232 232 pcie->reg_base = devm_ioremap_resource(dev, res);
+2 -4
drivers/pci/controller/cadence/pcie-cadence.h
··· 251 251 * @bus_range: first/last buses behind the PCIe host controller 252 252 * @cfg_base: IO mapped window to access the PCI configuration space of a 253 253 * single function at a time 254 - * @max_regions: maximum number of regions supported by the hardware 255 254 * @no_bar_nbits: Number of bits to keep for inbound (PCIe -> CPU) address 256 255 * translation (nbits sets into the "no BAR match" register) 257 256 * @vendor_id: PCI vendor ID ··· 261 262 struct resource *cfg_res; 262 263 struct resource *bus_range; 263 264 void __iomem *cfg_base; 264 - u32 max_regions; 265 265 u32 no_bar_nbits; 266 - u16 vendor_id; 267 - u16 device_id; 266 + u32 vendor_id; 267 + u32 device_id; 268 268 }; 269 269 270 270 /**
+13 -4
drivers/pci/controller/dwc/Kconfig
··· 26 26 depends on OF && HAS_IOMEM && TI_PIPE3 27 27 select PCIE_DW_HOST 28 28 select PCI_DRA7XX 29 - default y 29 + default y if SOC_DRA7XX 30 30 help 31 31 Enables support for the PCIe controller in the DRA7xx SoC to work in 32 32 host mode. There are two instances of PCIe controller in DRA7xx. ··· 111 111 depends on PCI_MSI_IRQ_DOMAIN 112 112 select PCIE_DW_HOST 113 113 select PCI_KEYSTONE 114 - default y 115 114 help 116 115 Enables support for the PCIe controller in the Keystone SoC to 117 116 work in host mode. The PCI controller on Keystone is based on ··· 280 281 selected. This uses the DesignWare core. 281 282 282 283 config PCIE_UNIPHIER 283 - bool "Socionext UniPhier PCIe controllers" 284 + bool "Socionext UniPhier PCIe host controllers" 284 285 depends on ARCH_UNIPHIER || COMPILE_TEST 285 286 depends on OF && HAS_IOMEM 286 287 depends on PCI_MSI_IRQ_DOMAIN 287 288 select PCIE_DW_HOST 288 289 help 289 - Say Y here if you want PCIe controller support on UniPhier SoCs. 290 + Say Y here if you want PCIe host controller support on UniPhier SoCs. 290 291 This driver supports LD20 and PXs3 SoCs. 292 + 293 + config PCIE_UNIPHIER_EP 294 + bool "Socionext UniPhier PCIe endpoint controllers" 295 + depends on ARCH_UNIPHIER || COMPILE_TEST 296 + depends on OF && HAS_IOMEM 297 + depends on PCI_ENDPOINT 298 + select PCIE_DW_EP 299 + help 300 + Say Y here if you want PCIe endpoint controller support on 301 + UniPhier SoCs. This driver supports Pro5 SoC. 291 302 292 303 config PCIE_AL 293 304 bool "Amazon Annapurna Labs PCIe controller"
+1
drivers/pci/controller/dwc/Makefile
··· 19 19 obj-$(CONFIG_PCI_MESON) += pci-meson.o 20 20 obj-$(CONFIG_PCIE_TEGRA194) += pcie-tegra194.o 21 21 obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o 22 + obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o 22 23 23 24 # The following drivers are for devices that use the generic ACPI 24 25 # pci_root.c driver but don't support standard ECAM config access.
+3 -5
drivers/pci/controller/dwc/pci-dra7xx.c
··· 840 840 struct phy **phy; 841 841 struct device_link **link; 842 842 void __iomem *base; 843 - struct resource *res; 844 843 struct dw_pcie *pci; 845 844 struct dra7xx_pcie *dra7xx; 846 845 struct device *dev = &pdev->dev; ··· 876 877 return irq; 877 878 } 878 879 879 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ti_conf"); 880 - base = devm_ioremap(dev, res->start, resource_size(res)); 881 - if (!base) 882 - return -ENOMEM; 880 + base = devm_platform_ioremap_resource_byname(pdev, "ti_conf"); 881 + if (IS_ERR(base)) 882 + return PTR_ERR(base); 883 883 884 884 phy_count = of_property_count_strings(np, "phy-names"); 885 885 if (phy_count < 0) {
+2 -2
drivers/pci/controller/dwc/pci-imx6.c
··· 868 868 869 869 if (IS_ENABLED(CONFIG_PCI_MSI)) { 870 870 pp->msi_irq = platform_get_irq_byname(pdev, "msi"); 871 - if (pp->msi_irq <= 0) { 871 + if (pp->msi_irq < 0) { 872 872 dev_err(dev, "failed to get MSI irq\n"); 873 - return -ENODEV; 873 + return pp->msi_irq; 874 874 } 875 875 } 876 876
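The pci-imx6.c hunk above propagates the negative errno returned by `platform_get_irq_byname()` instead of flattening it to `-ENODEV`, which notably preserves `-EPROBE_DEFER` so probing can be retried. A small userspace sketch of the pattern, with `fake_get_irq()` standing in for the kernel helper (names and the inlined errno value are illustrative):

```c
#include <assert.h>

#define EPROBE_DEFER 517	/* kernel-internal errno, inlined here for illustration */

static int g_irq;

/* Stand-in for platform_get_irq_byname(): positive IRQ number on
 * success, negative errno on failure. */
static int fake_get_irq(int ready)
{
	return ready ? 42 : -EPROBE_DEFER;
}

/* Propagate the provider's own error code; returning -ENODEV here
 * would defeat deferred probing. */
static int probe_msi_irq(int ready, int *msi_irq)
{
	int irq = fake_get_irq(ready);

	if (irq < 0)
		return irq;	/* keep the original errno */
	*msi_irq = irq;
	return 0;
}
```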
+2 -2
drivers/pci/controller/dwc/pci-meson.c
··· 289 289 meson_cfg_writel(mp, val, PCIE_CFG0); 290 290 291 291 val = meson_elb_readl(mp, PCIE_PORT_LINK_CTRL_OFF); 292 - val &= ~LINK_CAPABLE_MASK; 292 + val &= ~(LINK_CAPABLE_MASK | FAST_LINK_MODE); 293 293 meson_elb_writel(mp, val, PCIE_PORT_LINK_CTRL_OFF); 294 294 295 295 val = meson_elb_readl(mp, PCIE_PORT_LINK_CTRL_OFF); 296 - val |= LINK_CAPABLE_X1 | FAST_LINK_MODE; 296 + val |= LINK_CAPABLE_X1; 297 297 meson_elb_writel(mp, val, PCIE_PORT_LINK_CTRL_OFF); 298 298 299 299 val = meson_elb_readl(mp, PCIE_GEN2_CTRL_OFF);
+1 -1
drivers/pci/controller/dwc/pcie-al.c
··· 80 80 return 0; 81 81 } 82 82 83 - struct pci_ecam_ops al_pcie_ops = { 83 + const struct pci_ecam_ops al_pcie_ops = { 84 84 .bus_shift = 20, 85 85 .init = al_pcie_init, 86 86 .pci_ops = {
+9 -13
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 412 412 reg = ep->msi_cap + PCI_MSI_DATA_32; 413 413 msg_data = dw_pcie_readw_dbi(pci, reg); 414 414 } 415 - aligned_offset = msg_addr_lower & (epc->mem->page_size - 1); 415 + aligned_offset = msg_addr_lower & (epc->mem->window.page_size - 1); 416 416 msg_addr = ((u64)msg_addr_upper) << 32 | 417 417 (msg_addr_lower & ~aligned_offset); 418 418 ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr, 419 - epc->mem->page_size); 419 + epc->mem->window.page_size); 420 420 if (ret) 421 421 return ret; 422 422 ··· 433 433 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 434 434 struct pci_epf_msix_tbl *msix_tbl; 435 435 struct pci_epc *epc = ep->epc; 436 - struct pci_epf_bar *epf_bar; 437 436 u32 reg, msg_data, vec_ctrl; 438 437 unsigned int aligned_offset; 439 438 u32 tbl_offset; ··· 445 446 bir = (tbl_offset & PCI_MSIX_TABLE_BIR); 446 447 tbl_offset &= PCI_MSIX_TABLE_OFFSET; 447 448 448 - epf_bar = ep->epf_bar[bir]; 449 - msix_tbl = epf_bar->addr; 450 - msix_tbl = (struct pci_epf_msix_tbl *)((char *)msix_tbl + tbl_offset); 451 - 449 + msix_tbl = ep->epf_bar[bir]->addr + tbl_offset; 452 450 msg_addr = msix_tbl[(interrupt_num - 1)].msg_addr; 453 451 msg_data = msix_tbl[(interrupt_num - 1)].msg_data; 454 452 vec_ctrl = msix_tbl[(interrupt_num - 1)].vector_ctrl; ··· 455 459 return -EPERM; 456 460 } 457 461 458 - aligned_offset = msg_addr & (epc->mem->page_size - 1); 462 + aligned_offset = msg_addr & (epc->mem->window.page_size - 1); 459 463 ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr, 460 464 epc->mem->page_size); ··· 473 477 struct pci_epc *epc = ep->epc; 474 478 475 479 pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem, 476 - epc->mem->page_size); 480 + epc->mem->window.page_size); 477 481 478 482 pci_epc_mem_exit(epc); 479 483 } ··· 606 610 if (ret < 0) 607 611 epc->max_functions = 1; 608 612
609 - ret = __pci_epc_mem_init(epc, ep->phys_base, ep->addr_size, 610 - ep->page_size); 613 + ret = pci_epc_mem_init(epc, ep->phys_base, ep->addr_size, 614 + ep->page_size); 611 615 if (ret < 0) { 612 616 dev_err(dev, "Failed to initialize address space\n"); 613 617 return ret; 614 618 } 615 619 616 620 ep->msi_mem = pci_epc_mem_alloc_addr(epc, &ep->msi_mem_phys, 617 - epc->mem->page_size); 621 + epc->mem->window.page_size); 618 622 if (!ep->msi_mem) { 619 623 dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n"); 620 624 return -ENOMEM;
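The `window.page_size` renames above all feed the same doorbell arithmetic: the EPC can only map outbound windows at page granularity, so the driver maps the page containing the host's MSI address and writes at the in-page offset. A standalone sketch of that math, assuming a power-of-two page size (helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Offset of the MSI doorbell within its page. */
static uint64_t msi_in_page_offset(uint64_t msg_addr, uint64_t page_size)
{
	return msg_addr & (page_size - 1);
}

/* Page-aligned base to hand to the map call. The driver writes it as
 * msg_addr & ~aligned_offset; since aligned_offset is exactly the set
 * low bits of msg_addr, this equals msg_addr & ~(page_size - 1). */
static uint64_t msi_map_base(uint64_t msg_addr, uint64_t page_size)
{
	return msg_addr & ~msi_in_page_offset(msg_addr, page_size);
}
```

The CPU-side write then lands at `mapped_base + msi_in_page_offset(...)`, which is why both the map call and the offset computation must use the same page size.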
+3 -1
drivers/pci/controller/dwc/pcie-designware-host.c
··· 236 236 unsigned int virq, unsigned int nr_irqs) 237 237 { 238 238 struct irq_data *d = irq_domain_get_irq_data(domain, virq); 239 - struct pcie_port *pp = irq_data_get_irq_chip_data(d); 239 + struct pcie_port *pp = domain->host_data; 240 240 unsigned long flags; 241 241 242 242 raw_spin_lock_irqsave(&pp->lock, flags); ··· 263 263 dev_err(pci->dev, "Failed to create IRQ domain\n"); 264 264 return -ENOMEM; 265 265 } 266 + 267 + irq_domain_update_bus_token(pp->irq_domain, DOMAIN_BUS_NEXUS); 266 268 267 269 pp->msi_domain = pci_msi_create_irq_domain(fwnode, 268 270 &dw_pcie_msi_domain_info,
+5 -2
drivers/pci/controller/dwc/pcie-designware.c
··· 244 244 u64 pci_addr, u32 size) 245 245 { 246 246 u32 retries, val; 247 + u64 limit_addr = cpu_addr + size - 1; 247 248 248 249 dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LOWER_BASE, 249 250 lower_32_bits(cpu_addr)); 250 251 dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_BASE, 251 252 upper_32_bits(cpu_addr)); 252 - dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LIMIT, 253 - lower_32_bits(cpu_addr + size - 1)); 253 + dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LOWER_LIMIT, 254 + lower_32_bits(limit_addr)); 255 + dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_LIMIT, 256 + upper_32_bits(limit_addr)); 254 257 dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LOWER_TARGET, 255 258 lower_32_bits(pci_addr)); 256 259 dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_TARGET,
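The pcie-designware.c hunk above programs an inclusive limit address into the iATU, and the fix adds the upper 32 bits so windows that cross or sit above 4 GiB are bounded correctly. The split can be checked in isolation; this is a hedged userspace sketch of the arithmetic only, with `lo32`/`hi32` mirroring the kernel's `lower_32_bits()`/`upper_32_bits()`:

```c
#include <assert.h>
#include <stdint.h>

/* Inclusive end address of a translation window. */
static uint64_t atu_limit_addr(uint64_t cpu_addr, uint64_t size)
{
	return cpu_addr + size - 1;
}

static uint32_t lo32(uint64_t v) { return (uint32_t)v; }
static uint32_t hi32(uint64_t v) { return (uint32_t)(v >> 32); }
```

For a 512 MiB window starting at 0xf0000000 the limit is 0x10fffffff: with only the lower limit register written, the window would appear to end at 0x0fffffff, below its own base, which is the bug the added `PCIE_ATU_UNR_UPPER_LIMIT` write addresses.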
+2 -1
drivers/pci/controller/dwc/pcie-designware.h
··· 112 112 #define PCIE_ATU_UNR_REGION_CTRL2 0x04 113 113 #define PCIE_ATU_UNR_LOWER_BASE 0x08 114 114 #define PCIE_ATU_UNR_UPPER_BASE 0x0C 115 - #define PCIE_ATU_UNR_LIMIT 0x10 115 + #define PCIE_ATU_UNR_LOWER_LIMIT 0x10 116 116 #define PCIE_ATU_UNR_LOWER_TARGET 0x14 117 117 #define PCIE_ATU_UNR_UPPER_TARGET 0x18 118 + #define PCIE_ATU_UNR_UPPER_LIMIT 0x20 118 119 119 120 /* 120 121 * The default address offset between dbi_base and atu_base. Root controller
+5 -14
drivers/pci/controller/dwc/pcie-hisi.c
··· 104 104 return 0; 105 105 } 106 106 107 - struct pci_ecam_ops hisi_pcie_ops = { 107 + const struct pci_ecam_ops hisi_pcie_ops = { 108 108 .bus_shift = 20, 109 109 .init = hisi_pcie_init, 110 110 .pci_ops = { ··· 332 332 }; 333 333 builtin_platform_driver(hisi_pcie_driver); 334 334 335 - static int hisi_pcie_almost_ecam_probe(struct platform_device *pdev) 336 - { 337 - struct device *dev = &pdev->dev; 338 - struct pci_ecam_ops *ops; 339 - 340 - ops = (struct pci_ecam_ops *)of_device_get_match_data(dev); 341 - return pci_host_common_probe(pdev, ops); 342 - } 343 - 344 335 static int hisi_pcie_platform_init(struct pci_config_window *cfg) 345 336 { 346 337 struct device *dev = cfg->parent; ··· 353 362 return 0; 354 363 } 355 364 356 - struct pci_ecam_ops hisi_pcie_platform_ops = { 365 + static const struct pci_ecam_ops hisi_pcie_platform_ops = { 357 366 .bus_shift = 20, 358 367 .init = hisi_pcie_platform_init, 359 368 .pci_ops = { ··· 366 375 static const struct of_device_id hisi_pcie_almost_ecam_of_match[] = { 367 376 { 368 377 .compatible = "hisilicon,hip06-pcie-ecam", 369 - .data = (void *) &hisi_pcie_platform_ops, 378 + .data = &hisi_pcie_platform_ops, 370 379 }, 371 380 { 372 381 .compatible = "hisilicon,hip07-pcie-ecam", 373 - .data = (void *) &hisi_pcie_platform_ops, 382 + .data = &hisi_pcie_platform_ops, 374 383 }, 375 384 {}, 376 385 }; 377 386 378 387 static struct platform_driver hisi_pcie_almost_ecam_driver = { 379 - .probe = hisi_pcie_almost_ecam_probe, 388 + .probe = pci_host_common_probe, 380 389 .driver = { 381 390 .name = "hisi-pcie-almost-ecam", 382 391 .of_match_table = hisi_pcie_almost_ecam_of_match,
+1 -1
drivers/pci/controller/dwc/pcie-intel-gw.c
··· 453 453 return 0; 454 454 } 455 455 456 - u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr) 456 + static u64 intel_pcie_cpu_addr(struct dw_pcie *pcie, u64 cpu_addr) 457 457 { 458 458 return cpu_addr + BUS_IATU_OFFSET; 459 459 }
+4 -5
drivers/pci/controller/dwc/pcie-tegra194.c
··· 1623 1623 ret = pinctrl_pm_select_default_state(dev); 1624 1624 if (ret < 0) { 1625 1625 dev_err(dev, "Failed to configure sideband pins: %d\n", ret); 1626 - goto fail_pinctrl; 1626 + goto fail_pm_get_sync; 1627 1627 } 1628 1628 1629 1629 tegra_pcie_init_controller(pcie); ··· 1650 1650 1651 1651 fail_host_init: 1652 1652 tegra_pcie_deinit_controller(pcie); 1653 - fail_pinctrl: 1654 - pm_runtime_put_sync(dev); 1655 1653 fail_pm_get_sync: 1654 + pm_runtime_put_sync(dev); 1656 1655 pm_runtime_disable(dev); 1657 1656 return ret; 1658 1657 } ··· 2189 2190 } 2190 2191 2191 2192 pp->irq = platform_get_irq_byname(pdev, "intr"); 2192 - if (!pp->irq) { 2193 + if (pp->irq < 0) { 2193 2194 dev_err(dev, "Failed to get \"intr\" interrupt\n"); 2194 - return -ENODEV; 2195 + return pp->irq; 2195 2196 } 2196 2197 2197 2198 pcie->bpmp = tegra_bpmp_get(dev);
+383
drivers/pci/controller/dwc/pcie-uniphier-ep.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * PCIe endpoint controller driver for UniPhier SoCs 4 + * Copyright 2018 Socionext Inc. 5 + * Author: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> 6 + */ 7 + 8 + #include <linux/bitops.h> 9 + #include <linux/bitfield.h> 10 + #include <linux/clk.h> 11 + #include <linux/delay.h> 12 + #include <linux/init.h> 13 + #include <linux/of_device.h> 14 + #include <linux/pci.h> 15 + #include <linux/phy/phy.h> 16 + #include <linux/platform_device.h> 17 + #include <linux/reset.h> 18 + 19 + #include "pcie-designware.h" 20 + 21 + /* Link Glue registers */ 22 + #define PCL_RSTCTRL0 0x0010 23 + #define PCL_RSTCTRL_AXI_REG BIT(3) 24 + #define PCL_RSTCTRL_AXI_SLAVE BIT(2) 25 + #define PCL_RSTCTRL_AXI_MASTER BIT(1) 26 + #define PCL_RSTCTRL_PIPE3 BIT(0) 27 + 28 + #define PCL_RSTCTRL1 0x0020 29 + #define PCL_RSTCTRL_PERST BIT(0) 30 + 31 + #define PCL_RSTCTRL2 0x0024 32 + #define PCL_RSTCTRL_PHY_RESET BIT(0) 33 + 34 + #define PCL_MODE 0x8000 35 + #define PCL_MODE_REGEN BIT(8) 36 + #define PCL_MODE_REGVAL BIT(0) 37 + 38 + #define PCL_APP_CLK_CTRL 0x8004 39 + #define PCL_APP_CLK_REQ BIT(0) 40 + 41 + #define PCL_APP_READY_CTRL 0x8008 42 + #define PCL_APP_LTSSM_ENABLE BIT(0) 43 + 44 + #define PCL_APP_MSI0 0x8040 45 + #define PCL_APP_VEN_MSI_TC_MASK GENMASK(10, 8) 46 + #define PCL_APP_VEN_MSI_VECTOR_MASK GENMASK(4, 0) 47 + 48 + #define PCL_APP_MSI1 0x8044 49 + #define PCL_APP_MSI_REQ BIT(0) 50 + 51 + #define PCL_APP_INTX 0x8074 52 + #define PCL_APP_INTX_SYS_INT BIT(0) 53 + 54 + /* assertion time of INTx in usec */ 55 + #define PCL_INTX_WIDTH_USEC 30 56 + 57 + struct uniphier_pcie_ep_priv { 58 + void __iomem *base; 59 + struct dw_pcie pci; 60 + struct clk *clk, *clk_gio; 61 + struct reset_control *rst, *rst_gio; 62 + struct phy *phy; 63 + const struct pci_epc_features *features; 64 + }; 65 + 66 + #define to_uniphier_pcie(x) dev_get_drvdata((x)->dev) 67 +
68 + static void uniphier_pcie_ltssm_enable(struct uniphier_pcie_ep_priv *priv, 69 + bool enable) 70 + { 71 + u32 val; 72 + 73 + val = readl(priv->base + PCL_APP_READY_CTRL); 74 + if (enable) 75 + val |= PCL_APP_LTSSM_ENABLE; 76 + else 77 + val &= ~PCL_APP_LTSSM_ENABLE; 78 + writel(val, priv->base + PCL_APP_READY_CTRL); 79 + } 80 + 81 + static void uniphier_pcie_phy_reset(struct uniphier_pcie_ep_priv *priv, 82 + bool assert) 83 + { 84 + u32 val; 85 + 86 + val = readl(priv->base + PCL_RSTCTRL2); 87 + if (assert) 88 + val |= PCL_RSTCTRL_PHY_RESET; 89 + else 90 + val &= ~PCL_RSTCTRL_PHY_RESET; 91 + writel(val, priv->base + PCL_RSTCTRL2); 92 + } 93 + 94 + static void uniphier_pcie_init_ep(struct uniphier_pcie_ep_priv *priv) 95 + { 96 + u32 val; 97 + 98 + /* set EP mode */ 99 + val = readl(priv->base + PCL_MODE); 100 + val |= PCL_MODE_REGEN | PCL_MODE_REGVAL; 101 + writel(val, priv->base + PCL_MODE); 102 + 103 + /* clock request */ 104 + val = readl(priv->base + PCL_APP_CLK_CTRL); 105 + val &= ~PCL_APP_CLK_REQ; 106 + writel(val, priv->base + PCL_APP_CLK_CTRL); 107 + 108 + /* deassert PIPE3 and AXI reset */ 109 + val = readl(priv->base + PCL_RSTCTRL0); 110 + val |= PCL_RSTCTRL_AXI_REG | PCL_RSTCTRL_AXI_SLAVE 111 + | PCL_RSTCTRL_AXI_MASTER | PCL_RSTCTRL_PIPE3; 112 + writel(val, priv->base + PCL_RSTCTRL0); 113 + 114 + uniphier_pcie_ltssm_enable(priv, false); 115 + 116 + msleep(100); 117 + } 118 + 119 + static int uniphier_pcie_start_link(struct dw_pcie *pci) 120 + { 121 + struct uniphier_pcie_ep_priv *priv = to_uniphier_pcie(pci); 122 + 123 + uniphier_pcie_ltssm_enable(priv, true); 124 + 125 + return 0; 126 + } 127 + 128 + static void uniphier_pcie_stop_link(struct dw_pcie *pci) 129 + { 130 + struct uniphier_pcie_ep_priv *priv = to_uniphier_pcie(pci); 131 + 132 + uniphier_pcie_ltssm_enable(priv, false); 133 + } 134 + 135 + static void uniphier_pcie_ep_init(struct dw_pcie_ep *ep) 136 + { 137 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 138 + enum pci_barno bar; 139 + 140 + for (bar = BAR_0; bar <= BAR_5; bar++) 141 + dw_pcie_ep_reset_bar(pci, bar);
142 + } 143 + 144 + static int uniphier_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep) 145 + { 146 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 147 + struct uniphier_pcie_ep_priv *priv = to_uniphier_pcie(pci); 148 + u32 val; 149 + 150 + /* 151 + * This makes pulse signal to send INTx to the RC, so this should 152 + * be cleared as soon as possible. This sequence is covered with 153 + * mutex in pci_epc_raise_irq(). 154 + */ 155 + /* assert INTx */ 156 + val = readl(priv->base + PCL_APP_INTX); 157 + val |= PCL_APP_INTX_SYS_INT; 158 + writel(val, priv->base + PCL_APP_INTX); 159 + 160 + udelay(PCL_INTX_WIDTH_USEC); 161 + 162 + /* deassert INTx */ 163 + val &= ~PCL_APP_INTX_SYS_INT; 164 + writel(val, priv->base + PCL_APP_INTX); 165 + 166 + return 0; 167 + } 168 + 169 + static int uniphier_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, 170 + u8 func_no, u16 interrupt_num) 171 + { 172 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 173 + struct uniphier_pcie_ep_priv *priv = to_uniphier_pcie(pci); 174 + u32 val; 175 + 176 + val = FIELD_PREP(PCL_APP_VEN_MSI_TC_MASK, func_no) 177 + | FIELD_PREP(PCL_APP_VEN_MSI_VECTOR_MASK, interrupt_num - 1); 178 + writel(val, priv->base + PCL_APP_MSI0); 179 + 180 + val = readl(priv->base + PCL_APP_MSI1); 181 + val |= PCL_APP_MSI_REQ; 182 + writel(val, priv->base + PCL_APP_MSI1); 183 + 184 + return 0; 185 + } 186 + 187 + static int uniphier_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no, 188 + enum pci_epc_irq_type type, 189 + u16 interrupt_num) 190 + { 191 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 192 + 193 + switch (type) { 194 + case PCI_EPC_IRQ_LEGACY: 195 + return uniphier_pcie_ep_raise_legacy_irq(ep); 196 + case PCI_EPC_IRQ_MSI: 197 + return uniphier_pcie_ep_raise_msi_irq(ep, func_no, 198 + interrupt_num); 199 + default: 200 + dev_err(pci->dev, "UNKNOWN IRQ type (%d)\n", type); 201 + } 202 + 203 + return 0; 204 + } 205 + 206 + static const struct pci_epc_features*
207 + uniphier_pcie_get_features(struct dw_pcie_ep *ep) 208 + { 209 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 210 + struct uniphier_pcie_ep_priv *priv = to_uniphier_pcie(pci); 211 + 212 + return priv->features; 213 + } 214 + 215 + static const struct dw_pcie_ep_ops uniphier_pcie_ep_ops = { 216 + .ep_init = uniphier_pcie_ep_init, 217 + .raise_irq = uniphier_pcie_ep_raise_irq, 218 + .get_features = uniphier_pcie_get_features, 219 + }; 220 + 221 + static int uniphier_add_pcie_ep(struct uniphier_pcie_ep_priv *priv, 222 + struct platform_device *pdev) 223 + { 224 + struct dw_pcie *pci = &priv->pci; 225 + struct dw_pcie_ep *ep = &pci->ep; 226 + struct device *dev = &pdev->dev; 227 + struct resource *res; 228 + int ret; 229 + 230 + ep->ops = &uniphier_pcie_ep_ops; 231 + 232 + pci->dbi_base2 = devm_platform_ioremap_resource_byname(pdev, "dbi2"); 233 + if (IS_ERR(pci->dbi_base2)) 234 + return PTR_ERR(pci->dbi_base2); 235 + 236 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space"); 237 + if (!res) 238 + return -EINVAL; 239 + 240 + ep->phys_base = res->start; 241 + ep->addr_size = resource_size(res); 242 + 243 + ret = dw_pcie_ep_init(ep); 244 + if (ret) 245 + dev_err(dev, "Failed to initialize endpoint (%d)\n", ret); 246 + 247 + return ret; 248 + } 249 + 250 + static int uniphier_pcie_ep_enable(struct uniphier_pcie_ep_priv *priv) 251 + { 252 + int ret; 253 + 254 + ret = clk_prepare_enable(priv->clk); 255 + if (ret) 256 + return ret; 257 + 258 + ret = clk_prepare_enable(priv->clk_gio); 259 + if (ret) 260 + goto out_clk_disable; 261 + 262 + ret = reset_control_deassert(priv->rst); 263 + if (ret) 264 + goto out_clk_gio_disable; 265 + 266 + ret = reset_control_deassert(priv->rst_gio); 267 + if (ret) 268 + goto out_rst_assert; 269 + 270 + uniphier_pcie_init_ep(priv); 271 + 272 + uniphier_pcie_phy_reset(priv, true); 273 + 274 + ret = phy_init(priv->phy); 275 + if (ret) 276 + goto out_rst_gio_assert; 277 + 278 + uniphier_pcie_phy_reset(priv, false); 279 + 280 + return 0; 281 + 282 + out_rst_gio_assert:
283 + reset_control_assert(priv->rst_gio); 284 + out_rst_assert: 285 + reset_control_assert(priv->rst); 286 + out_clk_gio_disable: 287 + clk_disable_unprepare(priv->clk_gio); 288 + out_clk_disable: 289 + clk_disable_unprepare(priv->clk); 290 + 291 + return ret; 292 + } 293 + 294 + static const struct dw_pcie_ops dw_pcie_ops = { 295 + .start_link = uniphier_pcie_start_link, 296 + .stop_link = uniphier_pcie_stop_link, 297 + }; 298 + 299 + static int uniphier_pcie_ep_probe(struct platform_device *pdev) 300 + { 301 + struct device *dev = &pdev->dev; 302 + struct uniphier_pcie_ep_priv *priv; 303 + struct resource *res; 304 + int ret; 305 + 306 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 307 + if (!priv) 308 + return -ENOMEM; 309 + 310 + priv->features = of_device_get_match_data(dev); 311 + if (WARN_ON(!priv->features)) 312 + return -EINVAL; 313 + 314 + priv->pci.dev = dev; 315 + priv->pci.ops = &dw_pcie_ops; 316 + 317 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); 318 + priv->pci.dbi_base = devm_pci_remap_cfg_resource(dev, res); 319 + if (IS_ERR(priv->pci.dbi_base)) 320 + return PTR_ERR(priv->pci.dbi_base); 321 + 322 + priv->base = devm_platform_ioremap_resource_byname(pdev, "link"); 323 + if (IS_ERR(priv->base)) 324 + return PTR_ERR(priv->base); 325 + 326 + priv->clk_gio = devm_clk_get(dev, "gio"); 327 + if (IS_ERR(priv->clk_gio)) 328 + return PTR_ERR(priv->clk_gio); 329 + 330 + priv->rst_gio = devm_reset_control_get_shared(dev, "gio"); 331 + if (IS_ERR(priv->rst_gio)) 332 + return PTR_ERR(priv->rst_gio); 333 + 334 + priv->clk = devm_clk_get(dev, "link"); 335 + if (IS_ERR(priv->clk)) 336 + return PTR_ERR(priv->clk); 337 + 338 + priv->rst = devm_reset_control_get_shared(dev, "link"); 339 + if (IS_ERR(priv->rst)) 340 + return PTR_ERR(priv->rst); 341 + 342 + priv->phy = devm_phy_optional_get(dev, "pcie-phy"); 343 + if (IS_ERR(priv->phy)) { 344 + ret = PTR_ERR(priv->phy); 345 + dev_err(dev, "Failed to get phy (%d)\n", ret); 346 + return ret;
347 + } 348 + 349 + platform_set_drvdata(pdev, priv); 350 + 351 + ret = uniphier_pcie_ep_enable(priv); 352 + if (ret) 353 + return ret; 354 + 355 + return uniphier_add_pcie_ep(priv, pdev); 356 + } 357 + 358 + static const struct pci_epc_features uniphier_pro5_data = { 359 + .linkup_notifier = false, 360 + .msi_capable = true, 361 + .msix_capable = false, 362 + .align = 1 << 16, 363 + .bar_fixed_64bit = BIT(BAR_0) | BIT(BAR_2) | BIT(BAR_4), 364 + .reserved_bar = BIT(BAR_4), 365 + }; 366 + 367 + static const struct of_device_id uniphier_pcie_ep_match[] = { 368 + { 369 + .compatible = "socionext,uniphier-pro5-pcie-ep", 370 + .data = &uniphier_pro5_data, 371 + }, 372 + { /* sentinel */ }, 373 + }; 374 + 375 + static struct platform_driver uniphier_pcie_ep_driver = { 376 + .probe = uniphier_pcie_ep_probe, 377 + .driver = { 378 + .name = "uniphier-pcie-ep", 379 + .of_match_table = uniphier_pcie_ep_match, 380 + .suppress_bind_attrs = true, 381 + }, 382 + }; 383 + builtin_platform_driver(uniphier_pcie_ep_driver);
+2 -2
drivers/pci/controller/mobiveil/pcie-mobiveil-host.c
··· 522 522 mobiveil_pcie_enable_msi(pcie); 523 523 524 524 rp->irq = platform_get_irq(pdev, 0); 525 - if (rp->irq <= 0) { 525 + if (rp->irq < 0) { 526 526 dev_err(dev, "failed to map IRQ: %d\n", rp->irq); 527 - return -ENODEV; 527 + return rp->irq; 528 528 } 529 529 530 530 /* initialize the IRQ domains */
+222 -44
drivers/pci/controller/pci-aardvark.c
··· 9 9 */ 10 10 11 11 #include <linux/delay.h> 12 + #include <linux/gpio.h> 12 13 #include <linux/interrupt.h> 13 14 #include <linux/irq.h> 14 15 #include <linux/irqdomain.h> 15 16 #include <linux/kernel.h> 16 17 #include <linux/pci.h> 17 18 #include <linux/init.h> 19 + #include <linux/phy/phy.h> 18 20 #include <linux/platform_device.h> 19 21 #include <linux/msi.h> 20 22 #include <linux/of_address.h> 23 + #include <linux/of_gpio.h> 21 24 #include <linux/of_pci.h> 22 25 23 26 #include "../pci.h" ··· 34 31 #define PCIE_CORE_CMD_MEM_IO_REQ_EN BIT(2) 35 32 #define PCIE_CORE_DEV_REV_REG 0x8 36 33 #define PCIE_CORE_PCIEXP_CAP 0xc0 37 - #define PCIE_CORE_DEV_CTRL_STATS_REG 0xc8 38 - #define PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE (0 << 4) 39 - #define PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT 5 40 - #define PCIE_CORE_DEV_CTRL_STATS_SNOOP_DISABLE (0 << 11) 41 - #define PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT 12 42 - #define PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SZ 0x2 43 - #define PCIE_CORE_LINK_CTRL_STAT_REG 0xd0 44 - #define PCIE_CORE_LINK_L0S_ENTRY BIT(0) 45 - #define PCIE_CORE_LINK_TRAINING BIT(5) 46 - #define PCIE_CORE_LINK_WIDTH_SHIFT 20 47 34 #define PCIE_CORE_ERR_CAPCTL_REG 0x118 48 35 #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX BIT(5) 49 36 #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN BIT(6) ··· 94 101 #define PCIE_CORE_CTRL2_STRICT_ORDER_ENABLE BIT(5) 95 102 #define PCIE_CORE_CTRL2_OB_WIN_ENABLE BIT(6) 96 103 #define PCIE_CORE_CTRL2_MSI_ENABLE BIT(10) 104 + #define PCIE_CORE_REF_CLK_REG (CONTROL_BASE_ADDR + 0x14) 105 + #define PCIE_CORE_REF_CLK_TX_ENABLE BIT(1) 97 106 #define PCIE_MSG_LOG_REG (CONTROL_BASE_ADDR + 0x30) 98 107 #define PCIE_ISR0_REG (CONTROL_BASE_ADDR + 0x40) 99 108 #define PCIE_MSG_PM_PME_MASK BIT(7) ··· 196 201 struct mutex msi_used_lock; 197 202 u16 msi_msg; 198 203 int root_bus_nr; 204 + int link_gen; 199 205 struct pci_bridge_emul bridge; 206 + struct gpio_desc *reset_gpio; 207 + struct phy *phy; 200 208 }; 201 209
202 210 static inline void advk_writel(struct advk_pcie *pcie, u32 val, u64 reg) ··· 210 212 static inline u32 advk_readl(struct advk_pcie *pcie, u64 reg) 211 213 { 212 214 return readl(pcie->base + reg); 215 + } 216 + 217 + static inline u16 advk_read16(struct advk_pcie *pcie, u64 reg) 218 + { 219 + return advk_readl(pcie, (reg & ~0x3)) >> ((reg & 0x3) * 8); 213 220 } 214 221 215 222 static int advk_pcie_link_up(struct advk_pcie *pcie) ··· 228 225 229 226 static int advk_pcie_wait_for_link(struct advk_pcie *pcie) 230 227 { 231 - struct device *dev = &pcie->pdev->dev; 232 228 int retries; 233 229 234 230 /* check if the link is up or not */ 235 231 for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) { 236 - if (advk_pcie_link_up(pcie)) { 237 - dev_info(dev, "link up\n"); 232 + if (advk_pcie_link_up(pcie)) 238 233 return 0; 239 - } 240 234 241 235 usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX); 242 236 } 243 237 244 - dev_err(dev, "link never came up\n"); 245 238 return -ETIMEDOUT; 246 239 } 247 240 ··· 252 253 } 253 254 } 254 255 256 + static int advk_pcie_train_at_gen(struct advk_pcie *pcie, int gen) 257 + { 258 + int ret, neg_gen; 259 + u32 reg; 260 + 261 + /* Setup link speed */ 262 + reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); 263 + reg &= ~PCIE_GEN_SEL_MSK; 264 + if (gen == 3) 265 + reg |= SPEED_GEN_3; 266 + else if (gen == 2) 267 + reg |= SPEED_GEN_2; 268 + else 269 + reg |= SPEED_GEN_1; 270 + advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); 271 + 272 + /* 273 + * Enable link training. This is not needed in every call to this 274 + * function, just once suffices, but it does not break anything either. 275 + */ 276 + reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); 277 + reg |= LINK_TRAINING_EN; 278 + advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); 279 + 280 + /* 281 + * Start link training immediately after enabling it. 282 + * This solves problems for some buggy cards.
283 + */ 284 + reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL); 285 + reg |= PCI_EXP_LNKCTL_RL; 286 + advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKCTL); 287 + 288 + ret = advk_pcie_wait_for_link(pcie); 289 + if (ret) 290 + return ret; 291 + 292 + reg = advk_read16(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_LNKSTA); 293 + neg_gen = reg & PCI_EXP_LNKSTA_CLS; 294 + 295 + return neg_gen; 296 + } 297 + 298 + static void advk_pcie_train_link(struct advk_pcie *pcie) 299 + { 300 + struct device *dev = &pcie->pdev->dev; 301 + int neg_gen = -1, gen; 302 + 303 + /* 304 + * Try link training at link gen specified by device tree property 305 + * 'max-link-speed'. If this fails, iteratively train at lower gen. 306 + */ 307 + for (gen = pcie->link_gen; gen > 0; --gen) { 308 + neg_gen = advk_pcie_train_at_gen(pcie, gen); 309 + if (neg_gen > 0) 310 + break; 311 + } 312 + 313 + if (neg_gen < 0) 314 + goto err; 315 + 316 + /* 317 + * After successful training if negotiated gen is lower than requested, 318 + * train again on negotiated gen. This solves some stability issues for 319 + * some buggy gen1 cards.
320 + */ 321 + if (neg_gen < gen) { 322 + gen = neg_gen; 323 + neg_gen = advk_pcie_train_at_gen(pcie, gen); 324 + } 325 + 326 + if (neg_gen == gen) { 327 + dev_info(dev, "link up at gen %i\n", gen); 328 + return; 329 + } 330 + 331 + err: 332 + dev_err(dev, "link never came up\n"); 333 + } 334 + 335 + static void advk_pcie_issue_perst(struct advk_pcie *pcie) 336 + { 337 + u32 reg; 338 + 339 + if (!pcie->reset_gpio) 340 + return; 341 + 342 + /* PERST does not work for some cards when link training is enabled */ 343 + reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); 344 + reg &= ~LINK_TRAINING_EN; 345 + advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); 346 + 347 + /* 10ms delay is needed for some cards */ 348 + dev_info(&pcie->pdev->dev, "issuing PERST via reset GPIO for 10ms\n"); 349 + gpiod_set_value_cansleep(pcie->reset_gpio, 1); 350 + usleep_range(10000, 11000); 351 + gpiod_set_value_cansleep(pcie->reset_gpio, 0); 352 + } 353 + 255 354 static void advk_pcie_setup_hw(struct advk_pcie *pcie) 256 355 { 257 356 u32 reg; 357 + 358 + advk_pcie_issue_perst(pcie); 359 + 360 + /* Enable TX */ 361 + reg = advk_readl(pcie, PCIE_CORE_REF_CLK_REG); 362 + reg |= PCIE_CORE_REF_CLK_TX_ENABLE; 363 + advk_writel(pcie, reg, PCIE_CORE_REF_CLK_REG); 258 364 259 365 /* Set to Direct mode */ 260 366 reg = advk_readl(pcie, CTRL_CONFIG_REG); ··· 379 275 PCIE_CORE_ERR_CAPCTL_ECRC_CHCK_RCV; 380 276 advk_writel(pcie, reg, PCIE_CORE_ERR_CAPCTL_REG); 381 277 382 - /* Set PCIe Device Control and Status 1 PF0 register */ 383 - reg = PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE | 384 - (7 << PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT) | 385 - PCIE_CORE_DEV_CTRL_STATS_SNOOP_DISABLE | 386 - (PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SZ << 387 - PCIE_CORE_DEV_CTRL_STATS_MAX_RD_REQ_SIZE_SHIFT); 388 - advk_writel(pcie, reg, PCIE_CORE_DEV_CTRL_STATS_REG); 278 + /* Set PCIe Device Control register */ 279 + reg = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL); 280 + reg &= ~PCI_EXP_DEVCTL_RELAX_EN;
281 + reg &= ~PCI_EXP_DEVCTL_NOSNOOP_EN; 282 + reg &= ~PCI_EXP_DEVCTL_READRQ; 283 + reg |= PCI_EXP_DEVCTL_PAYLOAD; /* Set max payload size */ 284 + reg |= PCI_EXP_DEVCTL_READRQ_512B; 285 + advk_writel(pcie, reg, PCIE_CORE_PCIEXP_CAP + PCI_EXP_DEVCTL); 389 286 390 287 /* Program PCIe Control 2 to disable strict ordering */ 391 288 reg = PCIE_CORE_CTRL2_RESERVED | 392 289 PCIE_CORE_CTRL2_TD_ENABLE; 393 290 advk_writel(pcie, reg, PCIE_CORE_CTRL2_REG); 394 291 395 - /* Set GEN2 */ 396 - reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); 397 - reg &= ~PCIE_GEN_SEL_MSK; 398 - reg |= SPEED_GEN_2; 399 - advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); 400 - 401 292 /* Set lane X1 */ 402 293 reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); 403 294 reg &= ~LANE_CNT_MSK; 404 295 reg |= LANE_COUNT_1; 405 - advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); 406 - 407 - /* Enable link training */ 408 - reg = advk_readl(pcie, PCIE_CORE_CTRL0_REG); 409 - reg |= LINK_TRAINING_EN; 410 296 advk_writel(pcie, reg, PCIE_CORE_CTRL0_REG); 411 297 412 298 /* Enable MSI */ ··· 434 340 435 341 /* 436 342 * PERST# signal could have been asserted by pinctrl subsystem before 437 - * probe() callback has been called, making the endpoint going into 343 + * probe() callback has been called or issued explicitly by reset gpio 344 + * function advk_pcie_issue_perst(), making the endpoint going into 438 345 * fundamental reset. As required by PCI Express spec a delay for at 439 346 * least 100ms after such a reset before link training is needed.
440 347 */ 441 348 msleep(PCI_PM_D3COLD_WAIT); 442 349 443 - /* Start link training */ 444 - reg = advk_readl(pcie, PCIE_CORE_LINK_CTRL_STAT_REG); 445 - reg |= PCIE_CORE_LINK_TRAINING; 446 - advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG); 350 + advk_pcie_train_link(pcie); 447 351 448 - advk_pcie_wait_for_link(pcie); 449 - 450 - reg = PCIE_CORE_LINK_L0S_ENTRY | 451 - (1 << PCIE_CORE_LINK_WIDTH_SHIFT); 452 - advk_writel(pcie, reg, PCIE_CORE_LINK_CTRL_STAT_REG); 453 - 352 + /* 353 + * FIXME: The following register update is suspicious. This register is 354 + * applicable only when the PCI controller is configured for Endpoint 355 + * mode, not as a Root Complex. But apparently when this code is 356 + * removed, some cards stop working. This should be investigated and 357 + * a comment explaining this should be put here. 358 + */ 454 359 reg = advk_readl(pcie, PCIE_CORE_CMD_STATUS_REG); 455 360 reg |= PCIE_CORE_CMD_MEM_ACCESS_EN | 456 361 PCIE_CORE_CMD_IO_ACCESS_EN | ··· 1045 952 return IRQ_HANDLED; 1046 953 } 1047 954 955 + static void __maybe_unused advk_pcie_disable_phy(struct advk_pcie *pcie) 956 + { 957 + phy_power_off(pcie->phy); 958 + phy_exit(pcie->phy); 959 + } 960 + 961 + static int advk_pcie_enable_phy(struct advk_pcie *pcie) 962 + { 963 + int ret; 964 + 965 + if (!pcie->phy) 966 + return 0; 967 + 968 + ret = phy_init(pcie->phy); 969 + if (ret) 970 + return ret; 971 + 972 + ret = phy_set_mode(pcie->phy, PHY_MODE_PCIE); 973 + if (ret) { 974 + phy_exit(pcie->phy); 975 + return ret; 976 + } 977 + 978 + ret = phy_power_on(pcie->phy); 979 + if (ret) { 980 + phy_exit(pcie->phy); 981 + return ret; 982 + } 983 + 984 + return 0; 985 + } 986 + 987 + static int advk_pcie_setup_phy(struct advk_pcie *pcie) 988 + { 989 + struct device *dev = &pcie->pdev->dev; 990 + struct device_node *node = dev->of_node; 991 + int ret = 0; 992 + 993 + pcie->phy = devm_of_phy_get(dev, node, NULL); 994 + if (IS_ERR(pcie->phy) && (PTR_ERR(pcie->phy) == -EPROBE_DEFER)) 995 + return PTR_ERR(pcie->phy);
996 + 997 + /* Old bindings miss the PHY handle */ 998 + if (IS_ERR(pcie->phy)) { 999 + dev_warn(dev, "PHY unavailable (%ld)\n", PTR_ERR(pcie->phy)); 1000 + pcie->phy = NULL; 1001 + return 0; 1002 + } 1003 + 1004 + ret = advk_pcie_enable_phy(pcie); 1005 + if (ret) 1006 + dev_err(dev, "Failed to initialize PHY (%d)\n", ret); 1007 + 1008 + return ret; 1009 + } 1010 + 1048 1011 static int advk_pcie_probe(struct platform_device *pdev) 1049 1012 { 1050 1013 struct device *dev = &pdev->dev; ··· 1122 973 return PTR_ERR(pcie->base); 1123 974 1124 975 irq = platform_get_irq(pdev, 0); 976 + if (irq < 0) 977 + return irq; 978 + 1125 979 ret = devm_request_irq(dev, irq, advk_pcie_irq_handler, 1126 980 IRQF_SHARED | IRQF_NO_THREAD, "advk-pcie", 1127 981 pcie); ··· 1140 988 return ret; 1141 989 } 1142 990 pcie->root_bus_nr = bus->start; 991 + 992 + pcie->reset_gpio = devm_gpiod_get_from_of_node(dev, dev->of_node, 993 + "reset-gpios", 0, 994 + GPIOD_OUT_LOW, 995 + "pcie1-reset"); 996 + ret = PTR_ERR_OR_ZERO(pcie->reset_gpio); 997 + if (ret) { 998 + if (ret == -ENOENT) { 999 + pcie->reset_gpio = NULL; 1000 + } else { 1001 + if (ret != -EPROBE_DEFER) 1002 + dev_err(dev, "Failed to get reset-gpio: %i\n", 1003 + ret); 1004 + return ret; 1005 + } 1006 + } 1007 + 1008 + ret = of_pci_get_max_link_speed(dev->of_node); 1009 + if (ret <= 0 || ret > 3) 1010 + pcie->link_gen = 3; 1011 + else 1012 + pcie->link_gen = ret; 1013 + 1014 + ret = advk_pcie_setup_phy(pcie); 1015 + if (ret) 1016 + return ret; 1143 1017 1144 1018 advk_pcie_setup_hw(pcie); 1145 1019
drivers/pci/controller/pci-host-common.c | +14 -4
···
  */
 
 #include <linux/kernel.h>
+#include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_device.h>
 #include <linux/of_pci.h>
 #include <linux/pci-ecam.h>
 #include <linux/platform_device.h>
···
 }
 
 static struct pci_config_window *gen_pci_init(struct device *dev,
-		struct list_head *resources, struct pci_ecam_ops *ops)
+		struct list_head *resources, const struct pci_ecam_ops *ops)
 {
 	int err;
 	struct resource cfgres;
···
 	return ERR_PTR(err);
 }
 
-int pci_host_common_probe(struct platform_device *pdev,
-			  struct pci_ecam_ops *ops)
+int pci_host_common_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 	struct pci_host_bridge *bridge;
 	struct pci_config_window *cfg;
 	struct list_head resources;
+	const struct pci_ecam_ops *ops;
 	int ret;
+
+	ops = of_device_get_match_data(&pdev->dev);
+	if (!ops)
+		return -ENODEV;
 
 	bridge = devm_pci_alloc_host_bridge(dev, 0);
 	if (!bridge)
···
 	bridge->dev.parent = dev;
 	bridge->sysdata = cfg;
 	bridge->busnr = cfg->busr.start;
-	bridge->ops = &ops->pci_ops;
+	bridge->ops = (struct pci_ops *)&ops->pci_ops;
 	bridge->map_irq = of_irq_parse_and_map_pci;
 	bridge->swizzle_irq = pci_common_swizzle;
 
···
 	platform_set_drvdata(pdev, bridge->bus);
 	return 0;
 }
+EXPORT_SYMBOL_GPL(pci_host_common_probe);
 
 int pci_host_common_remove(struct platform_device *pdev)
 {
···
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(pci_host_common_remove);
+
+MODULE_LICENSE("GPL v2");
drivers/pci/controller/pci-host-generic.c | +8 -18
···
 
 #include <linux/kernel.h>
 #include <linux/init.h>
-#include <linux/of_address.h>
-#include <linux/of_pci.h>
+#include <linux/module.h>
 #include <linux/pci-ecam.h>
 #include <linux/platform_device.h>
 
-static struct pci_ecam_ops gen_pci_cfg_cam_bus_ops = {
+static const struct pci_ecam_ops gen_pci_cfg_cam_bus_ops = {
 	.bus_shift	= 16,
 	.pci_ops	= {
 		.map_bus	= pci_ecam_map_bus,
···
 	return pci_ecam_map_bus(bus, devfn, where);
 }
 
-static struct pci_ecam_ops pci_dw_ecam_bus_ops = {
+static const struct pci_ecam_ops pci_dw_ecam_bus_ops = {
 	.bus_shift	= 20,
 	.pci_ops	= {
 		.map_bus	= pci_dw_ecam_map_bus,
···
 
 	{ },
 };
-
-static int gen_pci_probe(struct platform_device *pdev)
-{
-	const struct of_device_id *of_id;
-	struct pci_ecam_ops *ops;
-
-	of_id = of_match_node(gen_pci_of_match, pdev->dev.of_node);
-	ops = (struct pci_ecam_ops *)of_id->data;
-
-	return pci_host_common_probe(pdev, ops);
-}
+MODULE_DEVICE_TABLE(of, gen_pci_of_match);
 
 static struct platform_driver gen_pci_driver = {
 	.driver = {
 		.name = "pci-host-generic",
 		.of_match_table = gen_pci_of_match,
-		.suppress_bind_attrs = true,
 	},
-	.probe = gen_pci_probe,
+	.probe = pci_host_common_probe,
 	.remove = pci_host_common_remove,
 };
-builtin_platform_driver(gen_pci_driver);
+module_platform_driver(gen_pci_driver);
+
+MODULE_LICENSE("GPL v2");
drivers/pci/controller/pci-hyperv.c | +62 -20
···
 
 	struct workqueue_struct *wq;
 
+	/* Highest slot of child device with resources allocated */
+	int wslot_res_allocated;
+
 	/* hypercall arg, must not cross page boundary */
 	struct hv_retarget_device_interrupt retarget_msi_interrupt_params;
 
···
 	struct hv_dr_state *dr;
 	int i;
 
-	dr = kzalloc(offsetof(struct hv_dr_state, func) +
-		     (sizeof(struct hv_pcidev_description) *
-		      (relations->device_count)), GFP_NOWAIT);
-
+	dr = kzalloc(struct_size(dr, func, relations->device_count),
+		     GFP_NOWAIT);
 	if (!dr)
 		return;
 
···
 	struct hv_dr_state *dr;
 	int i;
 
-	dr = kzalloc(offsetof(struct hv_dr_state, func) +
-		     (sizeof(struct hv_pcidev_description) *
-		      (relations->device_count)), GFP_NOWAIT);
-
+	dr = kzalloc(struct_size(dr, func, relations->device_count),
+		     GFP_NOWAIT);
 	if (!dr)
 		return;
 
···
 
 		bus_rel = (struct pci_bus_relations *)buffer;
 		if (bytes_recvd <
-		    offsetof(struct pci_bus_relations, func) +
-		    (sizeof(struct pci_function_description) *
-		     (bus_rel->device_count))) {
+			struct_size(bus_rel, func,
+				    bus_rel->device_count)) {
 			dev_err(&hbus->hdev->device,
 				"bus relations too small\n");
 			break;
···
 
 		bus_rel2 = (struct pci_bus_relations2 *)buffer;
 		if (bytes_recvd <
-		    offsetof(struct pci_bus_relations2, func) +
-		    (sizeof(struct pci_function_description2) *
-		     (bus_rel2->device_count))) {
+			struct_size(bus_rel2, func,
+				    bus_rel2->device_count)) {
 			dev_err(&hbus->hdev->device,
 				"bus relations v2 too small\n");
 			break;
···
 	vmbus_free_mmio(hbus->mem_config->start, PCI_CONFIG_MMIO_LENGTH);
 }
 
+static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs);
+
 /**
  * hv_pci_enter_d0() - Bring the "bus" into the D0 power state
  * @hdev: VMBus's tracking struct for this root PCI bus
···
 	struct pci_bus_d0_entry *d0_entry;
 	struct hv_pci_compl comp_pkt;
 	struct pci_packet *pkt;
+	bool retry = true;
 	int ret;
 
+enter_d0_retry:
 	/*
 	 * Tell the host that the bus is ready to use, and moved into the
 	 * powered-on state.  This includes telling the host which region
···
 
 	if (ret)
 		goto exit;
+
+	/*
+	 * In certain case (Kdump) the pci device of interest was
+	 * not cleanly shut down and resource is still held on host
+	 * side, the host could return invalid device status.
+	 * We need to explicitly request host to release the resource
+	 * and try to enter D0 again.
+	 */
+	if (comp_pkt.completion_status < 0 && retry) {
+		retry = false;
+
+		dev_err(&hdev->device, "Retrying D0 Entry\n");
+
+		/*
+		 * Hv_pci_bus_exit() calls hv_send_resource_released()
+		 * to free up resources of its child devices.
+		 * In the kdump kernel we need to set the
+		 * wslot_res_allocated to 255 so it scans all child
+		 * devices to release resources allocated in the
+		 * normal kernel before panic happened.
+		 */
+		hbus->wslot_res_allocated = 255;
+
+		ret = hv_pci_bus_exit(hdev, true);
+
+		if (ret == 0) {
+			kfree(pkt);
+			goto enter_d0_retry;
+		}
+		dev_err(&hdev->device,
+			"Retrying D0 failed with ret %d\n", ret);
+	}
 
 	if (comp_pkt.completion_status < 0) {
 		dev_err(&hdev->device,
···
 	struct hv_pci_dev *hpdev;
 	struct pci_packet *pkt;
 	size_t size_res;
-	u32 wslot;
+	int wslot;
 	int ret;
 
 	size_res = (hbus->protocol_version < PCI_PROTOCOL_VERSION_1_2)
···
 				comp_pkt.completion_status);
 			break;
 		}
+
+		hbus->wslot_res_allocated = wslot;
 	}
 
 	kfree(pkt);
···
 	struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);
 	struct pci_child_message pkt;
 	struct hv_pci_dev *hpdev;
-	u32 wslot;
+	int wslot;
 	int ret;
 
-	for (wslot = 0; wslot < 256; wslot++) {
+	for (wslot = hbus->wslot_res_allocated; wslot >= 0; wslot--) {
 		hpdev = get_pcichild_wslot(hbus, wslot);
 		if (!hpdev)
 			continue;
···
 				       VM_PKT_DATA_INBAND, 0);
 		if (ret)
 			return ret;
+
+		hbus->wslot_res_allocated = wslot - 1;
 	}
+
+	hbus->wslot_res_allocated = -1;
 
 	return 0;
 }
···
 	if (!hbus)
 		return -ENOMEM;
 	hbus->state = hv_pcibus_init;
+	hbus->wslot_res_allocated = -1;
 
 	/*
 	 * The PCI bus "domain" is what is called "segment" in ACPI and other
···
 
 	ret = hv_pci_allocate_bridge_windows(hbus);
 	if (ret)
-		goto free_irq_domain;
+		goto exit_d0;
 
 	ret = hv_send_resources_allocated(hdev);
 	if (ret)
···
 
 free_windows:
 	hv_pci_free_bridge_windows(hbus);
+exit_d0:
+	(void) hv_pci_bus_exit(hdev, true);
 free_irq_domain:
 	irq_domain_remove(hbus->irq_domain);
 free_fwnode:
···
 	return ret;
 }
 
-static int hv_pci_bus_exit(struct hv_device *hdev, bool hibernating)
+static int hv_pci_bus_exit(struct hv_device *hdev, bool keep_devs)
 {
 	struct hv_pcibus_device *hbus = hv_get_drvdata(hdev);
 	struct {
···
 	if (hdev->channel->rescind)
 		return 0;
 
-	if (!hibernating) {
+	if (!keep_devs) {
 		/* Delete any children which might still exist. */
 		dr = kzalloc(sizeof(*dr), GFP_KERNEL);
 		if (dr && hv_pci_start_relations_work(hbus, dr))
drivers/pci/controller/pci-tegra.c | +3 -4
···
 			if (PTR_ERR(rp->reset_gpio) == -ENOENT) {
 				rp->reset_gpio = NULL;
 			} else {
-				dev_err(dev, "failed to get reset GPIO: %d\n",
-					err);
+				dev_err(dev, "failed to get reset GPIO: %ld\n",
+					PTR_ERR(rp->reset_gpio));
 				return PTR_ERR(rp->reset_gpio);
 			}
 		}
···
 	err = pm_runtime_get_sync(pcie->dev);
 	if (err < 0) {
 		dev_err(dev, "fail to enable pcie controller: %d\n", err);
-		goto teardown_msi;
+		goto pm_runtime_put;
 	}
 
 	host->busnr = bus->start;
···
 pm_runtime_put:
 	pm_runtime_put_sync(pcie->dev);
 	pm_runtime_disable(pcie->dev);
-teardown_msi:
 	tegra_pcie_msi_teardown(pcie);
 put_resources:
 	tegra_pcie_put_resources(pcie);
drivers/pci/controller/pci-thunder-ecam.c | +6 -8
···
 	return pci_generic_config_write(bus, devfn, where, size, val);
 }
 
-struct pci_ecam_ops pci_thunder_ecam_ops = {
+const struct pci_ecam_ops pci_thunder_ecam_ops = {
 	.bus_shift	= 20,
 	.pci_ops	= {
 		.map_bus	= pci_ecam_map_bus,
···
 #ifdef CONFIG_PCI_HOST_THUNDER_ECAM
 
 static const struct of_device_id thunder_ecam_of_match[] = {
-	{ .compatible = "cavium,pci-host-thunder-ecam" },
+	{
+		.compatible = "cavium,pci-host-thunder-ecam",
+		.data = &pci_thunder_ecam_ops,
+	},
 	{ },
 };
-
-static int thunder_ecam_probe(struct platform_device *pdev)
-{
-	return pci_host_common_probe(pdev, &pci_thunder_ecam_ops);
-}
 
 static struct platform_driver thunder_ecam_driver = {
 	.driver = {
···
 		.of_match_table = thunder_ecam_of_match,
 		.suppress_bind_attrs = true,
 	},
-	.probe = thunder_ecam_probe,
+	.probe = pci_host_common_probe,
 };
 builtin_platform_driver(thunder_ecam_driver);
drivers/pci/controller/pci-thunder-pem.c | +7 -9
···
 	return thunder_pem_init(dev, cfg, res_pem);
 }
 
-struct pci_ecam_ops thunder_pem_ecam_ops = {
+const struct pci_ecam_ops thunder_pem_ecam_ops = {
 	.bus_shift	= 24,
 	.init		= thunder_pem_acpi_init,
 	.pci_ops	= {
···
 	return thunder_pem_init(dev, cfg, res_pem);
 }
 
-static struct pci_ecam_ops pci_thunder_pem_ops = {
+static const struct pci_ecam_ops pci_thunder_pem_ops = {
 	.bus_shift	= 24,
 	.init		= thunder_pem_platform_init,
 	.pci_ops	= {
···
 };
 
 static const struct of_device_id thunder_pem_of_match[] = {
-	{ .compatible = "cavium,pci-host-thunder-pem" },
+	{
+		.compatible = "cavium,pci-host-thunder-pem",
+		.data = &pci_thunder_pem_ops,
+	},
 	{ },
 };
-
-static int thunder_pem_probe(struct platform_device *pdev)
-{
-	return pci_host_common_probe(pdev, &pci_thunder_pem_ops);
-}
 
 static struct platform_driver thunder_pem_driver = {
 	.driver = {
···
 		.of_match_table = thunder_pem_of_match,
 		.suppress_bind_attrs = true,
 	},
-	.probe = thunder_pem_probe,
+	.probe = pci_host_common_probe,
 };
 builtin_platform_driver(thunder_pem_driver);
drivers/pci/controller/pci-v3-semi.c | +3 -3
···
 	int irq;
 	int ret;
 
-	host = pci_alloc_host_bridge(sizeof(*v3));
+	host = devm_pci_alloc_host_bridge(dev, sizeof(*v3));
 	if (!host)
 		return -ENOMEM;
 
···
 
 	/* Get and request error IRQ resource */
 	irq = platform_get_irq(pdev, 0);
-	if (irq <= 0) {
+	if (irq < 0) {
 		dev_err(dev, "unable to obtain PCIv3 error IRQ\n");
-		return -ENODEV;
+		return irq;
 	}
 	ret = devm_request_irq(dev, irq, v3_irq, 0,
 			       "PCIv3 error", v3);
drivers/pci/controller/pci-xgene.c | +2 -2
···
 	return xgene_pcie_ecam_init(cfg, XGENE_PCIE_IP_VER_1);
 }
 
-struct pci_ecam_ops xgene_v1_pcie_ecam_ops = {
+const struct pci_ecam_ops xgene_v1_pcie_ecam_ops = {
 	.bus_shift	= 16,
 	.init		= xgene_v1_pcie_ecam_init,
 	.pci_ops	= {
···
 	return xgene_pcie_ecam_init(cfg, XGENE_PCIE_IP_VER_2);
 }
 
-struct pci_ecam_ops xgene_v2_pcie_ecam_ops = {
+const struct pci_ecam_ops xgene_v2_pcie_ecam_ops = {
 	.bus_shift	= 16,
 	.init		= xgene_v2_pcie_ecam_init,
 	.pci_ops	= {
drivers/pci/controller/pcie-altera.c | +1 -1
···
 	if (bus->number == pcie->root_bus_nr && dev > 0)
 		return false;
 
-	return true;
+	return true;
 }
 
 static int tlp_read_packet(struct altera_pcie *pcie, u32 *value)
drivers/pci/controller/pcie-brcmstb.c | +33 -4
···
 #include <linux/string.h>
 #include <linux/types.h>
 
+#include <soc/bcm2835/raspberrypi-firmware.h>
+
 #include "../pci.h"
 
 /* BRCM_PCIE_CAP_REGS - Offset for the mandatory capability config regs */
···
 
 #define PCIE_RC_CFG_PRIV1_ID_VAL3			0x043c
 #define PCIE_RC_CFG_PRIV1_ID_VAL3_CLASS_CODE_MASK	0xffffff
+
+#define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY			0x04dc
+#define PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK	0xc00
 
 #define PCIE_RC_DL_MDIO_ADDR				0x1100
 #define PCIE_RC_DL_MDIO_WR_DATA				0x1104
···
 
 #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO		0x400c
 #define PCIE_MEM_WIN0_LO(win)	\
-		PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO + ((win) * 4)
+		PCIE_MISC_CPU_2_PCIE_MEM_WIN0_LO + ((win) * 8)
 
 #define PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI		0x4010
 #define PCIE_MEM_WIN0_HI(win)	\
-		PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI + ((win) * 4)
+		PCIE_MISC_CPU_2_PCIE_MEM_WIN0_HI + ((win) * 8)
 
 #define PCIE_MISC_RC_BAR1_CONFIG_LO			0x402c
 #define PCIE_MISC_RC_BAR1_CONFIG_LO_SIZE_MASK		0x1f
···
 	int num_out_wins = 0;
 	u16 nlw, cls, lnksta;
 	int i, ret;
-	u32 tmp;
+	u32 tmp, aspm_support;
 
 	/* Reset the bridge */
 	brcm_pcie_bridge_sw_init_set(pcie, 1);
+	brcm_pcie_perst_set(pcie, 1);
 
 	usleep_range(100, 200);
 
···
 		num_out_wins++;
 	}
 
+	/* Don't advertise L0s capability if 'aspm-no-l0s' */
+	aspm_support = PCIE_LINK_STATE_L1;
+	if (!of_property_read_bool(pcie->np, "aspm-no-l0s"))
+		aspm_support |= PCIE_LINK_STATE_L0S;
+	tmp = readl(base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
+	u32p_replace_bits(&tmp, aspm_support,
+		PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK);
+	writel(tmp, base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
+
 	/*
 	 * For config space accesses on the RC, show the right class for
 	 * a PCIe-PCIe bridge (the default setting is to be EP mode).
···
 	brcm_msi_remove(pcie);
 	brcm_pcie_turn_off(pcie);
 	clk_disable_unprepare(pcie->clk);
-	clk_put(pcie->clk);
 }
 
 static int brcm_pcie_remove(struct platform_device *pdev)
···
 {
 	struct device_node *np = pdev->dev.of_node, *msi_np;
 	struct pci_host_bridge *bridge;
+	struct device_node *fw_np;
 	struct brcm_pcie *pcie;
 	struct pci_bus *child;
 	struct resource *res;
 	int ret;
+
+	/*
+	 * We have to wait for Raspberry Pi's firmware interface to be up as a
+	 * PCI fixup, rpi_firmware_init_vl805(), depends on it. This driver's
+	 * probe can race with the firmware interface's (see
+	 * drivers/firmware/raspberrypi.c) and potentially break the PCI fixup.
+	 */
+	fw_np = of_find_compatible_node(NULL, NULL,
+					"raspberrypi,bcm2835-firmware");
+	if (fw_np && !rpi_firmware_get(fw_np)) {
+		of_node_put(fw_np);
+		return -EPROBE_DEFER;
+	}
+	of_node_put(fw_np);
 
 	bridge = devm_pci_alloc_host_bridge(&pdev->dev, sizeof(*pcie));
 	if (!bridge)
drivers/pci/controller/pcie-mediatek.c | +3
···
 	}
 
 	port->irq = platform_get_irq(pdev, port->slot);
+	if (port->irq < 0)
+		return port->irq;
+
 	irq_set_chained_handler_and_data(port->irq,
 					 mtk_pcie_intr_handler, port);
 
drivers/pci/controller/pcie-rcar-ep.c | +563 (new file)
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * PCIe endpoint driver for Renesas R-Car SoCs 4 + * Copyright (c) 2020 Renesas Electronics Europe GmbH 5 + * 6 + * Author: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com> 7 + */ 8 + 9 + #include <linux/clk.h> 10 + #include <linux/delay.h> 11 + #include <linux/of_address.h> 12 + #include <linux/of_irq.h> 13 + #include <linux/of_pci.h> 14 + #include <linux/of_platform.h> 15 + #include <linux/pci.h> 16 + #include <linux/pci-epc.h> 17 + #include <linux/phy/phy.h> 18 + #include <linux/platform_device.h> 19 + 20 + #include "pcie-rcar.h" 21 + 22 + #define RCAR_EPC_MAX_FUNCTIONS 1 23 + 24 + /* Structure representing the PCIe interface */ 25 + struct rcar_pcie_endpoint { 26 + struct rcar_pcie pcie; 27 + phys_addr_t *ob_mapped_addr; 28 + struct pci_epc_mem_window *ob_window; 29 + u8 max_functions; 30 + unsigned int bar_to_atu[MAX_NR_INBOUND_MAPS]; 31 + unsigned long *ib_window_map; 32 + u32 num_ib_windows; 33 + u32 num_ob_windows; 34 + }; 35 + 36 + static void rcar_pcie_ep_hw_init(struct rcar_pcie *pcie) 37 + { 38 + u32 val; 39 + 40 + rcar_pci_write_reg(pcie, 0, PCIETCTLR); 41 + 42 + /* Set endpoint mode */ 43 + rcar_pci_write_reg(pcie, 0, PCIEMSR); 44 + 45 + /* Initialize default capabilities. 
*/ 46 + rcar_rmw32(pcie, REXPCAP(0), 0xff, PCI_CAP_ID_EXP); 47 + rcar_rmw32(pcie, REXPCAP(PCI_EXP_FLAGS), 48 + PCI_EXP_FLAGS_TYPE, PCI_EXP_TYPE_ENDPOINT << 4); 49 + rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), 0x7f, 50 + PCI_HEADER_TYPE_NORMAL); 51 + 52 + /* Write out the physical slot number = 0 */ 53 + rcar_rmw32(pcie, REXPCAP(PCI_EXP_SLTCAP), PCI_EXP_SLTCAP_PSN, 0); 54 + 55 + val = rcar_pci_read_reg(pcie, EXPCAP(1)); 56 + /* device supports fixed 128 bytes MPSS */ 57 + val &= ~GENMASK(2, 0); 58 + rcar_pci_write_reg(pcie, val, EXPCAP(1)); 59 + 60 + val = rcar_pci_read_reg(pcie, EXPCAP(2)); 61 + /* read requests size 128 bytes */ 62 + val &= ~GENMASK(14, 12); 63 + /* payload size 128 bytes */ 64 + val &= ~GENMASK(7, 5); 65 + rcar_pci_write_reg(pcie, val, EXPCAP(2)); 66 + 67 + /* Set target link speed to 5.0 GT/s */ 68 + rcar_rmw32(pcie, EXPCAP(12), PCI_EXP_LNKSTA_CLS, 69 + PCI_EXP_LNKSTA_CLS_5_0GB); 70 + 71 + /* Set the completion timer timeout to the maximum 50ms. */ 72 + rcar_rmw32(pcie, TLCTLR + 1, 0x3f, 50); 73 + 74 + /* Terminate list of capabilities (Next Capability Offset=0) */ 75 + rcar_rmw32(pcie, RVCCAP(0), 0xfff00000, 0); 76 + 77 + /* flush modifications */ 78 + wmb(); 79 + } 80 + 81 + static int rcar_pcie_ep_get_window(struct rcar_pcie_endpoint *ep, 82 + phys_addr_t addr) 83 + { 84 + int i; 85 + 86 + for (i = 0; i < ep->num_ob_windows; i++) 87 + if (ep->ob_window[i].phys_base == addr) 88 + return i; 89 + 90 + return -EINVAL; 91 + } 92 + 93 + static int rcar_pcie_parse_outbound_ranges(struct rcar_pcie_endpoint *ep, 94 + struct platform_device *pdev) 95 + { 96 + struct rcar_pcie *pcie = &ep->pcie; 97 + char outbound_name[10]; 98 + struct resource *res; 99 + unsigned int i = 0; 100 + 101 + ep->num_ob_windows = 0; 102 + for (i = 0; i < RCAR_PCI_MAX_RESOURCES; i++) { 103 + sprintf(outbound_name, "memory%u", i); 104 + res = platform_get_resource_byname(pdev, 105 + IORESOURCE_MEM, 106 + outbound_name); 107 + if (!res) { 108 + dev_err(pcie->dev, "missing outbound 
window %u\n", i); 109 + return -EINVAL; 110 + } 111 + if (!devm_request_mem_region(&pdev->dev, res->start, 112 + resource_size(res), 113 + outbound_name)) { 114 + dev_err(pcie->dev, "Cannot request memory region %s.\n", 115 + outbound_name); 116 + return -EIO; 117 + } 118 + 119 + ep->ob_window[i].phys_base = res->start; 120 + ep->ob_window[i].size = resource_size(res); 121 + /* controller doesn't support multiple allocation 122 + * from same window, so set page_size to window size 123 + */ 124 + ep->ob_window[i].page_size = resource_size(res); 125 + } 126 + ep->num_ob_windows = i; 127 + 128 + return 0; 129 + } 130 + 131 + static int rcar_pcie_ep_get_pdata(struct rcar_pcie_endpoint *ep, 132 + struct platform_device *pdev) 133 + { 134 + struct rcar_pcie *pcie = &ep->pcie; 135 + struct pci_epc_mem_window *window; 136 + struct device *dev = pcie->dev; 137 + struct resource res; 138 + int err; 139 + 140 + err = of_address_to_resource(dev->of_node, 0, &res); 141 + if (err) 142 + return err; 143 + pcie->base = devm_ioremap_resource(dev, &res); 144 + if (IS_ERR(pcie->base)) 145 + return PTR_ERR(pcie->base); 146 + 147 + ep->ob_window = devm_kcalloc(dev, RCAR_PCI_MAX_RESOURCES, 148 + sizeof(*window), GFP_KERNEL); 149 + if (!ep->ob_window) 150 + return -ENOMEM; 151 + 152 + rcar_pcie_parse_outbound_ranges(ep, pdev); 153 + 154 + err = of_property_read_u8(dev->of_node, "max-functions", 155 + &ep->max_functions); 156 + if (err < 0 || ep->max_functions > RCAR_EPC_MAX_FUNCTIONS) 157 + ep->max_functions = RCAR_EPC_MAX_FUNCTIONS; 158 + 159 + return 0; 160 + } 161 + 162 + static int rcar_pcie_ep_write_header(struct pci_epc *epc, u8 fn, 163 + struct pci_epf_header *hdr) 164 + { 165 + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); 166 + struct rcar_pcie *pcie = &ep->pcie; 167 + u32 val; 168 + 169 + if (!fn) 170 + val = hdr->vendorid; 171 + else 172 + val = rcar_pci_read_reg(pcie, IDSETR0); 173 + val |= hdr->deviceid << 16; 174 + rcar_pci_write_reg(pcie, val, IDSETR0); 175 + 176 + 
val = hdr->revid; 177 + val |= hdr->progif_code << 8; 178 + val |= hdr->subclass_code << 16; 179 + val |= hdr->baseclass_code << 24; 180 + rcar_pci_write_reg(pcie, val, IDSETR1); 181 + 182 + if (!fn) 183 + val = hdr->subsys_vendor_id; 184 + else 185 + val = rcar_pci_read_reg(pcie, SUBIDSETR); 186 + val |= hdr->subsys_id << 16; 187 + rcar_pci_write_reg(pcie, val, SUBIDSETR); 188 + 189 + if (hdr->interrupt_pin > PCI_INTERRUPT_INTA) 190 + return -EINVAL; 191 + val = rcar_pci_read_reg(pcie, PCICONF(15)); 192 + val |= (hdr->interrupt_pin << 8); 193 + rcar_pci_write_reg(pcie, val, PCICONF(15)); 194 + 195 + return 0; 196 + } 197 + 198 + static int rcar_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, 199 + struct pci_epf_bar *epf_bar) 200 + { 201 + int flags = epf_bar->flags | LAR_ENABLE | LAM_64BIT; 202 + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); 203 + u64 size = 1ULL << fls64(epf_bar->size - 1); 204 + dma_addr_t cpu_addr = epf_bar->phys_addr; 205 + enum pci_barno bar = epf_bar->barno; 206 + struct rcar_pcie *pcie = &ep->pcie; 207 + u32 mask; 208 + int idx; 209 + int err; 210 + 211 + idx = find_first_zero_bit(ep->ib_window_map, ep->num_ib_windows); 212 + if (idx >= ep->num_ib_windows) { 213 + dev_err(pcie->dev, "no free inbound window\n"); 214 + return -EINVAL; 215 + } 216 + 217 + if ((flags & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_IO) 218 + flags |= IO_SPACE; 219 + 220 + ep->bar_to_atu[bar] = idx; 221 + /* use 64-bit BARs */ 222 + set_bit(idx, ep->ib_window_map); 223 + set_bit(idx + 1, ep->ib_window_map); 224 + 225 + if (cpu_addr > 0) { 226 + unsigned long nr_zeros = __ffs64(cpu_addr); 227 + u64 alignment = 1ULL << nr_zeros; 228 + 229 + size = min(size, alignment); 230 + } 231 + 232 + size = min(size, 1ULL << 32); 233 + 234 + mask = roundup_pow_of_two(size) - 1; 235 + mask &= ~0xf; 236 + 237 + rcar_pcie_set_inbound(pcie, cpu_addr, 238 + 0x0, mask | flags, idx, false); 239 + 240 + err = rcar_pcie_wait_for_phyrdy(pcie); 241 + if (err) { 242 + 
dev_err(pcie->dev, "phy not ready\n"); 243 + return -EINVAL; 244 + } 245 + 246 + return 0; 247 + } 248 + 249 + static void rcar_pcie_ep_clear_bar(struct pci_epc *epc, u8 fn, 250 + struct pci_epf_bar *epf_bar) 251 + { 252 + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); 253 + enum pci_barno bar = epf_bar->barno; 254 + u32 atu_index = ep->bar_to_atu[bar]; 255 + 256 + rcar_pcie_set_inbound(&ep->pcie, 0x0, 0x0, 0x0, bar, false); 257 + 258 + clear_bit(atu_index, ep->ib_window_map); 259 + clear_bit(atu_index + 1, ep->ib_window_map); 260 + } 261 + 262 + static int rcar_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 interrupts) 263 + { 264 + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); 265 + struct rcar_pcie *pcie = &ep->pcie; 266 + u32 flags; 267 + 268 + flags = rcar_pci_read_reg(pcie, MSICAP(fn)); 269 + flags |= interrupts << MSICAP0_MMESCAP_OFFSET; 270 + rcar_pci_write_reg(pcie, flags, MSICAP(fn)); 271 + 272 + return 0; 273 + } 274 + 275 + static int rcar_pcie_ep_get_msi(struct pci_epc *epc, u8 fn) 276 + { 277 + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); 278 + struct rcar_pcie *pcie = &ep->pcie; 279 + u32 flags; 280 + 281 + flags = rcar_pci_read_reg(pcie, MSICAP(fn)); 282 + if (!(flags & MSICAP0_MSIE)) 283 + return -EINVAL; 284 + 285 + return ((flags & MSICAP0_MMESE_MASK) >> MSICAP0_MMESE_OFFSET); 286 + } 287 + 288 + static int rcar_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, 289 + phys_addr_t addr, u64 pci_addr, size_t size) 290 + { 291 + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); 292 + struct rcar_pcie *pcie = &ep->pcie; 293 + struct resource_entry win; 294 + struct resource res; 295 + int window; 296 + int err; 297 + 298 + /* check if we have a link. 
*/ 299 + err = rcar_pcie_wait_for_dl(pcie); 300 + if (err) { 301 + dev_err(pcie->dev, "link not up\n"); 302 + return err; 303 + } 304 + 305 + window = rcar_pcie_ep_get_window(ep, addr); 306 + if (window < 0) { 307 + dev_err(pcie->dev, "failed to get corresponding window\n"); 308 + return -EINVAL; 309 + } 310 + 311 + memset(&win, 0x0, sizeof(win)); 312 + memset(&res, 0x0, sizeof(res)); 313 + res.start = pci_addr; 314 + res.end = pci_addr + size - 1; 315 + res.flags = IORESOURCE_MEM; 316 + win.res = &res; 317 + 318 + rcar_pcie_set_outbound(pcie, window, &win); 319 + 320 + ep->ob_mapped_addr[window] = addr; 321 + 322 + return 0; 323 + } 324 + 325 + static void rcar_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, 326 + phys_addr_t addr) 327 + { 328 + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); 329 + struct resource_entry win; 330 + struct resource res; 331 + int idx; 332 + 333 + for (idx = 0; idx < ep->num_ob_windows; idx++) 334 + if (ep->ob_mapped_addr[idx] == addr) 335 + break; 336 + 337 + if (idx >= ep->num_ob_windows) 338 + return; 339 + 340 + memset(&win, 0x0, sizeof(win)); 341 + memset(&res, 0x0, sizeof(res)); 342 + win.res = &res; 343 + rcar_pcie_set_outbound(&ep->pcie, idx, &win); 344 + 345 + ep->ob_mapped_addr[idx] = 0; 346 + } 347 + 348 + static int rcar_pcie_ep_assert_intx(struct rcar_pcie_endpoint *ep, 349 + u8 fn, u8 intx) 350 + { 351 + struct rcar_pcie *pcie = &ep->pcie; 352 + u32 val; 353 + 354 + val = rcar_pci_read_reg(pcie, PCIEMSITXR); 355 + if ((val & PCI_MSI_FLAGS_ENABLE)) { 356 + dev_err(pcie->dev, "MSI is enabled, cannot assert INTx\n"); 357 + return -EINVAL; 358 + } 359 + 360 + val = rcar_pci_read_reg(pcie, PCICONF(1)); 361 + if ((val & INTDIS)) { 362 + dev_err(pcie->dev, "INTx message transmission is disabled\n"); 363 + return -EINVAL; 364 + } 365 + 366 + val = rcar_pci_read_reg(pcie, PCIEINTXR); 367 + if ((val & ASTINTX)) { 368 + dev_err(pcie->dev, "INTx is already asserted\n"); 369 + return -EINVAL; 370 + } 371 + 372 + val |= ASTINTX; 
373 + rcar_pci_write_reg(pcie, val, PCIEINTXR); 374 + usleep_range(1000, 1001); 375 + val = rcar_pci_read_reg(pcie, PCIEINTXR); 376 + val &= ~ASTINTX; 377 + rcar_pci_write_reg(pcie, val, PCIEINTXR); 378 + 379 + return 0; 380 + } 381 + 382 + static int rcar_pcie_ep_assert_msi(struct rcar_pcie *pcie, 383 + u8 fn, u8 interrupt_num) 384 + { 385 + u16 msi_count; 386 + u32 val; 387 + 388 + /* Check MSI enable bit */ 389 + val = rcar_pci_read_reg(pcie, MSICAP(fn)); 390 + if (!(val & MSICAP0_MSIE)) 391 + return -EINVAL; 392 + 393 + /* Get MSI numbers from MME */ 394 + msi_count = ((val & MSICAP0_MMESE_MASK) >> MSICAP0_MMESE_OFFSET); 395 + msi_count = 1 << msi_count; 396 + 397 + if (!interrupt_num || interrupt_num > msi_count) 398 + return -EINVAL; 399 + 400 + val = rcar_pci_read_reg(pcie, PCIEMSITXR); 401 + rcar_pci_write_reg(pcie, val | (interrupt_num - 1), PCIEMSITXR); 402 + 403 + return 0; 404 + } 405 + 406 + static int rcar_pcie_ep_raise_irq(struct pci_epc *epc, u8 fn, 407 + enum pci_epc_irq_type type, 408 + u16 interrupt_num) 409 + { 410 + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); 411 + 412 + switch (type) { 413 + case PCI_EPC_IRQ_LEGACY: 414 + return rcar_pcie_ep_assert_intx(ep, fn, 0); 415 + 416 + case PCI_EPC_IRQ_MSI: 417 + return rcar_pcie_ep_assert_msi(&ep->pcie, fn, interrupt_num); 418 + 419 + default: 420 + return -EINVAL; 421 + } 422 + } 423 + 424 + static int rcar_pcie_ep_start(struct pci_epc *epc) 425 + { 426 + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); 427 + 428 + rcar_pci_write_reg(&ep->pcie, MACCTLR_INIT_VAL, MACCTLR); 429 + rcar_pci_write_reg(&ep->pcie, CFINIT, PCIETCTLR); 430 + 431 + return 0; 432 + } 433 + 434 + static void rcar_pcie_ep_stop(struct pci_epc *epc) 435 + { 436 + struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); 437 + 438 + rcar_pci_write_reg(&ep->pcie, 0, PCIETCTLR); 439 + } 440 + 441 + static const struct pci_epc_features rcar_pcie_epc_features = { 442 + .linkup_notifier = false, 443 + .msi_capable = true, 444 
+ .msix_capable = false, 445 + /* use 64-bit BARs so mark BAR[1,3,5] as reserved */ 446 + .reserved_bar = 1 << BAR_1 | 1 << BAR_3 | 1 << BAR_5, 447 + .bar_fixed_64bit = 1 << BAR_0 | 1 << BAR_2 | 1 << BAR_4, 448 + .bar_fixed_size[0] = 128, 449 + .bar_fixed_size[2] = 256, 450 + .bar_fixed_size[4] = 256, 451 + }; 452 + 453 + static const struct pci_epc_features* 454 + rcar_pcie_ep_get_features(struct pci_epc *epc, u8 func_no) 455 + { 456 + return &rcar_pcie_epc_features; 457 + } 458 + 459 + static const struct pci_epc_ops rcar_pcie_epc_ops = { 460 + .write_header = rcar_pcie_ep_write_header, 461 + .set_bar = rcar_pcie_ep_set_bar, 462 + .clear_bar = rcar_pcie_ep_clear_bar, 463 + .set_msi = rcar_pcie_ep_set_msi, 464 + .get_msi = rcar_pcie_ep_get_msi, 465 + .map_addr = rcar_pcie_ep_map_addr, 466 + .unmap_addr = rcar_pcie_ep_unmap_addr, 467 + .raise_irq = rcar_pcie_ep_raise_irq, 468 + .start = rcar_pcie_ep_start, 469 + .stop = rcar_pcie_ep_stop, 470 + .get_features = rcar_pcie_ep_get_features, 471 + }; 472 + 473 + static const struct of_device_id rcar_pcie_ep_of_match[] = { 474 + { .compatible = "renesas,r8a774c0-pcie-ep", }, 475 + { .compatible = "renesas,rcar-gen3-pcie-ep" }, 476 + { }, 477 + }; 478 + 479 + static int rcar_pcie_ep_probe(struct platform_device *pdev) 480 + { 481 + struct device *dev = &pdev->dev; 482 + struct rcar_pcie_endpoint *ep; 483 + struct rcar_pcie *pcie; 484 + struct pci_epc *epc; 485 + int err; 486 + 487 + ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL); 488 + if (!ep) 489 + return -ENOMEM; 490 + 491 + pcie = &ep->pcie; 492 + pcie->dev = dev; 493 + 494 + pm_runtime_enable(dev); 495 + err = pm_runtime_get_sync(dev); 496 + if (err < 0) { 497 + dev_err(dev, "pm_runtime_get_sync failed\n"); 498 + goto err_pm_disable; 499 + } 500 + 501 + err = rcar_pcie_ep_get_pdata(ep, pdev); 502 + if (err < 0) { 503 + dev_err(dev, "failed to request resources: %d\n", err); 504 + goto err_pm_put; 505 + } 506 + 507 + ep->num_ib_windows = MAX_NR_INBOUND_MAPS; 508 + 
ep->ib_window_map = 509 + devm_kcalloc(dev, BITS_TO_LONGS(ep->num_ib_windows), 510 + sizeof(long), GFP_KERNEL); 511 + if (!ep->ib_window_map) { 512 + err = -ENOMEM; 513 + dev_err(dev, "failed to allocate memory for inbound map\n"); 514 + goto err_pm_put; 515 + } 516 + 517 + ep->ob_mapped_addr = devm_kcalloc(dev, ep->num_ob_windows, 518 + sizeof(*ep->ob_mapped_addr), 519 + GFP_KERNEL); 520 + if (!ep->ob_mapped_addr) { 521 + err = -ENOMEM; 522 + dev_err(dev, "failed to allocate memory for outbound memory pointers\n"); 523 + goto err_pm_put; 524 + } 525 + 526 + epc = devm_pci_epc_create(dev, &rcar_pcie_epc_ops); 527 + if (IS_ERR(epc)) { 528 + dev_err(dev, "failed to create epc device\n"); 529 + err = PTR_ERR(epc); 530 + goto err_pm_put; 531 + } 532 + 533 + epc->max_functions = ep->max_functions; 534 + epc_set_drvdata(epc, ep); 535 + 536 + rcar_pcie_ep_hw_init(pcie); 537 + 538 + err = pci_epc_multi_mem_init(epc, ep->ob_window, ep->num_ob_windows); 539 + if (err < 0) { 540 + dev_err(dev, "failed to initialize the epc memory space\n"); 541 + goto err_pm_put; 542 + } 543 + 544 + return 0; 545 + 546 + err_pm_put: 547 + pm_runtime_put(dev); 548 + 549 + err_pm_disable: 550 + pm_runtime_disable(dev); 551 + 552 + return err; 553 + } 554 + 555 + static struct platform_driver rcar_pcie_ep_driver = { 556 + .driver = { 557 + .name = "rcar-pcie-ep", 558 + .of_match_table = rcar_pcie_ep_of_match, 559 + .suppress_bind_attrs = true, 560 + }, 561 + .probe = rcar_pcie_ep_probe, 562 + }; 563 + builtin_platform_driver(rcar_pcie_ep_driver);
+1130
drivers/pci/controller/pcie-rcar-host.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * PCIe driver for Renesas R-Car SoCs 4 + * Copyright (C) 2014-2020 Renesas Electronics Europe Ltd 5 + * 6 + * Based on: 7 + * arch/sh/drivers/pci/pcie-sh7786.c 8 + * arch/sh/drivers/pci/ops-sh7786.c 9 + * Copyright (C) 2009 - 2011 Paul Mundt 10 + * 11 + * Author: Phil Edworthy <phil.edworthy@renesas.com> 12 + */ 13 + 14 + #include <linux/bitops.h> 15 + #include <linux/clk.h> 16 + #include <linux/delay.h> 17 + #include <linux/interrupt.h> 18 + #include <linux/irq.h> 19 + #include <linux/irqdomain.h> 20 + #include <linux/kernel.h> 21 + #include <linux/init.h> 22 + #include <linux/msi.h> 23 + #include <linux/of_address.h> 24 + #include <linux/of_irq.h> 25 + #include <linux/of_pci.h> 26 + #include <linux/of_platform.h> 27 + #include <linux/pci.h> 28 + #include <linux/phy/phy.h> 29 + #include <linux/platform_device.h> 30 + #include <linux/pm_runtime.h> 31 + #include <linux/slab.h> 32 + 33 + #include "pcie-rcar.h" 34 + 35 + struct rcar_msi { 36 + DECLARE_BITMAP(used, INT_PCI_MSI_NR); 37 + struct irq_domain *domain; 38 + struct msi_controller chip; 39 + unsigned long pages; 40 + struct mutex lock; 41 + int irq1; 42 + int irq2; 43 + }; 44 + 45 + static inline struct rcar_msi *to_rcar_msi(struct msi_controller *chip) 46 + { 47 + return container_of(chip, struct rcar_msi, chip); 48 + } 49 + 50 + /* Structure representing the PCIe interface */ 51 + struct rcar_pcie_host { 52 + struct rcar_pcie pcie; 53 + struct device *dev; 54 + struct phy *phy; 55 + void __iomem *base; 56 + struct list_head resources; 57 + int root_bus_nr; 58 + struct clk *bus_clk; 59 + struct rcar_msi msi; 60 + int (*phy_init_fn)(struct rcar_pcie_host *host); 61 + }; 62 + 63 + static u32 rcar_read_conf(struct rcar_pcie *pcie, int where) 64 + { 65 + unsigned int shift = BITS_PER_BYTE * (where & 3); 66 + u32 val = rcar_pci_read_reg(pcie, where & ~3); 67 + 68 + return val >> shift; 69 + } 70 + 71 + /* Serialization is provided by 'pci_lock' in 
drivers/pci/access.c */ 72 + static int rcar_pcie_config_access(struct rcar_pcie_host *host, 73 + unsigned char access_type, struct pci_bus *bus, 74 + unsigned int devfn, int where, u32 *data) 75 + { 76 + struct rcar_pcie *pcie = &host->pcie; 77 + unsigned int dev, func, reg, index; 78 + 79 + dev = PCI_SLOT(devfn); 80 + func = PCI_FUNC(devfn); 81 + reg = where & ~3; 82 + index = reg / 4; 83 + 84 + /* 85 + * While each channel has its own memory-mapped extended config 86 + * space, it's generally only accessible when in endpoint mode. 87 + * When in root complex mode, the controller is unable to target 88 + * itself with either type 0 or type 1 accesses, and indeed, any 89 + * controller-initiated target transfer to its own config space 90 + * results in a completer abort. 91 + * 92 + * Each channel effectively only supports a single device, but as 93 + * the same channel <-> device access works for any PCI_SLOT() 94 + * value, we cheat a bit here and bind the controller's config 95 + * space to devfn 0 in order to enable self-enumeration. In this 96 + * case the regular ECAR/ECDR path is sidelined and the mangled 97 + * config access itself is initiated as an internal bus transaction.
98 + */ 99 + if (pci_is_root_bus(bus)) { 100 + if (dev != 0) 101 + return PCIBIOS_DEVICE_NOT_FOUND; 102 + 103 + if (access_type == RCAR_PCI_ACCESS_READ) { 104 + *data = rcar_pci_read_reg(pcie, PCICONF(index)); 105 + } else { 106 + /* Keep an eye out for changes to the root bus number */ 107 + if (pci_is_root_bus(bus) && (reg == PCI_PRIMARY_BUS)) 108 + host->root_bus_nr = *data & 0xff; 109 + 110 + rcar_pci_write_reg(pcie, *data, PCICONF(index)); 111 + } 112 + 113 + return PCIBIOS_SUCCESSFUL; 114 + } 115 + 116 + if (host->root_bus_nr < 0) 117 + return PCIBIOS_DEVICE_NOT_FOUND; 118 + 119 + /* Clear errors */ 120 + rcar_pci_write_reg(pcie, rcar_pci_read_reg(pcie, PCIEERRFR), PCIEERRFR); 121 + 122 + /* Set the PIO address */ 123 + rcar_pci_write_reg(pcie, PCIE_CONF_BUS(bus->number) | 124 + PCIE_CONF_DEV(dev) | PCIE_CONF_FUNC(func) | reg, PCIECAR); 125 + 126 + /* Enable the configuration access */ 127 + if (bus->parent->number == host->root_bus_nr) 128 + rcar_pci_write_reg(pcie, CONFIG_SEND_ENABLE | TYPE0, PCIECCTLR); 129 + else 130 + rcar_pci_write_reg(pcie, CONFIG_SEND_ENABLE | TYPE1, PCIECCTLR); 131 + 132 + /* Check for errors */ 133 + if (rcar_pci_read_reg(pcie, PCIEERRFR) & UNSUPPORTED_REQUEST) 134 + return PCIBIOS_DEVICE_NOT_FOUND; 135 + 136 + /* Check for master and target aborts */ 137 + if (rcar_read_conf(pcie, RCONF(PCI_STATUS)) & 138 + (PCI_STATUS_REC_MASTER_ABORT | PCI_STATUS_REC_TARGET_ABORT)) 139 + return PCIBIOS_DEVICE_NOT_FOUND; 140 + 141 + if (access_type == RCAR_PCI_ACCESS_READ) 142 + *data = rcar_pci_read_reg(pcie, PCIECDR); 143 + else 144 + rcar_pci_write_reg(pcie, *data, PCIECDR); 145 + 146 + /* Disable the configuration access */ 147 + rcar_pci_write_reg(pcie, 0, PCIECCTLR); 148 + 149 + return PCIBIOS_SUCCESSFUL; 150 + } 151 + 152 + static int rcar_pcie_read_conf(struct pci_bus *bus, unsigned int devfn, 153 + int where, int size, u32 *val) 154 + { 155 + struct rcar_pcie_host *host = bus->sysdata; 156 + int ret; 157 + 158 + ret = 
rcar_pcie_config_access(host, RCAR_PCI_ACCESS_READ, 159 + bus, devfn, where, val); 160 + if (ret != PCIBIOS_SUCCESSFUL) { 161 + *val = 0xffffffff; 162 + return ret; 163 + } 164 + 165 + if (size == 1) 166 + *val = (*val >> (BITS_PER_BYTE * (where & 3))) & 0xff; 167 + else if (size == 2) 168 + *val = (*val >> (BITS_PER_BYTE * (where & 2))) & 0xffff; 169 + 170 + dev_dbg(&bus->dev, "pcie-config-read: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08x\n", 171 + bus->number, devfn, where, size, *val); 172 + 173 + return ret; 174 + } 175 + 176 + /* Serialization is provided by 'pci_lock' in drivers/pci/access.c */ 177 + static int rcar_pcie_write_conf(struct pci_bus *bus, unsigned int devfn, 178 + int where, int size, u32 val) 179 + { 180 + struct rcar_pcie_host *host = bus->sysdata; 181 + unsigned int shift; 182 + u32 data; 183 + int ret; 184 + 185 + ret = rcar_pcie_config_access(host, RCAR_PCI_ACCESS_READ, 186 + bus, devfn, where, &data); 187 + if (ret != PCIBIOS_SUCCESSFUL) 188 + return ret; 189 + 190 + dev_dbg(&bus->dev, "pcie-config-write: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08x\n", 191 + bus->number, devfn, where, size, val); 192 + 193 + if (size == 1) { 194 + shift = BITS_PER_BYTE * (where & 3); 195 + data &= ~(0xff << shift); 196 + data |= ((val & 0xff) << shift); 197 + } else if (size == 2) { 198 + shift = BITS_PER_BYTE * (where & 2); 199 + data &= ~(0xffff << shift); 200 + data |= ((val & 0xffff) << shift); 201 + } else 202 + data = val; 203 + 204 + ret = rcar_pcie_config_access(host, RCAR_PCI_ACCESS_WRITE, 205 + bus, devfn, where, &data); 206 + 207 + return ret; 208 + } 209 + 210 + static struct pci_ops rcar_pcie_ops = { 211 + .read = rcar_pcie_read_conf, 212 + .write = rcar_pcie_write_conf, 213 + }; 214 + 215 + static int rcar_pcie_setup(struct list_head *resource, 216 + struct rcar_pcie_host *host) 217 + { 218 + struct resource_entry *win; 219 + int i = 0; 220 + 221 + /* Setup PCI resources */ 222 + resource_list_for_each_entry(win, 
&host->resources) { 223 + struct resource *res = win->res; 224 + 225 + if (!res->flags) 226 + continue; 227 + 228 + switch (resource_type(res)) { 229 + case IORESOURCE_IO: 230 + case IORESOURCE_MEM: 231 + rcar_pcie_set_outbound(&host->pcie, i, win); 232 + i++; 233 + break; 234 + case IORESOURCE_BUS: 235 + host->root_bus_nr = res->start; 236 + break; 237 + default: 238 + continue; 239 + } 240 + 241 + pci_add_resource(resource, res); 242 + } 243 + 244 + return 1; 245 + } 246 + 247 + static void rcar_pcie_force_speedup(struct rcar_pcie *pcie) 248 + { 249 + struct device *dev = pcie->dev; 250 + unsigned int timeout = 1000; 251 + u32 macsr; 252 + 253 + if ((rcar_pci_read_reg(pcie, MACS2R) & LINK_SPEED) != LINK_SPEED_5_0GTS) 254 + return; 255 + 256 + if (rcar_pci_read_reg(pcie, MACCTLR) & SPEED_CHANGE) { 257 + dev_err(dev, "Speed change already in progress\n"); 258 + return; 259 + } 260 + 261 + macsr = rcar_pci_read_reg(pcie, MACSR); 262 + if ((macsr & LINK_SPEED) == LINK_SPEED_5_0GTS) 263 + goto done; 264 + 265 + /* Set target link speed to 5.0 GT/s */ 266 + rcar_rmw32(pcie, EXPCAP(12), PCI_EXP_LNKSTA_CLS, 267 + PCI_EXP_LNKSTA_CLS_5_0GB); 268 + 269 + /* Set speed change reason as intentional factor */ 270 + rcar_rmw32(pcie, MACCGSPSETR, SPCNGRSN, 0); 271 + 272 + /* Clear SPCHGFIN, SPCHGSUC, and SPCHGFAIL */ 273 + if (macsr & (SPCHGFIN | SPCHGSUC | SPCHGFAIL)) 274 + rcar_pci_write_reg(pcie, macsr, MACSR); 275 + 276 + /* Start link speed change */ 277 + rcar_rmw32(pcie, MACCTLR, SPEED_CHANGE, SPEED_CHANGE); 278 + 279 + while (timeout--) { 280 + macsr = rcar_pci_read_reg(pcie, MACSR); 281 + if (macsr & SPCHGFIN) { 282 + /* Clear the interrupt bits */ 283 + rcar_pci_write_reg(pcie, macsr, MACSR); 284 + 285 + if (macsr & SPCHGFAIL) 286 + dev_err(dev, "Speed change failed\n"); 287 + 288 + goto done; 289 + } 290 + 291 + msleep(1); 292 + } 293 + 294 + dev_err(dev, "Speed change timed out\n"); 295 + 296 + done: 297 + dev_info(dev, "Current link speed is %s GT/s\n", 298 + (macsr 
& LINK_SPEED) == LINK_SPEED_5_0GTS ? "5" : "2.5"); 299 + } 300 + 301 + static void rcar_pcie_hw_enable(struct rcar_pcie_host *host) 302 + { 303 + struct rcar_pcie *pcie = &host->pcie; 304 + struct resource_entry *win; 305 + LIST_HEAD(res); 306 + int i = 0; 307 + 308 + /* Try setting 5 GT/s link speed */ 309 + rcar_pcie_force_speedup(pcie); 310 + 311 + /* Setup PCI resources */ 312 + resource_list_for_each_entry(win, &host->resources) { 313 + struct resource *res = win->res; 314 + 315 + if (!res->flags) 316 + continue; 317 + 318 + switch (resource_type(res)) { 319 + case IORESOURCE_IO: 320 + case IORESOURCE_MEM: 321 + rcar_pcie_set_outbound(pcie, i, win); 322 + i++; 323 + break; 324 + } 325 + } 326 + } 327 + 328 + static int rcar_pcie_enable(struct rcar_pcie_host *host) 329 + { 330 + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(host); 331 + struct rcar_pcie *pcie = &host->pcie; 332 + struct device *dev = pcie->dev; 333 + struct pci_bus *bus, *child; 334 + int ret; 335 + 336 + /* Try setting 5 GT/s link speed */ 337 + rcar_pcie_force_speedup(pcie); 338 + 339 + rcar_pcie_setup(&bridge->windows, host); 340 + 341 + pci_add_flags(PCI_REASSIGN_ALL_BUS); 342 + 343 + bridge->dev.parent = dev; 344 + bridge->sysdata = host; 345 + bridge->busnr = host->root_bus_nr; 346 + bridge->ops = &rcar_pcie_ops; 347 + bridge->map_irq = of_irq_parse_and_map_pci; 348 + bridge->swizzle_irq = pci_common_swizzle; 349 + if (IS_ENABLED(CONFIG_PCI_MSI)) 350 + bridge->msi = &host->msi.chip; 351 + 352 + ret = pci_scan_root_bus_bridge(bridge); 353 + if (ret < 0) 354 + return ret; 355 + 356 + bus = bridge->bus; 357 + 358 + pci_bus_size_bridges(bus); 359 + pci_bus_assign_resources(bus); 360 + 361 + list_for_each_entry(child, &bus->children, node) 362 + pcie_bus_configure_settings(child); 363 + 364 + pci_bus_add_devices(bus); 365 + 366 + return 0; 367 + } 368 + 369 + static int phy_wait_for_ack(struct rcar_pcie *pcie) 370 + { 371 + struct device *dev = pcie->dev; 372 + unsigned int 
timeout = 100; 373 + 374 + while (timeout--) { 375 + if (rcar_pci_read_reg(pcie, H1_PCIEPHYADRR) & PHY_ACK) 376 + return 0; 377 + 378 + udelay(100); 379 + } 380 + 381 + dev_err(dev, "Access to PCIe phy timed out\n"); 382 + 383 + return -ETIMEDOUT; 384 + } 385 + 386 + static void phy_write_reg(struct rcar_pcie *pcie, 387 + unsigned int rate, u32 addr, 388 + unsigned int lane, u32 data) 389 + { 390 + u32 phyaddr; 391 + 392 + phyaddr = WRITE_CMD | 393 + ((rate & 1) << RATE_POS) | 394 + ((lane & 0xf) << LANE_POS) | 395 + ((addr & 0xff) << ADR_POS); 396 + 397 + /* Set write data */ 398 + rcar_pci_write_reg(pcie, data, H1_PCIEPHYDOUTR); 399 + rcar_pci_write_reg(pcie, phyaddr, H1_PCIEPHYADRR); 400 + 401 + /* Ignore errors as they will be dealt with if the data link is down */ 402 + phy_wait_for_ack(pcie); 403 + 404 + /* Clear command */ 405 + rcar_pci_write_reg(pcie, 0, H1_PCIEPHYDOUTR); 406 + rcar_pci_write_reg(pcie, 0, H1_PCIEPHYADRR); 407 + 408 + /* Ignore errors as they will be dealt with if the data link is down */ 409 + phy_wait_for_ack(pcie); 410 + } 411 + 412 + static int rcar_pcie_hw_init(struct rcar_pcie *pcie) 413 + { 414 + int err; 415 + 416 + /* Begin initialization */ 417 + rcar_pci_write_reg(pcie, 0, PCIETCTLR); 418 + 419 + /* Set mode */ 420 + rcar_pci_write_reg(pcie, 1, PCIEMSR); 421 + 422 + err = rcar_pcie_wait_for_phyrdy(pcie); 423 + if (err) 424 + return err; 425 + 426 + /* 427 + * Initial header for port config space is type 1, set the device 428 + * class to match. Hardware takes care of propagating the IDSETR 429 + * settings, so there is no need to bother with a quirk. 430 + */ 431 + rcar_pci_write_reg(pcie, PCI_CLASS_BRIDGE_PCI << 16, IDSETR1); 432 + 433 + /* 434 + * Setup Secondary Bus Number & Subordinate Bus Number, even though 435 + * they aren't used, to avoid bridge being detected as broken. 
436 + */ 437 + rcar_rmw32(pcie, RCONF(PCI_SECONDARY_BUS), 0xff, 1); 438 + rcar_rmw32(pcie, RCONF(PCI_SUBORDINATE_BUS), 0xff, 1); 439 + 440 + /* Initialize default capabilities. */ 441 + rcar_rmw32(pcie, REXPCAP(0), 0xff, PCI_CAP_ID_EXP); 442 + rcar_rmw32(pcie, REXPCAP(PCI_EXP_FLAGS), 443 + PCI_EXP_FLAGS_TYPE, PCI_EXP_TYPE_ROOT_PORT << 4); 444 + rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), 0x7f, 445 + PCI_HEADER_TYPE_BRIDGE); 446 + 447 + /* Enable data link layer active state reporting */ 448 + rcar_rmw32(pcie, REXPCAP(PCI_EXP_LNKCAP), PCI_EXP_LNKCAP_DLLLARC, 449 + PCI_EXP_LNKCAP_DLLLARC); 450 + 451 + /* Write out the physical slot number = 0 */ 452 + rcar_rmw32(pcie, REXPCAP(PCI_EXP_SLTCAP), PCI_EXP_SLTCAP_PSN, 0); 453 + 454 + /* Set the completion timer timeout to the maximum 50ms. */ 455 + rcar_rmw32(pcie, TLCTLR + 1, 0x3f, 50); 456 + 457 + /* Terminate list of capabilities (Next Capability Offset=0) */ 458 + rcar_rmw32(pcie, RVCCAP(0), 0xfff00000, 0); 459 + 460 + /* Enable MSI */ 461 + if (IS_ENABLED(CONFIG_PCI_MSI)) 462 + rcar_pci_write_reg(pcie, 0x801f0000, PCIEMSITXR); 463 + 464 + rcar_pci_write_reg(pcie, MACCTLR_INIT_VAL, MACCTLR); 465 + 466 + /* Finish initialization - establish a PCI Express link */ 467 + rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR); 468 + 469 + /* This will timeout if we don't have a link. 
*/ 470 + err = rcar_pcie_wait_for_dl(pcie); 471 + if (err) 472 + return err; 473 + 474 + /* Enable INTx interrupts */ 475 + rcar_rmw32(pcie, PCIEINTXR, 0, 0xF << 8); 476 + 477 + wmb(); 478 + 479 + return 0; 480 + } 481 + 482 + static int rcar_pcie_phy_init_h1(struct rcar_pcie_host *host) 483 + { 484 + struct rcar_pcie *pcie = &host->pcie; 485 + 486 + /* Initialize the phy */ 487 + phy_write_reg(pcie, 0, 0x42, 0x1, 0x0EC34191); 488 + phy_write_reg(pcie, 1, 0x42, 0x1, 0x0EC34180); 489 + phy_write_reg(pcie, 0, 0x43, 0x1, 0x00210188); 490 + phy_write_reg(pcie, 1, 0x43, 0x1, 0x00210188); 491 + phy_write_reg(pcie, 0, 0x44, 0x1, 0x015C0014); 492 + phy_write_reg(pcie, 1, 0x44, 0x1, 0x015C0014); 493 + phy_write_reg(pcie, 1, 0x4C, 0x1, 0x786174A0); 494 + phy_write_reg(pcie, 1, 0x4D, 0x1, 0x048000BB); 495 + phy_write_reg(pcie, 0, 0x51, 0x1, 0x079EC062); 496 + phy_write_reg(pcie, 0, 0x52, 0x1, 0x20000000); 497 + phy_write_reg(pcie, 1, 0x52, 0x1, 0x20000000); 498 + phy_write_reg(pcie, 1, 0x56, 0x1, 0x00003806); 499 + 500 + phy_write_reg(pcie, 0, 0x60, 0x1, 0x004B03A5); 501 + phy_write_reg(pcie, 0, 0x64, 0x1, 0x3F0F1F0F); 502 + phy_write_reg(pcie, 0, 0x66, 0x1, 0x00008000); 503 + 504 + return 0; 505 + } 506 + 507 + static int rcar_pcie_phy_init_gen2(struct rcar_pcie_host *host) 508 + { 509 + struct rcar_pcie *pcie = &host->pcie; 510 + 511 + /* 512 + * These settings come from the R-Car Series, 2nd Generation User's 513 + * Manual, section 50.3.1 (2) Initialization of the physical layer. 
514 + */ 515 + rcar_pci_write_reg(pcie, 0x000f0030, GEN2_PCIEPHYADDR); 516 + rcar_pci_write_reg(pcie, 0x00381203, GEN2_PCIEPHYDATA); 517 + rcar_pci_write_reg(pcie, 0x00000001, GEN2_PCIEPHYCTRL); 518 + rcar_pci_write_reg(pcie, 0x00000006, GEN2_PCIEPHYCTRL); 519 + 520 + rcar_pci_write_reg(pcie, 0x000f0054, GEN2_PCIEPHYADDR); 521 + /* The following value is for DC connection, no termination resistor */ 522 + rcar_pci_write_reg(pcie, 0x13802007, GEN2_PCIEPHYDATA); 523 + rcar_pci_write_reg(pcie, 0x00000001, GEN2_PCIEPHYCTRL); 524 + rcar_pci_write_reg(pcie, 0x00000006, GEN2_PCIEPHYCTRL); 525 + 526 + return 0; 527 + } 528 + 529 + static int rcar_pcie_phy_init_gen3(struct rcar_pcie_host *host) 530 + { 531 + int err; 532 + 533 + err = phy_init(host->phy); 534 + if (err) 535 + return err; 536 + 537 + err = phy_power_on(host->phy); 538 + if (err) 539 + phy_exit(host->phy); 540 + 541 + return err; 542 + } 543 + 544 + static int rcar_msi_alloc(struct rcar_msi *chip) 545 + { 546 + int msi; 547 + 548 + mutex_lock(&chip->lock); 549 + 550 + msi = find_first_zero_bit(chip->used, INT_PCI_MSI_NR); 551 + if (msi < INT_PCI_MSI_NR) 552 + set_bit(msi, chip->used); 553 + else 554 + msi = -ENOSPC; 555 + 556 + mutex_unlock(&chip->lock); 557 + 558 + return msi; 559 + } 560 + 561 + static int rcar_msi_alloc_region(struct rcar_msi *chip, int no_irqs) 562 + { 563 + int msi; 564 + 565 + mutex_lock(&chip->lock); 566 + msi = bitmap_find_free_region(chip->used, INT_PCI_MSI_NR, 567 + order_base_2(no_irqs)); 568 + mutex_unlock(&chip->lock); 569 + 570 + return msi; 571 + } 572 + 573 + static void rcar_msi_free(struct rcar_msi *chip, unsigned long irq) 574 + { 575 + mutex_lock(&chip->lock); 576 + clear_bit(irq, chip->used); 577 + mutex_unlock(&chip->lock); 578 + } 579 + 580 + static irqreturn_t rcar_pcie_msi_irq(int irq, void *data) 581 + { 582 + struct rcar_pcie_host *host = data; 583 + struct rcar_pcie *pcie = &host->pcie; 584 + struct rcar_msi *msi = &host->msi; 585 + struct device *dev = pcie->dev; 
586 + unsigned long reg; 587 + 588 + reg = rcar_pci_read_reg(pcie, PCIEMSIFR); 589 + 590 + /* MSI & INTx share an interrupt - we only handle MSI here */ 591 + if (!reg) 592 + return IRQ_NONE; 593 + 594 + while (reg) { 595 + unsigned int index = find_first_bit(&reg, 32); 596 + unsigned int msi_irq; 597 + 598 + /* clear the interrupt */ 599 + rcar_pci_write_reg(pcie, 1 << index, PCIEMSIFR); 600 + 601 + msi_irq = irq_find_mapping(msi->domain, index); 602 + if (msi_irq) { 603 + if (test_bit(index, msi->used)) 604 + generic_handle_irq(msi_irq); 605 + else 606 + dev_info(dev, "unhandled MSI\n"); 607 + } else { 608 + /* Unknown MSI, just clear it */ 609 + dev_dbg(dev, "unexpected MSI\n"); 610 + } 611 + 612 + /* see if there's any more pending in this vector */ 613 + reg = rcar_pci_read_reg(pcie, PCIEMSIFR); 614 + } 615 + 616 + return IRQ_HANDLED; 617 + } 618 + 619 + static int rcar_msi_setup_irq(struct msi_controller *chip, struct pci_dev *pdev, 620 + struct msi_desc *desc) 621 + { 622 + struct rcar_msi *msi = to_rcar_msi(chip); 623 + struct rcar_pcie_host *host = container_of(chip, struct rcar_pcie_host, 624 + msi.chip); 625 + struct rcar_pcie *pcie = &host->pcie; 626 + struct msi_msg msg; 627 + unsigned int irq; 628 + int hwirq; 629 + 630 + hwirq = rcar_msi_alloc(msi); 631 + if (hwirq < 0) 632 + return hwirq; 633 + 634 + irq = irq_find_mapping(msi->domain, hwirq); 635 + if (!irq) { 636 + rcar_msi_free(msi, hwirq); 637 + return -EINVAL; 638 + } 639 + 640 + irq_set_msi_desc(irq, desc); 641 + 642 + msg.address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE; 643 + msg.address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR); 644 + msg.data = hwirq; 645 + 646 + pci_write_msi_msg(irq, &msg); 647 + 648 + return 0; 649 + } 650 + 651 + static int rcar_msi_setup_irqs(struct msi_controller *chip, 652 + struct pci_dev *pdev, int nvec, int type) 653 + { 654 + struct rcar_msi *msi = to_rcar_msi(chip); 655 + struct rcar_pcie_host *host = container_of(chip, struct rcar_pcie_host, 656 + 
msi.chip); 657 + struct rcar_pcie *pcie = &host->pcie; 658 + struct msi_desc *desc; 659 + struct msi_msg msg; 660 + unsigned int irq; 661 + int hwirq; 662 + int i; 663 + 664 + /* MSI-X interrupts are not supported */ 665 + if (type == PCI_CAP_ID_MSIX) 666 + return -EINVAL; 667 + 668 + WARN_ON(!list_is_singular(&pdev->dev.msi_list)); 669 + desc = list_entry(pdev->dev.msi_list.next, struct msi_desc, list); 670 + 671 + hwirq = rcar_msi_alloc_region(msi, nvec); 672 + if (hwirq < 0) 673 + return -ENOSPC; 674 + 675 + irq = irq_find_mapping(msi->domain, hwirq); 676 + if (!irq) 677 + return -ENOSPC; 678 + 679 + for (i = 0; i < nvec; i++) { 680 + /* 681 + * irq_create_mapping() called from rcar_pcie_probe() pre- 682 + * allocates descs, so there is no need to allocate descs here. 683 + * We can therefore assume that if irq_find_mapping() above 684 + * returns non-zero, then the descs are also successfully 685 + * allocated. 686 + */ 687 + if (irq_set_msi_desc_off(irq, i, desc)) { 688 + /* TODO: clear */ 689 + return -EINVAL; 690 + } 691 + } 692 + 693 + desc->nvec_used = nvec; 694 + desc->msi_attrib.multiple = order_base_2(nvec); 695 + 696 + msg.address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE; 697 + msg.address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR); 698 + msg.data = hwirq; 699 + 700 + pci_write_msi_msg(irq, &msg); 701 + 702 + return 0; 703 + } 704 + 705 + static void rcar_msi_teardown_irq(struct msi_controller *chip, unsigned int irq) 706 + { 707 + struct rcar_msi *msi = to_rcar_msi(chip); 708 + struct irq_data *d = irq_get_irq_data(irq); 709 + 710 + rcar_msi_free(msi, d->hwirq); 711 + } 712 + 713 + static struct irq_chip rcar_msi_irq_chip = { 714 + .name = "R-Car PCIe MSI", 715 + .irq_enable = pci_msi_unmask_irq, 716 + .irq_disable = pci_msi_mask_irq, 717 + .irq_mask = pci_msi_mask_irq, 718 + .irq_unmask = pci_msi_unmask_irq, 719 + }; 720 + 721 + static int rcar_msi_map(struct irq_domain *domain, unsigned int irq, 722 + irq_hw_number_t hwirq) 723 + { 724 + 
irq_set_chip_and_handler(irq, &rcar_msi_irq_chip, handle_simple_irq); 725 + irq_set_chip_data(irq, domain->host_data); 726 + 727 + return 0; 728 + } 729 + 730 + static const struct irq_domain_ops msi_domain_ops = { 731 + .map = rcar_msi_map, 732 + }; 733 + 734 + static void rcar_pcie_unmap_msi(struct rcar_pcie_host *host) 735 + { 736 + struct rcar_msi *msi = &host->msi; 737 + int i, irq; 738 + 739 + for (i = 0; i < INT_PCI_MSI_NR; i++) { 740 + irq = irq_find_mapping(msi->domain, i); 741 + if (irq > 0) 742 + irq_dispose_mapping(irq); 743 + } 744 + 745 + irq_domain_remove(msi->domain); 746 + } 747 + 748 + static void rcar_pcie_hw_enable_msi(struct rcar_pcie_host *host) 749 + { 750 + struct rcar_pcie *pcie = &host->pcie; 751 + struct rcar_msi *msi = &host->msi; 752 + unsigned long base; 753 + 754 + /* setup MSI data target */ 755 + base = virt_to_phys((void *)msi->pages); 756 + 757 + rcar_pci_write_reg(pcie, lower_32_bits(base) | MSIFE, PCIEMSIALR); 758 + rcar_pci_write_reg(pcie, upper_32_bits(base), PCIEMSIAUR); 759 + 760 + /* enable all MSI interrupts */ 761 + rcar_pci_write_reg(pcie, 0xffffffff, PCIEMSIIER); 762 + } 763 + 764 + static int rcar_pcie_enable_msi(struct rcar_pcie_host *host) 765 + { 766 + struct rcar_pcie *pcie = &host->pcie; 767 + struct device *dev = pcie->dev; 768 + struct rcar_msi *msi = &host->msi; 769 + int err, i; 770 + 771 + mutex_init(&msi->lock); 772 + 773 + msi->chip.dev = dev; 774 + msi->chip.setup_irq = rcar_msi_setup_irq; 775 + msi->chip.setup_irqs = rcar_msi_setup_irqs; 776 + msi->chip.teardown_irq = rcar_msi_teardown_irq; 777 + 778 + msi->domain = irq_domain_add_linear(dev->of_node, INT_PCI_MSI_NR, 779 + &msi_domain_ops, &msi->chip); 780 + if (!msi->domain) { 781 + dev_err(dev, "failed to create IRQ domain\n"); 782 + return -ENOMEM; 783 + } 784 + 785 + for (i = 0; i < INT_PCI_MSI_NR; i++) 786 + irq_create_mapping(msi->domain, i); 787 + 788 + /* Two irqs are for MSI, but they are also used for non-MSI irqs */ 789 + err = 
devm_request_irq(dev, msi->irq1, rcar_pcie_msi_irq, 790 + IRQF_SHARED | IRQF_NO_THREAD, 791 + rcar_msi_irq_chip.name, host); 792 + if (err < 0) { 793 + dev_err(dev, "failed to request IRQ: %d\n", err); 794 + goto err; 795 + } 796 + 797 + err = devm_request_irq(dev, msi->irq2, rcar_pcie_msi_irq, 798 + IRQF_SHARED | IRQF_NO_THREAD, 799 + rcar_msi_irq_chip.name, host); 800 + if (err < 0) { 801 + dev_err(dev, "failed to request IRQ: %d\n", err); 802 + goto err; 803 + } 804 + 805 + /* setup MSI data target */ 806 + msi->pages = __get_free_pages(GFP_KERNEL, 0); 807 + rcar_pcie_hw_enable_msi(host); 808 + 809 + return 0; 810 + 811 + err: 812 + rcar_pcie_unmap_msi(host); 813 + return err; 814 + } 815 + 816 + static void rcar_pcie_teardown_msi(struct rcar_pcie_host *host) 817 + { 818 + struct rcar_pcie *pcie = &host->pcie; 819 + struct rcar_msi *msi = &host->msi; 820 + 821 + /* Disable all MSI interrupts */ 822 + rcar_pci_write_reg(pcie, 0, PCIEMSIIER); 823 + 824 + /* Disable address decoding of the MSI interrupt, MSIFE */ 825 + rcar_pci_write_reg(pcie, 0, PCIEMSIALR); 826 + 827 + free_pages(msi->pages, 0); 828 + 829 + rcar_pcie_unmap_msi(host); 830 + } 831 + 832 + static int rcar_pcie_get_resources(struct rcar_pcie_host *host) 833 + { 834 + struct rcar_pcie *pcie = &host->pcie; 835 + struct device *dev = pcie->dev; 836 + struct resource res; 837 + int err, i; 838 + 839 + host->phy = devm_phy_optional_get(dev, "pcie"); 840 + if (IS_ERR(host->phy)) 841 + return PTR_ERR(host->phy); 842 + 843 + err = of_address_to_resource(dev->of_node, 0, &res); 844 + if (err) 845 + return err; 846 + 847 + pcie->base = devm_ioremap_resource(dev, &res); 848 + if (IS_ERR(pcie->base)) 849 + return PTR_ERR(pcie->base); 850 + 851 + host->bus_clk = devm_clk_get(dev, "pcie_bus"); 852 + if (IS_ERR(host->bus_clk)) { 853 + dev_err(dev, "cannot get pcie bus clock\n"); 854 + return PTR_ERR(host->bus_clk); 855 + } 856 + 857 + i = irq_of_parse_and_map(dev->of_node, 0); 858 + if (!i) { 859 + dev_err(dev, 
		"cannot get platform resources for msi interrupt\n");
+		err = -ENOENT;
+		goto err_irq1;
+	}
+	host->msi.irq1 = i;
+
+	i = irq_of_parse_and_map(dev->of_node, 1);
+	if (!i) {
+		dev_err(dev, "cannot get platform resources for msi interrupt\n");
+		err = -ENOENT;
+		goto err_irq2;
+	}
+	host->msi.irq2 = i;
+
+	return 0;
+
+err_irq2:
+	irq_dispose_mapping(host->msi.irq1);
+err_irq1:
+	return err;
+}
+
+static int rcar_pcie_inbound_ranges(struct rcar_pcie *pcie,
+				    struct resource_entry *entry,
+				    int *index)
+{
+	u64 restype = entry->res->flags;
+	u64 cpu_addr = entry->res->start;
+	u64 cpu_end = entry->res->end;
+	u64 pci_addr = entry->res->start - entry->offset;
+	u32 flags = LAM_64BIT | LAR_ENABLE;
+	u64 mask;
+	u64 size = resource_size(entry->res);
+	int idx = *index;
+
+	if (restype & IORESOURCE_PREFETCH)
+		flags |= LAM_PREFETCH;
+
+	while (cpu_addr < cpu_end) {
+		if (idx >= MAX_NR_INBOUND_MAPS - 1) {
+			dev_err(pcie->dev, "Failed to map inbound regions!\n");
+			return -EINVAL;
+		}
+		/*
+		 * If the size of the range is larger than the alignment of
+		 * the start address, we have to use multiple entries to
+		 * perform the mapping.
+		 */
+		if (cpu_addr > 0) {
+			unsigned long nr_zeros = __ffs64(cpu_addr);
+			u64 alignment = 1ULL << nr_zeros;
+
+			size = min(size, alignment);
+		}
+		/* Hardware supports max 4GiB inbound region */
+		size = min(size, 1ULL << 32);
+
+		mask = roundup_pow_of_two(size) - 1;
+		mask &= ~0xf;
+
+		rcar_pcie_set_inbound(pcie, cpu_addr, pci_addr,
+				      lower_32_bits(mask) | flags, idx, true);
+
+		pci_addr += size;
+		cpu_addr += size;
+		idx += 2;
+	}
+	*index = idx;
+
+	return 0;
+}
+
+static int rcar_pcie_parse_map_dma_ranges(struct rcar_pcie_host *host)
+{
+	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(host);
+	struct resource_entry *entry;
+	int index = 0, err = 0;
+
+	resource_list_for_each_entry(entry, &bridge->dma_ranges) {
+		err = rcar_pcie_inbound_ranges(&host->pcie, entry, &index);
+		if (err)
+			break;
+	}
+
+	return err;
+}
+
+static const struct of_device_id rcar_pcie_of_match[] = {
+	{ .compatible = "renesas,pcie-r8a7779",
+	  .data = rcar_pcie_phy_init_h1 },
+	{ .compatible = "renesas,pcie-r8a7790",
+	  .data = rcar_pcie_phy_init_gen2 },
+	{ .compatible = "renesas,pcie-r8a7791",
+	  .data = rcar_pcie_phy_init_gen2 },
+	{ .compatible = "renesas,pcie-rcar-gen2",
+	  .data = rcar_pcie_phy_init_gen2 },
+	{ .compatible = "renesas,pcie-r8a7795",
+	  .data = rcar_pcie_phy_init_gen3 },
+	{ .compatible = "renesas,pcie-rcar-gen3",
+	  .data = rcar_pcie_phy_init_gen3 },
+	{},
+};
+
+static int rcar_pcie_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct rcar_pcie_host *host;
+	struct rcar_pcie *pcie;
+	u32 data;
+	int err;
+	struct pci_host_bridge *bridge;
+
+	bridge = pci_alloc_host_bridge(sizeof(*host));
+	if (!bridge)
+		return -ENOMEM;
+
+	host = pci_host_bridge_priv(bridge);
+	pcie = &host->pcie;
+	pcie->dev = dev;
+	platform_set_drvdata(pdev, host);
+
+	err = pci_parse_request_of_pci_ranges(dev, &host->resources,
+					      &bridge->dma_ranges, NULL);
+	if (err)
+		goto err_free_bridge;
+
+	pm_runtime_enable(pcie->dev);
+	err = pm_runtime_get_sync(pcie->dev);
+	if (err < 0) {
+		dev_err(pcie->dev, "pm_runtime_get_sync failed\n");
+		goto err_pm_disable;
+	}
+
+	err = rcar_pcie_get_resources(host);
+	if (err < 0) {
+		dev_err(dev, "failed to request resources: %d\n", err);
+		goto err_pm_put;
+	}
+
+	err = clk_prepare_enable(host->bus_clk);
+	if (err) {
+		dev_err(dev, "failed to enable bus clock: %d\n", err);
+		goto err_unmap_msi_irqs;
+	}
+
+	err = rcar_pcie_parse_map_dma_ranges(host);
+	if (err)
+		goto err_clk_disable;
+
+	host->phy_init_fn = of_device_get_match_data(dev);
+	err = host->phy_init_fn(host);
+	if (err) {
+		dev_err(dev, "failed to init PCIe PHY\n");
+		goto err_clk_disable;
+	}
+
+	/* Failure to get a link might just be that no cards are inserted */
+	if (rcar_pcie_hw_init(pcie)) {
+		dev_info(dev, "PCIe link down\n");
+		err = -ENODEV;
+		goto err_phy_shutdown;
+	}
+
+	data = rcar_pci_read_reg(pcie, MACSR);
+	dev_info(dev, "PCIe x%d: link up\n", (data >> 20) & 0x3f);
+
+	if (IS_ENABLED(CONFIG_PCI_MSI)) {
+		err = rcar_pcie_enable_msi(host);
+		if (err < 0) {
+			dev_err(dev,
+				"failed to enable MSI support: %d\n",
+				err);
+			goto err_phy_shutdown;
+		}
+	}
+
+	err = rcar_pcie_enable(host);
+	if (err)
+		goto err_msi_teardown;
+
+	return 0;
+
+err_msi_teardown:
+	if (IS_ENABLED(CONFIG_PCI_MSI))
+		rcar_pcie_teardown_msi(host);
+
+err_phy_shutdown:
+	if (host->phy) {
+		phy_power_off(host->phy);
+		phy_exit(host->phy);
+	}
+
+err_clk_disable:
+	clk_disable_unprepare(host->bus_clk);
+
+err_unmap_msi_irqs:
+	irq_dispose_mapping(host->msi.irq2);
+	irq_dispose_mapping(host->msi.irq1);
+
+err_pm_put:
+	pm_runtime_put(dev);
+
+err_pm_disable:
+	pm_runtime_disable(dev);
+	pci_free_resource_list(&host->resources);
+
+err_free_bridge:
+	pci_free_host_bridge(bridge);
+
+	return err;
+}
+
+static int __maybe_unused rcar_pcie_resume(struct device *dev)
+{
+	struct rcar_pcie_host *host = dev_get_drvdata(dev);
+	struct rcar_pcie *pcie = &host->pcie;
+	unsigned int data;
+	int err;
+
+	err = rcar_pcie_parse_map_dma_ranges(host);
+	if (err)
+		return 0;
+
+	/* Failure to get a link might just be that no cards are inserted */
+	err = host->phy_init_fn(host);
+	if (err) {
+		dev_info(dev, "PCIe link down\n");
+		return 0;
+	}
+
+	data = rcar_pci_read_reg(pcie, MACSR);
+	dev_info(dev, "PCIe x%d: link up\n", (data >> 20) & 0x3f);
+
+	/* Enable MSI */
+	if (IS_ENABLED(CONFIG_PCI_MSI))
+		rcar_pcie_hw_enable_msi(host);
+
+	rcar_pcie_hw_enable(host);
+
+	return 0;
+}
+
+static int rcar_pcie_resume_noirq(struct device *dev)
+{
+	struct rcar_pcie_host *host = dev_get_drvdata(dev);
+	struct rcar_pcie *pcie = &host->pcie;
+
+	if (rcar_pci_read_reg(pcie, PMSR) &&
+	    !(rcar_pci_read_reg(pcie, PCIETCTLR) & DL_DOWN))
+		return 0;
+
+	/* Re-establish the PCIe link */
+	rcar_pci_write_reg(pcie, MACCTLR_INIT_VAL, MACCTLR);
+	rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR);
+	return rcar_pcie_wait_for_dl(pcie);
+}
+
+static const struct dev_pm_ops rcar_pcie_pm_ops = {
+	SET_SYSTEM_SLEEP_PM_OPS(NULL, rcar_pcie_resume)
+	.resume_noirq = rcar_pcie_resume_noirq,
+};
+
+static struct platform_driver rcar_pcie_driver = {
+	.driver = {
+		.name = "rcar-pcie",
+		.of_match_table = rcar_pcie_of_match,
+		.pm = &rcar_pcie_pm_ops,
+		.suppress_bind_attrs = true,
+	},
+	.probe = rcar_pcie_probe,
+};
+builtin_platform_driver(rcar_pcie_driver);
+48 -1177
drivers/pci/controller/pcie-rcar.c
···
 // SPDX-License-Identifier: GPL-2.0
 /*
  * PCIe driver for Renesas R-Car SoCs
- * Copyright (C) 2014 Renesas Electronics Europe Ltd
- *
- * Based on:
- *  arch/sh/drivers/pci/pcie-sh7786.c
- *  arch/sh/drivers/pci/ops-sh7786.c
- * Copyright (C) 2009 - 2011  Paul Mundt
+ * Copyright (C) 2014-2020 Renesas Electronics Europe Ltd
  *
  * Author: Phil Edworthy <phil.edworthy@renesas.com>
  */

-#include <linux/bitops.h>
-#include <linux/clk.h>
 #include <linux/delay.h>
-#include <linux/interrupt.h>
-#include <linux/irq.h>
-#include <linux/irqdomain.h>
-#include <linux/kernel.h>
-#include <linux/init.h>
-#include <linux/msi.h>
-#include <linux/of_address.h>
-#include <linux/of_irq.h>
-#include <linux/of_pci.h>
-#include <linux/of_platform.h>
 #include <linux/pci.h>
-#include <linux/phy/phy.h>
-#include <linux/platform_device.h>
-#include <linux/pm_runtime.h>
-#include <linux/slab.h>

-#define PCIECAR			0x000010
-#define PCIECCTLR		0x000018
-#define  CONFIG_SEND_ENABLE	BIT(31)
-#define  TYPE0			(0 << 8)
-#define  TYPE1			BIT(8)
-#define PCIECDR			0x000020
-#define PCIEMSR			0x000028
-#define PCIEINTXR		0x000400
-#define PCIEPHYSR		0x0007f0
-#define  PHYRDY			BIT(0)
-#define PCIEMSITXR		0x000840
+#include "pcie-rcar.h"

-/* Transfer control */
-#define PCIETCTLR		0x02000
-#define  DL_DOWN		BIT(3)
-#define  CFINIT			BIT(0)
-#define PCIETSTR		0x02004
-#define  DATA_LINK_ACTIVE	BIT(0)
-#define PCIEERRFR		0x02020
-#define  UNSUPPORTED_REQUEST	BIT(4)
-#define PCIEMSIFR		0x02044
-#define PCIEMSIALR		0x02048
-#define  MSIFE			BIT(0)
-#define PCIEMSIAUR		0x0204c
-#define PCIEMSIIER		0x02050
-
-/* root port address */
-#define PCIEPRAR(x)		(0x02080 + ((x) * 0x4))
-
-/* local address reg & mask */
-#define PCIELAR(x)		(0x02200 + ((x) * 0x20))
-#define PCIELAMR(x)		(0x02208 + ((x) * 0x20))
-#define  LAM_PREFETCH		BIT(3)
-#define  LAM_64BIT		BIT(2)
-#define  LAR_ENABLE		BIT(1)
-
-/* PCIe address reg & mask */
-#define PCIEPALR(x)		(0x03400 + ((x) * 0x20))
-#define PCIEPAUR(x)		(0x03404 + ((x) * 0x20))
-#define PCIEPAMR(x)		(0x03408 + ((x) * 0x20))
-#define PCIEPTCTLR(x)		(0x0340c + ((x) * 0x20))
-#define  PAR_ENABLE		BIT(31)
-#define  IO_SPACE		BIT(8)
-
-/* Configuration */
-#define PCICONF(x)		(0x010000 + ((x) * 0x4))
-#define PMCAP(x)		(0x010040 + ((x) * 0x4))
-#define EXPCAP(x)		(0x010070 + ((x) * 0x4))
-#define VCCAP(x)		(0x010100 + ((x) * 0x4))
-
-/* link layer */
-#define IDSETR1			0x011004
-#define TLCTLR			0x011048
-#define MACSR			0x011054
-#define  SPCHGFIN		BIT(4)
-#define  SPCHGFAIL		BIT(6)
-#define  SPCHGSUC		BIT(7)
-#define  LINK_SPEED		(0xf << 16)
-#define  LINK_SPEED_2_5GTS	(1 << 16)
-#define  LINK_SPEED_5_0GTS	(2 << 16)
-#define MACCTLR			0x011058
-#define  MACCTLR_NFTS_MASK	GENMASK(23, 16)	/* The name is from SH7786 */
-#define  SPEED_CHANGE		BIT(24)
-#define  SCRAMBLE_DISABLE	BIT(27)
-#define  LTSMDIS		BIT(31)
-#define  MACCTLR_INIT_VAL	(LTSMDIS | MACCTLR_NFTS_MASK)
-#define PMSR			0x01105c
-#define MACS2R			0x011078
-#define MACCGSPSETR		0x011084
-#define  SPCNGRSN		BIT(31)
-
-/* R-Car H1 PHY */
-#define H1_PCIEPHYADRR		0x04000c
-#define  WRITE_CMD		BIT(16)
-#define  PHY_ACK		BIT(24)
-#define  RATE_POS		12
-#define  LANE_POS		8
-#define  ADR_POS		0
-#define H1_PCIEPHYDOUTR		0x040014
-
-/* R-Car Gen2 PHY */
-#define GEN2_PCIEPHYADDR	0x780
-#define GEN2_PCIEPHYDATA	0x784
-#define GEN2_PCIEPHYCTRL	0x78c
-
-#define INT_PCI_MSI_NR		32
-
-#define RCONF(x)		(PCICONF(0) + (x))
-#define RPMCAP(x)		(PMCAP(0) + (x))
-#define REXPCAP(x)		(EXPCAP(0) + (x))
-#define RVCCAP(x)		(VCCAP(0) + (x))
-
-#define PCIE_CONF_BUS(b)	(((b) & 0xff) << 24)
-#define PCIE_CONF_DEV(d)	(((d) & 0x1f) << 19)
-#define PCIE_CONF_FUNC(f)	(((f) & 0x7) << 16)
-
-#define RCAR_PCI_MAX_RESOURCES	4
-#define MAX_NR_INBOUND_MAPS	6
-
-struct rcar_msi {
-	DECLARE_BITMAP(used, INT_PCI_MSI_NR);
-	struct irq_domain *domain;
-	struct msi_controller chip;
-	unsigned long pages;
-	struct mutex lock;
-	int irq1;
-	int irq2;
-};
-
-static inline struct rcar_msi *to_rcar_msi(struct msi_controller *chip)
-{
-	return container_of(chip, struct rcar_msi, chip);
-}
-
-/* Structure representing the PCIe interface */
-struct rcar_pcie {
-	struct device		*dev;
-	struct phy		*phy;
-	void __iomem		*base;
-	struct list_head	resources;
-	int			root_bus_nr;
-	struct clk		*bus_clk;
-	struct rcar_msi		msi;
-};
-
-static void rcar_pci_write_reg(struct rcar_pcie *pcie, u32 val,
-			       unsigned int reg)
+void rcar_pci_write_reg(struct rcar_pcie *pcie, u32 val, unsigned int reg)
 {
 	writel(val, pcie->base + reg);
 }

-static u32 rcar_pci_read_reg(struct rcar_pcie *pcie, unsigned int reg)
+u32 rcar_pci_read_reg(struct rcar_pcie *pcie, unsigned int reg)
 {
 	return readl(pcie->base + reg);
 }

-enum {
-	RCAR_PCI_ACCESS_READ,
-	RCAR_PCI_ACCESS_WRITE,
-};
-
-static void rcar_rmw32(struct rcar_pcie *pcie, int where, u32 mask, u32 data)
+void rcar_rmw32(struct rcar_pcie *pcie, int where, u32 mask, u32 data)
 {
 	unsigned int shift = BITS_PER_BYTE * (where & 3);
 	u32 val = rcar_pci_read_reg(pcie, where & ~3);
···
 	rcar_pci_write_reg(pcie, val, where & ~3);
 }

-static u32 rcar_read_conf(struct rcar_pcie *pcie, int where)
-{
-	unsigned int shift = BITS_PER_BYTE * (where & 3);
-	u32 val = rcar_pci_read_reg(pcie, where & ~3);
-
-	return val >> shift;
-}
-
-/* Serialization is provided by 'pci_lock' in drivers/pci/access.c */
-static int rcar_pcie_config_access(struct rcar_pcie *pcie,
-		unsigned char access_type, struct pci_bus *bus,
-		unsigned int devfn, int where, u32 *data)
-{
-	unsigned int dev, func, reg, index;
-
-	dev = PCI_SLOT(devfn);
-	func = PCI_FUNC(devfn);
-	reg = where & ~3;
-	index = reg / 4;
-
-	/*
-	 * While each channel has its own memory-mapped extended config
-	 * space, it's generally only accessible when in endpoint mode.
-	 * When in root complex mode, the controller is unable to target
-	 * itself with either type 0 or type 1 accesses, and indeed, any
-	 * controller initiated target transfer to its own config space
-	 * result in a completer abort.
-	 *
-	 * Each channel effectively only supports a single device, but as
-	 * the same channel <-> device access works for any PCI_SLOT()
-	 * value, we cheat a bit here and bind the controller's config
-	 * space to devfn 0 in order to enable self-enumeration. In this
-	 * case the regular ECAR/ECDR path is sidelined and the mangled
-	 * config access itself is initiated as an internal bus transaction.
-	 */
-	if (pci_is_root_bus(bus)) {
-		if (dev != 0)
-			return PCIBIOS_DEVICE_NOT_FOUND;
-
-		if (access_type == RCAR_PCI_ACCESS_READ) {
-			*data = rcar_pci_read_reg(pcie, PCICONF(index));
-		} else {
-			/* Keep an eye out for changes to the root bus number */
-			if (pci_is_root_bus(bus) && (reg == PCI_PRIMARY_BUS))
-				pcie->root_bus_nr = *data & 0xff;
-
-			rcar_pci_write_reg(pcie, *data, PCICONF(index));
-		}
-
-		return PCIBIOS_SUCCESSFUL;
-	}
-
-	if (pcie->root_bus_nr < 0)
-		return PCIBIOS_DEVICE_NOT_FOUND;
-
-	/* Clear errors */
-	rcar_pci_write_reg(pcie, rcar_pci_read_reg(pcie, PCIEERRFR), PCIEERRFR);
-
-	/* Set the PIO address */
-	rcar_pci_write_reg(pcie, PCIE_CONF_BUS(bus->number) |
-		PCIE_CONF_DEV(dev) | PCIE_CONF_FUNC(func) | reg, PCIECAR);
-
-	/* Enable the configuration access */
-	if (bus->parent->number == pcie->root_bus_nr)
-		rcar_pci_write_reg(pcie, CONFIG_SEND_ENABLE | TYPE0, PCIECCTLR);
-	else
-		rcar_pci_write_reg(pcie, CONFIG_SEND_ENABLE | TYPE1, PCIECCTLR);
-
-	/* Check for errors */
-	if (rcar_pci_read_reg(pcie, PCIEERRFR) & UNSUPPORTED_REQUEST)
-		return PCIBIOS_DEVICE_NOT_FOUND;
-
-	/* Check for master and target aborts */
-	if (rcar_read_conf(pcie, RCONF(PCI_STATUS)) &
-		(PCI_STATUS_REC_MASTER_ABORT | PCI_STATUS_REC_TARGET_ABORT))
-		return PCIBIOS_DEVICE_NOT_FOUND;
-
-	if (access_type == RCAR_PCI_ACCESS_READ)
-		*data = rcar_pci_read_reg(pcie, PCIECDR);
-	else
-		rcar_pci_write_reg(pcie, *data, PCIECDR);
-
-	/* Disable the configuration access */
-	rcar_pci_write_reg(pcie, 0, PCIECCTLR);
-
-	return PCIBIOS_SUCCESSFUL;
-}
-
-static int rcar_pcie_read_conf(struct pci_bus *bus, unsigned int devfn,
-			       int where, int size, u32 *val)
-{
-	struct rcar_pcie *pcie = bus->sysdata;
-	int ret;
-
-	ret = rcar_pcie_config_access(pcie, RCAR_PCI_ACCESS_READ,
-				      bus, devfn, where, val);
-	if (ret != PCIBIOS_SUCCESSFUL) {
-		*val = 0xffffffff;
-		return ret;
-	}
-
-	if (size == 1)
-		*val = (*val >> (BITS_PER_BYTE * (where & 3))) & 0xff;
-	else if (size == 2)
-		*val = (*val >> (BITS_PER_BYTE * (where & 2))) & 0xffff;
-
-	dev_dbg(&bus->dev, "pcie-config-read: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08x\n",
-		bus->number, devfn, where, size, *val);
-
-	return ret;
-}
-
-/* Serialization is provided by 'pci_lock' in drivers/pci/access.c */
-static int rcar_pcie_write_conf(struct pci_bus *bus, unsigned int devfn,
-				int where, int size, u32 val)
-{
-	struct rcar_pcie *pcie = bus->sysdata;
-	unsigned int shift;
-	u32 data;
-	int ret;
-
-	ret = rcar_pcie_config_access(pcie, RCAR_PCI_ACCESS_READ,
-				      bus, devfn, where, &data);
-	if (ret != PCIBIOS_SUCCESSFUL)
-		return ret;
-
-	dev_dbg(&bus->dev, "pcie-config-write: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08x\n",
-		bus->number, devfn, where, size, val);
-
-	if (size == 1) {
-		shift = BITS_PER_BYTE * (where & 3);
-		data &= ~(0xff << shift);
-		data |= ((val & 0xff) << shift);
-	} else if (size == 2) {
-		shift = BITS_PER_BYTE * (where & 2);
-		data &= ~(0xffff << shift);
-		data |= ((val & 0xffff) << shift);
-	} else
-		data = val;
-
-	ret = rcar_pcie_config_access(pcie, RCAR_PCI_ACCESS_WRITE,
-				      bus, devfn, where, &data);
-
-	return ret;
-}
-
-static struct pci_ops rcar_pcie_ops = {
-	.read	= rcar_pcie_read_conf,
-	.write	= rcar_pcie_write_conf,
-};
-
-static void rcar_pcie_setup_window(int win, struct rcar_pcie *pcie,
-				   struct resource *res)
-{
-	/* Setup PCIe address space mappings for each resource */
-	resource_size_t size;
-	resource_size_t res_start;
-	u32 mask;
-
-	rcar_pci_write_reg(pcie, 0x00000000, PCIEPTCTLR(win));
-
-	/*
-	 * The PAMR mask is calculated in units of 128Bytes, which
-	 * keeps things pretty simple.
-	 */
-	size = resource_size(res);
-	mask = (roundup_pow_of_two(size) / SZ_128) - 1;
-	rcar_pci_write_reg(pcie, mask << 7, PCIEPAMR(win));
-
-	if (res->flags & IORESOURCE_IO)
-		res_start = pci_pio_to_address(res->start);
-	else
-		res_start = res->start;
-
-	rcar_pci_write_reg(pcie, upper_32_bits(res_start), PCIEPAUR(win));
-	rcar_pci_write_reg(pcie, lower_32_bits(res_start) & ~0x7F,
-			   PCIEPALR(win));
-
-	/* First resource is for IO */
-	mask = PAR_ENABLE;
-	if (res->flags & IORESOURCE_IO)
-		mask |= IO_SPACE;
-
-	rcar_pci_write_reg(pcie, mask, PCIEPTCTLR(win));
-}
-
-static int rcar_pcie_setup(struct list_head *resource, struct rcar_pcie *pci)
-{
-	struct resource_entry *win;
-	int i = 0;
-
-	/* Setup PCI resources */
-	resource_list_for_each_entry(win, &pci->resources) {
-		struct resource *res = win->res;
-
-		if (!res->flags)
-			continue;
-
-		switch (resource_type(res)) {
-		case IORESOURCE_IO:
-		case IORESOURCE_MEM:
-			rcar_pcie_setup_window(i, pci, res);
-			i++;
-			break;
-		case IORESOURCE_BUS:
-			pci->root_bus_nr = res->start;
-			break;
-		default:
-			continue;
-		}
-
-		pci_add_resource(resource, res);
-	}
-
-	return 1;
-}
-
-static void rcar_pcie_force_speedup(struct rcar_pcie *pcie)
-{
-	struct device *dev = pcie->dev;
-	unsigned int timeout = 1000;
-	u32 macsr;
-
-	if ((rcar_pci_read_reg(pcie, MACS2R) & LINK_SPEED) != LINK_SPEED_5_0GTS)
-		return;
-
-	if (rcar_pci_read_reg(pcie, MACCTLR) & SPEED_CHANGE) {
-		dev_err(dev, "Speed change already in progress\n");
-		return;
-	}
-
-	macsr = rcar_pci_read_reg(pcie, MACSR);
-	if ((macsr & LINK_SPEED) == LINK_SPEED_5_0GTS)
-		goto done;
-
-	/* Set target link speed to 5.0 GT/s */
-	rcar_rmw32(pcie, EXPCAP(12), PCI_EXP_LNKSTA_CLS,
-		   PCI_EXP_LNKSTA_CLS_5_0GB);
-
-	/* Set speed change reason as intentional factor */
-	rcar_rmw32(pcie, MACCGSPSETR, SPCNGRSN, 0);
-
-	/* Clear SPCHGFIN, SPCHGSUC, and SPCHGFAIL */
-	if (macsr & (SPCHGFIN | SPCHGSUC | SPCHGFAIL))
-		rcar_pci_write_reg(pcie, macsr, MACSR);
-
-	/* Start link speed change */
-	rcar_rmw32(pcie, MACCTLR, SPEED_CHANGE, SPEED_CHANGE);
-
-	while (timeout--) {
-		macsr = rcar_pci_read_reg(pcie, MACSR);
-		if (macsr & SPCHGFIN) {
-			/* Clear the interrupt bits */
-			rcar_pci_write_reg(pcie, macsr, MACSR);
-
-			if (macsr & SPCHGFAIL)
-				dev_err(dev, "Speed change failed\n");
-
-			goto done;
-		}
-
-		msleep(1);
-	}
-
-	dev_err(dev, "Speed change timed out\n");
-
-done:
-	dev_info(dev, "Current link speed is %s GT/s\n",
-		 (macsr & LINK_SPEED) == LINK_SPEED_5_0GTS ? "5" : "2.5");
-}
-
-static int rcar_pcie_enable(struct rcar_pcie *pcie)
-{
-	struct device *dev = pcie->dev;
-	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
-	struct pci_bus *bus, *child;
-	int ret;
-
-	/* Try setting 5 GT/s link speed */
-	rcar_pcie_force_speedup(pcie);
-
-	rcar_pcie_setup(&bridge->windows, pcie);
-
-	pci_add_flags(PCI_REASSIGN_ALL_BUS);
-
-	bridge->dev.parent = dev;
-	bridge->sysdata = pcie;
-	bridge->busnr = pcie->root_bus_nr;
-	bridge->ops = &rcar_pcie_ops;
-	bridge->map_irq = of_irq_parse_and_map_pci;
-	bridge->swizzle_irq = pci_common_swizzle;
-	if (IS_ENABLED(CONFIG_PCI_MSI))
-		bridge->msi = &pcie->msi.chip;
-
-	ret = pci_scan_root_bus_bridge(bridge);
-	if (ret < 0)
-		return ret;
-
-	bus = bridge->bus;
-
-	pci_bus_size_bridges(bus);
-	pci_bus_assign_resources(bus);
-
-	list_for_each_entry(child, &bus->children, node)
-		pcie_bus_configure_settings(child);
-
-	pci_bus_add_devices(bus);
-
-	return 0;
-}
-
-static int phy_wait_for_ack(struct rcar_pcie *pcie)
-{
-	struct device *dev = pcie->dev;
-	unsigned int timeout = 100;
-
-	while (timeout--) {
-		if (rcar_pci_read_reg(pcie, H1_PCIEPHYADRR) & PHY_ACK)
-			return 0;
-
-		udelay(100);
-	}
-
-	dev_err(dev, "Access to PCIe phy timed out\n");
-
-	return -ETIMEDOUT;
-}
-
-static void phy_write_reg(struct rcar_pcie *pcie,
-			  unsigned int rate, u32 addr,
-			  unsigned int lane, u32 data)
-{
-	u32 phyaddr;
-
-	phyaddr = WRITE_CMD |
-		((rate & 1) << RATE_POS) |
-		((lane & 0xf) << LANE_POS) |
-		((addr & 0xff) << ADR_POS);
-
-	/* Set write data */
-	rcar_pci_write_reg(pcie, data, H1_PCIEPHYDOUTR);
-	rcar_pci_write_reg(pcie, phyaddr, H1_PCIEPHYADRR);
-
-	/* Ignore errors as they will be dealt with if the data link is down */
-	phy_wait_for_ack(pcie);
-
-	/* Clear command */
-	rcar_pci_write_reg(pcie, 0, H1_PCIEPHYDOUTR);
-	rcar_pci_write_reg(pcie, 0, H1_PCIEPHYADRR);
-
-	/* Ignore errors as they will be dealt with if the data link is down */
-	phy_wait_for_ack(pcie);
-}
-
-static int rcar_pcie_wait_for_phyrdy(struct rcar_pcie *pcie)
+int rcar_pcie_wait_for_phyrdy(struct rcar_pcie *pcie)
 {
 	unsigned int timeout = 10;
···
 	return -ETIMEDOUT;
 }

-static int rcar_pcie_wait_for_dl(struct rcar_pcie *pcie)
+int rcar_pcie_wait_for_dl(struct rcar_pcie *pcie)
 {
 	unsigned int timeout = 10000;
···
 	return -ETIMEDOUT;
 }

-static int rcar_pcie_hw_init(struct rcar_pcie *pcie)
+void rcar_pcie_set_outbound(struct rcar_pcie *pcie, int win,
+			    struct resource_entry *window)
 {
-	int err;
+	/* Setup PCIe address space mappings for each resource */
+	struct resource *res = window->res;
+	resource_size_t res_start;
+	resource_size_t size;
+	u32 mask;

-	/* Begin initialization */
-	rcar_pci_write_reg(pcie, 0, PCIETCTLR);
-
-	/* Set mode */
-	rcar_pci_write_reg(pcie, 1, PCIEMSR);
-
-	err = rcar_pcie_wait_for_phyrdy(pcie);
-	if (err)
-		return err;
+	rcar_pci_write_reg(pcie, 0x00000000, PCIEPTCTLR(win));

 	/*
-	 * Initial header for port config space is type 1, set the device
-	 * class to match. Hardware takes care of propagating the IDSETR
-	 * settings, so there is no need to bother with a quirk.
+	 * The PAMR mask is calculated in units of 128Bytes, which
+	 * keeps things pretty simple.
 	 */
-	rcar_pci_write_reg(pcie, PCI_CLASS_BRIDGE_PCI << 16, IDSETR1);
-
-	/*
-	 * Setup Secondary Bus Number & Subordinate Bus Number, even though
-	 * they aren't used, to avoid bridge being detected as broken.
-	 */
-	rcar_rmw32(pcie, RCONF(PCI_SECONDARY_BUS), 0xff, 1);
-	rcar_rmw32(pcie, RCONF(PCI_SUBORDINATE_BUS), 0xff, 1);
-
-	/* Initialize default capabilities. */
-	rcar_rmw32(pcie, REXPCAP(0), 0xff, PCI_CAP_ID_EXP);
-	rcar_rmw32(pcie, REXPCAP(PCI_EXP_FLAGS),
-		   PCI_EXP_FLAGS_TYPE, PCI_EXP_TYPE_ROOT_PORT << 4);
-	rcar_rmw32(pcie, RCONF(PCI_HEADER_TYPE), 0x7f,
-		   PCI_HEADER_TYPE_BRIDGE);
-
-	/* Enable data link layer active state reporting */
-	rcar_rmw32(pcie, REXPCAP(PCI_EXP_LNKCAP), PCI_EXP_LNKCAP_DLLLARC,
-		   PCI_EXP_LNKCAP_DLLLARC);
-
-	/* Write out the physical slot number = 0 */
-	rcar_rmw32(pcie, REXPCAP(PCI_EXP_SLTCAP), PCI_EXP_SLTCAP_PSN, 0);
-
-	/* Set the completion timer timeout to the maximum 50ms. */
-	rcar_rmw32(pcie, TLCTLR + 1, 0x3f, 50);
-
-	/* Terminate list of capabilities (Next Capability Offset=0) */
-	rcar_rmw32(pcie, RVCCAP(0), 0xfff00000, 0);
-
-	/* Enable MSI */
-	if (IS_ENABLED(CONFIG_PCI_MSI))
-		rcar_pci_write_reg(pcie, 0x801f0000, PCIEMSITXR);
-
-	rcar_pci_write_reg(pcie, MACCTLR_INIT_VAL, MACCTLR);
-
-	/* Finish initialization - establish a PCI Express link */
-	rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR);
-
-	/* This will timeout if we don't have a link. */
-	err = rcar_pcie_wait_for_dl(pcie);
-	if (err)
-		return err;
-
-	/* Enable INTx interrupts */
-	rcar_rmw32(pcie, PCIEINTXR, 0, 0xF << 8);
-
-	wmb();
-
-	return 0;
-}
-
-static int rcar_pcie_phy_init_h1(struct rcar_pcie *pcie)
-{
-	/* Initialize the phy */
-	phy_write_reg(pcie, 0, 0x42, 0x1, 0x0EC34191);
-	phy_write_reg(pcie, 1, 0x42, 0x1, 0x0EC34180);
-	phy_write_reg(pcie, 0, 0x43, 0x1, 0x00210188);
-	phy_write_reg(pcie, 1, 0x43, 0x1, 0x00210188);
-	phy_write_reg(pcie, 0, 0x44, 0x1, 0x015C0014);
-	phy_write_reg(pcie, 1, 0x44, 0x1, 0x015C0014);
-	phy_write_reg(pcie, 1, 0x4C, 0x1, 0x786174A0);
-	phy_write_reg(pcie, 1, 0x4D, 0x1, 0x048000BB);
-	phy_write_reg(pcie, 0, 0x51, 0x1, 0x079EC062);
-	phy_write_reg(pcie, 0, 0x52, 0x1, 0x20000000);
-	phy_write_reg(pcie, 1, 0x52, 0x1, 0x20000000);
-	phy_write_reg(pcie, 1, 0x56, 0x1, 0x00003806);
-
-	phy_write_reg(pcie, 0, 0x60, 0x1, 0x004B03A5);
-	phy_write_reg(pcie, 0, 0x64, 0x1, 0x3F0F1F0F);
-	phy_write_reg(pcie, 0, 0x66, 0x1, 0x00008000);
-
-	return 0;
-}
-
-static int rcar_pcie_phy_init_gen2(struct rcar_pcie *pcie)
-{
-	/*
-	 * These settings come from the R-Car Series, 2nd Generation User's
-	 * Manual, section 50.3.1 (2) Initialization of the physical layer.
-	 */
-	rcar_pci_write_reg(pcie, 0x000f0030, GEN2_PCIEPHYADDR);
-	rcar_pci_write_reg(pcie, 0x00381203, GEN2_PCIEPHYDATA);
-	rcar_pci_write_reg(pcie, 0x00000001, GEN2_PCIEPHYCTRL);
-	rcar_pci_write_reg(pcie, 0x00000006, GEN2_PCIEPHYCTRL);
-
-	rcar_pci_write_reg(pcie, 0x000f0054, GEN2_PCIEPHYADDR);
-	/* The following value is for DC connection, no termination resistor */
-	rcar_pci_write_reg(pcie, 0x13802007, GEN2_PCIEPHYDATA);
-	rcar_pci_write_reg(pcie, 0x00000001, GEN2_PCIEPHYCTRL);
-	rcar_pci_write_reg(pcie, 0x00000006, GEN2_PCIEPHYCTRL);
-
-	return 0;
-}
-
-static int rcar_pcie_phy_init_gen3(struct rcar_pcie *pcie)
-{
-	int err;
-
-	err = phy_init(pcie->phy);
-	if (err)
-		return err;
-
-	err = phy_power_on(pcie->phy);
-	if (err)
-		phy_exit(pcie->phy);
-
-	return err;
-}
-
-static int rcar_msi_alloc(struct rcar_msi *chip)
-{
-	int msi;
-
-	mutex_lock(&chip->lock);
-
-	msi = find_first_zero_bit(chip->used, INT_PCI_MSI_NR);
-	if (msi < INT_PCI_MSI_NR)
-		set_bit(msi, chip->used);
+	size = resource_size(res);
+	if (size > 128)
+		mask = (roundup_pow_of_two(size) / SZ_128) - 1;
 	else
-		msi = -ENOSPC;
+		mask = 0x0;
+	rcar_pci_write_reg(pcie, mask << 7, PCIEPAMR(win));

-	mutex_unlock(&chip->lock);
+	if (res->flags & IORESOURCE_IO)
+		res_start = pci_pio_to_address(res->start) - window->offset;
+	else
+		res_start = res->start - window->offset;

-	return msi;
+	rcar_pci_write_reg(pcie, upper_32_bits(res_start), PCIEPAUR(win));
+	rcar_pci_write_reg(pcie, lower_32_bits(res_start) & ~0x7F,
+			   PCIEPALR(win));
+
+	/* First resource is for IO */
+	mask = PAR_ENABLE;
+	if (res->flags & IORESOURCE_IO)
+		mask |= IO_SPACE;
+
+	rcar_pci_write_reg(pcie, mask, PCIEPTCTLR(win));
 }

-static int rcar_msi_alloc_region(struct rcar_msi *chip, int no_irqs)
+void rcar_pcie_set_inbound(struct rcar_pcie *pcie, u64 cpu_addr,
+			   u64 pci_addr, u64 flags, int idx, bool host)
 {
-	int msi;
-
-	mutex_lock(&chip->lock);
-	msi = bitmap_find_free_region(chip->used, INT_PCI_MSI_NR,
-				      order_base_2(no_irqs));
-	mutex_unlock(&chip->lock);
-
-	return msi;
-}
-
-static void rcar_msi_free(struct rcar_msi *chip, unsigned long irq)
-{
-	mutex_lock(&chip->lock);
-	clear_bit(irq, chip->used);
-	mutex_unlock(&chip->lock);
-}
-
-static irqreturn_t rcar_pcie_msi_irq(int irq, void *data)
-{
-	struct rcar_pcie *pcie = data;
-	struct rcar_msi *msi = &pcie->msi;
-	struct device *dev = pcie->dev;
-	unsigned long reg;
-
-	reg = rcar_pci_read_reg(pcie, PCIEMSIFR);
-
-	/* MSI & INTx share an interrupt - we only handle MSI here */
-	if (!reg)
-		return IRQ_NONE;
-
-	while (reg) {
-		unsigned int index = find_first_bit(&reg, 32);
-		unsigned int msi_irq;
-
-		/* clear the interrupt */
-		rcar_pci_write_reg(pcie, 1 << index, PCIEMSIFR);
-
-		msi_irq = irq_find_mapping(msi->domain, index);
-		if (msi_irq) {
-			if (test_bit(index, msi->used))
-				generic_handle_irq(msi_irq);
-			else
-				dev_info(dev, "unhandled MSI\n");
-		} else {
-			/* Unknown MSI, just clear it */
-			dev_dbg(dev, "unexpected MSI\n");
-		}
-
-		/* see if there's any more pending in this vector */
-		reg = rcar_pci_read_reg(pcie, PCIEMSIFR);
-	}
-
-	return IRQ_HANDLED;
-}
-
-static int rcar_msi_setup_irq(struct msi_controller *chip, struct pci_dev *pdev,
-			      struct msi_desc *desc)
-{
-	struct rcar_msi *msi = to_rcar_msi(chip);
-	struct rcar_pcie *pcie = container_of(chip, struct rcar_pcie, msi.chip);
-	struct msi_msg msg;
-	unsigned int irq;
-	int hwirq;
-
-	hwirq = rcar_msi_alloc(msi);
-	if (hwirq < 0)
-		return hwirq;
-
-	irq = irq_find_mapping(msi->domain, hwirq);
-	if (!irq) {
-		rcar_msi_free(msi, hwirq);
-		return -EINVAL;
-	}
-
-	irq_set_msi_desc(irq, desc);
-
-	msg.address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE;
-	msg.address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR);
-	msg.data = hwirq;
-
-	pci_write_msi_msg(irq, &msg);
-
-	return 0;
-}
-
-static int rcar_msi_setup_irqs(struct msi_controller *chip,
-			       struct pci_dev *pdev, int nvec, int type)
-{
-	struct rcar_pcie *pcie = container_of(chip, struct rcar_pcie, msi.chip);
-	struct rcar_msi *msi = to_rcar_msi(chip);
-	struct msi_desc *desc;
-	struct msi_msg msg;
-	unsigned int irq;
-	int hwirq;
-	int i;
-
-	/* MSI-X interrupts are not supported */
-	if (type == PCI_CAP_ID_MSIX)
-		return -EINVAL;
-
-	WARN_ON(!list_is_singular(&pdev->dev.msi_list));
-	desc = list_entry(pdev->dev.msi_list.next, struct msi_desc, list);
-
-	hwirq = rcar_msi_alloc_region(msi, nvec);
-	if (hwirq < 0)
-		return -ENOSPC;
-
-	irq = irq_find_mapping(msi->domain, hwirq);
-	if (!irq)
-		return -ENOSPC;
-
-	for (i = 0; i < nvec; i++) {
-		/*
-		 * irq_create_mapping() called from rcar_pcie_probe() pre-
-		 * allocates descs, so there is no need to allocate descs here.
-		 * We can therefore assume that if irq_find_mapping() above
-		 * returns non-zero, then the descs are also successfully
-		 * allocated.
-		 */
-		if (irq_set_msi_desc_off(irq, i, desc)) {
-			/* TODO: clear */
-			return -EINVAL;
-		}
-	}
-
-	desc->nvec_used = nvec;
-	desc->msi_attrib.multiple = order_base_2(nvec);
-
-	msg.address_lo = rcar_pci_read_reg(pcie, PCIEMSIALR) & ~MSIFE;
-	msg.address_hi = rcar_pci_read_reg(pcie, PCIEMSIAUR);
-	msg.data = hwirq;
-
-	pci_write_msi_msg(irq, &msg);
-
-	return 0;
-}
-
-static void rcar_msi_teardown_irq(struct msi_controller *chip, unsigned int irq)
-{
-	struct rcar_msi *msi = to_rcar_msi(chip);
-	struct irq_data *d = irq_get_irq_data(irq);
-
-	rcar_msi_free(msi, d->hwirq);
-}
-
-static struct irq_chip rcar_msi_irq_chip = {
-	.name = "R-Car PCIe MSI",
-	.irq_enable = pci_msi_unmask_irq,
-	.irq_disable = pci_msi_mask_irq,
-	.irq_mask = pci_msi_mask_irq,
-	.irq_unmask = pci_msi_unmask_irq,
-};
-
-static int rcar_msi_map(struct irq_domain *domain, unsigned int irq,
-			irq_hw_number_t hwirq)
-{
-	irq_set_chip_and_handler(irq, &rcar_msi_irq_chip, handle_simple_irq);
-	irq_set_chip_data(irq, domain->host_data);
-
-	return 0;
-}
-
-static const struct irq_domain_ops msi_domain_ops = {
-	.map = rcar_msi_map,
-};
-
-static void rcar_pcie_unmap_msi(struct rcar_pcie *pcie)
-{
-	struct rcar_msi *msi = &pcie->msi;
-	int i, irq;
-
-	for (i = 0; i < INT_PCI_MSI_NR; i++) {
-		irq = irq_find_mapping(msi->domain, i);
-		if (irq > 0)
-			irq_dispose_mapping(irq);
-	}
-
-	irq_domain_remove(msi->domain);
-}
-
-static int rcar_pcie_enable_msi(struct rcar_pcie *pcie)
-{
-	struct device *dev = pcie->dev;
-	struct rcar_msi *msi = &pcie->msi;
-	phys_addr_t base;
-	int err, i;
-
-	mutex_init(&msi->lock);
-
-	msi->chip.dev = dev;
-	msi->chip.setup_irq = rcar_msi_setup_irq;
-	msi->chip.setup_irqs = rcar_msi_setup_irqs;
-	msi->chip.teardown_irq = rcar_msi_teardown_irq;
-
-	msi->domain = irq_domain_add_linear(dev->of_node, INT_PCI_MSI_NR,
-					    &msi_domain_ops, &msi->chip);
-	if (!msi->domain) {
-		dev_err(dev, "failed to create IRQ domain\n");
-		return -ENOMEM;
-	}
-
-	for (i = 0; i < INT_PCI_MSI_NR; i++)
-		irq_create_mapping(msi->domain, i);
-
-	/* Two irqs are for MSI, but they are also used for non-MSI irqs */
-	err = devm_request_irq(dev, msi->irq1, rcar_pcie_msi_irq,
-			       IRQF_SHARED | IRQF_NO_THREAD,
-			       rcar_msi_irq_chip.name, pcie);
-	if (err < 0) {
-		dev_err(dev, "failed to request IRQ: %d\n", err);
-		goto err;
-	}
-
-	err = devm_request_irq(dev, msi->irq2, rcar_pcie_msi_irq,
-			       IRQF_SHARED | IRQF_NO_THREAD,
-			       rcar_msi_irq_chip.name, pcie);
-	if (err < 0) {
-		dev_err(dev, "failed to request IRQ: %d\n", err);
-		goto err;
-	}
-
-	/* setup MSI data target */
-	msi->pages = __get_free_pages(GFP_KERNEL, 0);
-	if (!msi->pages) {
-		err = -ENOMEM;
-		goto err;
-	}
-	base = virt_to_phys((void *)msi->pages);
-
-	rcar_pci_write_reg(pcie, lower_32_bits(base) | MSIFE, PCIEMSIALR);
-	rcar_pci_write_reg(pcie, upper_32_bits(base), PCIEMSIAUR);
-
-	/* enable all MSI interrupts */
-	rcar_pci_write_reg(pcie, 0xffffffff, PCIEMSIIER);
-
-	return 0;
-
-err:
-	rcar_pcie_unmap_msi(pcie);
-	return err;
-}
-
-static void rcar_pcie_teardown_msi(struct rcar_pcie *pcie)
-{
-	struct rcar_msi *msi = &pcie->msi;
-
-	/* Disable all MSI interrupts */
-	rcar_pci_write_reg(pcie, 0, PCIEMSIIER);
-
-	/* Disable address decoding of the MSI interrupt, MSIFE */
-	rcar_pci_write_reg(pcie, 0, PCIEMSIALR);
-
-	free_pages(msi->pages, 0);
-
-	rcar_pcie_unmap_msi(pcie);
-}
-
-static int rcar_pcie_get_resources(struct rcar_pcie *pcie)
-{
-	struct device *dev = pcie->dev;
-	struct resource res;
-	int err, i;
-
-	pcie->phy = devm_phy_optional_get(dev, "pcie");
-	if (IS_ERR(pcie->phy))
-		return PTR_ERR(pcie->phy);
-
-	err = of_address_to_resource(dev->of_node, 0, &res);
-	if (err)
-		return err;
-
-	pcie->base = devm_ioremap_resource(dev, &res);
-	if (IS_ERR(pcie->base))
-		return PTR_ERR(pcie->base);
-
-	pcie->bus_clk = devm_clk_get(dev, "pcie_bus");
-	if (IS_ERR(pcie->bus_clk)) {
-		dev_err(dev, "cannot get pcie bus clock\n");
-		return PTR_ERR(pcie->bus_clk);
-	}
-
-	i = irq_of_parse_and_map(dev->of_node, 0);
-	if (!i) {
-		dev_err(dev, "cannot get platform resources for msi interrupt\n");
-		err = -ENOENT;
-		goto err_irq1;
-	}
-	pcie->msi.irq1 = i;
-
-	i = irq_of_parse_and_map(dev->of_node, 1);
-	if (!i) {
-		dev_err(dev, "cannot get platform resources for msi interrupt\n");
-		err = -ENOENT;
-		goto err_irq2;
-	}
-	pcie->msi.irq2 = i;
-
-	return 0;
-
-err_irq2:
-	irq_dispose_mapping(pcie->msi.irq1);
-err_irq1:
-	return err;
-}
-
-static int rcar_pcie_inbound_ranges(struct rcar_pcie *pcie,
-				    struct resource_entry *entry,
-				    int *index)
-{
-	u64 restype = entry->res->flags;
-	u64 cpu_addr = entry->res->start;
-	u64 cpu_end = entry->res->end;
-	u64 pci_addr = entry->res->start - entry->offset;
-	u32 flags = LAM_64BIT | LAR_ENABLE;
-	u64 mask;
-	u64 size = resource_size(entry->res);
-	int idx = *index;
-
-	if (restype & IORESOURCE_PREFETCH)
-		flags |= LAM_PREFETCH;
-
-	while (cpu_addr < cpu_end) {
-		if (idx >= MAX_NR_INBOUND_MAPS - 1) {
-			dev_err(pcie->dev, "Failed to map inbound regions!\n");
-			return -EINVAL;
-		}
-		/*
-		 * If the size of the range is larger than the alignment of
-		 * the start address, we
have to use multiple entries to 539 - * perform the mapping. 540 - */ 541 - if (cpu_addr > 0) { 542 - unsigned long nr_zeros = __ffs64(cpu_addr); 543 - u64 alignment = 1ULL << nr_zeros; 544 - 545 - size = min(size, alignment); 546 - } 547 - /* Hardware supports max 4GiB inbound region */ 548 - size = min(size, 1ULL << 32); 549 - 550 - mask = roundup_pow_of_two(size) - 1; 551 - mask &= ~0xf; 552 - 553 - /* 554 - * Set up 64-bit inbound regions as the range parser doesn't 555 - * distinguish between 32 and 64-bit types. 556 - */ 609 + /* 610 + * Set up 64-bit inbound regions as the range parser doesn't 611 + * distinguish between 32 and 64-bit types. 612 + */ 613 + if (host) 557 614 rcar_pci_write_reg(pcie, lower_32_bits(pci_addr), 558 615 PCIEPRAR(idx)); 559 - rcar_pci_write_reg(pcie, lower_32_bits(cpu_addr), PCIELAR(idx)); 560 - rcar_pci_write_reg(pcie, lower_32_bits(mask) | flags, 561 - PCIELAMR(idx)); 616 + rcar_pci_write_reg(pcie, lower_32_bits(cpu_addr), PCIELAR(idx)); 617 + rcar_pci_write_reg(pcie, flags, PCIELAMR(idx)); 562 618 619 + if (host) 563 620 rcar_pci_write_reg(pcie, upper_32_bits(pci_addr), 564 621 PCIEPRAR(idx + 1)); 565 - rcar_pci_write_reg(pcie, upper_32_bits(cpu_addr), 566 - PCIELAR(idx + 1)); 567 - rcar_pci_write_reg(pcie, 0, PCIELAMR(idx + 1)); 568 - 569 - pci_addr += size; 570 - cpu_addr += size; 571 - idx += 2; 572 - } 573 - *index = idx; 574 - 575 - return 0; 622 + rcar_pci_write_reg(pcie, upper_32_bits(cpu_addr), PCIELAR(idx + 1)); 623 + rcar_pci_write_reg(pcie, 0, PCIELAMR(idx + 1)); 576 624 } 577 - 578 - static int rcar_pcie_parse_map_dma_ranges(struct rcar_pcie *pcie) 579 - { 580 - struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); 581 - struct resource_entry *entry; 582 - int index = 0, err = 0; 583 - 584 - resource_list_for_each_entry(entry, &bridge->dma_ranges) { 585 - err = rcar_pcie_inbound_ranges(pcie, entry, &index); 586 - if (err) 587 - break; 588 - } 589 - 590 - return err; 591 - } 592 - 593 - static const struct 
of_device_id rcar_pcie_of_match[] = { 594 - { .compatible = "renesas,pcie-r8a7779", 595 - .data = rcar_pcie_phy_init_h1 }, 596 - { .compatible = "renesas,pcie-r8a7790", 597 - .data = rcar_pcie_phy_init_gen2 }, 598 - { .compatible = "renesas,pcie-r8a7791", 599 - .data = rcar_pcie_phy_init_gen2 }, 600 - { .compatible = "renesas,pcie-rcar-gen2", 601 - .data = rcar_pcie_phy_init_gen2 }, 602 - { .compatible = "renesas,pcie-r8a7795", 603 - .data = rcar_pcie_phy_init_gen3 }, 604 - { .compatible = "renesas,pcie-rcar-gen3", 605 - .data = rcar_pcie_phy_init_gen3 }, 606 - {}, 607 - }; 608 - 609 - static int rcar_pcie_probe(struct platform_device *pdev) 610 - { 611 - struct device *dev = &pdev->dev; 612 - struct rcar_pcie *pcie; 613 - u32 data; 614 - int err; 615 - int (*phy_init_fn)(struct rcar_pcie *); 616 - struct pci_host_bridge *bridge; 617 - 618 - bridge = pci_alloc_host_bridge(sizeof(*pcie)); 619 - if (!bridge) 620 - return -ENOMEM; 621 - 622 - pcie = pci_host_bridge_priv(bridge); 623 - 624 - pcie->dev = dev; 625 - platform_set_drvdata(pdev, pcie); 626 - 627 - err = pci_parse_request_of_pci_ranges(dev, &pcie->resources, 628 - &bridge->dma_ranges, NULL); 629 - if (err) 630 - goto err_free_bridge; 631 - 632 - pm_runtime_enable(pcie->dev); 633 - err = pm_runtime_get_sync(pcie->dev); 634 - if (err < 0) { 635 - dev_err(pcie->dev, "pm_runtime_get_sync failed\n"); 636 - goto err_pm_disable; 637 - } 638 - 639 - err = rcar_pcie_get_resources(pcie); 640 - if (err < 0) { 641 - dev_err(dev, "failed to request resources: %d\n", err); 642 - goto err_pm_put; 643 - } 644 - 645 - err = clk_prepare_enable(pcie->bus_clk); 646 - if (err) { 647 - dev_err(dev, "failed to enable bus clock: %d\n", err); 648 - goto err_unmap_msi_irqs; 649 - } 650 - 651 - err = rcar_pcie_parse_map_dma_ranges(pcie); 652 - if (err) 653 - goto err_clk_disable; 654 - 655 - phy_init_fn = of_device_get_match_data(dev); 656 - err = phy_init_fn(pcie); 657 - if (err) { 658 - dev_err(dev, "failed to init PCIe PHY\n"); 659 
- goto err_clk_disable; 660 - } 661 - 662 - /* Failure to get a link might just be that no cards are inserted */ 663 - if (rcar_pcie_hw_init(pcie)) { 664 - dev_info(dev, "PCIe link down\n"); 665 - err = -ENODEV; 666 - goto err_phy_shutdown; 667 - } 668 - 669 - data = rcar_pci_read_reg(pcie, MACSR); 670 - dev_info(dev, "PCIe x%d: link up\n", (data >> 20) & 0x3f); 671 - 672 - if (IS_ENABLED(CONFIG_PCI_MSI)) { 673 - err = rcar_pcie_enable_msi(pcie); 674 - if (err < 0) { 675 - dev_err(dev, 676 - "failed to enable MSI support: %d\n", 677 - err); 678 - goto err_phy_shutdown; 679 - } 680 - } 681 - 682 - err = rcar_pcie_enable(pcie); 683 - if (err) 684 - goto err_msi_teardown; 685 - 686 - return 0; 687 - 688 - err_msi_teardown: 689 - if (IS_ENABLED(CONFIG_PCI_MSI)) 690 - rcar_pcie_teardown_msi(pcie); 691 - 692 - err_phy_shutdown: 693 - if (pcie->phy) { 694 - phy_power_off(pcie->phy); 695 - phy_exit(pcie->phy); 696 - } 697 - 698 - err_clk_disable: 699 - clk_disable_unprepare(pcie->bus_clk); 700 - 701 - err_unmap_msi_irqs: 702 - irq_dispose_mapping(pcie->msi.irq2); 703 - irq_dispose_mapping(pcie->msi.irq1); 704 - 705 - err_pm_put: 706 - pm_runtime_put(dev); 707 - 708 - err_pm_disable: 709 - pm_runtime_disable(dev); 710 - pci_free_resource_list(&pcie->resources); 711 - 712 - err_free_bridge: 713 - pci_free_host_bridge(bridge); 714 - 715 - return err; 716 - } 717 - 718 - static int rcar_pcie_resume_noirq(struct device *dev) 719 - { 720 - struct rcar_pcie *pcie = dev_get_drvdata(dev); 721 - 722 - if (rcar_pci_read_reg(pcie, PMSR) && 723 - !(rcar_pci_read_reg(pcie, PCIETCTLR) & DL_DOWN)) 724 - return 0; 725 - 726 - /* Re-establish the PCIe link */ 727 - rcar_pci_write_reg(pcie, MACCTLR_INIT_VAL, MACCTLR); 728 - rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR); 729 - return rcar_pcie_wait_for_dl(pcie); 730 - } 731 - 732 - static const struct dev_pm_ops rcar_pcie_pm_ops = { 733 - .resume_noirq = rcar_pcie_resume_noirq, 734 - }; 735 - 736 - static struct platform_driver rcar_pcie_driver 
= { 737 - .driver = { 738 - .name = "rcar-pcie", 739 - .of_match_table = rcar_pcie_of_match, 740 - .pm = &rcar_pcie_pm_ops, 741 - .suppress_bind_attrs = true, 742 - }, 743 - .probe = rcar_pcie_probe, 744 - }; 745 - builtin_platform_driver(rcar_pcie_driver);
+140
drivers/pci/controller/pcie-rcar.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * PCIe driver for Renesas R-Car SoCs
+ * Copyright (C) 2014-2020 Renesas Electronics Europe Ltd
+ *
+ * Author: Phil Edworthy <phil.edworthy@renesas.com>
+ */
+
+#ifndef _PCIE_RCAR_H
+#define _PCIE_RCAR_H
+
+#define PCIECAR			0x000010
+#define PCIECCTLR		0x000018
+#define  CONFIG_SEND_ENABLE	BIT(31)
+#define  TYPE0			(0 << 8)
+#define  TYPE1			BIT(8)
+#define PCIECDR			0x000020
+#define PCIEMSR			0x000028
+#define PCIEINTXR		0x000400
+#define  ASTINTX		BIT(16)
+#define PCIEPHYSR		0x0007f0
+#define  PHYRDY			BIT(0)
+#define PCIEMSITXR		0x000840
+
+/* Transfer control */
+#define PCIETCTLR		0x02000
+#define  DL_DOWN		BIT(3)
+#define  CFINIT			BIT(0)
+#define PCIETSTR		0x02004
+#define  DATA_LINK_ACTIVE	BIT(0)
+#define PCIEERRFR		0x02020
+#define  UNSUPPORTED_REQUEST	BIT(4)
+#define PCIEMSIFR		0x02044
+#define PCIEMSIALR		0x02048
+#define  MSIFE			BIT(0)
+#define PCIEMSIAUR		0x0204c
+#define PCIEMSIIER		0x02050
+
+/* root port address */
+#define PCIEPRAR(x)		(0x02080 + ((x) * 0x4))
+
+/* local address reg & mask */
+#define PCIELAR(x)		(0x02200 + ((x) * 0x20))
+#define PCIELAMR(x)		(0x02208 + ((x) * 0x20))
+#define  LAM_PREFETCH		BIT(3)
+#define  LAM_64BIT		BIT(2)
+#define  LAR_ENABLE		BIT(1)
+
+/* PCIe address reg & mask */
+#define PCIEPALR(x)		(0x03400 + ((x) * 0x20))
+#define PCIEPAUR(x)		(0x03404 + ((x) * 0x20))
+#define PCIEPAMR(x)		(0x03408 + ((x) * 0x20))
+#define PCIEPTCTLR(x)		(0x0340c + ((x) * 0x20))
+#define  PAR_ENABLE		BIT(31)
+#define  IO_SPACE		BIT(8)
+
+/* Configuration */
+#define PCICONF(x)		(0x010000 + ((x) * 0x4))
+#define  INTDIS			BIT(10)
+#define PMCAP(x)		(0x010040 + ((x) * 0x4))
+#define MSICAP(x)		(0x010050 + ((x) * 0x4))
+#define  MSICAP0_MSIE		BIT(16)
+#define  MSICAP0_MMESCAP_OFFSET	17
+#define  MSICAP0_MMESE_OFFSET	20
+#define  MSICAP0_MMESE_MASK	GENMASK(22, 20)
+#define EXPCAP(x)		(0x010070 + ((x) * 0x4))
+#define VCCAP(x)		(0x010100 + ((x) * 0x4))
+
+/* link layer */
+#define IDSETR0			0x011000
+#define IDSETR1			0x011004
+#define SUBIDSETR		0x011024
+#define TLCTLR			0x011048
+#define MACSR			0x011054
+#define  SPCHGFIN		BIT(4)
+#define  SPCHGFAIL		BIT(6)
+#define  SPCHGSUC		BIT(7)
+#define  LINK_SPEED		(0xf << 16)
+#define  LINK_SPEED_2_5GTS	(1 << 16)
+#define  LINK_SPEED_5_0GTS	(2 << 16)
+#define MACCTLR			0x011058
+#define  MACCTLR_NFTS_MASK	GENMASK(23, 16)	/* The name is from SH7786 */
+#define  SPEED_CHANGE		BIT(24)
+#define  SCRAMBLE_DISABLE	BIT(27)
+#define  LTSMDIS		BIT(31)
+#define  MACCTLR_INIT_VAL	(LTSMDIS | MACCTLR_NFTS_MASK)
+#define PMSR			0x01105c
+#define MACS2R			0x011078
+#define MACCGSPSETR		0x011084
+#define  SPCNGRSN		BIT(31)
+
+/* R-Car H1 PHY */
+#define H1_PCIEPHYADRR		0x04000c
+#define  WRITE_CMD		BIT(16)
+#define  PHY_ACK		BIT(24)
+#define  RATE_POS		12
+#define  LANE_POS		8
+#define  ADR_POS		0
+#define H1_PCIEPHYDOUTR		0x040014
+
+/* R-Car Gen2 PHY */
+#define GEN2_PCIEPHYADDR	0x780
+#define GEN2_PCIEPHYDATA	0x784
+#define GEN2_PCIEPHYCTRL	0x78c
+
+#define INT_PCI_MSI_NR		32
+
+#define RCONF(x)		(PCICONF(0) + (x))
+#define RPMCAP(x)		(PMCAP(0) + (x))
+#define REXPCAP(x)		(EXPCAP(0) + (x))
+#define RVCCAP(x)		(VCCAP(0) + (x))
+
+#define PCIE_CONF_BUS(b)	(((b) & 0xff) << 24)
+#define PCIE_CONF_DEV(d)	(((d) & 0x1f) << 19)
+#define PCIE_CONF_FUNC(f)	(((f) & 0x7) << 16)
+
+#define RCAR_PCI_MAX_RESOURCES	4
+#define MAX_NR_INBOUND_MAPS	6
+
+struct rcar_pcie {
+	struct device		*dev;
+	void __iomem		*base;
+};
+
+enum {
+	RCAR_PCI_ACCESS_READ,
+	RCAR_PCI_ACCESS_WRITE,
+};
+
+void rcar_pci_write_reg(struct rcar_pcie *pcie, u32 val, unsigned int reg);
+u32 rcar_pci_read_reg(struct rcar_pcie *pcie, unsigned int reg);
+void rcar_rmw32(struct rcar_pcie *pcie, int where, u32 mask, u32 data);
+int rcar_pcie_wait_for_phyrdy(struct rcar_pcie *pcie);
+int rcar_pcie_wait_for_dl(struct rcar_pcie *pcie);
+void rcar_pcie_set_outbound(struct rcar_pcie *pcie, int win,
+			    struct resource_entry *window);
+void rcar_pcie_set_inbound(struct rcar_pcie *pcie, u64 cpu_addr,
+			   u64 pci_addr, u64 flags, int idx, bool host);
+
+#endif
+1 -1
drivers/pci/controller/pcie-rockchip-ep.c
···
 	rockchip_pcie_write(rockchip, BIT(0), PCIE_CORE_PHY_FUNC_CFG);
 
 	err = pci_epc_mem_init(epc, rockchip->mem_res->start,
-			       resource_size(rockchip->mem_res));
+			       resource_size(rockchip->mem_res), PAGE_SIZE);
 	if (err < 0) {
 		dev_err(dev, "failed to initialize the memory space\n");
 		goto err_uninit_port;
+8 -5
drivers/pci/controller/pcie-tango.c
···
 	return ret;
 }
 
-static struct pci_ecam_ops smp8759_ecam_ops = {
+static const struct pci_ecam_ops smp8759_ecam_ops = {
 	.bus_shift	= 20,
 	.pci_ops	= {
 		.map_bus	= pci_ecam_map_bus,
···
 	writel_relaxed(0, pcie->base + SMP8759_ENABLE + offset);
 
 	virq = platform_get_irq(pdev, 1);
-	if (virq <= 0) {
+	if (virq < 0) {
 		dev_err(dev, "Failed to map IRQ\n");
-		return -ENXIO;
+		return virq;
 	}
 
 	irq_dom = irq_domain_create_linear(fwnode, MSI_MAX, &dom_ops, pcie);
···
 	spin_lock_init(&pcie->used_msi_lock);
 	irq_set_chained_handler_and_data(virq, tango_msi_isr, pcie);
 
-	return pci_host_common_probe(pdev, &smp8759_ecam_ops);
+	return pci_host_common_probe(pdev);
 }
 
 static const struct of_device_id tango_pcie_ids[] = {
-	{ .compatible = "sigma,smp8759-pcie" },
+	{
+		.compatible = "sigma,smp8759-pcie",
+		.data = &smp8759_ecam_ops,
+	},
 	{ },
 };
 
+4 -2
drivers/pci/controller/vmd.c
···
 		if (!membar2)
 			return -ENOMEM;
 		offset[0] = vmd->dev->resource[VMD_MEMBAR1].start -
-					readq(membar2 + MB2_SHADOW_OFFSET);
+				(readq(membar2 + MB2_SHADOW_OFFSET) &
+				 PCI_BASE_ADDRESS_MEM_MASK);
 		offset[1] = vmd->dev->resource[VMD_MEMBAR2].start -
-					readq(membar2 + MB2_SHADOW_OFFSET + 8);
+				(readq(membar2 + MB2_SHADOW_OFFSET + 8) &
+				 PCI_BASE_ADDRESS_MEM_MASK);
 		pci_iounmap(vmd->dev, membar2);
 	}
 }
+7 -3
drivers/pci/ecam.c
···
  */
 struct pci_config_window *pci_ecam_create(struct device *dev,
 		struct resource *cfgres, struct resource *busr,
-		struct pci_ecam_ops *ops)
+		const struct pci_ecam_ops *ops)
 {
 	struct pci_config_window *cfg;
 	unsigned int bus_range, bus_range_max, bsz;
···
 	pci_ecam_free(cfg);
 	return ERR_PTR(err);
 }
+EXPORT_SYMBOL_GPL(pci_ecam_create);
 
 void pci_ecam_free(struct pci_config_window *cfg)
 {
···
 	release_resource(&cfg->res);
 	kfree(cfg);
 }
+EXPORT_SYMBOL_GPL(pci_ecam_free);
 
 /*
  * Function to implement the pci_ops ->map_bus method
···
 	base = cfg->win + (busn << cfg->ops->bus_shift);
 	return base + (devfn << devfn_shift) + where;
 }
+EXPORT_SYMBOL_GPL(pci_ecam_map_bus);
 
 /* ECAM ops */
-struct pci_ecam_ops pci_generic_ecam_ops = {
+const struct pci_ecam_ops pci_generic_ecam_ops = {
 	.bus_shift	= 20,
 	.pci_ops	= {
 		.map_bus	= pci_ecam_map_bus,
···
 		.write		= pci_generic_config_write,
 	}
 };
+EXPORT_SYMBOL_GPL(pci_generic_ecam_ops);
 
 #if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
 /* ECAM ops for 32-bit access only (non-compliant) */
-struct pci_ecam_ops pci_32b_ops = {
+const struct pci_ecam_ops pci_32b_ops = {
 	.bus_shift	= 20,
 	.pci_ops	= {
 		.map_bus	= pci_ecam_map_bus,
+3
drivers/pci/endpoint/functions/pci-epf-test.c
···
  */
 static void pci_epf_test_clean_dma_chan(struct pci_epf_test *epf_test)
 {
+	if (!epf_test->dma_supported)
+		return;
+
 	dma_release_channel(epf_test->dma_chan);
 	epf_test->dma_chan = NULL;
 }
+145 -61
drivers/pci/endpoint/pci-epc-mem.c
···
 static int pci_epc_mem_get_order(struct pci_epc_mem *mem, size_t size)
 {
 	int order;
-	unsigned int page_shift = ilog2(mem->page_size);
+	unsigned int page_shift = ilog2(mem->window.page_size);
 
 	size--;
 	size >>= page_shift;
···
 }
 
 /**
- * __pci_epc_mem_init() - initialize the pci_epc_mem structure
+ * pci_epc_multi_mem_init() - initialize the pci_epc_mem structure
  * @epc: the EPC device that invoked pci_epc_mem_init
- * @phys_base: the physical address of the base
- * @size: the size of the address space
- * @page_size: size of each page
+ * @windows: pointer to windows supported by the device
+ * @num_windows: number of windows device supports
  *
  * Invoke to initialize the pci_epc_mem structure used by the
  * endpoint functions to allocate mapped PCI address.
  */
-int __pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_base, size_t size,
-		       size_t page_size)
+int pci_epc_multi_mem_init(struct pci_epc *epc,
+			   struct pci_epc_mem_window *windows,
+			   unsigned int num_windows)
 {
-	int ret;
-	struct pci_epc_mem *mem;
-	unsigned long *bitmap;
+	struct pci_epc_mem *mem = NULL;
+	unsigned long *bitmap = NULL;
 	unsigned int page_shift;
-	int pages;
+	size_t page_size;
 	int bitmap_size;
+	int pages;
+	int ret;
+	int i;
 
-	if (page_size < PAGE_SIZE)
-		page_size = PAGE_SIZE;
+	epc->num_windows = 0;
 
-	page_shift = ilog2(page_size);
-	pages = size >> page_shift;
-	bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
+	if (!windows || !num_windows)
+		return -EINVAL;
 
-	mem = kzalloc(sizeof(*mem), GFP_KERNEL);
-	if (!mem) {
-		ret = -ENOMEM;
-		goto err;
+	epc->windows = kcalloc(num_windows, sizeof(*epc->windows), GFP_KERNEL);
+	if (!epc->windows)
+		return -ENOMEM;
+
+	for (i = 0; i < num_windows; i++) {
+		page_size = windows[i].page_size;
+		if (page_size < PAGE_SIZE)
+			page_size = PAGE_SIZE;
+		page_shift = ilog2(page_size);
+		pages = windows[i].size >> page_shift;
+		bitmap_size = BITS_TO_LONGS(pages) * sizeof(long);
+
+		mem = kzalloc(sizeof(*mem), GFP_KERNEL);
+		if (!mem) {
+			ret = -ENOMEM;
+			i--;
+			goto err_mem;
+		}
+
+		bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+		if (!bitmap) {
+			ret = -ENOMEM;
+			kfree(mem);
+			i--;
+			goto err_mem;
+		}
+
+		mem->window.phys_base = windows[i].phys_base;
+		mem->window.size = windows[i].size;
+		mem->window.page_size = page_size;
+		mem->bitmap = bitmap;
+		mem->pages = pages;
+		mutex_init(&mem->lock);
+		epc->windows[i] = mem;
 	}
 
-	bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-	if (!bitmap) {
-		ret = -ENOMEM;
-		goto err_mem;
-	}
-
-	mem->bitmap = bitmap;
-	mem->phys_base = phys_base;
-	mem->page_size = page_size;
-	mem->pages = pages;
-	mem->size = size;
-	mutex_init(&mem->lock);
-
-	epc->mem = mem;
+	epc->mem = epc->windows[0];
+	epc->num_windows = num_windows;
 
 	return 0;
 
 err_mem:
-	kfree(mem);
+	for (; i >= 0; i--) {
+		mem = epc->windows[i];
+		kfree(mem->bitmap);
+		kfree(mem);
+	}
+	kfree(epc->windows);
 
-err:
 	return ret;
 }
-EXPORT_SYMBOL_GPL(__pci_epc_mem_init);
+EXPORT_SYMBOL_GPL(pci_epc_multi_mem_init);
+
+int pci_epc_mem_init(struct pci_epc *epc, phys_addr_t base,
+		     size_t size, size_t page_size)
+{
+	struct pci_epc_mem_window mem_window;
+
+	mem_window.phys_base = base;
+	mem_window.size = size;
+	mem_window.page_size = page_size;
+
+	return pci_epc_multi_mem_init(epc, &mem_window, 1);
+}
+EXPORT_SYMBOL_GPL(pci_epc_mem_init);
 
 /**
  * pci_epc_mem_exit() - cleanup the pci_epc_mem structure
···
  */
 void pci_epc_mem_exit(struct pci_epc *epc)
 {
-	struct pci_epc_mem *mem = epc->mem;
+	struct pci_epc_mem *mem;
+	int i;
 
+	if (!epc->num_windows)
+		return;
+
+	for (i = 0; i < epc->num_windows; i++) {
+		mem = epc->windows[i];
+		kfree(mem->bitmap);
+		kfree(mem);
+	}
+	kfree(epc->windows);
+
+	epc->windows = NULL;
 	epc->mem = NULL;
-	kfree(mem->bitmap);
-	kfree(mem);
+	epc->num_windows = 0;
 }
 EXPORT_SYMBOL_GPL(pci_epc_mem_exit);
···
 void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
 				     phys_addr_t *phys_addr, size_t size)
 {
-	int pageno;
 	void __iomem *virt_addr = NULL;
-	struct pci_epc_mem *mem = epc->mem;
-	unsigned int page_shift = ilog2(mem->page_size);
+	struct pci_epc_mem *mem;
+	unsigned int page_shift;
+	size_t align_size;
+	int pageno;
 	int order;
+	int i;
 
-	size = ALIGN(size, mem->page_size);
-	order = pci_epc_mem_get_order(mem, size);
-
-	mutex_lock(&mem->lock);
-	pageno = bitmap_find_free_region(mem->bitmap, mem->pages, order);
-	if (pageno < 0)
-		goto ret;
+	for (i = 0; i < epc->num_windows; i++) {
+		mem = epc->windows[i];
+		mutex_lock(&mem->lock);
+		align_size = ALIGN(size, mem->window.page_size);
+		order = pci_epc_mem_get_order(mem, align_size);
 
-	*phys_addr = mem->phys_base + ((phys_addr_t)pageno << page_shift);
-	virt_addr = ioremap(*phys_addr, size);
-	if (!virt_addr)
-		bitmap_release_region(mem->bitmap, pageno, order);
+		pageno = bitmap_find_free_region(mem->bitmap, mem->pages,
+						 order);
+		if (pageno >= 0) {
+			page_shift = ilog2(mem->window.page_size);
+			*phys_addr = mem->window.phys_base +
+				((phys_addr_t)pageno << page_shift);
+			virt_addr = ioremap(*phys_addr, align_size);
+			if (!virt_addr) {
+				bitmap_release_region(mem->bitmap,
+						      pageno, order);
+				mutex_unlock(&mem->lock);
+				continue;
+			}
+			mutex_unlock(&mem->lock);
+			return virt_addr;
+		}
+		mutex_unlock(&mem->lock);
+	}
 
-ret:
-	mutex_unlock(&mem->lock);
 	return virt_addr;
 }
 EXPORT_SYMBOL_GPL(pci_epc_mem_alloc_addr);
+
+static struct pci_epc_mem *pci_epc_get_matching_window(struct pci_epc *epc,
+						       phys_addr_t phys_addr)
+{
+	struct pci_epc_mem *mem;
+	int i;
+
+	for (i = 0; i < epc->num_windows; i++) {
+		mem = epc->windows[i];
+
+		if (phys_addr >= mem->window.phys_base &&
+		    phys_addr < (mem->window.phys_base + mem->window.size))
+			return mem;
+	}
+
+	return NULL;
+}
 
 /**
  * pci_epc_mem_free_addr() - free the allocated memory address
···
 void pci_epc_mem_free_addr(struct pci_epc *epc, phys_addr_t phys_addr,
 			   void __iomem *virt_addr, size_t size)
 {
+	struct pci_epc_mem *mem;
+	unsigned int page_shift;
+	size_t page_size;
 	int pageno;
-	struct pci_epc_mem *mem = epc->mem;
-	unsigned int page_shift = ilog2(mem->page_size);
 	int order;
 
+	mem = pci_epc_get_matching_window(epc, phys_addr);
+	if (!mem) {
+		pr_err("failed to get matching window\n");
+		return;
+	}
+
+	page_size = mem->window.page_size;
+	page_shift = ilog2(page_size);
 	iounmap(virt_addr);
-	pageno = (phys_addr - mem->phys_base) >> page_shift;
-	size = ALIGN(size, mem->page_size);
+	pageno = (phys_addr - mem->window.phys_base) >> page_shift;
+	size = ALIGN(size, page_size);
 	order = pci_epc_mem_get_order(mem, size);
 	mutex_lock(&mem->lock);
 	bitmap_release_region(mem->bitmap, pageno, order);
-2
drivers/pci/hotplug/pciehp.h
···
 #define MRL_SENS(ctrl)		((ctrl)->slot_cap & PCI_EXP_SLTCAP_MRLSP)
 #define ATTN_LED(ctrl)		((ctrl)->slot_cap & PCI_EXP_SLTCAP_AIP)
 #define PWR_LED(ctrl)		((ctrl)->slot_cap & PCI_EXP_SLTCAP_PIP)
-#define HP_SUPR_RM(ctrl)	((ctrl)->slot_cap & PCI_EXP_SLTCAP_HPS)
-#define EMI(ctrl)		((ctrl)->slot_cap & PCI_EXP_SLTCAP_EIP)
 #define NO_CMD_CMPL(ctrl)	((ctrl)->slot_cap & PCI_EXP_SLTCAP_NCCS)
 #define PSN(ctrl)		(((ctrl)->slot_cap & PCI_EXP_SLTCAP_PSN) >> 19)
+1 -1
drivers/pci/hotplug/rpaphp_core.c
···
  */
 int rpaphp_add_slot(struct device_node *dn)
 {
-	if (!dn->name || strcmp(dn->name, "pci"))
+	if (!of_node_name_eq(dn, "pci"))
 		return 0;
 
 	if (of_find_property(dn, "ibm,drc-info", NULL))
+1 -1
drivers/pci/hotplug/shpchp.h
···
 u8 shpchp_handle_presence_change(u8 hp_slot, struct controller *ctrl);
 u8 shpchp_handle_power_fault(u8 hp_slot, struct controller *ctrl);
 int shpchp_configure_device(struct slot *p_slot);
-int shpchp_unconfigure_device(struct slot *p_slot);
+void shpchp_unconfigure_device(struct slot *p_slot);
 void cleanup_slots(struct controller *ctrl);
 void shpchp_queue_pushbutton_work(struct work_struct *work);
 int shpc_init(struct controller *ctrl, struct pci_dev *pdev);
+1 -2
drivers/pci/hotplug/shpchp_ctrl.c
···
 	u8 hp_slot;
 	int rc;
 
-	if (shpchp_unconfigure_device(p_slot))
-		return(1);
+	shpchp_unconfigure_device(p_slot);
 
 	hp_slot = p_slot->device - ctrl->slot_device_offset;
 	p_slot = shpchp_find_slot(ctrl, hp_slot + ctrl->slot_device_offset);
+1 -4
drivers/pci/hotplug/shpchp_pci.c
···
 	return ret;
 }
 
-int shpchp_unconfigure_device(struct slot *p_slot)
+void shpchp_unconfigure_device(struct slot *p_slot)
 {
-	int rc = 0;
 	struct pci_bus *parent = p_slot->ctrl->pci_dev->subordinate;
 	struct pci_dev *dev, *temp;
 	struct controller *ctrl = p_slot->ctrl;
···
 	}
 
 	pci_unlock_rescan_remove();
-	return rc;
 }
-
+1 -1
drivers/pci/of.c
···
 	u32 max_link_speed;
 
 	if (of_property_read_u32(node, "max-link-speed", &max_link_speed) ||
-	    max_link_speed > 4)
+	    max_link_speed == 0 || max_link_speed > 4)
 		return -EINVAL;
 
 	return max_link_speed;
+2
drivers/pci/p2pdma.c
···
 } pci_p2pdma_whitelist[] = {
 	/* AMD ZEN */
 	{PCI_VENDOR_ID_AMD,	0x1450, 0},
+	{PCI_VENDOR_ID_AMD,	0x15d0, 0},
+	{PCI_VENDOR_ID_AMD,	0x1630, 0},
 
 	/* Intel Xeon E5/Core i7 */
 	{PCI_VENDOR_ID_INTEL,	0x3c00, REQ_SAME_HOST_BRIDGE},
+3 -3
drivers/pci/pci-acpi.c
···
 	 * Look for a special _DSD property for the root port and if it
 	 * is set we know the hierarchy behind it supports D3 just fine.
 	 */
-	root = pci_find_pcie_root_port(dev);
+	root = pcie_find_root_port(dev);
 	if (!root)
 		return false;
···
 		return;
 
 	obj = acpi_evaluate_dsm(ACPI_HANDLE(bus->bridge), &pci_acpi_dsm_guid, 3,
-				RESET_DELAY_DSM, NULL);
+				DSM_PCI_POWER_ON_RESET_DELAY, NULL);
 	if (!obj)
 		return;
···
 		pdev->d3cold_delay = 0;
 
 	obj = acpi_evaluate_dsm(handle, &pci_acpi_dsm_guid, 3,
-				FUNCTION_DELAY_DSM, NULL);
+				DSM_PCI_DEVICE_READINESS_DURATIONS, NULL);
 	if (!obj)
 		return;
+31 -30
drivers/pci/pci-bridge-emul.c
···
 #define PCI_CAP_PCIE_START	PCI_BRIDGE_CONF_END
 #define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2)
 
+/**
+ * struct pci_bridge_reg_behavior - register bits behaviors
+ * @ro: Read-Only bits
+ * @rw: Read-Write bits
+ * @w1c: Write-1-to-Clear bits
+ *
+ * Reads and Writes will be filtered by specified behavior. All other bits not
+ * declared are assumed 'Reserved' and will return 0 on reads, per PCIe 5.0:
+ * "Reserved register fields must be read only and must return 0 (all 0's for
+ * multi-bit fields) when read".
+ */
 struct pci_bridge_reg_behavior {
 	/* Read-only bits */
 	u32 ro;
···
 	/* Write-1-to-clear bits */
 	u32 w1c;
-
-	/* Reserved bits (hardwired to 0) */
-	u32 rsvd;
 };
 
 static const struct pci_bridge_reg_behavior pci_regs_behavior[] = {
···
 		       PCI_COMMAND_FAST_BACK) |
 		      (PCI_STATUS_CAP_LIST | PCI_STATUS_66MHZ |
 		       PCI_STATUS_FAST_BACK | PCI_STATUS_DEVSEL_MASK) << 16),
-		.rsvd = GENMASK(15, 10) | ((BIT(6) | GENMASK(3, 0)) << 16),
 		.w1c = PCI_STATUS_ERROR_BITS << 16,
 	},
 	[PCI_CLASS_REVISION / 4] = { .ro = ~0 },
···
 			  GENMASK(11, 8) | GENMASK(3, 0)),
 
 		.w1c = PCI_STATUS_ERROR_BITS << 16,
-
-		.rsvd = ((BIT(6) | GENMASK(4, 0)) << 16),
 	},
 
 	[PCI_MEMORY_BASE / 4] = {
···
 	[PCI_CAPABILITY_LIST / 4] = {
 		.ro = GENMASK(7, 0),
-		.rsvd = GENMASK(31, 8),
 	},
 
 	[PCI_ROM_ADDRESS1 / 4] = {
 		.rw = GENMASK(31, 11) | BIT(0),
-		.rsvd = GENMASK(10, 1),
 	},
 
 	/*
···
 		.ro = (GENMASK(15, 8) | ((PCI_BRIDGE_CTL_FAST_BACK) << 16)),
 
 		.w1c = BIT(10) << 16,
-
-		.rsvd = (GENMASK(15, 12) | BIT(4)) << 16,
 	},
 };
···
 		.rw = GENMASK(15, 0),
 
 		/*
-		 * Device status register has 4 bits W1C, then 2 bits
-		 * RO, the rest is reserved
+		 * Device status register has bits 6 and [3:0] W1C, [5:4] RO,
+		 * the rest is reserved
 		 */
-		.w1c = GENMASK(19, 16),
-		.ro = GENMASK(20, 19),
-		.rsvd = GENMASK(31, 21),
+		.w1c = (BIT(6) | GENMASK(3, 0)) << 16,
+		.ro = GENMASK(5, 4) << 16,
 	},
 
 	[PCI_EXP_LNKCAP / 4] = {
 		/* All bits are RO, except bit 23 which is reserved */
 		.ro = lower_32_bits(~BIT(23)),
-		.rsvd = BIT(23),
 	},
 
 	[PCI_EXP_LNKCTL / 4] = {
 		/*
-		 * Link control has bits [1:0] and [11:3] RW, the
-		 * other bits are reserved.
-		 * Link status has bits [13:0] RO, and bits [14:15]
+		 * Link control has bits [15:14], [11:3] and [1:0] RW, the
+		 * rest is reserved.
+		 *
+		 * Link status has bits [13:0] RO, and bits [15:14]
 		 * W1C.
 		 */
-		.rw = GENMASK(11, 3) | GENMASK(1, 0),
+		.rw = GENMASK(15, 14) | GENMASK(11, 3) | GENMASK(1, 0),
 		.ro = GENMASK(13, 0) << 16,
 		.w1c = GENMASK(15, 14) << 16,
-		.rsvd = GENMASK(15, 12) | BIT(2),
 	},
 
 	[PCI_EXP_SLTCAP / 4] = {
···
 	[PCI_EXP_SLTCTL / 4] = {
 		/*
-		 * Slot control has bits [12:0] RW, the rest is
+		 * Slot control has bits [14:0] RW, the rest is
 		 * reserved.
 		 *
-		 * Slot status has a mix of W1C and RO bits, as well
-		 * as reserved bits.
+		 * Slot status has bits 8 and [4:0] W1C, bits [7:5] RO, the
+		 * rest is reserved.
 		 */
-		.rw = GENMASK(12, 0),
+		.rw = GENMASK(14, 0),
 		.w1c = (PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD |
 			PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_PDC |
 			PCI_EXP_SLTSTA_CC | PCI_EXP_SLTSTA_DLLSC) << 16,
 		.ro = (PCI_EXP_SLTSTA_MRLSS | PCI_EXP_SLTSTA_PDS |
 		       PCI_EXP_SLTSTA_EIS) << 16,
-		.rsvd = GENMASK(15, 12) | (GENMASK(15, 9) << 16),
 	},
 
 	[PCI_EXP_RTCTL / 4] = {
···
 		 * Root control has bits [4:0] RW, the rest is
 		 * reserved.
 		 *
-		 * Root status has bit 0 RO, the rest is reserved.
+		 * Root capabilities has bit 0 RO, the rest is reserved.
 		 */
 		.rw = (PCI_EXP_RTCTL_SECEE | PCI_EXP_RTCTL_SENFEE |
 		       PCI_EXP_RTCTL_SEFEE | PCI_EXP_RTCTL_PMEIE |
 		       PCI_EXP_RTCTL_CRSSVE),
 		.ro = PCI_EXP_RTCAP_CRSVIS << 16,
-		.rsvd = GENMASK(15, 5) | (GENMASK(15, 1) << 16),
 	},
 
 	[PCI_EXP_RTSTA / 4] = {
+		/*
+		 * Root status has bits 17 and [15:0] RO, bit 16 W1C, the rest
+		 * is reserved.
+		 */
 		.ro = GENMASK(15, 0) | PCI_EXP_RTSTA_PENDING,
 		.w1c = PCI_EXP_RTSTA_PME,
-		.rsvd = GENMASK(31, 18),
 	},
 };
···
 	 * Make sure we never return any reserved bit with a value
 	 * different from 0.
 	 */
-	*value &= ~behavior[reg / 4].rsvd;
+	*value &= behavior[reg / 4].ro | behavior[reg / 4].rw |
+		  behavior[reg / 4].w1c;
 
 	if (size == 1)
 		*value = (*value >> (8 * (where & 3))) & 0xff;
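The pci-bridge-emul change above replaces the explicit `rsvd` mask with its complement: anything not declared RO, RW, or W1C is reserved and must read as 0. A minimal user-space sketch of that read filter, using a hypothetical `reg_behavior` struct and `filter_read()` helper in place of the kernel's types:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mirror of pci_bridge_reg_behavior after this patch:
 * reserved bits are simply whatever is not declared RO, RW, or W1C. */
struct reg_behavior {
	uint32_t ro;	/* read-only bits */
	uint32_t rw;	/* read-write bits */
	uint32_t w1c;	/* write-1-to-clear bits */
};

/* Filter a raw register value for a config read: bits with no declared
 * behavior are reserved and must return 0 (PCIe 5.0 reserved-field rule). */
static uint32_t filter_read(const struct reg_behavior *b, uint32_t raw)
{
	return raw & (b->ro | b->rw | b->w1c);
}
```

The old code had to keep `rsvd` consistent with the other three masks by hand; deriving it makes the "reserved reads as 0" invariant hold by construction.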
+2 -2
drivers/pci/pci-label.c
···
 		return -1;
 
 	obj = acpi_evaluate_dsm(handle, &pci_acpi_dsm_guid, 0x2,
-				DEVICE_LABEL_DSM, NULL);
+				DSM_PCI_DEVICE_NAME, NULL);
 	if (!obj)
 		return -1;
···
 		return false;
 
 	return !!acpi_check_dsm(handle, &pci_acpi_dsm_guid, 0x2,
-				1 << DEVICE_LABEL_DSM);
+				1 << DSM_PCI_DEVICE_NAME);
 }
 
 static umode_t acpi_index_string_exist(struct kobject *kobj,
+27 -37
drivers/pci/pci.c
···
 EXPORT_SYMBOL(pci_find_resource);
 
 /**
- * pci_find_pcie_root_port - return PCIe Root Port
- * @dev: PCI device to query
- *
- * Traverse up the parent chain and return the PCIe Root Port PCI Device
- * for a given PCI Device.
- */
-struct pci_dev *pci_find_pcie_root_port(struct pci_dev *dev)
-{
-	struct pci_dev *bridge, *highest_pcie_bridge = dev;
-
-	bridge = pci_upstream_bridge(dev);
-	while (bridge && pci_is_pcie(bridge)) {
-		highest_pcie_bridge = bridge;
-		bridge = pci_upstream_bridge(bridge);
-	}
-
-	if (pci_pcie_type(highest_pcie_bridge) != PCI_EXP_TYPE_ROOT_PORT)
-		return NULL;
-
-	return highest_pcie_bridge;
-}
-EXPORT_SYMBOL(pci_find_pcie_root_port);
-
-/**
  * pci_wait_for_pending - wait for @mask bit(s) to clear in status word @pos
  * @dev: the PCI device to operate on
  * @pos: config space offset of status word
···
 static inline bool platform_pci_bridge_d3(struct pci_dev *dev)
 {
-	return pci_platform_pm ? pci_platform_pm->bridge_d3(dev) : false;
+	if (pci_platform_pm && pci_platform_pm->bridge_d3)
+		return pci_platform_pm->bridge_d3(dev);
+	return false;
 }
 
 /**
···
 struct pci_saved_state {
 	u32 config_space[16];
-	struct pci_cap_saved_data cap[0];
+	struct pci_cap_saved_data cap[];
 };
 
 /**
···
 * pcie_wait_for_link_delay - Wait until link is active or inactive
 * @pdev: Bridge device
 * @active: waiting for active or inactive?
- * @delay: Delay to wait after link has become active (in ms)
+ * @delay: Delay to wait after link has become active (in ms). Specify %0
+ *	   for no delay.
 *
 * Use this to wait till link becomes active or inactive.
 */
···
 	/*
 	 * Some controllers might not implement link active reporting. In this
-	 * case, we wait for 1000 + 100 ms.
+	 * case, we wait for 1000 ms + any delay requested by the caller.
 	 */
 	if (!pdev->link_active_reporting) {
-		msleep(1100);
+		msleep(timeout + delay);
 		return true;
 	}
···
 		msleep(10);
 		timeout -= 10;
 	}
-	if (active && ret)
+	if (active && ret && delay)
 		msleep(delay);
 	else if (ret != active)
 		pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n",
···
 	if (!pcie_downstream_port(dev))
 		return;
 
-	if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
-		pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
-		msleep(delay);
-	} else {
-		pci_dbg(dev, "waiting %d ms for downstream link, after activation\n",
-			delay);
-		if (!pcie_wait_for_link_delay(dev, true, delay)) {
+	/*
+	 * Per PCIe r5.0, sec 6.6.1, for downstream ports that support
+	 * speeds > 5 GT/s, we must wait for link training to complete
+	 * before the mandatory delay.
+	 *
+	 * We can only tell when link training completes via DLL Link
+	 * Active, which is required for downstream ports that support
+	 * speeds > 5 GT/s (sec 7.5.3.6).  Unfortunately some common
+	 * devices do not implement Link Active reporting even when it's
+	 * required, so we'll check for that directly instead of checking
+	 * the supported link speed.  We assume devices without Link Active
+	 * reporting can train in 100 ms regardless of speed.
+	 */
+	if (dev->link_active_reporting) {
+		pci_dbg(dev, "waiting for link to train\n");
+		if (!pcie_wait_for_link_delay(dev, true, 0)) {
 			/* Did not train, no need to wait any further */
 			return;
 		}
 	}
+	pci_dbg(child, "waiting %d ms to become accessible\n", delay);
+	msleep(delay);
 
 	if (!pci_device_is_present(child)) {
 		pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
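The pci.c change reworks the wait: poll DLL Link Active up to a timeout, then apply the caller's delay only if the link actually trained and the delay is nonzero. A small sketch of that loop, with a simulated clock (`struct sim`, `sleep_ms()`, `link_active()` are stand-ins invented here for `msleep()` and the DLLLA status bit):

```c
#include <assert.h>
#include <stdbool.h>

struct sim {
	int now_ms;		/* simulated clock */
	int up_at_ms;		/* when the fake link trains */
	int slept_after_ms;	/* post-train delay actually applied */
};

static bool link_active(struct sim *s) { return s->now_ms >= s->up_at_ms; }
static void sleep_ms(struct sim *s, int ms) { s->now_ms += ms; }

/* Sketch of pcie_wait_for_link_delay()'s shape after this patch:
 * poll every 10 ms up to @timeout, then sleep @delay only on success
 * and only if the caller asked for a delay (delay == 0 means none). */
static bool wait_for_link_delay(struct sim *s, int timeout, int delay)
{
	bool ret;

	for (;;) {
		ret = link_active(s);
		if (ret || timeout <= 0)
			break;
		sleep_ms(s, 10);
		timeout -= 10;
	}
	if (ret && delay) {
		sleep_ms(s, delay);
		s->slept_after_ms = delay;
	}
	return ret;
}
```

The `delay == 0` case is what lets `pci_bridge_wait_for_secondary_bus()` separate "wait for training" from the mandatory accessibility delay it now issues itself.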
-1
drivers/pci/pcie/Kconfig
···
 	bool "PCI Express Advanced Error Reporting support"
 	depends on PCIEPORTBUS
 	select RAS
-	default y
 	help
 	  This enables PCI Express Root Port Advanced Error Reporting
 	  (AER) driver support.  Error reporting messages sent to Root
+99 -247
drivers/pci/pcie/aer.c
···
 */
static int enable_ecrc_checking(struct pci_dev *dev)
{
-	int pos;
+	int aer = dev->aer_cap;
 	u32 reg32;
 
-	if (!pci_is_pcie(dev))
+	if (!aer)
 		return -ENODEV;
 
-	pos = dev->aer_cap;
-	if (!pos)
-		return -ENODEV;
-
-	pci_read_config_dword(dev, pos + PCI_ERR_CAP, &reg32);
+	pci_read_config_dword(dev, aer + PCI_ERR_CAP, &reg32);
 	if (reg32 & PCI_ERR_CAP_ECRC_GENC)
 		reg32 |= PCI_ERR_CAP_ECRC_GENE;
 	if (reg32 & PCI_ERR_CAP_ECRC_CHKC)
 		reg32 |= PCI_ERR_CAP_ECRC_CHKE;
-	pci_write_config_dword(dev, pos + PCI_ERR_CAP, reg32);
+	pci_write_config_dword(dev, aer + PCI_ERR_CAP, reg32);
 
 	return 0;
}
···
 */
static int disable_ecrc_checking(struct pci_dev *dev)
{
-	int pos;
+	int aer = dev->aer_cap;
 	u32 reg32;
 
-	if (!pci_is_pcie(dev))
+	if (!aer)
 		return -ENODEV;
 
-	pos = dev->aer_cap;
-	if (!pos)
-		return -ENODEV;
-
-	pci_read_config_dword(dev, pos + PCI_ERR_CAP, &reg32);
+	pci_read_config_dword(dev, aer + PCI_ERR_CAP, &reg32);
 	reg32 &= ~(PCI_ERR_CAP_ECRC_GENE | PCI_ERR_CAP_ECRC_CHKE);
-	pci_write_config_dword(dev, pos + PCI_ERR_CAP, reg32);
+	pci_write_config_dword(dev, aer + PCI_ERR_CAP, reg32);
 
 	return 0;
}
···
}
#endif /* CONFIG_PCIE_ECRC */
 
-#ifdef CONFIG_ACPI_APEI
-static inline int hest_match_pci(struct acpi_hest_aer_common *p,
-				 struct pci_dev *pci)
-{
-	return ACPI_HEST_SEGMENT(p->bus) == pci_domain_nr(pci->bus) &&
-	       ACPI_HEST_BUS(p->bus) == pci->bus->number &&
-	       p->device == PCI_SLOT(pci->devfn) &&
-	       p->function == PCI_FUNC(pci->devfn);
-}
-
-static inline bool hest_match_type(struct acpi_hest_header *hest_hdr,
-				   struct pci_dev *dev)
-{
-	u16 hest_type = hest_hdr->type;
-	u8 pcie_type = pci_pcie_type(dev);
-
-	if ((hest_type == ACPI_HEST_TYPE_AER_ROOT_PORT &&
-	     pcie_type == PCI_EXP_TYPE_ROOT_PORT) ||
-	    (hest_type == ACPI_HEST_TYPE_AER_ENDPOINT &&
-	     pcie_type == PCI_EXP_TYPE_ENDPOINT) ||
-	    (hest_type == ACPI_HEST_TYPE_AER_BRIDGE &&
-	     (dev->class >> 16) == PCI_BASE_CLASS_BRIDGE))
-		return true;
-	return false;
-}
-
-struct aer_hest_parse_info {
-	struct pci_dev *pci_dev;
-	int firmware_first;
-};
-
-static int hest_source_is_pcie_aer(struct acpi_hest_header *hest_hdr)
-{
-	if (hest_hdr->type == ACPI_HEST_TYPE_AER_ROOT_PORT ||
-	    hest_hdr->type == ACPI_HEST_TYPE_AER_ENDPOINT ||
-	    hest_hdr->type == ACPI_HEST_TYPE_AER_BRIDGE)
-		return 1;
-	return 0;
-}
-
-static int aer_hest_parse(struct acpi_hest_header *hest_hdr, void *data)
-{
-	struct aer_hest_parse_info *info = data;
-	struct acpi_hest_aer_common *p;
-	int ff;
-
-	if (!hest_source_is_pcie_aer(hest_hdr))
-		return 0;
-
-	p = (struct acpi_hest_aer_common *)(hest_hdr + 1);
-	ff = !!(p->flags & ACPI_HEST_FIRMWARE_FIRST);
-
-	/*
-	 * If no specific device is supplied, determine whether
-	 * FIRMWARE_FIRST is set for *any* PCIe device.
-	 */
-	if (!info->pci_dev) {
-		info->firmware_first |= ff;
-		return 0;
-	}
-
-	/* Otherwise, check the specific device */
-	if (p->flags & ACPI_HEST_GLOBAL) {
-		if (hest_match_type(hest_hdr, info->pci_dev))
-			info->firmware_first = ff;
-	} else
-		if (hest_match_pci(p, info->pci_dev))
-			info->firmware_first = ff;
-
-	return 0;
-}
-
-static void aer_set_firmware_first(struct pci_dev *pci_dev)
-{
-	int rc;
-	struct aer_hest_parse_info info = {
-		.pci_dev	= pci_dev,
-		.firmware_first	= 0,
-	};
-
-	rc = apei_hest_parse(aer_hest_parse, &info);
-
-	if (rc)
-		pci_dev->__aer_firmware_first = 0;
-	else
-		pci_dev->__aer_firmware_first = info.firmware_first;
-	pci_dev->__aer_firmware_first_valid = 1;
-}
-
-int pcie_aer_get_firmware_first(struct pci_dev *dev)
-{
-	if (!pci_is_pcie(dev))
-		return 0;
-
-	if (pcie_ports_native)
-		return 0;
-
-	if (!dev->__aer_firmware_first_valid)
-		aer_set_firmware_first(dev);
-	return dev->__aer_firmware_first;
-}
-
-static bool aer_firmware_first;
-
-/**
- * aer_acpi_firmware_first - Check if APEI should control AER.
- */
-bool aer_acpi_firmware_first(void)
-{
-	static bool parsed = false;
-	struct aer_hest_parse_info info = {
-		.pci_dev	= NULL,	/* Check all PCIe devices */
-		.firmware_first	= 0,
-	};
-
-	if (pcie_ports_native)
-		return false;
-
-	if (!parsed) {
-		apei_hest_parse(aer_hest_parse, &info);
-		aer_firmware_first = info.firmware_first;
-		parsed = true;
-	}
-	return aer_firmware_first;
-}
-#endif
-
#define PCI_EXP_AER_FLAGS	(PCI_EXP_DEVCTL_CERE | PCI_EXP_DEVCTL_NFERE | \
				 PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE)
 
-int pci_enable_pcie_error_reporting(struct pci_dev *dev)
+int pcie_aer_is_native(struct pci_dev *dev)
{
-	if (pcie_aer_get_firmware_first(dev))
-		return -EIO;
+	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
 
 	if (!dev->aer_cap)
+		return 0;
+
+	return pcie_ports_native || host->native_aer;
+}
+
+int pci_enable_pcie_error_reporting(struct pci_dev *dev)
+{
+	if (!pcie_aer_is_native(dev))
 		return -EIO;
 
 	return pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_AER_FLAGS);
···
int pci_disable_pcie_error_reporting(struct pci_dev *dev)
{
-	if (pcie_aer_get_firmware_first(dev))
+	if (!pcie_aer_is_native(dev))
 		return -EIO;
 
 	return pcie_capability_clear_word(dev, PCI_EXP_DEVCTL,
···
int pci_aer_clear_nonfatal_status(struct pci_dev *dev)
{
-	int pos;
+	int aer = dev->aer_cap;
 	u32 status, sev;
 
-	pos = dev->aer_cap;
-	if (!pos)
-		return -EIO;
-
-	if (pcie_aer_get_firmware_first(dev))
+	if (!pcie_aer_is_native(dev))
 		return -EIO;
 
 	/* Clear status bits for ERR_NONFATAL errors only */
-	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
-	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &sev);
+	pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &status);
+	pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_SEVER, &sev);
 	status &= ~sev;
 	if (status)
-		pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status);
+		pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, status);
 
 	return 0;
}
···
void pci_aer_clear_fatal_status(struct pci_dev *dev)
{
-	int pos;
+	int aer = dev->aer_cap;
 	u32 status, sev;
 
-	pos = dev->aer_cap;
-	if (!pos)
-		return;
-
-	if (pcie_aer_get_firmware_first(dev))
+	if (!pcie_aer_is_native(dev))
 		return;
 
 	/* Clear status bits for ERR_FATAL errors only */
-	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
-	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &sev);
+	pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &status);
+	pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_SEVER, &sev);
 	status &= sev;
 	if (status)
-		pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status);
+		pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, status);
}
···
 */
int pci_aer_raw_clear_status(struct pci_dev *dev)
{
-	int pos;
+	int aer = dev->aer_cap;
 	u32 status;
 	int port_type;
 
-	if (!pci_is_pcie(dev))
-		return -ENODEV;
-
-	pos = dev->aer_cap;
-	if (!pos)
+	if (!aer)
 		return -EIO;
 
 	port_type = pci_pcie_type(dev);
 	if (port_type == PCI_EXP_TYPE_ROOT_PORT) {
-		pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &status);
-		pci_write_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, status);
+		pci_read_config_dword(dev, aer + PCI_ERR_ROOT_STATUS, &status);
+		pci_write_config_dword(dev, aer + PCI_ERR_ROOT_STATUS, status);
 	}
 
-	pci_read_config_dword(dev, pos + PCI_ERR_COR_STATUS, &status);
-	pci_write_config_dword(dev, pos + PCI_ERR_COR_STATUS, status);
+	pci_read_config_dword(dev, aer + PCI_ERR_COR_STATUS, &status);
+	pci_write_config_dword(dev, aer + PCI_ERR_COR_STATUS, status);
 
-	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
-	pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status);
+	pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &status);
+	pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, status);
 
 	return 0;
}
 
int pci_aer_clear_status(struct pci_dev *dev)
{
-	if (pcie_aer_get_firmware_first(dev))
+	if (!pcie_aer_is_native(dev))
 		return -EIO;
 
 	return pci_aer_raw_clear_status(dev);
···
void pci_save_aer_state(struct pci_dev *dev)
{
+	int aer = dev->aer_cap;
 	struct pci_cap_saved_state *save_state;
 	u32 *cap;
-	int pos;
 
-	pos = dev->aer_cap;
-	if (!pos)
+	if (!aer)
 		return;
 
 	save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_ERR);
···
 		return;
 
 	cap = &save_state->cap.data[0];
-	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, cap++);
-	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, cap++);
-	pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, cap++);
-	pci_read_config_dword(dev, pos + PCI_ERR_CAP, cap++);
+	pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_MASK, cap++);
+	pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_SEVER, cap++);
+	pci_read_config_dword(dev, aer + PCI_ERR_COR_MASK, cap++);
+	pci_read_config_dword(dev, aer + PCI_ERR_CAP, cap++);
 	if (pcie_cap_has_rtctl(dev))
-		pci_read_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, cap++);
+		pci_read_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, cap++);
}
 
void pci_restore_aer_state(struct pci_dev *dev)
{
+	int aer = dev->aer_cap;
 	struct pci_cap_saved_state *save_state;
 	u32 *cap;
-	int pos;
 
-	pos = dev->aer_cap;
-	if (!pos)
+	if (!aer)
 		return;
 
 	save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_ERR);
···
 		return;
 
 	cap = &save_state->cap.data[0];
-	pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, *cap++);
-	pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, *cap++);
-	pci_write_config_dword(dev, pos + PCI_ERR_COR_MASK, *cap++);
-	pci_write_config_dword(dev, pos + PCI_ERR_CAP, *cap++);
+	pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_MASK, *cap++);
+	pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_SEVER, *cap++);
+	pci_write_config_dword(dev, aer + PCI_ERR_COR_MASK, *cap++);
+	pci_write_config_dword(dev, aer + PCI_ERR_CAP, *cap++);
 	if (pcie_cap_has_rtctl(dev))
-		pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, *cap++);
+		pci_write_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, *cap++);
}
 
void pci_aer_init(struct pci_dev *dev)
···
 */
static bool is_error_source(struct pci_dev *dev, struct aer_err_info *e_info)
{
-	int pos;
+	int aer = dev->aer_cap;
 	u32 status, mask;
 	u16 reg16;
···
 	if (!(reg16 & PCI_EXP_AER_FLAGS))
 		return false;
 
-	pos = dev->aer_cap;
-	if (!pos)
+	if (!aer)
 		return false;
 
 	/* Check if error is recorded */
 	if (e_info->severity == AER_CORRECTABLE) {
-		pci_read_config_dword(dev, pos + PCI_ERR_COR_STATUS, &status);
-		pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK, &mask);
+		pci_read_config_dword(dev, aer + PCI_ERR_COR_STATUS, &status);
+		pci_read_config_dword(dev, aer + PCI_ERR_COR_MASK, &mask);
 	} else {
-		pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
-		pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, &mask);
+		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &status);
+		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_MASK, &mask);
 	}
 	if (status & ~mask)
 		return true;
···
 */
static void handle_error_source(struct pci_dev *dev, struct aer_err_info *info)
{
-	int pos;
+	int aer = dev->aer_cap;
 
 	if (info->severity == AER_CORRECTABLE) {
 		/*
 		 * Correctable error does not need software intervention.
 		 * No need to go through error recovery process.
 		 */
-		pos = dev->aer_cap;
-		if (pos)
-			pci_write_config_dword(dev, pos + PCI_ERR_COR_STATUS,
+		if (aer)
+			pci_write_config_dword(dev, aer + PCI_ERR_COR_STATUS,
 					info->status);
 		pci_aer_clear_device_status(dev);
 	} else if (info->severity == AER_NONFATAL)
···
 */
int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info)
{
-	int pos, temp;
+	int aer = dev->aer_cap;
+	int temp;
 
 	/* Must reset in this function */
 	info->status = 0;
 	info->tlp_header_valid = 0;
 
-	pos = dev->aer_cap;
-
 	/* The device might not support AER */
-	if (!pos)
+	if (!aer)
 		return 0;
 
 	if (info->severity == AER_CORRECTABLE) {
-		pci_read_config_dword(dev, pos + PCI_ERR_COR_STATUS,
+		pci_read_config_dword(dev, aer + PCI_ERR_COR_STATUS,
 			&info->status);
-		pci_read_config_dword(dev, pos + PCI_ERR_COR_MASK,
+		pci_read_config_dword(dev, aer + PCI_ERR_COR_MASK,
 			&info->mask);
 		if (!(info->status & ~info->mask))
 			return 0;
···
 		   info->severity == AER_NONFATAL) {
 
 		/* Link is still healthy for IO reads */
-		pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS,
+		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS,
 			&info->status);
-		pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK,
+		pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_MASK,
 			&info->mask);
 		if (!(info->status & ~info->mask))
 			return 0;
 
 		/* Get First Error Pointer */
-		pci_read_config_dword(dev, pos + PCI_ERR_CAP, &temp);
+		pci_read_config_dword(dev, aer + PCI_ERR_CAP, &temp);
 		info->first_error = PCI_ERR_CAP_FEP(temp);
 
 		if (info->status & AER_LOG_TLP_MASKS) {
 			info->tlp_header_valid = 1;
 			pci_read_config_dword(dev,
-				pos + PCI_ERR_HEADER_LOG, &info->tlp.dw0);
+				aer + PCI_ERR_HEADER_LOG, &info->tlp.dw0);
 			pci_read_config_dword(dev,
-				pos + PCI_ERR_HEADER_LOG + 4, &info->tlp.dw1);
+				aer + PCI_ERR_HEADER_LOG + 4, &info->tlp.dw1);
 			pci_read_config_dword(dev,
-				pos + PCI_ERR_HEADER_LOG + 8, &info->tlp.dw2);
+				aer + PCI_ERR_HEADER_LOG + 8, &info->tlp.dw2);
 			pci_read_config_dword(dev,
-				pos + PCI_ERR_HEADER_LOG + 12, &info->tlp.dw3);
+				aer + PCI_ERR_HEADER_LOG + 12, &info->tlp.dw3);
 		}
 	}
···
 	struct pcie_device *pdev = (struct pcie_device *)context;
 	struct aer_rpc *rpc = get_service_data(pdev);
 	struct pci_dev *rp = rpc->rpd;
+	int aer = rp->aer_cap;
 	struct aer_err_source e_src = {};
-	int pos = rp->aer_cap;
 
-	pci_read_config_dword(rp, pos + PCI_ERR_ROOT_STATUS, &e_src.status);
+	pci_read_config_dword(rp, aer + PCI_ERR_ROOT_STATUS, &e_src.status);
 	if (!(e_src.status & (PCI_ERR_ROOT_UNCOR_RCV|PCI_ERR_ROOT_COR_RCV)))
 		return IRQ_NONE;
 
-	pci_read_config_dword(rp, pos + PCI_ERR_ROOT_ERR_SRC, &e_src.id);
-	pci_write_config_dword(rp, pos + PCI_ERR_ROOT_STATUS, e_src.status);
+	pci_read_config_dword(rp, aer + PCI_ERR_ROOT_ERR_SRC, &e_src.id);
+	pci_write_config_dword(rp, aer + PCI_ERR_ROOT_STATUS, e_src.status);
 
 	if (!kfifo_put(&rpc->aer_fifo, e_src))
 		return IRQ_HANDLED;
···
static void aer_enable_rootport(struct aer_rpc *rpc)
{
 	struct pci_dev *pdev = rpc->rpd;
-	int aer_pos;
+	int aer = pdev->aer_cap;
 	u16 reg16;
 	u32 reg32;
···
 	pcie_capability_clear_word(pdev, PCI_EXP_RTCTL,
 				   SYSTEM_ERROR_INTR_ON_MESG_MASK);
 
-	aer_pos = pdev->aer_cap;
 	/* Clear error status */
-	pci_read_config_dword(pdev, aer_pos + PCI_ERR_ROOT_STATUS, &reg32);
-	pci_write_config_dword(pdev, aer_pos + PCI_ERR_ROOT_STATUS, reg32);
-	pci_read_config_dword(pdev, aer_pos + PCI_ERR_COR_STATUS, &reg32);
-	pci_write_config_dword(pdev, aer_pos + PCI_ERR_COR_STATUS, reg32);
-	pci_read_config_dword(pdev, aer_pos + PCI_ERR_UNCOR_STATUS, &reg32);
-	pci_write_config_dword(pdev, aer_pos + PCI_ERR_UNCOR_STATUS, reg32);
+	pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, &reg32);
+	pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, reg32);
+	pci_read_config_dword(pdev, aer + PCI_ERR_COR_STATUS, &reg32);
+	pci_write_config_dword(pdev, aer + PCI_ERR_COR_STATUS, reg32);
+	pci_read_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, &reg32);
+	pci_write_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, reg32);
 
 	/*
 	 * Enable error reporting for the root port device and downstream port
···
 	set_downstream_devices_error_reporting(pdev, true);
 
 	/* Enable Root Port's interrupt in response to error messages */
-	pci_read_config_dword(pdev, aer_pos + PCI_ERR_ROOT_COMMAND, &reg32);
+	pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, &reg32);
 	reg32 |= ROOT_PORT_INTR_ON_MESG_MASK;
-	pci_write_config_dword(pdev, aer_pos + PCI_ERR_ROOT_COMMAND, reg32);
+	pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32);
}
 
/**
···
static void aer_disable_rootport(struct aer_rpc *rpc)
{
 	struct pci_dev *pdev = rpc->rpd;
+	int aer = pdev->aer_cap;
 	u32 reg32;
-	int pos;
 
 	/*
 	 * Disable error reporting for the root port device and downstream port
···
 	set_downstream_devices_error_reporting(pdev, false);
 
-	pos = pdev->aer_cap;
 	/* Disable Root's interrupt in response to error messages */
-	pci_read_config_dword(pdev, pos + PCI_ERR_ROOT_COMMAND, &reg32);
+	pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, &reg32);
 	reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK;
-	pci_write_config_dword(pdev, pos + PCI_ERR_ROOT_COMMAND, reg32);
+	pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32);
 
 	/* Clear Root's error status reg */
-	pci_read_config_dword(pdev, pos + PCI_ERR_ROOT_STATUS, &reg32);
-	pci_write_config_dword(pdev, pos + PCI_ERR_ROOT_STATUS, reg32);
+	pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, &reg32);
+	pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, reg32);
}
 
/**
···
 */
static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
{
+	int aer = dev->aer_cap;
 	u32 reg32;
-	int pos;
 	int rc;
 
-	pos = dev->aer_cap;
 
 	/* Disable Root's interrupt in response to error messages */
-	pci_read_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, &reg32);
+	pci_read_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, &reg32);
 	reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK;
-	pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32);
+	pci_write_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, reg32);
 
 	rc = pci_bus_error_reset(dev);
 	pci_info(dev, "Root Port link has been reset\n");
 
 	/* Clear Root Error Status */
-	pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &reg32);
-	pci_write_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, reg32);
+	pci_read_config_dword(dev, aer + PCI_ERR_ROOT_STATUS, &reg32);
+	pci_write_config_dword(dev, aer + PCI_ERR_ROOT_STATUS, reg32);
 
 	/* Enable Root Port's interrupt in response to error messages */
-	pci_read_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, &reg32);
+	pci_read_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, &reg32);
 	reg32 |= ROOT_PORT_INTR_ON_MESG_MASK;
-	pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32);
+	pci_write_config_dword(dev, aer + PCI_ERR_ROOT_COMMAND, reg32);
 
 	return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
}
···
 */
int __init pcie_aer_init(void)
{
-	if (!pci_aer_available() || aer_acpi_firmware_first())
+	if (!pci_aer_available())
 		return -ENXIO;
 	return pcie_port_service_register(&aerdriver);
}
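The centerpiece of the aer.c rework is replacing the HEST FIRMWARE_FIRST parsing with a single `pcie_aer_is_native()` test based on `_OSC`. A condensed sketch of that decision, with plain booleans standing in for `dev->aer_cap`, `pcie_ports_native`, and `host->native_aer` (the helper name `aer_is_native` is invented for this sketch):

```c
#include <assert.h>
#include <stdbool.h>

/* The OS may touch a device's AER registers only if the device actually has
 * an AER capability AND either the user forced native port-service handling
 * ("pcie_ports=native") or firmware granted AER control to the OS via _OSC. */
static bool aer_is_native(bool has_aer_cap, bool ports_native,
			  bool osc_native_aer)
{
	if (!has_aer_cap)
		return false;
	return ports_native || osc_native_aer;
}
```

All the firmware-first checks in the file then become the single question "is AER native for this device?", which is why so many `pcie_aer_get_firmware_first()` calls collapse into `!pcie_aer_is_native()`.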
-10
drivers/pci/pcie/aspm.c
···
 
 	/* Setup initial capable state. Will be updated later */
 	link->aspm_capable = link->aspm_support;
-	/*
-	 * If the downstream component has pci bridge function, don't
-	 * do ASPM for now.
-	 */
-	list_for_each_entry(child, &linkbus->devices, bus_list) {
-		if (pci_pcie_type(child) == PCI_EXP_TYPE_PCI_BRIDGE) {
-			link->aspm_disable = ASPM_STATE_ALL;
-			break;
-		}
-	}
 
 	/* Get and check endpoint acceptable latencies */
 	list_for_each_entry(child, &linkbus->devices, bus_list) {
+2 -1
drivers/pci/pcie/dpc.c
···
 	int status;
 	u16 ctl, cap;
 
-	if (pcie_aer_get_firmware_first(pdev) && !pcie_ports_dpc_native)
+	if (!pcie_aer_is_native(pdev) && !pcie_ports_dpc_native)
 		return -ENOTSUPP;
 
 	status = devm_request_threaded_irq(device, dev->irq, dpc_irq,
···
 
 	ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN;
 	pci_write_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, ctl);
+	pci_info(pdev, "enabled with IRQ %d\n", dev->irq);
 
 	pci_info(pdev, "error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n",
 		 cap & PCI_EXP_DPC_IRQ, FLAG(cap, PCI_EXP_DPC_CAP_RP_EXT),
+2 -2
drivers/pci/pcie/edr.c
···
 	pci_ers_result_t estate = PCI_ERS_RESULT_DISCONNECT;
 	u16 status;
 
-	pci_info(pdev, "ACPI event %#x received\n", event);
-
 	if (event != ACPI_NOTIFY_DISCONNECT_RECOVER)
 		return;
+
+	pci_info(pdev, "EDR event received\n");
 
 	/* Locate the port which issued EDR event */
 	edev = acpi_dpc_port_get(pdev);
+2 -2
drivers/pci/pcie/pme.c
···
 
 /**
  * pcie_pme_resume - Resume PCIe PME service device.
- * @srv - PCIe service device to resume.
+ * @srv: PCIe service device to resume.
  */
 static int pcie_pme_resume(struct pcie_device *srv)
 {
···
 
 /**
  * pcie_pme_remove - Prepare PCIe PME service device for removal.
- * @srv - PCIe service device to remove.
+ * @srv: PCIe service device to remove.
  */
 static void pcie_pme_remove(struct pcie_device *srv)
 {
+2 -11
drivers/pci/pcie/portdrv.h
···
 
 #ifdef CONFIG_PCIEAER
 int pcie_aer_init(void);
+int pcie_aer_is_native(struct pci_dev *dev);
 #else
 static inline int pcie_aer_init(void) { return 0; }
+static inline int pcie_aer_is_native(struct pci_dev *dev) { return 0; }
 #endif
 
 #ifdef CONFIG_HOTPLUG_PCI_PCIE
···
 static inline bool pcie_pme_no_msi(void) { return false; }
 static inline void pcie_pme_interrupt_enable(struct pci_dev *dev, bool en) {}
 #endif /* !CONFIG_PCIE_PME */
-
-#ifdef CONFIG_ACPI_APEI
-int pcie_aer_get_firmware_first(struct pci_dev *pci_dev);
-#else
-static inline int pcie_aer_get_firmware_first(struct pci_dev *pci_dev)
-{
-	if (pci_dev->__aer_firmware_first_valid)
-		return pci_dev->__aer_firmware_first;
-	return 0;
-}
-#endif
 
 struct device *pcie_port_find_device(struct pci_dev *dev, u32 service);
 #endif /* _PORTDRV_H_ */
+17 -5
drivers/pci/pcie/ptm.c
···
39 39		if (!pci_is_pcie(dev))
40 40			return;
41 41	
42     -	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_PTM);
43     -	if (!pos)
44     -		return;
45     -
46 42		/*
47 43		 * Enable PTM only on interior devices (root ports, switch ports,
48 44		 * etc.) on the assumption that it causes no link traffic until an
···
46 50		 */
47 51		if ((pci_pcie_type(dev) == PCI_EXP_TYPE_ENDPOINT ||
48 52		     pci_pcie_type(dev) == PCI_EXP_TYPE_RC_END))
    53 +		return;
    54 +
    55 +	/*
    56 +	 * Switch Downstream Ports are not permitted to have a PTM
    57 +	 * capability; their PTM behavior is controlled by the Upstream
    58 +	 * Port (PCIe r5.0, sec 7.9.16).
    59 +	 */
    60 +	ups = pci_upstream_bridge(dev);
    61 +	if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM &&
    62 +	    ups && ups->ptm_enabled) {
    63 +		dev->ptm_granularity = ups->ptm_granularity;
    64 +		dev->ptm_enabled = 1;
    65 +		return;
    66 +	}
    67 +
    68 +	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_PTM);
    69 +	if (!pos)
49 70			return;
50 71	
51 72		pci_read_config_dword(dev, pos + PCI_PTM_CAP, &cap);
···
74 61		 * the spec recommendation (PCIe r3.1, sec 7.32.3), select the
75 62		 * furthest upstream Time Source as the PTM Root.
76 63		 */
77     -	ups = pci_upstream_bridge(dev);
78 64		if (ups && ups->ptm_enabled) {
79 65			ctrl = PCI_PTM_CTRL_ENABLE;
80 66			if (ups->ptm_granularity == 0)
+44 -21
drivers/pci/probe.c
···
565 565		return b;
566 566	}
567 567	
568     -static void devm_pci_release_host_bridge_dev(struct device *dev)
    568 +static void pci_release_host_bridge_dev(struct device *dev)
569 569	{
570 570		struct pci_host_bridge *bridge = to_pci_host_bridge(dev);
571 571	
···
574 574	
575 575		pci_free_resource_list(&bridge->windows);
576 576		pci_free_resource_list(&bridge->dma_ranges);
577     -}
578     -
579     -static void pci_release_host_bridge_dev(struct device *dev)
580     -{
581     -	devm_pci_release_host_bridge_dev(dev);
582     -	kfree(to_pci_host_bridge(dev));
    577 +	kfree(bridge);
583 578	}
584 579	
585 580	static void pci_init_host_bridge(struct pci_host_bridge *bridge)
···
594 599		bridge->native_pme = 1;
595 600		bridge->native_ltr = 1;
596 601		bridge->native_dpc = 1;
    602 +
    603 +	device_initialize(&bridge->dev);
597 604	}
598 605	
599 606	struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
···
613 616	}
614 617	EXPORT_SYMBOL(pci_alloc_host_bridge);
615 618	
    619 +static void devm_pci_alloc_host_bridge_release(void *data)
    620 +{
    621 +	pci_free_host_bridge(data);
    622 +}
    623 +
616 624	struct pci_host_bridge *devm_pci_alloc_host_bridge(struct device *dev,
617 625						   size_t priv)
618 626	{
    627 +	int ret;
619 628		struct pci_host_bridge *bridge;
620 629	
621     -	bridge = devm_kzalloc(dev, sizeof(*bridge) + priv, GFP_KERNEL);
    630 +	bridge = pci_alloc_host_bridge(priv);
622 631		if (!bridge)
623 632			return NULL;
624 633	
625     -	pci_init_host_bridge(bridge);
626     -	bridge->dev.release = devm_pci_release_host_bridge_dev;
    634 +	ret = devm_add_action_or_reset(dev, devm_pci_alloc_host_bridge_release,
    635 +				       bridge);
    636 +	if (ret)
    637 +		return NULL;
627 638	
628 639		return bridge;
629 640	}
···
639 634	
640 635	void pci_free_host_bridge(struct pci_host_bridge *bridge)
641 636	{
642     -	pci_free_resource_list(&bridge->windows);
643     -	pci_free_resource_list(&bridge->dma_ranges);
644     -
645     -	kfree(bridge);
    637 +	put_device(&bridge->dev);
646 638	}
647 639	EXPORT_SYMBOL(pci_free_host_bridge);
648 640	
···
910 908		if (err)
911 909			goto free;
912 910	
913     -	err = device_register(&bridge->dev);
914     -	if (err)
    911 +	err = device_add(&bridge->dev);
    912 +	if (err) {
915 913			put_device(&bridge->dev);
916     -
    914 +		goto free;
    915 +	}
917 916		bus->bridge = get_device(&bridge->dev);
918 917		device_enable_async_suspend(bus->bridge);
919 918		pci_set_bus_of_node(bus);
···
980 977	
981 978	unregister:
982 979		put_device(&bridge->dev);
983     -	device_unregister(&bridge->dev);
    980 +	device_del(&bridge->dev);
984 981	
985 982	free:
986 983		kfree(bus);
···
1937 1934		struct pci_dev *bridge = pci_upstream_bridge(dev);
1938 1935		int mps, mpss, p_mps, rc;
1939 1936	
1940      -	if (!pci_is_pcie(dev) || !bridge || !pci_is_pcie(bridge))
     1937 +	if (!pci_is_pcie(dev))
1941 1938			return;
1942 1939	
1943 1940		/* MPS and MRRS fields are of type 'RsvdP' for VFs, short-circuit out */
1944 1941		if (dev->is_virtfn)
     1942 +		return;
     1943 +
     1944 +	/*
     1945 +	 * For Root Complex Integrated Endpoints, program the maximum
     1946 +	 * supported value unless limited by the PCIE_BUS_PEER2PEER case.
     1947 +	 */
     1948 +	if (pci_pcie_type(dev) == PCI_EXP_TYPE_RC_END) {
     1949 +		if (pcie_bus_config == PCIE_BUS_PEER2PEER)
     1950 +			mps = 128;
     1951 +		else
     1952 +			mps = 128 << dev->pcie_mpss;
     1953 +		rc = pcie_set_mps(dev, mps);
     1954 +		if (rc) {
     1955 +			pci_warn(dev, "can't set Max Payload Size to %d; if necessary, use \"pci=pcie_bus_safe\" and report a bug\n",
     1956 +				 mps);
     1957 +		}
     1958 +		return;
     1959 +	}
     1960 +
     1961 +	if (!bridge || !pci_is_pcie(bridge))
1945 1962			return;
1946 1963	
1947 1964		mps = pcie_get_mps(dev);
···
2079 2056	 * For now, we only deal with Relaxed Ordering issues with Root
2080 2057	 * Ports. Peer-to-Peer DMA is another can of worms.
2081 2058	 */
2082      -	root = pci_find_pcie_root_port(dev);
     2059 +	root = pcie_find_root_port(dev);
2083 2060		if (!root)
2084 2061			return;
2085 2062	
···
2975 2952		return bridge->bus;
2976 2953	
2977 2954	err_out:
2978      -	kfree(bridge);
     2955 +	put_device(&bridge->dev);
2979 2956		return NULL;
2980 2957	}
2981 2958	EXPORT_SYMBOL_GPL(pci_create_root_bus);
+45 -5
drivers/pci/quirks.c
···
4319 4319	 */
4320 4320	static void quirk_disable_root_port_attributes(struct pci_dev *pdev)
4321 4321	{
4322      -	struct pci_dev *root_port = pci_find_pcie_root_port(pdev);
     4322 +	struct pci_dev *root_port = pcie_find_root_port(pdev);
4323 4323	
4324 4324		if (!root_port) {
4325 4325			pci_warn(pdev, "PCIe Completion erratum may cause device errors\n");
···
4682 4682				      PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_DT);
4683 4683	}
4684 4684	
     4685 +static int pci_quirk_rciep_acs(struct pci_dev *dev, u16 acs_flags)
     4686 +{
     4687 +	/*
     4688 +	 * Intel RCiEP's are required to allow p2p only on translated
     4689 +	 * addresses. Refer to Intel VT-d specification, r3.1, sec 3.16,
     4690 +	 * "Root-Complex Peer to Peer Considerations".
     4691 +	 */
     4692 +	if (pci_pcie_type(dev) != PCI_EXP_TYPE_RC_END)
     4693 +		return -ENOTTY;
     4694 +
     4695 +	return pci_acs_ctrl_enabled(acs_flags,
     4696 +		PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
     4697 +}
     4698 +
4685 4699	static int pci_quirk_brcm_acs(struct pci_dev *dev, u16 acs_flags)
4686 4700	{
4687 4701		/*
···
4778 4764		/* I219 */
4779 4765		{ PCI_VENDOR_ID_INTEL, 0x15b7, pci_quirk_mf_endpoint_acs },
4780 4766		{ PCI_VENDOR_ID_INTEL, 0x15b8, pci_quirk_mf_endpoint_acs },
     4767 +	{ PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_rciep_acs },
4781 4768		/* QCOM QDF2xxx root ports */
4782 4769		{ PCI_VENDOR_ID_QCOM, 0x0400, pci_quirk_qcom_rp_acs },
4783 4770		{ PCI_VENDOR_ID_QCOM, 0x0401, pci_quirk_qcom_rp_acs },
···
5144 5129	}
5145 5130	DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x443, quirk_intel_qat_vf_cap);
5146 5131	
5147      -/* FLR may cause some 82579 devices to hang */
5148      -static void quirk_intel_no_flr(struct pci_dev *dev)
     5132 +/*
     5133 + * FLR may cause the following devices to hang:
     5134 + *
     5135 + * AMD Starship/Matisse HD Audio Controller 0x1487
     5136 + * AMD Starship USB 3.0 Host Controller 0x148c
     5137 + * AMD Matisse USB 3.0 Host Controller 0x149c
     5138 + * Intel 82579LM Gigabit Ethernet Controller 0x1502
     5139 + * Intel 82579V Gigabit Ethernet Controller 0x1503
     5140 + *
     5141 + */
     5142 +static void quirk_no_flr(struct pci_dev *dev)
5149 5143	{
5150 5144		dev->dev_flags |= PCI_DEV_FLAGS_NO_FLR_RESET;
5151 5145	}
5152      -DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_intel_no_flr);
5153      -DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_intel_no_flr);
     5146 +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1487, quirk_no_flr);
     5147 +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x148c, quirk_no_flr);
     5148 +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x149c, quirk_no_flr);
     5149 +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_no_flr);
     5150 +DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_no_flr);
5154 5151	
5155 5152	static void quirk_no_ext_tags(struct pci_dev *pdev)
5156 5153	{
···
5594 5567		dev->pme_support &= ~(PCI_PM_CAP_PME_D0 >> PCI_PM_CAP_PME_SHIFT);
5595 5568	}
5596 5569	DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ASMEDIA, 0x2142, pci_fixup_no_d0_pme);
     5570 +
     5571 +/*
     5572 + * Device [12d8:0x400e] and [12d8:0x400f]
     5573 + * These devices advertise PME# support in all power states but don't
     5574 + * reliably assert it.
     5575 + */
     5576 +static void pci_fixup_no_pme(struct pci_dev *dev)
     5577 +{
     5578 +	pci_info(dev, "PME# is unreliable, disabling it\n");
     5579 +	dev->pme_support = 0;
     5580 +}
     5581 +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_PERICOM, 0x400e, pci_fixup_no_pme);
     5582 +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_PERICOM, 0x400f, pci_fixup_no_pme);
5597 5583	
5598 5584	static void apex_pci_fixup_class(struct pci_dev *pdev)
5599 5585	{
+1 -1
drivers/pci/remove.c
···
160 160		host_bridge->bus = NULL;
161 161	
162 162		/* remove the host bridge */
163     -	device_unregister(&host_bridge->dev);
    163 +	device_del(&host_bridge->dev);
164 164	}
165 165	EXPORT_SYMBOL_GPL(pci_remove_root_bus);
+62 -53
drivers/pci/setup-bus.c
···
26 26	#include "pci.h"
27 27	
28 28	unsigned int pci_flags;
    29 +EXPORT_SYMBOL_GPL(pci_flags);
29 30	
30 31	struct pci_dev_resource {
31 32		struct list_head list;
···
584 583			io_mask = PCI_IO_1K_RANGE_MASK;
585 584	
586 585		/* Set up the top and bottom of the PCI I/O segment for this bus */
587     -	res = &bridge->resource[PCI_BRIDGE_RESOURCES + 0];
    586 +	res = &bridge->resource[PCI_BRIDGE_IO_WINDOW];
588 587		pcibios_resource_to_bus(bridge->bus, &region, res);
589 588		if (res->flags & IORESOURCE_IO) {
590 589			pci_read_config_word(bridge, PCI_IO_BASE, &l);
···
614 613		u32 l;
615 614	
616 615		/* Set up the top and bottom of the PCI Memory segment for this bus */
617     -	res = &bridge->resource[PCI_BRIDGE_RESOURCES + 1];
    616 +	res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW];
618 617		pcibios_resource_to_bus(bridge->bus, &region, res);
619 618		if (res->flags & IORESOURCE_MEM) {
620 619			l = (region.start >> 16) & 0xfff0;
···
641 640	
642 641		/* Set up PREF base/limit */
643 642		bu = lu = 0;
644     -	res = &bridge->resource[PCI_BRIDGE_RESOURCES + 2];
    643 +	res = &bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
645 644		pcibios_resource_to_bus(bridge->bus, &region, res);
646 645		if (res->flags & IORESOURCE_PREFETCH) {
647 646			l = (region.start >> 16) & 0xfff0;
···
708 707		if (!pci_bus_clip_resource(bridge, i))
709 708			return -EINVAL;	/* Clipping didn't change anything */
710 709	
711     -	switch (i - PCI_BRIDGE_RESOURCES) {
712     -	case 0:
    710 +	switch (i) {
    711 +	case PCI_BRIDGE_IO_WINDOW:
713 712			pci_setup_bridge_io(bridge);
714 713			break;
715     -	case 1:
    714 +	case PCI_BRIDGE_MEM_WINDOW:
716 715			pci_setup_bridge_mmio(bridge);
717 716			break;
718     -	case 2:
    717 +	case PCI_BRIDGE_PREF_MEM_WINDOW:
719 718			pci_setup_bridge_mmio_pref(bridge);
720 719			break;
721 720		default:
···
736 735	static void pci_bridge_check_ranges(struct pci_bus *bus)
737 736	{
738 737		struct pci_dev *bridge = bus->self;
739     -	struct resource *b_res = &bridge->resource[PCI_BRIDGE_RESOURCES];
    738 +	struct resource *b_res;
740 739	
741     -	b_res[1].flags |= IORESOURCE_MEM;
    740 +	b_res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW];
    741 +	b_res->flags |= IORESOURCE_MEM;
742 742	
743     -	if (bridge->io_window)
744     -		b_res[0].flags |= IORESOURCE_IO;
    743 +	if (bridge->io_window) {
    744 +		b_res = &bridge->resource[PCI_BRIDGE_IO_WINDOW];
    745 +		b_res->flags |= IORESOURCE_IO;
    746 +	}
745 747	
746 748		if (bridge->pref_window) {
747     -		b_res[2].flags |= IORESOURCE_MEM | IORESOURCE_PREFETCH;
    749 +		b_res = &bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
    750 +		b_res->flags |= IORESOURCE_MEM | IORESOURCE_PREFETCH;
748 751			if (bridge->pref_64_window) {
749     -			b_res[2].flags |= IORESOURCE_MEM_64;
750     -			b_res[2].flags |= PCI_PREF_RANGE_TYPE_64;
    752 +			b_res->flags |= IORESOURCE_MEM_64 |
    753 +					PCI_PREF_RANGE_TYPE_64;
751 754			}
752 755		}
753 756	}
···
1110 1105					struct list_head *realloc_head)
1111 1106	{
1112 1107		struct pci_dev *bridge = bus->self;
1113      -	struct resource *b_res = &bridge->resource[PCI_BRIDGE_RESOURCES];
     1108 +	struct resource *b_res;
1114 1109		resource_size_t b_res_3_size = pci_cardbus_mem_size * 2;
1115 1110		u16 ctrl;
1116 1111	
1117      -	if (b_res[0].parent)
     1112 +	b_res = &bridge->resource[PCI_CB_BRIDGE_IO_0_WINDOW];
     1113 +	if (b_res->parent)
1118 1114			goto handle_b_res_1;
1119 1115		/*
1120 1116		 * Reserve some resources for CardBus. We reserve a fixed amount
1121 1117		 * of bus space for CardBus bridges.
1122 1118		 */
1123      -	b_res[0].start = pci_cardbus_io_size;
1124      -	b_res[0].end = b_res[0].start + pci_cardbus_io_size - 1;
1125      -	b_res[0].flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN;
     1119 +	b_res->start = pci_cardbus_io_size;
     1120 +	b_res->end = b_res->start + pci_cardbus_io_size - 1;
     1121 +	b_res->flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN;
1126 1122		if (realloc_head) {
1127      -		b_res[0].end -= pci_cardbus_io_size;
     1123 +		b_res->end -= pci_cardbus_io_size;
1128 1124			add_to_list(realloc_head, bridge, b_res, pci_cardbus_io_size,
1129      -				pci_cardbus_io_size);
     1125 +			    pci_cardbus_io_size);
1130 1126		}
1131 1127	
1132 1128	handle_b_res_1:
1133      -	if (b_res[1].parent)
     1129 +	b_res = &bridge->resource[PCI_CB_BRIDGE_IO_1_WINDOW];
     1130 +	if (b_res->parent)
1134 1131			goto handle_b_res_2;
1135      -	b_res[1].start = pci_cardbus_io_size;
1136      -	b_res[1].end = b_res[1].start + pci_cardbus_io_size - 1;
1137      -	b_res[1].flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN;
     1132 +	b_res->start = pci_cardbus_io_size;
     1133 +	b_res->end = b_res->start + pci_cardbus_io_size - 1;
     1134 +	b_res->flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN;
1138 1135		if (realloc_head) {
1139      -		b_res[1].end -= pci_cardbus_io_size;
1140      -		add_to_list(realloc_head, bridge, b_res+1, pci_cardbus_io_size,
1141      -				 pci_cardbus_io_size);
     1136 +		b_res->end -= pci_cardbus_io_size;
     1137 +		add_to_list(realloc_head, bridge, b_res, pci_cardbus_io_size,
     1138 +			    pci_cardbus_io_size);
1142 1139		}
1143 1140	
1144 1141	handle_b_res_2:
···
1160 1153			pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl);
1161 1154		}
1162 1155	
1163      -	if (b_res[2].parent)
     1156 +	b_res = &bridge->resource[PCI_CB_BRIDGE_MEM_0_WINDOW];
     1157 +	if (b_res->parent)
1164 1158			goto handle_b_res_3;
1165 1159		/*
1166 1160		 * If we have prefetchable memory support, allocate two regions.
1167 1161		 * Otherwise, allocate one region of twice the size.
1168 1162		 */
1169 1163		if (ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM0) {
1170      -		b_res[2].start = pci_cardbus_mem_size;
1171      -		b_res[2].end = b_res[2].start + pci_cardbus_mem_size - 1;
1172      -		b_res[2].flags |= IORESOURCE_MEM | IORESOURCE_PREFETCH |
1173      -				  IORESOURCE_STARTALIGN;
     1164 +		b_res->start = pci_cardbus_mem_size;
     1165 +		b_res->end = b_res->start + pci_cardbus_mem_size - 1;
     1166 +		b_res->flags |= IORESOURCE_MEM | IORESOURCE_PREFETCH |
     1167 +				IORESOURCE_STARTALIGN;
1174 1168			if (realloc_head) {
1175      -			b_res[2].end -= pci_cardbus_mem_size;
1176      -			add_to_list(realloc_head, bridge, b_res+2,
1177      -				 pci_cardbus_mem_size, pci_cardbus_mem_size);
     1169 +			b_res->end -= pci_cardbus_mem_size;
     1170 +			add_to_list(realloc_head, bridge, b_res,
     1171 +				    pci_cardbus_mem_size, pci_cardbus_mem_size);
1178 1172			}
1179 1173	
1180 1174			/* Reduce that to half */
···
1183 1175		}
1184 1176	
1185 1177	handle_b_res_3:
1186      -	if (b_res[3].parent)
     1178 +	b_res = &bridge->resource[PCI_CB_BRIDGE_MEM_1_WINDOW];
     1179 +	if (b_res->parent)
1187 1180			goto handle_done;
1188      -	b_res[3].start = pci_cardbus_mem_size;
1189      -	b_res[3].end = b_res[3].start + b_res_3_size - 1;
1190      -	b_res[3].flags |= IORESOURCE_MEM | IORESOURCE_STARTALIGN;
     1181 +	b_res->start = pci_cardbus_mem_size;
     1182 +	b_res->end = b_res->start + b_res_3_size - 1;
     1183 +	b_res->flags |= IORESOURCE_MEM | IORESOURCE_STARTALIGN;
1191 1184		if (realloc_head) {
1192      -		b_res[3].end -= b_res_3_size;
1193      -		add_to_list(realloc_head, bridge, b_res+3, b_res_3_size,
1194      -				 pci_cardbus_mem_size);
     1185 +		b_res->end -= b_res_3_size;
     1186 +		add_to_list(realloc_head, bridge, b_res, b_res_3_size,
     1187 +			    pci_cardbus_mem_size);
1195 1188		}
1196 1189	
1197 1190	handle_done:
···
1236 1227			break;
1237 1228			hdr_type = -1;	/* Intentionally invalid - not a PCI device. */
1238 1229		} else {
1239      -		pref = &bus->self->resource[PCI_BRIDGE_RESOURCES + 2];
     1230 +		pref = &bus->self->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
1240 1231			hdr_type = bus->self->hdr_type;
1241 1232		}
1242 1233	
···
1894 1885		struct pci_dev *dev, *bridge = bus->self;
1895 1886		resource_size_t io_per_hp, mmio_per_hp, mmio_pref_per_hp, align;
1896 1887	
1897      -	io_res = &bridge->resource[PCI_BRIDGE_RESOURCES + 0];
1898      -	mmio_res = &bridge->resource[PCI_BRIDGE_RESOURCES + 1];
1899      -	mmio_pref_res = &bridge->resource[PCI_BRIDGE_RESOURCES + 2];
     1888 +	io_res = &bridge->resource[PCI_BRIDGE_IO_WINDOW];
     1889 +	mmio_res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW];
     1890 +	mmio_pref_res = &bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
1900 1891	
1901 1892		/*
1902 1893		 * The alignment of this bridge is yet to be considered, hence it must
···
1969 1960			 * Reduce the available resource space by what the
1970 1961			 * bridge and devices below it occupy.
1971 1962			 */
1972      -		res = &dev->resource[PCI_BRIDGE_RESOURCES + 0];
     1963 +		res = &dev->resource[PCI_BRIDGE_IO_WINDOW];
1973 1964			align = pci_resource_alignment(dev, res);
1974 1965			align = align ? ALIGN(io.start, align) - io.start : 0;
1975 1966			used_size = align + resource_size(res);
1976 1967			if (!res->parent)
1977 1968				io.start = min(io.start + used_size, io.end + 1);
1978 1969	
1979      -		res = &dev->resource[PCI_BRIDGE_RESOURCES + 1];
     1970 +		res = &dev->resource[PCI_BRIDGE_MEM_WINDOW];
1980 1971			align = pci_resource_alignment(dev, res);
1981 1972			align = align ? ALIGN(mmio.start, align) - mmio.start : 0;
1982 1973			used_size = align + resource_size(res);
1983 1974			if (!res->parent)
1984 1975				mmio.start = min(mmio.start + used_size, mmio.end + 1);
1985 1976	
1986      -		res = &dev->resource[PCI_BRIDGE_RESOURCES + 2];
     1977 +		res = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
1987 1978			align = pci_resource_alignment(dev, res);
1988 1979			align = align ? ALIGN(mmio_pref.start, align) -
1989 1980				mmio_pref.start : 0;
···
2036 2027			return;
2037 2028	
2038 2029		/* Take the initial extra resources from the hotplug port */
2039      -	available_io = bridge->resource[PCI_BRIDGE_RESOURCES + 0];
2040      -	available_mmio = bridge->resource[PCI_BRIDGE_RESOURCES + 1];
2041      -	available_mmio_pref = bridge->resource[PCI_BRIDGE_RESOURCES + 2];
     2030 +	available_io = bridge->resource[PCI_BRIDGE_IO_WINDOW];
     2031 +	available_mmio = bridge->resource[PCI_BRIDGE_MEM_WINDOW];
     2032 +	available_mmio_pref = bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
2042 2033	
2043 2034		pci_bus_distribute_available_resources(bridge->subordinate,
2044 2035						       add_list, available_io,
+5 -4
drivers/pci/setup-res.c
···
439 439		res->end = res->start + pci_rebar_size_to_bytes(size) - 1;
440 440	
441 441		/* Check if the new config works by trying to assign everything. */
442     -	ret = pci_reassign_bridge_resources(dev->bus->self, res->flags);
443     -	if (ret)
444     -		goto error_resize;
445     -
    442 +	if (dev->bus->self) {
    443 +		ret = pci_reassign_bridge_resources(dev->bus->self, res->flags);
    444 +		if (ret)
    445 +			goto error_resize;
    446 +	}
446 447		return 0;
447 448	
448 449	error_resize:
+1 -1
drivers/pci/switch/switchtec.c
···
25 25	module_param(max_devices, int, 0644);
26 26	MODULE_PARM_DESC(max_devices, "max number of switchtec device instances");
27 27	
28     -static bool use_dma_mrpc = 1;
    28 +static bool use_dma_mrpc = true;
29 29	module_param(use_dma_mrpc, bool, 0644);
30 30	MODULE_PARM_DESC(use_dma_mrpc,
31 31		"Enable the use of the DMA MRPC feature");
+26 -14
drivers/pcmcia/yenta_socket.c
···
694 694		struct pci_bus_region region;
695 695		unsigned mask;
696 696	
697     -	res = dev->resource + PCI_BRIDGE_RESOURCES + nr;
    697 +	res = &dev->resource[nr];
698 698		/* Already allocated? */
699 699		if (res->parent)
700 700			return 0;
···
711 711		region.end = config_readl(socket, addr_end) | ~mask;
712 712		if (region.start && region.end > region.start && !override_bios) {
713 713			pcibios_bus_to_resource(dev->bus, res, &region);
714     -		if (pci_claim_resource(dev, PCI_BRIDGE_RESOURCES + nr) == 0)
    714 +		if (pci_claim_resource(dev, nr) == 0)
715 715				return 0;
716 716			dev_info(&dev->dev,
717 717				 "Preassigned resource %d busy or not available, reconfiguring...\n",
···
745 745		return 0;
746 746	}
747 747	
    748 +static void yenta_free_res(struct yenta_socket *socket, int nr)
    749 +{
    750 +	struct pci_dev *dev = socket->dev;
    751 +	struct resource *res;
    752 +
    753 +	res = &dev->resource[nr];
    754 +	if (res->start != 0 && res->end != 0)
    755 +		release_resource(res);
    756 +
    757 +	res->start = res->end = res->flags = 0;
    758 +}
    759 +
748 760	/*
749 761	 * Allocate the bridge mappings for the device..
750 762	 */
751 763	static void yenta_allocate_resources(struct yenta_socket *socket)
752 764	{
753 765		int program = 0;
754     -	program += yenta_allocate_res(socket, 0, IORESOURCE_IO,
    766 +	program += yenta_allocate_res(socket, PCI_CB_BRIDGE_IO_0_WINDOW,
    767 +				      IORESOURCE_IO,
755 768				       PCI_CB_IO_BASE_0, PCI_CB_IO_LIMIT_0);
756     -	program += yenta_allocate_res(socket, 1, IORESOURCE_IO,
    769 +	program += yenta_allocate_res(socket, PCI_CB_BRIDGE_IO_1_WINDOW,
    770 +				      IORESOURCE_IO,
757 771				       PCI_CB_IO_BASE_1, PCI_CB_IO_LIMIT_1);
758     -	program += yenta_allocate_res(socket, 2, IORESOURCE_MEM|IORESOURCE_PREFETCH,
    772 +	program += yenta_allocate_res(socket, PCI_CB_BRIDGE_MEM_0_WINDOW,
    773 +				      IORESOURCE_MEM | IORESOURCE_PREFETCH,
759 774				       PCI_CB_MEMORY_BASE_0, PCI_CB_MEMORY_LIMIT_0);
760     -	program += yenta_allocate_res(socket, 3, IORESOURCE_MEM,
    775 +	program += yenta_allocate_res(socket, PCI_CB_BRIDGE_MEM_1_WINDOW,
    776 +				      IORESOURCE_MEM,
761 777				       PCI_CB_MEMORY_BASE_1, PCI_CB_MEMORY_LIMIT_1);
762 778		if (program)
763 779			pci_setup_cardbus(socket->dev->subordinate);
···
785 769	 */
786 770	static void yenta_free_resources(struct yenta_socket *socket)
787 771	{
788     -	int i;
789     -	for (i = 0; i < 4; i++) {
790     -		struct resource *res;
791     -		res = socket->dev->resource + PCI_BRIDGE_RESOURCES + i;
792     -		if (res->start != 0 && res->end != 0)
793     -			release_resource(res);
794     -		res->start = res->end = res->flags = 0;
795     -	}
    772 +	yenta_free_res(socket, PCI_CB_BRIDGE_IO_0_WINDOW);
    773 +	yenta_free_res(socket, PCI_CB_BRIDGE_IO_1_WINDOW);
    774 +	yenta_free_res(socket, PCI_CB_BRIDGE_MEM_0_WINDOW);
    775 +	yenta_free_res(socket, PCI_CB_BRIDGE_MEM_1_WINDOW);
796 776	}
797 777	
798 778	
+2 -2
drivers/thunderbolt/switch.c
···
263 263		 * itself. To be on the safe side keep the root port in D0 during
264 264		 * the whole upgrade process.
265 265		 */
266     -	root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev);
    266 +	root_port = pcie_find_root_port(sw->tb->nhi->pdev);
267 267		if (root_port)
268 268			pm_runtime_get_noresume(&root_port->dev);
269 269	}
···
272 272	{
273 273		struct pci_dev *root_port;
274 274	
275     -	root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev);
    275 +	root_port = pcie_find_root_port(sw->tb->nhi->pdev);
276 276		if (root_port)
277 277			pm_runtime_put(&root_port->dev);
278 278	}
-6
drivers/tty/serial/8250/8250_pci.c
···
1869 1869	#define PCIE_DEVICE_ID_WCH_CH384_4S	0x3470
1870 1870	#define PCIE_DEVICE_ID_WCH_CH382_2S	0x3253
1871 1871	
1872      -#define PCI_VENDOR_ID_PERICOM			0x12D8
1873      -#define PCI_DEVICE_ID_PERICOM_PI7C9X7951	0x7951
1874      -#define PCI_DEVICE_ID_PERICOM_PI7C9X7952	0x7952
1875      -#define PCI_DEVICE_ID_PERICOM_PI7C9X7954	0x7954
1876      -#define PCI_DEVICE_ID_PERICOM_PI7C9X7958	0x7958
1877      -
1878 1872	#define PCI_VENDOR_ID_ACCESIO			0x494f
1879 1873	#define PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SDB	0x1051
1880 1874	#define PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2S	0x1053
+16
drivers/usb/host/pci-quirks.c
···
16 16	#include <linux/export.h>
17 17	#include <linux/acpi.h>
18 18	#include <linux/dmi.h>
   19 +
   20 +#include <soc/bcm2835/raspberrypi-firmware.h>
   21 +
19 22	#include "pci-quirks.h"
20 23	#include "xhci-ext-caps.h"
21 24	
···
1246 1243	
1247 1244	static void quirk_usb_early_handoff(struct pci_dev *pdev)
1248 1245	{
     1246 +	int ret;
     1247 +
1249 1248		/* Skip Netlogic mips SoC's internal PCI USB controller.
1250 1249		 * This device does not need/support EHCI/OHCI handoff
1251 1250		 */
1252 1251		if (pdev->vendor == 0x184e)	/* vendor Netlogic */
1253 1252			return;
     1253 +
     1254 +	if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483) {
     1255 +		ret = rpi_firmware_init_vl805(pdev);
     1256 +		if (ret) {
     1257 +			/* Firmware might be outdated, or something failed */
     1258 +			dev_warn(&pdev->dev,
     1259 +				 "Failed to load VL805's firmware: %d. Will continue to attempt to work, but bad things might happen. You should fix this...\n",
     1260 +				 ret);
     1261 +		}
     1262 +	}
     1263 +
1254 1264		if (pdev->class != PCI_CLASS_SERIAL_USB_UHCI &&
1255 1265		    pdev->class != PCI_CLASS_SERIAL_USB_OHCI &&
1256 1266		    pdev->class != PCI_CLASS_SERIAL_USB_EHCI &&
+7 -11
include/linux/pci-acpi.h
···
27 27	
28 28	struct pci_ecam_ops;
29 29	extern int pci_mcfg_lookup(struct acpi_pci_root *root, struct resource *cfgres,
30     -			   struct pci_ecam_ops **ecam_ops);
    30 +			   const struct pci_ecam_ops **ecam_ops);
31 31	
32 32	static inline acpi_handle acpi_find_root_bridge_handle(struct pci_dev *pdev)
33 33	{
···
107 107	#endif
108 108	
109 109	extern const guid_t pci_acpi_dsm_guid;
110     -#define IGNORE_PCI_BOOT_CONFIG_DSM	0x05
111     -#define DEVICE_LABEL_DSM		0x07
112     -#define RESET_DELAY_DSM			0x08
113     -#define FUNCTION_DELAY_DSM		0x09
    110 +
    111 +/* _DSM Definitions for PCI */
    112 +#define DSM_PCI_PRESERVE_BOOT_CONFIG		0x05
    113 +#define DSM_PCI_DEVICE_NAME			0x07
    114 +#define DSM_PCI_POWER_ON_RESET_DELAY		0x08
    115 +#define DSM_PCI_DEVICE_READINESS_DURATIONS	0x09
114 116	
115 117	#ifdef CONFIG_PCIE_EDR
116 118	void pci_acpi_add_edr_notifier(struct pci_dev *pdev);
···
126 124	static inline void acpi_pci_add_bus(struct pci_bus *bus) { }
127 125	static inline void acpi_pci_remove_bus(struct pci_bus *bus) { }
128 126	#endif	/* CONFIG_ACPI */
129     -
130     -#ifdef CONFIG_ACPI_APEI
131     -extern bool aer_acpi_firmware_first(void);
132     -#else
133     -static inline bool aer_acpi_firmware_first(void) { return false; }
134     -#endif
135 127	
136 128	#endif	/* _PCI_ACPI_H_ */
+12 -13
include/linux/pci-ecam.h
···
29 29		struct resource			res;
30 30		struct resource			busr;
31 31		void				*priv;
32     -	struct pci_ecam_ops		*ops;
    32 +	const struct pci_ecam_ops	*ops;
33 33		union {
34 34			void __iomem		*win;	/* 64-bit single mapping */
35 35			void __iomem		**winp; /* 32-bit per-bus mapping */
···
40 40	/* create and free pci_config_window */
41 41	struct pci_config_window *pci_ecam_create(struct device *dev,
42 42			struct resource *cfgres, struct resource *busr,
43     -		struct pci_ecam_ops *ops);
    43 +		const struct pci_ecam_ops *ops);
44 44	void pci_ecam_free(struct pci_config_window *cfg);
45 45	
46 46	/* map_bus when ->sysdata is an instance of pci_config_window */
47 47	void __iomem *pci_ecam_map_bus(struct pci_bus *bus, unsigned int devfn,
48 48				       int where);
49 49	/* default ECAM ops */
50     -extern struct pci_ecam_ops pci_generic_ecam_ops;
    50 +extern const struct pci_ecam_ops pci_generic_ecam_ops;
51 51	
52 52	#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
53     -extern struct pci_ecam_ops pci_32b_ops;		/* 32-bit accesses only */
54     -extern struct pci_ecam_ops hisi_pcie_ops;	/* HiSilicon */
55     -extern struct pci_ecam_ops thunder_pem_ecam_ops; /* Cavium ThunderX 1.x & 2.x */
56     -extern struct pci_ecam_ops pci_thunder_ecam_ops; /* Cavium ThunderX 1.x */
57     -extern struct pci_ecam_ops xgene_v1_pcie_ecam_ops; /* APM X-Gene PCIe v1 */
58     -extern struct pci_ecam_ops xgene_v2_pcie_ecam_ops; /* APM X-Gene PCIe v2.x */
59     -extern struct pci_ecam_ops al_pcie_ops;		/* Amazon Annapurna Labs PCIe */
    53 +extern const struct pci_ecam_ops pci_32b_ops;	/* 32-bit accesses only */
    54 +extern const struct pci_ecam_ops hisi_pcie_ops;	/* HiSilicon */
    55 +extern const struct pci_ecam_ops thunder_pem_ecam_ops; /* Cavium ThunderX 1.x & 2.x */
    56 +extern const struct pci_ecam_ops pci_thunder_ecam_ops; /* Cavium ThunderX 1.x */
    57 +extern const struct pci_ecam_ops xgene_v1_pcie_ecam_ops; /* APM X-Gene PCIe v1 */
    58 +extern const struct pci_ecam_ops xgene_v2_pcie_ecam_ops; /* APM X-Gene PCIe v2.x */
    59 +extern const struct pci_ecam_ops al_pcie_ops;	/* Amazon Annapurna Labs PCIe */
60 60	#endif
61 61	
62     -#ifdef CONFIG_PCI_HOST_COMMON
    62 +#if IS_ENABLED(CONFIG_PCI_HOST_COMMON)
63 63	/* for DT-based PCI controllers that support ECAM */
64     -int pci_host_common_probe(struct platform_device *pdev,
65     -			  struct pci_ecam_ops *ops);
    64 +int pci_host_common_probe(struct platform_device *pdev);
66 65	int pci_host_common_remove(struct platform_device *pdev);
67 66	#endif
68 67	#endif
+26 -12
include/linux/pci-epc.h
···
66 66	};
67 67	
68 68	/**
    69 + * struct pci_epc_mem_window - address window of the endpoint controller
    70 + * @phys_base: physical base address of the PCI address window
    71 + * @size: the size of the PCI address window
    72 + * @page_size: size of each page
    73 + */
    74 +struct pci_epc_mem_window {
    75 +	phys_addr_t	phys_base;
    76 +	size_t		size;
    77 +	size_t		page_size;
    78 +};
    79 +
    80 +/**
69 81	 * struct pci_epc_mem - address space of the endpoint controller
70     - * @phys_base: physical base address of the PCI address space
71     - * @size: the size of the PCI address space
    82 + * @window: address window of the endpoint controller
72 83	 * @bitmap: bitmap to manage the PCI address space
73 84	 * @pages: number of bits representing the address region
74     - * @page_size: size of each page
75 85	 * @lock: mutex to protect bitmap
76 86	 */
77 87	struct pci_epc_mem {
78     -	phys_addr_t	phys_base;
79     -	size_t		size;
    88 +	struct pci_epc_mem_window window;
80 89		unsigned long	*bitmap;
81     -	size_t		page_size;
82 90		int		pages;
83 91		/* mutex to protect against concurrent access for memory allocation*/
84 92		struct mutex	lock;
···
97 89	 * @dev: PCI EPC device
98 90	 * @pci_epf: list of endpoint functions present in this EPC device
99 91	 * @ops: function pointers for performing endpoint operations
100     - * @mem: address space of the endpoint controller
    92 + * @windows: array of address space of the endpoint controller
    93 + * @mem: first window of the endpoint controller, which corresponds to
    94 + *       default address space of the endpoint controller supporting
    95 + *       single window.
    96 + * @num_windows: number of windows supported by device
101 97	 * @max_functions: max number of functions that can be configured in this EPC
102 98	 * @group: configfs group representing the PCI EPC device
103 99	 * @lock: mutex to protect pci_epc ops
···
112 100		struct device			dev;
113 101		struct list_head		pci_epf;
114 102		const struct pci_epc_ops	*ops;
    103 +	struct pci_epc_mem		**windows;
115 104		struct pci_epc_mem		*mem;
    105 +	unsigned int			num_windows;
116 106		u8				max_functions;
117 107		struct config_group		*group;
118 108		/* mutex to protect against concurrent access of EP controller */
···
150 136		__pci_epc_create((dev), (ops), THIS_MODULE)
151 137	#define devm_pci_epc_create(dev, ops)    \
152 138		__devm_pci_epc_create((dev), (ops), THIS_MODULE)
153     -
154     -#define pci_epc_mem_init(epc, phys_addr, size)	\
155     -	__pci_epc_mem_init((epc), (phys_addr), (size), PAGE_SIZE)
156 139	
157 140	static inline void epc_set_drvdata(struct pci_epc *epc, void *data)
158 141	{
···
206 195	struct pci_epc *pci_epc_get(const char *epc_name);
207 196	void pci_epc_put(struct pci_epc *epc);
208 197	
209     -int __pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_addr, size_t size,
210     -		       size_t page_size);
    198 +int pci_epc_mem_init(struct pci_epc *epc, phys_addr_t base,
    199 +		     size_t size, size_t page_size);
    200 +int pci_epc_multi_mem_init(struct pci_epc *epc,
    201 +			   struct pci_epc_mem_window *window,
    202 +			   unsigned int num_windows);
211 203	void pci_epc_mem_exit(struct pci_epc *epc);
212 204	void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
213 205					     phys_addr_t *phys_addr, size_t size);
include/linux/pci.h (+29 -14)
···
 	PCI_IOV_RESOURCE_END = PCI_IOV_RESOURCES + PCI_SRIOV_NUM_BARS - 1,
 #endif

-	/* Resources assigned to buses behind the bridge */
+/* PCI-to-PCI (P2P) bridge windows */
+#define PCI_BRIDGE_IO_WINDOW		(PCI_BRIDGE_RESOURCES + 0)
+#define PCI_BRIDGE_MEM_WINDOW		(PCI_BRIDGE_RESOURCES + 1)
+#define PCI_BRIDGE_PREF_MEM_WINDOW	(PCI_BRIDGE_RESOURCES + 2)
+
+/* CardBus bridge windows */
+#define PCI_CB_BRIDGE_IO_0_WINDOW	(PCI_BRIDGE_RESOURCES + 0)
+#define PCI_CB_BRIDGE_IO_1_WINDOW	(PCI_BRIDGE_RESOURCES + 1)
+#define PCI_CB_BRIDGE_MEM_0_WINDOW	(PCI_BRIDGE_RESOURCES + 2)
+#define PCI_CB_BRIDGE_MEM_1_WINDOW	(PCI_BRIDGE_RESOURCES + 3)
+
+/* Total number of bridge resources for P2P and CardBus */
 #define PCI_BRIDGE_RESOURCE_NUM 4

+	/* Resources assigned to buses behind the bridge */
 	PCI_BRIDGE_RESOURCES,
 	PCI_BRIDGE_RESOURCE_END = PCI_BRIDGE_RESOURCES +
 				  PCI_BRIDGE_RESOURCE_NUM - 1,
···
 	u16 cap_nr;
 	bool cap_extended;
 	unsigned int size;
-	u32 data[0];
+	u32 data[];
 };

 struct pci_cap_saved_state {
···
 	 * mappings to make sure they cannot access arbitrary memory.
 	 */
 	unsigned int	untrusted:1;
-	unsigned int	__aer_firmware_first_valid:1;
-	unsigned int	__aer_firmware_first:1;
 	unsigned int	broken_intx_masking:1;	/* INTx masking can't be used */
 	unsigned int	io_window_1k:1;		/* Intel bridge 1K I/O windows */
 	unsigned int	irq_managed:1;
···
 			resource_size_t start,
 			resource_size_t size,
 			resource_size_t align);
-	unsigned long	private[0] ____cacheline_aligned;
+	unsigned long	private[] ____cacheline_aligned;
 };

 #define	to_pci_host_bridge(n) container_of(n, struct pci_host_bridge, dev)
···
 void pci_read_bridge_bases(struct pci_bus *child);
 struct resource *pci_find_parent_resource(const struct pci_dev *dev,
 					  struct resource *res);
-struct pci_dev *pci_find_pcie_root_port(struct pci_dev *dev);
 u8 pci_swizzle_interrupt_pin(const struct pci_dev *dev, u8 pin);
 int pci_get_interrupt_pin(struct pci_dev *dev, struct pci_dev **bridge);
 u8 pci_common_swizzle(struct pci_dev *dev, u8 *pinp);
···
 	return (pcie_caps_reg(dev) & PCI_EXP_FLAGS_TYPE) >> 4;
 }

+/**
+ * pcie_find_root_port - Get the PCIe root port device
+ * @dev: PCI device
+ *
+ * Traverse up the parent chain and return the PCIe Root Port PCI Device
+ * for a given PCI/PCIe Device.
+ */
 static inline struct pci_dev *pcie_find_root_port(struct pci_dev *dev)
 {
-	while (1) {
-		if (!pci_is_pcie(dev))
-			break;
-		if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)
-			return dev;
-		if (!dev->bus->self)
-			break;
-		dev = dev->bus->self;
+	struct pci_dev *bridge = pci_upstream_bridge(dev);
+
+	while (bridge) {
+		if (pci_pcie_type(bridge) == PCI_EXP_TYPE_ROOT_PORT)
+			return bridge;
+		bridge = pci_upstream_bridge(bridge);
 	}
+
 	return NULL;
 }
include/linux/pci_ids.h (+6)
···
 #define PCI_VENDOR_ID_NVIDIA_SGS	0x12d2
 #define PCI_DEVICE_ID_NVIDIA_SGS_RIVA128	0x0018

+#define PCI_VENDOR_ID_PERICOM			0x12D8
+#define PCI_DEVICE_ID_PERICOM_PI7C9X7951	0x7951
+#define PCI_DEVICE_ID_PERICOM_PI7C9X7952	0x7952
+#define PCI_DEVICE_ID_PERICOM_PI7C9X7954	0x7954
+#define PCI_DEVICE_ID_PERICOM_PI7C9X7958	0x7958
+
 #define PCI_SUBVENDOR_ID_CHASE_PCIFAST	0x12E0
 #define PCI_SUBDEVICE_ID_CHASE_PCIFAST4	0x0031
 #define PCI_SUBDEVICE_ID_CHASE_PCIFAST8	0x0021
include/soc/bcm2835/raspberrypi-firmware.h (+8 -1)
···
 #include <linux/of_device.h>

 struct rpi_firmware;
+struct pci_dev;

 enum rpi_firmware_property_status {
 	RPI_FIRMWARE_STATUS_REQUEST = 0,
···
 	RPI_FIRMWARE_SET_PERIPH_REG =                         0x00038045,
 	RPI_FIRMWARE_GET_POE_HAT_VAL =                        0x00030049,
 	RPI_FIRMWARE_SET_POE_HAT_VAL =                        0x00030050,
-
+	RPI_FIRMWARE_NOTIFY_XHCI_RESET =                      0x00030058,

 	/* Dispmanx TAGS */
 	RPI_FIRMWARE_FRAMEBUFFER_ALLOCATE =                   0x00040001,
···
 int rpi_firmware_property_list(struct rpi_firmware *fw,
 			       void *data, size_t tag_size);
 struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node);
+int rpi_firmware_init_vl805(struct pci_dev *pdev);
 #else
 static inline int rpi_firmware_property(struct rpi_firmware *fw, u32 tag,
 					void *data, size_t len)
···
 static inline struct rpi_firmware *rpi_firmware_get(struct device_node *firmware_node)
 {
 	return NULL;
 }
+
+static inline int rpi_firmware_init_vl805(struct pci_dev *pdev)
+{
+	return 0;
+}
#endif