
Merge tag 'pci-v5.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull pci updates from Bjorn Helgaas:
"Enumeration:

- Revert sysfs "rescan" renames that broke apps (Kelsey Skunberg)

- Add more 32 GT/s link speed decoding and improve the implementation
(Yicong Yang)

Resource management:

- Add support for sizing programmable host bridge apertures and fix a
related alpha Nautilus regression (Ivan Kokshaysky)

Interrupts:

- Add boot interrupt quirk mechanism for Xeon chipsets and document
boot interrupts (Sean V Kelley)

PCIe native device hotplug:

- When possible, disable in-band presence detect and use PDS
(Alexandru Gagniuc)

- Add DMI table for devices that don't use in-band presence detection
but don't advertise that correctly (Stuart Hayes)

- Fix hang when powering slots up/down via sysfs (Lukas Wunner)

- Fix an MSI interrupt race (Stuart Hayes)

Virtualization:

- Add ACS quirks for Zhaoxin devices (Raymond Pang)

Error handling:

- Add Error Disconnect Recover (EDR) support so firmware can report
devices disconnected via DPC and we can try to recover (Kuppuswamy
Sathyanarayanan)

Peer-to-peer DMA:

- Add Intel Sky Lake-E Root Ports B, C, D to the whitelist (Andrew
Maier)

ASPM:

- Reduce severity of common clock config message (Chris Packham)

- Clear the correct bits when enabling L1 substates, so we don't go
to the wrong state (Yicong Yang)

Endpoint framework:

- Replace EPF linkup ops with notifier call chain and improve locking
(Kishon Vijay Abraham I)

- Fix concurrent memory allocation in OB address region (Kishon Vijay
Abraham I)

- Move PF function number assignment to EPC core to support multiple
function creation methods (Kishon Vijay Abraham I)

- Fix issue with clearing configfs "start" entry (Kunihiko Hayashi)

- Fix issue with endpoint MSI-X ignoring BAR Indicator and Table
Offset (Kishon Vijay Abraham I)

- Add support for testing DMA transfers (Kishon Vijay Abraham I)

- Add support for testing > 10 endpoint devices (Kishon Vijay Abraham I)

- Add support for tests to clear IRQ (Kishon Vijay Abraham I)

- Add common DT schema for endpoint controllers (Kishon Vijay Abraham I)

Amlogic Meson PCIe controller driver:

- Add DT bindings for AXG PCIe PHY, shared MIPI/PCIe analog PHY (Remi
Pommarel)

- Add Amlogic AXG PCIe PHY, AXG MIPI/PCIe analog PHY drivers (Remi
Pommarel)

Cadence PCIe controller driver:

- Add Root Complex/Endpoint DT schema for Cadence PCIe (Kishon Vijay
Abraham I)

Intel VMD host bridge driver:

- Add two VMD Device IDs that require bus restriction mode (Sushma
Kalakota)

Mobiveil PCIe controller driver:

- Refactor and modularize mobiveil driver (Hou Zhiqiang)

- Add support for Mobiveil GPEX Gen4 host (Hou Zhiqiang)

Microsoft Hyper-V host bridge driver:

- Add support for Hyper-V PCI protocol version 1.3 and
PCI_BUS_RELATIONS2 (Long Li)

- Refactor to prepare for virtual PCI on non-x86 architectures (Boqun
Feng)

- Fix memory leak in hv_pci_probe()'s error path (Dexuan Cui)

NVIDIA Tegra PCIe controller driver:

- Use pci_parse_request_of_pci_ranges() (Rob Herring)

- Add support for endpoint mode and related DT updates (Vidya Sagar)

- Reduce -EPROBE_DEFER error message log level (Thierry Reding)

Qualcomm PCIe controller driver:

- Restrict class fixup to specific Qualcomm devices (Bjorn Andersson)

Synopsys DesignWare PCIe controller driver:

- Refactor core initialization code for endpoint mode (Vidya Sagar)

- Fix endpoint MSI-X to use correct table address (Kishon Vijay
Abraham I)

TI DRA7xx PCIe controller driver:

- Fix MSI IRQ handling (Vignesh Raghavendra)

TI Keystone PCIe controller driver:

- Allow AM654 endpoint to raise MSI-X interrupt (Kishon Vijay Abraham I)

Miscellaneous:

- Quirk ASMedia XHCI USB to avoid "PME# from D0" defect (Kai-Heng
Feng)

- Use ioremap(), not phys_to_virt(), for platform ROM to fix video
ROM mapping with CONFIG_HIGHMEM (Mikel Rychliski)"

* tag 'pci-v5.7-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (96 commits)
misc: pci_endpoint_test: remove duplicate macro PCI_ENDPOINT_TEST_STATUS
PCI: tegra: Print -EPROBE_DEFER error message at debug level
misc: pci_endpoint_test: Use full pci-endpoint-test name in request_irq()
misc: pci_endpoint_test: Fix to support > 10 pci-endpoint-test devices
tools: PCI: Add 'e' to clear IRQ
misc: pci_endpoint_test: Add ioctl to clear IRQ
misc: pci_endpoint_test: Avoid using module parameter to determine irqtype
PCI: keystone: Allow AM654 PCIe Endpoint to raise MSI-X interrupt
PCI: dwc: Fix dw_pcie_ep_raise_msix_irq() to get correct MSI-X table address
PCI: endpoint: Fix ->set_msix() to take BIR and offset as arguments
misc: pci_endpoint_test: Add support to get DMA option from userspace
tools: PCI: Add 'd' command line option to support DMA
misc: pci_endpoint_test: Use streaming DMA APIs for buffer allocation
PCI: endpoint: functions/pci-epf-test: Print throughput information
PCI: endpoint: functions/pci-epf-test: Add DMA support to transfer data
PCI: pciehp: Fix MSI interrupt race
PCI: pciehp: Fix indefinite wait on sysfs requests
PCI: endpoint: Fix clearing start entry in configfs
PCI: tegra: Add support for PCIe endpoint mode in Tegra194
PCI: sysfs: Revert "rescan" file renames
...

+4836 -1635
+155
Documentation/PCI/boot-interrupts.rst
···
+ .. SPDX-License-Identifier: GPL-2.0
+
+ ===============
+ Boot Interrupts
+ ===============
+
+ :Author: - Sean V Kelley <sean.v.kelley@linux.intel.com>
+
+ Overview
+ ========
+
+ On PCI Express, interrupts are represented with either MSI or inbound
+ interrupt messages (Assert_INTx/Deassert_INTx). The integrated IO-APIC in a
+ given Core IO converts the legacy interrupt messages from PCI Express to
+ MSI interrupts. If the IO-APIC is disabled (via the mask bits in the
+ IO-APIC table entries), the messages are routed to the legacy PCH. This
+ in-band interrupt mechanism was traditionally necessary for systems that
+ did not support the IO-APIC and for boot. Intel in the past has used the
+ term "boot interrupts" to describe this mechanism. Further, the PCI Express
+ protocol describes this in-band legacy wire-interrupt INTx mechanism for
+ I/O devices to signal PCI-style level interrupts. The subsequent paragraphs
+ describe problems with the Core IO handling of INTx message routing to the
+ PCH and mitigation within BIOS and the OS.
+
+
+ Issue
+ =====
+
+ When in-band legacy INTx messages are forwarded to the PCH, they in turn
+ trigger a new interrupt for which the OS likely lacks a handler. When an
+ interrupt goes unhandled over time, it is tracked by the Linux kernel as a
+ spurious interrupt. The IRQ will be disabled by the Linux kernel after it
+ reaches a specific count with the error "nobody cared". This disabled IRQ
+ now prevents valid usage by an existing interrupt which may happen to share
+ the IRQ line.
+
+   irq 19: nobody cared (try booting with the "irqpoll" option)
+   CPU: 0 PID: 2988 Comm: irq/34-nipalk Tainted: 4.14.87-rt49-02410-g4a640ec-dirty #1
+   Hardware name: National Instruments NI PXIe-8880/NI PXIe-8880, BIOS 2.1.5f1 01/09/2020
+   Call Trace:
+   <IRQ>
+   ? dump_stack+0x46/0x5e
+   ? __report_bad_irq+0x2e/0xb0
+   ? note_interrupt+0x242/0x290
+   ? nNIKAL100_memoryRead16+0x8/0x10 [nikal]
+   ? handle_irq_event_percpu+0x55/0x70
+   ? handle_irq_event+0x4f/0x80
+   ? handle_fasteoi_irq+0x81/0x180
+   ? handle_irq+0x1c/0x30
+   ? do_IRQ+0x41/0xd0
+   ? common_interrupt+0x84/0x84
+   </IRQ>
+
+   handlers:
+   irq_default_primary_handler threaded usb_hcd_irq
+   Disabling IRQ #19
+
+
+ Conditions
+ ==========
+
+ The use of threaded interrupts is the most likely condition to trigger
+ this problem today. Threaded interrupts may not be reenabled after the IRQ
+ handler wakes. These "one shot" conditions mean that the threaded interrupt
+ needs to keep the interrupt line masked until the threaded handler has run.
+ Especially when dealing with high data rate interrupts, the thread needs to
+ run to completion; otherwise some handlers will end up in stack overflows
+ since the interrupt of the issuing device is still active.
+
+ Affected Chipsets
+ =================
+
+ The legacy interrupt forwarding mechanism exists today in a number of
+ devices including but not limited to chipsets from AMD/ATI, Broadcom, and
+ Intel. Changes made through the mitigations below have been applied to
+ drivers/pci/quirks.c
+
+ Starting with ICX there are no longer any IO-APICs in the Core IO's
+ devices. IO-APIC is only in the PCH. Devices connected to the Core IO's
+ PCIe Root Ports will use native MSI/MSI-X mechanisms.
+
+ Mitigations
+ ===========
+
+ The mitigations take the form of PCI quirks. The preference has been to
+ first identify and make use of a means to disable the routing to the PCH.
+ In such a case a quirk to disable boot interrupt generation can be
+ added.[1]
+
+ Intel® 6300ESB I/O Controller Hub
+   Alternate Base Address Register:
+     BIE: Boot Interrupt Enable
+       0 = Boot interrupt is enabled.
+       1 = Boot interrupt is disabled.
+
+ Intel® Sandy Bridge through Sky Lake based Xeon servers:
+   Coherent Interface Protocol Interrupt Control
+     dis_intx_route2pch/dis_intx_route2ich/dis_intx_route2dmi2:
+       When this bit is set, local INTx messages received from the
+       Intel® Quick Data DMA/PCI Express ports are not routed to legacy
+       PCH - they are either converted into MSI via the integrated IO-APIC
+       (if the IO-APIC mask bit is clear in the appropriate entries)
+       or cause no further action (when mask bit is set)
+
+ In the absence of a way to directly disable the routing, another approach
+ has been to make use of PCI Interrupt pin to INTx routing tables for
+ purposes of redirecting the interrupt handler to the rerouted interrupt
+ line by default. Therefore, on chipsets where this INTx routing cannot be
+ disabled, the Linux kernel will reroute the valid interrupt to its legacy
+ interrupt. This redirection of the handler will prevent the occurrence of
+ the spurious interrupt detection which would ordinarily disable the IRQ
+ line due to excessive unhandled counts.[2]
+
+ The config option X86_REROUTE_FOR_BROKEN_BOOT_IRQS exists to enable (or
+ disable) the redirection of the interrupt handler to the PCH interrupt
+ line. The option can be overridden by either pci=ioapicreroute or
+ pci=noioapicreroute.[3]
+
+
+ More Documentation
+ ==================
+
+ There is an overview of the legacy interrupt handling in several datasheets
+ (6300ESB and 6700PXH below). While largely the same, it provides insight
+ into the evolution of its handling with chipsets.
+
+ Example of disabling of the boot interrupt
+ ------------------------------------------
+
+ Intel® 6300ESB I/O Controller Hub (Document # 300641-004US)
+   5.7.3 Boot Interrupt
+   https://www.intel.com/content/dam/doc/datasheet/6300esb-io-controller-hub-datasheet.pdf
+
+ Intel® Xeon® Processor E5-1600/2400/2600/4600 v3 Product Families
+   Datasheet - Volume 2: Registers (Document # 330784-003)
+   6.6.41 cipintrc Coherent Interface Protocol Interrupt Control
+   https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xeon-e5-v3-datasheet-vol-2.pdf
+
+ Example of handler rerouting
+ ----------------------------
+
+ Intel® 6700PXH 64-bit PCI Hub (Document # 302628)
+   2.15.2 PCI Express Legacy INTx Support and Boot Interrupt
+   https://www.intel.com/content/dam/doc/datasheet/6700pxh-64-bit-pci-hub-datasheet.pdf
+
+
+ If you have any legacy PCI interrupt questions that aren't answered, email me.
+
+ Cheers,
+ Sean V Kelley
+ sean.v.kelley@linux.intel.com
+
+ [1] https://lore.kernel.org/r/12131949181903-git-send-email-sassmann@suse.de/
+ [2] https://lore.kernel.org/r/12131949182094-git-send-email-sassmann@suse.de/
+ [3] https://lore.kernel.org/r/487C8EA7.6020205@suse.de/
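The "nobody cared" disabling described in the Issue section above can be modeled in a few lines. This is a simplified sketch of the kernel's spurious-interrupt accounting (see note_interrupt() in kernel/irq/spurious.c); the IrqLine class is hypothetical, and the 100,000-interrupt window and 99,900 threshold are illustrative of the upstream heuristic, not a stable interface.

```python
# Simplified model of the kernel's spurious-IRQ accounting: within each
# window of interrupts, if almost none were handled, the line is disabled
# with a "nobody cared" message. Thresholds are illustrative.

class IrqLine:
    COUNT_WINDOW = 100_000   # interrupts per accounting window
    UNHANDLED_MAX = 99_900   # "nobody cared" threshold within a window

    def __init__(self, irq: int):
        self.irq = irq
        self.count = 0
        self.unhandled = 0
        self.disabled = False

    def note_interrupt(self, handled: bool) -> None:
        """Track one interrupt; disable the line if almost none are handled."""
        if self.disabled:
            return
        self.count += 1
        if not handled:
            self.unhandled += 1
        if self.count >= self.COUNT_WINDOW:
            if self.unhandled > self.UNHANDLED_MAX:
                print(f"irq {self.irq}: nobody cared")
                self.disabled = True
            # start a fresh accounting window
            self.count = 0
            self.unhandled = 0

# A storm of forwarded boot interrupts that no driver claims:
irq19 = IrqLine(19)
for _ in range(100_000):
    irq19.note_interrupt(handled=False)
assert irq19.disabled
```

This also shows why the rerouting quirks help: once the handler is attached to the interrupt line the messages actually arrive on, the interrupts count as handled and the disable threshold is never reached.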
+1
Documentation/PCI/index.rst
···
     pci-error-recovery
     pcieaer-howto
     endpoint/index
+    boot-interrupts
+6 -17
Documentation/PCI/pcieaer-howto.rst
···
  have different specifications to reset pci express link, so all
  upstream ports should provide their own reset_link functions.

- In struct pcie_port_service_driver, a new pointer, reset_link, is
- added.
- ::
-
-     pci_ers_result_t (*reset_link) (struct pci_dev *dev);
-
  Section 3.2.2.2 provides more detailed info on when to call
  reset_link.

···
  a hierarchy in question. Then, performing link reset at upstream is
  necessary. As different kinds of devices might use different approaches
  to reset link, AER port service driver is required to provide the
- function to reset link. Firstly, kernel looks for if the upstream
- component has an aer driver. If it has, kernel uses the reset_link
- callback of the aer driver. If the upstream component has no aer driver
- and the port is downstream port, we will perform a hot reset as the
- default by setting the Secondary Bus Reset bit of the Bridge Control
- register associated with the downstream port. As for upstream ports,
- they should provide their own aer service drivers with reset_link
- function. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER and
- reset_link returns PCI_ERS_RESULT_RECOVERED, the error handling goes
+ function to reset link via callback parameter of pcie_do_recovery()
+ function. If reset_link is not NULL, recovery function will use it
+ to reset the link. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER
+ and reset_link returns PCI_ERS_RESULT_RECOVERED, the error handling goes
  to mmio_enabled.

  helper functions
···

  ::

-     int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev);`
+     int pci_aer_clear_nonfatal_status(struct pci_dev *dev);`

- pci_cleanup_aer_uncorrect_error_status cleanups the uncorrectable
+ pci_aer_clear_nonfatal_status clears non-fatal errors in the uncorrectable
  error status register.

  Frequent Asked Questions
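The reset_link decision the updated text describes can be sketched as a tiny state machine. The PCI_ERS_RESULT_* names mirror the pci_ers_result_t values quoted in the doc, but this next_step() helper and its "fail" outcome are hypothetical simplifications for illustration, not kernel API.

```python
# Toy model of the recovery decision above: after a link reset, where does
# pcie_do_recovery()-style error handling go next? Only the combination
# documented in the text (CAN_RECOVER + RECOVERED -> mmio_enabled) is taken
# from the source; the other outcomes are assumptions of this sketch.

PCI_ERS_RESULT_CAN_RECOVER = "can_recover"
PCI_ERS_RESULT_RECOVERED = "recovered"
PCI_ERS_RESULT_DISCONNECT = "disconnect"

def next_step(error_detected_result: str, reset_link_result: str) -> str:
    """Return the next recovery step after the reset_link callback runs."""
    if reset_link_result != PCI_ERS_RESULT_RECOVERED:
        return "fail"  # link reset did not succeed: recovery is abandoned
    if error_detected_result == PCI_ERS_RESULT_CAN_RECOVER:
        return "mmio_enabled"  # drivers may touch MMIO again
    return "resume"  # assumed fallback for this sketch

assert next_step(PCI_ERS_RESULT_CAN_RECOVER,
                 PCI_ERS_RESULT_RECOVERED) == "mmio_enabled"
```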
+9 -13
Documentation/devicetree/bindings/pci/amlogic,meson-pcie.txt
···
  - reg-names: Must be
  	- "elbi"	External local bus interface registers
  	- "cfg"		Meson specific registers
- 	- "phy"		Meson PCIE PHY registers for AXG SoC Family
  	- "config"	PCIe configuration space
  - reset-gpios: The GPIO to generate PCIe PERST# assert and deassert signal.
  - clocks: Must contain an entry for each entry in clock-names.
···
  	- "pclk"	PCIe GEN 100M PLL clock
  	- "port"	PCIe_x(A or B) RC clock gate
  	- "general"	PCIe Phy clock
- 	- "mipi"	PCIe_x(A or B) 100M ref clock gate for AXG SoC Family
  - resets: phandle to the reset lines.
- - reset-names: must contain "phy" "port" and "apb"
- 	- "phy"		Share PHY reset for AXG SoC Family
+ - reset-names: must contain "port" and "apb"
  	- "port"	Port A or B reset
  	- "apb"		Share APB reset
- - phys: should contain a phandle to the shared phy for G12A SoC Family
+ - phys: should contain a phandle to the PCIE phy
+ - phy-names: must contain "pcie"
+
  - device_type:
  	should be "pci". As specified in designware-pcie.txt
···
  	compatible = "amlogic,axg-pcie", "snps,dw-pcie";
  	reg = <0x0 0xf9800000 0x0 0x400000
  	       0x0 0xff646000 0x0 0x2000
- 	       0x0 0xff644000 0x0 0x2000
  	       0x0 0xf9f00000 0x0 0x100000>;
- 	reg-names = "elbi", "cfg", "phy", "config";
+ 	reg-names = "elbi", "cfg", "config";
  	reset-gpios = <&gpio GPIOX_19 GPIO_ACTIVE_HIGH>;
  	interrupts = <GIC_SPI 177 IRQ_TYPE_EDGE_RISING>;
  	#interrupt-cells = <1>;
···
  	ranges = <0x82000000 0 0 0x0 0xf9c00000 0 0x00300000>;

  	clocks = <&clkc CLKID_USB
- 		  &clkc CLKID_MIPI_ENABLE
  		  &clkc CLKID_PCIE_A
  		  &clkc CLKID_PCIE_CML_EN0>;
  	clock-names = "general",
- 		      "mipi",
  		      "pclk",
  		      "port";
- 	resets = <&reset RESET_PCIE_PHY>,
- 		 <&reset RESET_PCIE_A>,
+ 	resets = <&reset RESET_PCIE_A>,
  		 <&reset RESET_PCIE_APB>;
- 	reset-names = "phy",
- 		      "port",
+ 	reset-names = "port",
  		      "apb";
+ 	phys = <&pcie_phy>;
+ 	phy-names = "pcie";
  };
-27
Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.txt
···
- * Cadence PCIe endpoint controller
-
- Required properties:
- - compatible: Should contain "cdns,cdns-pcie-ep" to identify the IP used.
- - reg: Should contain the controller register base address and AXI interface
-   region base address respectively.
- - reg-names: Must be "reg" and "mem" respectively.
- - cdns,max-outbound-regions: Set to maximum number of outbound regions
-
- Optional properties:
- - max-functions: Maximum number of functions that can be configured (default 1).
- - phys: From PHY bindings: List of Generic PHY phandles. One per lane if more
-   than one in the list. If only one PHY listed it must manage all lanes.
- - phy-names: List of names to identify the PHY.
-
- Example:
-
- pcie@fc000000 {
- 	compatible = "cdns,cdns-pcie-ep";
- 	reg = <0x0 0xfc000000 0x0 0x01000000>,
- 	      <0x0 0x80000000 0x0 0x40000000>;
- 	reg-names = "reg", "mem";
- 	cdns,max-outbound-regions = <16>;
- 	max-functions = /bits/ 8 <8>;
- 	phys = <&ep_phy0 &ep_phy1>;
- 	phy-names = "pcie-lane0","pcie-lane1";
- };
+49
Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
···
+ # SPDX-License-Identifier: GPL-2.0-only
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/pci/cdns,cdns-pcie-ep.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Cadence PCIe EP Controller
+
+ maintainers:
+   - Tom Joseph <tjoseph@cadence.com>
+
+ allOf:
+   - $ref: "cdns-pcie.yaml#"
+   - $ref: "pci-ep.yaml#"
+
+ properties:
+   compatible:
+     const: cdns,cdns-pcie-ep
+
+   reg:
+     maxItems: 2
+
+   reg-names:
+     items:
+       - const: reg
+       - const: mem
+
+ required:
+   - reg
+   - reg-names
+
+ examples:
+   - |
+     bus {
+         #address-cells = <2>;
+         #size-cells = <2>;
+
+         pcie-ep@fc000000 {
+             compatible = "cdns,cdns-pcie-ep";
+             reg = <0x0 0xfc000000 0x0 0x01000000>,
+                   <0x0 0x80000000 0x0 0x40000000>;
+             reg-names = "reg", "mem";
+             cdns,max-outbound-regions = <16>;
+             max-functions = /bits/ 8 <8>;
+             phys = <&pcie_phy0>;
+             phy-names = "pcie-phy";
+         };
+     };
+ ...
-66
Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.txt
···
- * Cadence PCIe host controller
-
- This PCIe controller inherits the base properties defined in
- host-generic-pci.txt.
-
- Required properties:
- - compatible: Should contain "cdns,cdns-pcie-host" to identify the IP used.
- - reg: Should contain the controller register base address, PCIe configuration
-   window base address, and AXI interface region base address respectively.
- - reg-names: Must be "reg", "cfg" and "mem" respectively.
- - #address-cells: Set to <3>
- - #size-cells: Set to <2>
- - device_type: Set to "pci"
- - ranges: Ranges for the PCI memory and I/O regions
- - #interrupt-cells: Set to <1>
- - interrupt-map-mask and interrupt-map: Standard PCI properties to define the
-   mapping of the PCIe interface to interrupt numbers.
-
- Optional properties:
- - cdns,max-outbound-regions: Set to maximum number of outbound regions
-   (default 32)
- - cdns,no-bar-match-nbits: Set into the no BAR match register to configure the
-   number of least significant bits kept during inbound (PCIe -> AXI) address
-   translations (default 32)
- - vendor-id: The PCI vendor ID (16 bits, default is design dependent)
- - device-id: The PCI device ID (16 bits, default is design dependent)
- - phys: From PHY bindings: List of Generic PHY phandles. One per lane if more
-   than one in the list. If only one PHY listed it must manage all lanes.
- - phy-names: List of names to identify the PHY.
-
- Example:
-
- pcie@fb000000 {
- 	compatible = "cdns,cdns-pcie-host";
- 	device_type = "pci";
- 	#address-cells = <3>;
- 	#size-cells = <2>;
- 	bus-range = <0x0 0xff>;
- 	linux,pci-domain = <0>;
- 	cdns,max-outbound-regions = <16>;
- 	cdns,no-bar-match-nbits = <32>;
- 	vendor-id = /bits/ 16 <0x17cd>;
- 	device-id = /bits/ 16 <0x0200>;
-
- 	reg = <0x0 0xfb000000 0x0 0x01000000>,
- 	      <0x0 0x41000000 0x0 0x00001000>,
- 	      <0x0 0x40000000 0x0 0x04000000>;
- 	reg-names = "reg", "cfg", "mem";
-
- 	ranges = <0x02000000 0x0 0x42000000 0x0 0x42000000 0x0 0x1000000>,
- 		 <0x01000000 0x0 0x43000000 0x0 0x43000000 0x0 0x0010000>;
-
- 	#interrupt-cells = <0x1>;
-
- 	interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0x0 0x0 14 0x1
- 			 0x0 0x0 0x0 0x2 &gic 0x0 0x0 0x0 15 0x1
- 			 0x0 0x0 0x0 0x3 &gic 0x0 0x0 0x0 16 0x1
- 			 0x0 0x0 0x0 0x4 &gic 0x0 0x0 0x0 17 0x1>;
-
- 	interrupt-map-mask = <0x0 0x0 0x0 0x7>;
-
- 	msi-parent = <&its_pci>;
-
- 	phys = <&pcie_phy0>;
- 	phy-names = "pcie-phy";
- };
+76
Documentation/devicetree/bindings/pci/cdns,cdns-pcie-host.yaml
···
+ # SPDX-License-Identifier: GPL-2.0-only
+ %YAML 1.2
+ ---
+ $id: http://devicetree.org/schemas/pci/cdns,cdns-pcie-host.yaml#
+ $schema: http://devicetree.org/meta-schemas/core.yaml#
+
+ title: Cadence PCIe host controller
+
+ maintainers:
+   - Tom Joseph <tjoseph@cadence.com>
+
+ allOf:
+   - $ref: /schemas/pci/pci-bus.yaml#
+   - $ref: "cdns-pcie-host.yaml#"
+
+ properties:
+   compatible:
+     const: cdns,cdns-pcie-host
+
+   reg:
+     maxItems: 3
+
+   reg-names:
+     items:
+       - const: reg
+       - const: cfg
+       - const: mem
+
+   msi-parent: true
+
+ required:
+   - reg
+   - reg-names
+
+ examples:
+   - |
+     bus {
+         #address-cells = <2>;
+         #size-cells = <2>;
+
+         pcie@fb000000 {
+             compatible = "cdns,cdns-pcie-host";
+             device_type = "pci";
+             #address-cells = <3>;
+             #size-cells = <2>;
+             bus-range = <0x0 0xff>;
+             linux,pci-domain = <0>;
+             cdns,max-outbound-regions = <16>;
+             cdns,no-bar-match-nbits = <32>;
+             vendor-id = <0x17cd>;
+             device-id = <0x0200>;
+
+             reg = <0x0 0xfb000000 0x0 0x01000000>,
+                   <0x0 0x41000000 0x0 0x00001000>,
+                   <0x0 0x40000000 0x0 0x04000000>;
+             reg-names = "reg", "cfg", "mem";
+
+             ranges = <0x02000000 0x0 0x42000000 0x0 0x42000000 0x0 0x1000000>,
+                      <0x01000000 0x0 0x43000000 0x0 0x43000000 0x0 0x0010000>;
+
+             #interrupt-cells = <0x1>;
+
+             interrupt-map = <0x0 0x0 0x0 0x1 &gic 0x0 0x0 0x0 14 0x1>,
+                             <0x0 0x0 0x0 0x2 &gic 0x0 0x0 0x0 15 0x1>,
+                             <0x0 0x0 0x0 0x3 &gic 0x0 0x0 0x0 16 0x1>,
+                             <0x0 0x0 0x0 0x4 &gic 0x0 0x0 0x0 17 0x1>;
+
+             interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+
+             msi-parent = <&its_pci>;
+
+             phys = <&pcie_phy0>;
+             phy-names = "pcie-phy";
+         };
+     };
+ ...
+27
Documentation/devicetree/bindings/pci/cdns-pcie-host.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: "http://devicetree.org/schemas/pci/cdns-pcie-host.yaml#"
+ $schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+ title: Cadence PCIe Host
+
+ maintainers:
+   - Tom Joseph <tjoseph@cadence.com>
+
+ allOf:
+   - $ref: "/schemas/pci/pci-bus.yaml#"
+   - $ref: "cdns-pcie.yaml#"
+
+ properties:
+   cdns,no-bar-match-nbits:
+     description:
+       Set into the no BAR match register to configure the number of least
+       significant bits kept during inbound (PCIe -> AXI) address translations
+     allOf:
+       - $ref: /schemas/types.yaml#/definitions/uint32
+     minimum: 0
+     maximum: 64
+     default: 32
+
+   msi-parent: true
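The cdns,no-bar-match-nbits property above can be illustrated with a small model. Assuming the controller keeps the low nbits of the incoming PCIe address and substitutes the AXI base for the remaining high bits, the translation looks roughly like this; inbound_translate() is a hypothetical helper for illustration, not the Cadence register programming model.

```python
# Sketch of "no BAR match" inbound (PCIe -> AXI) translation: keep the
# low `nbits` of the PCIe address, take the rest from the AXI base.
# This models the property's documented meaning, under the assumption
# stated in the lead-in.

def inbound_translate(pcie_addr: int, axi_base: int, nbits: int = 32) -> int:
    mask = (1 << nbits) - 1
    return (axi_base & ~mask) | (pcie_addr & mask)

# With the default of 32 bits kept, the low 32 bits pass through and the
# high bits come from the AXI base:
assert inbound_translate(0x1234_5678_9ABC,
                         axi_base=0x80_0000_0000,
                         nbits=32) == 0x80_5678_9ABC
```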
+31
Documentation/devicetree/bindings/pci/cdns-pcie.yaml
···
+ # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+ %YAML 1.2
+ ---
+ $id: "http://devicetree.org/schemas/pci/cdns-pcie.yaml#"
+ $schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+ title: Cadence PCIe Core
+
+ maintainers:
+   - Tom Joseph <tjoseph@cadence.com>
+
+ properties:
+   cdns,max-outbound-regions:
+     description: maximum number of outbound regions
+     allOf:
+       - $ref: /schemas/types.yaml#/definitions/uint32
+     minimum: 1
+     maximum: 32
+     default: 32
+
+   phys:
+     description:
+       One per lane if more than one in the list. If only one PHY listed it must
+       manage all lanes.
+     minItems: 1
+     maxItems: 16
+
+   phy-names:
+     items:
+       - const: pcie-phy
+     # FIXME: names when more than 1
+52
Documentation/devicetree/bindings/pci/layerscape-pcie-gen4.txt
···
+ NXP Layerscape PCIe Gen4 controller
+
+ This PCIe controller is based on the Mobiveil PCIe IP and thus inherits all
+ the common properties defined in mobiveil-pcie.txt.
+
+ Required properties:
+ - compatible: should contain the platform identifier such as:
+   "fsl,lx2160a-pcie"
+ - reg: base addresses and lengths of the PCIe controller register blocks.
+   "csr_axi_slave": Bridge config registers
+   "config_axi_slave": PCIe controller registers
+ - interrupts: A list of interrupt outputs of the controller. Must contain an
+   entry for each entry in the interrupt-names property.
+ - interrupt-names: It could include the following entries:
+   "intr": The interrupt that is asserted for controller interrupts
+   "aer": Asserted for the AER interrupt when the chip supports AER without
+     MSI/MSI-X/INTx mode, but there is an interrupt line for AER.
+   "pme": Asserted for the PME interrupt when the chip supports PME without
+     MSI/MSI-X/INTx mode, but there is an interrupt line for PME.
+ - dma-coherent: Indicates that the hardware IP block can ensure the coherency
+   of the data transferred from/to the IP block. This can avoid the software
+   cache flush/invalidate actions, and improve the performance significantly.
+ - msi-parent : See the generic MSI binding described in
+   Documentation/devicetree/bindings/interrupt-controller/msi.txt.
+
+ Example:
+
+ 	pcie@3400000 {
+ 		compatible = "fsl,lx2160a-pcie";
+ 		reg = <0x00 0x03400000 0x0 0x00100000   /* controller registers */
+ 		       0x80 0x00000000 0x0 0x00001000>; /* configuration space */
+ 		reg-names = "csr_axi_slave", "config_axi_slave";
+ 		interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* AER interrupt */
+ 			     <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* PME interrupt */
+ 			     <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>; /* controller interrupt */
+ 		interrupt-names = "aer", "pme", "intr";
+ 		#address-cells = <3>;
+ 		#size-cells = <2>;
+ 		device_type = "pci";
+ 		apio-wins = <8>;
+ 		ppio-wins = <8>;
+ 		dma-coherent;
+ 		bus-range = <0x0 0xff>;
+ 		msi-parent = <&its>;
+ 		ranges = <0x82000000 0x0 0x40000000 0x80 0x40000000 0x0 0x40000000>;
+ 		#interrupt-cells = <1>;
+ 		interrupt-map-mask = <0 0 0 7>;
+ 		interrupt-map = <0000 0 0 1 &gic 0 0 GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
+ 				<0000 0 0 2 &gic 0 0 GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
+ 				<0000 0 0 3 &gic 0 0 GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
+ 				<0000 0 0 4 &gic 0 0 GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>;
+ 	};
+99 -26
Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
···
  NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)

- This PCIe host controller is based on the Synopsis Designware PCIe IP
+ This PCIe controller is based on the Synopsis Designware PCIe IP
  and thus inherits all the common properties defined in designware-pcie.txt.
+ Some of the controller instances are dual mode where in they can work either
+ in root port mode or endpoint mode but one at a time.

  Required properties:
- - compatible: For Tegra19x, must contain "nvidia,tegra194-pcie".
- - device_type: Must be "pci"
  - power-domains: A phandle to the node that controls power to the respective
    PCIe controller and a specifier name for the PCIe controller. Following are
    the specifiers for the different PCIe controllers
···
    entry for each entry in the interrupt-names property.
  - interrupt-names: Must include the following entries:
    "intr": The Tegra interrupt that is asserted for controller interrupts
+ - clocks: Must contain an entry for each entry in clock-names.
+   See ../clocks/clock-bindings.txt for details.
+ - clock-names: Must include the following entries:
+   - core
+ - resets: Must contain an entry for each entry in reset-names.
+   See ../reset/reset.txt for details.
+ - reset-names: Must include the following entries:
+   - apb
+   - core
+ - phys: Must contain a phandle to P2U PHY for each entry in phy-names.
+ - phy-names: Must include an entry for each active lane.
+   "p2u-N": where N ranges from 0 to one less than the total number of lanes
+ - nvidia,bpmp: Must contain a pair of phandle to BPMP controller node followed
+   by controller-id. Following are the controller ids for each controller.
+   0: C0
+   1: C1
+   2: C2
+   3: C3
+   4: C4
+   5: C5
+ - vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
+
+ RC mode:
+ - compatible: Tegra19x must contain "nvidia,tegra194-pcie"
+ - device_type: Must be "pci" for RC mode
+ - interrupt-names: Must include the following entries:
    "msi": The Tegra interrupt that is asserted when an MSI is received
  - bus-range: Range of bus numbers associated with this controller
  - #address-cells: Address representation for root ports (must be 3)
···
  - interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
    Please refer to the standard PCI bus binding document for a more detailed
    explanation.
- - clocks: Must contain an entry for each entry in clock-names.
-   See ../clocks/clock-bindings.txt for details.
- - clock-names: Must include the following entries:
-   - core
- - resets: Must contain an entry for each entry in reset-names.
-   See ../reset/reset.txt for details.
- - reset-names: Must include the following entries:
-   - apb
-   - core
- - phys: Must contain a phandle to P2U PHY for each entry in phy-names.
- - phy-names: Must include an entry for each active lane.
-   "p2u-N": where N ranges from 0 to one less than the total number of lanes
- - nvidia,bpmp: Must contain a pair of phandle to BPMP controller node followed
-   by controller-id. Following are the controller ids for each controller.
-   0: C0
-   1: C1
-   2: C2
-   3: C3
-   4: C4
-   5: C5
- - vddio-pex-ctl-supply: Regulator supply for PCIe side band signals
+
+ EP mode:
+ In Tegra194, Only controllers C0, C4 & C5 support EP mode.
+ - compatible: Tegra19x must contain "nvidia,tegra194-pcie-ep"
+ - reg-names: Must include the following entries:
+   "addr_space": Used to map remote RC address space
+ - reset-gpios: Must contain a phandle to a GPIO controller followed by
+   GPIO that is being used as PERST input signal. Please refer to pci.txt
+   document.

  Optional properties:
  - pinctrl-names: A list of pinctrl state names.
···
    specified in microseconds
  - nvidia,aspm-l0s-entrance-latency-us: ASPM L0s entrance latency to be
    specified in microseconds
+
+ RC mode:
  - vpcie3v3-supply: A phandle to the regulator node that supplies 3.3V to the slot
    if the platform has one such slot. (Ex:- x16 slot owned by C5 controller
    in p2972-0000 platform).
···
    if the platform has one such slot. (Ex:- x16 slot owned by C5 controller
    in p2972-0000 platform).

+ EP mode:
+ - nvidia,refclk-select-gpios: Must contain a phandle to a GPIO controller
+   followed by GPIO that is being used to enable REFCLK to controller from host
+
+ NOTE:- On Tegra194's P2972-0000 platform, only C5 controller can be enabled to
+ operate in the endpoint mode because of the way the platform is designed.
+

  Examples:
  =========

- Tegra194:
- --------
+ Tegra194 RC mode:
+ -----------------

  pcie@14180000 {
  	compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
···
  	phys = <&p2u_hsio_2>, <&p2u_hsio_3>, <&p2u_hsio_4>,
  	       <&p2u_hsio_5>;
  	phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3";
+ };
+
+ Tegra194 EP mode:
+ -----------------
+
+ pcie_ep@141a0000 {
+ 	compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
+ 	power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8A>;
+ 	reg = <0x00 0x141a0000 0x0 0x00020000   /* appl registers (128K)     */
+ 	       0x00 0x3a040000 0x0 0x00040000   /* iATU_DMA reg space (256K) */
+ 	       0x00 0x3a080000 0x0 0x00040000   /* DBI reg space (256K)      */
+ 	       0x1c 0x00000000 0x4 0x00000000>; /* Address Space (16G)       */
+ 	reg-names = "appl", "atu_dma", "dbi", "addr_space";
+
+ 	num-lanes = <8>;
+ 	num-ib-windows = <2>;
+ 	num-ob-windows = <8>;
+
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&clkreq_c5_bi_dir_state>;
+
+ 	clocks = <&bpmp TEGRA194_CLK_PEX1_CORE_5>;
+ 	clock-names = "core";
+
+ 	resets = <&bpmp TEGRA194_RESET_PEX1_CORE_5_APB>,
+ 		 <&bpmp TEGRA194_RESET_PEX1_CORE_5>;
+ 	reset-names = "apb", "core";
+
+ 	interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>; /* controller interrupt */
+ 	interrupt-names = "intr";
+
+ 	nvidia,bpmp = <&bpmp 5>;
+
+ 	nvidia,aspm-cmrt-us = <60>;
+ 	nvidia,aspm-pwr-on-t-us = <20>;
+ 	nvidia,aspm-l0s-entrance-latency-us = <3>;
+
+ 	vddio-pex-ctl-supply = <&vdd_1v8ao>;
+
+ 	reset-gpios = <&gpio TEGRA194_MAIN_GPIO(GG, 1) GPIO_ACTIVE_LOW>;
+
+ 	nvidia,refclk-select-gpios = <&gpio_aon TEGRA194_AON_GPIO(AA, 5)
+ 				      GPIO_ACTIVE_HIGH>;
+
+ 	phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>,
+ 	       <&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>,
+ 	       <&p2u_nvhs_6>, <&p2u_nvhs_7>;
+
+ 	phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3", "p2u-4",
+ 		    "p2u-5", "p2u-6", "p2u-7";
  };
+41
Documentation/devicetree/bindings/pci/pci-ep.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/pci-ep.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: PCI Endpoint Controller Schema 8 + 9 + description: | 10 + Common properties for PCI Endpoint Controller Nodes. 11 + 12 + maintainers: 13 + - Kishon Vijay Abraham I <kishon@ti.com> 14 + 15 + properties: 16 + $nodename: 17 + pattern: "^pcie-ep@" 18 + 19 + max-functions: 20 + description: Maximum number of functions that can be configured 21 + allOf: 22 + - $ref: /schemas/types.yaml#/definitions/uint8 23 + minimum: 1 24 + default: 1 25 + maximum: 255 26 + 27 + max-link-speed: 28 + allOf: 29 + - $ref: /schemas/types.yaml#/definitions/uint32 30 + enum: [ 1, 2, 3, 4 ] 31 + 32 + num-lanes: 33 + description: maximum number of lanes 34 + allOf: 35 + - $ref: /schemas/types.yaml#/definitions/uint32 36 + minimum: 1 37 + default: 1 38 + maximum: 16 39 + 40 + required: 41 + - compatible
+35
Documentation/devicetree/bindings/phy/amlogic,meson-axg-mipi-pcie-analog.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: "http://devicetree.org/schemas/phy/amlogic,meson-axg-mipi-pcie-analog.yaml#" 5 + $schema: "http://devicetree.org/meta-schemas/core.yaml#" 6 + 7 + title: Amlogic AXG shared MIPI/PCIE analog PHY 8 + 9 + maintainers: 10 + - Remi Pommarel <repk@triplefau.lt> 11 + 12 + properties: 13 + compatible: 14 + const: amlogic,axg-mipi-pcie-analog-phy 15 + 16 + reg: 17 + maxItems: 1 18 + 19 + "#phy-cells": 20 + const: 1 21 + 22 + required: 23 + - compatible 24 + - reg 25 + - "#phy-cells" 26 + 27 + additionalProperties: false 28 + 29 + examples: 30 + - | 31 + mpphy: phy@0 { 32 + compatible = "amlogic,axg-mipi-pcie-analog-phy"; 33 + reg = <0x0 0x0 0x0 0xc>; 34 + #phy-cells = <1>; 35 + };
+52
Documentation/devicetree/bindings/phy/amlogic,meson-axg-pcie.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: "http://devicetree.org/schemas/phy/amlogic,meson-axg-pcie.yaml#" 5 + $schema: "http://devicetree.org/meta-schemas/core.yaml#" 6 + 7 + title: Amlogic AXG PCIE PHY 8 + 9 + maintainers: 10 + - Remi Pommarel <repk@triplefau.lt> 11 + 12 + properties: 13 + compatible: 14 + const: amlogic,axg-pcie-phy 15 + 16 + reg: 17 + maxItems: 1 18 + 19 + resets: 20 + maxItems: 1 21 + 22 + phys: 23 + maxItems: 1 24 + 25 + phy-names: 26 + const: analog 27 + 28 + "#phy-cells": 29 + const: 0 30 + 31 + required: 32 + - compatible 33 + - reg 34 + - phys 35 + - phy-names 36 + - resets 37 + - "#phy-cells" 38 + 39 + additionalProperties: false 40 + 41 + examples: 42 + - | 43 + #include <dt-bindings/reset/amlogic,meson-axg-reset.h> 44 + #include <dt-bindings/phy/phy.h> 45 + pcie_phy: pcie-phy@ff644000 { 46 + compatible = "amlogic,axg-pcie-phy"; 47 + reg = <0x0 0xff644000 0x0 0x1c>; 48 + resets = <&reset RESET_PCIE_PHY>; 49 + phys = <&mipi_analog_phy PHY_TYPE_PCIE>; 50 + phy-names = "analog"; 51 + #phy-cells = <0>; 52 + };
+10 -2
MAINTAINERS
··· 12857 12857 M: Tom Joseph <tjoseph@cadence.com> 12858 12858 L: linux-pci@vger.kernel.org 12859 12859 S: Maintained 12860 - F: Documentation/devicetree/bindings/pci/cdns,*.txt 12860 + F: Documentation/devicetree/bindings/pci/cdns,* 12861 12861 F: drivers/pci/controller/cadence/ 12862 12862 12863 12863 PCI DRIVER FOR FREESCALE LAYERSCAPE ··· 12869 12869 L: linux-arm-kernel@lists.infradead.org 12870 12870 S: Maintained 12871 12871 F: drivers/pci/controller/dwc/*layerscape* 12872 + 12873 + PCI DRIVER FOR NXP LAYERSCAPE GEN4 CONTROLLER 12874 + M: Hou Zhiqiang <Zhiqiang.Hou@nxp.com> 12875 + L: linux-pci@vger.kernel.org 12876 + L: linux-arm-kernel@lists.infradead.org 12877 + S: Maintained 12878 + F: Documentation/devicetree/bindings/pci/layerscape-pcie-gen4.txt 12879 + F: drivers/pci/controller/mobiveil/pcie-layerscape-gen4.c 12872 12880 12873 12881 PCI DRIVER FOR GENERIC OF HOSTS 12874 12882 M: Will Deacon <will@kernel.org> ··· 12920 12912 L: linux-pci@vger.kernel.org 12921 12913 S: Supported 12922 12914 F: Documentation/devicetree/bindings/pci/mobiveil-pcie.txt 12923 - F: drivers/pci/controller/pcie-mobiveil.c 12915 + F: drivers/pci/controller/mobiveil/pcie-mobiveil* 12924 12916 PCI DRIVER FOR MVEBU (Marvell Armada 370 and Armada XP SOC support) 12926 12918 M: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
+20 -32
arch/alpha/kernel/sys_nautilus.c
··· 187 187 188 188 extern void pcibios_claim_one_bus(struct pci_bus *); 189 189 190 - static struct resource irongate_io = { 191 - .name = "Irongate PCI IO", 192 - .flags = IORESOURCE_IO, 193 - }; 194 190 static struct resource irongate_mem = { 195 191 .name = "Irongate PCI MEM", 196 192 .flags = IORESOURCE_MEM, ··· 204 208 struct pci_controller *hose = hose_head; 205 209 struct pci_host_bridge *bridge; 206 210 struct pci_bus *bus; 207 - struct pci_dev *irongate; 208 211 unsigned long bus_align, bus_size, pci_mem; 209 212 unsigned long memtop = max_low_pfn << PAGE_SHIFT; 210 - int ret; 211 213 212 214 bridge = pci_alloc_host_bridge(0); 213 215 if (!bridge) 214 216 return; 215 217 218 + /* Use default IO. */ 216 219 pci_add_resource(&bridge->windows, &ioport_resource); 217 - pci_add_resource(&bridge->windows, &iomem_resource); 220 + /* Irongate PCI memory aperture, calculate required size before 221 + setting it up. */ 222 + pci_add_resource(&bridge->windows, &irongate_mem); 223 + 218 224 pci_add_resource(&bridge->windows, &busn_resource); 219 225 bridge->dev.parent = NULL; 220 226 bridge->sysdata = hose; ··· 224 226 bridge->ops = alpha_mv.pci_ops; 225 227 bridge->swizzle_irq = alpha_mv.pci_swizzle; 226 228 bridge->map_irq = alpha_mv.pci_map_irq; 229 + bridge->size_windows = 1; 227 230 228 231 /* Scan our single hose. */ 229 - ret = pci_scan_root_bus_bridge(bridge); 230 - if (ret) { 232 + if (pci_scan_root_bus_bridge(bridge)) { 231 233 pci_free_host_bridge(bridge); 232 234 return; 233 235 } 234 - 235 236 bus = hose->bus = bridge->bus; 236 237 pcibios_claim_one_bus(bus); 237 238 238 - irongate = pci_get_domain_bus_and_slot(pci_domain_nr(bus), 0, 0); 239 - bus->self = irongate; 240 - bus->resource[0] = &irongate_io; 241 - bus->resource[1] = &irongate_mem; 242 - 243 239 pci_bus_size_bridges(bus); 244 240 245 - /* IO port range. 
*/ 246 - bus->resource[0]->start = 0; 247 - bus->resource[0]->end = 0xffff; 248 - 249 - /* Set up PCI memory range - limit is hardwired to 0xffffffff, 250 - base must be at aligned to 16Mb. */ 251 - bus_align = bus->resource[1]->start; 252 - bus_size = bus->resource[1]->end + 1 - bus_align; 241 + /* Now we've got the size and alignment of PCI memory resources 242 + stored in irongate_mem. Set up the PCI memory range: limit is 243 + hardwired to 0xffffffff, base must be aligned to 16Mb. */ 244 + bus_align = irongate_mem.start; 245 + bus_size = irongate_mem.end + 1 - bus_align; 253 246 if (bus_align < 0x1000000UL) 254 247 bus_align = 0x1000000UL; 255 248 256 249 pci_mem = (0x100000000UL - bus_size) & -bus_align; 250 + irongate_mem.start = pci_mem; 251 + irongate_mem.end = 0xffffffffUL; 257 252 258 - bus->resource[1]->start = pci_mem; 259 - bus->resource[1]->end = 0xffffffffUL; 260 - if (request_resource(&iomem_resource, bus->resource[1]) < 0) 253 + /* Register our newly calculated PCI memory window in the resource 254 + tree. */ 255 + if (request_resource(&iomem_resource, &irongate_mem) < 0) 261 256 printk(KERN_ERR "Failed to request MEM on hose 0\n"); 257 + 258 + printk(KERN_INFO "Irongate pci_mem %pR\n", &irongate_mem); 262 259 263 260 if (pci_mem < memtop) 264 261 memtop = pci_mem; 265 262 if (memtop > alpha_mv.min_mem_address) { 266 263 free_reserved_area(__va(alpha_mv.min_mem_address), 267 264 __va(memtop), -1, NULL); 268 - printk("nautilus_init_pci: %ldk freed\n", 265 + printk(KERN_INFO "nautilus_init_pci: %ldk freed\n", 269 266 (memtop - alpha_mv.min_mem_address) >> 10); 270 267 } 271 - 272 268 if ((IRONGATE0->dev_vendor >> 16) > 0x7006) /* Albacore? */ 273 269 IRONGATE0->pci_mem = pci_mem; 274 270 275 271 pci_bus_assign_resources(bus); 276 - 277 - /* pci_common_swizzle() relies on bus->self being NULL 278 - for the root bus, so just clear it. */ 279 - bus->self = NULL; 280 272 pci_bus_add_devices(bus); 281 273 } 282 274
+41
arch/x86/include/asm/hyperv-tlfs.h
··· 376 376 #define HVCALL_SEND_IPI_EX 0x0015 377 377 #define HVCALL_POST_MESSAGE 0x005c 378 378 #define HVCALL_SIGNAL_EVENT 0x005d 379 + #define HVCALL_RETARGET_INTERRUPT 0x007e 379 380 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af 380 381 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0 381 382 ··· 405 404 HV_GENERIC_SET_SPARSE_4K, 406 405 HV_GENERIC_SET_ALL, 407 406 }; 407 + 408 + #define HV_PARTITION_ID_SELF ((u64)-1) 408 409 409 410 #define HV_HYPERCALL_RESULT_MASK GENMASK_ULL(15, 0) 410 411 #define HV_HYPERCALL_FAST_BIT BIT(16) ··· 912 909 struct hv_partition_assist_pg { 913 910 u32 tlb_lock_count; 914 911 }; 912 + 913 + union hv_msi_entry { 914 + u64 as_uint64; 915 + struct { 916 + u32 address; 917 + u32 data; 918 + } __packed; 919 + }; 920 + 921 + struct hv_interrupt_entry { 922 + u32 source; /* 1 for MSI(-X) */ 923 + u32 reserved1; 924 + union hv_msi_entry msi_entry; 925 + } __packed; 926 + 927 + /* 928 + * flags for hv_device_interrupt_target.flags 929 + */ 930 + #define HV_DEVICE_INTERRUPT_TARGET_MULTICAST 1 931 + #define HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET 2 932 + 933 + struct hv_device_interrupt_target { 934 + u32 vector; 935 + u32 flags; 936 + union { 937 + u64 vp_mask; 938 + struct hv_vpset vp_set; 939 + }; 940 + } __packed; 941 + 942 + /* HvRetargetDeviceInterrupt hypercall */ 943 + struct hv_retarget_device_interrupt { 944 + u64 partition_id; /* use "self" */ 945 + u64 device_id; 946 + struct hv_interrupt_entry int_entry; 947 + u64 reserved2; 948 + struct hv_device_interrupt_target int_target; 949 + } __packed __aligned(8); 915 950 #endif
+8
arch/x86/include/asm/mshyperv.h
··· 4 4 5 5 #include <linux/types.h> 6 6 #include <linux/nmi.h> 7 + #include <linux/msi.h> 7 8 #include <asm/io.h> 8 9 #include <asm/hyperv-tlfs.h> 9 10 #include <asm/nospec-branch.h> ··· 242 241 #else 243 242 static inline void hv_apic_init(void) {} 244 243 #endif 244 + 245 + static inline void hv_set_msi_entry_from_desc(union hv_msi_entry *msi_entry, 246 + struct msi_desc *msi_desc) 247 + { 248 + msi_entry->address = msi_desc->msg.address_lo; 249 + msi_entry->data = msi_desc->msg.data; 250 + } 245 251 246 252 #else /* CONFIG_HYPERV */ 247 253 static inline void hyperv_init(void) {}
+15
drivers/acpi/pci_root.c
··· 131 131 { OSC_PCI_CLOCK_PM_SUPPORT, "ClockPM" }, 132 132 { OSC_PCI_SEGMENT_GROUPS_SUPPORT, "Segments" }, 133 133 { OSC_PCI_MSI_SUPPORT, "MSI" }, 134 + { OSC_PCI_EDR_SUPPORT, "EDR" }, 134 135 { OSC_PCI_HPX_TYPE_3_SUPPORT, "HPX-Type3" }, 135 136 }; 136 137 ··· 142 141 { OSC_PCI_EXPRESS_AER_CONTROL, "AER" }, 143 142 { OSC_PCI_EXPRESS_CAPABILITY_CONTROL, "PCIeCapability" }, 144 143 { OSC_PCI_EXPRESS_LTR_CONTROL, "LTR" }, 144 + { OSC_PCI_EXPRESS_DPC_CONTROL, "DPC" }, 145 145 }; 146 146 147 147 static void decode_osc_bits(struct acpi_pci_root *root, char *msg, u32 word, ··· 442 440 support |= OSC_PCI_ASPM_SUPPORT | OSC_PCI_CLOCK_PM_SUPPORT; 443 441 if (pci_msi_enabled()) 444 442 support |= OSC_PCI_MSI_SUPPORT; 443 + if (IS_ENABLED(CONFIG_PCIE_EDR)) 444 + support |= OSC_PCI_EDR_SUPPORT; 445 445 446 446 decode_osc_support(root, "OS supports", support); 447 447 status = acpi_pci_osc_support(root, support); ··· 490 486 else 491 487 control |= OSC_PCI_EXPRESS_AER_CONTROL; 492 488 } 489 + 490 + /* 491 + * Per the Downstream Port Containment Related Enhancements ECN to 492 + * the PCI Firmware Spec, r3.2, sec 4.5.1, table 4-5, 493 + * OSC_PCI_EXPRESS_DPC_CONTROL indicates the OS supports both DPC 494 + * and EDR. 495 + */ 496 + if (IS_ENABLED(CONFIG_PCIE_DPC) && IS_ENABLED(CONFIG_PCIE_EDR)) 497 + control |= OSC_PCI_EXPRESS_DPC_CONTROL; 493 498 494 499 requested = control; 495 500 status = acpi_pci_osc_control_set(handle, &control, ··· 929 916 host_bridge->native_pme = 0; 930 917 if (!(root->osc_control_set & OSC_PCI_EXPRESS_LTR_CONTROL)) 931 918 host_bridge->native_ltr = 0; 919 + if (!(root->osc_control_set & OSC_PCI_EXPRESS_DPC_CONTROL)) 920 + host_bridge->native_dpc = 0; 932 921 933 922 /* 934 923 * Evaluate the "PCI Boot Configuration" _DSM Function. If it
+20 -15
drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
··· 192 192 193 193 static bool amdgpu_read_platform_bios(struct amdgpu_device *adev) 194 194 { 195 - uint8_t __iomem *bios; 196 - size_t size; 195 + phys_addr_t rom = adev->pdev->rom; 196 + size_t romlen = adev->pdev->romlen; 197 + void __iomem *bios; 197 198 198 199 adev->bios = NULL; 199 200 200 - bios = pci_platform_rom(adev->pdev, &size); 201 - if (!bios) { 202 - return false; 203 - } 204 - 205 - adev->bios = kzalloc(size, GFP_KERNEL); 206 - if (adev->bios == NULL) 201 + if (!rom || romlen == 0) 207 202 return false; 208 203 209 - memcpy_fromio(adev->bios, bios, size); 210 - 211 - if (!check_atom_bios(adev->bios, size)) { 212 - kfree(adev->bios); 204 + adev->bios = kzalloc(romlen, GFP_KERNEL); 205 + if (!adev->bios) 213 206 return false; 214 - } 215 207 216 - adev->bios_size = size; 208 + bios = ioremap(rom, romlen); 209 + if (!bios) 210 + goto free_bios; 211 + 212 + memcpy_fromio(adev->bios, bios, romlen); 213 + iounmap(bios); 214 + 215 + if (!check_atom_bios(adev->bios, romlen)) 216 + goto free_bios; 217 + 218 + adev->bios_size = romlen; 217 219 218 220 return true; 221 + free_bios: 222 + kfree(adev->bios); 223 + return false; 219 224 } 220 225 221 226 #ifdef CONFIG_ACPI
+15 -2
drivers/gpu/drm/nouveau/nvkm/subdev/bios/shadowpci.c
··· 101 101 else 102 102 return ERR_PTR(-ENODEV); 103 103 104 + if (!pdev->rom || pdev->romlen == 0) 105 + return ERR_PTR(-ENODEV); 106 + 104 107 if ((priv = kmalloc(sizeof(*priv), GFP_KERNEL))) { 108 + priv->size = pdev->romlen; 105 109 if (ret = -ENODEV, 106 - (priv->rom = pci_platform_rom(pdev, &priv->size))) 110 + (priv->rom = ioremap(pdev->rom, pdev->romlen))) 107 111 return priv; 108 112 kfree(priv); 109 113 } ··· 115 111 return ERR_PTR(ret); 116 112 } 117 113 114 + static void 115 + platform_fini(void *data) 116 + { 117 + struct priv *priv = data; 118 + 119 + iounmap(priv->rom); 120 + kfree(priv); 121 + } 122 + 118 123 const struct nvbios_source 119 124 nvbios_platform = { 120 125 .name = "PLATFORM", 121 126 .init = platform_init, 122 - .fini = (void(*)(void *))kfree, 127 + .fini = platform_fini, 123 128 .read = pcirom_read, 124 129 .rw = true, 125 130 };
+19 -11
drivers/gpu/drm/radeon/radeon_bios.c
··· 108 108 109 109 static bool radeon_read_platform_bios(struct radeon_device *rdev) 110 110 { 111 - uint8_t __iomem *bios; 112 - size_t size; 111 + phys_addr_t rom = rdev->pdev->rom; 112 + size_t romlen = rdev->pdev->romlen; 113 + void __iomem *bios; 113 114 114 115 rdev->bios = NULL; 115 116 116 - bios = pci_platform_rom(rdev->pdev, &size); 117 - if (!bios) { 117 + if (!rom || romlen == 0) 118 118 return false; 119 - } 120 119 121 - if (size == 0 || bios[0] != 0x55 || bios[1] != 0xaa) { 120 + rdev->bios = kzalloc(romlen, GFP_KERNEL); 121 + if (!rdev->bios) 122 122 return false; 123 - } 124 - rdev->bios = kmemdup(bios, size, GFP_KERNEL); 125 - if (rdev->bios == NULL) { 126 - return false; 127 - } 123 + 124 + bios = ioremap(rom, romlen); 125 + if (!bios) 126 + goto free_bios; 127 + 128 + memcpy_fromio(rdev->bios, bios, romlen); 129 + iounmap(bios); 130 + 131 + if (rdev->bios[0] != 0x55 || rdev->bios[1] != 0xaa) 132 + goto free_bios; 128 133 129 134 return true; 135 + free_bios: 136 + kfree(rdev->bios); 137 + return false; 130 138 } 131 139 132 140 #ifdef CONFIG_ACPI
+179 -34
drivers/misc/pci_endpoint_test.c
··· 17 17 #include <linux/mutex.h> 18 18 #include <linux/random.h> 19 19 #include <linux/slab.h> 20 + #include <linux/uaccess.h> 20 21 #include <linux/pci.h> 21 22 #include <linux/pci_ids.h> 22 23 ··· 65 64 #define PCI_ENDPOINT_TEST_IRQ_TYPE 0x24 66 65 #define PCI_ENDPOINT_TEST_IRQ_NUMBER 0x28 67 66 67 + #define PCI_ENDPOINT_TEST_FLAGS 0x2c 68 + #define FLAG_USE_DMA BIT(0) 69 + 68 70 #define PCI_DEVICE_ID_TI_AM654 0xb00c 69 71 70 72 #define is_am654_pci_dev(pdev) \ ··· 102 98 struct completion irq_raised; 103 99 int last_irq; 104 100 int num_irqs; 101 + int irq_type; 105 102 /* mutex to protect the ioctls */ 106 103 struct mutex mutex; 107 104 struct miscdevice miscdev; 108 105 enum pci_barno test_reg_bar; 109 106 size_t alignment; 107 + const char *name; 110 108 }; 111 109 112 110 struct pci_endpoint_test_data { ··· 163 157 struct pci_dev *pdev = test->pdev; 164 158 165 159 pci_free_irq_vectors(pdev); 160 + test->irq_type = IRQ_TYPE_UNDEFINED; 166 161 } 167 162 168 163 static bool pci_endpoint_test_alloc_irq_vectors(struct pci_endpoint_test *test, ··· 198 191 irq = 0; 199 192 res = false; 200 193 } 194 + 195 + test->irq_type = type; 201 196 test->num_irqs = irq; 202 197 203 198 return res; ··· 227 218 for (i = 0; i < test->num_irqs; i++) { 228 219 err = devm_request_irq(dev, pci_irq_vector(pdev, i), 229 220 pci_endpoint_test_irqhandler, 230 - IRQF_SHARED, DRV_MODULE_NAME, test); 221 + IRQF_SHARED, test->name, test); 231 222 if (err) 232 223 goto fail; 233 224 } ··· 324 315 return false; 325 316 } 326 317 327 - static bool pci_endpoint_test_copy(struct pci_endpoint_test *test, size_t size) 318 + static bool pci_endpoint_test_copy(struct pci_endpoint_test *test, 319 + unsigned long arg) 328 320 { 321 + struct pci_endpoint_test_xfer_param param; 329 322 bool ret = false; 330 323 void *src_addr; 331 324 void *dst_addr; 325 + u32 flags = 0; 326 + bool use_dma; 327 + size_t size; 332 328 dma_addr_t src_phys_addr; 333 329 dma_addr_t dst_phys_addr; 334 330 struct pci_dev 
*pdev = test->pdev; ··· 344 330 dma_addr_t orig_dst_phys_addr; 345 331 size_t offset; 346 332 size_t alignment = test->alignment; 333 + int irq_type = test->irq_type; 347 334 u32 src_crc32; 348 335 u32 dst_crc32; 336 + int err; 349 337 338 + err = copy_from_user(&param, (void __user *)arg, sizeof(param)); 339 + if (err) { 340 + dev_err(dev, "Failed to get transfer param\n"); 341 + return false; 342 + } 343 + 344 + size = param.size; 350 345 if (size > SIZE_MAX - alignment) 351 346 goto err; 347 + 348 + use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA); 349 + if (use_dma) 350 + flags |= FLAG_USE_DMA; 352 351 353 352 if (irq_type < IRQ_TYPE_LEGACY || irq_type > IRQ_TYPE_MSIX) { 354 353 dev_err(dev, "Invalid IRQ type option\n"); 355 354 goto err; 356 355 } 357 356 358 - orig_src_addr = dma_alloc_coherent(dev, size + alignment, 359 - &orig_src_phys_addr, GFP_KERNEL); 357 + orig_src_addr = kzalloc(size + alignment, GFP_KERNEL); 360 358 if (!orig_src_addr) { 361 359 dev_err(dev, "Failed to allocate source buffer\n"); 362 360 ret = false; 363 361 goto err; 362 + } 363 + 364 + get_random_bytes(orig_src_addr, size + alignment); 365 + orig_src_phys_addr = dma_map_single(dev, orig_src_addr, 366 + size + alignment, DMA_TO_DEVICE); 367 + if (dma_mapping_error(dev, orig_src_phys_addr)) { 368 + dev_err(dev, "failed to map source buffer address\n"); 369 + ret = false; 370 + goto err_src_phys_addr; 364 371 } 365 372 366 373 if (alignment && !IS_ALIGNED(orig_src_phys_addr, alignment)) { ··· 399 364 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_UPPER_SRC_ADDR, 400 365 upper_32_bits(src_phys_addr)); 401 366 402 - get_random_bytes(src_addr, size); 403 367 src_crc32 = crc32_le(~0, src_addr, size); 404 368 405 - orig_dst_addr = dma_alloc_coherent(dev, size + alignment, 406 - &orig_dst_phys_addr, GFP_KERNEL); 369 + orig_dst_addr = kzalloc(size + alignment, GFP_KERNEL); 407 370 if (!orig_dst_addr) { 408 371 dev_err(dev, "Failed to allocate destination address\n"); 409 372 ret = false; 
410 - goto err_orig_src_addr; 373 + goto err_dst_addr; 374 + } 375 + 376 + orig_dst_phys_addr = dma_map_single(dev, orig_dst_addr, 377 + size + alignment, DMA_FROM_DEVICE); 378 + if (dma_mapping_error(dev, orig_dst_phys_addr)) { 379 + dev_err(dev, "failed to map destination buffer address\n"); 380 + ret = false; 381 + goto err_dst_phys_addr; 411 382 } 412 383 413 384 if (alignment && !IS_ALIGNED(orig_dst_phys_addr, alignment)) { ··· 433 392 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_SIZE, 434 393 size); 435 394 395 + pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_FLAGS, flags); 436 396 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, irq_type); 437 397 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, 1); 438 398 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND, ··· 441 399 442 400 wait_for_completion(&test->irq_raised); 443 401 402 + dma_unmap_single(dev, orig_dst_phys_addr, size + alignment, 403 + DMA_FROM_DEVICE); 404 + 444 405 dst_crc32 = crc32_le(~0, dst_addr, size); 445 406 if (dst_crc32 == src_crc32) 446 407 ret = true; 447 408 448 - dma_free_coherent(dev, size + alignment, orig_dst_addr, 449 - orig_dst_phys_addr); 409 + err_dst_phys_addr: 410 + kfree(orig_dst_addr); 450 411 451 - err_orig_src_addr: 452 - dma_free_coherent(dev, size + alignment, orig_src_addr, 453 - orig_src_phys_addr); 412 + err_dst_addr: 413 + dma_unmap_single(dev, orig_src_phys_addr, size + alignment, 414 + DMA_TO_DEVICE); 415 + 416 + err_src_phys_addr: 417 + kfree(orig_src_addr); 454 418 455 419 err: 456 420 return ret; 457 421 } 458 422 459 - static bool pci_endpoint_test_write(struct pci_endpoint_test *test, size_t size) 423 + static bool pci_endpoint_test_write(struct pci_endpoint_test *test, 424 + unsigned long arg) 460 425 { 426 + struct pci_endpoint_test_xfer_param param; 461 427 bool ret = false; 428 + u32 flags = 0; 429 + bool use_dma; 462 430 u32 reg; 463 431 void *addr; 464 432 dma_addr_t phys_addr; ··· 478 426 dma_addr_t orig_phys_addr; 479 
427 size_t offset; 480 428 size_t alignment = test->alignment; 429 + int irq_type = test->irq_type; 430 + size_t size; 481 431 u32 crc32; 432 + int err; 482 433 434 + err = copy_from_user(&param, (void __user *)arg, sizeof(param)); 435 + if (err != 0) { 436 + dev_err(dev, "Failed to get transfer param\n"); 437 + return false; 438 + } 439 + 440 + size = param.size; 483 441 if (size > SIZE_MAX - alignment) 484 442 goto err; 443 + 444 + use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA); 445 + if (use_dma) 446 + flags |= FLAG_USE_DMA; 485 447 486 448 if (irq_type < IRQ_TYPE_LEGACY || irq_type > IRQ_TYPE_MSIX) { 487 449 dev_err(dev, "Invalid IRQ type option\n"); 488 450 goto err; 489 451 } 490 452 491 - orig_addr = dma_alloc_coherent(dev, size + alignment, &orig_phys_addr, 492 - GFP_KERNEL); 453 + orig_addr = kzalloc(size + alignment, GFP_KERNEL); 493 454 if (!orig_addr) { 494 455 dev_err(dev, "Failed to allocate address\n"); 495 456 ret = false; 496 457 goto err; 458 + } 459 + 460 + get_random_bytes(orig_addr, size + alignment); 461 + 462 + orig_phys_addr = dma_map_single(dev, orig_addr, size + alignment, 463 + DMA_TO_DEVICE); 464 + if (dma_mapping_error(dev, orig_phys_addr)) { 465 + dev_err(dev, "failed to map source buffer address\n"); 466 + ret = false; 467 + goto err_phys_addr; 497 468 } 498 469 499 470 if (alignment && !IS_ALIGNED(orig_phys_addr, alignment)) { ··· 527 452 phys_addr = orig_phys_addr; 528 453 addr = orig_addr; 529 454 } 530 - 531 - get_random_bytes(addr, size); 532 455 533 456 crc32 = crc32_le(~0, addr, size); 534 457 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_CHECKSUM, ··· 539 466 540 467 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_SIZE, size); 541 468 469 + pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_FLAGS, flags); 542 470 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, irq_type); 543 471 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, 1); 544 472 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND, ··· 
551 477 if (reg & STATUS_READ_SUCCESS) 552 478 ret = true; 553 479 554 - dma_free_coherent(dev, size + alignment, orig_addr, orig_phys_addr); 480 + dma_unmap_single(dev, orig_phys_addr, size + alignment, 481 + DMA_TO_DEVICE); 482 + 483 + err_phys_addr: 484 + kfree(orig_addr); 555 485 556 486 err: 557 487 return ret; 558 488 } 559 489 560 - static bool pci_endpoint_test_read(struct pci_endpoint_test *test, size_t size) 490 + static bool pci_endpoint_test_read(struct pci_endpoint_test *test, 491 + unsigned long arg) 561 492 { 493 + struct pci_endpoint_test_xfer_param param; 562 494 bool ret = false; 495 + u32 flags = 0; 496 + bool use_dma; 497 + size_t size; 563 498 void *addr; 564 499 dma_addr_t phys_addr; 565 500 struct pci_dev *pdev = test->pdev; ··· 577 494 dma_addr_t orig_phys_addr; 578 495 size_t offset; 579 496 size_t alignment = test->alignment; 497 + int irq_type = test->irq_type; 580 498 u32 crc32; 499 + int err; 581 500 501 + err = copy_from_user(&param, (void __user *)arg, sizeof(param)); 502 + if (err) { 503 + dev_err(dev, "Failed to get transfer param\n"); 504 + return false; 505 + } 506 + 507 + size = param.size; 582 508 if (size > SIZE_MAX - alignment) 583 509 goto err; 510 + 511 + use_dma = !!(param.flags & PCITEST_FLAGS_USE_DMA); 512 + if (use_dma) 513 + flags |= FLAG_USE_DMA; 584 514 585 515 if (irq_type < IRQ_TYPE_LEGACY || irq_type > IRQ_TYPE_MSIX) { 586 516 dev_err(dev, "Invalid IRQ type option\n"); 587 517 goto err; 588 518 } 589 519 590 - orig_addr = dma_alloc_coherent(dev, size + alignment, &orig_phys_addr, 591 - GFP_KERNEL); 520 + orig_addr = kzalloc(size + alignment, GFP_KERNEL); 592 521 if (!orig_addr) { 593 522 dev_err(dev, "Failed to allocate destination address\n"); 594 523 ret = false; 595 524 goto err; 525 + } 526 + 527 + orig_phys_addr = dma_map_single(dev, orig_addr, size + alignment, 528 + DMA_FROM_DEVICE); 529 + if (dma_mapping_error(dev, orig_phys_addr)) { 530 + dev_err(dev, "failed to map source buffer address\n"); 531 + ret = 
false; 532 + goto err_phys_addr; 596 533 } 597 534 598 535 if (alignment && !IS_ALIGNED(orig_phys_addr, alignment)) { ··· 631 528 632 529 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_SIZE, size); 633 530 531 + pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_FLAGS, flags); 634 532 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, irq_type); 635 533 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, 1); 636 534 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND, ··· 639 535 640 536 wait_for_completion(&test->irq_raised); 641 537 538 + dma_unmap_single(dev, orig_phys_addr, size + alignment, 539 + DMA_FROM_DEVICE); 540 + 642 541 crc32 = crc32_le(~0, addr, size); 643 542 if (crc32 == pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_CHECKSUM)) 644 543 ret = true; 645 544 646 - dma_free_coherent(dev, size + alignment, orig_addr, orig_phys_addr); 545 + err_phys_addr: 546 + kfree(orig_addr); 647 547 err: 648 548 return ret; 549 + } 550 + 551 + static bool pci_endpoint_test_clear_irq(struct pci_endpoint_test *test) 552 + { 553 + pci_endpoint_test_release_irq(test); 554 + pci_endpoint_test_free_irq_vectors(test); 555 + return true; 649 556 } 650 557 651 558 static bool pci_endpoint_test_set_irq(struct pci_endpoint_test *test, ··· 670 555 return false; 671 556 } 672 557 673 - if (irq_type == req_irq_type) 558 + if (test->irq_type == req_irq_type) 674 559 return true; 675 560 676 561 pci_endpoint_test_release_irq(test); ··· 682 567 if (!pci_endpoint_test_request_irq(test)) 683 568 goto err; 684 569 685 - irq_type = req_irq_type; 686 570 return true; 687 571 688 572 err: 689 573 pci_endpoint_test_free_irq_vectors(test); 690 - irq_type = IRQ_TYPE_UNDEFINED; 691 574 return false; 692 575 } 693 576 ··· 729 616 case PCITEST_GET_IRQTYPE: 730 617 ret = irq_type; 731 618 break; 619 + case PCITEST_CLEAR_IRQ: 620 + ret = pci_endpoint_test_clear_irq(test); 621 + break; 732 622 } 733 623 734 624 ret: ··· 749 633 { 750 634 int err; 751 635 int id; 752 - char 
name[20]; 636 + char name[24]; 753 637 enum pci_barno bar; 754 638 void __iomem *base; 755 639 struct device *dev = &pdev->dev; ··· 768 652 test->test_reg_bar = 0; 769 653 test->alignment = 0; 770 654 test->pdev = pdev; 655 + test->irq_type = IRQ_TYPE_UNDEFINED; 771 656 772 657 if (no_msi) 773 658 irq_type = IRQ_TYPE_LEGACY; ··· 783 666 784 667 init_completion(&test->irq_raised); 785 668 mutex_init(&test->mutex); 669 + 670 + if ((dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48)) != 0) && 671 + dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)) != 0) { 672 + dev_err(dev, "Cannot set DMA mask\n"); 673 + return -EINVAL; 674 + } 786 675 787 676 err = pci_enable_device(pdev); 788 677 if (err) { ··· 805 682 pci_set_master(pdev); 806 683 807 684 if (!pci_endpoint_test_alloc_irq_vectors(test, irq_type)) 808 - goto err_disable_irq; 809 - 810 - if (!pci_endpoint_test_request_irq(test)) 811 685 goto err_disable_irq; 812 686 813 687 for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { ··· 836 716 } 837 717 838 718 snprintf(name, sizeof(name), DRV_MODULE_NAME ".%d", id); 719 + test->name = kstrdup(name, GFP_KERNEL); 720 + if (!test->name) { 721 + err = -ENOMEM; 722 + goto err_ida_remove; 723 + } 724 + 725 + if (!pci_endpoint_test_request_irq(test)) 726 + goto err_kfree_test_name; 727 + 839 728 misc_device = &test->miscdev; 840 729 misc_device->minor = MISC_DYNAMIC_MINOR; 841 730 misc_device->name = kstrdup(name, GFP_KERNEL); 842 731 if (!misc_device->name) { 843 732 err = -ENOMEM; 844 - goto err_ida_remove; 733 + goto err_release_irq; 845 734 } 846 735 misc_device->fops = &pci_endpoint_test_fops, 847 736 ··· 865 736 err_kfree_name: 866 737 kfree(misc_device->name); 867 738 739 + err_release_irq: 740 + pci_endpoint_test_release_irq(test); 741 + 742 + err_kfree_test_name: 743 + kfree(test->name); 744 + 868 745 err_ida_remove: 869 746 ida_simple_remove(&pci_endpoint_test_ida, id); 870 747 ··· 879 744 if (test->bar[bar]) 880 745 pci_iounmap(pdev, test->bar[bar]); 881 746 } 882 
- pci_endpoint_test_release_irq(test); 883 747 884 748 err_disable_irq: 885 749 pci_endpoint_test_free_irq_vectors(test); ··· 904 770 905 771 misc_deregister(&test->miscdev); 906 772 kfree(misc_device->name); 773 + kfree(test->name); 907 774 ida_simple_remove(&pci_endpoint_test_ida, id); 908 775 for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { 909 776 if (test->bar[bar]) ··· 918 783 pci_disable_device(pdev); 919 784 } 920 785 786 + static const struct pci_endpoint_test_data default_data = { 787 + .test_reg_bar = BAR_0, 788 + .alignment = SZ_4K, 789 + .irq_type = IRQ_TYPE_MSI, 790 + }; 791 + 921 792 static const struct pci_endpoint_test_data am654_data = { 922 793 .test_reg_bar = BAR_2, 923 794 .alignment = SZ_64K, ··· 931 790 }; 932 791 933 792 static const struct pci_device_id pci_endpoint_test_tbl[] = { 934 - { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x) }, 935 - { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA72x) }, 793 + { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x), 794 + .driver_data = (kernel_ulong_t)&default_data, 795 + }, 796 + { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA72x), 797 + .driver_data = (kernel_ulong_t)&default_data, 798 + }, 936 799 { PCI_DEVICE(PCI_VENDOR_ID_FREESCALE, 0x81c0) }, 937 800 { PCI_DEVICE_DATA(SYNOPSYS, EDDA, NULL) }, 938 801 { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM654),
+2 -2
drivers/net/ethernet/intel/ice/ice_main.c
···
	result = PCI_ERS_RESULT_DISCONNECT;
	}

-	err = pci_cleanup_aer_uncorrect_error_status(pdev);
+	err = pci_aer_clear_nonfatal_status(pdev);
	if (err)
-		dev_dbg(&pdev->dev, "pci_cleanup_aer_uncorrect_error_status failed, error %d\n",
+		dev_dbg(&pdev->dev, "pci_aer_clear_nonfatal_status() failed, error %d\n",
			err);
	/* non-fatal, continue */
+2 -2
drivers/ntb/hw/idt/ntb_hw_idt.c
···
	ret = pci_enable_pcie_error_reporting(pdev);
	if (ret != 0)
		dev_warn(&pdev->dev, "PCIe AER capability disabled\n");
-	else /* Cleanup uncorrectable error status before getting to init */
-		pci_cleanup_aer_uncorrect_error_status(pdev);
+	else /* Cleanup nonfatal error status before getting to init */
+		pci_aer_clear_nonfatal_status(pdev);

	/* First enable the PCI device */
	ret = pcim_enable_device(pdev);
+1 -10
drivers/pci/controller/Kconfig
···
	  Say Y here if you want to enable PCIe controller support on
	  MediaTek SoCs.

-config PCIE_MOBIVEIL
-	bool "Mobiveil AXI PCIe controller"
-	depends on ARCH_ZYNQMP || COMPILE_TEST
-	depends on OF
-	depends on PCI_MSI_IRQ_DOMAIN
-	help
-	  Say Y here if you want to enable support for the Mobiveil AXI PCIe
-	  Soft IP. It has up to 8 outbound and inbound windows
-	  for address translation and it is a PCIe Gen4 IP.
-
config PCIE_TANGO_SMP8759
	bool "Tango SMP8759 PCIe controller (DANGEROUS)"
	depends on ARCH_TANGO && PCI_MSI && OF
···
	  have a common interface with the Hyper-V PCI frontend driver.

source "drivers/pci/controller/dwc/Kconfig"
+source "drivers/pci/controller/mobiveil/Kconfig"
source "drivers/pci/controller/cadence/Kconfig"
endmenu
+1 -1
drivers/pci/controller/Makefile
···
obj-$(CONFIG_PCIE_ROCKCHIP_EP) += pcie-rockchip-ep.o
obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o
obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o
-obj-$(CONFIG_PCIE_MOBIVEIL) += pcie-mobiveil.o
obj-$(CONFIG_PCIE_TANGO_SMP8759) += pcie-tango.o
obj-$(CONFIG_VMD) += vmd.o
obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o
# pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW
obj-y += dwc/
+obj-y += mobiveil/


# The following drivers are for devices that use the generic ACPI
+26 -3
drivers/pci/controller/dwc/Kconfig
···
	  implement the driver.

config PCIE_TEGRA194
-	tristate "NVIDIA Tegra194 (and later) PCIe controller"
+	tristate
+
+config PCIE_TEGRA194_HOST
+	tristate "NVIDIA Tegra194 (and later) PCIe controller - Host Mode"
	depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
	depends on PCI_MSI_IRQ_DOMAIN
	select PCIE_DW_HOST
	select PHY_TEGRA194_P2U
+	select PCIE_TEGRA194
	help
-	  Say Y here if you want support for DesignWare core based PCIe host
-	  controller found in NVIDIA Tegra194 SoC.
+	  Enables support for the PCIe controller in the NVIDIA Tegra194 SoC to
+	  work in host mode. There are two instances of PCIe controllers in
+	  Tegra194. This controller can work either as EP or RC. In order to
+	  enable host-specific features PCIE_TEGRA194_HOST must be selected and
+	  in order to enable device-specific features PCIE_TEGRA194_EP must be
+	  selected. This uses the DesignWare core.
+
+config PCIE_TEGRA194_EP
+	tristate "NVIDIA Tegra194 (and later) PCIe controller - Endpoint Mode"
+	depends on ARCH_TEGRA_194_SOC || COMPILE_TEST
+	depends on PCI_ENDPOINT
+	select PCIE_DW_EP
+	select PHY_TEGRA194_P2U
+	select PCIE_TEGRA194
+	help
+	  Enables support for the PCIe controller in the NVIDIA Tegra194 SoC to
+	  work in endpoint mode. There are two instances of PCIe controllers in
+	  Tegra194. This controller can work either as EP or RC. In order to
+	  enable host-specific features PCIE_TEGRA194_HOST must be selected and
+	  in order to enable device-specific features PCIE_TEGRA194_EP must be
+	  selected. This uses the DesignWare core.

config PCIE_UNIPHIER
	bool "Socionext UniPhier PCIe controllers"
+196 -37
drivers/pci/controller/dwc/pci-dra7xx.c
··· 215 215 return 0; 216 216 } 217 217 218 - static const struct dw_pcie_host_ops dra7xx_pcie_host_ops = { 219 - .host_init = dra7xx_pcie_host_init, 220 - }; 221 - 222 218 static int dra7xx_pcie_intx_map(struct irq_domain *domain, unsigned int irq, 223 219 irq_hw_number_t hwirq) 224 220 { ··· 229 233 .xlate = pci_irqd_intx_xlate, 230 234 }; 231 235 232 - static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp) 236 + static int dra7xx_pcie_handle_msi(struct pcie_port *pp, int index) 233 237 { 234 238 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 235 - struct device *dev = pci->dev; 236 - struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci); 237 - struct device_node *node = dev->of_node; 238 - struct device_node *pcie_intc_node = of_get_next_child(node, NULL); 239 + unsigned long val; 240 + int pos, irq; 239 241 240 - if (!pcie_intc_node) { 241 - dev_err(dev, "No PCIe Intc node found\n"); 242 - return -ENODEV; 242 + val = dw_pcie_readl_dbi(pci, PCIE_MSI_INTR0_STATUS + 243 + (index * MSI_REG_CTRL_BLOCK_SIZE)); 244 + if (!val) 245 + return 0; 246 + 247 + pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL, 0); 248 + while (pos != MAX_MSI_IRQS_PER_CTRL) { 249 + irq = irq_find_mapping(pp->irq_domain, 250 + (index * MAX_MSI_IRQS_PER_CTRL) + pos); 251 + generic_handle_irq(irq); 252 + pos++; 253 + pos = find_next_bit(&val, MAX_MSI_IRQS_PER_CTRL, pos); 243 254 } 244 255 245 - dra7xx->irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX, 246 - &intx_domain_ops, pp); 247 - of_node_put(pcie_intc_node); 248 - if (!dra7xx->irq_domain) { 249 - dev_err(dev, "Failed to get a INTx IRQ domain\n"); 250 - return -ENODEV; 251 - } 252 - 253 - return 0; 256 + return 1; 254 257 } 255 258 256 - static irqreturn_t dra7xx_pcie_msi_irq_handler(int irq, void *arg) 259 + static void dra7xx_pcie_handle_msi_irq(struct pcie_port *pp) 257 260 { 258 - struct dra7xx_pcie *dra7xx = arg; 259 - struct dw_pcie *pci = dra7xx->pci; 260 - struct pcie_port *pp = &pci->pp; 261 + struct dw_pcie *pci = 
to_dw_pcie_from_pp(pp); 262 + int ret, i, count, num_ctrls; 263 + 264 + num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; 265 + 266 + /** 267 + * Need to make sure all MSI status bits read 0 before exiting. 268 + * Else, new MSI IRQs are not registered by the wrapper. Have an 269 + * upperbound for the loop and exit the IRQ in case of IRQ flood 270 + * to avoid locking up system in interrupt context. 271 + */ 272 + count = 0; 273 + do { 274 + ret = 0; 275 + 276 + for (i = 0; i < num_ctrls; i++) 277 + ret |= dra7xx_pcie_handle_msi(pp, i); 278 + count++; 279 + } while (ret && count <= 1000); 280 + 281 + if (count > 1000) 282 + dev_warn_ratelimited(pci->dev, 283 + "Too many MSI IRQs to handle\n"); 284 + } 285 + 286 + static void dra7xx_pcie_msi_irq_handler(struct irq_desc *desc) 287 + { 288 + struct irq_chip *chip = irq_desc_get_chip(desc); 289 + struct dra7xx_pcie *dra7xx; 290 + struct dw_pcie *pci; 291 + struct pcie_port *pp; 261 292 unsigned long reg; 262 293 u32 virq, bit; 263 294 295 + chained_irq_enter(chip, desc); 296 + 297 + pp = irq_desc_get_handler_data(desc); 298 + pci = to_dw_pcie_from_pp(pp); 299 + dra7xx = to_dra7xx_pcie(pci); 300 + 264 301 reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI); 302 + dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI, reg); 265 303 266 304 switch (reg) { 267 305 case MSI: 268 - dw_handle_msi_irq(pp); 306 + dra7xx_pcie_handle_msi_irq(pp); 269 307 break; 270 308 case INTA: 271 309 case INTB: ··· 313 283 break; 314 284 } 315 285 316 - dra7xx_pcie_writel(dra7xx, PCIECTRL_DRA7XX_CONF_IRQSTATUS_MSI, reg); 317 - 318 - return IRQ_HANDLED; 286 + chained_irq_exit(chip, desc); 319 287 } 320 288 321 289 static irqreturn_t dra7xx_pcie_irq_handler(int irq, void *arg) ··· 374 346 375 347 return IRQ_HANDLED; 376 348 } 349 + 350 + static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp) 351 + { 352 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 353 + struct device *dev = pci->dev; 354 + struct dra7xx_pcie 
*dra7xx = to_dra7xx_pcie(pci); 355 + struct device_node *node = dev->of_node; 356 + struct device_node *pcie_intc_node = of_get_next_child(node, NULL); 357 + 358 + if (!pcie_intc_node) { 359 + dev_err(dev, "No PCIe Intc node found\n"); 360 + return -ENODEV; 361 + } 362 + 363 + irq_set_chained_handler_and_data(pp->irq, dra7xx_pcie_msi_irq_handler, 364 + pp); 365 + dra7xx->irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX, 366 + &intx_domain_ops, pp); 367 + of_node_put(pcie_intc_node); 368 + if (!dra7xx->irq_domain) { 369 + dev_err(dev, "Failed to get a INTx IRQ domain\n"); 370 + return -ENODEV; 371 + } 372 + 373 + return 0; 374 + } 375 + 376 + static void dra7xx_pcie_setup_msi_msg(struct irq_data *d, struct msi_msg *msg) 377 + { 378 + struct pcie_port *pp = irq_data_get_irq_chip_data(d); 379 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 380 + u64 msi_target; 381 + 382 + msi_target = (u64)pp->msi_data; 383 + 384 + msg->address_lo = lower_32_bits(msi_target); 385 + msg->address_hi = upper_32_bits(msi_target); 386 + 387 + msg->data = d->hwirq; 388 + 389 + dev_dbg(pci->dev, "msi#%d address_hi %#x address_lo %#x\n", 390 + (int)d->hwirq, msg->address_hi, msg->address_lo); 391 + } 392 + 393 + static int dra7xx_pcie_msi_set_affinity(struct irq_data *d, 394 + const struct cpumask *mask, 395 + bool force) 396 + { 397 + return -EINVAL; 398 + } 399 + 400 + static void dra7xx_pcie_bottom_mask(struct irq_data *d) 401 + { 402 + struct pcie_port *pp = irq_data_get_irq_chip_data(d); 403 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 404 + unsigned int res, bit, ctrl; 405 + unsigned long flags; 406 + 407 + raw_spin_lock_irqsave(&pp->lock, flags); 408 + 409 + ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL; 410 + res = ctrl * MSI_REG_CTRL_BLOCK_SIZE; 411 + bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL; 412 + 413 + pp->irq_mask[ctrl] |= BIT(bit); 414 + dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK + res, 415 + pp->irq_mask[ctrl]); 416 + 417 + raw_spin_unlock_irqrestore(&pp->lock, 
flags); 418 + } 419 + 420 + static void dra7xx_pcie_bottom_unmask(struct irq_data *d) 421 + { 422 + struct pcie_port *pp = irq_data_get_irq_chip_data(d); 423 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 424 + unsigned int res, bit, ctrl; 425 + unsigned long flags; 426 + 427 + raw_spin_lock_irqsave(&pp->lock, flags); 428 + 429 + ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL; 430 + res = ctrl * MSI_REG_CTRL_BLOCK_SIZE; 431 + bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL; 432 + 433 + pp->irq_mask[ctrl] &= ~BIT(bit); 434 + dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK + res, 435 + pp->irq_mask[ctrl]); 436 + 437 + raw_spin_unlock_irqrestore(&pp->lock, flags); 438 + } 439 + 440 + static void dra7xx_pcie_bottom_ack(struct irq_data *d) 441 + { 442 + struct pcie_port *pp = irq_data_get_irq_chip_data(d); 443 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 444 + unsigned int res, bit, ctrl; 445 + 446 + ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL; 447 + res = ctrl * MSI_REG_CTRL_BLOCK_SIZE; 448 + bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL; 449 + 450 + dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_STATUS + res, BIT(bit)); 451 + } 452 + 453 + static struct irq_chip dra7xx_pci_msi_bottom_irq_chip = { 454 + .name = "DRA7XX-PCI-MSI", 455 + .irq_ack = dra7xx_pcie_bottom_ack, 456 + .irq_compose_msi_msg = dra7xx_pcie_setup_msi_msg, 457 + .irq_set_affinity = dra7xx_pcie_msi_set_affinity, 458 + .irq_mask = dra7xx_pcie_bottom_mask, 459 + .irq_unmask = dra7xx_pcie_bottom_unmask, 460 + }; 461 + 462 + static int dra7xx_pcie_msi_host_init(struct pcie_port *pp) 463 + { 464 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 465 + u32 ctrl, num_ctrls; 466 + 467 + pp->msi_irq_chip = &dra7xx_pci_msi_bottom_irq_chip; 468 + 469 + num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; 470 + /* Initialize IRQ Status array */ 471 + for (ctrl = 0; ctrl < num_ctrls; ctrl++) { 472 + pp->irq_mask[ctrl] = ~0; 473 + dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_MASK + 474 + (ctrl * MSI_REG_CTRL_BLOCK_SIZE), 475 + pp->irq_mask[ctrl]); 476 + 
dw_pcie_writel_dbi(pci, PCIE_MSI_INTR0_ENABLE + 477 + (ctrl * MSI_REG_CTRL_BLOCK_SIZE), 478 + ~0); 479 + } 480 + 481 + return dw_pcie_allocate_domains(pp); 482 + } 483 + 484 + static const struct dw_pcie_host_ops dra7xx_pcie_host_ops = { 485 + .host_init = dra7xx_pcie_host_init, 486 + .msi_host_init = dra7xx_pcie_msi_host_init, 487 + }; 377 488 378 489 static void dra7xx_pcie_ep_init(struct dw_pcie_ep *ep) 379 490 { ··· 632 465 if (pp->irq < 0) { 633 466 dev_err(dev, "missing IRQ resource\n"); 634 467 return pp->irq; 635 - } 636 - 637 - ret = devm_request_irq(dev, pp->irq, dra7xx_pcie_msi_irq_handler, 638 - IRQF_SHARED | IRQF_NO_THREAD, 639 - "dra7-pcie-msi", dra7xx); 640 - if (ret) { 641 - dev_err(dev, "failed to request irq\n"); 642 - return ret; 643 468 } 644 469 645 470 ret = dra7xx_pcie_init_irq_domain(pp);
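The dra7xx MSI hunks above repeatedly decompose `d->hwirq` into a control-block index, a register byte offset, and a bit position before touching the `PCIE_MSI_INTR0_MASK`/`STATUS` registers. A small self-contained sketch of that decomposition, using the DesignWare constants as they appear in the diff (32 vectors per control block, 12-byte register stride):

```c
#include <assert.h>
#include <stdint.h>

/* Constants as used in the dra7xx/DesignWare MSI hunks. */
#define MAX_MSI_IRQS_PER_CTRL	32
#define MSI_REG_CTRL_BLOCK_SIZE	12

struct msi_loc {
	unsigned int ctrl;	/* which MSI control block */
	unsigned int reg_off;	/* byte offset added to INTR0_MASK/STATUS */
	uint32_t bit;		/* bit within that block's 32-bit register */
};

/* Same arithmetic the bottom_mask/unmask/ack callbacks perform on
 * d->hwirq before reading or writing the per-block registers. */
static struct msi_loc locate_msi(unsigned int hwirq)
{
	struct msi_loc loc;

	loc.ctrl = hwirq / MAX_MSI_IRQS_PER_CTRL;
	loc.reg_off = loc.ctrl * MSI_REG_CTRL_BLOCK_SIZE;
	loc.bit = 1u << (hwirq % MAX_MSI_IRQS_PER_CTRL);
	return loc;
}
```

Keeping a shadow copy of each block's mask (`pp->irq_mask[ctrl]` in the diff) lets the callbacks set or clear one bit and write the whole register back without a read-modify-write of device state.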
+4 -1
drivers/pci/controller/dwc/pci-keystone.c
···
	case PCI_EPC_IRQ_MSI:
		dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
		break;
+	case PCI_EPC_IRQ_MSIX:
+		dw_pcie_ep_raise_msix_irq(ep, func_no, interrupt_num);
+		break;
	default:
		dev_err(pci->dev, "UNKNOWN IRQ type\n");
		return -EINVAL;
···
static const struct pci_epc_features ks_pcie_am654_epc_features = {
	.linkup_notifier = false,
	.msi_capable = true,
-	.msix_capable = false,
+	.msix_capable = true,
	.reserved_bar = 1 << BAR_0 | 1 << BAR_1,
	.bar_fixed_64bit = 1 << BAR_0,
	.bar_fixed_size[2] = SZ_1M,
+22 -94
drivers/pci/controller/dwc/pci-meson.c
··· 66 66 #define PORT_CLK_RATE 100000000UL 67 67 #define MAX_PAYLOAD_SIZE 256 68 68 #define MAX_READ_REQ_SIZE 256 69 - #define MESON_PCIE_PHY_POWERUP 0x1c 70 69 #define PCIE_RESET_DELAY 500 71 70 #define PCIE_SHARED_RESET 1 72 71 #define PCIE_NORMAL_RESET 0 ··· 80 81 struct meson_pcie_mem_res { 81 82 void __iomem *elbi_base; 82 83 void __iomem *cfg_base; 83 - void __iomem *phy_base; 84 84 }; 85 85 86 86 struct meson_pcie_clk_res { 87 87 struct clk *clk; 88 - struct clk *mipi_gate; 89 88 struct clk *port_clk; 90 89 struct clk *general_clk; 91 90 }; 92 91 93 92 struct meson_pcie_rc_reset { 94 - struct reset_control *phy; 95 93 struct reset_control *port; 96 94 struct reset_control *apb; 97 - }; 98 - 99 - struct meson_pcie_param { 100 - bool has_shared_phy; 101 95 }; 102 96 103 97 struct meson_pcie { ··· 100 108 struct meson_pcie_rc_reset mrst; 101 109 struct gpio_desc *reset_gpio; 102 110 struct phy *phy; 103 - const struct meson_pcie_param *param; 104 111 }; 105 112 106 113 static struct reset_control *meson_pcie_get_reset(struct meson_pcie *mp, ··· 120 129 static int meson_pcie_get_resets(struct meson_pcie *mp) 121 130 { 122 131 struct meson_pcie_rc_reset *mrst = &mp->mrst; 123 - 124 - if (!mp->param->has_shared_phy) { 125 - mrst->phy = meson_pcie_get_reset(mp, "phy", PCIE_SHARED_RESET); 126 - if (IS_ERR(mrst->phy)) 127 - return PTR_ERR(mrst->phy); 128 - reset_control_deassert(mrst->phy); 129 - } 130 132 131 133 mrst->port = meson_pcie_get_reset(mp, "port", PCIE_NORMAL_RESET); 132 134 if (IS_ERR(mrst->port)) ··· 146 162 return devm_ioremap_resource(dev, res); 147 163 } 148 164 149 - static void __iomem *meson_pcie_get_mem_shared(struct platform_device *pdev, 150 - struct meson_pcie *mp, 151 - const char *id) 152 - { 153 - struct device *dev = mp->pci.dev; 154 - struct resource *res; 155 - 156 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, id); 157 - if (!res) { 158 - dev_err(dev, "No REG resource %s\n", id); 159 - return ERR_PTR(-ENXIO); 160 - } 161 - 
162 - return devm_ioremap(dev, res->start, resource_size(res)); 163 - } 164 - 165 165 static int meson_pcie_get_mems(struct platform_device *pdev, 166 166 struct meson_pcie *mp) 167 167 { ··· 157 189 if (IS_ERR(mp->mem_res.cfg_base)) 158 190 return PTR_ERR(mp->mem_res.cfg_base); 159 191 160 - /* Meson AXG SoC has two PCI controllers use same phy register */ 161 - if (!mp->param->has_shared_phy) { 162 - mp->mem_res.phy_base = 163 - meson_pcie_get_mem_shared(pdev, mp, "phy"); 164 - if (IS_ERR(mp->mem_res.phy_base)) 165 - return PTR_ERR(mp->mem_res.phy_base); 166 - } 167 - 168 192 return 0; 169 193 } 170 194 ··· 164 204 { 165 205 int ret = 0; 166 206 167 - if (mp->param->has_shared_phy) { 168 - ret = phy_init(mp->phy); 169 - if (ret) 170 - return ret; 207 + ret = phy_init(mp->phy); 208 + if (ret) 209 + return ret; 171 210 172 - ret = phy_power_on(mp->phy); 173 - if (ret) { 174 - phy_exit(mp->phy); 175 - return ret; 176 - } 177 - } else 178 - writel(MESON_PCIE_PHY_POWERUP, mp->mem_res.phy_base); 211 + ret = phy_power_on(mp->phy); 212 + if (ret) { 213 + phy_exit(mp->phy); 214 + return ret; 215 + } 179 216 180 217 return 0; 218 + } 219 + 220 + static void meson_pcie_power_off(struct meson_pcie *mp) 221 + { 222 + phy_power_off(mp->phy); 223 + phy_exit(mp->phy); 181 224 } 182 225 183 226 static int meson_pcie_reset(struct meson_pcie *mp) ··· 188 225 struct meson_pcie_rc_reset *mrst = &mp->mrst; 189 226 int ret = 0; 190 227 191 - if (mp->param->has_shared_phy) { 192 - ret = phy_reset(mp->phy); 193 - if (ret) 194 - return ret; 195 - } else { 196 - reset_control_assert(mrst->phy); 197 - udelay(PCIE_RESET_DELAY); 198 - reset_control_deassert(mrst->phy); 199 - udelay(PCIE_RESET_DELAY); 200 - } 228 + ret = phy_reset(mp->phy); 229 + if (ret) 230 + return ret; 201 231 202 232 reset_control_assert(mrst->port); 203 233 reset_control_assert(mrst->apb); ··· 241 285 res->port_clk = meson_pcie_probe_clock(dev, "port", PORT_CLK_RATE); 242 286 if (IS_ERR(res->port_clk)) 243 287 return 
PTR_ERR(res->port_clk); 244 - 245 - if (!mp->param->has_shared_phy) { 246 - res->mipi_gate = meson_pcie_probe_clock(dev, "mipi", 0); 247 - if (IS_ERR(res->mipi_gate)) 248 - return PTR_ERR(res->mipi_gate); 249 - } 250 288 251 289 res->general_clk = meson_pcie_probe_clock(dev, "general", 0); 252 290 if (IS_ERR(res->general_clk)) ··· 512 562 513 563 static int meson_pcie_probe(struct platform_device *pdev) 514 564 { 515 - const struct meson_pcie_param *match_data; 516 565 struct device *dev = &pdev->dev; 517 566 struct dw_pcie *pci; 518 567 struct meson_pcie *mp; ··· 525 576 pci->dev = dev; 526 577 pci->ops = &dw_pcie_ops; 527 578 528 - match_data = of_device_get_match_data(dev); 529 - if (!match_data) { 530 - dev_err(dev, "failed to get match data\n"); 531 - return -ENODEV; 532 - } 533 - mp->param = match_data; 534 - 535 - if (mp->param->has_shared_phy) { 536 - mp->phy = devm_phy_get(dev, "pcie"); 537 - if (IS_ERR(mp->phy)) 538 - return PTR_ERR(mp->phy); 579 + mp->phy = devm_phy_get(dev, "pcie"); 580 + if (IS_ERR(mp->phy)) { 581 + dev_err(dev, "get phy failed, %ld\n", PTR_ERR(mp->phy)); 582 + return PTR_ERR(mp->phy); 539 583 } 540 584 541 585 mp->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW); ··· 578 636 return 0; 579 637 580 638 err_phy: 581 - if (mp->param->has_shared_phy) { 582 - phy_power_off(mp->phy); 583 - phy_exit(mp->phy); 584 - } 585 - 639 + meson_pcie_power_off(mp); 586 640 return ret; 587 641 } 588 - 589 - static struct meson_pcie_param meson_pcie_axg_param = { 590 - .has_shared_phy = false, 591 - }; 592 - 593 - static struct meson_pcie_param meson_pcie_g12a_param = { 594 - .has_shared_phy = true, 595 - }; 596 642 597 643 static const struct of_device_id meson_pcie_of_match[] = { 598 644 { 599 645 .compatible = "amlogic,axg-pcie", 600 - .data = &meson_pcie_axg_param, 601 646 }, 602 647 { 603 648 .compatible = "amlogic,g12a-pcie", 604 - .data = &meson_pcie_g12a_param, 605 649 }, 606 650 {}, 607 651 };
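The meson changes above reduce the PHY handling to a plain `phy_init()` / `phy_power_on()` sequence, with `phy_exit()` called when powering on fails so the PHY is never left half-initialized. A userspace sketch of that unwind ordering, with a hypothetical `struct fake_phy` and stub functions standing in for the kernel PHY API:

```c
#include <assert.h>

/* Hypothetical stand-in for a kernel PHY; power_on_err forces
 * phy_power_on() to fail so the error path can be exercised. */
struct fake_phy {
	int inited;
	int powered;
	int power_on_err;
};

static int phy_init(struct fake_phy *p)  { p->inited = 1; return 0; }
static void phy_exit(struct fake_phy *p) { p->inited = 0; }
static int phy_power_on(struct fake_phy *p)
{
	if (p->power_on_err)
		return p->power_on_err;
	p->powered = 1;
	return 0;
}

/* Mirrors meson_pcie_power_on(): undo the init if power-on fails. */
static int power_on(struct fake_phy *p)
{
	int ret = phy_init(p);

	if (ret)
		return ret;
	ret = phy_power_on(p);
	if (ret) {
		phy_exit(p);	/* don't leave the PHY half-initialized */
		return ret;
	}
	return 0;
}

static int power_on_ret(int err)
{
	struct fake_phy p = { 0, 0, err };
	return power_on(&p);
}

static int leaks_init(int err)
{
	struct fake_phy p = { 0, 0, err };
	power_on(&p);
	return (err != 0) && p.inited;
}
```

The matching teardown helper (`meson_pcie_power_off()` in the diff) calls `phy_power_off()` then `phy_exit()`, i.e. the inverse of the setup order.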
+85 -59
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 18 18 19 19 pci_epc_linkup(epc); 20 20 } 21 + EXPORT_SYMBOL_GPL(dw_pcie_ep_linkup); 22 + 23 + void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep) 24 + { 25 + struct pci_epc *epc = ep->epc; 26 + 27 + pci_epc_init_notify(epc); 28 + } 29 + EXPORT_SYMBOL_GPL(dw_pcie_ep_init_notify); 21 30 22 31 static void __dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar, 23 32 int flags) ··· 134 125 135 126 dw_pcie_disable_atu(pci, atu_index, DW_PCIE_REGION_INBOUND); 136 127 clear_bit(atu_index, ep->ib_window_map); 128 + ep->epf_bar[bar] = NULL; 137 129 } 138 130 139 131 static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, ··· 168 158 dw_pcie_writel_dbi(pci, reg + 4, 0); 169 159 } 170 160 161 + ep->epf_bar[bar] = epf_bar; 171 162 dw_pcie_dbi_ro_wr_dis(pci); 172 163 173 164 return 0; ··· 280 269 return val; 281 270 } 282 271 283 - static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts) 272 + static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts, 273 + enum pci_barno bir, u32 offset) 284 274 { 285 275 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 286 276 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); ··· 290 278 if (!ep->msix_cap) 291 279 return -EINVAL; 292 280 281 + dw_pcie_dbi_ro_wr_en(pci); 282 + 293 283 reg = ep->msix_cap + PCI_MSIX_FLAGS; 294 284 val = dw_pcie_readw_dbi(pci, reg); 295 285 val &= ~PCI_MSIX_FLAGS_QSIZE; 296 286 val |= interrupts; 297 - dw_pcie_dbi_ro_wr_en(pci); 298 287 dw_pcie_writew_dbi(pci, reg, val); 288 + 289 + reg = ep->msix_cap + PCI_MSIX_TABLE; 290 + val = offset | bir; 291 + dw_pcie_writel_dbi(pci, reg, val); 292 + 293 + reg = ep->msix_cap + PCI_MSIX_PBA; 294 + val = (offset + (interrupts * PCI_MSIX_ENTRY_SIZE)) | bir; 295 + dw_pcie_writel_dbi(pci, reg, val); 296 + 299 297 dw_pcie_dbi_ro_wr_dis(pci); 300 298 301 299 return 0; ··· 431 409 u16 interrupt_num) 432 410 { 433 411 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 412 + struct pci_epf_msix_tbl *msix_tbl; 434 413 struct pci_epc 
*epc = ep->epc; 435 - u16 tbl_offset, bir; 436 - u32 bar_addr_upper, bar_addr_lower; 437 - u32 msg_addr_upper, msg_addr_lower; 414 + struct pci_epf_bar *epf_bar; 438 415 u32 reg, msg_data, vec_ctrl; 439 - u64 tbl_addr, msg_addr, reg_u64; 440 - void __iomem *msix_tbl; 416 + unsigned int aligned_offset; 417 + u32 tbl_offset; 418 + u64 msg_addr; 441 419 int ret; 420 + u8 bir; 442 421 443 422 reg = ep->msix_cap + PCI_MSIX_TABLE; 444 423 tbl_offset = dw_pcie_readl_dbi(pci, reg); 445 424 bir = (tbl_offset & PCI_MSIX_TABLE_BIR); 446 425 tbl_offset &= PCI_MSIX_TABLE_OFFSET; 447 426 448 - reg = PCI_BASE_ADDRESS_0 + (4 * bir); 449 - bar_addr_upper = 0; 450 - bar_addr_lower = dw_pcie_readl_dbi(pci, reg); 451 - reg_u64 = (bar_addr_lower & PCI_BASE_ADDRESS_MEM_TYPE_MASK); 452 - if (reg_u64 == PCI_BASE_ADDRESS_MEM_TYPE_64) 453 - bar_addr_upper = dw_pcie_readl_dbi(pci, reg + 4); 427 + epf_bar = ep->epf_bar[bir]; 428 + msix_tbl = epf_bar->addr; 429 + msix_tbl = (struct pci_epf_msix_tbl *)((char *)msix_tbl + tbl_offset); 454 430 455 - tbl_addr = ((u64) bar_addr_upper) << 32 | bar_addr_lower; 456 - tbl_addr += (tbl_offset + ((interrupt_num - 1) * PCI_MSIX_ENTRY_SIZE)); 457 - tbl_addr &= PCI_BASE_ADDRESS_MEM_MASK; 458 - 459 - msix_tbl = ioremap(ep->phys_base + tbl_addr, 460 - PCI_MSIX_ENTRY_SIZE); 461 - if (!msix_tbl) 462 - return -EINVAL; 463 - 464 - msg_addr_lower = readl(msix_tbl + PCI_MSIX_ENTRY_LOWER_ADDR); 465 - msg_addr_upper = readl(msix_tbl + PCI_MSIX_ENTRY_UPPER_ADDR); 466 - msg_addr = ((u64) msg_addr_upper) << 32 | msg_addr_lower; 467 - msg_data = readl(msix_tbl + PCI_MSIX_ENTRY_DATA); 468 - vec_ctrl = readl(msix_tbl + PCI_MSIX_ENTRY_VECTOR_CTRL); 469 - 470 - iounmap(msix_tbl); 431 + msg_addr = msix_tbl[(interrupt_num - 1)].msg_addr; 432 + msg_data = msix_tbl[(interrupt_num - 1)].msg_data; 433 + vec_ctrl = msix_tbl[(interrupt_num - 1)].vector_ctrl; 471 434 472 435 if (vec_ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT) { 473 436 dev_dbg(pci->dev, "MSI-X entry ctrl set\n"); 474 437 
return -EPERM; 475 438 } 476 439 477 - ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr, 440 + aligned_offset = msg_addr & (epc->mem->page_size - 1); 441 + ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr, 478 442 epc->mem->page_size); 479 443 if (ret) 480 444 return ret; 481 445 482 - writel(msg_data, ep->msi_mem); 446 + writel(msg_data, ep->msi_mem + aligned_offset); 483 447 484 448 dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys); 485 449 ··· 500 492 return 0; 501 493 } 502 494 495 + int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep) 496 + { 497 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 498 + unsigned int offset; 499 + unsigned int nbars; 500 + u8 hdr_type; 501 + u32 reg; 502 + int i; 503 + 504 + hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE); 505 + if (hdr_type != PCI_HEADER_TYPE_NORMAL) { 506 + dev_err(pci->dev, 507 + "PCIe controller is not set to EP mode (hdr_type:0x%x)!\n", 508 + hdr_type); 509 + return -EIO; 510 + } 511 + 512 + ep->msi_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI); 513 + 514 + ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX); 515 + 516 + offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR); 517 + if (offset) { 518 + reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 519 + nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >> 520 + PCI_REBAR_CTRL_NBAR_SHIFT; 521 + 522 + dw_pcie_dbi_ro_wr_en(pci); 523 + for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL) 524 + dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0); 525 + dw_pcie_dbi_ro_wr_dis(pci); 526 + } 527 + 528 + dw_pcie_setup(pci); 529 + 530 + return 0; 531 + } 532 + EXPORT_SYMBOL_GPL(dw_pcie_ep_init_complete); 533 + 503 534 int dw_pcie_ep_init(struct dw_pcie_ep *ep) 504 535 { 505 - int i; 506 536 int ret; 507 - u32 reg; 508 537 void *addr; 509 - u8 hdr_type; 510 - unsigned int nbars; 511 - unsigned int offset; 512 538 struct pci_epc *epc; 513 539 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 514 540 struct 
device *dev = pci->dev; 515 541 struct device_node *np = dev->of_node; 542 + const struct pci_epc_features *epc_features; 516 543 517 544 if (!pci->dbi_base || !pci->dbi_base2) { 518 545 dev_err(dev, "dbi_base/dbi_base2 is not populated\n"); ··· 606 563 if (ep->ops->ep_init) 607 564 ep->ops->ep_init(ep); 608 565 609 - hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE); 610 - if (hdr_type != PCI_HEADER_TYPE_NORMAL) { 611 - dev_err(pci->dev, "PCIe controller is not set to EP mode (hdr_type:0x%x)!\n", 612 - hdr_type); 613 - return -EIO; 614 - } 615 - 616 566 ret = of_property_read_u8(np, "max-functions", &epc->max_functions); 617 567 if (ret < 0) 618 568 epc->max_functions = 1; ··· 623 587 dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n"); 624 588 return -ENOMEM; 625 589 } 626 - ep->msi_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI); 627 590 628 - ep->msix_cap = dw_pcie_find_capability(pci, PCI_CAP_ID_MSIX); 629 - 630 - offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR); 631 - if (offset) { 632 - reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 633 - nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >> 634 - PCI_REBAR_CTRL_NBAR_SHIFT; 635 - 636 - dw_pcie_dbi_ro_wr_en(pci); 637 - for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL) 638 - dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0); 639 - dw_pcie_dbi_ro_wr_dis(pci); 591 + if (ep->ops->get_features) { 592 + epc_features = ep->ops->get_features(ep); 593 + if (epc_features->core_init_notifier) 594 + return 0; 640 595 } 641 596 642 - dw_pcie_setup(pci); 643 - 644 - return 0; 597 + return dw_pcie_ep_init_complete(ep); 645 598 } 599 + EXPORT_SYMBOL_GPL(dw_pcie_ep_init);
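The `dw_pcie_ep_set_msix()` hunk above now programs the MSI-X Table and PBA registers itself: each register packs a BAR Indicator (BIR) in the low 3 bits and the offset in the remaining bits, and the PBA is placed immediately after the table (one 16-byte entry per vector, mirroring the diff's `offset + interrupts * PCI_MSIX_ENTRY_SIZE`). A small sketch of that encoding and the matching decode done by `dw_pcie_ep_raise_msix_irq()`:

```c
#include <assert.h>
#include <stdint.h>

/* Per the MSI-X capability layout: low 3 bits of the Table/PBA
 * registers carry the BIR, the rest the offset within that BAR. */
#define PCI_MSIX_ENTRY_SIZE	16
#define PCI_MSIX_BIR_MASK	0x7u

static uint32_t msix_table_reg(uint32_t offset, uint8_t bir)
{
	return offset | bir;
}

/* PBA laid out right after the table, as in the set_msix hunk. */
static uint32_t msix_pba_reg(uint32_t offset, uint16_t interrupts,
			     uint8_t bir)
{
	return (offset + (uint32_t)interrupts * PCI_MSIX_ENTRY_SIZE) | bir;
}

/* Decode, as done before indexing ep->epf_bar[bir] in raise_msix_irq. */
static uint8_t msix_reg_bir(uint32_t reg)
{
	return reg & PCI_MSIX_BIR_MASK;
}

static uint32_t msix_reg_offset(uint32_t reg)
{
	return reg & ~PCI_MSIX_BIR_MASK;
}
```

Because the table now lives in a BAR backed by `ep->epf_bar[bir]->addr`, the raise path can index it as a `struct pci_epf_msix_tbl` array directly instead of the old read-BAR-and-`ioremap()` dance.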
+12
drivers/pci/controller/dwc/pcie-designware.h
···
	phys_addr_t msi_mem_phys;
	u8 msi_cap;	/* MSI capability offset */
	u8 msix_cap;	/* MSI-X capability offset */
+	struct pci_epf_bar *epf_bar[PCI_STD_NUM_BARS];
};

struct dw_pcie_ops {
···
#ifdef CONFIG_PCIE_DW_EP
void dw_pcie_ep_linkup(struct dw_pcie_ep *ep);
int dw_pcie_ep_init(struct dw_pcie_ep *ep);
+int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep);
+void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep);
void dw_pcie_ep_exit(struct dw_pcie_ep *ep);
int dw_pcie_ep_raise_legacy_irq(struct dw_pcie_ep *ep, u8 func_no);
int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
···
static inline int dw_pcie_ep_init(struct dw_pcie_ep *ep)
{
	return 0;
}
+
+static inline int dw_pcie_ep_init_complete(struct dw_pcie_ep *ep)
+{
+	return 0;
+}
+
+static inline void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep)
+{
+}

static inline void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
+7 -1
drivers/pci/controller/dwc/pcie-qcom.c
···
{
	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
}
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, PCI_ANY_ID, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0101, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0104, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0106, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0107, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000, qcom_fixup_class);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001, qcom_fixup_class);

static struct platform_driver qcom_pcie_driver = {
	.probe = qcom_pcie_probe,
+695 -17
drivers/pci/controller/dwc/pcie-tegra194.c
··· 11 11 #include <linux/debugfs.h> 12 12 #include <linux/delay.h> 13 13 #include <linux/gpio.h> 14 + #include <linux/gpio/consumer.h> 14 15 #include <linux/interrupt.h> 15 16 #include <linux/iopoll.h> 16 17 #include <linux/kernel.h> ··· 54 53 #define APPL_INTR_EN_L0_0_LINK_STATE_INT_EN BIT(0) 55 54 #define APPL_INTR_EN_L0_0_MSI_RCV_INT_EN BIT(4) 56 55 #define APPL_INTR_EN_L0_0_INT_INT_EN BIT(8) 56 + #define APPL_INTR_EN_L0_0_PCI_CMD_EN_INT_EN BIT(15) 57 57 #define APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN BIT(19) 58 58 #define APPL_INTR_EN_L0_0_SYS_INTR_EN BIT(30) 59 59 #define APPL_INTR_EN_L0_0_SYS_MSI_INTR_EN BIT(31) ··· 62 60 #define APPL_INTR_STATUS_L0 0xC 63 61 #define APPL_INTR_STATUS_L0_LINK_STATE_INT BIT(0) 64 62 #define APPL_INTR_STATUS_L0_INT_INT BIT(8) 63 + #define APPL_INTR_STATUS_L0_PCI_CMD_EN_INT BIT(15) 64 + #define APPL_INTR_STATUS_L0_PEX_RST_INT BIT(16) 65 65 #define APPL_INTR_STATUS_L0_CDM_REG_CHK_INT BIT(18) 66 66 67 67 #define APPL_INTR_EN_L1_0_0 0x1C 68 68 #define APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN BIT(1) 69 + #define APPL_INTR_EN_L1_0_0_RDLH_LINK_UP_INT_EN BIT(3) 70 + #define APPL_INTR_EN_L1_0_0_HOT_RESET_DONE_INT_EN BIT(30) 69 71 70 72 #define APPL_INTR_STATUS_L1_0_0 0x20 71 73 #define APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED BIT(1) 74 + #define APPL_INTR_STATUS_L1_0_0_RDLH_LINK_UP_CHGED BIT(3) 75 + #define APPL_INTR_STATUS_L1_0_0_HOT_RESET_DONE BIT(30) 72 76 73 77 #define APPL_INTR_STATUS_L1_1 0x2C 74 78 #define APPL_INTR_STATUS_L1_2 0x30 75 79 #define APPL_INTR_STATUS_L1_3 0x34 76 80 #define APPL_INTR_STATUS_L1_6 0x3C 77 81 #define APPL_INTR_STATUS_L1_7 0x40 82 + #define APPL_INTR_STATUS_L1_15_CFG_BME_CHGED BIT(1) 78 83 79 84 #define APPL_INTR_EN_L1_8_0 0x44 80 85 #define APPL_INTR_EN_L1_8_BW_MGT_INT_EN BIT(2) ··· 112 103 #define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR BIT(1) 113 104 #define APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR BIT(0) 114 105 106 + #define APPL_MSI_CTRL_1 0xAC 107 + 115 108 #define APPL_MSI_CTRL_2 
0xB0 109 + 110 + #define APPL_LEGACY_INTX 0xB8 116 111 117 112 #define APPL_LTR_MSG_1 0xC4 118 113 #define LTR_MSG_REQ BIT(15) ··· 218 205 #define AMBA_ERROR_RESPONSE_CRS_OKAY_FFFFFFFF 1 219 206 #define AMBA_ERROR_RESPONSE_CRS_OKAY_FFFF0001 2 220 207 208 + #define MSIX_ADDR_MATCH_LOW_OFF 0x940 209 + #define MSIX_ADDR_MATCH_LOW_OFF_EN BIT(0) 210 + #define MSIX_ADDR_MATCH_LOW_OFF_MASK GENMASK(31, 2) 211 + 212 + #define MSIX_ADDR_MATCH_HIGH_OFF 0x944 213 + #define MSIX_ADDR_MATCH_HIGH_OFF_MASK GENMASK(31, 0) 214 + 221 215 #define PORT_LOGIC_MSIX_DOORBELL 0x948 222 216 223 217 #define CAP_SPCIE_CAP_OFF 0x154 ··· 242 222 #define GEN2_CORE_CLK_FREQ 125000000 243 223 #define GEN3_CORE_CLK_FREQ 250000000 244 224 #define GEN4_CORE_CLK_FREQ 500000000 225 + 226 + #define LTR_MSG_TIMEOUT (100 * 1000) 227 + 228 + #define PERST_DEBOUNCE_TIME (5 * 1000) 229 + 230 + #define EP_STATE_DISABLED 0 231 + #define EP_STATE_ENABLED 1 245 232 246 233 static const unsigned int pcie_gen_freq[] = { 247 234 GEN1_CORE_CLK_FREQ, ··· 287 260 struct dw_pcie pci; 288 261 struct tegra_bpmp *bpmp; 289 262 263 + enum dw_pcie_device_mode mode; 264 + 290 265 bool supports_clkreq; 291 266 bool enable_cdm_check; 292 267 bool link_state; ··· 312 283 struct phy **phys; 313 284 314 285 struct dentry *debugfs; 286 + 287 + /* Endpoint mode specific */ 288 + struct gpio_desc *pex_rst_gpiod; 289 + struct gpio_desc *pex_refclk_sel_gpiod; 290 + unsigned int pex_rst_irq; 291 + int ep_state; 292 + }; 293 + 294 + struct tegra_pcie_dw_of_data { 295 + enum dw_pcie_device_mode mode; 315 296 }; 316 297 317 298 static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci) ··· 378 339 } 379 340 } 380 341 381 - static irqreturn_t tegra_pcie_rp_irq_handler(struct tegra_pcie_dw *pcie) 342 + static irqreturn_t tegra_pcie_rp_irq_handler(int irq, void *arg) 382 343 { 344 + struct tegra_pcie_dw *pcie = arg; 383 345 struct dw_pcie *pci = &pcie->pci; 384 346 struct pcie_port *pp = &pci->pp; 385 347 u32 val, tmp; ··· 451 
411 return IRQ_HANDLED; 452 412 } 453 413 454 - static irqreturn_t tegra_pcie_irq_handler(int irq, void *arg) 414 + static void pex_ep_event_hot_rst_done(struct tegra_pcie_dw *pcie) 415 + { 416 + u32 val; 417 + 418 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0); 419 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0); 420 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_1); 421 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_2); 422 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_3); 423 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_6); 424 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_7); 425 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_8_0); 426 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_9); 427 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_10); 428 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_11); 429 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_13); 430 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_14); 431 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_15); 432 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_17); 433 + appl_writel(pcie, 0xFFFFFFFF, APPL_MSI_CTRL_2); 434 + 435 + val = appl_readl(pcie, APPL_CTRL); 436 + val |= APPL_CTRL_LTSSM_EN; 437 + appl_writel(pcie, val, APPL_CTRL); 438 + } 439 + 440 + static irqreturn_t tegra_pcie_ep_irq_thread(int irq, void *arg) 455 441 { 456 442 struct tegra_pcie_dw *pcie = arg; 443 + struct dw_pcie *pci = &pcie->pci; 444 + u32 val, speed; 457 445 458 - return tegra_pcie_rp_irq_handler(pcie); 446 + speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) & 447 + PCI_EXP_LNKSTA_CLS; 448 + clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]); 449 + 450 + /* If EP doesn't advertise L1SS, just return */ 451 + val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub); 452 + if (!(val & (PCI_L1SS_CAP_ASPM_L1_1 | PCI_L1SS_CAP_ASPM_L1_2))) 453 + return IRQ_HANDLED; 454 + 455 + /* Check if BME is set to '1' */ 456 + val = 
dw_pcie_readl_dbi(pci, PCI_COMMAND); 457 + if (val & PCI_COMMAND_MASTER) { 458 + ktime_t timeout; 459 + 460 + /* 110us for both snoop and no-snoop */ 461 + val = 110 | (2 << PCI_LTR_SCALE_SHIFT) | LTR_MSG_REQ; 462 + val |= (val << LTR_MST_NO_SNOOP_SHIFT); 463 + appl_writel(pcie, val, APPL_LTR_MSG_1); 464 + 465 + /* Send LTR upstream */ 466 + val = appl_readl(pcie, APPL_LTR_MSG_2); 467 + val |= APPL_LTR_MSG_2_LTR_MSG_REQ_STATE; 468 + appl_writel(pcie, val, APPL_LTR_MSG_2); 469 + 470 + timeout = ktime_add_us(ktime_get(), LTR_MSG_TIMEOUT); 471 + for (;;) { 472 + val = appl_readl(pcie, APPL_LTR_MSG_2); 473 + if (!(val & APPL_LTR_MSG_2_LTR_MSG_REQ_STATE)) 474 + break; 475 + if (ktime_after(ktime_get(), timeout)) 476 + break; 477 + usleep_range(1000, 1100); 478 + } 479 + if (val & APPL_LTR_MSG_2_LTR_MSG_REQ_STATE) 480 + dev_err(pcie->dev, "Failed to send LTR message\n"); 481 + } 482 + 483 + return IRQ_HANDLED; 484 + } 485 + 486 + static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg) 487 + { 488 + struct tegra_pcie_dw *pcie = arg; 489 + struct dw_pcie_ep *ep = &pcie->pci.ep; 490 + int spurious = 1; 491 + u32 val, tmp; 492 + 493 + val = appl_readl(pcie, APPL_INTR_STATUS_L0); 494 + if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) { 495 + val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0); 496 + appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0); 497 + 498 + if (val & APPL_INTR_STATUS_L1_0_0_HOT_RESET_DONE) 499 + pex_ep_event_hot_rst_done(pcie); 500 + 501 + if (val & APPL_INTR_STATUS_L1_0_0_RDLH_LINK_UP_CHGED) { 502 + tmp = appl_readl(pcie, APPL_LINK_STATUS); 503 + if (tmp & APPL_LINK_STATUS_RDLH_LINK_UP) { 504 + dev_dbg(pcie->dev, "Link is up with Host\n"); 505 + dw_pcie_ep_linkup(ep); 506 + } 507 + } 508 + 509 + spurious = 0; 510 + } 511 + 512 + if (val & APPL_INTR_STATUS_L0_PCI_CMD_EN_INT) { 513 + val = appl_readl(pcie, APPL_INTR_STATUS_L1_15); 514 + appl_writel(pcie, val, APPL_INTR_STATUS_L1_15); 515 + 516 + if (val & APPL_INTR_STATUS_L1_15_CFG_BME_CHGED) 517 + return 
IRQ_WAKE_THREAD; 518 + 519 + spurious = 0; 520 + } 521 + 522 + if (spurious) { 523 + dev_warn(pcie->dev, "Random interrupt (STATUS = 0x%08X)\n", 524 + val); 525 + appl_writel(pcie, val, APPL_INTR_STATUS_L0); 526 + } 527 + 528 + return IRQ_HANDLED; 459 529 } 460 530 461 531 static int tegra_pcie_dw_rd_own_conf(struct pcie_port *pp, int where, int size, ··· 1034 884 pp->num_vectors = MAX_MSI_IRQS; 1035 885 } 1036 886 887 + static int tegra_pcie_dw_start_link(struct dw_pcie *pci) 888 + { 889 + struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); 890 + 891 + enable_irq(pcie->pex_rst_irq); 892 + 893 + return 0; 894 + } 895 + 896 + static void tegra_pcie_dw_stop_link(struct dw_pcie *pci) 897 + { 898 + struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); 899 + 900 + disable_irq(pcie->pex_rst_irq); 901 + } 902 + 1037 903 static const struct dw_pcie_ops tegra_dw_pcie_ops = { 1038 904 .link_up = tegra_pcie_dw_link_up, 905 + .start_link = tegra_pcie_dw_start_link, 906 + .stop_link = tegra_pcie_dw_stop_link, 1039 907 }; 1040 908 1041 909 static struct dw_pcie_host_ops tegra_pcie_dw_host_ops = { ··· 1154 986 pcie->enable_cdm_check = 1155 987 of_property_read_bool(np, "snps,enable-cdm-check"); 1156 988 989 + if (pcie->mode == DW_PCIE_RC_TYPE) 990 + return 0; 991 + 992 + /* Endpoint mode specific DT entries */ 993 + pcie->pex_rst_gpiod = devm_gpiod_get(pcie->dev, "reset", GPIOD_IN); 994 + if (IS_ERR(pcie->pex_rst_gpiod)) { 995 + int err = PTR_ERR(pcie->pex_rst_gpiod); 996 + const char *level = KERN_ERR; 997 + 998 + if (err == -EPROBE_DEFER) 999 + level = KERN_DEBUG; 1000 + 1001 + dev_printk(level, pcie->dev, 1002 + dev_fmt("Failed to get PERST GPIO: %d\n"), 1003 + err); 1004 + return err; 1005 + } 1006 + 1007 + pcie->pex_refclk_sel_gpiod = devm_gpiod_get(pcie->dev, 1008 + "nvidia,refclk-select", 1009 + GPIOD_OUT_HIGH); 1010 + if (IS_ERR(pcie->pex_refclk_sel_gpiod)) { 1011 + int err = PTR_ERR(pcie->pex_refclk_sel_gpiod); 1012 + const char *level = KERN_ERR; 1013 + 1014 + if (err == 
-EPROBE_DEFER) 1015 + level = KERN_DEBUG; 1016 + 1017 + dev_printk(level, pcie->dev, 1018 + dev_fmt("Failed to get REFCLK select GPIOs: %d\n"), 1019 + err); 1020 + pcie->pex_refclk_sel_gpiod = NULL; 1021 + } 1022 + 1157 1023 return 0; 1158 1024 } 1159 1025 ··· 1208 1006 req.cmd = CMD_UPHY_PCIE_CONTROLLER_STATE; 1209 1007 req.controller_state.pcie_controller = pcie->cid; 1210 1008 req.controller_state.enable = enable; 1009 + 1010 + memset(&msg, 0, sizeof(msg)); 1011 + msg.mrq = MRQ_UPHY; 1012 + msg.tx.data = &req; 1013 + msg.tx.size = sizeof(req); 1014 + msg.rx.data = &resp; 1015 + msg.rx.size = sizeof(resp); 1016 + 1017 + return tegra_bpmp_transfer(pcie->bpmp, &msg); 1018 + } 1019 + 1020 + static int tegra_pcie_bpmp_set_pll_state(struct tegra_pcie_dw *pcie, 1021 + bool enable) 1022 + { 1023 + struct mrq_uphy_response resp; 1024 + struct tegra_bpmp_message msg; 1025 + struct mrq_uphy_request req; 1026 + 1027 + memset(&req, 0, sizeof(req)); 1028 + memset(&resp, 0, sizeof(resp)); 1029 + 1030 + if (enable) { 1031 + req.cmd = CMD_UPHY_PCIE_EP_CONTROLLER_PLL_INIT; 1032 + req.ep_ctrlr_pll_init.ep_controller = pcie->cid; 1033 + } else { 1034 + req.cmd = CMD_UPHY_PCIE_EP_CONTROLLER_PLL_OFF; 1035 + req.ep_ctrlr_pll_off.ep_controller = pcie->cid; 1036 + } 1211 1037 1212 1038 memset(&msg, 0, sizeof(msg)); 1213 1039 msg.mrq = MRQ_UPHY; ··· 1657 1427 return ret; 1658 1428 } 1659 1429 1430 + static void pex_ep_event_pex_rst_assert(struct tegra_pcie_dw *pcie) 1431 + { 1432 + u32 val; 1433 + int ret; 1434 + 1435 + if (pcie->ep_state == EP_STATE_DISABLED) 1436 + return; 1437 + 1438 + /* Disable LTSSM */ 1439 + val = appl_readl(pcie, APPL_CTRL); 1440 + val &= ~APPL_CTRL_LTSSM_EN; 1441 + appl_writel(pcie, val, APPL_CTRL); 1442 + 1443 + ret = readl_poll_timeout(pcie->appl_base + APPL_DEBUG, val, 1444 + ((val & APPL_DEBUG_LTSSM_STATE_MASK) >> 1445 + APPL_DEBUG_LTSSM_STATE_SHIFT) == 1446 + LTSSM_STATE_PRE_DETECT, 1447 + 1, LTSSM_TIMEOUT); 1448 + if (ret) 1449 + dev_err(pcie->dev, "Failed 
to go Detect state: %d\n", ret); 1450 + 1451 + reset_control_assert(pcie->core_rst); 1452 + 1453 + tegra_pcie_disable_phy(pcie); 1454 + 1455 + reset_control_assert(pcie->core_apb_rst); 1456 + 1457 + clk_disable_unprepare(pcie->core_clk); 1458 + 1459 + pm_runtime_put_sync(pcie->dev); 1460 + 1461 + ret = tegra_pcie_bpmp_set_pll_state(pcie, false); 1462 + if (ret) 1463 + dev_err(pcie->dev, "Failed to turn off UPHY: %d\n", ret); 1464 + 1465 + pcie->ep_state = EP_STATE_DISABLED; 1466 + dev_dbg(pcie->dev, "Uninitialization of endpoint is completed\n"); 1467 + } 1468 + 1469 + static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie) 1470 + { 1471 + struct dw_pcie *pci = &pcie->pci; 1472 + struct dw_pcie_ep *ep = &pci->ep; 1473 + struct device *dev = pcie->dev; 1474 + u32 val; 1475 + int ret; 1476 + 1477 + if (pcie->ep_state == EP_STATE_ENABLED) 1478 + return; 1479 + 1480 + ret = pm_runtime_get_sync(dev); 1481 + if (ret < 0) { 1482 + dev_err(dev, "Failed to get runtime sync for PCIe dev: %d\n", 1483 + ret); 1484 + return; 1485 + } 1486 + 1487 + ret = tegra_pcie_bpmp_set_pll_state(pcie, true); 1488 + if (ret) { 1489 + dev_err(dev, "Failed to init UPHY for PCIe EP: %d\n", ret); 1490 + goto fail_pll_init; 1491 + } 1492 + 1493 + ret = clk_prepare_enable(pcie->core_clk); 1494 + if (ret) { 1495 + dev_err(dev, "Failed to enable core clock: %d\n", ret); 1496 + goto fail_core_clk_enable; 1497 + } 1498 + 1499 + ret = reset_control_deassert(pcie->core_apb_rst); 1500 + if (ret) { 1501 + dev_err(dev, "Failed to deassert core APB reset: %d\n", ret); 1502 + goto fail_core_apb_rst; 1503 + } 1504 + 1505 + ret = tegra_pcie_enable_phy(pcie); 1506 + if (ret) { 1507 + dev_err(dev, "Failed to enable PHY: %d\n", ret); 1508 + goto fail_phy; 1509 + } 1510 + 1511 + /* Clear any stale interrupt statuses */ 1512 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0); 1513 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0); 1514 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_1); 
1515 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_2); 1516 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_3); 1517 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_6); 1518 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_7); 1519 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_8_0); 1520 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_9); 1521 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_10); 1522 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_11); 1523 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_13); 1524 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_14); 1525 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_15); 1526 + appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_17); 1527 + 1528 + /* configure this core for EP mode operation */ 1529 + val = appl_readl(pcie, APPL_DM_TYPE); 1530 + val &= ~APPL_DM_TYPE_MASK; 1531 + val |= APPL_DM_TYPE_EP; 1532 + appl_writel(pcie, val, APPL_DM_TYPE); 1533 + 1534 + appl_writel(pcie, 0x0, APPL_CFG_SLCG_OVERRIDE); 1535 + 1536 + val = appl_readl(pcie, APPL_CTRL); 1537 + val |= APPL_CTRL_SYS_PRE_DET_STATE; 1538 + val |= APPL_CTRL_HW_HOT_RST_EN; 1539 + appl_writel(pcie, val, APPL_CTRL); 1540 + 1541 + val = appl_readl(pcie, APPL_CFG_MISC); 1542 + val |= APPL_CFG_MISC_SLV_EP_MODE; 1543 + val |= (APPL_CFG_MISC_ARCACHE_VAL << APPL_CFG_MISC_ARCACHE_SHIFT); 1544 + appl_writel(pcie, val, APPL_CFG_MISC); 1545 + 1546 + val = appl_readl(pcie, APPL_PINMUX); 1547 + val |= APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN; 1548 + val |= APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE; 1549 + appl_writel(pcie, val, APPL_PINMUX); 1550 + 1551 + appl_writel(pcie, pcie->dbi_res->start & APPL_CFG_BASE_ADDR_MASK, 1552 + APPL_CFG_BASE_ADDR); 1553 + 1554 + appl_writel(pcie, pcie->atu_dma_res->start & 1555 + APPL_CFG_IATU_DMA_BASE_ADDR_MASK, 1556 + APPL_CFG_IATU_DMA_BASE_ADDR); 1557 + 1558 + val = appl_readl(pcie, APPL_INTR_EN_L0_0); 1559 + val |= APPL_INTR_EN_L0_0_SYS_INTR_EN; 1560 + val |= 
APPL_INTR_EN_L0_0_LINK_STATE_INT_EN; 1561 + val |= APPL_INTR_EN_L0_0_PCI_CMD_EN_INT_EN; 1562 + appl_writel(pcie, val, APPL_INTR_EN_L0_0); 1563 + 1564 + val = appl_readl(pcie, APPL_INTR_EN_L1_0_0); 1565 + val |= APPL_INTR_EN_L1_0_0_HOT_RESET_DONE_INT_EN; 1566 + val |= APPL_INTR_EN_L1_0_0_RDLH_LINK_UP_INT_EN; 1567 + appl_writel(pcie, val, APPL_INTR_EN_L1_0_0); 1568 + 1569 + reset_control_deassert(pcie->core_rst); 1570 + 1571 + if (pcie->update_fc_fixup) { 1572 + val = dw_pcie_readl_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF); 1573 + val |= 0x1 << CFG_TIMER_CTRL_ACK_NAK_SHIFT; 1574 + dw_pcie_writel_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF, val); 1575 + } 1576 + 1577 + config_gen3_gen4_eq_presets(pcie); 1578 + 1579 + init_host_aspm(pcie); 1580 + 1581 + /* Disable ASPM-L1SS advertisement if there is no CLKREQ routing */ 1582 + if (!pcie->supports_clkreq) { 1583 + disable_aspm_l11(pcie); 1584 + disable_aspm_l12(pcie); 1585 + } 1586 + 1587 + val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); 1588 + val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL; 1589 + dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val); 1590 + 1591 + /* Configure N_FTS & FTS */ 1592 + val = dw_pcie_readl_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL); 1593 + val &= ~(N_FTS_MASK << N_FTS_SHIFT); 1594 + val |= N_FTS_VAL << N_FTS_SHIFT; 1595 + dw_pcie_writel_dbi(pci, PORT_LOGIC_ACK_F_ASPM_CTRL, val); 1596 + 1597 + val = dw_pcie_readl_dbi(pci, PORT_LOGIC_GEN2_CTRL); 1598 + val &= ~FTS_MASK; 1599 + val |= FTS_VAL; 1600 + dw_pcie_writel_dbi(pci, PORT_LOGIC_GEN2_CTRL, val); 1601 + 1602 + /* Configure Max Speed from DT */ 1603 + if (pcie->max_speed && pcie->max_speed != -EINVAL) { 1604 + val = dw_pcie_readl_dbi(pci, pcie->pcie_cap_base + 1605 + PCI_EXP_LNKCAP); 1606 + val &= ~PCI_EXP_LNKCAP_SLS; 1607 + val |= pcie->max_speed; 1608 + dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, 1609 + val); 1610 + } 1611 + 1612 + pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci, 1613 + PCI_CAP_ID_EXP); 1614 + 
clk_set_rate(pcie->core_clk, GEN4_CORE_CLK_FREQ); 1615 + 1616 + val = (ep->msi_mem_phys & MSIX_ADDR_MATCH_LOW_OFF_MASK); 1617 + val |= MSIX_ADDR_MATCH_LOW_OFF_EN; 1618 + dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_LOW_OFF, val); 1619 + val = (lower_32_bits(ep->msi_mem_phys) & MSIX_ADDR_MATCH_HIGH_OFF_MASK); 1620 + dw_pcie_writel_dbi(pci, MSIX_ADDR_MATCH_HIGH_OFF, val); 1621 + 1622 + ret = dw_pcie_ep_init_complete(ep); 1623 + if (ret) { 1624 + dev_err(dev, "Failed to complete initialization: %d\n", ret); 1625 + goto fail_init_complete; 1626 + } 1627 + 1628 + dw_pcie_ep_init_notify(ep); 1629 + 1630 + /* Enable LTSSM */ 1631 + val = appl_readl(pcie, APPL_CTRL); 1632 + val |= APPL_CTRL_LTSSM_EN; 1633 + appl_writel(pcie, val, APPL_CTRL); 1634 + 1635 + pcie->ep_state = EP_STATE_ENABLED; 1636 + dev_dbg(dev, "Initialization of endpoint is completed\n"); 1637 + 1638 + return; 1639 + 1640 + fail_init_complete: 1641 + reset_control_assert(pcie->core_rst); 1642 + tegra_pcie_disable_phy(pcie); 1643 + fail_phy: 1644 + reset_control_assert(pcie->core_apb_rst); 1645 + fail_core_apb_rst: 1646 + clk_disable_unprepare(pcie->core_clk); 1647 + fail_core_clk_enable: 1648 + tegra_pcie_bpmp_set_pll_state(pcie, false); 1649 + fail_pll_init: 1650 + pm_runtime_put_sync(dev); 1651 + } 1652 + 1653 + static irqreturn_t tegra_pcie_ep_pex_rst_irq(int irq, void *arg) 1654 + { 1655 + struct tegra_pcie_dw *pcie = arg; 1656 + 1657 + if (gpiod_get_value(pcie->pex_rst_gpiod)) 1658 + pex_ep_event_pex_rst_assert(pcie); 1659 + else 1660 + pex_ep_event_pex_rst_deassert(pcie); 1661 + 1662 + return IRQ_HANDLED; 1663 + } 1664 + 1665 + static int tegra_pcie_ep_raise_legacy_irq(struct tegra_pcie_dw *pcie, u16 irq) 1666 + { 1667 + /* Tegra194 supports only INTA */ 1668 + if (irq > 1) 1669 + return -EINVAL; 1670 + 1671 + appl_writel(pcie, 1, APPL_LEGACY_INTX); 1672 + usleep_range(1000, 2000); 1673 + appl_writel(pcie, 0, APPL_LEGACY_INTX); 1674 + return 0; 1675 + } 1676 + 1677 + static int 
tegra_pcie_ep_raise_msi_irq(struct tegra_pcie_dw *pcie, u16 irq) 1678 + { 1679 + if (unlikely(irq > 31)) 1680 + return -EINVAL; 1681 + 1682 + appl_writel(pcie, (1 << irq), APPL_MSI_CTRL_1); 1683 + 1684 + return 0; 1685 + } 1686 + 1687 + static int tegra_pcie_ep_raise_msix_irq(struct tegra_pcie_dw *pcie, u16 irq) 1688 + { 1689 + struct dw_pcie_ep *ep = &pcie->pci.ep; 1690 + 1691 + writel(irq, ep->msi_mem); 1692 + 1693 + return 0; 1694 + } 1695 + 1696 + static int tegra_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no, 1697 + enum pci_epc_irq_type type, 1698 + u16 interrupt_num) 1699 + { 1700 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 1701 + struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); 1702 + 1703 + switch (type) { 1704 + case PCI_EPC_IRQ_LEGACY: 1705 + return tegra_pcie_ep_raise_legacy_irq(pcie, interrupt_num); 1706 + 1707 + case PCI_EPC_IRQ_MSI: 1708 + return tegra_pcie_ep_raise_msi_irq(pcie, interrupt_num); 1709 + 1710 + case PCI_EPC_IRQ_MSIX: 1711 + return tegra_pcie_ep_raise_msix_irq(pcie, interrupt_num); 1712 + 1713 + default: 1714 + dev_err(pci->dev, "Unknown IRQ type\n"); 1715 + return -EPERM; 1716 + } 1717 + 1718 + return 0; 1719 + } 1720 + 1721 + static const struct pci_epc_features tegra_pcie_epc_features = { 1722 + .linkup_notifier = true, 1723 + .core_init_notifier = true, 1724 + .msi_capable = false, 1725 + .msix_capable = false, 1726 + .reserved_bar = 1 << BAR_2 | 1 << BAR_3 | 1 << BAR_4 | 1 << BAR_5, 1727 + .bar_fixed_64bit = 1 << BAR_0, 1728 + .bar_fixed_size[0] = SZ_1M, 1729 + }; 1730 + 1731 + static const struct pci_epc_features* 1732 + tegra_pcie_ep_get_features(struct dw_pcie_ep *ep) 1733 + { 1734 + return &tegra_pcie_epc_features; 1735 + } 1736 + 1737 + static struct dw_pcie_ep_ops pcie_ep_ops = { 1738 + .raise_irq = tegra_pcie_ep_raise_irq, 1739 + .get_features = tegra_pcie_ep_get_features, 1740 + }; 1741 + 1742 + static int tegra_pcie_config_ep(struct tegra_pcie_dw *pcie, 1743 + struct platform_device *pdev) 1744 + { 1745 + struct 
dw_pcie *pci = &pcie->pci; 1746 + struct device *dev = pcie->dev; 1747 + struct dw_pcie_ep *ep; 1748 + struct resource *res; 1749 + char *name; 1750 + int ret; 1751 + 1752 + ep = &pci->ep; 1753 + ep->ops = &pcie_ep_ops; 1754 + 1755 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space"); 1756 + if (!res) 1757 + return -EINVAL; 1758 + 1759 + ep->phys_base = res->start; 1760 + ep->addr_size = resource_size(res); 1761 + ep->page_size = SZ_64K; 1762 + 1763 + ret = gpiod_set_debounce(pcie->pex_rst_gpiod, PERST_DEBOUNCE_TIME); 1764 + if (ret < 0) { 1765 + dev_err(dev, "Failed to set PERST GPIO debounce time: %d\n", 1766 + ret); 1767 + return ret; 1768 + } 1769 + 1770 + ret = gpiod_to_irq(pcie->pex_rst_gpiod); 1771 + if (ret < 0) { 1772 + dev_err(dev, "Failed to get IRQ for PERST GPIO: %d\n", ret); 1773 + return ret; 1774 + } 1775 + pcie->pex_rst_irq = (unsigned int)ret; 1776 + 1777 + name = devm_kasprintf(dev, GFP_KERNEL, "tegra_pcie_%u_pex_rst_irq", 1778 + pcie->cid); 1779 + if (!name) { 1780 + dev_err(dev, "Failed to create PERST IRQ string\n"); 1781 + return -ENOMEM; 1782 + } 1783 + 1784 + irq_set_status_flags(pcie->pex_rst_irq, IRQ_NOAUTOEN); 1785 + 1786 + pcie->ep_state = EP_STATE_DISABLED; 1787 + 1788 + ret = devm_request_threaded_irq(dev, pcie->pex_rst_irq, NULL, 1789 + tegra_pcie_ep_pex_rst_irq, 1790 + IRQF_TRIGGER_RISING | 1791 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 1792 + name, (void *)pcie); 1793 + if (ret < 0) { 1794 + dev_err(dev, "Failed to request IRQ for PERST: %d\n", ret); 1795 + return ret; 1796 + } 1797 + 1798 + name = devm_kasprintf(dev, GFP_KERNEL, "tegra_pcie_%u_ep_work", 1799 + pcie->cid); 1800 + if (!name) { 1801 + dev_err(dev, "Failed to create PCIe EP work thread string\n"); 1802 + return -ENOMEM; 1803 + } 1804 + 1805 + pm_runtime_enable(dev); 1806 + 1807 + ret = dw_pcie_ep_init(ep); 1808 + if (ret) { 1809 + dev_err(dev, "Failed to initialize DWC Endpoint subsystem: %d\n", 1810 + ret); 1811 + return ret; 1812 + } 1813 + 1814 + 
return 0; 1815 + } 1816 + 1660 1817 static int tegra_pcie_dw_probe(struct platform_device *pdev) 1661 1818 { 1819 + const struct tegra_pcie_dw_of_data *data; 1662 1820 struct device *dev = &pdev->dev; 1663 1821 struct resource *atu_dma_res; 1664 1822 struct tegra_pcie_dw *pcie; ··· 2058 1440 int ret; 2059 1441 u32 i; 2060 1442 1443 + data = of_device_get_match_data(dev); 1444 + 2061 1445 pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 2062 1446 if (!pcie) 2063 1447 return -ENOMEM; ··· 2069 1449 pci->ops = &tegra_dw_pcie_ops; 2070 1450 pp = &pci->pp; 2071 1451 pcie->dev = &pdev->dev; 1452 + pcie->mode = (enum dw_pcie_device_mode)data->mode; 2072 1453 2073 1454 ret = tegra_pcie_dw_parse_dt(pcie); 2074 1455 if (ret < 0) { 2075 - dev_err(dev, "Failed to parse device tree: %d\n", ret); 1456 + const char *level = KERN_ERR; 1457 + 1458 + if (ret == -EPROBE_DEFER) 1459 + level = KERN_DEBUG; 1460 + 1461 + dev_printk(level, dev, 1462 + dev_fmt("Failed to parse device tree: %d\n"), 1463 + ret); 2076 1464 return ret; 2077 1465 } 2078 1466 2079 1467 ret = tegra_pcie_get_slot_regulators(pcie); 2080 1468 if (ret < 0) { 2081 - dev_err(dev, "Failed to get slot regulators: %d\n", ret); 1469 + const char *level = KERN_ERR; 1470 + 1471 + if (ret == -EPROBE_DEFER) 1472 + level = KERN_DEBUG; 1473 + 1474 + dev_printk(level, dev, 1475 + dev_fmt("Failed to get slot regulators: %d\n"), 1476 + ret); 2082 1477 return ret; 2083 1478 } 1479 + 1480 + if (pcie->pex_refclk_sel_gpiod) 1481 + gpiod_set_value(pcie->pex_refclk_sel_gpiod, 1); 2084 1482 2085 1483 pcie->pex_ctl_supply = devm_regulator_get(dev, "vddio-pex-ctl"); 2086 1484 if (IS_ERR(pcie->pex_ctl_supply)) { ··· 2195 1557 return -ENODEV; 2196 1558 } 2197 1559 2198 - ret = devm_request_irq(dev, pp->irq, tegra_pcie_irq_handler, 2199 - IRQF_SHARED, "tegra-pcie-intr", pcie); 2200 - if (ret) { 2201 - dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ret); 2202 - return ret; 2203 - } 2204 - 2205 1560 pcie->bpmp = tegra_bpmp_get(dev); 
2206 1561 if (IS_ERR(pcie->bpmp)) 2207 1562 return PTR_ERR(pcie->bpmp); 2208 1563 2209 1564 platform_set_drvdata(pdev, pcie); 2210 1565 2211 - ret = tegra_pcie_config_rp(pcie); 2212 - if (ret && ret != -ENOMEDIUM) 2213 - goto fail; 2214 - else 2215 - return 0; 1566 + switch (pcie->mode) { 1567 + case DW_PCIE_RC_TYPE: 1568 + ret = devm_request_irq(dev, pp->irq, tegra_pcie_rp_irq_handler, 1569 + IRQF_SHARED, "tegra-pcie-intr", pcie); 1570 + if (ret) { 1571 + dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, 1572 + ret); 1573 + goto fail; 1574 + } 1575 + 1576 + ret = tegra_pcie_config_rp(pcie); 1577 + if (ret && ret != -ENOMEDIUM) 1578 + goto fail; 1579 + else 1580 + return 0; 1581 + break; 1582 + 1583 + case DW_PCIE_EP_TYPE: 1584 + ret = devm_request_threaded_irq(dev, pp->irq, 1585 + tegra_pcie_ep_hard_irq, 1586 + tegra_pcie_ep_irq_thread, 1587 + IRQF_SHARED | IRQF_ONESHOT, 1588 + "tegra-pcie-ep-intr", pcie); 1589 + if (ret) { 1590 + dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, 1591 + ret); 1592 + goto fail; 1593 + } 1594 + 1595 + ret = tegra_pcie_config_ep(pcie, pdev); 1596 + if (ret < 0) 1597 + goto fail; 1598 + break; 1599 + 1600 + default: 1601 + dev_err(dev, "Invalid PCIe device type %d\n", pcie->mode); 1602 + } 2216 1603 2217 1604 fail: 2218 1605 tegra_bpmp_put(pcie->bpmp); ··· 2256 1593 pm_runtime_put_sync(pcie->dev); 2257 1594 pm_runtime_disable(pcie->dev); 2258 1595 tegra_bpmp_put(pcie->bpmp); 1596 + if (pcie->pex_refclk_sel_gpiod) 1597 + gpiod_set_value(pcie->pex_refclk_sel_gpiod, 0); 2259 1598 2260 1599 return 0; 2261 1600 } ··· 2362 1697 __deinit_controller(pcie); 2363 1698 } 2364 1699 1700 + static const struct tegra_pcie_dw_of_data tegra_pcie_dw_rc_of_data = { 1701 + .mode = DW_PCIE_RC_TYPE, 1702 + }; 1703 + 1704 + static const struct tegra_pcie_dw_of_data tegra_pcie_dw_ep_of_data = { 1705 + .mode = DW_PCIE_EP_TYPE, 1706 + }; 1707 + 2365 1708 static const struct of_device_id tegra_pcie_dw_of_match[] = { 2366 1709 { 2367 1710 .compatible 
= "nvidia,tegra194-pcie", 1711 + .data = &tegra_pcie_dw_rc_of_data, 1712 + }, 1713 + { 1714 + .compatible = "nvidia,tegra194-pcie-ep", 1715 + .data = &tegra_pcie_dw_ep_of_data, 2368 1716 }, 2369 1717 {}, 2370 1718 };
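Taken together, the endpoint-mode changes above hang off a single threaded IRQ on the PERST# GPIO: one level of the pin tears the controller down (`pex_ep_event_pex_rst_assert()`), the other brings it up (`pex_ep_event_pex_rst_deassert()`), and the `ep_state` flag makes both paths idempotent so a repeated edge is a no-op. A minimal userspace model of just that dispatch logic — the names mirror the driver, but all hardware steps (PLL, clocks, PHY, LTSSM) are stubbed out as counters:

```c
#include <assert.h>

enum { EP_STATE_DISABLED = 0, EP_STATE_ENABLED = 1 };

struct ep_model {
	int ep_state;
	int up_count;	/* times the bring-up path actually ran */
	int down_count;	/* times the teardown path actually ran */
};

/* Mirrors pex_ep_event_pex_rst_assert(): no-op unless currently enabled */
static void rst_assert(struct ep_model *m)
{
	if (m->ep_state == EP_STATE_DISABLED)
		return;
	m->down_count++;	/* stands in for LTSSM/PHY/clock teardown */
	m->ep_state = EP_STATE_DISABLED;
}

/* Mirrors pex_ep_event_pex_rst_deassert(): no-op unless currently disabled */
static void rst_deassert(struct ep_model *m)
{
	if (m->ep_state == EP_STATE_ENABLED)
		return;
	m->up_count++;		/* stands in for PLL/clock/PHY/LTSSM bring-up */
	m->ep_state = EP_STATE_ENABLED;
}

/* Mirrors tegra_pcie_ep_pex_rst_irq(): the PERST# level selects the path */
static void pex_rst_irq(struct ep_model *m, int perst_asserted)
{
	if (perst_asserted)
		rst_assert(m);
	else
		rst_deassert(m);
}
```

Because the handler is registered with `IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT`, the guard is what keeps a bounced or duplicated edge from re-running an expensive bring-up that already completed.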
+34
drivers/pci/controller/mobiveil/Kconfig
+# SPDX-License-Identifier: GPL-2.0
+
+menu "Mobiveil PCIe Core Support"
+	depends on PCI
+
+config PCIE_MOBIVEIL
+	bool
+
+config PCIE_MOBIVEIL_HOST
+	bool
+	depends on PCI_MSI_IRQ_DOMAIN
+	select PCIE_MOBIVEIL
+
+config PCIE_MOBIVEIL_PLAT
+	bool "Mobiveil AXI PCIe controller"
+	depends on ARCH_ZYNQMP || COMPILE_TEST
+	depends on OF
+	depends on PCI_MSI_IRQ_DOMAIN
+	select PCIE_MOBIVEIL_HOST
+	help
+	  Say Y here if you want to enable support for the Mobiveil AXI PCIe
+	  Soft IP. It has up to 8 outbound and inbound windows
+	  for address translation and it is a PCIe Gen4 IP.
+
+config PCIE_LAYERSCAPE_GEN4
+	bool "Freescale Layerscape PCIe Gen4 controller"
+	depends on PCI
+	depends on OF && (ARM64 || ARCH_LAYERSCAPE)
+	depends on PCI_MSI_IRQ_DOMAIN
+	select PCIE_MOBIVEIL_HOST
+	help
+	  Say Y here if you want PCIe Gen4 controller support on
+	  Layerscape SoCs.
+endmenu
+5
drivers/pci/controller/mobiveil/Makefile
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_PCIE_MOBIVEIL) += pcie-mobiveil.o
+obj-$(CONFIG_PCIE_MOBIVEIL_HOST) += pcie-mobiveil-host.o
+obj-$(CONFIG_PCIE_MOBIVEIL_PLAT) += pcie-mobiveil-plat.o
+obj-$(CONFIG_PCIE_LAYERSCAPE_GEN4) += pcie-layerscape-gen4.o
+267
drivers/pci/controller/mobiveil/pcie-layerscape-gen4.c
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PCIe Gen4 host controller driver for NXP Layerscape SoCs
+ *
+ * Copyright 2019-2020 NXP
+ *
+ * Author: Zhiqiang Hou <Zhiqiang.Hou@nxp.com>
+ */
+
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/of_pci.h>
+#include <linux/of_platform.h>
+#include <linux/of_irq.h>
+#include <linux/of_address.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/resource.h>
+#include <linux/mfd/syscon.h>
+#include <linux/regmap.h>
+
+#include "pcie-mobiveil.h"
+
+/* LUT and PF control registers */
+#define PCIE_LUT_OFF		0x80000
+#define PCIE_PF_OFF		0xc0000
+#define PCIE_PF_INT_STAT	0x18
+#define PF_INT_STAT_PABRST	BIT(31)
+
+#define PCIE_PF_DBG		0x7fc
+#define PF_DBG_LTSSM_MASK	0x3f
+#define PF_DBG_LTSSM_L0		0x2d /* L0 state */
+#define PF_DBG_WE		BIT(31)
+#define PF_DBG_PABR		BIT(27)
+
+#define to_ls_pcie_g4(x)	platform_get_drvdata((x)->pdev)
+
+struct ls_pcie_g4 {
+	struct mobiveil_pcie pci;
+	struct delayed_work dwork;
+	int irq;
+};
+
+static inline u32 ls_pcie_g4_lut_readl(struct ls_pcie_g4 *pcie, u32 off)
+{
+	return ioread32(pcie->pci.csr_axi_slave_base + PCIE_LUT_OFF + off);
+}
+
+static inline void ls_pcie_g4_lut_writel(struct ls_pcie_g4 *pcie,
+					 u32 off, u32 val)
+{
+	iowrite32(val, pcie->pci.csr_axi_slave_base + PCIE_LUT_OFF + off);
+}
+
+static inline u32 ls_pcie_g4_pf_readl(struct ls_pcie_g4 *pcie, u32 off)
+{
+	return ioread32(pcie->pci.csr_axi_slave_base + PCIE_PF_OFF + off);
+}
+
+static inline void ls_pcie_g4_pf_writel(struct ls_pcie_g4 *pcie,
+					u32 off, u32 val)
+{
+	iowrite32(val, pcie->pci.csr_axi_slave_base + PCIE_PF_OFF + off);
+}
+
+static int ls_pcie_g4_link_up(struct mobiveil_pcie *pci)
+{
+	struct ls_pcie_g4 *pcie = to_ls_pcie_g4(pci);
+	u32 state;
+
+	state = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
+	state = state & PF_DBG_LTSSM_MASK;
+
+	if (state == PF_DBG_LTSSM_L0)
+		return 1;
+
+	return 0;
+}
+
+static void ls_pcie_g4_disable_interrupt(struct ls_pcie_g4 *pcie)
+{
+	struct mobiveil_pcie *mv_pci = &pcie->pci;
+
+	mobiveil_csr_writel(mv_pci, 0, PAB_INTP_AMBA_MISC_ENB);
+}
+
+static void ls_pcie_g4_enable_interrupt(struct ls_pcie_g4 *pcie)
+{
+	struct mobiveil_pcie *mv_pci = &pcie->pci;
+	u32 val;
+
+	/* Clear the interrupt status */
+	mobiveil_csr_writel(mv_pci, 0xffffffff, PAB_INTP_AMBA_MISC_STAT);
+
+	val = PAB_INTP_INTX_MASK | PAB_INTP_MSI | PAB_INTP_RESET |
+	      PAB_INTP_PCIE_UE | PAB_INTP_IE_PMREDI | PAB_INTP_IE_EC;
+	mobiveil_csr_writel(mv_pci, val, PAB_INTP_AMBA_MISC_ENB);
+}
+
+static int ls_pcie_g4_reinit_hw(struct ls_pcie_g4 *pcie)
+{
+	struct mobiveil_pcie *mv_pci = &pcie->pci;
+	struct device *dev = &mv_pci->pdev->dev;
+	u32 val, act_stat;
+	int to = 100;
+
+	/* Poll for pab_csb_reset to set and PAB activity to clear */
+	do {
+		usleep_range(10, 15);
+		val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_INT_STAT);
+		act_stat = mobiveil_csr_readl(mv_pci, PAB_ACTIVITY_STAT);
+	} while (((val & PF_INT_STAT_PABRST) == 0 || act_stat) && to--);
+	if (to < 0) {
+		dev_err(dev, "Poll PABRST&PABACT timeout\n");
+		return -EIO;
+	}
+
+	/* clear PEX_RESET bit in PEX_PF0_DBG register */
+	val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
+	val |= PF_DBG_WE;
+	ls_pcie_g4_pf_writel(pcie, PCIE_PF_DBG, val);
+
+	val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
+	val |= PF_DBG_PABR;
+	ls_pcie_g4_pf_writel(pcie, PCIE_PF_DBG, val);
+
+	val = ls_pcie_g4_pf_readl(pcie, PCIE_PF_DBG);
+	val &= ~PF_DBG_WE;
+	ls_pcie_g4_pf_writel(pcie, PCIE_PF_DBG, val);
+
+	mobiveil_host_init(mv_pci, true);
+
+	to = 100;
+	while (!ls_pcie_g4_link_up(mv_pci) && to--)
+		usleep_range(200, 250);
+	if (to < 0) {
+		dev_err(dev, "PCIe link training timeout\n");
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static irqreturn_t ls_pcie_g4_isr(int irq, void *dev_id)
+{
+	struct ls_pcie_g4 *pcie = (struct ls_pcie_g4 *)dev_id;
+	struct mobiveil_pcie *mv_pci = &pcie->pci;
+	u32 val;
+
+	val = mobiveil_csr_readl(mv_pci, PAB_INTP_AMBA_MISC_STAT);
+	if (!val)
+		return IRQ_NONE;
+
+	if (val & PAB_INTP_RESET) {
+		ls_pcie_g4_disable_interrupt(pcie);
+		schedule_delayed_work(&pcie->dwork, msecs_to_jiffies(1));
+	}
+
+	mobiveil_csr_writel(mv_pci, val, PAB_INTP_AMBA_MISC_STAT);
+
+	return IRQ_HANDLED;
+}
+
+static int ls_pcie_g4_interrupt_init(struct mobiveil_pcie *mv_pci)
+{
+	struct ls_pcie_g4 *pcie = to_ls_pcie_g4(mv_pci);
+	struct platform_device *pdev = mv_pci->pdev;
+	struct device *dev = &pdev->dev;
+	int ret;
+
+	pcie->irq = platform_get_irq_byname(pdev, "intr");
+	if (pcie->irq < 0) {
+		dev_err(dev, "Can't get 'intr' IRQ, errno = %d\n", pcie->irq);
+		return pcie->irq;
+	}
+	ret = devm_request_irq(dev, pcie->irq, ls_pcie_g4_isr,
+			       IRQF_SHARED, pdev->name, pcie);
+	if (ret) {
+		dev_err(dev, "Can't register PCIe IRQ, errno = %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void ls_pcie_g4_reset(struct work_struct *work)
+{
+	struct delayed_work *dwork = container_of(work, struct delayed_work,
+						  work);
+	struct ls_pcie_g4 *pcie = container_of(dwork, struct ls_pcie_g4, dwork);
+	struct mobiveil_pcie *mv_pci = &pcie->pci;
+	u16 ctrl;
+
+	ctrl = mobiveil_csr_readw(mv_pci, PCI_BRIDGE_CONTROL);
+	ctrl &= ~PCI_BRIDGE_CTL_BUS_RESET;
+	mobiveil_csr_writew(mv_pci, ctrl, PCI_BRIDGE_CONTROL);
+
+	if (!ls_pcie_g4_reinit_hw(pcie))
+		return;
+
+	ls_pcie_g4_enable_interrupt(pcie);
+}
+
+static struct mobiveil_rp_ops ls_pcie_g4_rp_ops = {
+	.interrupt_init = ls_pcie_g4_interrupt_init,
+};
+
+static const struct mobiveil_pab_ops ls_pcie_g4_pab_ops = {
+	.link_up = ls_pcie_g4_link_up,
+};
+
+static int __init ls_pcie_g4_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct pci_host_bridge *bridge;
+	struct mobiveil_pcie *mv_pci;
+	struct ls_pcie_g4 *pcie;
+	struct device_node *np = dev->of_node;
+	int ret;
+
+	if (!of_parse_phandle(np, "msi-parent", 0)) {
+		dev_err(dev, "Failed to find msi-parent\n");
+		return -EINVAL;
+	}
+
+	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
+	if (!bridge)
+		return -ENOMEM;
+
+	pcie = pci_host_bridge_priv(bridge);
+	mv_pci = &pcie->pci;
+
+	mv_pci->pdev = pdev;
+	mv_pci->ops = &ls_pcie_g4_pab_ops;
+	mv_pci->rp.ops = &ls_pcie_g4_rp_ops;
+	mv_pci->rp.bridge = bridge;
+
+	platform_set_drvdata(pdev, pcie);
+
+	INIT_DELAYED_WORK(&pcie->dwork, ls_pcie_g4_reset);
+
+	ret = mobiveil_pcie_host_probe(mv_pci);
+	if (ret) {
+		dev_err(dev, "Fail to probe\n");
+		return ret;
+	}
+
+	ls_pcie_g4_enable_interrupt(pcie);
+
+	return 0;
+}
+
+static const struct of_device_id ls_pcie_g4_of_match[] = {
+	{ .compatible = "fsl,lx2160a-pcie", },
+	{ },
+};
+
+static struct platform_driver ls_pcie_g4_driver = {
+	.driver = {
+		.name = "layerscape-pcie-gen4",
+		.of_match_table = ls_pcie_g4_of_match,
+		.suppress_bind_attrs = true,
+	},
+};
+
+builtin_platform_driver_probe(ls_pcie_g4_driver, ls_pcie_g4_probe);
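The `mobiveil_csr_*` accessors this driver uses go through the Mobiveil paged-register window: offsets at or above 0xc00 are split into a 6-bit page index (written to the `pg_sel` field of PAB_CTRL) and a 10-bit in-page offset mapped back into the 0xc00 window, while lower offsets are accessed directly with page 0 selected. A userspace sketch of that address arithmetic — the mask and shift values follow the scheme described in pcie-mobiveil.c, but these exact macro definitions are an assumption since the header is not shown here:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed to match the paging scheme in pcie-mobiveil.h */
#define PAGED_ADDR_BNDRY	0xc00	/* offsets below this are direct */
#define PAGE_LO_MASK		0x3ff	/* low 10 bits stay in-page */
#define PAGE_SEL_OFFSET_SHIFT	10	/* page index starts at bit 10 */
#define PAGE_SEL_MASK		0x3f	/* 6-bit page index */

/* Upper 6 bits of the offset become the pg_sel value */
static uint32_t offset_to_page_idx(uint32_t off)
{
	return (off >> PAGE_SEL_OFFSET_SHIFT) & PAGE_SEL_MASK;
}

/* Low 10 bits are re-based into the paged window at 0xc00 */
static uint32_t offset_to_page_addr(uint32_t off)
{
	return (off & PAGE_LO_MASK) | PAGED_ADDR_BNDRY;
}
```

For example, an offset of 0x1004 selects page 4 and is accessed at 0xc04 inside the window; anything under 0xc00 bypasses the computation entirely.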
+61
drivers/pci/controller/mobiveil/pcie-mobiveil-plat.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * PCIe host controller driver for Mobiveil PCIe Host controller 4 + * 5 + * Copyright (c) 2018 Mobiveil Inc. 6 + * Copyright 2019 NXP 7 + * 8 + * Author: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in> 9 + * Hou Zhiqiang <Zhiqiang.Hou@nxp.com> 10 + */ 11 + 12 + #include <linux/init.h> 13 + #include <linux/kernel.h> 14 + #include <linux/module.h> 15 + #include <linux/of_pci.h> 16 + #include <linux/pci.h> 17 + #include <linux/platform_device.h> 18 + #include <linux/slab.h> 19 + 20 + #include "pcie-mobiveil.h" 21 + 22 + static int mobiveil_pcie_probe(struct platform_device *pdev) 23 + { 24 + struct mobiveil_pcie *pcie; 25 + struct pci_host_bridge *bridge; 26 + struct device *dev = &pdev->dev; 27 + 28 + /* allocate the PCIe port */ 29 + bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie)); 30 + if (!bridge) 31 + return -ENOMEM; 32 + 33 + pcie = pci_host_bridge_priv(bridge); 34 + pcie->rp.bridge = bridge; 35 + 36 + pcie->pdev = pdev; 37 + 38 + return mobiveil_pcie_host_probe(pcie); 39 + } 40 + 41 + static const struct of_device_id mobiveil_pcie_of_match[] = { 42 + {.compatible = "mbvl,gpex40-pcie",}, 43 + {}, 44 + }; 45 + 46 + MODULE_DEVICE_TABLE(of, mobiveil_pcie_of_match); 47 + 48 + static struct platform_driver mobiveil_pcie_driver = { 49 + .probe = mobiveil_pcie_probe, 50 + .driver = { 51 + .name = "mobiveil-pcie", 52 + .of_match_table = mobiveil_pcie_of_match, 53 + .suppress_bind_attrs = true, 54 + }, 55 + }; 56 + 57 + builtin_platform_driver(mobiveil_pcie_driver); 58 + 59 + MODULE_LICENSE("GPL v2"); 60 + MODULE_DESCRIPTION("Mobiveil PCIe host controller driver"); 61 + MODULE_AUTHOR("Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>");
+231
drivers/pci/controller/mobiveil/pcie-mobiveil.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * PCIe host controller driver for Mobiveil PCIe Host controller 4 + * 5 + * Copyright (c) 2018 Mobiveil Inc. 6 + * Copyright 2019 NXP 7 + * 8 + * Author: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in> 9 + * Hou Zhiqiang <Zhiqiang.Hou@nxp.com> 10 + */ 11 + 12 + #include <linux/delay.h> 13 + #include <linux/init.h> 14 + #include <linux/kernel.h> 15 + #include <linux/pci.h> 16 + #include <linux/platform_device.h> 17 + 18 + #include "pcie-mobiveil.h" 19 + 20 + /* 21 + * mobiveil_pcie_sel_page - routine to access paged register 22 + * 23 + * Registers whose address greater than PAGED_ADDR_BNDRY (0xc00) are paged, 24 + * for this scheme to work extracted higher 6 bits of the offset will be 25 + * written to pg_sel field of PAB_CTRL register and rest of the lower 10 26 + * bits enabled with PAGED_ADDR_BNDRY are used as offset of the register. 27 + */ 28 + static void mobiveil_pcie_sel_page(struct mobiveil_pcie *pcie, u8 pg_idx) 29 + { 30 + u32 val; 31 + 32 + val = readl(pcie->csr_axi_slave_base + PAB_CTRL); 33 + val &= ~(PAGE_SEL_MASK << PAGE_SEL_SHIFT); 34 + val |= (pg_idx & PAGE_SEL_MASK) << PAGE_SEL_SHIFT; 35 + 36 + writel(val, pcie->csr_axi_slave_base + PAB_CTRL); 37 + } 38 + 39 + static void __iomem *mobiveil_pcie_comp_addr(struct mobiveil_pcie *pcie, 40 + u32 off) 41 + { 42 + if (off < PAGED_ADDR_BNDRY) { 43 + /* For directly accessed registers, clear the pg_sel field */ 44 + mobiveil_pcie_sel_page(pcie, 0); 45 + return pcie->csr_axi_slave_base + off; 46 + } 47 + 48 + mobiveil_pcie_sel_page(pcie, OFFSET_TO_PAGE_IDX(off)); 49 + return pcie->csr_axi_slave_base + OFFSET_TO_PAGE_ADDR(off); 50 + } 51 + 52 + static int mobiveil_pcie_read(void __iomem *addr, int size, u32 *val) 53 + { 54 + if ((uintptr_t)addr & (size - 1)) { 55 + *val = 0; 56 + return PCIBIOS_BAD_REGISTER_NUMBER; 57 + } 58 + 59 + switch (size) { 60 + case 4: 61 + *val = readl(addr); 62 + break; 63 + case 2: 64 + *val = readw(addr); 65 + break; 66 
+ case 1: 67 + *val = readb(addr); 68 + break; 69 + default: 70 + *val = 0; 71 + return PCIBIOS_BAD_REGISTER_NUMBER; 72 + } 73 + 74 + return PCIBIOS_SUCCESSFUL; 75 + } 76 + 77 + static int mobiveil_pcie_write(void __iomem *addr, int size, u32 val) 78 + { 79 + if ((uintptr_t)addr & (size - 1)) 80 + return PCIBIOS_BAD_REGISTER_NUMBER; 81 + 82 + switch (size) { 83 + case 4: 84 + writel(val, addr); 85 + break; 86 + case 2: 87 + writew(val, addr); 88 + break; 89 + case 1: 90 + writeb(val, addr); 91 + break; 92 + default: 93 + return PCIBIOS_BAD_REGISTER_NUMBER; 94 + } 95 + 96 + return PCIBIOS_SUCCESSFUL; 97 + } 98 + 99 + u32 mobiveil_csr_read(struct mobiveil_pcie *pcie, u32 off, size_t size) 100 + { 101 + void __iomem *addr; 102 + u32 val; 103 + int ret; 104 + 105 + addr = mobiveil_pcie_comp_addr(pcie, off); 106 + 107 + ret = mobiveil_pcie_read(addr, size, &val); 108 + if (ret) 109 + dev_err(&pcie->pdev->dev, "read CSR address failed\n"); 110 + 111 + return val; 112 + } 113 + 114 + void mobiveil_csr_write(struct mobiveil_pcie *pcie, u32 val, u32 off, 115 + size_t size) 116 + { 117 + void __iomem *addr; 118 + int ret; 119 + 120 + addr = mobiveil_pcie_comp_addr(pcie, off); 121 + 122 + ret = mobiveil_pcie_write(addr, size, val); 123 + if (ret) 124 + dev_err(&pcie->pdev->dev, "write CSR address failed\n"); 125 + } 126 + 127 + bool mobiveil_pcie_link_up(struct mobiveil_pcie *pcie) 128 + { 129 + if (pcie->ops->link_up) 130 + return pcie->ops->link_up(pcie); 131 + 132 + return (mobiveil_csr_readl(pcie, LTSSM_STATUS) & 133 + LTSSM_STATUS_L0_MASK) == LTSSM_STATUS_L0; 134 + } 135 + 136 + void program_ib_windows(struct mobiveil_pcie *pcie, int win_num, 137 + u64 cpu_addr, u64 pci_addr, u32 type, u64 size) 138 + { 139 + u32 value; 140 + u64 size64 = ~(size - 1); 141 + 142 + if (win_num >= pcie->ppio_wins) { 143 + dev_err(&pcie->pdev->dev, 144 + "ERROR: max inbound windows reached !\n"); 145 + return; 146 + } 147 + 148 + value = mobiveil_csr_readl(pcie, PAB_PEX_AMAP_CTRL(win_num)); 
149 + value &= ~(AMAP_CTRL_TYPE_MASK << AMAP_CTRL_TYPE_SHIFT | WIN_SIZE_MASK); 150 + value |= type << AMAP_CTRL_TYPE_SHIFT | 1 << AMAP_CTRL_EN_SHIFT | 151 + (lower_32_bits(size64) & WIN_SIZE_MASK); 152 + mobiveil_csr_writel(pcie, value, PAB_PEX_AMAP_CTRL(win_num)); 153 + 154 + mobiveil_csr_writel(pcie, upper_32_bits(size64), 155 + PAB_EXT_PEX_AMAP_SIZEN(win_num)); 156 + 157 + mobiveil_csr_writel(pcie, lower_32_bits(cpu_addr), 158 + PAB_PEX_AMAP_AXI_WIN(win_num)); 159 + mobiveil_csr_writel(pcie, upper_32_bits(cpu_addr), 160 + PAB_EXT_PEX_AMAP_AXI_WIN(win_num)); 161 + 162 + mobiveil_csr_writel(pcie, lower_32_bits(pci_addr), 163 + PAB_PEX_AMAP_PEX_WIN_L(win_num)); 164 + mobiveil_csr_writel(pcie, upper_32_bits(pci_addr), 165 + PAB_PEX_AMAP_PEX_WIN_H(win_num)); 166 + 167 + pcie->ib_wins_configured++; 168 + } 169 + 170 + /* 171 + * routine to program the outbound windows 172 + */ 173 + void program_ob_windows(struct mobiveil_pcie *pcie, int win_num, 174 + u64 cpu_addr, u64 pci_addr, u32 type, u64 size) 175 + { 176 + u32 value; 177 + u64 size64 = ~(size - 1); 178 + 179 + if (win_num >= pcie->apio_wins) { 180 + dev_err(&pcie->pdev->dev, 181 + "ERROR: max outbound windows reached !\n"); 182 + return; 183 + } 184 + 185 + /* 186 + * program Enable Bit to 1, Type Bit to (00) base 2, AXI Window Size Bit 187 + * to 4 KB in PAB_AXI_AMAP_CTRL register 188 + */ 189 + value = mobiveil_csr_readl(pcie, PAB_AXI_AMAP_CTRL(win_num)); 190 + value &= ~(WIN_TYPE_MASK << WIN_TYPE_SHIFT | WIN_SIZE_MASK); 191 + value |= 1 << WIN_ENABLE_SHIFT | type << WIN_TYPE_SHIFT | 192 + (lower_32_bits(size64) & WIN_SIZE_MASK); 193 + mobiveil_csr_writel(pcie, value, PAB_AXI_AMAP_CTRL(win_num)); 194 + 195 + mobiveil_csr_writel(pcie, upper_32_bits(size64), 196 + PAB_EXT_AXI_AMAP_SIZE(win_num)); 197 + 198 + /* 199 + * program AXI window base with appropriate value in 200 + * PAB_AXI_AMAP_AXI_WIN0 register 201 + */ 202 + mobiveil_csr_writel(pcie, 203 + lower_32_bits(cpu_addr) & (~AXI_WINDOW_ALIGN_MASK), 204 + 
PAB_AXI_AMAP_AXI_WIN(win_num)); 205 + mobiveil_csr_writel(pcie, upper_32_bits(cpu_addr), 206 + PAB_EXT_AXI_AMAP_AXI_WIN(win_num)); 207 + 208 + mobiveil_csr_writel(pcie, lower_32_bits(pci_addr), 209 + PAB_AXI_AMAP_PEX_WIN_L(win_num)); 210 + mobiveil_csr_writel(pcie, upper_32_bits(pci_addr), 211 + PAB_AXI_AMAP_PEX_WIN_H(win_num)); 212 + 213 + pcie->ob_wins_configured++; 214 + } 215 + 216 + int mobiveil_bringup_link(struct mobiveil_pcie *pcie) 217 + { 218 + int retries; 219 + 220 + /* check if the link is up or not */ 221 + for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) { 222 + if (mobiveil_pcie_link_up(pcie)) 223 + return 0; 224 + 225 + usleep_range(LINK_WAIT_MIN, LINK_WAIT_MAX); 226 + } 227 + 228 + dev_err(&pcie->pdev->dev, "link never came up\n"); 229 + 230 + return -ETIMEDOUT; 231 + }
+226
drivers/pci/controller/mobiveil/pcie-mobiveil.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * PCIe host controller driver for Mobiveil PCIe Host controller 4 + * 5 + * Copyright (c) 2018 Mobiveil Inc. 6 + * Copyright 2019 NXP 7 + * 8 + * Author: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in> 9 + * Hou Zhiqiang <Zhiqiang.Hou@nxp.com> 10 + */ 11 + 12 + #ifndef _PCIE_MOBIVEIL_H 13 + #define _PCIE_MOBIVEIL_H 14 + 15 + #include <linux/pci.h> 16 + #include <linux/irq.h> 17 + #include <linux/msi.h> 18 + #include "../../pci.h" 19 + 20 + /* register offsets and bit positions */ 21 + 22 + /* 23 + * translation tables are grouped into windows, each window registers are 24 + * grouped into blocks of 4 or 16 registers each 25 + */ 26 + #define PAB_REG_BLOCK_SIZE 16 27 + #define PAB_EXT_REG_BLOCK_SIZE 4 28 + 29 + #define PAB_REG_ADDR(offset, win) \ 30 + (offset + (win * PAB_REG_BLOCK_SIZE)) 31 + #define PAB_EXT_REG_ADDR(offset, win) \ 32 + (offset + (win * PAB_EXT_REG_BLOCK_SIZE)) 33 + 34 + #define LTSSM_STATUS 0x0404 35 + #define LTSSM_STATUS_L0_MASK 0x3f 36 + #define LTSSM_STATUS_L0 0x2d 37 + 38 + #define PAB_CTRL 0x0808 39 + #define AMBA_PIO_ENABLE_SHIFT 0 40 + #define PEX_PIO_ENABLE_SHIFT 1 41 + #define PAGE_SEL_SHIFT 13 42 + #define PAGE_SEL_MASK 0x3f 43 + #define PAGE_LO_MASK 0x3ff 44 + #define PAGE_SEL_OFFSET_SHIFT 10 45 + 46 + #define PAB_ACTIVITY_STAT 0x81c 47 + 48 + #define PAB_AXI_PIO_CTRL 0x0840 49 + #define APIO_EN_MASK 0xf 50 + 51 + #define PAB_PEX_PIO_CTRL 0x08c0 52 + #define PIO_ENABLE_SHIFT 0 53 + 54 + #define PAB_INTP_AMBA_MISC_ENB 0x0b0c 55 + #define PAB_INTP_AMBA_MISC_STAT 0x0b1c 56 + #define PAB_INTP_RESET BIT(1) 57 + #define PAB_INTP_MSI BIT(3) 58 + #define PAB_INTP_INTA BIT(5) 59 + #define PAB_INTP_INTB BIT(6) 60 + #define PAB_INTP_INTC BIT(7) 61 + #define PAB_INTP_INTD BIT(8) 62 + #define PAB_INTP_PCIE_UE BIT(9) 63 + #define PAB_INTP_IE_PMREDI BIT(29) 64 + #define PAB_INTP_IE_EC BIT(30) 65 + #define PAB_INTP_MSI_MASK PAB_INTP_MSI 66 + #define PAB_INTP_INTX_MASK (PAB_INTP_INTA | 
PAB_INTP_INTB |\ 67 + PAB_INTP_INTC | PAB_INTP_INTD) 68 + 69 + #define PAB_AXI_AMAP_CTRL(win) PAB_REG_ADDR(0x0ba0, win) 70 + #define WIN_ENABLE_SHIFT 0 71 + #define WIN_TYPE_SHIFT 1 72 + #define WIN_TYPE_MASK 0x3 73 + #define WIN_SIZE_MASK 0xfffffc00 74 + 75 + #define PAB_EXT_AXI_AMAP_SIZE(win) PAB_EXT_REG_ADDR(0xbaf0, win) 76 + 77 + #define PAB_EXT_AXI_AMAP_AXI_WIN(win) PAB_EXT_REG_ADDR(0x80a0, win) 78 + #define PAB_AXI_AMAP_AXI_WIN(win) PAB_REG_ADDR(0x0ba4, win) 79 + #define AXI_WINDOW_ALIGN_MASK 3 80 + 81 + #define PAB_AXI_AMAP_PEX_WIN_L(win) PAB_REG_ADDR(0x0ba8, win) 82 + #define PAB_BUS_SHIFT 24 83 + #define PAB_DEVICE_SHIFT 19 84 + #define PAB_FUNCTION_SHIFT 16 85 + 86 + #define PAB_AXI_AMAP_PEX_WIN_H(win) PAB_REG_ADDR(0x0bac, win) 87 + #define PAB_INTP_AXI_PIO_CLASS 0x474 88 + 89 + #define PAB_PEX_AMAP_CTRL(win) PAB_REG_ADDR(0x4ba0, win) 90 + #define AMAP_CTRL_EN_SHIFT 0 91 + #define AMAP_CTRL_TYPE_SHIFT 1 92 + #define AMAP_CTRL_TYPE_MASK 3 93 + 94 + #define PAB_EXT_PEX_AMAP_SIZEN(win) PAB_EXT_REG_ADDR(0xbef0, win) 95 + #define PAB_EXT_PEX_AMAP_AXI_WIN(win) PAB_EXT_REG_ADDR(0xb4a0, win) 96 + #define PAB_PEX_AMAP_AXI_WIN(win) PAB_REG_ADDR(0x4ba4, win) 97 + #define PAB_PEX_AMAP_PEX_WIN_L(win) PAB_REG_ADDR(0x4ba8, win) 98 + #define PAB_PEX_AMAP_PEX_WIN_H(win) PAB_REG_ADDR(0x4bac, win) 99 + 100 + /* starting offset of INTX bits in status register */ 101 + #define PAB_INTX_START 5 102 + 103 + /* supported number of MSI interrupts */ 104 + #define PCI_NUM_MSI 16 105 + 106 + /* MSI registers */ 107 + #define MSI_BASE_LO_OFFSET 0x04 108 + #define MSI_BASE_HI_OFFSET 0x08 109 + #define MSI_SIZE_OFFSET 0x0c 110 + #define MSI_ENABLE_OFFSET 0x14 111 + #define MSI_STATUS_OFFSET 0x18 112 + #define MSI_DATA_OFFSET 0x20 113 + #define MSI_ADDR_L_OFFSET 0x24 114 + #define MSI_ADDR_H_OFFSET 0x28 115 + 116 + /* outbound and inbound window definitions */ 117 + #define WIN_NUM_0 0 118 + #define WIN_NUM_1 1 119 + #define CFG_WINDOW_TYPE 0 120 + #define IO_WINDOW_TYPE 1 121 + 
#define MEM_WINDOW_TYPE 2 122 + #define IB_WIN_SIZE ((u64)256 * 1024 * 1024 * 1024) 123 + #define MAX_PIO_WINDOWS 8 124 + 125 + /* Parameters for the waiting for link up routine */ 126 + #define LINK_WAIT_MAX_RETRIES 10 127 + #define LINK_WAIT_MIN 90000 128 + #define LINK_WAIT_MAX 100000 129 + 130 + #define PAGED_ADDR_BNDRY 0xc00 131 + #define OFFSET_TO_PAGE_ADDR(off) \ 132 + ((off & PAGE_LO_MASK) | PAGED_ADDR_BNDRY) 133 + #define OFFSET_TO_PAGE_IDX(off) \ 134 + ((off >> PAGE_SEL_OFFSET_SHIFT) & PAGE_SEL_MASK) 135 + 136 + struct mobiveil_msi { /* MSI information */ 137 + struct mutex lock; /* protect bitmap variable */ 138 + struct irq_domain *msi_domain; 139 + struct irq_domain *dev_domain; 140 + phys_addr_t msi_pages_phys; 141 + int num_of_vectors; 142 + DECLARE_BITMAP(msi_irq_in_use, PCI_NUM_MSI); 143 + }; 144 + 145 + struct mobiveil_pcie; 146 + 147 + struct mobiveil_rp_ops { 148 + int (*interrupt_init)(struct mobiveil_pcie *pcie); 149 + }; 150 + 151 + struct mobiveil_root_port { 152 + char root_bus_nr; 153 + void __iomem *config_axi_slave_base; /* endpoint config base */ 154 + struct resource *ob_io_res; 155 + struct mobiveil_rp_ops *ops; 156 + int irq; 157 + raw_spinlock_t intx_mask_lock; 158 + struct irq_domain *intx_domain; 159 + struct mobiveil_msi msi; 160 + struct pci_host_bridge *bridge; 161 + }; 162 + 163 + struct mobiveil_pab_ops { 164 + int (*link_up)(struct mobiveil_pcie *pcie); 165 + }; 166 + 167 + struct mobiveil_pcie { 168 + struct platform_device *pdev; 169 + void __iomem *csr_axi_slave_base; /* root port config base */ 170 + void __iomem *apb_csr_base; /* MSI register base */ 171 + phys_addr_t pcie_reg_base; /* Physical PCIe Controller Base */ 172 + int apio_wins; 173 + int ppio_wins; 174 + int ob_wins_configured; /* configured outbound windows */ 175 + int ib_wins_configured; /* configured inbound windows */ 176 + const struct mobiveil_pab_ops *ops; 177 + struct mobiveil_root_port rp; 178 + }; 179 + 180 + int mobiveil_pcie_host_probe(struct 
mobiveil_pcie *pcie); 181 + int mobiveil_host_init(struct mobiveil_pcie *pcie, bool reinit); 182 + bool mobiveil_pcie_link_up(struct mobiveil_pcie *pcie); 183 + int mobiveil_bringup_link(struct mobiveil_pcie *pcie); 184 + void program_ob_windows(struct mobiveil_pcie *pcie, int win_num, u64 cpu_addr, 185 + u64 pci_addr, u32 type, u64 size); 186 + void program_ib_windows(struct mobiveil_pcie *pcie, int win_num, u64 cpu_addr, 187 + u64 pci_addr, u32 type, u64 size); 188 + u32 mobiveil_csr_read(struct mobiveil_pcie *pcie, u32 off, size_t size); 189 + void mobiveil_csr_write(struct mobiveil_pcie *pcie, u32 val, u32 off, 190 + size_t size); 191 + 192 + static inline u32 mobiveil_csr_readl(struct mobiveil_pcie *pcie, u32 off) 193 + { 194 + return mobiveil_csr_read(pcie, off, 0x4); 195 + } 196 + 197 + static inline u16 mobiveil_csr_readw(struct mobiveil_pcie *pcie, u32 off) 198 + { 199 + return mobiveil_csr_read(pcie, off, 0x2); 200 + } 201 + 202 + static inline u8 mobiveil_csr_readb(struct mobiveil_pcie *pcie, u32 off) 203 + { 204 + return mobiveil_csr_read(pcie, off, 0x1); 205 + } 206 + 207 + 208 + static inline void mobiveil_csr_writel(struct mobiveil_pcie *pcie, u32 val, 209 + u32 off) 210 + { 211 + mobiveil_csr_write(pcie, val, off, 0x4); 212 + } 213 + 214 + static inline void mobiveil_csr_writew(struct mobiveil_pcie *pcie, u16 val, 215 + u32 off) 216 + { 217 + mobiveil_csr_write(pcie, val, off, 0x2); 218 + } 219 + 220 + static inline void mobiveil_csr_writeb(struct mobiveil_pcie *pcie, u8 val, 221 + u32 off) 222 + { 223 + mobiveil_csr_write(pcie, val, off, 0x1); 224 + } 225 + 226 + #endif /* _PCIE_MOBIVEIL_H */
+186 -74
drivers/pci/controller/pci-hyperv.c
··· 63 63 enum pci_protocol_version_t { 64 64 PCI_PROTOCOL_VERSION_1_1 = PCI_MAKE_VERSION(1, 1), /* Win10 */ 65 65 PCI_PROTOCOL_VERSION_1_2 = PCI_MAKE_VERSION(1, 2), /* RS1 */ 66 + PCI_PROTOCOL_VERSION_1_3 = PCI_MAKE_VERSION(1, 3), /* Vibranium */ 66 67 }; 67 68 68 69 #define CPU_AFFINITY_ALL -1ULL ··· 73 72 * first. 74 73 */ 75 74 static enum pci_protocol_version_t pci_protocol_versions[] = { 75 + PCI_PROTOCOL_VERSION_1_3, 76 76 PCI_PROTOCOL_VERSION_1_2, 77 77 PCI_PROTOCOL_VERSION_1_1, 78 78 }; ··· 121 119 PCI_RESOURCES_ASSIGNED2 = PCI_MESSAGE_BASE + 0x16, 122 120 PCI_CREATE_INTERRUPT_MESSAGE2 = PCI_MESSAGE_BASE + 0x17, 123 121 PCI_DELETE_INTERRUPT_MESSAGE2 = PCI_MESSAGE_BASE + 0x18, /* unused */ 122 + PCI_BUS_RELATIONS2 = PCI_MESSAGE_BASE + 0x19, 124 123 PCI_MESSAGE_MAXIMUM 125 124 }; 126 125 ··· 165 162 u32 subsystem_id; 166 163 union win_slot_encoding win_slot; 167 164 u32 ser; /* serial number */ 165 + } __packed; 166 + 167 + enum pci_device_description_flags { 168 + HV_PCI_DEVICE_FLAG_NONE = 0x0, 169 + HV_PCI_DEVICE_FLAG_NUMA_AFFINITY = 0x1, 170 + }; 171 + 172 + struct pci_function_description2 { 173 + u16 v_id; /* vendor ID */ 174 + u16 d_id; /* device ID */ 175 + u8 rev; 176 + u8 prog_intf; 177 + u8 subclass; 178 + u8 base_class; 179 + u32 subsystem_id; 180 + union win_slot_encoding win_slot; 181 + u32 ser; /* serial number */ 182 + u32 flags; 183 + u16 virtual_numa_node; 184 + u16 reserved; 168 185 } __packed; 169 186 170 187 /** ··· 283 260 int resp_packet_size); 284 261 void *compl_ctxt; 285 262 286 - struct pci_message message[0]; 263 + struct pci_message message[]; 287 264 }; 288 265 289 266 /* ··· 319 296 struct pci_bus_relations { 320 297 struct pci_incoming_message incoming; 321 298 u32 device_count; 322 - struct pci_function_description func[0]; 299 + struct pci_function_description func[]; 300 + } __packed; 301 + 302 + struct pci_bus_relations2 { 303 + struct pci_incoming_message incoming; 304 + u32 device_count; 305 + struct 
pci_function_description2 func[]; 323 306 } __packed; 324 307 325 308 struct pci_q_res_req_response { ··· 436 407 static int pci_ring_size = (4 * PAGE_SIZE); 437 408 438 409 /* 439 - * Definitions or interrupt steering hypercall. 440 - */ 441 - #define HV_PARTITION_ID_SELF ((u64)-1) 442 - #define HVCALL_RETARGET_INTERRUPT 0x7e 443 - 444 - struct hv_interrupt_entry { 445 - u32 source; /* 1 for MSI(-X) */ 446 - u32 reserved1; 447 - u32 address; 448 - u32 data; 449 - }; 450 - 451 - /* 452 - * flags for hv_device_interrupt_target.flags 453 - */ 454 - #define HV_DEVICE_INTERRUPT_TARGET_MULTICAST 1 455 - #define HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET 2 456 - 457 - struct hv_device_interrupt_target { 458 - u32 vector; 459 - u32 flags; 460 - union { 461 - u64 vp_mask; 462 - struct hv_vpset vp_set; 463 - }; 464 - }; 465 - 466 - struct retarget_msi_interrupt { 467 - u64 partition_id; /* use "self" */ 468 - u64 device_id; 469 - struct hv_interrupt_entry int_entry; 470 - u64 reserved2; 471 - struct hv_device_interrupt_target int_target; 472 - } __packed __aligned(8); 473 - 474 - /* 475 410 * Driver specific state. 
476 411 */ 477 412 ··· 481 488 struct workqueue_struct *wq; 482 489 483 490 /* hypercall arg, must not cross page boundary */ 484 - struct retarget_msi_interrupt retarget_msi_interrupt_params; 491 + struct hv_retarget_device_interrupt retarget_msi_interrupt_params; 485 492 486 493 /* 487 494 * Don't put anything here: retarget_msi_interrupt_params must be last ··· 498 505 struct hv_pcibus_device *bus; 499 506 }; 500 507 508 + struct hv_pcidev_description { 509 + u16 v_id; /* vendor ID */ 510 + u16 d_id; /* device ID */ 511 + u8 rev; 512 + u8 prog_intf; 513 + u8 subclass; 514 + u8 base_class; 515 + u32 subsystem_id; 516 + union win_slot_encoding win_slot; 517 + u32 ser; /* serial number */ 518 + u32 flags; 519 + u16 virtual_numa_node; 520 + }; 521 + 501 522 struct hv_dr_state { 502 523 struct list_head list_entry; 503 524 u32 device_count; 504 - struct pci_function_description func[0]; 525 + struct hv_pcidev_description func[]; 505 526 }; 506 527 507 528 enum hv_pcichild_state { ··· 532 525 refcount_t refs; 533 526 enum hv_pcichild_state state; 534 527 struct pci_slot *pci_slot; 535 - struct pci_function_description desc; 528 + struct hv_pcidev_description desc; 536 529 bool reported_missing; 537 530 struct hv_pcibus_device *hbus; 538 531 struct work_struct wrk; ··· 1191 1184 { 1192 1185 struct msi_desc *msi_desc = irq_data_get_msi_desc(data); 1193 1186 struct irq_cfg *cfg = irqd_cfg(data); 1194 - struct retarget_msi_interrupt *params; 1187 + struct hv_retarget_device_interrupt *params; 1195 1188 struct hv_pcibus_device *hbus; 1196 1189 struct cpumask *dest; 1197 1190 cpumask_var_t tmp; ··· 1213 1206 memset(params, 0, sizeof(*params)); 1214 1207 params->partition_id = HV_PARTITION_ID_SELF; 1215 1208 params->int_entry.source = 1; /* MSI(-X) */ 1216 - params->int_entry.address = msi_desc->msg.address_lo; 1217 - params->int_entry.data = msi_desc->msg.data; 1209 + hv_set_msi_entry_from_desc(&params->int_entry.msi_entry, msi_desc); 1218 1210 params->device_id = 
(hbus->hdev->dev_instance.b[5] << 24) | 1219 1211 (hbus->hdev->dev_instance.b[4] << 16) | 1220 1212 (hbus->hdev->dev_instance.b[7] << 8) | ··· 1407 1401 break; 1408 1402 1409 1403 case PCI_PROTOCOL_VERSION_1_2: 1404 + case PCI_PROTOCOL_VERSION_1_3: 1410 1405 size = hv_compose_msi_req_v2(&ctxt.int_pkts.v2, 1411 1406 dest, 1412 1407 hpdev->desc.win_slot.slot, ··· 1806 1799 } 1807 1800 } 1808 1801 1802 + /* 1803 + * Set NUMA node for the devices on the bus 1804 + */ 1805 + static void hv_pci_assign_numa_node(struct hv_pcibus_device *hbus) 1806 + { 1807 + struct pci_dev *dev; 1808 + struct pci_bus *bus = hbus->pci_bus; 1809 + struct hv_pci_dev *hv_dev; 1810 + 1811 + list_for_each_entry(dev, &bus->devices, bus_list) { 1812 + hv_dev = get_pcichild_wslot(hbus, devfn_to_wslot(dev->devfn)); 1813 + if (!hv_dev) 1814 + continue; 1815 + 1816 + if (hv_dev->desc.flags & HV_PCI_DEVICE_FLAG_NUMA_AFFINITY) 1817 + set_dev_node(&dev->dev, hv_dev->desc.virtual_numa_node); 1818 + 1819 + put_pcichild(hv_dev); 1820 + } 1821 + } 1822 + 1809 1823 /** 1810 1824 * create_root_hv_pci_bus() - Expose a new root PCI bus 1811 1825 * @hbus: Root PCI bus, as understood by this driver ··· 1849 1821 1850 1822 pci_lock_rescan_remove(); 1851 1823 pci_scan_child_bus(hbus->pci_bus); 1824 + hv_pci_assign_numa_node(hbus); 1852 1825 pci_bus_assign_resources(hbus->pci_bus); 1853 1826 hv_pci_assign_slots(hbus); 1854 1827 pci_bus_add_devices(hbus->pci_bus); ··· 1906 1877 * Return: Pointer to the new tracking struct 1907 1878 */ 1908 1879 static struct hv_pci_dev *new_pcichild_device(struct hv_pcibus_device *hbus, 1909 - struct pci_function_description *desc) 1880 + struct hv_pcidev_description *desc) 1910 1881 { 1911 1882 struct hv_pci_dev *hpdev; 1912 1883 struct pci_child_message *res_req; ··· 2017 1988 { 2018 1989 u32 child_no; 2019 1990 bool found; 2020 - struct pci_function_description *new_desc; 1991 + struct hv_pcidev_description *new_desc; 2021 1992 struct hv_pci_dev *hpdev; 2022 1993 struct 
hv_pcibus_device *hbus; 2023 1994 struct list_head removed; ··· 2118 2089 */ 2119 2090 pci_lock_rescan_remove(); 2120 2091 pci_scan_child_bus(hbus->pci_bus); 2092 + hv_pci_assign_numa_node(hbus); 2121 2093 hv_pci_assign_slots(hbus); 2122 2094 pci_unlock_rescan_remove(); 2123 2095 break; ··· 2137 2107 } 2138 2108 2139 2109 /** 2140 - * hv_pci_devices_present() - Handles list of new children 2110 + * hv_pci_start_relations_work() - Queue work to start device discovery 2141 2111 * @hbus: Root PCI bus, as understood by this driver 2142 - * @relations: Packet from host listing children 2112 + * @dr: The list of children returned from host 2143 2113 * 2144 - * This function is invoked whenever a new list of devices for 2145 - * this bus appears. 2114 + * Return: 0 on success, -errno on failure 2146 2115 */ 2147 - static void hv_pci_devices_present(struct hv_pcibus_device *hbus, 2148 - struct pci_bus_relations *relations) 2116 + static int hv_pci_start_relations_work(struct hv_pcibus_device *hbus, 2117 + struct hv_dr_state *dr) 2149 2118 { 2150 - struct hv_dr_state *dr; 2151 2119 struct hv_dr_work *dr_wrk; 2152 2120 unsigned long flags; 2153 2121 bool pending_dr; ··· 2153 2125 if (hbus->state == hv_pcibus_removing) { 2154 2126 dev_info(&hbus->hdev->device, 2155 2127 "PCI VMBus BUS_RELATIONS: ignored\n"); 2156 - return; 2128 + return -ENOENT; 2157 2129 } 2158 2130 2159 2131 dr_wrk = kzalloc(sizeof(*dr_wrk), GFP_NOWAIT); 2160 2132 if (!dr_wrk) 2161 - return; 2162 - 2163 - dr = kzalloc(offsetof(struct hv_dr_state, func) + 2164 - (sizeof(struct pci_function_description) * 2165 - (relations->device_count)), GFP_NOWAIT); 2166 - if (!dr) { 2167 - kfree(dr_wrk); 2168 - return; 2169 - } 2133 + return -ENOMEM; 2170 2134 2171 2135 INIT_WORK(&dr_wrk->wrk, pci_devices_present_work); 2172 2136 dr_wrk->bus = hbus; 2173 - dr->device_count = relations->device_count; 2174 - if (dr->device_count != 0) { 2175 - memcpy(dr->func, relations->func, 2176 - sizeof(struct pci_function_description) 
* 2177 - dr->device_count); 2178 - } 2179 2137 2180 2138 spin_lock_irqsave(&hbus->device_list_lock, flags); 2181 2139 /* ··· 2179 2165 get_hvpcibus(hbus); 2180 2166 queue_work(hbus->wq, &dr_wrk->wrk); 2181 2167 } 2168 + 2169 + return 0; 2170 + } 2171 + 2172 + /** 2173 + * hv_pci_devices_present() - Handle list of new children 2174 + * @hbus: Root PCI bus, as understood by this driver 2175 + * @relations: Packet from host listing children 2176 + * 2177 + * Process a new list of devices on the bus. The list of devices is 2178 + * discovered by VSP and sent to us via VSP message PCI_BUS_RELATIONS, 2179 + * whenever a new list of devices for this bus appears. 2180 + */ 2181 + static void hv_pci_devices_present(struct hv_pcibus_device *hbus, 2182 + struct pci_bus_relations *relations) 2183 + { 2184 + struct hv_dr_state *dr; 2185 + int i; 2186 + 2187 + dr = kzalloc(offsetof(struct hv_dr_state, func) + 2188 + (sizeof(struct hv_pcidev_description) * 2189 + (relations->device_count)), GFP_NOWAIT); 2190 + 2191 + if (!dr) 2192 + return; 2193 + 2194 + dr->device_count = relations->device_count; 2195 + for (i = 0; i < dr->device_count; i++) { 2196 + dr->func[i].v_id = relations->func[i].v_id; 2197 + dr->func[i].d_id = relations->func[i].d_id; 2198 + dr->func[i].rev = relations->func[i].rev; 2199 + dr->func[i].prog_intf = relations->func[i].prog_intf; 2200 + dr->func[i].subclass = relations->func[i].subclass; 2201 + dr->func[i].base_class = relations->func[i].base_class; 2202 + dr->func[i].subsystem_id = relations->func[i].subsystem_id; 2203 + dr->func[i].win_slot = relations->func[i].win_slot; 2204 + dr->func[i].ser = relations->func[i].ser; 2205 + } 2206 + 2207 + if (hv_pci_start_relations_work(hbus, dr)) 2208 + kfree(dr); 2209 + } 2210 + 2211 + /** 2212 + * hv_pci_devices_present2() - Handle list of new children 2213 + * @hbus: Root PCI bus, as understood by this driver 2214 + * @relations: Packet from host listing children 2215 + * 2216 + * This function is the v2 version of 
hv_pci_devices_present() 2217 + */ 2218 + static void hv_pci_devices_present2(struct hv_pcibus_device *hbus, 2219 + struct pci_bus_relations2 *relations) 2220 + { 2221 + struct hv_dr_state *dr; 2222 + int i; 2223 + 2224 + dr = kzalloc(offsetof(struct hv_dr_state, func) + 2225 + (sizeof(struct hv_pcidev_description) * 2226 + (relations->device_count)), GFP_NOWAIT); 2227 + 2228 + if (!dr) 2229 + return; 2230 + 2231 + dr->device_count = relations->device_count; 2232 + for (i = 0; i < dr->device_count; i++) { 2233 + dr->func[i].v_id = relations->func[i].v_id; 2234 + dr->func[i].d_id = relations->func[i].d_id; 2235 + dr->func[i].rev = relations->func[i].rev; 2236 + dr->func[i].prog_intf = relations->func[i].prog_intf; 2237 + dr->func[i].subclass = relations->func[i].subclass; 2238 + dr->func[i].base_class = relations->func[i].base_class; 2239 + dr->func[i].subsystem_id = relations->func[i].subsystem_id; 2240 + dr->func[i].win_slot = relations->func[i].win_slot; 2241 + dr->func[i].ser = relations->func[i].ser; 2242 + dr->func[i].flags = relations->func[i].flags; 2243 + dr->func[i].virtual_numa_node = 2244 + relations->func[i].virtual_numa_node; 2245 + } 2246 + 2247 + if (hv_pci_start_relations_work(hbus, dr)) 2248 + kfree(dr); 2182 2249 } 2183 2250 2184 2251 /** ··· 2375 2280 struct pci_response *response; 2376 2281 struct pci_incoming_message *new_message; 2377 2282 struct pci_bus_relations *bus_rel; 2283 + struct pci_bus_relations2 *bus_rel2; 2378 2284 struct pci_dev_inval_block *inval; 2379 2285 struct pci_dev_incoming *dev_message; 2380 2286 struct hv_pci_dev *hpdev; ··· 2441 2345 } 2442 2346 2443 2347 hv_pci_devices_present(hbus, bus_rel); 2348 + break; 2349 + 2350 + case PCI_BUS_RELATIONS2: 2351 + 2352 + bus_rel2 = (struct pci_bus_relations2 *)buffer; 2353 + if (bytes_recvd < 2354 + offsetof(struct pci_bus_relations2, func) + 2355 + (sizeof(struct pci_function_description2) * 2356 + (bus_rel2->device_count))) { 2357 + dev_err(&hbus->hdev->device, 2358 + "bus 
relations v2 too small\n"); 2359 + break; 2360 + } 2361 + 2362 + hv_pci_devices_present2(hbus, bus_rel2); 2444 2363 break; 2445 2364 2446 2365 case PCI_EJECT: ··· 3033 2922 * positive by using kmemleak_alloc() and kmemleak_free() to ask 3034 2923 * kmemleak to track and scan the hbus buffer. 3035 2924 */ 3036 - hbus = (struct hv_pcibus_device *)kzalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL); 2925 + hbus = kzalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL); 3037 2926 if (!hbus) 3038 2927 return -ENOMEM; 3039 2928 hbus->state = hv_pcibus_init; ··· 3169 3058 free_dom: 3170 3059 hv_put_dom_num(hbus->sysdata.domain); 3171 3060 free_bus: 3172 - free_page((unsigned long)hbus); 3061 + kfree(hbus); 3173 3062 return ret; 3174 3063 } 3175 3064 ··· 3180 3069 struct pci_packet teardown_packet; 3181 3070 u8 buffer[sizeof(struct pci_message)]; 3182 3071 } pkt; 3183 - struct pci_bus_relations relations; 3072 + struct hv_dr_state *dr; 3184 3073 struct hv_pci_compl comp_pkt; 3185 3074 int ret; 3186 3075 ··· 3193 3082 3194 3083 if (!hibernating) { 3195 3084 /* Delete any children which might still exist. */ 3196 - memset(&relations, 0, sizeof(relations)); 3197 - hv_pci_devices_present(hbus, &relations); 3085 + dr = kzalloc(sizeof(*dr), GFP_KERNEL); 3086 + if (dr && hv_pci_start_relations_work(hbus, dr)) 3087 + kfree(dr); 3198 3088 } 3199 3089 3200 3090 ret = hv_send_resources_released(hdev);
+44 -139
drivers/pci/controller/pci-tegra.c
···
 	int irq;
 
 	struct resource cs;
-	struct resource io;
-	struct resource pio;
-	struct resource mem;
-	struct resource prefetch;
-	struct resource busn;
-
-	struct {
-		resource_size_t mem;
-		resource_size_t io;
-	} offset;
 
 	struct clk *pex_clk;
 	struct clk *afi_clk;
···
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0e1c, tegra_pcie_relax_enable);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0e1d, tegra_pcie_relax_enable);
 
-static int tegra_pcie_request_resources(struct tegra_pcie *pcie)
-{
-	struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
-	struct list_head *windows = &host->windows;
-	struct device *dev = pcie->dev;
-	int err;
-
-	pci_add_resource_offset(windows, &pcie->pio, pcie->offset.io);
-	pci_add_resource_offset(windows, &pcie->mem, pcie->offset.mem);
-	pci_add_resource_offset(windows, &pcie->prefetch, pcie->offset.mem);
-	pci_add_resource(windows, &pcie->busn);
-
-	err = devm_request_pci_bus_resources(dev, windows);
-	if (err < 0) {
-		pci_free_resource_list(windows);
-		return err;
-	}
-
-	pci_remap_iospace(&pcie->pio, pcie->io.start);
-
-	return 0;
-}
-
-static void tegra_pcie_free_resources(struct tegra_pcie *pcie)
-{
-	struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
-	struct list_head *windows = &host->windows;
-
-	pci_unmap_iospace(&pcie->pio);
-	pci_free_resource_list(windows);
-}
-
 static int tegra_pcie_map_irq(const struct pci_dev *pdev, u8 slot, u8 pin)
 {
 	struct tegra_pcie *pcie = pdev->bus->sysdata;
···
  */
 static void tegra_pcie_setup_translations(struct tegra_pcie *pcie)
 {
-	u32 fpci_bar, size, axi_address;
+	u32 size;
+	struct resource_entry *entry;
+	struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie);
 
 	/* Bar 0: type 1 extended configuration space */
 	size = resource_size(&pcie->cs);
 	afi_writel(pcie, pcie->cs.start, AFI_AXI_BAR0_START);
 	afi_writel(pcie, size >> 12, AFI_AXI_BAR0_SZ);
 
-	/* Bar 1: downstream IO bar */
-	fpci_bar = 0xfdfc0000;
-	size = resource_size(&pcie->io);
-	axi_address = pcie->io.start;
-	afi_writel(pcie, axi_address, AFI_AXI_BAR1_START);
-	afi_writel(pcie, size >> 12, AFI_AXI_BAR1_SZ);
-	afi_writel(pcie, fpci_bar, AFI_FPCI_BAR1);
+	resource_list_for_each_entry(entry, &bridge->windows) {
+		u32 fpci_bar, axi_address;
+		struct resource *res = entry->res;
 
-	/* Bar 2: prefetchable memory BAR */
-	fpci_bar = (((pcie->prefetch.start >> 12) & 0x0fffffff) << 4) | 0x1;
-	size = resource_size(&pcie->prefetch);
-	axi_address = pcie->prefetch.start;
-	afi_writel(pcie, axi_address, AFI_AXI_BAR2_START);
-	afi_writel(pcie, size >> 12, AFI_AXI_BAR2_SZ);
-	afi_writel(pcie, fpci_bar, AFI_FPCI_BAR2);
+		size = resource_size(res);
 
-	/* Bar 3: non prefetchable memory BAR */
-	fpci_bar = (((pcie->mem.start >> 12) & 0x0fffffff) << 4) | 0x1;
-	size = resource_size(&pcie->mem);
-	axi_address = pcie->mem.start;
-	afi_writel(pcie, axi_address, AFI_AXI_BAR3_START);
-	afi_writel(pcie, size >> 12, AFI_AXI_BAR3_SZ);
-	afi_writel(pcie, fpci_bar, AFI_FPCI_BAR3);
+		switch (resource_type(res)) {
+		case IORESOURCE_IO:
+			/* Bar 1: downstream IO bar */
+			fpci_bar = 0xfdfc0000;
+			axi_address = pci_pio_to_address(res->start);
+			afi_writel(pcie, axi_address, AFI_AXI_BAR1_START);
+			afi_writel(pcie, size >> 12, AFI_AXI_BAR1_SZ);
+			afi_writel(pcie, fpci_bar, AFI_FPCI_BAR1);
+			break;
+		case IORESOURCE_MEM:
+			fpci_bar = (((res->start >> 12) & 0x0fffffff) << 4) | 0x1;
+			axi_address = res->start;
+
+			if (res->flags & IORESOURCE_PREFETCH) {
+				/* Bar 2: prefetchable memory BAR */
+				afi_writel(pcie, axi_address, AFI_AXI_BAR2_START);
+				afi_writel(pcie, size >> 12, AFI_AXI_BAR2_SZ);
+				afi_writel(pcie, fpci_bar, AFI_FPCI_BAR2);
+			} else {
+				/* Bar 3: non prefetchable memory BAR */
+				afi_writel(pcie, axi_address, AFI_AXI_BAR3_START);
+				afi_writel(pcie, size >> 12, AFI_AXI_BAR3_SZ);
+				afi_writel(pcie, fpci_bar, AFI_FPCI_BAR3);
+			}
+			break;
+		}
+	}
 
 	/* NULL out the remaining BARs as they are not used */
 	afi_writel(pcie, 0, AFI_AXI_BAR4_START);
···
 	struct device *dev = pcie->dev;
 	struct device_node *np = dev->of_node, *port;
 	const struct tegra_pcie_soc *soc = pcie->soc;
-	struct of_pci_range_parser parser;
-	struct of_pci_range range;
 	u32 lanes = 0, mask = 0;
 	unsigned int lane = 0;
-	struct resource res;
 	int err;
-
-	if (of_pci_range_parser_init(&parser, np)) {
-		dev_err(dev, "missing \"ranges\" property\n");
-		return -EINVAL;
-	}
-
-	for_each_of_pci_range(&parser, &range) {
-		err = of_pci_range_to_resource(&range, np, &res);
-		if (err < 0)
-			return err;
-
-		switch (res.flags & IORESOURCE_TYPE_BITS) {
-		case IORESOURCE_IO:
-			/* Track the bus -> CPU I/O mapping offset. */
-			pcie->offset.io = res.start - range.pci_addr;
-
-			memcpy(&pcie->pio, &res, sizeof(res));
-			pcie->pio.name = np->full_name;
-
-			/*
-			 * The Tegra PCIe host bridge uses this to program the
-			 * mapping of the I/O space to the physical address,
-			 * so we override the .start and .end fields here that
-			 * of_pci_range_to_resource() converted to I/O space.
-			 * We also set the IORESOURCE_MEM type to clarify that
-			 * the resource is in the physical memory space.
-			 */
-			pcie->io.start = range.cpu_addr;
-			pcie->io.end = range.cpu_addr + range.size - 1;
-			pcie->io.flags = IORESOURCE_MEM;
-			pcie->io.name = "I/O";
-
-			memcpy(&res, &pcie->io, sizeof(res));
-			break;
-
-		case IORESOURCE_MEM:
-			/*
-			 * Track the bus -> CPU memory mapping offset. This
-			 * assumes that the prefetchable and non-prefetchable
-			 * regions will be the last of type IORESOURCE_MEM in
-			 * the ranges property.
-			 */
-			pcie->offset.mem = res.start - range.pci_addr;
-
-			if (res.flags & IORESOURCE_PREFETCH) {
-				memcpy(&pcie->prefetch, &res, sizeof(res));
-				pcie->prefetch.name = "prefetchable";
-			} else {
-				memcpy(&pcie->mem, &res, sizeof(res));
-				pcie->mem.name = "non-prefetchable";
-			}
-			break;
-		}
-	}
-
-	err = of_pci_parse_bus_range(np, &pcie->busn);
-	if (err < 0) {
-		dev_err(dev, "failed to parse ranges property: %d\n", err);
-		pcie->busn.name = np->name;
-		pcie->busn.start = 0;
-		pcie->busn.end = 0xff;
-		pcie->busn.flags = IORESOURCE_BUS;
-	}
 
 	/* parse root ports */
 	for_each_child_of_node(np, port) {
···
 	struct pci_host_bridge *host;
 	struct tegra_pcie *pcie;
 	struct pci_bus *child;
+	struct resource *bus;
 	int err;
 
 	host = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
···
 	pcie->soc = of_device_get_match_data(dev);
 	INIT_LIST_HEAD(&pcie->ports);
 	pcie->dev = dev;
+
+	err = pci_parse_request_of_pci_ranges(dev, &host->windows, NULL, &bus);
+	if (err) {
+		dev_err(dev, "Getting bridge resources failed\n");
+		return err;
+	}
 
 	err = tegra_pcie_parse_dt(pcie);
 	if (err < 0)
···
 		goto teardown_msi;
 	}
 
-	err = tegra_pcie_request_resources(pcie);
-	if (err)
-		goto pm_runtime_put;
-
-	host->busnr = pcie->busn.start;
+	host->busnr = bus->start;
 	host->dev.parent = &pdev->dev;
 	host->ops = &tegra_pcie_ops;
 	host->map_irq = tegra_pcie_map_irq;
···
 	err = pci_scan_root_bus_bridge(host);
 	if (err < 0) {
 		dev_err(dev, "failed to register host: %d\n", err);
-		goto free_resources;
+		goto pm_runtime_put;
 	}
 
 	pci_bus_size_bridges(host->bus);
···
 
 	return 0;
 
-free_resources:
-	tegra_pcie_free_resources(pcie);
 pm_runtime_put:
 	pm_runtime_put_sync(pcie->dev);
 	pm_runtime_disable(pcie->dev);
···
 
 	pci_stop_root_bus(host->bus);
 	pci_remove_root_bus(host->bus);
-	tegra_pcie_free_resources(pcie);
 	pm_runtime_put_sync(pcie->dev);
 	pm_runtime_disable(pcie->dev);
+2 -2
drivers/pci/controller/pcie-brcmstb.c
···
 	cls = FIELD_GET(PCI_EXP_LNKSTA_CLS, lnksta);
 	nlw = FIELD_GET(PCI_EXP_LNKSTA_NLW, lnksta);
 	dev_info(dev, "link up, %s x%u %s\n",
-		 PCIE_SPEED2STR(cls + PCI_SPEED_133MHz_PCIX_533),
-		 nlw, ssc_good ? "(SSC)" : "(!SSC)");
+		 pci_speed_string(pcie_link_speed[cls]), nlw,
+		 ssc_good ? "(SSC)" : "(!SSC)");
 
 	/* PCIe->SCB endian mode for BAR */
 	tmp = readl(base + PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1);
+121 -443
drivers/pci/controller/pcie-mobiveil.c → drivers/pci/controller/mobiveil/pcie-mobiveil-host.c
··· 3 3 * PCIe host controller driver for Mobiveil PCIe Host controller 4 4 * 5 5 * Copyright (c) 2018 Mobiveil Inc. 6 + * Copyright 2019-2020 NXP 7 + * 6 8 * Author: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in> 9 + * Hou Zhiqiang <Zhiqiang.Hou@nxp.com> 7 10 */ 8 11 9 - #include <linux/delay.h> 10 12 #include <linux/init.h> 11 13 #include <linux/interrupt.h> 12 14 #include <linux/irq.h> ··· 25 23 #include <linux/platform_device.h> 26 24 #include <linux/slab.h> 27 25 28 - #include "../pci.h" 29 - 30 - /* register offsets and bit positions */ 31 - 32 - /* 33 - * translation tables are grouped into windows, each window registers are 34 - * grouped into blocks of 4 or 16 registers each 35 - */ 36 - #define PAB_REG_BLOCK_SIZE 16 37 - #define PAB_EXT_REG_BLOCK_SIZE 4 38 - 39 - #define PAB_REG_ADDR(offset, win) \ 40 - (offset + (win * PAB_REG_BLOCK_SIZE)) 41 - #define PAB_EXT_REG_ADDR(offset, win) \ 42 - (offset + (win * PAB_EXT_REG_BLOCK_SIZE)) 43 - 44 - #define LTSSM_STATUS 0x0404 45 - #define LTSSM_STATUS_L0_MASK 0x3f 46 - #define LTSSM_STATUS_L0 0x2d 47 - 48 - #define PAB_CTRL 0x0808 49 - #define AMBA_PIO_ENABLE_SHIFT 0 50 - #define PEX_PIO_ENABLE_SHIFT 1 51 - #define PAGE_SEL_SHIFT 13 52 - #define PAGE_SEL_MASK 0x3f 53 - #define PAGE_LO_MASK 0x3ff 54 - #define PAGE_SEL_OFFSET_SHIFT 10 55 - 56 - #define PAB_AXI_PIO_CTRL 0x0840 57 - #define APIO_EN_MASK 0xf 58 - 59 - #define PAB_PEX_PIO_CTRL 0x08c0 60 - #define PIO_ENABLE_SHIFT 0 61 - 62 - #define PAB_INTP_AMBA_MISC_ENB 0x0b0c 63 - #define PAB_INTP_AMBA_MISC_STAT 0x0b1c 64 - #define PAB_INTP_INTX_MASK 0x01e0 65 - #define PAB_INTP_MSI_MASK 0x8 66 - 67 - #define PAB_AXI_AMAP_CTRL(win) PAB_REG_ADDR(0x0ba0, win) 68 - #define WIN_ENABLE_SHIFT 0 69 - #define WIN_TYPE_SHIFT 1 70 - #define WIN_TYPE_MASK 0x3 71 - #define WIN_SIZE_MASK 0xfffffc00 72 - 73 - #define PAB_EXT_AXI_AMAP_SIZE(win) PAB_EXT_REG_ADDR(0xbaf0, win) 74 - 75 - #define PAB_EXT_AXI_AMAP_AXI_WIN(win) PAB_EXT_REG_ADDR(0x80a0, win) 76 - #define 
PAB_AXI_AMAP_AXI_WIN(win) PAB_REG_ADDR(0x0ba4, win) 77 - #define AXI_WINDOW_ALIGN_MASK 3 78 - 79 - #define PAB_AXI_AMAP_PEX_WIN_L(win) PAB_REG_ADDR(0x0ba8, win) 80 - #define PAB_BUS_SHIFT 24 81 - #define PAB_DEVICE_SHIFT 19 82 - #define PAB_FUNCTION_SHIFT 16 83 - 84 - #define PAB_AXI_AMAP_PEX_WIN_H(win) PAB_REG_ADDR(0x0bac, win) 85 - #define PAB_INTP_AXI_PIO_CLASS 0x474 86 - 87 - #define PAB_PEX_AMAP_CTRL(win) PAB_REG_ADDR(0x4ba0, win) 88 - #define AMAP_CTRL_EN_SHIFT 0 89 - #define AMAP_CTRL_TYPE_SHIFT 1 90 - #define AMAP_CTRL_TYPE_MASK 3 91 - 92 - #define PAB_EXT_PEX_AMAP_SIZEN(win) PAB_EXT_REG_ADDR(0xbef0, win) 93 - #define PAB_EXT_PEX_AMAP_AXI_WIN(win) PAB_EXT_REG_ADDR(0xb4a0, win) 94 - #define PAB_PEX_AMAP_AXI_WIN(win) PAB_REG_ADDR(0x4ba4, win) 95 - #define PAB_PEX_AMAP_PEX_WIN_L(win) PAB_REG_ADDR(0x4ba8, win) 96 - #define PAB_PEX_AMAP_PEX_WIN_H(win) PAB_REG_ADDR(0x4bac, win) 97 - 98 - /* starting offset of INTX bits in status register */ 99 - #define PAB_INTX_START 5 100 - 101 - /* supported number of MSI interrupts */ 102 - #define PCI_NUM_MSI 16 103 - 104 - /* MSI registers */ 105 - #define MSI_BASE_LO_OFFSET 0x04 106 - #define MSI_BASE_HI_OFFSET 0x08 107 - #define MSI_SIZE_OFFSET 0x0c 108 - #define MSI_ENABLE_OFFSET 0x14 109 - #define MSI_STATUS_OFFSET 0x18 110 - #define MSI_DATA_OFFSET 0x20 111 - #define MSI_ADDR_L_OFFSET 0x24 112 - #define MSI_ADDR_H_OFFSET 0x28 113 - 114 - /* outbound and inbound window definitions */ 115 - #define WIN_NUM_0 0 116 - #define WIN_NUM_1 1 117 - #define CFG_WINDOW_TYPE 0 118 - #define IO_WINDOW_TYPE 1 119 - #define MEM_WINDOW_TYPE 2 120 - #define IB_WIN_SIZE ((u64)256 * 1024 * 1024 * 1024) 121 - #define MAX_PIO_WINDOWS 8 122 - 123 - /* Parameters for the waiting for link up routine */ 124 - #define LINK_WAIT_MAX_RETRIES 10 125 - #define LINK_WAIT_MIN 90000 126 - #define LINK_WAIT_MAX 100000 127 - 128 - #define PAGED_ADDR_BNDRY 0xc00 129 - #define OFFSET_TO_PAGE_ADDR(off) \ 130 - ((off & PAGE_LO_MASK) | PAGED_ADDR_BNDRY) 131 
- #define OFFSET_TO_PAGE_IDX(off) \ 132 - ((off >> PAGE_SEL_OFFSET_SHIFT) & PAGE_SEL_MASK) 133 - 134 - struct mobiveil_msi { /* MSI information */ 135 - struct mutex lock; /* protect bitmap variable */ 136 - struct irq_domain *msi_domain; 137 - struct irq_domain *dev_domain; 138 - phys_addr_t msi_pages_phys; 139 - int num_of_vectors; 140 - DECLARE_BITMAP(msi_irq_in_use, PCI_NUM_MSI); 141 - }; 142 - 143 - struct mobiveil_pcie { 144 - struct platform_device *pdev; 145 - void __iomem *config_axi_slave_base; /* endpoint config base */ 146 - void __iomem *csr_axi_slave_base; /* root port config base */ 147 - void __iomem *apb_csr_base; /* MSI register base */ 148 - phys_addr_t pcie_reg_base; /* Physical PCIe Controller Base */ 149 - struct irq_domain *intx_domain; 150 - raw_spinlock_t intx_mask_lock; 151 - int irq; 152 - int apio_wins; 153 - int ppio_wins; 154 - int ob_wins_configured; /* configured outbound windows */ 155 - int ib_wins_configured; /* configured inbound windows */ 156 - struct resource *ob_io_res; 157 - char root_bus_nr; 158 - struct mobiveil_msi msi; 159 - }; 160 - 161 - /* 162 - * mobiveil_pcie_sel_page - routine to access paged register 163 - * 164 - * Registers whose address greater than PAGED_ADDR_BNDRY (0xc00) are paged, 165 - * for this scheme to work extracted higher 6 bits of the offset will be 166 - * written to pg_sel field of PAB_CTRL register and rest of the lower 10 167 - * bits enabled with PAGED_ADDR_BNDRY are used as offset of the register. 
168 - */ 169 - static void mobiveil_pcie_sel_page(struct mobiveil_pcie *pcie, u8 pg_idx) 170 - { 171 - u32 val; 172 - 173 - val = readl(pcie->csr_axi_slave_base + PAB_CTRL); 174 - val &= ~(PAGE_SEL_MASK << PAGE_SEL_SHIFT); 175 - val |= (pg_idx & PAGE_SEL_MASK) << PAGE_SEL_SHIFT; 176 - 177 - writel(val, pcie->csr_axi_slave_base + PAB_CTRL); 178 - } 179 - 180 - static void *mobiveil_pcie_comp_addr(struct mobiveil_pcie *pcie, u32 off) 181 - { 182 - if (off < PAGED_ADDR_BNDRY) { 183 - /* For directly accessed registers, clear the pg_sel field */ 184 - mobiveil_pcie_sel_page(pcie, 0); 185 - return pcie->csr_axi_slave_base + off; 186 - } 187 - 188 - mobiveil_pcie_sel_page(pcie, OFFSET_TO_PAGE_IDX(off)); 189 - return pcie->csr_axi_slave_base + OFFSET_TO_PAGE_ADDR(off); 190 - } 191 - 192 - static int mobiveil_pcie_read(void __iomem *addr, int size, u32 *val) 193 - { 194 - if ((uintptr_t)addr & (size - 1)) { 195 - *val = 0; 196 - return PCIBIOS_BAD_REGISTER_NUMBER; 197 - } 198 - 199 - switch (size) { 200 - case 4: 201 - *val = readl(addr); 202 - break; 203 - case 2: 204 - *val = readw(addr); 205 - break; 206 - case 1: 207 - *val = readb(addr); 208 - break; 209 - default: 210 - *val = 0; 211 - return PCIBIOS_BAD_REGISTER_NUMBER; 212 - } 213 - 214 - return PCIBIOS_SUCCESSFUL; 215 - } 216 - 217 - static int mobiveil_pcie_write(void __iomem *addr, int size, u32 val) 218 - { 219 - if ((uintptr_t)addr & (size - 1)) 220 - return PCIBIOS_BAD_REGISTER_NUMBER; 221 - 222 - switch (size) { 223 - case 4: 224 - writel(val, addr); 225 - break; 226 - case 2: 227 - writew(val, addr); 228 - break; 229 - case 1: 230 - writeb(val, addr); 231 - break; 232 - default: 233 - return PCIBIOS_BAD_REGISTER_NUMBER; 234 - } 235 - 236 - return PCIBIOS_SUCCESSFUL; 237 - } 238 - 239 - static u32 mobiveil_csr_read(struct mobiveil_pcie *pcie, u32 off, size_t size) 240 - { 241 - void *addr; 242 - u32 val; 243 - int ret; 244 - 245 - addr = mobiveil_pcie_comp_addr(pcie, off); 246 - 247 - ret = 
mobiveil_pcie_read(addr, size, &val); 248 - if (ret) 249 - dev_err(&pcie->pdev->dev, "read CSR address failed\n"); 250 - 251 - return val; 252 - } 253 - 254 - static void mobiveil_csr_write(struct mobiveil_pcie *pcie, u32 val, u32 off, 255 - size_t size) 256 - { 257 - void *addr; 258 - int ret; 259 - 260 - addr = mobiveil_pcie_comp_addr(pcie, off); 261 - 262 - ret = mobiveil_pcie_write(addr, size, val); 263 - if (ret) 264 - dev_err(&pcie->pdev->dev, "write CSR address failed\n"); 265 - } 266 - 267 - static u32 mobiveil_csr_readl(struct mobiveil_pcie *pcie, u32 off) 268 - { 269 - return mobiveil_csr_read(pcie, off, 0x4); 270 - } 271 - 272 - static void mobiveil_csr_writel(struct mobiveil_pcie *pcie, u32 val, u32 off) 273 - { 274 - mobiveil_csr_write(pcie, val, off, 0x4); 275 - } 276 - 277 - static bool mobiveil_pcie_link_up(struct mobiveil_pcie *pcie) 278 - { 279 - return (mobiveil_csr_readl(pcie, LTSSM_STATUS) & 280 - LTSSM_STATUS_L0_MASK) == LTSSM_STATUS_L0; 281 - } 26 + #include "pcie-mobiveil.h" 282 27 283 28 static bool mobiveil_pcie_valid_device(struct pci_bus *bus, unsigned int devfn) 284 29 { 285 30 struct mobiveil_pcie *pcie = bus->sysdata; 31 + struct mobiveil_root_port *rp = &pcie->rp; 286 32 287 33 /* Only one device down on each root port */ 288 - if ((bus->number == pcie->root_bus_nr) && (devfn > 0)) 34 + if ((bus->number == rp->root_bus_nr) && (devfn > 0)) 289 35 return false; 290 36 291 37 /* 292 38 * Do not read more than one device on the bus directly 293 39 * attached to RC 294 40 */ 295 - if ((bus->primary == pcie->root_bus_nr) && (PCI_SLOT(devfn) > 0)) 41 + if ((bus->primary == rp->root_bus_nr) && (PCI_SLOT(devfn) > 0)) 296 42 return false; 297 43 298 44 return true; ··· 54 304 unsigned int devfn, int where) 55 305 { 56 306 struct mobiveil_pcie *pcie = bus->sysdata; 307 + struct mobiveil_root_port *rp = &pcie->rp; 57 308 u32 value; 58 309 59 310 if (!mobiveil_pcie_valid_device(bus, devfn)) 60 311 return NULL; 61 312 62 313 /* RC config access */ 
63 - if (bus->number == pcie->root_bus_nr) 314 + if (bus->number == rp->root_bus_nr) 64 315 return pcie->csr_axi_slave_base + where; 65 316 66 317 /* ··· 76 325 77 326 mobiveil_csr_writel(pcie, value, PAB_AXI_AMAP_PEX_WIN_L(WIN_NUM_0)); 78 327 79 - return pcie->config_axi_slave_base + where; 328 + return rp->config_axi_slave_base + where; 80 329 } 81 330 82 331 static struct pci_ops mobiveil_pcie_ops = { ··· 90 339 struct irq_chip *chip = irq_desc_get_chip(desc); 91 340 struct mobiveil_pcie *pcie = irq_desc_get_handler_data(desc); 92 341 struct device *dev = &pcie->pdev->dev; 93 - struct mobiveil_msi *msi = &pcie->msi; 342 + struct mobiveil_root_port *rp = &pcie->rp; 343 + struct mobiveil_msi *msi = &rp->msi; 94 344 u32 msi_data, msi_addr_lo, msi_addr_hi; 95 345 u32 intr_status, msi_status; 96 346 unsigned long shifted_status; ··· 117 365 shifted_status >>= PAB_INTX_START; 118 366 do { 119 367 for_each_set_bit(bit, &shifted_status, PCI_NUM_INTX) { 120 - virq = irq_find_mapping(pcie->intx_domain, 368 + virq = irq_find_mapping(rp->intx_domain, 121 369 bit + 1); 122 370 if (virq) 123 371 generic_handle_irq(virq); ··· 176 424 struct device *dev = &pcie->pdev->dev; 177 425 struct platform_device *pdev = pcie->pdev; 178 426 struct device_node *node = dev->of_node; 427 + struct mobiveil_root_port *rp = &pcie->rp; 179 428 struct resource *res; 180 429 181 430 /* map config resource */ 182 431 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 183 432 "config_axi_slave"); 184 - pcie->config_axi_slave_base = devm_pci_remap_cfg_resource(dev, res); 185 - if (IS_ERR(pcie->config_axi_slave_base)) 186 - return PTR_ERR(pcie->config_axi_slave_base); 187 - pcie->ob_io_res = res; 433 + rp->config_axi_slave_base = devm_pci_remap_cfg_resource(dev, res); 434 + if (IS_ERR(rp->config_axi_slave_base)) 435 + return PTR_ERR(rp->config_axi_slave_base); 436 + rp->ob_io_res = res; 188 437 189 438 /* map csr resource */ 190 439 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, ··· 
195 442 return PTR_ERR(pcie->csr_axi_slave_base); 196 443 pcie->pcie_reg_base = res->start; 197 444 198 - /* map MSI config resource */ 199 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "apb_csr"); 200 - pcie->apb_csr_base = devm_pci_remap_cfg_resource(dev, res); 201 - if (IS_ERR(pcie->apb_csr_base)) 202 - return PTR_ERR(pcie->apb_csr_base); 203 - 204 445 /* read the number of windows requested */ 205 446 if (of_property_read_u32(node, "apio-wins", &pcie->apio_wins)) 206 447 pcie->apio_wins = MAX_PIO_WINDOWS; ··· 202 455 if (of_property_read_u32(node, "ppio-wins", &pcie->ppio_wins)) 203 456 pcie->ppio_wins = MAX_PIO_WINDOWS; 204 457 205 - pcie->irq = platform_get_irq(pdev, 0); 206 - if (pcie->irq <= 0) { 207 - dev_err(dev, "failed to map IRQ: %d\n", pcie->irq); 208 - return -ENODEV; 209 - } 210 - 211 458 return 0; 212 - } 213 - 214 - static void program_ib_windows(struct mobiveil_pcie *pcie, int win_num, 215 - u64 cpu_addr, u64 pci_addr, u32 type, u64 size) 216 - { 217 - u32 value; 218 - u64 size64 = ~(size - 1); 219 - 220 - if (win_num >= pcie->ppio_wins) { 221 - dev_err(&pcie->pdev->dev, 222 - "ERROR: max inbound windows reached !\n"); 223 - return; 224 - } 225 - 226 - value = mobiveil_csr_readl(pcie, PAB_PEX_AMAP_CTRL(win_num)); 227 - value &= ~(AMAP_CTRL_TYPE_MASK << AMAP_CTRL_TYPE_SHIFT | WIN_SIZE_MASK); 228 - value |= type << AMAP_CTRL_TYPE_SHIFT | 1 << AMAP_CTRL_EN_SHIFT | 229 - (lower_32_bits(size64) & WIN_SIZE_MASK); 230 - mobiveil_csr_writel(pcie, value, PAB_PEX_AMAP_CTRL(win_num)); 231 - 232 - mobiveil_csr_writel(pcie, upper_32_bits(size64), 233 - PAB_EXT_PEX_AMAP_SIZEN(win_num)); 234 - 235 - mobiveil_csr_writel(pcie, lower_32_bits(cpu_addr), 236 - PAB_PEX_AMAP_AXI_WIN(win_num)); 237 - mobiveil_csr_writel(pcie, upper_32_bits(cpu_addr), 238 - PAB_EXT_PEX_AMAP_AXI_WIN(win_num)); 239 - 240 - mobiveil_csr_writel(pcie, lower_32_bits(pci_addr), 241 - PAB_PEX_AMAP_PEX_WIN_L(win_num)); 242 - mobiveil_csr_writel(pcie, upper_32_bits(pci_addr), 243 - 
PAB_PEX_AMAP_PEX_WIN_H(win_num)); 244 - 245 - pcie->ib_wins_configured++; 246 - } 247 - 248 - /* 249 - * routine to program the outbound windows 250 - */ 251 - static void program_ob_windows(struct mobiveil_pcie *pcie, int win_num, 252 - u64 cpu_addr, u64 pci_addr, u32 type, u64 size) 253 - { 254 - u32 value; 255 - u64 size64 = ~(size - 1); 256 - 257 - if (win_num >= pcie->apio_wins) { 258 - dev_err(&pcie->pdev->dev, 259 - "ERROR: max outbound windows reached !\n"); 260 - return; 261 - } 262 - 263 - /* 264 - * program Enable Bit to 1, Type Bit to (00) base 2, AXI Window Size Bit 265 - * to 4 KB in PAB_AXI_AMAP_CTRL register 266 - */ 267 - value = mobiveil_csr_readl(pcie, PAB_AXI_AMAP_CTRL(win_num)); 268 - value &= ~(WIN_TYPE_MASK << WIN_TYPE_SHIFT | WIN_SIZE_MASK); 269 - value |= 1 << WIN_ENABLE_SHIFT | type << WIN_TYPE_SHIFT | 270 - (lower_32_bits(size64) & WIN_SIZE_MASK); 271 - mobiveil_csr_writel(pcie, value, PAB_AXI_AMAP_CTRL(win_num)); 272 - 273 - mobiveil_csr_writel(pcie, upper_32_bits(size64), 274 - PAB_EXT_AXI_AMAP_SIZE(win_num)); 275 - 276 - /* 277 - * program AXI window base with appropriate value in 278 - * PAB_AXI_AMAP_AXI_WIN0 register 279 - */ 280 - mobiveil_csr_writel(pcie, 281 - lower_32_bits(cpu_addr) & (~AXI_WINDOW_ALIGN_MASK), 282 - PAB_AXI_AMAP_AXI_WIN(win_num)); 283 - mobiveil_csr_writel(pcie, upper_32_bits(cpu_addr), 284 - PAB_EXT_AXI_AMAP_AXI_WIN(win_num)); 285 - 286 - mobiveil_csr_writel(pcie, lower_32_bits(pci_addr), 287 - PAB_AXI_AMAP_PEX_WIN_L(win_num)); 288 - mobiveil_csr_writel(pcie, upper_32_bits(pci_addr), 289 - PAB_AXI_AMAP_PEX_WIN_H(win_num)); 290 - 291 - pcie->ob_wins_configured++; 292 - } 293 - 294 - static int mobiveil_bringup_link(struct mobiveil_pcie *pcie) 295 - { 296 - int retries; 297 - 298 - /* check if the link is up or not */ 299 - for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) { 300 - if (mobiveil_pcie_link_up(pcie)) 301 - return 0; 302 - 303 - usleep_range(LINK_WAIT_MIN, LINK_WAIT_MAX); 304 - } 305 - 306 
- dev_err(&pcie->pdev->dev, "link never came up\n"); 307 - 308 - return -ETIMEDOUT; 309 459 } 310 460 311 461 static void mobiveil_pcie_enable_msi(struct mobiveil_pcie *pcie) 312 462 { 313 463 phys_addr_t msg_addr = pcie->pcie_reg_base; 314 - struct mobiveil_msi *msi = &pcie->msi; 464 + struct mobiveil_msi *msi = &pcie->rp.msi; 315 465 316 - pcie->msi.num_of_vectors = PCI_NUM_MSI; 466 + msi->num_of_vectors = PCI_NUM_MSI; 317 467 msi->msi_pages_phys = (phys_addr_t)msg_addr; 318 468 319 469 writel_relaxed(lower_32_bits(msg_addr), ··· 221 577 writel_relaxed(1, pcie->apb_csr_base + MSI_ENABLE_OFFSET); 222 578 } 223 579 224 - static int mobiveil_host_init(struct mobiveil_pcie *pcie) 580 + int mobiveil_host_init(struct mobiveil_pcie *pcie, bool reinit) 225 581 { 226 - struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); 582 + struct mobiveil_root_port *rp = &pcie->rp; 583 + struct pci_host_bridge *bridge = rp->bridge; 227 584 u32 value, pab_ctrl, type; 228 585 struct resource_entry *win; 229 586 230 - /* setup bus numbers */ 231 - value = mobiveil_csr_readl(pcie, PCI_PRIMARY_BUS); 232 - value &= 0xff000000; 233 - value |= 0x00ff0100; 234 - mobiveil_csr_writel(pcie, value, PCI_PRIMARY_BUS); 587 + pcie->ib_wins_configured = 0; 588 + pcie->ob_wins_configured = 0; 589 + 590 + if (!reinit) { 591 + /* setup bus numbers */ 592 + value = mobiveil_csr_readl(pcie, PCI_PRIMARY_BUS); 593 + value &= 0xff000000; 594 + value |= 0x00ff0100; 595 + mobiveil_csr_writel(pcie, value, PCI_PRIMARY_BUS); 596 + } 235 597 236 598 /* 237 599 * program Bus Master Enable Bit in Command Register in PAB Config ··· 254 604 pab_ctrl = mobiveil_csr_readl(pcie, PAB_CTRL); 255 605 pab_ctrl |= (1 << AMBA_PIO_ENABLE_SHIFT) | (1 << PEX_PIO_ENABLE_SHIFT); 256 606 mobiveil_csr_writel(pcie, pab_ctrl, PAB_CTRL); 257 - 258 - mobiveil_csr_writel(pcie, (PAB_INTP_INTX_MASK | PAB_INTP_MSI_MASK), 259 - PAB_INTP_AMBA_MISC_ENB); 260 607 261 608 /* 262 609 * program PIO Enable Bit to 1 and Config Window 
Enable Bit to 1 in ··· 276 629 */ 277 630 278 631 /* config outbound translation window */ 279 - program_ob_windows(pcie, WIN_NUM_0, pcie->ob_io_res->start, 0, 280 - CFG_WINDOW_TYPE, resource_size(pcie->ob_io_res)); 632 + program_ob_windows(pcie, WIN_NUM_0, rp->ob_io_res->start, 0, 633 + CFG_WINDOW_TYPE, resource_size(rp->ob_io_res)); 281 634 282 635 /* memory inbound translation window */ 283 636 program_ib_windows(pcie, WIN_NUM_0, 0, 0, MEM_WINDOW_TYPE, IB_WIN_SIZE); ··· 304 657 value |= (PCI_CLASS_BRIDGE_PCI << 16); 305 658 mobiveil_csr_writel(pcie, value, PAB_INTP_AXI_PIO_CLASS); 306 659 307 - /* setup MSI hardware registers */ 308 - mobiveil_pcie_enable_msi(pcie); 309 - 310 660 return 0; 311 661 } 312 662 ··· 311 667 { 312 668 struct irq_desc *desc = irq_to_desc(data->irq); 313 669 struct mobiveil_pcie *pcie; 670 + struct mobiveil_root_port *rp; 314 671 unsigned long flags; 315 672 u32 mask, shifted_val; 316 673 317 674 pcie = irq_desc_get_chip_data(desc); 675 + rp = &pcie->rp; 318 676 mask = 1 << ((data->hwirq + PAB_INTX_START) - 1); 319 - raw_spin_lock_irqsave(&pcie->intx_mask_lock, flags); 677 + raw_spin_lock_irqsave(&rp->intx_mask_lock, flags); 320 678 shifted_val = mobiveil_csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB); 321 679 shifted_val &= ~mask; 322 680 mobiveil_csr_writel(pcie, shifted_val, PAB_INTP_AMBA_MISC_ENB); 323 - raw_spin_unlock_irqrestore(&pcie->intx_mask_lock, flags); 681 + raw_spin_unlock_irqrestore(&rp->intx_mask_lock, flags); 324 682 } 325 683 326 684 static void mobiveil_unmask_intx_irq(struct irq_data *data) 327 685 { 328 686 struct irq_desc *desc = irq_to_desc(data->irq); 329 687 struct mobiveil_pcie *pcie; 688 + struct mobiveil_root_port *rp; 330 689 unsigned long flags; 331 690 u32 shifted_val, mask; 332 691 333 692 pcie = irq_desc_get_chip_data(desc); 693 + rp = &pcie->rp; 334 694 mask = 1 << ((data->hwirq + PAB_INTX_START) - 1); 335 - raw_spin_lock_irqsave(&pcie->intx_mask_lock, flags); 695 + raw_spin_lock_irqsave(&rp->intx_mask_lock, 
flags); 336 696 shifted_val = mobiveil_csr_readl(pcie, PAB_INTP_AMBA_MISC_ENB); 337 697 shifted_val |= mask; 338 698 mobiveil_csr_writel(pcie, shifted_val, PAB_INTP_AMBA_MISC_ENB); 339 - raw_spin_unlock_irqrestore(&pcie->intx_mask_lock, flags); 699 + raw_spin_unlock_irqrestore(&rp->intx_mask_lock, flags); 340 700 } 341 701 342 702 static struct irq_chip intx_irq_chip = { ··· 408 760 unsigned int nr_irqs, void *args) 409 761 { 410 762 struct mobiveil_pcie *pcie = domain->host_data; 411 - struct mobiveil_msi *msi = &pcie->msi; 763 + struct mobiveil_msi *msi = &pcie->rp.msi; 412 764 unsigned long bit; 413 765 414 766 WARN_ON(nr_irqs != 1); ··· 435 787 { 436 788 struct irq_data *d = irq_domain_get_irq_data(domain, virq); 437 789 struct mobiveil_pcie *pcie = irq_data_get_irq_chip_data(d); 438 - struct mobiveil_msi *msi = &pcie->msi; 790 + struct mobiveil_msi *msi = &pcie->rp.msi; 439 791 440 792 mutex_lock(&msi->lock); 441 793 ··· 456 808 { 457 809 struct device *dev = &pcie->pdev->dev; 458 810 struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node); 459 - struct mobiveil_msi *msi = &pcie->msi; 811 + struct mobiveil_msi *msi = &pcie->rp.msi; 460 812 461 - mutex_init(&pcie->msi.lock); 813 + mutex_init(&msi->lock); 462 814 msi->dev_domain = irq_domain_add_linear(NULL, msi->num_of_vectors, 463 815 &msi_domain_ops, pcie); 464 816 if (!msi->dev_domain) { ··· 482 834 { 483 835 struct device *dev = &pcie->pdev->dev; 484 836 struct device_node *node = dev->of_node; 837 + struct mobiveil_root_port *rp = &pcie->rp; 485 838 int ret; 486 839 487 840 /* setup INTx */ 488 - pcie->intx_domain = irq_domain_add_linear(node, PCI_NUM_INTX, 489 - &intx_domain_ops, pcie); 841 + rp->intx_domain = irq_domain_add_linear(node, PCI_NUM_INTX, 842 + &intx_domain_ops, pcie); 490 843 491 - if (!pcie->intx_domain) { 844 + if (!rp->intx_domain) { 492 845 dev_err(dev, "Failed to get a INTx IRQ domain\n"); 493 846 return -ENOMEM; 494 847 } 495 848 496 - raw_spin_lock_init(&pcie->intx_mask_lock); 
+	raw_spin_lock_init(&rp->intx_mask_lock);

 	/* setup MSI */
 	ret = mobiveil_allocate_msi_domains(pcie);
···
 	return 0;
 }

-static int mobiveil_pcie_probe(struct platform_device *pdev)
+static int mobiveil_pcie_integrated_interrupt_init(struct mobiveil_pcie *pcie)
 {
-	struct mobiveil_pcie *pcie;
-	struct pci_bus *bus;
-	struct pci_bus *child;
-	struct pci_host_bridge *bridge;
+	struct platform_device *pdev = pcie->pdev;
 	struct device *dev = &pdev->dev;
+	struct mobiveil_root_port *rp = &pcie->rp;
+	struct resource *res;
 	int ret;

-	/* allocate the PCIe port */
-	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
-	if (!bridge)
-		return -ENOMEM;
+	/* map MSI config resource */
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "apb_csr");
+	pcie->apb_csr_base = devm_pci_remap_cfg_resource(dev, res);
+	if (IS_ERR(pcie->apb_csr_base))
+		return PTR_ERR(pcie->apb_csr_base);

-	pcie = pci_host_bridge_priv(bridge);
+	/* setup MSI hardware registers */
+	mobiveil_pcie_enable_msi(pcie);

-	pcie->pdev = pdev;
+	rp->irq = platform_get_irq(pdev, 0);
+	if (rp->irq <= 0) {
+		dev_err(dev, "failed to map IRQ: %d\n", rp->irq);
+		return -ENODEV;
+	}
+
+	/* initialize the IRQ domains */
+	ret = mobiveil_pcie_init_irq_domain(pcie);
+	if (ret) {
+		dev_err(dev, "Failed creating IRQ Domain\n");
+		return ret;
+	}
+
+	irq_set_chained_handler_and_data(rp->irq, mobiveil_pcie_isr, pcie);
+
+	/* Enable interrupts */
+	mobiveil_csr_writel(pcie, (PAB_INTP_INTX_MASK | PAB_INTP_MSI_MASK),
+			    PAB_INTP_AMBA_MISC_ENB);
+
+	return 0;
+}
+
+static int mobiveil_pcie_interrupt_init(struct mobiveil_pcie *pcie)
+{
+	struct mobiveil_root_port *rp = &pcie->rp;
+
+	if (rp->ops->interrupt_init)
+		return rp->ops->interrupt_init(pcie);
+
+	return mobiveil_pcie_integrated_interrupt_init(pcie);
+}
+
+static bool mobiveil_pcie_is_bridge(struct mobiveil_pcie *pcie)
+{
+	u32 header_type;
+
+	header_type = mobiveil_csr_readb(pcie, PCI_HEADER_TYPE);
+	header_type &= 0x7f;
+
+	return header_type == PCI_HEADER_TYPE_BRIDGE;
+}
+
+int mobiveil_pcie_host_probe(struct mobiveil_pcie *pcie)
+{
+	struct mobiveil_root_port *rp = &pcie->rp;
+	struct pci_host_bridge *bridge = rp->bridge;
+	struct device *dev = &pcie->pdev->dev;
+	struct pci_bus *bus;
+	struct pci_bus *child;
+	int ret;

 	ret = mobiveil_pcie_parse_dt(pcie);
 	if (ret) {
 		dev_err(dev, "Parsing DT failed, ret: %x\n", ret);
 		return ret;
 	}
+
+	if (!mobiveil_pcie_is_bridge(pcie))
+		return -ENODEV;

 	/* parse the host bridge base addresses from the device tree file */
 	ret = pci_parse_request_of_pci_ranges(dev, &bridge->windows,
···
 	 * configure all inbound and outbound windows and prepare the RC for
 	 * config access
 	 */
-	ret = mobiveil_host_init(pcie);
+	ret = mobiveil_host_init(pcie, false);
 	if (ret) {
 		dev_err(dev, "Failed to initialize host\n");
 		return ret;
 	}

-	/* initialize the IRQ domains */
-	ret = mobiveil_pcie_init_irq_domain(pcie);
+	ret = mobiveil_pcie_interrupt_init(pcie);
 	if (ret) {
-		dev_err(dev, "Failed creating IRQ Domain\n");
+		dev_err(dev, "Interrupt init failed\n");
 		return ret;
 	}
-
-	irq_set_chained_handler_and_data(pcie->irq, mobiveil_pcie_isr, pcie);

 	/* Initialize bridge */
 	bridge->dev.parent = dev;
 	bridge->sysdata = pcie;
-	bridge->busnr = pcie->root_bus_nr;
+	bridge->busnr = rp->root_bus_nr;
 	bridge->ops = &mobiveil_pcie_ops;
 	bridge->map_irq = of_irq_parse_and_map_pci;
 	bridge->swizzle_irq = pci_common_swizzle;
···
 	return 0;
 }
-
-static const struct of_device_id mobiveil_pcie_of_match[] = {
-	{.compatible = "mbvl,gpex40-pcie",},
-	{},
-};
-
-MODULE_DEVICE_TABLE(of, mobiveil_pcie_of_match);
-
-static struct platform_driver mobiveil_pcie_driver = {
-	.probe = mobiveil_pcie_probe,
-	.driver = {
-		.name = "mobiveil-pcie",
-		.of_match_table = mobiveil_pcie_of_match,
-		.suppress_bind_attrs = true,
-	},
-};
-
-builtin_platform_driver(mobiveil_pcie_driver);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("Mobiveil PCIe host controller driver");
-MODULE_AUTHOR("Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>");
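The new `mobiveil_pcie_is_bridge()` helper above gates probing on the configuration-space Header Type field, masking off bit 7 (the multi-function flag) before comparing. A minimal user-space sketch of the same check (the `is_bridge` helper is restated here outside the kernel, purely for illustration; only the `0x7f` mask and the `0x01` bridge value come from the PCI spec):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PCI_HEADER_TYPE_BRIDGE 0x01	/* Type 1 header, per the PCI spec */

/* Mask off bit 7 (multi-function flag) before comparing the header type. */
static bool is_bridge(uint32_t header_type)
{
	return (header_type & 0x7f) == PCI_HEADER_TYPE_BRIDGE;
}
```

The mask matters because a multi-function bridge reports `0x81`, which must still be treated as a bridge.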
+359 -49
drivers/pci/endpoint/functions/pci-epf-test.c
···
 #include <linux/crc32.h>
 #include <linux/delay.h>
+#include <linux/dmaengine.h>
 #include <linux/io.h>
 #include <linux/module.h>
 #include <linux/slab.h>
···
 #define STATUS_SRC_ADDR_INVALID		BIT(7)
 #define STATUS_DST_ADDR_INVALID		BIT(8)

+#define FLAG_USE_DMA			BIT(0)
+
 #define TIMER_RESOLUTION		1

 static struct workqueue_struct *kpcitest_workqueue;
···
 	void	*reg[PCI_STD_NUM_BARS];
 	struct pci_epf		*epf;
 	enum pci_barno		test_reg_bar;
+	size_t			msix_table_offset;
 	struct delayed_work	cmd_handler;
+	struct dma_chan		*dma_chan;
+	struct completion	transfer_complete;
+	bool			dma_supported;
 	const struct pci_epc_features *epc_features;
 };
···
 	u32	checksum;
 	u32	irq_type;
 	u32	irq_number;
+	u32	flags;
 } __packed;

 static struct pci_epf_header test_header = {
···
 static size_t bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 };

+static void pci_epf_test_dma_callback(void *param)
+{
+	struct pci_epf_test *epf_test = param;
+
+	complete(&epf_test->transfer_complete);
+}
+
+/**
+ * pci_epf_test_data_transfer() - Function that uses dmaengine API to transfer
+ *				  data between PCIe EP and remote PCIe RC
+ * @epf_test: the EPF test device that performs the data transfer operation
+ * @dma_dst: The destination address of the data transfer. It can be a physical
+ *	     address given by pci_epc_mem_alloc_addr or DMA mapping APIs.
+ * @dma_src: The source address of the data transfer. It can be a physical
+ *	     address given by pci_epc_mem_alloc_addr or DMA mapping APIs.
+ * @len: The size of the data transfer
+ *
+ * Function that uses dmaengine API to transfer data between PCIe EP and remote
+ * PCIe RC. The source and destination address can be a physical address given
+ * by pci_epc_mem_alloc_addr or the one obtained using DMA mapping APIs.
+ *
+ * The function returns '0' on success and negative value on failure.
+ */
+static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test,
+				      dma_addr_t dma_dst, dma_addr_t dma_src,
+				      size_t len)
+{
+	enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
+	struct dma_chan *chan = epf_test->dma_chan;
+	struct pci_epf *epf = epf_test->epf;
+	struct dma_async_tx_descriptor *tx;
+	struct device *dev = &epf->dev;
+	dma_cookie_t cookie;
+	int ret;
+
+	if (IS_ERR_OR_NULL(chan)) {
+		dev_err(dev, "Invalid DMA memcpy channel\n");
+		return -EINVAL;
+	}
+
+	tx = dmaengine_prep_dma_memcpy(chan, dma_dst, dma_src, len, flags);
+	if (!tx) {
+		dev_err(dev, "Failed to prepare DMA memcpy\n");
+		return -EIO;
+	}
+
+	tx->callback = pci_epf_test_dma_callback;
+	tx->callback_param = epf_test;
+	cookie = tx->tx_submit(tx);
+	reinit_completion(&epf_test->transfer_complete);
+
+	ret = dma_submit_error(cookie);
+	if (ret) {
+		dev_err(dev, "Failed to do DMA tx_submit %d\n", cookie);
+		return -EIO;
+	}
+
+	dma_async_issue_pending(chan);
+	ret = wait_for_completion_interruptible(&epf_test->transfer_complete);
+	if (ret < 0) {
+		dmaengine_terminate_sync(chan);
+		dev_err(dev, "DMA wait_for_completion_timeout\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+/**
+ * pci_epf_test_init_dma_chan() - Function to initialize EPF test DMA channel
+ * @epf_test: the EPF test device that performs data transfer operation
+ *
+ * Function to initialize EPF test DMA channel.
+ */
+static int pci_epf_test_init_dma_chan(struct pci_epf_test *epf_test)
+{
+	struct pci_epf *epf = epf_test->epf;
+	struct device *dev = &epf->dev;
+	struct dma_chan *dma_chan;
+	dma_cap_mask_t mask;
+	int ret;
+
+	dma_cap_zero(mask);
+	dma_cap_set(DMA_MEMCPY, mask);
+
+	dma_chan = dma_request_chan_by_mask(&mask);
+	if (IS_ERR(dma_chan)) {
+		ret = PTR_ERR(dma_chan);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "Failed to get DMA channel\n");
+		return ret;
+	}
+	init_completion(&epf_test->transfer_complete);
+
+	epf_test->dma_chan = dma_chan;
+
+	return 0;
+}
+
+/**
+ * pci_epf_test_clean_dma_chan() - Function to cleanup EPF test DMA channel
+ * @epf: the EPF test device that performs data transfer operation
+ *
+ * Helper to cleanup EPF test DMA channel.
+ */
+static void pci_epf_test_clean_dma_chan(struct pci_epf_test *epf_test)
+{
+	dma_release_channel(epf_test->dma_chan);
+	epf_test->dma_chan = NULL;
+}
+
+static void pci_epf_test_print_rate(const char *ops, u64 size,
+				    struct timespec64 *start,
+				    struct timespec64 *end, bool dma)
+{
+	struct timespec64 ts;
+	u64 rate, ns;
+
+	ts = timespec64_sub(*end, *start);
+
+	/* convert both size (stored in 'rate') and time in terms of 'ns' */
+	ns = timespec64_to_ns(&ts);
+	rate = size * NSEC_PER_SEC;
+
+	/* Divide both size (stored in 'rate') and ns by a common factor */
+	while (ns > UINT_MAX) {
+		rate >>= 1;
+		ns >>= 1;
+	}
+
+	if (!ns)
+		return;
+
+	/* calculate the rate */
+	do_div(rate, (uint32_t)ns);
+
+	pr_info("\n%s => Size: %llu bytes\t DMA: %s\t Time: %llu.%09u seconds\t"
+		"Rate: %llu KB/s\n", ops, size, dma ? "YES" : "NO",
+		(u64)ts.tv_sec, (u32)ts.tv_nsec, rate / 1024);
+}
+
 static int pci_epf_test_copy(struct pci_epf_test *epf_test)
 {
 	int ret;
+	bool use_dma;
 	void __iomem *src_addr;
 	void __iomem *dst_addr;
 	phys_addr_t src_phys_addr;
 	phys_addr_t dst_phys_addr;
+	struct timespec64 start, end;
 	struct pci_epf *epf = epf_test->epf;
 	struct device *dev = &epf->dev;
 	struct pci_epc *epc = epf->epc;
···
 		goto err_dst_addr;
 	}

-	memcpy(dst_addr, src_addr, reg->size);
+	ktime_get_ts64(&start);
+	use_dma = !!(reg->flags & FLAG_USE_DMA);
+	if (use_dma) {
+		if (!epf_test->dma_supported) {
+			dev_err(dev, "Cannot transfer data using DMA\n");
+			ret = -EINVAL;
+			goto err_map_addr;
+		}

+		ret = pci_epf_test_data_transfer(epf_test, dst_phys_addr,
+						 src_phys_addr, reg->size);
+		if (ret)
+			dev_err(dev, "Data transfer failed\n");
+	} else {
+		memcpy(dst_addr, src_addr, reg->size);
+	}
+	ktime_get_ts64(&end);
+	pci_epf_test_print_rate("COPY", reg->size, &start, &end, use_dma);
+
+err_map_addr:
 	pci_epc_unmap_addr(epc, epf->func_no, dst_phys_addr);

 err_dst_addr:
···
 	void __iomem *src_addr;
 	void *buf;
 	u32 crc32;
+	bool use_dma;
 	phys_addr_t phys_addr;
+	phys_addr_t dst_phys_addr;
+	struct timespec64 start, end;
 	struct pci_epf *epf = epf_test->epf;
 	struct device *dev = &epf->dev;
 	struct pci_epc *epc = epf->epc;
+	struct device *dma_dev = epf->epc->dev.parent;
 	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
 	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
···
 		goto err_map_addr;
 	}

-	memcpy_fromio(buf, src_addr, reg->size);
+	use_dma = !!(reg->flags & FLAG_USE_DMA);
+	if (use_dma) {
+		if (!epf_test->dma_supported) {
+			dev_err(dev, "Cannot transfer data using DMA\n");
+			ret = -EINVAL;
+			goto err_dma_map;
+		}
+
+		dst_phys_addr = dma_map_single(dma_dev, buf, reg->size,
+					       DMA_FROM_DEVICE);
+		if (dma_mapping_error(dma_dev, dst_phys_addr)) {
+			dev_err(dev, "Failed to map destination buffer addr\n");
+			ret = -ENOMEM;
+			goto err_dma_map;
+		}
+
+		ktime_get_ts64(&start);
+		ret = pci_epf_test_data_transfer(epf_test, dst_phys_addr,
+						 phys_addr, reg->size);
+		if (ret)
+			dev_err(dev, "Data transfer failed\n");
+		ktime_get_ts64(&end);
+
+		dma_unmap_single(dma_dev, dst_phys_addr, reg->size,
+				 DMA_FROM_DEVICE);
+	} else {
+		ktime_get_ts64(&start);
+		memcpy_fromio(buf, src_addr, reg->size);
+		ktime_get_ts64(&end);
+	}
+
+	pci_epf_test_print_rate("READ", reg->size, &start, &end, use_dma);

 	crc32 = crc32_le(~0, buf, reg->size);
 	if (crc32 != reg->checksum)
 		ret = -EIO;

+err_dma_map:
 	kfree(buf);

 err_map_addr:
···
 	int ret;
 	void __iomem *dst_addr;
 	void *buf;
+	bool use_dma;
 	phys_addr_t phys_addr;
+	phys_addr_t src_phys_addr;
+	struct timespec64 start, end;
 	struct pci_epf *epf = epf_test->epf;
 	struct device *dev = &epf->dev;
 	struct pci_epc *epc = epf->epc;
+	struct device *dma_dev = epf->epc->dev.parent;
 	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
 	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
···
 	get_random_bytes(buf, reg->size);
 	reg->checksum = crc32_le(~0, buf, reg->size);

-	memcpy_toio(dst_addr, buf, reg->size);
+	use_dma = !!(reg->flags & FLAG_USE_DMA);
+	if (use_dma) {
+		if (!epf_test->dma_supported) {
+			dev_err(dev, "Cannot transfer data using DMA\n");
+			ret = -EINVAL;
+			goto err_map_addr;
+		}
+
+		src_phys_addr = dma_map_single(dma_dev, buf, reg->size,
+					       DMA_TO_DEVICE);
+		if (dma_mapping_error(dma_dev, src_phys_addr)) {
+			dev_err(dev, "Failed to map source buffer addr\n");
+			ret = -ENOMEM;
+			goto err_dma_map;
+		}
+
+		ktime_get_ts64(&start);
+		ret = pci_epf_test_data_transfer(epf_test, phys_addr,
+						 src_phys_addr, reg->size);
+		if (ret)
+			dev_err(dev, "Data transfer failed\n");
+		ktime_get_ts64(&end);
+
+		dma_unmap_single(dma_dev, src_phys_addr, reg->size,
+				 DMA_TO_DEVICE);
+	} else {
+		ktime_get_ts64(&start);
+		memcpy_toio(dst_addr, buf, reg->size);
+		ktime_get_ts64(&end);
+	}
+
+	pci_epf_test_print_rate("WRITE", reg->size, &start, &end, use_dma);

 	/*
 	 * wait 1ms inorder for the write to complete. Without this delay L3
···
 	 */
 	usleep_range(1000, 2000);

+err_dma_map:
 	kfree(buf);

 err_map_addr:
···
 			   msecs_to_jiffies(1));
 }

-static void pci_epf_test_linkup(struct pci_epf *epf)
-{
-	struct pci_epf_test *epf_test = epf_get_drvdata(epf);
-
-	queue_delayed_work(kpcitest_workqueue, &epf_test->cmd_handler,
-			   msecs_to_jiffies(1));
-}
-
 static void pci_epf_test_unbind(struct pci_epf *epf)
 {
 	struct pci_epf_test *epf_test = epf_get_drvdata(epf);
···
 	int bar;

 	cancel_delayed_work(&epf_test->cmd_handler);
+	pci_epf_test_clean_dma_chan(epf_test);
 	pci_epc_stop(epc);
 	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
 		epf_bar = &epf->bar[bar];
···
 	return 0;
 }

+static int pci_epf_test_core_init(struct pci_epf *epf)
+{
+	struct pci_epf_test *epf_test = epf_get_drvdata(epf);
+	struct pci_epf_header *header = epf->header;
+	const struct pci_epc_features *epc_features;
+	struct pci_epc *epc = epf->epc;
+	struct device *dev = &epf->dev;
+	bool msix_capable = false;
+	bool msi_capable = true;
+	int ret;
+
+	epc_features = pci_epc_get_features(epc, epf->func_no);
+	if (epc_features) {
+		msix_capable = epc_features->msix_capable;
+		msi_capable = epc_features->msi_capable;
+	}
+
+	ret = pci_epc_write_header(epc, epf->func_no, header);
+	if (ret) {
+		dev_err(dev, "Configuration header write failed\n");
+		return ret;
+	}
+
+	ret = pci_epf_test_set_bar(epf);
+	if (ret)
+		return ret;
+
+	if (msi_capable) {
+		ret = pci_epc_set_msi(epc, epf->func_no, epf->msi_interrupts);
+		if (ret) {
+			dev_err(dev, "MSI configuration failed\n");
+			return ret;
+		}
+	}
+
+	if (msix_capable) {
+		ret = pci_epc_set_msix(epc, epf->func_no, epf->msix_interrupts,
+				       epf_test->test_reg_bar,
+				       epf_test->msix_table_offset);
+		if (ret) {
+			dev_err(dev, "MSI-X configuration failed\n");
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int pci_epf_test_notifier(struct notifier_block *nb, unsigned long val,
+				 void *data)
+{
+	struct pci_epf *epf = container_of(nb, struct pci_epf, nb);
+	struct pci_epf_test *epf_test = epf_get_drvdata(epf);
+	int ret;
+
+	switch (val) {
+	case CORE_INIT:
+		ret = pci_epf_test_core_init(epf);
+		if (ret)
+			return NOTIFY_BAD;
+		break;
+
+	case LINK_UP:
+		queue_delayed_work(kpcitest_workqueue, &epf_test->cmd_handler,
+				   msecs_to_jiffies(1));
+		break;
+
+	default:
+		dev_err(&epf->dev, "Invalid EPF test notifier event\n");
+		return NOTIFY_BAD;
+	}
+
+	return NOTIFY_OK;
+}
+
 static int pci_epf_test_alloc_space(struct pci_epf *epf)
 {
 	struct pci_epf_test *epf_test = epf_get_drvdata(epf);
 	struct device *dev = &epf->dev;
 	struct pci_epf_bar *epf_bar;
+	size_t msix_table_size = 0;
+	size_t test_reg_bar_size;
+	size_t pba_size = 0;
+	bool msix_capable;
 	void *base;
 	int bar, add;
 	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
···
 	epc_features = epf_test->epc_features;

-	if (epc_features->bar_fixed_size[test_reg_bar])
-		test_reg_size = bar_size[test_reg_bar];
-	else
-		test_reg_size = sizeof(struct pci_epf_test_reg);
+	test_reg_bar_size = ALIGN(sizeof(struct pci_epf_test_reg), 128);

-	base = pci_epf_alloc_space(epf, test_reg_size,
-				   test_reg_bar, epc_features->align);
+	msix_capable = epc_features->msix_capable;
+	if (msix_capable) {
+		msix_table_size = PCI_MSIX_ENTRY_SIZE * epf->msix_interrupts;
+		epf_test->msix_table_offset = test_reg_bar_size;
+		/* Align to QWORD or 8 Bytes */
+		pba_size = ALIGN(DIV_ROUND_UP(epf->msix_interrupts, 8), 8);
+	}
+	test_reg_size = test_reg_bar_size + msix_table_size + pba_size;
+
+	if (epc_features->bar_fixed_size[test_reg_bar]) {
+		if (test_reg_size > bar_size[test_reg_bar])
+			return -ENOMEM;
+		test_reg_size = bar_size[test_reg_bar];
+	}
+
+	base = pci_epf_alloc_space(epf, test_reg_size, test_reg_bar,
+				   epc_features->align);
 	if (!base) {
 		dev_err(dev, "Failed to allocated register space\n");
 		return -ENOMEM;
···
 {
 	int ret;
 	struct pci_epf_test *epf_test = epf_get_drvdata(epf);
-	struct pci_epf_header *header = epf->header;
 	const struct pci_epc_features *epc_features;
 	enum pci_barno test_reg_bar = BAR_0;
 	struct pci_epc *epc = epf->epc;
-	struct device *dev = &epf->dev;
 	bool linkup_notifier = false;
-	bool msix_capable = false;
-	bool msi_capable = true;
+	bool core_init_notifier = false;

 	if (WARN_ON_ONCE(!epc))
 		return -EINVAL;
···
 	epc_features = pci_epc_get_features(epc, epf->func_no);
 	if (epc_features) {
 		linkup_notifier = epc_features->linkup_notifier;
-		msix_capable = epc_features->msix_capable;
-		msi_capable = epc_features->msi_capable;
+		core_init_notifier = epc_features->core_init_notifier;
 		test_reg_bar = pci_epc_get_first_free_bar(epc_features);
 		pci_epf_configure_bar(epf, epc_features);
 	}
···
 	epf_test->test_reg_bar = test_reg_bar;
 	epf_test->epc_features = epc_features;

-	ret = pci_epc_write_header(epc, epf->func_no, header);
-	if (ret) {
-		dev_err(dev, "Configuration header write failed\n");
-		return ret;
-	}
-
 	ret = pci_epf_test_alloc_space(epf);
 	if (ret)
 		return ret;

-	ret = pci_epf_test_set_bar(epf);
+	if (!core_init_notifier) {
+		ret = pci_epf_test_core_init(epf);
+		if (ret)
+			return ret;
+	}
+
+	epf_test->dma_supported = true;
+
+	ret = pci_epf_test_init_dma_chan(epf_test);
 	if (ret)
-		return ret;
+		epf_test->dma_supported = false;

-	if (msi_capable) {
-		ret = pci_epc_set_msi(epc, epf->func_no, epf->msi_interrupts);
-		if (ret) {
-			dev_err(dev, "MSI configuration failed\n");
-			return ret;
-		}
-	}
-
-	if (msix_capable) {
-		ret = pci_epc_set_msix(epc, epf->func_no, epf->msix_interrupts);
-		if (ret) {
-			dev_err(dev, "MSI-X configuration failed\n");
-			return ret;
-		}
-	}
-
-	if (!linkup_notifier)
+	if (linkup_notifier) {
+		epf->nb.notifier_call = pci_epf_test_notifier;
+		pci_epc_register_notifier(epc, &epf->nb);
+	} else {
 		queue_work(kpcitest_workqueue, &epf_test->cmd_handler.work);
+	}

 	return 0;
 }
···
 static struct pci_epf_ops ops = {
 	.unbind	= pci_epf_test_unbind,
 	.bind	= pci_epf_test_bind,
-	.linkup = pci_epf_test_linkup,
 };

 static struct pci_epf_driver test_driver = {
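The throughput scaling in the new `pci_epf_test_print_rate()` above avoids a 64-by-64-bit division (which `do_div()` cannot do) by halving both the byte count and the elapsed nanoseconds together until the divisor fits in 32 bits. A stand-alone user-space rendering of just that arithmetic (the `rate_bytes_per_sec` name is ours, for illustration only):

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Shift size*NSEC_PER_SEC and elapsed ns down together until ns fits
 * in 32 bits, then divide to get bytes per second, as the kernel
 * function does before printing KB/s. */
static uint64_t rate_bytes_per_sec(uint64_t size, uint64_t ns)
{
	uint64_t rate = size * NSEC_PER_SEC;

	while (ns > UINT_MAX) {
		rate >>= 1;
		ns >>= 1;
	}

	if (!ns)
		return 0;

	return rate / (uint32_t)ns;
}
```

Because numerator and denominator are shifted by the same amount, the quotient is unchanged (up to rounding), so the reported rate stays accurate even for multi-second transfers.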
+6 -22
drivers/pci/endpoint/pci-ep-cfs.c
···
 	struct config_group group;
 	struct pci_epc *epc;
 	bool start;
-	unsigned long function_num_map;
 };

 static inline struct pci_epf_group *to_pci_epf_group(struct config_item *item)
···
 	if (!start) {
 		pci_epc_stop(epc);
+		epc_group->start = 0;
 		return len;
 	}
···
 			    struct config_item *epf_item)
 {
 	int ret;
-	u32 func_no = 0;
 	struct pci_epf_group *epf_group = to_pci_epf_group(epf_item);
 	struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
 	struct pci_epc *epc = epc_group->epc;
 	struct pci_epf *epf = epf_group->epf;

-	func_no = find_first_zero_bit(&epc_group->function_num_map,
-				      BITS_PER_LONG);
-	if (func_no >= BITS_PER_LONG)
-		return -EINVAL;
-
-	set_bit(func_no, &epc_group->function_num_map);
-	epf->func_no = func_no;
-
 	ret = pci_epc_add_epf(epc, epf);
 	if (ret)
-		goto err_add_epf;
+		return ret;

 	ret = pci_epf_bind(epf);
-	if (ret)
-		goto err_epf_bind;
+	if (ret) {
+		pci_epc_remove_epf(epc, epf);
+		return ret;
+	}

 	return 0;
-
-err_epf_bind:
-	pci_epc_remove_epf(epc, epf);
-
-err_add_epf:
-	clear_bit(func_no, &epc_group->function_num_map);
-
-	return ret;
 }

 static void pci_epc_epf_unlink(struct config_item *epc_item,
···
 	epc = epc_group->epc;
 	epf = epf_group->epf;
-	clear_bit(epf->func_no, &epc_group->function_num_map);
 	pci_epf_unbind(epf);
 	pci_epc_remove_epf(epc, epf);
 }
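This hunk removes the function-number bitmap from the configfs layer; the allocation policy itself (lowest clear bit, bounded by the controller's `max_functions`) now lives in the EPC core. A toy, side-effect-free model of that policy (the `first_free_func_no` helper is hypothetical, written here only to show the first-fit rule; the kernel uses `find_first_zero_bit()`/`set_bit()` under the EPC mutex):

```c
#include <assert.h>
#include <limits.h>

#define MY_BITS_PER_LONG ((int)(sizeof(unsigned long) * CHAR_BIT))

/* Return the lowest clear bit of 'map' as the next function number,
 * or -1 when the map is full or the bit exceeds max_functions - 1
 * (where the kernel would return -EINVAL). */
static int first_free_func_no(unsigned long map, unsigned int max_functions)
{
	int bit;

	for (bit = 0; bit < MY_BITS_PER_LONG; bit++)
		if (!(map & (1UL << bit)))
			break;

	if (bit >= MY_BITS_PER_LONG || (unsigned int)bit > max_functions - 1)
		return -1;

	return bit;
}
```

Centralizing this in `pci_epc_add_epf()` lets other EPF creation paths (not just configfs) get a valid function number.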
+75 -62
drivers/pci/endpoint/pci-epc-core.c
···
 					    u8 func_no)
 {
 	const struct pci_epc_features *epc_features;
-	unsigned long flags;

 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return NULL;
···
 	if (!epc->ops->get_features)
 		return NULL;

-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	epc_features = epc->ops->get_features(epc, func_no);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);

 	return epc_features;
 }
···
 */
 void pci_epc_stop(struct pci_epc *epc)
 {
-	unsigned long flags;
-
 	if (IS_ERR(epc) || !epc->ops->stop)
 		return;

-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	epc->ops->stop(epc);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 }
 EXPORT_SYMBOL_GPL(pci_epc_stop);
···
 int pci_epc_start(struct pci_epc *epc)
 {
 	int ret;
-	unsigned long flags;

 	if (IS_ERR(epc))
 		return -EINVAL;
···
 	if (!epc->ops->start)
 		return 0;

-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	ret = epc->ops->start(epc);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);

 	return ret;
 }
···
 		      enum pci_epc_irq_type type, u16 interrupt_num)
 {
 	int ret;
-	unsigned long flags;

 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return -EINVAL;
···
 	if (!epc->ops->raise_irq)
 		return 0;

-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	ret = epc->ops->raise_irq(epc, func_no, type, interrupt_num);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);

 	return ret;
 }
···
 int pci_epc_get_msi(struct pci_epc *epc, u8 func_no)
 {
 	int interrupt;
-	unsigned long flags;

 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return 0;
···
 	if (!epc->ops->get_msi)
 		return 0;

-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	interrupt = epc->ops->get_msi(epc, func_no);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);

 	if (interrupt < 0)
 		return 0;
···
 {
 	int ret;
 	u8 encode_int;
-	unsigned long flags;

 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
 	    interrupts > 32)
···
 	encode_int = order_base_2(interrupts);

-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	ret = epc->ops->set_msi(epc, func_no, encode_int);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);

 	return ret;
 }
···
 int pci_epc_get_msix(struct pci_epc *epc, u8 func_no)
 {
 	int interrupt;
-	unsigned long flags;

 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return 0;
···
 	if (!epc->ops->get_msix)
 		return 0;

-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	interrupt = epc->ops->get_msix(epc, func_no);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);

 	if (interrupt < 0)
 		return 0;
···
 * @epc: the EPC device on which MSI-X has to be configured
 * @func_no: the endpoint function number in the EPC device
 * @interrupts: number of MSI-X interrupts required by the EPF
+ * @bir: BAR where the MSI-X table resides
+ * @offset: Offset pointing to the start of MSI-X table
 *
 * Invoke to set the required number of MSI-X interrupts.
 */
-int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts)
+int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
+		     enum pci_barno bir, u32 offset)
 {
 	int ret;
-	unsigned long flags;

 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
 	    interrupts < 1 || interrupts > 2048)
···
 	if (!epc->ops->set_msix)
 		return 0;

-	spin_lock_irqsave(&epc->lock, flags);
-	ret = epc->ops->set_msix(epc, func_no, interrupts - 1);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_lock(&epc->lock);
+	ret = epc->ops->set_msix(epc, func_no, interrupts - 1, bir, offset);
+	mutex_unlock(&epc->lock);

 	return ret;
 }
···
 void pci_epc_unmap_addr(struct pci_epc *epc, u8 func_no,
 			phys_addr_t phys_addr)
 {
-	unsigned long flags;
-
 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return;

 	if (!epc->ops->unmap_addr)
 		return;

-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	epc->ops->unmap_addr(epc, func_no, phys_addr);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 }
 EXPORT_SYMBOL_GPL(pci_epc_unmap_addr);
···
 		     phys_addr_t phys_addr, u64 pci_addr, size_t size)
 {
 	int ret;
-	unsigned long flags;

 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return -EINVAL;
···
 	if (!epc->ops->map_addr)
 		return 0;

-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	ret = epc->ops->map_addr(epc, func_no, phys_addr, pci_addr, size);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);

 	return ret;
 }
···
 void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no,
 		       struct pci_epf_bar *epf_bar)
 {
-	unsigned long flags;
-
 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
 	    (epf_bar->barno == BAR_5 &&
 	     epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64))
···
 	if (!epc->ops->clear_bar)
 		return;

-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	epc->ops->clear_bar(epc, func_no, epf_bar);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 }
 EXPORT_SYMBOL_GPL(pci_epc_clear_bar);
···
 		    struct pci_epf_bar *epf_bar)
 {
 	int ret;
-	unsigned long irq_flags;
 	int flags = epf_bar->flags;

 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
···
 	if (!epc->ops->set_bar)
 		return 0;

-	spin_lock_irqsave(&epc->lock, irq_flags);
+	mutex_lock(&epc->lock);
 	ret = epc->ops->set_bar(epc, func_no, epf_bar);
-	spin_unlock_irqrestore(&epc->lock, irq_flags);
+	mutex_unlock(&epc->lock);

 	return ret;
 }
···
 			 struct pci_epf_header *header)
 {
 	int ret;
-	unsigned long flags;

 	if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
 		return -EINVAL;
···
 	if (!epc->ops->write_header)
 		return 0;

-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
 	ret = epc->ops->write_header(epc, func_no, header);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);

 	return ret;
 }
···
 */
 int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf)
 {
-	unsigned long flags;
+	u32 func_no;
+	int ret = 0;

 	if (epf->epc)
 		return -EBUSY;
···
 	if (IS_ERR(epc))
 		return -EINVAL;

-	if (epf->func_no > epc->max_functions - 1)
-		return -EINVAL;
+	mutex_lock(&epc->lock);
+	func_no = find_first_zero_bit(&epc->function_num_map,
+				      BITS_PER_LONG);
+	if (func_no >= BITS_PER_LONG) {
+		ret = -EINVAL;
+		goto ret;
+	}

+	if (func_no > epc->max_functions - 1) {
+		dev_err(&epc->dev, "Exceeding max supported Function Number\n");
+		ret = -EINVAL;
+		goto ret;
+	}
+
+	set_bit(func_no, &epc->function_num_map);
+	epf->func_no = func_no;
 	epf->epc = epc;

-	spin_lock_irqsave(&epc->lock, flags);
 	list_add_tail(&epf->list, &epc->pci_epf);
-	spin_unlock_irqrestore(&epc->lock, flags);

-	return 0;
+ret:
+	mutex_unlock(&epc->lock);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(pci_epc_add_epf);
···
 */
 void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf)
 {
-	unsigned long flags;
-
 	if (!epc || IS_ERR(epc) || !epf)
 		return;

-	spin_lock_irqsave(&epc->lock, flags);
+	mutex_lock(&epc->lock);
+	clear_bit(epf->func_no, &epc->function_num_map);
 	list_del(&epf->list);
 	epf->epc = NULL;
-	spin_unlock_irqrestore(&epc->lock, flags);
+	mutex_unlock(&epc->lock);
 }
 EXPORT_SYMBOL_GPL(pci_epc_remove_epf);
···
 */
 void pci_epc_linkup(struct pci_epc *epc)
 {
-	unsigned long flags;
-	struct pci_epf *epf;
-
 	if (!epc || IS_ERR(epc))
 		return;

-	spin_lock_irqsave(&epc->lock, flags);
-	list_for_each_entry(epf, &epc->pci_epf, list)
-		pci_epf_linkup(epf);
-	spin_unlock_irqrestore(&epc->lock, flags);
+	atomic_notifier_call_chain(&epc->notifier, LINK_UP, NULL);
 }
 EXPORT_SYMBOL_GPL(pci_epc_linkup);
+
+/**
+ * pci_epc_init_notify() - Notify the EPF device that EPC device's core
+ *			   initialization is completed.
+ * @epc: the EPC device whose core initialization is completed
+ *
+ * Invoke to notify the EPF device that the EPC device's initialization
+ * is completed.
+ */
+void pci_epc_init_notify(struct pci_epc *epc)
+{
+	if (!epc || IS_ERR(epc))
+		return;
+
+	atomic_notifier_call_chain(&epc->notifier, CORE_INIT, NULL);
+}
+EXPORT_SYMBOL_GPL(pci_epc_init_notify);

 /**
 * pci_epc_destroy() - destroy the EPC device
···
 		goto err_ret;
 	}

-	spin_lock_init(&epc->lock);
+	mutex_init(&epc->lock);
 	INIT_LIST_HEAD(&epc->pci_epf);
+	ATOMIC_INIT_NOTIFIER_HEAD(&epc->notifier);

 	device_initialize(&epc->dev);
 	epc->dev.class = pci_epc_class;
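A detail worth noting in the `pci_epc_set_msi()` hunk above: the MSI capability's Multiple Message Capable field stores the vector count as a power-of-two exponent, which the kernel computes with `order_base_2(interrupts)`. A stand-alone re-implementation of that encoding for n >= 1 (the simple loop below is an illustration, not the kernel's actual bit-twiddling helper):

```c
#include <assert.h>

/* ceil(log2(n)) with my_order_base_2(1) == 0, mirroring what
 * pci_epc_set_msi() writes into the MSI capability for n vectors. */
static unsigned int my_order_base_2(unsigned int n)
{
	unsigned int order = 0;

	while ((1U << order) < n)
		order++;

	return order;
}
```

So a request for 3 MSI vectors is encoded as order 2, i.e. the hardware actually advertises 4 vectors, which is why the API bounds `interrupts` at 32 (order 5).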
+8 -2
drivers/pci/endpoint/pci-epc-mem.c
···
 	mem->page_size = page_size;
 	mem->pages = pages;
 	mem->size = size;
+	mutex_init(&mem->lock);

 	epc->mem = mem;
···
 				     phys_addr_t *phys_addr, size_t size)
 {
 	int pageno;
-	void __iomem *virt_addr;
+	void __iomem *virt_addr = NULL;
 	struct pci_epc_mem *mem = epc->mem;
 	unsigned int page_shift = ilog2(mem->page_size);
 	int order;
···
 	size = ALIGN(size, mem->page_size);
 	order = pci_epc_mem_get_order(mem, size);

+	mutex_lock(&mem->lock);
 	pageno = bitmap_find_free_region(mem->bitmap, mem->pages, order);
 	if (pageno < 0)
-		return NULL;
+		goto ret;

 	*phys_addr = mem->phys_base + ((phys_addr_t)pageno << page_shift);
 	virt_addr = ioremap(*phys_addr, size);
 	if (!virt_addr)
 		bitmap_release_region(mem->bitmap, pageno, order);

+ret:
+	mutex_unlock(&mem->lock);
 	return virt_addr;
 }
 EXPORT_SYMBOL_GPL(pci_epc_mem_alloc_addr);
···
 	pageno = (phys_addr - mem->phys_base) >> page_shift;
 	size = ALIGN(size, mem->page_size);
 	order = pci_epc_mem_get_order(mem, size);
+	mutex_lock(&mem->lock);
 	bitmap_release_region(mem->bitmap, pageno, order);
+	mutex_unlock(&mem->lock);
 }
 EXPORT_SYMBOL_GPL(pci_epc_mem_free_addr);
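The allocator this hunk serializes works on power-of-two page regions: a request is rounded up to the window page size, converted to an order for `bitmap_find_free_region()`, and the returned page number is shifted back into a physical address. A user-space sketch of just that address math (the helper names `mem_get_order` and `pageno_to_phys` are ours, for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Round 'size' up to whole pages of 2^page_shift bytes, then return
 * the smallest 'order' with 2^order pages covering the request. */
static unsigned int mem_get_order(size_t size, unsigned int page_shift)
{
	size_t page = (size_t)1 << page_shift;
	size_t pages = (size + page - 1) >> page_shift;
	unsigned int order = 0;

	while (((size_t)1 << order) < pages)
		order++;

	return order;
}

/* Map an allocated page number back to a physical address, as
 * pci_epc_mem_alloc_addr() does with mem->phys_base. */
static uint64_t pageno_to_phys(uint64_t phys_base, int pageno,
			       unsigned int page_shift)
{
	return phys_base + ((uint64_t)pageno << page_shift);
}
```

With 4 KiB pages, a 12 KiB request needs 3 pages and is therefore rounded up to an order-2 (4-page) region, which is the internal fragmentation cost of the buddy-style bitmap.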
+13 -22
drivers/pci/endpoint/pci-epf-core.c
···
 static const struct device_type pci_epf_type;
 
 /**
- * pci_epf_linkup() - Notify the function driver that EPC device has
- *                    established a connection with the Root Complex.
- * @epf: the EPF device bound to the EPC device which has established
- *       the connection with the host
- *
- * Invoke to notify the function driver that EPC device has established
- * a connection with the Root Complex.
- */
-void pci_epf_linkup(struct pci_epf *epf)
-{
-    if (!epf->driver) {
-        dev_WARN(&epf->dev, "epf device not bound to driver\n");
-        return;
-    }
-
-    epf->driver->ops->linkup(epf);
-}
-EXPORT_SYMBOL_GPL(pci_epf_linkup);
-
-/**
  * pci_epf_unbind() - Notify the function driver that the binding between the
  *                    EPF device and EPC device has been lost
  * @epf: the EPF device which has lost the binding with the EPC device
···
         return;
     }
 
+    mutex_lock(&epf->lock);
     epf->driver->ops->unbind(epf);
+    mutex_unlock(&epf->lock);
     module_put(epf->driver->owner);
 }
 EXPORT_SYMBOL_GPL(pci_epf_unbind);
···
  */
 int pci_epf_bind(struct pci_epf *epf)
 {
+    int ret;
+
     if (!epf->driver) {
         dev_WARN(&epf->dev, "epf device not bound to driver\n");
         return -EINVAL;
···
     if (!try_module_get(epf->driver->owner))
         return -EAGAIN;
 
-    return epf->driver->ops->bind(epf);
+    mutex_lock(&epf->lock);
+    ret = epf->driver->ops->bind(epf);
+    mutex_unlock(&epf->lock);
+
+    return ret;
 }
 EXPORT_SYMBOL_GPL(pci_epf_bind);
···
               epf->bar[bar].phys_addr);
 
     epf->bar[bar].phys_addr = 0;
+    epf->bar[bar].addr = NULL;
     epf->bar[bar].size = 0;
     epf->bar[bar].barno = 0;
     epf->bar[bar].flags = 0;
···
     }
 
     epf->bar[bar].phys_addr = phys_addr;
+    epf->bar[bar].addr = space;
     epf->bar[bar].size = size;
     epf->bar[bar].barno = bar;
     epf->bar[bar].flags |= upper_32_bits(size) ?
···
     if (!driver->ops)
         return -EINVAL;
 
-    if (!driver->ops->bind || !driver->ops->unbind || !driver->ops->linkup)
+    if (!driver->ops->bind || !driver->ops->unbind)
         return -EINVAL;
 
     driver->driver.bus = &pci_epf_bus_type;
···
     device_initialize(dev);
     dev->bus = &pci_epf_bus_type;
     dev->type = &pci_epf_type;
+    mutex_init(&epf->lock);
 
     ret = dev_set_name(dev, "%s", name);
     if (ret) {
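The new locking in pci_epf_bind()/pci_epf_unbind() follows a common pattern: take a per-object mutex around the driver callback so bind/unbind cannot race with the new notifier path. A minimal user-space sketch of that shape (a pthread mutex stands in for the kernel mutex; all names here are illustrative, not the kernel API):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Illustrative stand-ins for struct pci_epf and its driver ops */
struct epf_ops {
    int (*bind)(void *epf);
};

struct epf {
    struct epf_ops *ops;
    pthread_mutex_t lock;   /* serializes ops against concurrent callers */
};

static int demo_bind(void *epf)
{
    (void)epf;
    return 0;
}

/* Mirrors the reworked pci_epf_bind(): validate, then call the op under the lock */
static int epf_bind(struct epf *epf)
{
    int ret;

    if (!epf->ops || !epf->ops->bind)
        return -22;     /* stand-in for -EINVAL */

    pthread_mutex_lock(&epf->lock);
    ret = epf->ops->bind(epf);
    pthread_mutex_unlock(&epf->lock);

    return ret;
}
```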
+1
drivers/pci/hotplug/pciehp.h
···
     struct pcie_device *pcie;
 
     u32 slot_cap;       /* capabilities and quirks */
+    unsigned int inband_presence_disabled:1;
 
     u16 slot_ctrl;      /* control register access */
     struct mutex ctrl_lock;
+78 -15
drivers/pci/hotplug/pciehp_hpc.c
···
 
 #define dev_fmt(fmt) "pciehp: " fmt
 
+#include <linux/dmi.h>
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/jiffies.h>
···
 
 #include "../pci.h"
 #include "pciehp.h"
+
+static const struct dmi_system_id inband_presence_disabled_dmi_table[] = {
+    /*
+     * Match all Dell systems, as some Dell systems have inband
+     * presence disabled on NVMe slots (but don't support the bit to
+     * report it). Setting inband presence disabled should have no
+     * negative effect, except on broken hotplug slots that never
+     * assert presence detect--and those will still work, they will
+     * just have a bit of extra delay before being probed.
+     */
+    {
+        .ident = "Dell System",
+        .matches = {
+            DMI_MATCH(DMI_OEM_STRING, "Dell System"),
+        },
+    },
+    {}
+};
 
 static inline struct pci_dev *ctrl_dev(struct controller *ctrl)
 {
···
     return found;
 }
 
+static void pcie_wait_for_presence(struct pci_dev *pdev)
+{
+    int timeout = 1250;
+    u16 slot_status;
+
+    do {
+        pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
+        if (slot_status & PCI_EXP_SLTSTA_PDS)
+            return;
+        msleep(10);
+        timeout -= 10;
+    } while (timeout > 0);
+
+    pci_info(pdev, "Timeout waiting for Presence Detect\n");
+}
+
 int pciehp_check_link_status(struct controller *ctrl)
 {
     struct pci_dev *pdev = ctrl_dev(ctrl);
···
 
     if (!pcie_wait_for_link(pdev, true))
         return -1;
+
+    if (ctrl->inband_presence_disabled)
+        pcie_wait_for_presence(pdev);
 
     found = pci_bus_check_dev(ctrl->pcie->port->subordinate,
                               PCI_DEVFN(0, 0));
···
     struct controller *ctrl = (struct controller *)dev_id;
     struct pci_dev *pdev = ctrl_dev(ctrl);
     struct device *parent = pdev->dev.parent;
-    u16 status, events;
+    u16 status, events = 0;
 
     /*
      * Interrupts only occur in D3hot or shallower and only if enabled
···
         }
     }
 
+read_status:
     pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &status);
     if (status == (u16) ~0) {
         ctrl_info(ctrl, "%s: no response from device\n", __func__);
···
      * Slot Status contains plain status bits as well as event
      * notification bits; right now we only want the event bits.
      */
-    events = status & (PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD |
-                       PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_CC |
-                       PCI_EXP_SLTSTA_DLLSC);
+    status &= PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD |
+              PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_CC |
+              PCI_EXP_SLTSTA_DLLSC;
 
     /*
      * If we've already reported a power fault, don't report it again
      * until we've done something to handle it.
      */
     if (ctrl->power_fault_detected)
-        events &= ~PCI_EXP_SLTSTA_PFD;
+        status &= ~PCI_EXP_SLTSTA_PFD;
 
+    events |= status;
     if (!events) {
         if (parent)
             pm_runtime_put(parent);
         return IRQ_NONE;
     }
 
-    pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, events);
+    if (status) {
+        pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, events);
+
+        /*
+         * In MSI mode, all event bits must be zero before the port
+         * will send a new interrupt (PCIe Base Spec r5.0 sec 6.7.3.4).
+         * So re-read the Slot Status register in case a bit was set
+         * between read and write.
+         */
+        if (pci_dev_msi_enabled(pdev) && !pciehp_poll_mode)
+            goto read_status;
+    }
+
     ctrl_dbg(ctrl, "pending interrupts %#06x from Slot Status\n", events);
     if (parent)
         pm_runtime_put(parent);
···
     if (atomic_fetch_and(~RERUN_ISR, &ctrl->pending_events) & RERUN_ISR) {
         ret = pciehp_isr(irq, dev_id);
         enable_irq(irq);
-        if (ret != IRQ_WAKE_THREAD) {
-            pci_config_pm_runtime_put(pdev);
-            return ret;
-        }
+        if (ret != IRQ_WAKE_THREAD)
+            goto out;
     }
 
     synchronize_hardirq(irq);
     events = atomic_xchg(&ctrl->pending_events, 0);
     if (!events) {
-        pci_config_pm_runtime_put(pdev);
-        return IRQ_NONE;
+        ret = IRQ_NONE;
+        goto out;
     }
 
     /* Check Attention Button Pressed */
···
         pciehp_handle_presence_or_link_change(ctrl, events);
     up_read(&ctrl->reset_lock);
 
+    ret = IRQ_HANDLED;
+out:
     pci_config_pm_runtime_put(pdev);
     ctrl->ist_running = false;
     wake_up(&ctrl->requester);
-    return IRQ_HANDLED;
+    return ret;
 }
 
 static int pciehp_poll(void *data)
···
 struct controller *pcie_init(struct pcie_device *dev)
 {
     struct controller *ctrl;
-    u32 slot_cap, link_cap;
+    u32 slot_cap, slot_cap2, link_cap;
     u8 poweron;
     struct pci_dev *pdev = dev->port;
     struct pci_bus *subordinate = pdev->subordinate;
···
     ctrl->state = list_empty(&subordinate->devices) ? OFF_STATE : ON_STATE;
     up_read(&pci_bus_sem);
 
+    pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP2, &slot_cap2);
+    if (slot_cap2 & PCI_EXP_SLTCAP2_IBPD) {
+        pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_IBPD_DISABLE,
+                              PCI_EXP_SLTCTL_IBPD_DISABLE);
+        ctrl->inband_presence_disabled = 1;
+    }
+
+    if (dmi_first_match(inband_presence_disabled_dmi_table))
+        ctrl->inband_presence_disabled = 1;
+
     /* Check if Data Link Layer Link Active Reporting is implemented */
     pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &link_cap);
···
              PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_CC |
              PCI_EXP_SLTSTA_DLLSC | PCI_EXP_SLTSTA_PDC);
 
-    ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c LLActRep%c%s\n",
+    ctrl_info(ctrl, "Slot #%d AttnBtn%c PwrCtrl%c MRL%c AttnInd%c PwrInd%c HotPlug%c Surprise%c Interlock%c NoCompl%c IbPresDis%c LLActRep%c%s\n",
         (slot_cap & PCI_EXP_SLTCAP_PSN) >> 19,
         FLAG(slot_cap, PCI_EXP_SLTCAP_ABP),
         FLAG(slot_cap, PCI_EXP_SLTCAP_PCP),
···
         FLAG(slot_cap, PCI_EXP_SLTCAP_HPS),
         FLAG(slot_cap, PCI_EXP_SLTCAP_EIP),
         FLAG(slot_cap, PCI_EXP_SLTCAP_NCCS),
+        FLAG(slot_cap2, PCI_EXP_SLTCAP2_IBPD),
         FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC),
         pdev->broken_cmd_compl ? " (with Cmd Compl erratum)" : "");
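The pcie_wait_for_presence() polling loop added above can be exercised in isolation: poll Slot Status until the Presence Detect State bit appears or a ~1250 ms budget runs out. A user-space sketch with a hypothetical register model in place of pcie_capability_read_word() (only the PCI_EXP_SLTSTA_PDS bit value is real; the register model and function names are illustrative):

```c
#include <assert.h>

#define PCI_EXP_SLTSTA_PDS 0x0040  /* Presence Detect State */

/* hypothetical register model: PDS becomes visible after N reads */
static int reads_until_present;

static unsigned short read_slot_status(void)
{
    if (reads_until_present > 0) {
        reads_until_present--;
        return 0;
    }
    return PCI_EXP_SLTSTA_PDS;
}

/* returns 1 if presence was seen before the 1250 ms budget expired */
static int wait_for_presence(void)
{
    int timeout = 1250;

    do {
        if (read_slot_status() & PCI_EXP_SLTSTA_PDS)
            return 1;
        /* the kernel sleeps here with msleep(10); elided in this sketch */
        timeout -= 10;
    } while (timeout > 0);

    return 0;
}
```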
+3
drivers/pci/p2pdma.c
···
     {PCI_VENDOR_ID_INTEL, 0x2f01, REQ_SAME_HOST_BRIDGE},
     /* Intel SkyLake-E */
     {PCI_VENDOR_ID_INTEL, 0x2030, 0},
+    {PCI_VENDOR_ID_INTEL, 0x2031, 0},
+    {PCI_VENDOR_ID_INTEL, 0x2032, 0},
+    {PCI_VENDOR_ID_INTEL, 0x2033, 0},
     {PCI_VENDOR_ID_INTEL, 0x2020, 0},
     {}
 };
+3 -1
drivers/pci/pci-acpi.c
···
 static u16 hpx3_device_type(struct pci_dev *dev)
 {
     u16 pcie_type = pci_pcie_type(dev);
-    const int pcie_to_hpx3_type[] = {
+    static const int pcie_to_hpx3_type[] = {
         [PCI_EXP_TYPE_ENDPOINT]    = HPX_TYPE_ENDPOINT,
         [PCI_EXP_TYPE_LEG_END]     = HPX_TYPE_LEG_END,
         [PCI_EXP_TYPE_RC_END]      = HPX_TYPE_RC_END,
···
 
     pci_acpi_optimize_delay(pci_dev, adev->handle);
     pci_acpi_set_untrusted(pci_dev);
+    pci_acpi_add_edr_notifier(pci_dev);
 
     pci_acpi_add_pm_notifier(adev, pci_dev);
     if (!adev->wakeup.flags.valid)
···
     if (!adev)
         return;
 
+    pci_acpi_remove_edr_notifier(pci_dev);
     pci_acpi_remove_pm_notifier(adev);
     if (adev->wakeup.flags.valid) {
         acpi_device_power_remove_dependent(adev, dev);
+9 -24
drivers/pci/pci-sysfs.c
···
 {
     struct pci_dev *pdev = to_pci_dev(dev);
 
-    return sprintf(buf, "%s\n", PCIE_SPEED2STR(pcie_get_speed_cap(pdev)));
+    return sprintf(buf, "%s\n",
+                   pci_speed_string(pcie_get_speed_cap(pdev)));
 }
 static DEVICE_ATTR_RO(max_link_speed);
 
···
     struct pci_dev *pci_dev = to_pci_dev(dev);
     u16 linkstat;
     int err;
-    const char *speed;
+    enum pci_bus_speed speed;
 
     err = pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &linkstat);
     if (err)
         return -EINVAL;
 
-    switch (linkstat & PCI_EXP_LNKSTA_CLS) {
-    case PCI_EXP_LNKSTA_CLS_32_0GB:
-        speed = "32 GT/s";
-        break;
-    case PCI_EXP_LNKSTA_CLS_16_0GB:
-        speed = "16 GT/s";
-        break;
-    case PCI_EXP_LNKSTA_CLS_8_0GB:
-        speed = "8 GT/s";
-        break;
-    case PCI_EXP_LNKSTA_CLS_5_0GB:
-        speed = "5 GT/s";
-        break;
-    case PCI_EXP_LNKSTA_CLS_2_5GB:
-        speed = "2.5 GT/s";
-        break;
-    default:
-        speed = "Unknown speed";
-    }
+    speed = pcie_link_speed[linkstat & PCI_EXP_LNKSTA_CLS];
 
-    return sprintf(buf, "%s\n", speed);
+    return sprintf(buf, "%s\n", pci_speed_string(speed));
 }
 static DEVICE_ATTR_RO(current_link_speed);
 
···
     }
     return count;
 }
-static DEVICE_ATTR_WO(dev_rescan);
+static struct device_attribute dev_attr_dev_rescan = __ATTR(rescan, 0200, NULL,
+                                                            dev_rescan_store);
 
 static ssize_t remove_store(struct device *dev, struct device_attribute *attr,
                             const char *buf, size_t count)
···
     }
     return count;
 }
-static DEVICE_ATTR_WO(bus_rescan);
+static struct device_attribute dev_attr_bus_rescan = __ATTR(rescan, 0200, NULL,
+                                                            bus_rescan_store);
 
 #if defined(CONFIG_PM) && defined(CONFIG_ACPI)
 static ssize_t d3cold_allowed_store(struct device *dev,
+8 -17
drivers/pci/pci.c
···
     pci_restore_rebar_state(dev);
     pci_restore_dpc_state(dev);
 
-    pci_cleanup_aer_error_status_regs(dev);
+    pci_aer_clear_status(dev);
     pci_restore_aer_state(dev);
 
     pci_restore_config_space(dev);
···
      * where only 2.5 GT/s and 5.0 GT/s speeds were defined.
      */
     pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2);
-    if (lnkcap2) { /* PCIe r3.0-compliant */
-        if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_32_0GB)
-            return PCIE_SPEED_32_0GT;
-        else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_16_0GB)
-            return PCIE_SPEED_16_0GT;
-        else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_8_0GB)
-            return PCIE_SPEED_8_0GT;
-        else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_5_0GB)
-            return PCIE_SPEED_5_0GT;
-        else if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_2_5GB)
-            return PCIE_SPEED_2_5GT;
-        return PCI_SPEED_UNKNOWN;
-    }
+
+    /* PCIe r3.0-compliant */
+    if (lnkcap2)
+        return PCIE_LNKCAP2_SLS2SPEED(lnkcap2);
 
     pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);
     if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_5_0GB)
···
     if (bw_avail >= bw_cap && verbose)
         pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth (%s x%d link)\n",
                  bw_cap / 1000, bw_cap % 1000,
-                 PCIE_SPEED2STR(speed_cap), width_cap);
+                 pci_speed_string(speed_cap), width_cap);
     else if (bw_avail < bw_cap)
         pci_info(dev, "%u.%03u Gb/s available PCIe bandwidth, limited by %s x%d link at %s (capable of %u.%03u Gb/s with %s x%d link)\n",
                  bw_avail / 1000, bw_avail % 1000,
-                 PCIE_SPEED2STR(speed), width,
+                 pci_speed_string(speed), width,
                  limiting_dev ? pci_name(limiting_dev) : "<unknown>",
                  bw_cap / 1000, bw_cap % 1000,
-                 PCIE_SPEED2STR(speed_cap), width_cap);
+                 pci_speed_string(speed_cap), width_cap);
 }
 
 /**
+22 -10
drivers/pci/pci.h
···
 struct pci_bus *pci_bus_get(struct pci_bus *bus);
 void pci_bus_put(struct pci_bus *bus);
 
-/* PCIe link information */
-#define PCIE_SPEED2STR(speed) \
-    ((speed) == PCIE_SPEED_16_0GT ? "16 GT/s" : \
-     (speed) == PCIE_SPEED_8_0GT ? "8 GT/s" : \
-     (speed) == PCIE_SPEED_5_0GT ? "5 GT/s" : \
-     (speed) == PCIE_SPEED_2_5GT ? "2.5 GT/s" : \
-     "Unknown speed")
+/* PCIe link information from Link Capabilities 2 */
+#define PCIE_LNKCAP2_SLS2SPEED(lnkcap2) \
+    ((lnkcap2) & PCI_EXP_LNKCAP2_SLS_32_0GB ? PCIE_SPEED_32_0GT : \
+     (lnkcap2) & PCI_EXP_LNKCAP2_SLS_16_0GB ? PCIE_SPEED_16_0GT : \
+     (lnkcap2) & PCI_EXP_LNKCAP2_SLS_8_0GB ? PCIE_SPEED_8_0GT : \
+     (lnkcap2) & PCI_EXP_LNKCAP2_SLS_5_0GB ? PCIE_SPEED_5_0GT : \
+     (lnkcap2) & PCI_EXP_LNKCAP2_SLS_2_5GB ? PCIE_SPEED_2_5GT : \
+     PCI_SPEED_UNKNOWN)
 
 /* PCIe speed to Mb/s reduced by encoding overhead */
 #define PCIE_SPEED2MBS_ENC(speed) \
-    ((speed) == PCIE_SPEED_16_0GT ? 16000*128/130 : \
+    ((speed) == PCIE_SPEED_32_0GT ? 32000*128/130 : \
+     (speed) == PCIE_SPEED_16_0GT ? 16000*128/130 : \
      (speed) == PCIE_SPEED_8_0GT  ?  8000*128/130 : \
      (speed) == PCIE_SPEED_5_0GT  ?  5000*8/10 : \
      (speed) == PCIE_SPEED_2_5GT  ?  2500*8/10 : \
      0)
 
+const char *pci_speed_string(enum pci_bus_speed speed);
 enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev);
 enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev);
 u32 pcie_bandwidth_capable(struct pci_dev *dev, enum pci_bus_speed *speed,
···
 #ifdef CONFIG_PCIE_DPC
 void pci_save_dpc_state(struct pci_dev *dev);
 void pci_restore_dpc_state(struct pci_dev *dev);
+void pci_dpc_init(struct pci_dev *pdev);
+void dpc_process_error(struct pci_dev *pdev);
+pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
 #else
 static inline void pci_save_dpc_state(struct pci_dev *dev) {}
 static inline void pci_restore_dpc_state(struct pci_dev *dev) {}
+static inline void pci_dpc_init(struct pci_dev *pdev) {}
 #endif
 
 #ifdef CONFIG_PCI_ATS
···
 #endif
 
 /* PCI error reporting and recovery */
-void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state,
-                      u32 service);
+pci_ers_result_t pcie_do_recovery(struct pci_dev *dev,
+        enum pci_channel_state state,
+        pci_ers_result_t (*reset_link)(struct pci_dev *pdev));
 
 bool pcie_wait_for_link(struct pci_dev *pdev, bool active);
 #ifdef CONFIG_PCIEASPM
···
 extern const struct attribute_group aer_stats_attr_group;
 void pci_aer_clear_fatal_status(struct pci_dev *dev);
 void pci_aer_clear_device_status(struct pci_dev *dev);
+int pci_aer_clear_status(struct pci_dev *dev);
+int pci_aer_raw_clear_status(struct pci_dev *dev);
 #else
 static inline void pci_no_aer(void) { }
 static inline void pci_aer_init(struct pci_dev *d) { }
 static inline void pci_aer_exit(struct pci_dev *d) { }
 static inline void pci_aer_clear_fatal_status(struct pci_dev *dev) { }
 static inline void pci_aer_clear_device_status(struct pci_dev *dev) { }
+static inline int pci_aer_clear_status(struct pci_dev *dev) { return -EINVAL; }
+static inline int pci_aer_raw_clear_status(struct pci_dev *dev) { return -EINVAL; }
 #endif
 
 #ifdef CONFIG_ACPI
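The two speed macros above can be checked numerically. A user-space mirror with local stand-in names (the SLS bit values and encoding-overhead arithmetic follow the kernel's `pci_regs.h` and `pci.h`; everything else here is illustrative):

```c
#include <assert.h>

/* Supported Link Speeds vector bits, as in PCI_EXP_LNKCAP2_SLS_* */
#define LNKCAP2_SLS_2_5GB  0x02
#define LNKCAP2_SLS_5_0GB  0x04
#define LNKCAP2_SLS_8_0GB  0x08
#define LNKCAP2_SLS_16_0GB 0x10
#define LNKCAP2_SLS_32_0GB 0x20

enum speed { SPEED_UNKNOWN, S_2_5GT, S_5_0GT, S_8_0GT, S_16_0GT, S_32_0GT };

/* highest supported speed in the vector wins, like PCIE_LNKCAP2_SLS2SPEED */
#define LNKCAP2_SLS2SPEED(l) \
    ((l) & LNKCAP2_SLS_32_0GB ? S_32_0GT : \
     (l) & LNKCAP2_SLS_16_0GB ? S_16_0GT : \
     (l) & LNKCAP2_SLS_8_0GB  ? S_8_0GT  : \
     (l) & LNKCAP2_SLS_5_0GB  ? S_5_0GT  : \
     (l) & LNKCAP2_SLS_2_5GB  ? S_2_5GT  : \
     SPEED_UNKNOWN)

/* raw rate minus 8b/10b or 128b/130b encoding overhead, in Mb/s */
#define SPEED2MBS_ENC(s) \
    ((s) == S_32_0GT ? 32000*128/130 : \
     (s) == S_16_0GT ? 16000*128/130 : \
     (s) == S_8_0GT  ?  8000*128/130 : \
     (s) == S_5_0GT  ?  5000*8/10 : \
     (s) == S_2_5GT  ?  2500*8/10 : \
     0)
```

For example, a device advertising 2.5/5/8 GT/s resolves to 8 GT/s, and 32 GT/s with 128b/130b encoding yields 31507 Mb/s of usable bandwidth per lane (integer arithmetic, as in the kernel).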
+10
drivers/pci/pcie/Kconfig
···
       This enables PCI Express Bandwidth Change Notification.  If
       you know link width or rate changes occur only to correct
       unreliable links, you may answer Y.
+
+config PCIE_EDR
+    bool "PCI Express Error Disconnect Recover support"
+    depends on PCIE_DPC && ACPI
+    help
+      This option adds Error Disconnect Recover support as specified
+      in the Downstream Port Containment Related Enhancements ECN to
+      the PCI Firmware Specification r3.2.  Enable this if you want to
+      support hybrid DPC model which uses both firmware and OS to
+      implement DPC.
+1
drivers/pci/pcie/Makefile
···
 obj-$(CONFIG_PCIE_DPC)  += dpc.o
 obj-$(CONFIG_PCIE_PTM)  += ptm.o
 obj-$(CONFIG_PCIE_BW)   += bw_notification.o
+obj-$(CONFIG_PCIE_EDR)  += edr.o
+26 -14
drivers/pci/pcie/aer.c
···
 #define ERR_UNCOR_ID(d)         (d >> 16)
 
 static int pcie_aer_disable;
+static pci_ers_result_t aer_root_reset(struct pci_dev *dev);
 
 void pci_no_aer(void)
 {
···
     pcie_capability_write_word(dev, PCI_EXP_DEVSTA, sta);
 }
 
-int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev)
+int pci_aer_clear_nonfatal_status(struct pci_dev *dev)
 {
     int pos;
     u32 status, sev;
···
 
     return 0;
 }
-EXPORT_SYMBOL_GPL(pci_cleanup_aer_uncorrect_error_status);
+EXPORT_SYMBOL_GPL(pci_aer_clear_nonfatal_status);
 
 void pci_aer_clear_fatal_status(struct pci_dev *dev)
 {
···
     pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status);
 }
 
-int pci_cleanup_aer_error_status_regs(struct pci_dev *dev)
+/**
+ * pci_aer_raw_clear_status - Clear AER error registers.
+ * @dev: the PCI device
+ *
+ * Clearing AER error status registers unconditionally, regardless of
+ * whether they're owned by firmware or the OS.
+ *
+ * Returns 0 on success, or negative on failure.
+ */
+int pci_aer_raw_clear_status(struct pci_dev *dev)
 {
     int pos;
     u32 status;
···
 
     pos = dev->aer_cap;
     if (!pos)
-        return -EIO;
-
-    if (pcie_aer_get_firmware_first(dev))
         return -EIO;
 
     port_type = pci_pcie_type(dev);
···
     pci_write_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, status);
 
     return 0;
+}
+
+int pci_aer_clear_status(struct pci_dev *dev)
+{
+    if (pcie_aer_get_firmware_first(dev))
+        return -EIO;
+
+    return pci_aer_raw_clear_status(dev);
 }
 
 void pci_save_aer_state(struct pci_dev *dev)
···
     n = pcie_cap_has_rtctl(dev) ? 5 : 4;
     pci_add_ext_cap_save_buffer(dev, PCI_EXT_CAP_ID_ERR, sizeof(u32) * n);
 
-    pci_cleanup_aer_error_status_regs(dev);
+    pci_aer_clear_status(dev);
 }
 
 void pci_aer_exit(struct pci_dev *dev)
···
             info->status);
         pci_aer_clear_device_status(dev);
     } else if (info->severity == AER_NONFATAL)
-        pcie_do_recovery(dev, pci_channel_io_normal,
-                         PCIE_PORT_SERVICE_AER);
+        pcie_do_recovery(dev, pci_channel_io_normal, aer_root_reset);
     else if (info->severity == AER_FATAL)
-        pcie_do_recovery(dev, pci_channel_io_frozen,
-                         PCIE_PORT_SERVICE_AER);
+        pcie_do_recovery(dev, pci_channel_io_frozen, aer_root_reset);
     pci_dev_put(dev);
 }
···
         cper_print_aer(pdev, entry.severity, entry.regs);
         if (entry.severity == AER_NONFATAL)
             pcie_do_recovery(pdev, pci_channel_io_normal,
-                             PCIE_PORT_SERVICE_AER);
+                             aer_root_reset);
         else if (entry.severity == AER_FATAL)
             pcie_do_recovery(pdev, pci_channel_io_frozen,
-                             PCIE_PORT_SERVICE_AER);
+                             aer_root_reset);
         pci_dev_put(pdev);
     }
 }
···
 
     .probe      = aer_probe,
     .remove     = aer_remove,
-    .reset_link = aer_root_reset,
 };
 
 /**
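The recurring change above is that pcie_do_recovery() now takes a reset_link callback from each error source (AER, DPC, EDR) instead of a service-ID constant, so the recovery core no longer has to look the handler up through the port-service layer. A minimal sketch of that dispatch shape, with illustrative stand-in types rather than the kernel's:

```c
#include <assert.h>
#include <stddef.h>

/* illustrative stand-ins for pci_ers_result_t and struct pci_dev */
typedef enum { ERS_RECOVERED, ERS_DISCONNECT } ers_result_t;

struct dev {
    int frozen;     /* 1 if the link must be reset before recovery */
};

/* each error source supplies its own link-reset routine */
static ers_result_t do_recovery(struct dev *d,
                                ers_result_t (*reset_link)(struct dev *))
{
    if (d->frozen) {
        if (!reset_link)
            return ERS_DISCONNECT;
        return reset_link(d);
    }
    return ERS_RECOVERED;
}

/* stand-in for aer_root_reset()/dpc_reset_link() */
static ers_result_t fake_root_reset(struct dev *d)
{
    d->frozen = 0;  /* pretend the link came back up */
    return ERS_RECOVERED;
}
```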
+3 -3
drivers/pci/pcie/aspm.c
···
     }
     if (consistent)
         return;
-    pci_warn(parent, "ASPM: current common clock configuration is broken, reconfiguring\n");
+    pci_info(parent, "ASPM: current common clock configuration is inconsistent, reconfiguring\n");
 }
 
 /* Configure downstream component, all functions */
···
 
     /* Enable what we need to enable */
     pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1,
-                            PCI_L1SS_CAP_L1_PM_SS, val);
+                            PCI_L1SS_CTL1_L1SS_MASK, val);
     pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1,
-                            PCI_L1SS_CAP_L1_PM_SS, val);
+                            PCI_L1SS_CTL1_L1SS_MASK, val);
 }
 
 static void pcie_config_aspm_dev(struct pci_dev *pdev, u32 val)
+55 -82
drivers/pci/pcie/dpc.c
···
 #include "portdrv.h"
 #include "../pci.h"
 
-struct dpc_dev {
-    struct pcie_device  *dev;
-    u16                 cap_pos;
-    bool                rp_extensions;
-    u8                  rp_log_size;
-};
-
 static const char * const rp_pio_error_string[] = {
     "Configuration Request received UR Completion",  /* Bit Position 0  */
     "Configuration Request received CA Completion",  /* Bit Position 1  */
···
     "Memory Request Completion Timeout",             /* Bit Position 18 */
 };
 
-static struct dpc_dev *to_dpc_dev(struct pci_dev *dev)
-{
-    struct device *device;
-
-    device = pcie_port_find_device(dev, PCIE_PORT_SERVICE_DPC);
-    if (!device)
-        return NULL;
-    return get_service_data(to_pcie_device(device));
-}
-
 void pci_save_dpc_state(struct pci_dev *dev)
 {
-    struct dpc_dev *dpc;
     struct pci_cap_saved_state *save_state;
     u16 *cap;
 
     if (!pci_is_pcie(dev))
-        return;
-
-    dpc = to_dpc_dev(dev);
-    if (!dpc)
         return;
 
     save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_DPC);
···
         return;
 
     cap = (u16 *)&save_state->cap.data[0];
-    pci_read_config_word(dev, dpc->cap_pos + PCI_EXP_DPC_CTL, cap);
+    pci_read_config_word(dev, dev->dpc_cap + PCI_EXP_DPC_CTL, cap);
 }
 
 void pci_restore_dpc_state(struct pci_dev *dev)
 {
-    struct dpc_dev *dpc;
     struct pci_cap_saved_state *save_state;
     u16 *cap;
 
     if (!pci_is_pcie(dev))
-        return;
-
-    dpc = to_dpc_dev(dev);
-    if (!dpc)
         return;
 
     save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_DPC);
···
         return;
 
     cap = (u16 *)&save_state->cap.data[0];
-    pci_write_config_word(dev, dpc->cap_pos + PCI_EXP_DPC_CTL, *cap);
+    pci_write_config_word(dev, dev->dpc_cap + PCI_EXP_DPC_CTL, *cap);
 }
 
-static int dpc_wait_rp_inactive(struct dpc_dev *dpc)
+static int dpc_wait_rp_inactive(struct pci_dev *pdev)
 {
     unsigned long timeout = jiffies + HZ;
-    struct pci_dev *pdev = dpc->dev->port;
-    u16 cap = dpc->cap_pos, status;
+    u16 cap = pdev->dpc_cap, status;
 
     pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
     while (status & PCI_EXP_DPC_RP_BUSY &&
···
     return 0;
 }
 
-static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
+pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
 {
-    struct dpc_dev *dpc;
     u16 cap;
 
     /*
      * DPC disables the Link automatically in hardware, so it has
      * already been reset by the time we get here.
      */
-    dpc = to_dpc_dev(pdev);
-    cap = dpc->cap_pos;
+    cap = pdev->dpc_cap;
 
     /*
      * Wait until the Link is inactive, then clear DPC Trigger Status
···
      */
     pcie_wait_for_link(pdev, false);
 
-    if (dpc->rp_extensions && dpc_wait_rp_inactive(dpc))
+    if (pdev->dpc_rp_extensions && dpc_wait_rp_inactive(pdev))
         return PCI_ERS_RESULT_DISCONNECT;
 
     pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
···
     return PCI_ERS_RESULT_RECOVERED;
 }
 
-static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
+static void dpc_process_rp_pio_error(struct pci_dev *pdev)
 {
-    struct pci_dev *pdev = dpc->dev->port;
-    u16 cap = dpc->cap_pos, dpc_status, first_error;
+    u16 cap = pdev->dpc_cap, dpc_status, first_error;
     u32 status, mask, sev, syserr, exc, dw0, dw1, dw2, dw3, log, prefix;
     int i;
 
···
             first_error == i ? " (First)" : "");
     }
 
-    if (dpc->rp_log_size < 4)
+    if (pdev->dpc_rp_log_size < 4)
         goto clear_status;
     pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_HEADER_LOG,
                           &dw0);
···
     pci_err(pdev, "TLP Header: %#010x %#010x %#010x %#010x\n",
             dw0, dw1, dw2, dw3);
 
-    if (dpc->rp_log_size < 5)
+    if (pdev->dpc_rp_log_size < 5)
         goto clear_status;
     pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_IMPSPEC_LOG, &log);
     pci_err(pdev, "RP PIO ImpSpec Log %#010x\n", log);
 
-    for (i = 0; i < dpc->rp_log_size - 5; i++) {
+    for (i = 0; i < pdev->dpc_rp_log_size - 5; i++) {
         pci_read_config_dword(pdev,
             cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG, &prefix);
         pci_err(pdev, "TLP Prefix Header: dw%d, %#010x\n", i, prefix);
···
     return 1;
 }
 
-static irqreturn_t dpc_handler(int irq, void *context)
+void dpc_process_error(struct pci_dev *pdev)
 {
+    u16 cap = pdev->dpc_cap, status, source, reason, ext_reason;
     struct aer_err_info info;
-    struct dpc_dev *dpc = context;
-    struct pci_dev *pdev = dpc->dev->port;
-    u16 cap = dpc->cap_pos, status, source, reason, ext_reason;
 
     pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
     pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source);
···
             "reserved error");
 
     /* show RP PIO error detail information */
-    if (dpc->rp_extensions && reason == 3 && ext_reason == 0)
-        dpc_process_rp_pio_error(dpc);
+    if (pdev->dpc_rp_extensions && reason == 3 && ext_reason == 0)
+        dpc_process_rp_pio_error(pdev);
     else if (reason == 0 &&
              dpc_get_aer_uncorrect_severity(pdev, &info) &&
              aer_get_device_error_info(pdev, &info)) {
         aer_print_error(pdev, &info);
-        pci_cleanup_aer_uncorrect_error_status(pdev);
+        pci_aer_clear_nonfatal_status(pdev);
         pci_aer_clear_fatal_status(pdev);
     }
+}
+
+static irqreturn_t dpc_handler(int irq, void *context)
+{
+    struct pci_dev *pdev = context;
+
+    dpc_process_error(pdev);
 
     /* We configure DPC so it only triggers on ERR_FATAL */
-    pcie_do_recovery(pdev, pci_channel_io_frozen, PCIE_PORT_SERVICE_DPC);
+    pcie_do_recovery(pdev, pci_channel_io_frozen, dpc_reset_link);
 
     return IRQ_HANDLED;
 }
 
 static irqreturn_t dpc_irq(int irq, void *context)
 {
-    struct dpc_dev *dpc = (struct dpc_dev *)context;
-    struct pci_dev *pdev = dpc->dev->port;
-    u16 cap = dpc->cap_pos, status;
+    struct pci_dev *pdev = context;
+    u16 cap = pdev->dpc_cap, status;
 
     pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
 
···
     return IRQ_HANDLED;
 }
 
+void pci_dpc_init(struct pci_dev *pdev)
+{
+    u16 cap;
+
+    pdev->dpc_cap = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DPC);
+    if (!pdev->dpc_cap)
+        return;
+
+    pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CAP, &cap);
+    if (!(cap & PCI_EXP_DPC_CAP_RP_EXT))
+        return;
+
+    pdev->dpc_rp_extensions = true;
+    pdev->dpc_rp_log_size = (cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8;
+    if (pdev->dpc_rp_log_size < 4 || pdev->dpc_rp_log_size > 9) {
+        pci_err(pdev, "RP PIO log size %u is invalid\n",
+                pdev->dpc_rp_log_size);
+        pdev->dpc_rp_log_size = 0;
+    }
+}
+
 #define FLAG(x, y) (((x) & (y)) ? '+' : '-')
 static int dpc_probe(struct pcie_device *dev)
 {
-    struct dpc_dev *dpc;
     struct pci_dev *pdev = dev->port;
     struct device *device = &dev->device;
     int status;
···
     if (pcie_aer_get_firmware_first(pdev) && !pcie_ports_dpc_native)
         return -ENOTSUPP;
 
-    dpc = devm_kzalloc(device, sizeof(*dpc), GFP_KERNEL);
-    if (!dpc)
-        return -ENOMEM;
-
-    dpc->cap_pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DPC);
-    dpc->dev = dev;
-    set_service_data(dev, dpc);
-
     status = devm_request_threaded_irq(device, dev->irq, dpc_irq,
                                        dpc_handler, IRQF_SHARED,
-                                       "pcie-dpc", dpc);
+                                       "pcie-dpc", pdev);
     if (status) {
         pci_warn(pdev, "request IRQ%d failed: %d\n", dev->irq,
                  status);
         return status;
     }
 
-    pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CAP, &cap);
-    pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, &ctl);
-
-    dpc->rp_extensions = (cap & PCI_EXP_DPC_CAP_RP_EXT);
-    if (dpc->rp_extensions) {
-        dpc->rp_log_size = (cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8;
-        if (dpc->rp_log_size < 4 || dpc->rp_log_size > 9) {
-            pci_err(pdev, "RP PIO log size %u is invalid\n",
-                    dpc->rp_log_size);
-            dpc->rp_log_size = 0;
-        }
-    }
+    pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CAP, &cap);
+    pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, &ctl);
 
     ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN;
-    pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl);
+    pci_write_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, ctl);
 
     pci_info(pdev, "error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n",
              cap & PCI_EXP_DPC_IRQ, FLAG(cap, PCI_EXP_DPC_CAP_RP_EXT),
              FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP),
-             FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), dpc->rp_log_size,
+             FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), pdev->dpc_rp_log_size,
             FLAG(cap, PCI_EXP_DPC_CAP_DL_ACTIVE));
 
     pci_add_ext_cap_save_buffer(pdev, PCI_EXT_CAP_ID_DPC, sizeof(u16));
···
 
 static void dpc_remove(struct pcie_device *dev)
 {
-    struct dpc_dev *dpc = get_service_data(dev);
     struct pci_dev *pdev = dev->port;
     u16 ctl;
 
-    pci_read_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, &ctl);
+    pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, &ctl);
     ctl &= ~(PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN);
-    pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl);
+    pci_write_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, ctl);
 }
 
 static struct pcie_port_service_driver dpcdriver = {
···
     .service    = PCIE_PORT_SERVICE_DPC,
     .probe      = dpc_probe,
     .remove     = dpc_remove,
-    .reset_link = dpc_reset_link,
 };
 
 int __init pcie_dpc_init(void)
+239
drivers/pci/pcie/edr.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * PCI Error Disconnect Recover support 4 + * Author: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> 5 + * 6 + * Copyright (C) 2020 Intel Corp. 7 + */ 8 + 9 + #define dev_fmt(fmt) "EDR: " fmt 10 + 11 + #include <linux/pci.h> 12 + #include <linux/pci-acpi.h> 13 + 14 + #include "portdrv.h" 15 + #include "../pci.h" 16 + 17 + #define EDR_PORT_DPC_ENABLE_DSM 0x0C 18 + #define EDR_PORT_LOCATE_DSM 0x0D 19 + #define EDR_OST_SUCCESS 0x80 20 + #define EDR_OST_FAILED 0x81 21 + 22 + /* 23 + * _DSM wrapper function to enable/disable DPC 24 + * @pdev : PCI device structure 25 + * 26 + * returns 0 on success or errno on failure. 27 + */ 28 + static int acpi_enable_dpc(struct pci_dev *pdev) 29 + { 30 + struct acpi_device *adev = ACPI_COMPANION(&pdev->dev); 31 + union acpi_object *obj, argv4, req; 32 + int status = 0; 33 + 34 + /* 35 + * Behavior when calling unsupported _DSM functions is undefined, 36 + * so check whether EDR_PORT_DPC_ENABLE_DSM is supported. 37 + */ 38 + if (!acpi_check_dsm(adev->handle, &pci_acpi_dsm_guid, 5, 39 + 1ULL << EDR_PORT_DPC_ENABLE_DSM)) 40 + return 0; 41 + 42 + req.type = ACPI_TYPE_INTEGER; 43 + req.integer.value = 1; 44 + 45 + argv4.type = ACPI_TYPE_PACKAGE; 46 + argv4.package.count = 1; 47 + argv4.package.elements = &req; 48 + 49 + /* 50 + * Per Downstream Port Containment Related Enhancements ECN to PCI 51 + * Firmware Specification r3.2, sec 4.6.12, EDR_PORT_DPC_ENABLE_DSM is 52 + * optional. Return success if it's not implemented. 
53 + */ 54 + obj = acpi_evaluate_dsm(adev->handle, &pci_acpi_dsm_guid, 5, 55 + EDR_PORT_DPC_ENABLE_DSM, &argv4); 56 + if (!obj) 57 + return 0; 58 + 59 + if (obj->type != ACPI_TYPE_INTEGER) { 60 + pci_err(pdev, FW_BUG "Enable DPC _DSM returned non integer\n"); 61 + status = -EIO; 62 + } 63 + 64 + if (obj->integer.value != 1) { 65 + pci_err(pdev, "Enable DPC _DSM failed to enable DPC\n"); 66 + status = -EIO; 67 + } 68 + 69 + ACPI_FREE(obj); 70 + 71 + return status; 72 + } 73 + 74 + /* 75 + * _DSM wrapper function to locate DPC port 76 + * @pdev : Device which received EDR event 77 + * 78 + * Returns pci_dev or NULL. Caller is responsible for dropping a reference 79 + * on the returned pci_dev with pci_dev_put(). 80 + */ 81 + static struct pci_dev *acpi_dpc_port_get(struct pci_dev *pdev) 82 + { 83 + struct acpi_device *adev = ACPI_COMPANION(&pdev->dev); 84 + union acpi_object *obj; 85 + u16 port; 86 + 87 + /* 88 + * Behavior when calling unsupported _DSM functions is undefined, 89 + * so check whether EDR_PORT_LOCATE_DSM is supported. 
90 + */ 91 + if (!acpi_check_dsm(adev->handle, &pci_acpi_dsm_guid, 5, 92 + 1ULL << EDR_PORT_LOCATE_DSM)) 93 + return pci_dev_get(pdev); 94 + 95 + obj = acpi_evaluate_dsm(adev->handle, &pci_acpi_dsm_guid, 5, 96 + EDR_PORT_LOCATE_DSM, NULL); 97 + if (!obj) 98 + return pci_dev_get(pdev); 99 + 100 + if (obj->type != ACPI_TYPE_INTEGER) { 101 + ACPI_FREE(obj); 102 + pci_err(pdev, FW_BUG "Locate Port _DSM returned non integer\n"); 103 + return NULL; 104 + } 105 + 106 + /* 107 + * Firmware returns DPC port BDF details in following format: 108 + * 15:8 = bus 109 + * 7:3 = device 110 + * 2:0 = function 111 + */ 112 + port = obj->integer.value; 113 + 114 + ACPI_FREE(obj); 115 + 116 + return pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus), 117 + PCI_BUS_NUM(port), port & 0xff); 118 + } 119 + 120 + /* 121 + * _OST wrapper function to let firmware know the status of EDR event 122 + * @pdev : Device used to send _OST 123 + * @edev : Device which experienced EDR event 124 + * @status : Status of EDR event 125 + */ 126 + static int acpi_send_edr_status(struct pci_dev *pdev, struct pci_dev *edev, 127 + u16 status) 128 + { 129 + struct acpi_device *adev = ACPI_COMPANION(&pdev->dev); 130 + u32 ost_status; 131 + 132 + pci_dbg(pdev, "Status for %s: %#x\n", pci_name(edev), status); 133 + 134 + ost_status = PCI_DEVID(edev->bus->number, edev->devfn) << 16; 135 + ost_status |= status; 136 + 137 + status = acpi_evaluate_ost(adev->handle, ACPI_NOTIFY_DISCONNECT_RECOVER, 138 + ost_status, NULL); 139 + if (ACPI_FAILURE(status)) 140 + return -EINVAL; 141 + 142 + return 0; 143 + } 144 + 145 + static void edr_handle_event(acpi_handle handle, u32 event, void *data) 146 + { 147 + struct pci_dev *pdev = data, *edev; 148 + pci_ers_result_t estate = PCI_ERS_RESULT_DISCONNECT; 149 + u16 status; 150 + 151 + pci_info(pdev, "ACPI event %#x received\n", event); 152 + 153 + if (event != ACPI_NOTIFY_DISCONNECT_RECOVER) 154 + return; 155 + 156 + /* Locate the port which issued EDR event */ 157 + edev = 
acpi_dpc_port_get(pdev); 158 + if (!edev) { 159 + pci_err(pdev, "Firmware failed to locate DPC port\n"); 160 + return; 161 + } 162 + 163 + pci_dbg(pdev, "Reported EDR dev: %s\n", pci_name(edev)); 164 + 165 + /* If port does not support DPC, just send the OST */ 166 + if (!edev->dpc_cap) { 167 + pci_err(edev, FW_BUG "This device doesn't support DPC\n"); 168 + goto send_ost; 169 + } 170 + 171 + /* Check if there is a valid DPC trigger */ 172 + pci_read_config_word(edev, edev->dpc_cap + PCI_EXP_DPC_STATUS, &status); 173 + if (!(status & PCI_EXP_DPC_STATUS_TRIGGER)) { 174 + pci_err(edev, "Invalid DPC trigger %#010x\n", status); 175 + goto send_ost; 176 + } 177 + 178 + dpc_process_error(edev); 179 + pci_aer_raw_clear_status(edev); 180 + 181 + /* 182 + * Irrespective of whether the DPC event is triggered by ERR_FATAL 183 + * or ERR_NONFATAL, since the link is already down, use the FATAL 184 + * error recovery path for both cases. 185 + */ 186 + estate = pcie_do_recovery(edev, pci_channel_io_frozen, dpc_reset_link); 187 + 188 + send_ost: 189 + 190 + /* 191 + * If recovery is successful, send _OST(0xF, BDF << 16 | 0x80) 192 + * to firmware. If not successful, send _OST(0xF, BDF << 16 | 0x81). 
193 + */ 194 + if (estate == PCI_ERS_RESULT_RECOVERED) { 195 + pci_dbg(edev, "DPC port successfully recovered\n"); 196 + acpi_send_edr_status(pdev, edev, EDR_OST_SUCCESS); 197 + } else { 198 + pci_dbg(edev, "DPC port recovery failed\n"); 199 + acpi_send_edr_status(pdev, edev, EDR_OST_FAILED); 200 + } 201 + 202 + pci_dev_put(edev); 203 + } 204 + 205 + void pci_acpi_add_edr_notifier(struct pci_dev *pdev) 206 + { 207 + struct acpi_device *adev = ACPI_COMPANION(&pdev->dev); 208 + acpi_status status; 209 + 210 + if (!adev) { 211 + pci_dbg(pdev, "No valid ACPI node, skipping EDR init\n"); 212 + return; 213 + } 214 + 215 + status = acpi_install_notify_handler(adev->handle, ACPI_SYSTEM_NOTIFY, 216 + edr_handle_event, pdev); 217 + if (ACPI_FAILURE(status)) { 218 + pci_err(pdev, "Failed to install notify handler\n"); 219 + return; 220 + } 221 + 222 + if (acpi_enable_dpc(pdev)) 223 + acpi_remove_notify_handler(adev->handle, ACPI_SYSTEM_NOTIFY, 224 + edr_handle_event); 225 + else 226 + pci_dbg(pdev, "Notify handler installed\n"); 227 + } 228 + 229 + void pci_acpi_remove_edr_notifier(struct pci_dev *pdev) 230 + { 231 + struct acpi_device *adev = ACPI_COMPANION(&pdev->dev); 232 + 233 + if (!adev) 234 + return; 235 + 236 + acpi_remove_notify_handler(adev->handle, ACPI_SYSTEM_NOTIFY, 237 + edr_handle_event); 238 + pci_dbg(pdev, "Notify handler removed\n"); 239 + }
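The Locate Port _DSM and the _OST completion both pack a BDF into an integer, using the layouts documented in the comments above (bus 15:8, device 7:3, function 2:0; _OST status is BDF << 16 | 0x80 or 0x81). A standalone model of both encodings (helper names are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* EDR_PORT_LOCATE_DSM return format: 15:8 = bus, 7:3 = device,
 * 2:0 = function -- i.e. bus in the high byte, devfn in the low byte. */
unsigned int edr_port_bus(uint16_t port)
{
	return (port >> 8) & 0xff;	/* like PCI_BUS_NUM() */
}

unsigned int edr_port_devfn(uint16_t port)
{
	return port & 0xff;
}

/* _OST status word: BDF in bits 31:16, EDR result code in the low bits
 * (0x80 = EDR_OST_SUCCESS, 0x81 = EDR_OST_FAILED). */
uint32_t edr_ost_status(uint8_t bus, uint8_t devfn, uint16_t result)
{
	uint32_t devid = ((uint32_t)bus << 8) | devfn;	/* like PCI_DEVID() */
	return (devid << 16) | result;
}
```

In edr_handle_event() the decoded bus/devfn feed `pci_get_domain_bus_and_slot()`, and the encoded status word is handed to `acpi_evaluate_ost()`.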
+15 -51
drivers/pci/pcie/err.c
··· 146 146 return 0; 147 147 } 148 148 149 - /** 150 - * default_reset_link - default reset function 151 - * @dev: pointer to pci_dev data structure 152 - * 153 - * Invoked when performing link reset on a Downstream Port or a 154 - * Root Port with no aer driver. 155 - */ 156 - static pci_ers_result_t default_reset_link(struct pci_dev *dev) 157 - { 158 - int rc; 159 - 160 - rc = pci_bus_error_reset(dev); 161 - pci_printk(KERN_DEBUG, dev, "downstream link has been reset\n"); 162 - return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; 163 - } 164 - 165 - static pci_ers_result_t reset_link(struct pci_dev *dev, u32 service) 166 - { 167 - pci_ers_result_t status; 168 - struct pcie_port_service_driver *driver = NULL; 169 - 170 - driver = pcie_port_find_service(dev, service); 171 - if (driver && driver->reset_link) { 172 - status = driver->reset_link(dev); 173 - } else if (pcie_downstream_port(dev)) { 174 - status = default_reset_link(dev); 175 - } else { 176 - pci_printk(KERN_DEBUG, dev, "no link-reset support at upstream device %s\n", 177 - pci_name(dev)); 178 - return PCI_ERS_RESULT_DISCONNECT; 179 - } 180 - 181 - if (status != PCI_ERS_RESULT_RECOVERED) { 182 - pci_printk(KERN_DEBUG, dev, "link reset at upstream device %s failed\n", 183 - pci_name(dev)); 184 - return PCI_ERS_RESULT_DISCONNECT; 185 - } 186 - 187 - return status; 188 - } 189 - 190 - void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state, 191 - u32 service) 149 + pci_ers_result_t pcie_do_recovery(struct pci_dev *dev, 150 + enum pci_channel_state state, 151 + pci_ers_result_t (*reset_link)(struct pci_dev *pdev)) 192 152 { 193 153 pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER; 194 154 struct pci_bus *bus; ··· 163 203 bus = dev->subordinate; 164 204 165 205 pci_dbg(dev, "broadcast error_detected message\n"); 166 - if (state == pci_channel_io_frozen) 206 + if (state == pci_channel_io_frozen) { 167 207 pci_walk_bus(bus, report_frozen_detected, &status); 168 - else 208 + 
status = reset_link(dev); 209 + if (status != PCI_ERS_RESULT_RECOVERED) { 210 + pci_warn(dev, "link reset failed\n"); 211 + goto failed; 212 + } 213 + } else { 169 214 pci_walk_bus(bus, report_normal_detected, &status); 170 - 171 - if (state == pci_channel_io_frozen && 172 - reset_link(dev, service) != PCI_ERS_RESULT_RECOVERED) 173 - goto failed; 215 + } 174 216 175 217 if (status == PCI_ERS_RESULT_CAN_RECOVER) { 176 218 status = PCI_ERS_RESULT_RECOVERED; ··· 198 236 pci_walk_bus(bus, report_resume, &status); 199 237 200 238 pci_aer_clear_device_status(dev); 201 - pci_cleanup_aer_uncorrect_error_status(dev); 239 + pci_aer_clear_nonfatal_status(dev); 202 240 pci_info(dev, "device recovery successful\n"); 203 - return; 241 + return status; 204 242 205 243 failed: 206 244 pci_uevent_ers(dev, PCI_ERS_RESULT_DISCONNECT); 207 245 208 246 /* TODO: Should kernel panic here? */ 209 247 pci_info(dev, "device recovery failed\n"); 248 + 249 + return status; 210 250 }
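The err.c change replaces the port-service lookup (`pcie_port_find_service()` plus a `reset_link` method on the service driver) with a callback passed straight into `pcie_do_recovery()`, which now also returns the recovery result so EDR can report it to firmware. A toy model of that control flow, with simplified stand-ins for `pci_ers_result_t` and `struct pci_dev`:

```c
#include <assert.h>

/* Simplified stand-ins for pci_ers_result_t and struct pci_dev. */
enum ers_result { ERS_RECOVERED, ERS_DISCONNECT };
struct fake_dev { int link_was_reset; };

/* Shape of the refactored pcie_do_recovery(): on a frozen channel the
 * caller-supplied reset_link callback runs, and a failed reset aborts
 * recovery -- no service-driver lookup at recovery time anymore. */
enum ers_result do_recovery(struct fake_dev *dev, int frozen,
			    enum ers_result (*reset_link)(struct fake_dev *))
{
	if (frozen && reset_link(dev) != ERS_RECOVERED)
		return ERS_DISCONNECT;
	return ERS_RECOVERED;
}

enum ers_result stub_reset(struct fake_dev *dev)
{
	dev->link_was_reset = 1;	/* pretend the bus reset worked */
	return ERS_RECOVERED;
}
```

DPC and EDR both pass `dpc_reset_link` here, which is why the `reset_link` member and `pcie_port_find_service()` could be deleted from portdrv below.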
-5
drivers/pci/pcie/portdrv.h
··· 92 92 /* Device driver may resume normal operations */ 93 93 void (*error_resume)(struct pci_dev *dev); 94 94 95 - /* Link Reset Capability - AER service driver specific */ 96 - pci_ers_result_t (*reset_link)(struct pci_dev *dev); 97 - 98 95 int port_type; /* Type of the port this driver can handle */ 99 96 u32 service; /* Port service this device represents */ 100 97 ··· 158 161 } 159 162 #endif 160 163 161 - struct pcie_port_service_driver *pcie_port_find_service(struct pci_dev *dev, 162 - u32 service); 163 164 struct device *pcie_port_find_device(struct pci_dev *dev, u32 service); 164 165 #endif /* _PORTDRV_H_ */
-21
drivers/pci/pcie/portdrv_core.c
··· 459 459 } 460 460 461 461 /** 462 - * pcie_port_find_service - find the service driver 463 - * @dev: PCI Express port the service is associated with 464 - * @service: Service to find 465 - * 466 - * Find PCI Express port service driver associated with given service 467 - */ 468 - struct pcie_port_service_driver *pcie_port_find_service(struct pci_dev *dev, 469 - u32 service) 470 - { 471 - struct pcie_port_service_driver *drv; 472 - struct portdrv_service_data pdrvs; 473 - 474 - pdrvs.drv = NULL; 475 - pdrvs.service = service; 476 - device_for_each_child(&dev->dev, &pdrvs, find_service_iter); 477 - 478 - drv = pdrvs.drv; 479 - return drv; 480 - } 481 - 482 - /** 483 462 * pcie_port_find_device - find the struct device 484 463 * @dev: PCI Express port the service is associated with 485 464 * @service: For the service to find
+42
drivers/pci/probe.c
··· 598 598 bridge->native_shpc_hotplug = 1; 599 599 bridge->native_pme = 1; 600 600 bridge->native_ltr = 1; 601 + bridge->native_dpc = 1; 601 602 } 602 603 603 604 struct pci_host_bridge *pci_alloc_host_bridge(size_t priv) ··· 641 640 } 642 641 EXPORT_SYMBOL(pci_free_host_bridge); 643 642 643 + /* Indexed by PCI_X_SSTATUS_FREQ (secondary bus mode and frequency) */ 644 644 static const unsigned char pcix_bus_speed[] = { 645 645 PCI_SPEED_UNKNOWN, /* 0 */ 646 646 PCI_SPEED_66MHz_PCIX, /* 1 */ ··· 661 659 PCI_SPEED_133MHz_PCIX_533 /* F */ 662 660 }; 663 661 662 + /* Indexed by PCI_EXP_LNKCAP_SLS, PCI_EXP_LNKSTA_CLS */ 664 663 const unsigned char pcie_link_speed[] = { 665 664 PCI_SPEED_UNKNOWN, /* 0 */ 666 665 PCIE_SPEED_2_5GT, /* 1 */ ··· 680 677 PCI_SPEED_UNKNOWN, /* E */ 681 678 PCI_SPEED_UNKNOWN /* F */ 682 679 }; 680 + EXPORT_SYMBOL_GPL(pcie_link_speed); 681 + 682 + const char *pci_speed_string(enum pci_bus_speed speed) 683 + { 684 + /* Indexed by the pci_bus_speed enum */ 685 + static const char *speed_strings[] = { 686 + "33 MHz PCI", /* 0x00 */ 687 + "66 MHz PCI", /* 0x01 */ 688 + "66 MHz PCI-X", /* 0x02 */ 689 + "100 MHz PCI-X", /* 0x03 */ 690 + "133 MHz PCI-X", /* 0x04 */ 691 + NULL, /* 0x05 */ 692 + NULL, /* 0x06 */ 693 + NULL, /* 0x07 */ 694 + NULL, /* 0x08 */ 695 + "66 MHz PCI-X 266", /* 0x09 */ 696 + "100 MHz PCI-X 266", /* 0x0a */ 697 + "133 MHz PCI-X 266", /* 0x0b */ 698 + "Unknown AGP", /* 0x0c */ 699 + "1x AGP", /* 0x0d */ 700 + "2x AGP", /* 0x0e */ 701 + "4x AGP", /* 0x0f */ 702 + "8x AGP", /* 0x10 */ 703 + "66 MHz PCI-X 533", /* 0x11 */ 704 + "100 MHz PCI-X 533", /* 0x12 */ 705 + "133 MHz PCI-X 533", /* 0x13 */ 706 + "2.5 GT/s PCIe", /* 0x14 */ 707 + "5.0 GT/s PCIe", /* 0x15 */ 708 + "8.0 GT/s PCIe", /* 0x16 */ 709 + "16.0 GT/s PCIe", /* 0x17 */ 710 + "32.0 GT/s PCIe", /* 0x18 */ 711 + }; 712 + 713 + if (speed < ARRAY_SIZE(speed_strings)) 714 + return speed_strings[speed]; 715 + return "Unknown"; 716 + } 717 + EXPORT_SYMBOL_GPL(pci_speed_string); 
683 718 684 719 void pcie_update_link_speed(struct pci_bus *bus, u16 linksta) 685 720 { ··· 2370 2329 pci_enable_acs(dev); /* Enable ACS P2P upstream forwarding */ 2371 2330 pci_ptm_init(dev); /* Precision Time Measurement */ 2372 2331 pci_aer_init(dev); /* Advanced Error Reporting */ 2332 + pci_dpc_init(dev); /* Downstream Port Containment */ 2373 2333 2374 2334 pcie_report_downtraining(dev); 2375 2335
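The `pci_speed_string()` helper added above is a bounds-checked table lookup indexed by the `pci_bus_speed` enum, with NULL holes for reserved encodings. A userspace replica covering a subset of the table (designated initializers leave the reserved slots NULL, and this sketch also maps a NULL hole to "Unknown"):

```c
#include <assert.h>
#include <string.h>

/* Userspace replica of the pci_speed_string() lookup in probe.c above;
 * only some of the 0x00..0x18 entries are reproduced here. */
const char *speed_string(unsigned int speed)
{
	static const char *strings[0x19] = {
		[0x00] = "33 MHz PCI",
		[0x04] = "133 MHz PCI-X",
		[0x14] = "2.5 GT/s PCIe",
		[0x15] = "5.0 GT/s PCIe",
		[0x16] = "8.0 GT/s PCIe",
		[0x17] = "16.0 GT/s PCIe",
		[0x18] = "32.0 GT/s PCIe",
	};

	if (speed < sizeof(strings) / sizeof(strings[0]) && strings[speed])
		return strings[speed];
	return "Unknown";
}
```

Centralizing the table in probe.c is what lets slot.c (below) drop its private copy and just call the shared helper.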
+113 -7
drivers/pci/quirks.c
··· 1970 1970 /* 1971 1971 * IO-APIC1 on 6300ESB generates boot interrupts, see Intel order no 1972 1972 * 300641-004US, section 5.7.3. 1973 + * 1974 + * Core IO on Xeon E5 1600/2600/4600, see Intel order no 326509-003. 1975 + * Core IO on Xeon E5 v2, see Intel order no 329188-003. 1976 + * Core IO on Xeon E7 v2, see Intel order no 329595-002. 1977 + * Core IO on Xeon E5 v3, see Intel order no 330784-003. 1978 + * Core IO on Xeon E7 v3, see Intel order no 332315-001US. 1979 + * Core IO on Xeon E5 v4, see Intel order no 333810-002US. 1980 + * Core IO on Xeon E7 v4, see Intel order no 332315-001US. 1981 + * Core IO on Xeon D-1500, see Intel order no 332051-001. 1982 + * Core IO on Xeon Scalable, see Intel order no 610950. 1973 1983 */ 1974 - #define INTEL_6300_IOAPIC_ABAR 0x40 1984 + #define INTEL_6300_IOAPIC_ABAR 0x40 /* Bus 0, Dev 29, Func 5 */ 1975 1985 #define INTEL_6300_DISABLE_BOOT_IRQ (1<<14) 1986 + 1987 + #define INTEL_CIPINTRC_CFG_OFFSET 0x14C /* Bus 0, Dev 5, Func 0 */ 1988 + #define INTEL_CIPINTRC_DIS_INTX_ICH (1<<25) 1976 1989 1977 1990 static void quirk_disable_intel_boot_interrupt(struct pci_dev *dev) 1978 1991 { 1979 1992 u16 pci_config_word; 1993 + u32 pci_config_dword; 1980 1994 1981 1995 if (noioapicquirk) 1982 1996 return; 1983 1997 1984 - pci_read_config_word(dev, INTEL_6300_IOAPIC_ABAR, &pci_config_word); 1985 - pci_config_word |= INTEL_6300_DISABLE_BOOT_IRQ; 1986 - pci_write_config_word(dev, INTEL_6300_IOAPIC_ABAR, pci_config_word); 1987 - 1998 + switch (dev->device) { 1999 + case PCI_DEVICE_ID_INTEL_ESB_10: 2000 + pci_read_config_word(dev, INTEL_6300_IOAPIC_ABAR, 2001 + &pci_config_word); 2002 + pci_config_word |= INTEL_6300_DISABLE_BOOT_IRQ; 2003 + pci_write_config_word(dev, INTEL_6300_IOAPIC_ABAR, 2004 + pci_config_word); 2005 + break; 2006 + case 0x3c28: /* Xeon E5 1600/2600/4600 */ 2007 + case 0x0e28: /* Xeon E5/E7 V2 */ 2008 + case 0x2f28: /* Xeon E5/E7 V3,V4 */ 2009 + case 0x6f28: /* Xeon D-1500 */ 2010 + case 0x2034: /* Xeon Scalable 
Family */ 2011 + pci_read_config_dword(dev, INTEL_CIPINTRC_CFG_OFFSET, 2012 + &pci_config_dword); 2013 + pci_config_dword |= INTEL_CIPINTRC_DIS_INTX_ICH; 2014 + pci_write_config_dword(dev, INTEL_CIPINTRC_CFG_OFFSET, 2015 + pci_config_dword); 2016 + break; 2017 + default: 2018 + return; 2019 + } 1988 2020 pci_info(dev, "disabled boot interrupts on device [%04x:%04x]\n", 1989 2021 dev->vendor, dev->device); 1990 2022 } 1991 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt); 1992 - DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, quirk_disable_intel_boot_interrupt); 2023 + /* 2024 + * Device 29 Func 5 Device IDs of IO-APIC 2025 + * containing ABAR—APIC1 Alternate Base Address Register 2026 + */ 2027 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, 2028 + quirk_disable_intel_boot_interrupt); 2029 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_10, 2030 + quirk_disable_intel_boot_interrupt); 2031 + 2032 + /* 2033 + * Device 5 Func 0 Device IDs of Core IO modules/hubs 2034 + * containing Coherent Interface Protocol Interrupt Control 2035 + * 2036 + * Device IDs obtained from volume 2 datasheets of commented 2037 + * families above. 
2038 + */ 2039 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x3c28, 2040 + quirk_disable_intel_boot_interrupt); 2041 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0e28, 2042 + quirk_disable_intel_boot_interrupt); 2043 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2f28, 2044 + quirk_disable_intel_boot_interrupt); 2045 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x6f28, 2046 + quirk_disable_intel_boot_interrupt); 2047 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2034, 2048 + quirk_disable_intel_boot_interrupt); 2049 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x3c28, 2050 + quirk_disable_intel_boot_interrupt); 2051 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x0e28, 2052 + quirk_disable_intel_boot_interrupt); 2053 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x2f28, 2054 + quirk_disable_intel_boot_interrupt); 2055 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x6f28, 2056 + quirk_disable_intel_boot_interrupt); 2057 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, 0x2034, 2058 + quirk_disable_intel_boot_interrupt); 1993 2059 1994 2060 /* Disable boot interrupts on HT-1000 */ 1995 2061 #define BC_HT1000_FEATURE_REG 0x64 ··· 4466 4400 } 4467 4401 4468 4402 /* 4403 + * Many Zhaoxin Root Ports and Switch Downstream Ports have no ACS capability. 4404 + * But the implementation could block peer-to-peer transactions between them 4405 + * and provide ACS-like functionality. 4406 + */ 4407 + static int pci_quirk_zhaoxin_pcie_ports_acs(struct pci_dev *dev, u16 acs_flags) 4408 + { 4409 + if (!pci_is_pcie(dev) || 4410 + ((pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) && 4411 + (pci_pcie_type(dev) != PCI_EXP_TYPE_DOWNSTREAM))) 4412 + return -ENOTTY; 4413 + 4414 + switch (dev->device) { 4415 + case 0x0710 ... 0x071e: 4416 + case 0x0721: 4417 + case 0x0723 ... 
0x0732: 4418 + return pci_acs_ctrl_enabled(acs_flags, 4419 + PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); 4420 + } 4421 + 4422 + return false; 4423 + } 4424 + 4425 + /* 4469 4426 * Many Intel PCH Root Ports do provide ACS-like features to disable peer 4470 4427 * transactions and validate bus numbers in requests, but do not provide an 4471 4428 * actual PCIe ACS capability. This is the list of device IDs known to fall ··· 4790 4701 { PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs }, 4791 4702 /* Amazon Annapurna Labs */ 4792 4703 { PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, pci_quirk_al_acs }, 4704 + /* Zhaoxin multi-function devices */ 4705 + { PCI_VENDOR_ID_ZHAOXIN, 0x3038, pci_quirk_mf_endpoint_acs }, 4706 + { PCI_VENDOR_ID_ZHAOXIN, 0x3104, pci_quirk_mf_endpoint_acs }, 4707 + { PCI_VENDOR_ID_ZHAOXIN, 0x9083, pci_quirk_mf_endpoint_acs }, 4708 + /* Zhaoxin Root/Downstream Ports */ 4709 + { PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs }, 4793 4710 { 0 } 4794 4711 }; 4795 4712 ··· 5556 5461 DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1, 5557 5462 PCI_CLASS_DISPLAY_VGA, 8, 5558 5463 quirk_reset_lenovo_thinkpad_p50_nvgpu); 5464 + 5465 + /* 5466 + * Device [1b21:2142] 5467 + * When in D0, PME# doesn't get asserted when plugging USB 3.0 device. 5468 + */ 5469 + static void pci_fixup_no_d0_pme(struct pci_dev *dev) 5470 + { 5471 + pci_info(dev, "PME# does not work under D0, disabling it\n"); 5472 + dev->pme_support &= ~(PCI_PM_CAP_PME_D0 >> PCI_PM_CAP_PME_SHIFT); 5473 + } 5474 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ASMEDIA, 0x2142, pci_fixup_no_d0_pme);
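Both branches of `quirk_disable_intel_boot_interrupt()` above are config-space read-modify-writes that set a single disable bit: bit 14 of the 16-bit 6300ESB ABAR register, or bit 25 of the 32-bit Xeon CIPINTRC register. Modeled outside the kernel (offsets and bits copied from the quirk):

```c
#include <assert.h>
#include <stdint.h>

/* Offsets and bits copied from the quirk above. */
#define INTEL_6300_IOAPIC_ABAR       0x40	/* 16-bit, Dev 29 Func 5 */
#define INTEL_6300_DISABLE_BOOT_IRQ  (1u << 14)
#define INTEL_CIPINTRC_CFG_OFFSET    0x14c	/* 32-bit, Dev 5 Func 0 */
#define INTEL_CIPINTRC_DIS_INTX_ICH  (1u << 25)

/* 6300ESB path: set the boot-interrupt disable bit in the ABAR word. */
uint16_t esb_disable_boot_irq(uint16_t abar)
{
	return abar | INTEL_6300_DISABLE_BOOT_IRQ;
}

/* Xeon Core IO path: set the INTx-to-ICH disable bit in CIPINTRC. */
uint32_t xeon_disable_boot_irq(uint32_t cipintrc)
{
	return cipintrc | INTEL_CIPINTRC_DIS_INTX_ICH;
}
```

The quirk dispatches between the two on `dev->device`, which is why the new Xeon device IDs each get their own FINAL and RESUME fixup declarations.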
-17
drivers/pci/rom.c
··· 195 195 pci_disable_rom(pdev); 196 196 } 197 197 EXPORT_SYMBOL(pci_unmap_rom); 198 - 199 - /** 200 - * pci_platform_rom - provides a pointer to any ROM image provided by the 201 - * platform 202 - * @pdev: pointer to pci device struct 203 - * @size: pointer to receive size of pci window over ROM 204 - */ 205 - void __iomem *pci_platform_rom(struct pci_dev *pdev, size_t *size) 206 - { 207 - if (pdev->rom && pdev->romlen) { 208 - *size = pdev->romlen; 209 - return phys_to_virt((phys_addr_t)pdev->rom); 210 - } 211 - 212 - return NULL; 213 - } 214 - EXPORT_SYMBOL(pci_platform_rom);
+22 -12
drivers/pci/setup-bus.c
··· 846 846 * Per spec, I/O windows are 4K-aligned, but some bridges have 847 847 * an extension to support 1K alignment. 848 848 */ 849 - if (bus->self->io_window_1k) 849 + if (bus->self && bus->self->io_window_1k) 850 850 align = PCI_P2P_DEFAULT_IO_ALIGN_1K; 851 851 else 852 852 align = PCI_P2P_DEFAULT_IO_ALIGN; ··· 920 920 calculate_iosize(size, min_size, size1, add_size, children_add_size, 921 921 resource_size(b_res), min_align); 922 922 if (!size0 && !size1) { 923 - if (b_res->start || b_res->end) 923 + if (bus->self && (b_res->start || b_res->end)) 924 924 pci_info(bus->self, "disabling bridge window %pR to %pR (unused)\n", 925 925 b_res, &bus->busn_res); 926 926 b_res->flags = 0; ··· 930 930 b_res->start = min_align; 931 931 b_res->end = b_res->start + size0 - 1; 932 932 b_res->flags |= IORESOURCE_STARTALIGN; 933 - if (size1 > size0 && realloc_head) { 933 + if (bus->self && size1 > size0 && realloc_head) { 934 934 add_to_list(realloc_head, bus->self, b_res, size1-size0, 935 935 min_align); 936 936 pci_info(bus->self, "bridge window %pR to %pR add_size %llx\n", ··· 1073 1073 calculate_memsize(size, min_size, add_size, children_add_size, 1074 1074 resource_size(b_res), add_align); 1075 1075 if (!size0 && !size1) { 1076 - if (b_res->start || b_res->end) 1076 + if (bus->self && (b_res->start || b_res->end)) 1077 1077 pci_info(bus->self, "disabling bridge window %pR to %pR (unused)\n", 1078 1078 b_res, &bus->busn_res); 1079 1079 b_res->flags = 0; ··· 1082 1082 b_res->start = min_align; 1083 1083 b_res->end = size0 + min_align - 1; 1084 1084 b_res->flags |= IORESOURCE_STARTALIGN; 1085 - if (size1 > size0 && realloc_head) { 1085 + if (bus->self && size1 > size0 && realloc_head) { 1086 1086 add_to_list(realloc_head, bus->self, b_res, size1-size0, add_align); 1087 1087 pci_info(bus->self, "bridge window %pR to %pR add_size %llx add_align %llx\n", 1088 1088 b_res, &bus->busn_res, ··· 1196 1196 unsigned long mask, prefmask, type2 = 0, type3 = 0; 1197 1197 
resource_size_t additional_io_size = 0, additional_mmio_size = 0, 1198 1198 additional_mmio_pref_size = 0; 1199 - struct resource *b_res; 1200 - int ret; 1199 + struct resource *pref; 1200 + struct pci_host_bridge *host; 1201 + int hdr_type, i, ret; 1201 1202 1202 1203 list_for_each_entry(dev, &bus->devices, bus_list) { 1203 1204 struct pci_bus *b = dev->subordinate; ··· 1218 1217 } 1219 1218 1220 1219 /* The root bus? */ 1221 - if (pci_is_root_bus(bus)) 1222 - return; 1220 + if (pci_is_root_bus(bus)) { 1221 + host = to_pci_host_bridge(bus->bridge); 1222 + if (!host->size_windows) 1223 + return; 1224 + pci_bus_for_each_resource(bus, pref, i) 1225 + if (pref && (pref->flags & IORESOURCE_PREFETCH)) 1226 + break; 1227 + hdr_type = -1; /* Intentionally invalid - not a PCI device. */ 1228 + } else { 1229 + pref = &bus->self->resource[PCI_BRIDGE_RESOURCES + 2]; 1230 + hdr_type = bus->self->hdr_type; 1231 + } 1223 1232 1224 - switch (bus->self->hdr_type) { 1233 + switch (hdr_type) { 1225 1234 case PCI_HEADER_TYPE_CARDBUS: 1226 1235 /* Don't size CardBuses yet */ 1227 1236 break; ··· 1253 1242 * the size required to put all 64-bit prefetchable 1254 1243 * resources in it. 1255 1244 */ 1256 - b_res = &bus->self->resource[PCI_BRIDGE_RESOURCES]; 1257 1245 mask = IORESOURCE_MEM; 1258 1246 prefmask = IORESOURCE_MEM | IORESOURCE_PREFETCH; 1259 - if (b_res[2].flags & IORESOURCE_MEM_64) { 1247 + if (pref && (pref->flags & IORESOURCE_MEM_64)) { 1260 1248 prefmask |= IORESOURCE_MEM_64; 1261 1249 ret = pbus_size_mem(bus, prefmask, prefmask, 1262 1250 prefmask, prefmask,
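A pattern running through the setup-bus.c hunks is guarding `bus->self` dereferences, since root buses (now sized when the host bridge sets `size_windows`) have no upstream bridge. The I/O window alignment choice can be sketched as follows, where 0x400 and 0x1000 are the kernel's PCI_P2P_DEFAULT_IO_ALIGN_1K and PCI_P2P_DEFAULT_IO_ALIGN values:

```c
#include <assert.h>
#include <stddef.h>

struct fake_bridge { int io_window_1k; };

/* As in pbus_size_io() after the bus->self check added above: a NULL
 * bridge (root bus) gets the spec's 4K granularity; only a bridge that
 * advertises the 1K extension may use 1K-aligned I/O windows. */
unsigned int io_window_align(const struct fake_bridge *self)
{
	if (self && self->io_window_1k)
		return 0x400;	/* 1K extension */
	return 0x1000;		/* 4K per spec */
}
```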
+1 -37
drivers/pci/slot.c
··· 49 49 slot->number); 50 50 } 51 51 52 - /* these strings match up with the values in pci_bus_speed */ 53 - static const char *pci_bus_speed_strings[] = { 54 - "33 MHz PCI", /* 0x00 */ 55 - "66 MHz PCI", /* 0x01 */ 56 - "66 MHz PCI-X", /* 0x02 */ 57 - "100 MHz PCI-X", /* 0x03 */ 58 - "133 MHz PCI-X", /* 0x04 */ 59 - NULL, /* 0x05 */ 60 - NULL, /* 0x06 */ 61 - NULL, /* 0x07 */ 62 - NULL, /* 0x08 */ 63 - "66 MHz PCI-X 266", /* 0x09 */ 64 - "100 MHz PCI-X 266", /* 0x0a */ 65 - "133 MHz PCI-X 266", /* 0x0b */ 66 - "Unknown AGP", /* 0x0c */ 67 - "1x AGP", /* 0x0d */ 68 - "2x AGP", /* 0x0e */ 69 - "4x AGP", /* 0x0f */ 70 - "8x AGP", /* 0x10 */ 71 - "66 MHz PCI-X 533", /* 0x11 */ 72 - "100 MHz PCI-X 533", /* 0x12 */ 73 - "133 MHz PCI-X 533", /* 0x13 */ 74 - "2.5 GT/s PCIe", /* 0x14 */ 75 - "5.0 GT/s PCIe", /* 0x15 */ 76 - "8.0 GT/s PCIe", /* 0x16 */ 77 - "16.0 GT/s PCIe", /* 0x17 */ 78 - "32.0 GT/s PCIe", /* 0x18 */ 79 - }; 80 - 81 52 static ssize_t bus_speed_read(enum pci_bus_speed speed, char *buf) 82 53 { 83 - const char *speed_string; 84 - 85 - if (speed < ARRAY_SIZE(pci_bus_speed_strings)) 86 - speed_string = pci_bus_speed_strings[speed]; 87 - else 88 - speed_string = "Unknown"; 89 - 90 - return sprintf(buf, "%s\n", speed_string); 54 + return sprintf(buf, "%s\n", pci_speed_string(speed)); 91 55 } 92 56 93 57 static ssize_t max_speed_read_file(struct pci_slot *slot, char *buf)
+22
drivers/phy/amlogic/Kconfig
··· 59 59 Enable this to support the Meson USB3 + PCIE Combo PHY found 60 60 in Meson G12A SoCs. 61 61 If unsure, say N. 62 + 63 + config PHY_MESON_AXG_PCIE 64 + tristate "Meson AXG PCIE PHY driver" 65 + default ARCH_MESON 66 + depends on OF && (ARCH_MESON || COMPILE_TEST) 67 + select GENERIC_PHY 68 + select REGMAP_MMIO 69 + help 70 + Enable this to support the Meson MIPI + PCIE PHY found 71 + in Meson AXG SoCs. 72 + If unsure, say N. 73 + 74 + config PHY_MESON_AXG_MIPI_PCIE_ANALOG 75 + tristate "Meson AXG MIPI + PCIE analog PHY driver" 76 + default ARCH_MESON 77 + depends on OF && (ARCH_MESON || COMPILE_TEST) 78 + select GENERIC_PHY 79 + select REGMAP_MMIO 80 + help 81 + Enable this to support the Meson MIPI + PCIE analog PHY 82 + found in Meson AXG SoCs. 83 + If unsure, say N.
+7 -5
drivers/phy/amlogic/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 - obj-$(CONFIG_PHY_MESON8B_USB2) += phy-meson8b-usb2.o 3 - obj-$(CONFIG_PHY_MESON_GXL_USB2) += phy-meson-gxl-usb2.o 4 - obj-$(CONFIG_PHY_MESON_G12A_USB2) += phy-meson-g12a-usb2.o 5 - obj-$(CONFIG_PHY_MESON_GXL_USB3) += phy-meson-gxl-usb3.o 6 - obj-$(CONFIG_PHY_MESON_G12A_USB3_PCIE) += phy-meson-g12a-usb3-pcie.o 2 + obj-$(CONFIG_PHY_MESON8B_USB2) += phy-meson8b-usb2.o 3 + obj-$(CONFIG_PHY_MESON_GXL_USB2) += phy-meson-gxl-usb2.o 4 + obj-$(CONFIG_PHY_MESON_G12A_USB2) += phy-meson-g12a-usb2.o 5 + obj-$(CONFIG_PHY_MESON_GXL_USB3) += phy-meson-gxl-usb3.o 6 + obj-$(CONFIG_PHY_MESON_G12A_USB3_PCIE) += phy-meson-g12a-usb3-pcie.o 7 + obj-$(CONFIG_PHY_MESON_AXG_PCIE) += phy-meson-axg-pcie.o 8 + obj-$(CONFIG_PHY_MESON_AXG_MIPI_PCIE_ANALOG) += phy-meson-axg-mipi-pcie-analog.o
+188
drivers/phy/amlogic/phy-meson-axg-mipi-pcie-analog.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Amlogic AXG MIPI + PCIE analog PHY driver 4 + * 5 + * Copyright (C) 2019 Remi Pommarel <repk@triplefau.lt> 6 + */ 7 + #include <linux/module.h> 8 + #include <linux/phy/phy.h> 9 + #include <linux/regmap.h> 10 + #include <linux/platform_device.h> 11 + #include <dt-bindings/phy/phy.h> 12 + 13 + #define HHI_MIPI_CNTL0 0x00 14 + #define HHI_MIPI_CNTL0_COMMON_BLOCK GENMASK(31, 28) 15 + #define HHI_MIPI_CNTL0_ENABLE BIT(29) 16 + #define HHI_MIPI_CNTL0_BANDGAP BIT(26) 17 + #define HHI_MIPI_CNTL0_DECODE_TO_RTERM GENMASK(15, 12) 18 + #define HHI_MIPI_CNTL0_OUTPUT_EN BIT(3) 19 + 20 + #define HHI_MIPI_CNTL1 0x01 21 + #define HHI_MIPI_CNTL1_CH0_CML_PDR_EN BIT(12) 22 + #define HHI_MIPI_CNTL1_LP_ABILITY GENMASK(5, 4) 23 + #define HHI_MIPI_CNTL1_LP_RESISTER BIT(3) 24 + #define HHI_MIPI_CNTL1_INPUT_SETTING BIT(2) 25 + #define HHI_MIPI_CNTL1_INPUT_SEL BIT(1) 26 + #define HHI_MIPI_CNTL1_PRBS7_EN BIT(0) 27 + 28 + #define HHI_MIPI_CNTL2 0x02 29 + #define HHI_MIPI_CNTL2_CH_PU GENMASK(31, 25) 30 + #define HHI_MIPI_CNTL2_CH_CTL GENMASK(24, 19) 31 + #define HHI_MIPI_CNTL2_CH0_DIGDR_EN BIT(18) 32 + #define HHI_MIPI_CNTL2_CH_DIGDR_EN BIT(17) 33 + #define HHI_MIPI_CNTL2_LPULPS_EN BIT(16) 34 + #define HHI_MIPI_CNTL2_CH_EN(n) BIT(15 - (n)) 35 + #define HHI_MIPI_CNTL2_CH0_LP_CTL GENMASK(10, 1) 36 + 37 + struct phy_axg_mipi_pcie_analog_priv { 38 + struct phy *phy; 39 + unsigned int mode; 40 + struct regmap *regmap; 41 + }; 42 + 43 + static const struct regmap_config phy_axg_mipi_pcie_analog_regmap_conf = { 44 + .reg_bits = 8, 45 + .val_bits = 32, 46 + .reg_stride = 4, 47 + .max_register = HHI_MIPI_CNTL2, 48 + }; 49 + 50 + static int phy_axg_mipi_pcie_analog_power_on(struct phy *phy) 51 + { 52 + struct phy_axg_mipi_pcie_analog_priv *priv = phy_get_drvdata(phy); 53 + 54 + /* MIPI not supported yet */ 55 + if (priv->mode != PHY_TYPE_PCIE) 56 + return -EINVAL; 57 + 58 + regmap_update_bits(priv->regmap, HHI_MIPI_CNTL0, 59 + 
HHI_MIPI_CNTL0_BANDGAP, HHI_MIPI_CNTL0_BANDGAP); 60 + 61 + regmap_update_bits(priv->regmap, HHI_MIPI_CNTL0, 62 + HHI_MIPI_CNTL0_ENABLE, HHI_MIPI_CNTL0_ENABLE); 63 + return 0; 64 + } 65 + 66 + static int phy_axg_mipi_pcie_analog_power_off(struct phy *phy) 67 + { 68 + struct phy_axg_mipi_pcie_analog_priv *priv = phy_get_drvdata(phy); 69 + 70 + /* MIPI not supported yet */ 71 + if (priv->mode != PHY_TYPE_PCIE) 72 + return -EINVAL; 73 + 74 + regmap_update_bits(priv->regmap, HHI_MIPI_CNTL0, 75 + HHI_MIPI_CNTL0_BANDGAP, 0); 76 + regmap_update_bits(priv->regmap, HHI_MIPI_CNTL0, 77 + HHI_MIPI_CNTL0_ENABLE, 0); 78 + return 0; 79 + } 80 + 81 + static int phy_axg_mipi_pcie_analog_init(struct phy *phy) 82 + { 83 + return 0; 84 + } 85 + 86 + static int phy_axg_mipi_pcie_analog_exit(struct phy *phy) 87 + { 88 + return 0; 89 + } 90 + 91 + static const struct phy_ops phy_axg_mipi_pcie_analog_ops = { 92 + .init = phy_axg_mipi_pcie_analog_init, 93 + .exit = phy_axg_mipi_pcie_analog_exit, 94 + .power_on = phy_axg_mipi_pcie_analog_power_on, 95 + .power_off = phy_axg_mipi_pcie_analog_power_off, 96 + .owner = THIS_MODULE, 97 + }; 98 + 99 + static struct phy *phy_axg_mipi_pcie_analog_xlate(struct device *dev, 100 + struct of_phandle_args *args) 101 + { 102 + struct phy_axg_mipi_pcie_analog_priv *priv = dev_get_drvdata(dev); 103 + unsigned int mode; 104 + 105 + if (args->args_count != 1) { 106 + dev_err(dev, "invalid number of arguments\n"); 107 + return ERR_PTR(-EINVAL); 108 + } 109 + 110 + mode = args->args[0]; 111 + 112 + /* MIPI mode is not supported yet */ 113 + if (mode != PHY_TYPE_PCIE) { 114 + dev_err(dev, "invalid phy mode select argument\n"); 115 + return ERR_PTR(-EINVAL); 116 + } 117 + 118 + priv->mode = mode; 119 + return priv->phy; 120 + } 121 + 122 + static int phy_axg_mipi_pcie_analog_probe(struct platform_device *pdev) 123 + { 124 + struct phy_provider *phy; 125 + struct device *dev = &pdev->dev; 126 + struct phy_axg_mipi_pcie_analog_priv *priv; 127 + struct device_node 
*np = dev->of_node; 128 + struct regmap *map; 129 + struct resource *res; 130 + void __iomem *base; 131 + int ret; 132 + 133 + priv = devm_kmalloc(dev, sizeof(*priv), GFP_KERNEL); 134 + if (!priv) 135 + return -ENOMEM; 136 + 137 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 138 + base = devm_ioremap_resource(dev, res); 139 + if (IS_ERR(base)) { 140 + dev_err(dev, "failed to get regmap base\n"); 141 + return PTR_ERR(base); 142 + } 143 + 144 + map = devm_regmap_init_mmio(dev, base, 145 + &phy_axg_mipi_pcie_analog_regmap_conf); 146 + if (IS_ERR(map)) { 147 + dev_err(dev, "failed to get HHI regmap\n"); 148 + return PTR_ERR(map); 149 + } 150 + priv->regmap = map; 151 + 152 + priv->phy = devm_phy_create(dev, np, &phy_axg_mipi_pcie_analog_ops); 153 + if (IS_ERR(priv->phy)) { 154 + ret = PTR_ERR(priv->phy); 155 + if (ret != -EPROBE_DEFER) 156 + dev_err(dev, "failed to create PHY\n"); 157 + return ret; 158 + } 159 + 160 + phy_set_drvdata(priv->phy, priv); 161 + dev_set_drvdata(dev, priv); 162 + 163 + phy = devm_of_phy_provider_register(dev, 164 + phy_axg_mipi_pcie_analog_xlate); 165 + 166 + return PTR_ERR_OR_ZERO(phy); 167 + } 168 + 169 + static const struct of_device_id phy_axg_mipi_pcie_analog_of_match[] = { 170 + { 171 + .compatible = "amlogic,axg-mipi-pcie-analog-phy", 172 + }, 173 + { }, 174 + }; 175 + MODULE_DEVICE_TABLE(of, phy_axg_mipi_pcie_analog_of_match); 176 + 177 + static struct platform_driver phy_axg_mipi_pcie_analog_driver = { 178 + .probe = phy_axg_mipi_pcie_analog_probe, 179 + .driver = { 180 + .name = "phy-axg-mipi-pcie-analog", 181 + .of_match_table = phy_axg_mipi_pcie_analog_of_match, 182 + }, 183 + }; 184 + module_platform_driver(phy_axg_mipi_pcie_analog_driver); 185 + 186 + MODULE_AUTHOR("Remi Pommarel <repk@triplefau.lt>"); 187 + MODULE_DESCRIPTION("Amlogic AXG MIPI + PCIE analog PHY driver"); 188 + MODULE_LICENSE("GPL v2");
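The Meson PHY drivers above touch their registers exclusively through `regmap_update_bits(map, reg, mask, val)`, which is semantically a masked read-modify-write. A minimal model of that primitive, exercised with the CNTL0 bits from the analog PHY's power_on/power_off sequence:

```c
#include <assert.h>
#include <stdint.h>

/* HHI_MIPI_CNTL0 bits from the analog PHY driver above. */
#define HHI_MIPI_CNTL0_ENABLE  (1u << 29)
#define HHI_MIPI_CNTL0_BANDGAP (1u << 26)

/* Minimal model of regmap_update_bits(): replace only the bits selected
 * by mask with the corresponding bits of val. */
uint32_t update_bits(uint32_t reg, uint32_t mask, uint32_t val)
{
	return (reg & ~mask) | (val & mask);
}
```

power_on passes the bit itself as both mask and val to set BANDGAP then ENABLE; power_off passes the same masks with val 0 to clear them, leaving unrelated CNTL0 bits untouched.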
+192
drivers/phy/amlogic/phy-meson-axg-pcie.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Amlogic AXG PCIE PHY driver
+ *
+ * Copyright (C) 2020 Remi Pommarel <repk@triplefau.lt>
+ */
+#include <linux/module.h>
+#include <linux/phy/phy.h>
+#include <linux/regmap.h>
+#include <linux/reset.h>
+#include <linux/platform_device.h>
+#include <linux/bitfield.h>
+#include <dt-bindings/phy/phy.h>
+
+#define MESON_PCIE_REG0			0x00
+#define MESON_PCIE_COMMON_CLK		BIT(4)
+#define MESON_PCIE_PORT_SEL		GENMASK(3, 2)
+#define MESON_PCIE_CLK			BIT(1)
+#define MESON_PCIE_POWERDOWN		BIT(0)
+
+#define MESON_PCIE_TWO_X1		FIELD_PREP(MESON_PCIE_PORT_SEL, 0x3)
+#define MESON_PCIE_COMMON_REF_CLK	FIELD_PREP(MESON_PCIE_COMMON_CLK, 0x1)
+#define MESON_PCIE_PHY_INIT		(MESON_PCIE_TWO_X1 | \
+					 MESON_PCIE_COMMON_REF_CLK)
+#define MESON_PCIE_RESET_DELAY		500
+
+struct phy_axg_pcie_priv {
+	struct phy *phy;
+	struct phy *analog;
+	struct regmap *regmap;
+	struct reset_control *reset;
+};
+
+static const struct regmap_config phy_axg_pcie_regmap_conf = {
+	.reg_bits = 8,
+	.val_bits = 32,
+	.reg_stride = 4,
+	.max_register = MESON_PCIE_REG0,
+};
+
+static int phy_axg_pcie_power_on(struct phy *phy)
+{
+	struct phy_axg_pcie_priv *priv = phy_get_drvdata(phy);
+	int ret;
+
+	ret = phy_power_on(priv->analog);
+	if (ret != 0)
+		return ret;
+
+	regmap_update_bits(priv->regmap, MESON_PCIE_REG0,
+			   MESON_PCIE_POWERDOWN, 0);
+	return 0;
+}
+
+static int phy_axg_pcie_power_off(struct phy *phy)
+{
+	struct phy_axg_pcie_priv *priv = phy_get_drvdata(phy);
+	int ret;
+
+	ret = phy_power_off(priv->analog);
+	if (ret != 0)
+		return ret;
+
+	regmap_update_bits(priv->regmap, MESON_PCIE_REG0,
+			   MESON_PCIE_POWERDOWN, 1);
+	return 0;
+}
+
+static int phy_axg_pcie_init(struct phy *phy)
+{
+	struct phy_axg_pcie_priv *priv = phy_get_drvdata(phy);
+	int ret;
+
+	ret = phy_init(priv->analog);
+	if (ret != 0)
+		return ret;
+
+	regmap_write(priv->regmap, MESON_PCIE_REG0, MESON_PCIE_PHY_INIT);
+	return reset_control_reset(priv->reset);
+}
+
+static int phy_axg_pcie_exit(struct phy *phy)
+{
+	struct phy_axg_pcie_priv *priv = phy_get_drvdata(phy);
+	int ret;
+
+	ret = phy_exit(priv->analog);
+	if (ret != 0)
+		return ret;
+
+	return reset_control_reset(priv->reset);
+}
+
+static int phy_axg_pcie_reset(struct phy *phy)
+{
+	struct phy_axg_pcie_priv *priv = phy_get_drvdata(phy);
+	int ret = 0;
+
+	ret = phy_reset(priv->analog);
+	if (ret != 0)
+		goto out;
+
+	ret = reset_control_assert(priv->reset);
+	if (ret != 0)
+		goto out;
+	udelay(MESON_PCIE_RESET_DELAY);
+
+	ret = reset_control_deassert(priv->reset);
+	if (ret != 0)
+		goto out;
+	udelay(MESON_PCIE_RESET_DELAY);
+
+out:
+	return ret;
+}
+
+static const struct phy_ops phy_axg_pcie_ops = {
+	.init = phy_axg_pcie_init,
+	.exit = phy_axg_pcie_exit,
+	.power_on = phy_axg_pcie_power_on,
+	.power_off = phy_axg_pcie_power_off,
+	.reset = phy_axg_pcie_reset,
+	.owner = THIS_MODULE,
+};
+
+static int phy_axg_pcie_probe(struct platform_device *pdev)
+{
+	struct phy_provider *pphy;
+	struct device *dev = &pdev->dev;
+	struct phy_axg_pcie_priv *priv;
+	struct device_node *np = dev->of_node;
+	struct resource *res;
+	void __iomem *base;
+	int ret;
+
+	priv = devm_kmalloc(dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	priv->phy = devm_phy_create(dev, np, &phy_axg_pcie_ops);
+	if (IS_ERR(priv->phy)) {
+		ret = PTR_ERR(priv->phy);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "failed to create PHY\n");
+		return ret;
+	}
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+
+	priv->regmap = devm_regmap_init_mmio(dev, base,
+					     &phy_axg_pcie_regmap_conf);
+	if (IS_ERR(priv->regmap))
+		return PTR_ERR(priv->regmap);
+
+	priv->reset = devm_reset_control_array_get(dev, false, false);
+	if (IS_ERR(priv->reset))
+		return PTR_ERR(priv->reset);
+
+	priv->analog = devm_phy_get(dev, "analog");
+	if (IS_ERR(priv->analog))
+		return PTR_ERR(priv->analog);
+
+	phy_set_drvdata(priv->phy, priv);
+	dev_set_drvdata(dev, priv);
+	pphy = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
+
+	return PTR_ERR_OR_ZERO(pphy);
+}
+
+static const struct of_device_id phy_axg_pcie_of_match[] = {
+	{
+		.compatible = "amlogic,axg-pcie-phy",
+	},
+	{ },
+};
+MODULE_DEVICE_TABLE(of, phy_axg_pcie_of_match);
+
+static struct platform_driver phy_axg_pcie_driver = {
+	.probe = phy_axg_pcie_probe,
+	.driver = {
+		.name = "phy-axg-pcie",
+		.of_match_table = phy_axg_pcie_of_match,
+	},
+};
+module_platform_driver(phy_axg_pcie_driver);
+
+MODULE_AUTHOR("Remi Pommarel <repk@triplefau.lt>");
+MODULE_DESCRIPTION("Amlogic AXG PCIE PHY driver");
+MODULE_LICENSE("GPL v2");
+2 -2
drivers/scsi/lpfc/lpfc_attr.c
···
  * Description:
  * If the @buf contains 1 and the device currently has the AER support
  * enabled, then invokes the kernel AER helper routine
- * pci_cleanup_aer_uncorrect_error_status to clean up the uncorrectable
+ * pci_aer_clear_nonfatal_status() to clean up the uncorrectable
  * error status register.
  *
  * Notes:
···
 		return -EINVAL;
 
 	if (phba->hba_flag & HBA_AER_ENABLED)
-		rc = pci_cleanup_aer_uncorrect_error_status(phba->pcidev);
+		rc = pci_aer_clear_nonfatal_status(phba->pcidev);
 
 	if (rc == 0)
 		return strlen(buf);
+4 -2
include/linux/acpi.h
···
 #define OSC_PCI_CLOCK_PM_SUPPORT		0x00000004
 #define OSC_PCI_SEGMENT_GROUPS_SUPPORT		0x00000008
 #define OSC_PCI_MSI_SUPPORT			0x00000010
+#define OSC_PCI_EDR_SUPPORT			0x00000080
 #define OSC_PCI_HPX_TYPE_3_SUPPORT		0x00000100
-#define OSC_PCI_SUPPORT_MASKS			0x0000011f
+#define OSC_PCI_SUPPORT_MASKS			0x0000019f
 
 /* PCI Host Bridge _OSC: Capabilities DWORD 3: Control Field */
 #define OSC_PCI_EXPRESS_NATIVE_HP_CONTROL	0x00000001
···
 #define OSC_PCI_EXPRESS_AER_CONTROL		0x00000008
 #define OSC_PCI_EXPRESS_CAPABILITY_CONTROL	0x00000010
 #define OSC_PCI_EXPRESS_LTR_CONTROL		0x00000020
-#define OSC_PCI_CONTROL_MASKS			0x0000003f
+#define OSC_PCI_EXPRESS_DPC_CONTROL		0x00000080
+#define OSC_PCI_CONTROL_MASKS			0x000000bf
 
 #define ACPI_GSB_ACCESS_ATTRIB_QUICK		0x00000002
 #define ACPI_GSB_ACCESS_ATTRIB_SEND_RCV		0x00000004
+2 -7
include/linux/aer.h
···
 /* PCIe port driver needs this function to enable AER */
 int pci_enable_pcie_error_reporting(struct pci_dev *dev);
 int pci_disable_pcie_error_reporting(struct pci_dev *dev);
-int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev);
-int pci_cleanup_aer_error_status_regs(struct pci_dev *dev);
+int pci_aer_clear_nonfatal_status(struct pci_dev *dev);
 void pci_save_aer_state(struct pci_dev *dev);
 void pci_restore_aer_state(struct pci_dev *dev);
 #else
···
 {
 	return -EINVAL;
 }
-static inline int pci_cleanup_aer_uncorrect_error_status(struct pci_dev *dev)
-{
-	return -EINVAL;
-}
-static inline int pci_cleanup_aer_error_status_regs(struct pci_dev *dev)
+static inline int pci_aer_clear_nonfatal_status(struct pci_dev *dev)
 {
 	return -EINVAL;
 }
+8
include/linux/pci-acpi.h
···
 #define RESET_DELAY_DSM		0x08
 #define FUNCTION_DELAY_DSM	0x09
 
+#ifdef CONFIG_PCIE_EDR
+void pci_acpi_add_edr_notifier(struct pci_dev *pdev);
+void pci_acpi_remove_edr_notifier(struct pci_dev *pdev);
+#else
+static inline void pci_acpi_add_edr_notifier(struct pci_dev *pdev) { }
+static inline void pci_acpi_remove_edr_notifier(struct pci_dev *pdev) { }
+#endif /* CONFIG_PCIE_EDR */
+
 #else	/* CONFIG_ACPI */
 static inline void acpi_pci_add_bus(struct pci_bus *bus) { }
 static inline void acpi_pci_remove_bus(struct pci_bus *bus) { }
+22 -5
include/linux/pci-epc.h
···
 			   phys_addr_t addr);
 	int (*set_msi)(struct pci_epc *epc, u8 func_no, u8 interrupts);
 	int (*get_msi)(struct pci_epc *epc, u8 func_no);
-	int (*set_msix)(struct pci_epc *epc, u8 func_no, u16 interrupts);
+	int (*set_msix)(struct pci_epc *epc, u8 func_no, u16 interrupts,
+			enum pci_barno, u32 offset);
 	int (*get_msix)(struct pci_epc *epc, u8 func_no);
 	int (*raise_irq)(struct pci_epc *epc, u8 func_no,
 			 enum pci_epc_irq_type type, u16 interrupt_num);
···
  * @bitmap: bitmap to manage the PCI address space
  * @pages: number of bits representing the address region
  * @page_size: size of each page
+ * @lock: mutex to protect bitmap
  */
 struct pci_epc_mem {
 	phys_addr_t phys_base;
···
 	unsigned long *bitmap;
 	size_t page_size;
 	int pages;
+	/* mutex to protect against concurrent access for memory allocation */
+	struct mutex lock;
 };
 
 /**
···
  * @mem: address space of the endpoint controller
  * @max_functions: max number of functions that can be configured in this EPC
  * @group: configfs group representing the PCI EPC device
- * @lock: spinlock to protect pci_epc ops
+ * @lock: mutex to protect pci_epc ops
+ * @function_num_map: bitmap to manage physical function number
+ * @notifier: used to notify EPF of any EPC events (like linkup)
  */
 struct pci_epc {
 	struct device dev;
···
 	struct pci_epc_mem *mem;
 	u8 max_functions;
 	struct config_group *group;
-	/* spinlock to protect against concurrent access of EP controller */
-	spinlock_t lock;
+	/* mutex to protect against concurrent access of EP controller */
+	struct mutex lock;
+	unsigned long function_num_map;
+	struct atomic_notifier_head notifier;
 };
 
 /**
···
  */
 struct pci_epc_features {
 	unsigned int	linkup_notifier : 1;
+	unsigned int	core_init_notifier : 1;
 	unsigned int	msi_capable : 1;
 	unsigned int	msix_capable : 1;
 	u8	reserved_bar;
···
 	return dev_get_drvdata(&epc->dev);
 }
 
+static inline int
+pci_epc_register_notifier(struct pci_epc *epc, struct notifier_block *nb)
+{
+	return atomic_notifier_chain_register(&epc->notifier, nb);
+}
+
 struct pci_epc *
 __devm_pci_epc_create(struct device *dev, const struct pci_epc_ops *ops,
 		      struct module *owner);
···
 void pci_epc_destroy(struct pci_epc *epc);
 int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf);
 void pci_epc_linkup(struct pci_epc *epc);
+void pci_epc_init_notify(struct pci_epc *epc);
 void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf);
 int pci_epc_write_header(struct pci_epc *epc, u8 func_no,
 			 struct pci_epf_header *hdr);
···
 			phys_addr_t phys_addr);
 int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 interrupts);
 int pci_epc_get_msi(struct pci_epc *epc, u8 func_no);
-int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts);
+int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u16 interrupts,
+		     enum pci_barno, u32 offset);
 int pci_epc_get_msix(struct pci_epc *epc, u8 func_no);
 int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no,
 		      enum pci_epc_irq_type type, u16 interrupt_num);
+25 -4
include/linux/pci-epf.h
···
 
 struct pci_epf;
 
+enum pci_notify_event {
+	CORE_INIT,
+	LINK_UP,
+};
+
 enum pci_barno {
 	BAR_0,
 	BAR_1,
···
  * @bind: ops to perform when a EPC device has been bound to EPF device
  * @unbind: ops to perform when a binding has been lost between a EPC device
  *	    and EPF device
- * @linkup: ops to perform when the EPC device has established a connection with
- *	    a host system
  */
 struct pci_epf_ops {
 	int	(*bind)(struct pci_epf *epf);
 	void	(*unbind)(struct pci_epf *epf);
-	void	(*linkup)(struct pci_epf *epf);
 };
 
 /**
···
 /**
  * struct pci_epf_bar - represents the BAR of EPF device
  * @phys_addr: physical address that should be mapped to the BAR
+ * @addr: virtual address corresponding to the @phys_addr
  * @size: the size of the address space present in BAR
  */
 struct pci_epf_bar {
 	dma_addr_t	phys_addr;
+	void		*addr;
 	size_t		size;
 	enum pci_barno	barno;
 	int		flags;
···
  * @epc: the EPC device to which this EPF device is bound
  * @driver: the EPF driver to which this EPF device is bound
  * @list: to add pci_epf as a list of PCI endpoint functions to pci_epc
+ * @nb: notifier block to notify EPF of any EPC events (like linkup)
+ * @lock: mutex to protect pci_epf_ops
  */
 struct pci_epf {
 	struct device		dev;
···
 	struct pci_epc		*epc;
 	struct pci_epf_driver	*driver;
 	struct list_head	list;
+	struct notifier_block   nb;
+	/* mutex to protect against concurrent access of pci_epf_ops */
+	struct mutex		lock;
+};
+
+/**
+ * struct pci_epf_msix_tbl - represents the MSIX table entry structure
+ * @msg_addr: Writes to this address will trigger MSIX interrupt in host
+ * @msg_data: Data that should be written to @msg_addr to trigger MSIX interrupt
+ * @vector_ctrl: Identifies if the function is prohibited from sending a message
+ *		 using this MSIX table entry
+ */
+struct pci_epf_msix_tbl {
+	u64 msg_addr;
+	u32 msg_data;
+	u32 vector_ctrl;
 };
 
 #define to_pci_epf(epf_dev) container_of((epf_dev), struct pci_epf, dev)
···
 void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar);
 int pci_epf_bind(struct pci_epf *epf);
 void pci_epf_unbind(struct pci_epf *epf);
-void pci_epf_linkup(struct pci_epf *epf);
 #endif /* __LINUX_PCI_EPF_H */
+8 -2
include/linux/pci.h
···
 	PCIE_LNK_WIDTH_UNKNOWN	= 0xff,
 };
 
-/* Based on the PCI Hotplug Spec, but some values are made up by us */
+/* See matching string table in pci_speed_string() */
 enum pci_bus_speed {
 	PCI_SPEED_33MHz			= 0x00,
 	PCI_SPEED_66MHz			= 0x01,
···
 	const struct attribute_group **msi_irq_groups;
 #endif
 	struct pci_vpd *vpd;
+#ifdef CONFIG_PCIE_DPC
+	u16		dpc_cap;
+	unsigned int	dpc_rp_extensions:1;
+	u8		dpc_rp_log_size;
+#endif
 #ifdef CONFIG_PCI_ATS
 	union {
 		struct pci_sriov	*sriov;	/* PF: SR-IOV info */
···
 	unsigned int	native_shpc_hotplug:1;	/* OS may use SHPC hotplug */
 	unsigned int	native_pme:1;		/* OS may use PCIe PME */
 	unsigned int	native_ltr:1;		/* OS may use PCIe LTR */
+	unsigned int	native_dpc:1;		/* OS may use PCIe DPC */
 	unsigned int	preserve_config:1;	/* Preserve FW resource setup */
+	unsigned int	size_windows:1;		/* Enable root bus sizing */
 
 	/* Resource alignment requirements */
 	resource_size_t (*align_resource)(struct pci_dev *dev,
···
 void pci_disable_rom(struct pci_dev *pdev);
 void __iomem __must_check *pci_map_rom(struct pci_dev *pdev, size_t *size);
 void pci_unmap_rom(struct pci_dev *pdev, void __iomem *rom);
-void __iomem __must_check *pci_platform_rom(struct pci_dev *pdev, size_t *size);
 
 /* Power management related routines */
 int pci_save_state(struct pci_dev *dev);
+2
include/linux/pci_ids.h
···
 
 #define PCI_VENDOR_ID_AMAZON		0x1d0f
 
+#define PCI_VENDOR_ID_ZHAOXIN		0x1d17
+
 #define PCI_VENDOR_ID_HYGON		0x1d94
 
 #define PCI_VENDOR_ID_HXT		0x1dbf
+9 -1
include/soc/tegra/bpmp-abi.h
···
 /* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (c) 2014-2018, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2014-2020, NVIDIA CORPORATION. All rights reserved.
  */
 
 #ifndef _ABI_BPMP_ABI_H_
···
 	CMD_UPHY_PCIE_LANE_MARGIN_STATUS = 2,
 	CMD_UPHY_PCIE_EP_CONTROLLER_PLL_INIT = 3,
 	CMD_UPHY_PCIE_CONTROLLER_STATE = 4,
+	CMD_UPHY_PCIE_EP_CONTROLLER_PLL_OFF = 5,
 	CMD_UPHY_MAX,
 };
···
 	uint8_t enable;
 } __ABI_PACKED;
 
+struct cmd_uphy_ep_controller_pll_off_request {
+	/** @brief EP controller number, valid: 0, 4, 5 */
+	uint8_t ep_controller;
+} __ABI_PACKED;
+
 /**
  * @ingroup UPHY
  * @brief Request with #MRQ_UPHY
···
 * |CMD_UPHY_PCIE_LANE_MARGIN_STATUS        |                                        |
 * |CMD_UPHY_PCIE_EP_CONTROLLER_PLL_INIT    |cmd_uphy_ep_controller_pll_init_request |
 * |CMD_UPHY_PCIE_CONTROLLER_STATE          |cmd_uphy_pcie_controller_state_request  |
+* |CMD_UPHY_PCIE_EP_CONTROLLER_PLL_OFF     |cmd_uphy_ep_controller_pll_off_request  |
 *
 */
···
 		struct cmd_uphy_margin_control_request uphy_set_margin_control;
 		struct cmd_uphy_ep_controller_pll_init_request ep_ctrlr_pll_init;
 		struct cmd_uphy_pcie_controller_state_request controller_state;
+		struct cmd_uphy_ep_controller_pll_off_request ep_ctrlr_pll_off;
 	} __UNION_ANON;
 } __ABI_PACKED;
+2
include/uapi/linux/pci_regs.h
···
 #define  PCI_EXP_SLTCTL_PWR_OFF		0x0400	/* Power Off */
 #define  PCI_EXP_SLTCTL_EIC		0x0800	/* Electromechanical Interlock Control */
 #define  PCI_EXP_SLTCTL_DLLSCE		0x1000	/* Data Link Layer State Changed Enable */
+#define  PCI_EXP_SLTCTL_IBPD_DISABLE	0x4000	/* In-band PD disable */
 #define PCI_EXP_SLTSTA		26	/* Slot Status */
 #define  PCI_EXP_SLTSTA_ABP	0x0001	/* Attention Button Pressed */
 #define  PCI_EXP_SLTSTA_PFD	0x0002	/* Power Fault Detected */
···
 #define PCI_EXP_LNKSTA2		50	/* Link Status 2 */
 #define PCI_CAP_EXP_ENDPOINT_SIZEOF_V2	52	/* v2 endpoints with link end here */
 #define PCI_EXP_SLTCAP2		52	/* Slot Capabilities 2 */
+#define  PCI_EXP_SLTCAP2_IBPD	0x00000001 /* In-band PD Disable Supported */
 #define PCI_EXP_SLTCTL2		56	/* Slot Control 2 */
 #define PCI_EXP_SLTSTA2		58	/* Slot Status 2 */
+8
include/uapi/linux/pcitest.h
···
 #define PCITEST_MSIX		_IOW('P', 0x7, int)
 #define PCITEST_SET_IRQTYPE	_IOW('P', 0x8, int)
 #define PCITEST_GET_IRQTYPE	_IO('P', 0x9)
+#define PCITEST_CLEAR_IRQ	_IO('P', 0x10)
+
+#define PCITEST_FLAGS_USE_DMA	0x00000001
+
+struct pci_endpoint_test_xfer_param {
+	unsigned long size;
+	unsigned char flags;
+};
 
 #endif /* __UAPI_LINUX_PCITEST_H */
+33 -4
tools/pci/pcitest.c
···
 	int irqtype;
 	bool set_irqtype;
 	bool get_irqtype;
+	bool clear_irq;
 	bool read;
 	bool write;
 	bool copy;
 	unsigned long size;
+	bool use_dma;
 };
 
 static int run_test(struct pci_test *test)
 {
+	struct pci_endpoint_test_xfer_param param;
 	int ret = -EINVAL;
 	int fd;
 
···
 		fprintf(stdout, "%s\n", irq[ret]);
 	}
 
+	if (test->clear_irq) {
+		ret = ioctl(fd, PCITEST_CLEAR_IRQ);
+		fprintf(stdout, "CLEAR IRQ:\t\t");
+		if (ret < 0)
+			fprintf(stdout, "FAILED\n");
+		else
+			fprintf(stdout, "%s\n", result[ret]);
+	}
+
 	if (test->legacyirq) {
 		ret = ioctl(fd, PCITEST_LEGACY_IRQ, 0);
 		fprintf(stdout, "LEGACY IRQ:\t");
···
 	}
 
 	if (test->write) {
-		ret = ioctl(fd, PCITEST_WRITE, test->size);
+		param.size = test->size;
+		if (test->use_dma)
+			param.flags = PCITEST_FLAGS_USE_DMA;
+		ret = ioctl(fd, PCITEST_WRITE, &param);
 		fprintf(stdout, "WRITE (%7ld bytes):\t\t", test->size);
 		if (ret < 0)
 			fprintf(stdout, "TEST FAILED\n");
···
 	}
 
 	if (test->read) {
-		ret = ioctl(fd, PCITEST_READ, test->size);
+		param.size = test->size;
+		if (test->use_dma)
+			param.flags = PCITEST_FLAGS_USE_DMA;
+		ret = ioctl(fd, PCITEST_READ, &param);
 		fprintf(stdout, "READ (%7ld bytes):\t\t", test->size);
 		if (ret < 0)
 			fprintf(stdout, "TEST FAILED\n");
···
 	}
 
 	if (test->copy) {
-		ret = ioctl(fd, PCITEST_COPY, test->size);
+		param.size = test->size;
+		if (test->use_dma)
+			param.flags = PCITEST_FLAGS_USE_DMA;
+		ret = ioctl(fd, PCITEST_COPY, &param);
 		fprintf(stdout, "COPY (%7ld bytes):\t\t", test->size);
 		if (ret < 0)
 			fprintf(stdout, "TEST FAILED\n");
···
 	/* set default endpoint device */
 	test->device = "/dev/pci-endpoint-test.0";
 
-	while ((c = getopt(argc, argv, "D:b:m:x:i:Ilhrwcs:")) != EOF)
+	while ((c = getopt(argc, argv, "D:b:m:x:i:deIlhrwcs:")) != EOF)
 		switch (c) {
 		case 'D':
 			test->device = optarg;
···
 		case 'c':
 			test->copy = true;
 			continue;
+		case 'e':
+			test->clear_irq = true;
+			continue;
 		case 's':
 			test->size = strtoul(optarg, NULL, 0);
 			continue;
+		case 'd':
+			test->use_dma = true;
+			continue;
 		case 'h':
 		default:
···
 			"\t-m <msi num>		MSI test (msi number between 1..32)\n"
 			"\t-x <msix num>	\tMSI-X test (msix number between 1..2048)\n"
 			"\t-i <irq type>	\tSet IRQ type (0 - Legacy, 1 - MSI, 2 - MSI-X)\n"
+			"\t-e			Clear IRQ\n"
 			"\t-I			Get current IRQ type configured\n"
+			"\t-d			Use DMA\n"
 			"\t-l			Legacy IRQ test\n"
 			"\t-r			Read buffer test\n"
 			"\t-w			Write buffer test\n"