
Merge tag 'pci-v6.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:

- Print the actual delay time in pci_bridge_wait_for_secondary_bus()
instead of assuming it was 1000ms (Wilfred Mallawa)

- Revert 'iommu/amd: Prevent binding other PCI drivers to IOMMU PCI
devices', which broke resume from system sleep on AMD platforms and
has been fixed by other commits (Lukas Wunner)

Resource management:

- Remove mtip32xx use of pcim_iounmap_regions(), which is deprecated
and unnecessary (Philipp Stanner)

- Remove pcim_iounmap_regions() and pcim_request_region_exclusive()
and related flags since all uses have been removed (Philipp
Stanner)

- Rework devres 'request' functions so they are no longer 'hybrid',
i.e., their behavior no longer depends on whether
pcim_enable_device() or pci_enable_device() was used, and remove
related code (Philipp Stanner)

- Warn (not BUG()) about failure to assign optional resources (Ilpo
Järvinen)

Error handling:

- Log the DPC Error Source ID only when it's actually valid (when
ERR_FATAL or ERR_NONFATAL was received from a downstream device)
and decode into bus/device/function (Bjorn Helgaas)

- Determine AER log level once and save it so all related messages
use the same level (Karolina Stolarek)

- Use KERN_WARNING, not KERN_ERR, when logging PCIe Correctable
Errors (Karolina Stolarek)

- Ratelimit PCIe Correctable and Non-Fatal error logging, with sysfs
controls on interval and burst count, to avoid flooding logs and
RCU stall warnings (Jon Pan-Doh)

Power management:

- Increment PM usage counter when probing reset methods so we don't
try to read config space of a powered-off device (Alex Williamson)

- Set all devices to D0 during enumeration to ensure ACPI opregion is
connected via _REG (Mario Limonciello)

Power control:

- Rename pwrctrl Kconfig symbols from 'PWRCTL' to 'PWRCTRL' to match
the filename paths. Retain old deprecated symbols for
compatibility, except for the pwrctrl slot driver
(PCI_PWRCTRL_SLOT) (Johan Hovold)

- When unregistering pwrctrl, cancel outstanding rescan work before
cleaning up data structures to avoid use-after-free issues (Brian
Norris)

Bandwidth control:

- Simplify link bandwidth controller by replacing the count of Link
Bandwidth Management Status (LBMS) events with a PCI_LINK_LBMS_SEEN
flag (Ilpo Järvinen)

- Update the Link Speed after retraining, since the Link Speed may
have changed (Ilpo Järvinen)

PCIe native device hotplug:

- Ignore Presence Detect Changed caused by DPC.

pciehp already ignores Link Down/Up events caused by DPC, but on
slots using in-band presence detect, DPC causes a spurious Presence
Detect Changed event (Lukas Wunner)

- Ignore Link Down/Up caused by Secondary Bus Reset.

On hotplug ports using in-band presence detect, the reset causes a
Presence Detect Changed event, which mistakenly caused teardown and
re-enumeration of the device. Drivers may need to annotate code
that resets their device (Lukas Wunner)

Virtualization:

- Add an ACS quirk for Loongson Root Ports that don't advertise ACS
but don't allow peer-to-peer transactions between Root Ports; the
quirk allows each Root Port to be in a separate IOMMU group (Huacai
Chen)

Endpoint framework:

- For fixed-size BARs, retain both the actual size and the possibly
larger size allocated to accommodate iATU alignment requirements
(Jerome Brunet)

- Simplify ctrl/SPAD space allocation and avoid allocating more space
than needed (Jerome Brunet)

- Correct MSI-X PBA offset calculations for DesignWare and Cadence
endpoint controllers (Niklas Cassel)

- Align the return value (number of interrupts) encoding for
pci_epc_get_msi()/pci_epc_ops::get_msi() and
pci_epc_get_msix()/pci_epc_ops::get_msix() (Niklas Cassel)

- Align the nr_irqs parameter encoding for
pci_epc_set_msi()/pci_epc_ops::set_msi() and
pci_epc_set_msix()/pci_epc_ops::set_msix() (Niklas Cassel)

Common host controller library:

- Convert pci-host-common to a library so platforms that don't need
native host controller drivers don't need to include these helper
functions (Manivannan Sadhasivam)

Apple PCIe controller driver:

- Extract ECAM bridge creation helper from pci_host_common_probe() to
separate driver-specific things like MSI from PCI things (Marc
Zyngier)

- Dynamically allocate RID-to-SID bitmap to prepare for SoCs with
varying capabilities (Marc Zyngier)

- Skip ports disabled in DT when setting up ports (Janne Grunau)

- Add t6020 compatible string (Alyssa Rosenzweig)

- Add T602x PCIe support (Hector Martin)

- Directly set/clear INTx mask bits because T602x dropped the
accessors that could do this without locking (Marc Zyngier)

- Move port PHY registers to their own reg items to accommodate
T602x, which moves them around; retain default offsets for existing
DTs that lack phy%d entries with the reg offsets (Hector Martin)

- Stop polling for core refclk, which doesn't work on T602x and which
the bootloader has already done anyway (Hector Martin)

- Use gpiod_set_value_cansleep() when asserting PERST# in probe
because we're allowed to sleep there (Hector Martin)

Cadence PCIe controller driver:

- Drop a runtime PM 'put' to resolve a runtime atomic count underflow
(Hans Zhang)

- Make the cadence core buildable as a module (Kishon Vijay Abraham I)

- Add cdns_pcie_host_disable() and cdns_pcie_ep_disable() for use by
loadable drivers when they are removed (Siddharth Vadapalli)

Freescale i.MX6 PCIe controller driver:

- Apply link training workaround only on IMX6Q, IMX6SX, IMX6QP
(Richard Zhu)

- Remove redundant dw_pcie_wait_for_link() from
imx_pcie_start_link(); since the DWC core does this, imx6 only
needs it when retraining for a faster link speed (Richard Zhu)

- Toggle i.MX95 core reset to align with PHY powerup (Richard Zhu)

- Set SYS_AUX_PWR_DET to work around i.MX95 ERR051624 erratum: in
some cases, the controller can't exit 'L23 Ready' through Beacon or
PERST# deassertion (Richard Zhu)

- Clear GEN3_ZRXDC_NONCOMPL to work around i.MX95 ERR051586 erratum:
controller can't meet 2.5 GT/s ZRX-DC timing when operating at 8
GT/s, causing timeouts in L1 (Richard Zhu)

- Wait for i.MX95 PLL lock before enabling controller (Richard Zhu)

- Save/restore i.MX95 LUT for suspend/resume (Richard Zhu)

Mobiveil PCIe controller driver:

- Return bool (not int) for link-up check in
mobiveil_pab_ops.link_up() and layerscape-gen4, mobiveil (Hans
Zhang)

NVIDIA Tegra194 PCIe controller driver:

- Create debugfs directory for 'aspm_state_cnt' only when
CONFIG_PCIEASPM is enabled, since there are no other entries (Hans
Zhang)

Qualcomm PCIe controller driver:

- Add OF support for parsing DT 'eq-presets-<N>gts' property for lane
equalization presets (Krishna Chaitanya Chundru)

- Read Maximum Link Width from the Link Capabilities register if DT
lacks 'num-lanes' property (Krishna Chaitanya Chundru)

- Add Physical Layer 64 GT/s Capability ID and register offsets for
8, 32, and 64 GT/s lane equalization registers (Krishna Chaitanya
Chundru)

- Add generic dwc support for configuring lane equalization presets
(Krishna Chaitanya Chundru)

- Add DT and driver support for PCIe on IPQ5018 SoC (Nitheesh Sekar)

Renesas R-Car PCIe controller driver:

- Describe endpoint BAR 4 as being fixed size (Jerome Brunet)

- Document how to obtain R-Car V4H (r8a779g0) controller firmware
(Yoshihiro Shimoda)

Rockchip PCIe controller driver:

- Reorder rockchip_pci_core_rsts because
reset_control_bulk_deassert() deasserts in reverse order, to fix a
link training regression (Jensen Huang)

- Mark RK3399 as being capable of raising INTx interrupts (Niklas
Cassel)

Rockchip DesignWare PCIe controller driver:

- Check only PCIE_LINKUP, not LTSSM status, to determine whether the
link is up (Shawn Lin)

- Increase N_FTS (used in L0s->L0 transitions) and enable ASPM L0s
for Root Complex and Endpoint modes (Shawn Lin)

- Hide the broken ATS Capability in rockchip_pcie_ep_init() instead
of rockchip_pcie_ep_pre_init() so it stays hidden after PERST#
resets non-sticky registers (Shawn Lin)

- Call phy_power_off() before phy_exit() in rockchip_pcie_phy_deinit()
(Diederik de Haas)

Synopsys DesignWare PCIe controller driver:

- Set PORT_LOGIC_LINK_WIDTH to one lane to make initial link training
more robust; this will not affect the intended link width if all
lanes are functional (Wenbin Yao)

- Return bool (not int) for link-up check in dw_pcie_ops.link_up()
and armada8k, dra7xx, dw-rockchip, exynos, histb, keembay,
keystone, kirin, meson, qcom, qcom-ep, rcar_gen4, spear13xx,
tegra194, uniphier, visconti (Hans Zhang)

- Add debugfs support for exposing DWC device-specific PTM context
(Manivannan Sadhasivam)

TI J721E PCIe driver:

- Make j721e buildable as a loadable and removable module (Siddharth
Vadapalli)

- Fix j721e host/endpoint dependencies that result in link failures
in some configs (Arnd Bergmann)

Device tree bindings:

- Add qcom DT binding for 'global' interrupt (PCIe controller and
link-specific events) for ipq8074, ipq8074-gen3, ipq6018, sa8775p,
sc7280, sc8180x, sdm845, sm8150, sm8250, sm8350 (Manivannan
Sadhasivam)

- Add qcom DT binding for 8 MSI SPI interrupts for msm8998, ipq8074,
ipq8074-gen3, ipq6018 (Manivannan Sadhasivam)

- Add dw rockchip DT binding for rk3576 and rk3562 (Kever Yang)

- Correct indentation and style of examples in brcm,stb-pcie,
cdns,cdns-pcie-ep, intel,keembay-pcie-ep, intel,keembay-pcie,
microchip,pcie-host, rcar-pci-ep, rcar-pci-host, xilinx-versal-cpm
(Krzysztof Kozlowski)

- Convert Marvell EBU (dove, kirkwood, armada-370, armada-xp) and
armada8k from text to schema DT bindings (Rob Herring)

- Remove obsolete .txt DT bindings for content that has been moved to
schemas (Rob Herring)

- Add qcom DT binding for MHI registers in IPQ5332, IPQ6018, IPQ8074
and IPQ9574 (Varadarajan Narayanan)

- Convert v3,v360epc-pci from text to DT schema binding (Rob Herring)

- Change microchip,pcie-host DT binding to be 'dma-noncoherent' since
PolarFire may be configured that way (Conor Dooley)

Miscellaneous:

- Drop 'pci' suffix from intel_mid_pci.c filename to match similar
files (Andy Shevchenko)

- All platforms with PCI have an MMU, so add PCI Kconfig dependency
on MMU to simplify build testing and avoid inadvertent build
regressions (Arnd Bergmann)

- Update Krzysztof Wilczyński's email address in MAINTAINERS
(Krzysztof Wilczyński)

- Update Manivannan Sadhasivam's email address in MAINTAINERS
(Manivannan Sadhasivam)"

* tag 'pci-v6.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (147 commits)
MAINTAINERS: Update Manivannan Sadhasivam email address
PCI: j721e: Fix host/endpoint dependencies
PCI: j721e: Add support to build as a loadable module
PCI: cadence-ep: Introduce cdns_pcie_ep_disable() helper for cleanup
PCI: cadence-host: Introduce cdns_pcie_host_disable() helper for cleanup
PCI: cadence: Add support to build pcie-cadence library as a kernel module
MAINTAINERS: Update Krzysztof Wilczyński email address
PCI: Remove unnecessary linesplit in __pci_setup_bridge()
PCI: WARN (not BUG()) when we fail to assign optional resources
PCI: Remove unused pci_printk()
PCI: qcom: Replace PERST# sleep time with proper macro
PCI: dw-rockchip: Replace PERST# sleep time with proper macro
PCI: host-common: Convert to library for host controller drivers
PCI/ERR: Remove misleading TODO regarding kernel panic
PCI: cadence: Remove duplicate message code definitions
PCI: endpoint: Align pci_epc_set_msix(), pci_epc_ops::set_msix() nr_irqs encoding
PCI: endpoint: Align pci_epc_set_msi(), pci_epc_ops::set_msi() nr_irqs encoding
PCI: endpoint: Align pci_epc_get_msix(), pci_epc_ops::get_msix() return value encoding
PCI: endpoint: Align pci_epc_get_msi(), pci_epc_ops::get_msi() return value encoding
PCI: cadence-ep: Correct PBA offset in .set_msix() callback
...

+3401 -2220
+3
.mailmap
··· 419 419 Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski.k@gmail.com> 420 420 Krzysztof Kozlowski <krzk@kernel.org> <k.kozlowski@samsung.com> 421 421 Krzysztof Kozlowski <krzk@kernel.org> <krzysztof.kozlowski@canonical.com> 422 + Krzysztof Wilczyński <kwilczynski@kernel.org> <krzysztof.wilczynski@linux.com> 423 + Krzysztof Wilczyński <kwilczynski@kernel.org> <kw@linux.com> 422 424 Kshitiz Godara <quic_kgodara@quicinc.com> <kgodara@codeaurora.org> 423 425 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> 424 426 Kuogee Hsieh <quic_khsieh@quicinc.com> <khsieh@codeaurora.org> ··· 463 461 Malathi Gottam <quic_mgottam@quicinc.com> <mgottam@codeaurora.org> 464 462 Manikanta Pubbisetty <quic_mpubbise@quicinc.com> <mpubbise@codeaurora.org> 465 463 Manivannan Sadhasivam <mani@kernel.org> <manivannanece23@gmail.com> 464 + Manivannan Sadhasivam <mani@kernel.org> <manivannan.sadhasivam@linaro.org> 466 465 Manoj Basapathi <quic_manojbm@quicinc.com> <manojbm@codeaurora.org> 467 466 Marcin Nowakowski <marcin.nowakowski@mips.com> <marcin.nowakowski@imgtec.com> 468 467 Marc Zyngier <maz@kernel.org> <marc.zyngier@arm.com>
+70
Documentation/ABI/testing/debugfs-pcie-ptm
··· 1 + What: /sys/kernel/debug/pcie_ptm_*/local_clock 2 + Date: May 2025 3 + Contact: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 4 + Description: 5 + (RO) PTM local clock in nanoseconds. Applicable for both Root 6 + Complex and Endpoint controllers. 7 + 8 + What: /sys/kernel/debug/pcie_ptm_*/master_clock 9 + Date: May 2025 10 + Contact: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 11 + Description: 12 + (RO) PTM master clock in nanoseconds. Applicable only for 13 + Endpoint controllers. 14 + 15 + What: /sys/kernel/debug/pcie_ptm_*/t1 16 + Date: May 2025 17 + Contact: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 18 + Description: 19 + (RO) PTM T1 timestamp in nanoseconds. Applicable only for 20 + Endpoint controllers. 21 + 22 + What: /sys/kernel/debug/pcie_ptm_*/t2 23 + Date: May 2025 24 + Contact: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 25 + Description: 26 + (RO) PTM T2 timestamp in nanoseconds. Applicable only for 27 + Root Complex controllers. 28 + 29 + What: /sys/kernel/debug/pcie_ptm_*/t3 30 + Date: May 2025 31 + Contact: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 32 + Description: 33 + (RO) PTM T3 timestamp in nanoseconds. Applicable only for 34 + Root Complex controllers. 35 + 36 + What: /sys/kernel/debug/pcie_ptm_*/t4 37 + Date: May 2025 38 + Contact: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 39 + Description: 40 + (RO) PTM T4 timestamp in nanoseconds. Applicable only for 41 + Endpoint controllers. 42 + 43 + What: /sys/kernel/debug/pcie_ptm_*/context_update 44 + Date: May 2025 45 + Contact: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 46 + Description: 47 + (RW) Control the PTM context update mode. Applicable only for 48 + Endpoint controllers. 49 + 50 + Following values are supported: 51 + 52 + * auto = PTM context auto update trigger for every 10ms 53 + 54 + * manual = PTM context manual update. 
Writing 'manual' to this 55 + file triggers PTM context update (default) 56 + 57 + What: /sys/kernel/debug/pcie_ptm_*/context_valid 58 + Date: May 2025 59 + Contact: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> 60 + Description: 61 + (RW) Control the PTM context validity (local clock timing). 62 + Applicable only for Root Complex controllers. PTM context is 63 + invalidated by hardware if the Root Complex enters low power 64 + mode or changes link frequency. 65 + 66 + Following values are supported: 67 + 68 + * 0 = PTM context invalid (default) 69 + 70 + * 1 = PTM context valid
+44
Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats Documentation/ABI/testing/sysfs-bus-pci-devices-aer
··· 117 117 KernelVersion: 4.19.0 118 118 Contact: linux-pci@vger.kernel.org, rajatja@google.com 119 119 Description: Total number of ERR_NONFATAL messages reported to rootport. 120 + 121 + PCIe AER ratelimits 122 + ------------------- 123 + 124 + These attributes show up under all the devices that are AER capable. 125 + They represent configurable ratelimits of logs per error type. 126 + 127 + See Documentation/PCI/pcieaer-howto.rst for more info on ratelimits. 128 + 129 + What: /sys/bus/pci/devices/<dev>/aer/correctable_ratelimit_interval_ms 130 + Date: May 2025 131 + KernelVersion: 6.16.0 132 + Contact: linux-pci@vger.kernel.org 133 + Description: Writing 0 disables AER correctable error log ratelimiting. 134 + Writing a positive value sets the ratelimit interval in ms. 135 + Default is DEFAULT_RATELIMIT_INTERVAL (5000 ms). 136 + 137 + What: /sys/bus/pci/devices/<dev>/aer/correctable_ratelimit_burst 138 + Date: May 2025 139 + KernelVersion: 6.16.0 140 + Contact: linux-pci@vger.kernel.org 141 + Description: Ratelimit burst for correctable error logs. Writing a value 142 + changes the number of errors (burst) allowed per interval 143 + before ratelimiting. Reading gets the current ratelimit 144 + burst. Default is DEFAULT_RATELIMIT_BURST (10). 145 + 146 + What: /sys/bus/pci/devices/<dev>/aer/nonfatal_ratelimit_interval_ms 147 + Date: May 2025 148 + KernelVersion: 6.16.0 149 + Contact: linux-pci@vger.kernel.org 150 + Description: Writing 0 disables AER non-fatal uncorrectable error log 151 + ratelimiting. Writing a positive value sets the ratelimit 152 + interval in ms. Default is DEFAULT_RATELIMIT_INTERVAL 153 + (5000 ms). 154 + 155 + What: /sys/bus/pci/devices/<dev>/aer/nonfatal_ratelimit_burst 156 + Date: May 2025 157 + KernelVersion: 6.16.0 158 + Contact: linux-pci@vger.kernel.org 159 + Description: Ratelimit burst for non-fatal uncorrectable error logs. 160 + Writing a value changes the number of errors (burst) 161 + allowed per interval before ratelimiting. 
Reading gets the 162 + current ratelimit burst. Default is DEFAULT_RATELIMIT_BURST 163 + (10).
+10
Documentation/PCI/controller/index.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + =========================================== 4 + PCI Native Host Bridge and Endpoint Drivers 5 + =========================================== 6 + 7 + .. toctree:: 8 + :maxdepth: 2 9 + 10 + rcar-pcie-firmware
+32
Documentation/PCI/controller/rcar-pcie-firmware.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ================================================= 4 + Firmware of PCIe controller for Renesas R-Car V4H 5 + ================================================= 6 + 7 + Renesas R-Car V4H (r8a779g0) has a PCIe controller, requiring a specific 8 + firmware download during startup. 9 + 10 + However, Renesas currently cannot distribute the firmware free of charge. 11 + 12 + The firmware file "104_PCIe_fw_addr_data_ver1.05.txt" (note that the file name 13 + might be different between different datasheet revisions) can be found in the 14 + datasheet encoded as text, and as such, the file's content must be converted 15 + back to binary form. This can be achieved using the following example script: 16 + 17 + .. code-block:: sh 18 + 19 + $ awk '/^\s*0x[0-9A-Fa-f]{4}\s+0x[0-9A-Fa-f]{4}/ { print substr($2,5,2) substr($2,3,2) }' \ 20 + 104_PCIe_fw_addr_data_ver1.05.txt | \ 21 + xxd -p -r > rcar_gen4_pcie.bin 22 + 23 + Once the text content has been converted into a binary firmware file, verify 24 + its checksum as follows: 25 + 26 + .. code-block:: sh 27 + 28 + $ sha1sum rcar_gen4_pcie.bin 29 + 1d0bd4b189b4eb009f5d564b1f93a79112994945 rcar_gen4_pcie.bin 30 + 31 + The resulting binary file called "rcar_gen4_pcie.bin" should be placed in the 32 + "/lib/firmware" directory before the driver runs.
+1 -1
Documentation/PCI/endpoint/pci-nvme-function.rst
··· 8 8 9 9 The PCI NVMe endpoint function implements a PCI NVMe controller using the NVMe 10 10 subsystem target core code. The driver for this function resides with the NVMe 11 - subsystem as drivers/nvme/target/nvmet-pciep.c. 11 + subsystem as drivers/nvme/target/pci-epf.c. 12 12 13 13 See Documentation/nvme/nvme-pci-endpoint-target.rst for more details.
+1
Documentation/PCI/index.rst
··· 17 17 pci-error-recovery 18 18 pcieaer-howto 19 19 endpoint/index 20 + controller/index 20 21 boot-interrupts 21 22 tph
+16 -1
Documentation/PCI/pcieaer-howto.rst
··· 85 85 the error message to the Root Port. Please refer to PCIe specs for other 86 86 fields. 87 87 88 + AER Ratelimits 89 + -------------- 90 + 91 + Since error messages can be generated for each transaction, we may see 92 + large volumes of errors reported. To prevent spammy devices from flooding 93 + the console/stalling execution, messages are throttled by device and error 94 + type (correctable vs. non-fatal uncorrectable). Fatal errors, including 95 + DPC errors, are not ratelimited. 96 + 97 + AER uses the default ratelimit of DEFAULT_RATELIMIT_BURST (10 events) over 98 + DEFAULT_RATELIMIT_INTERVAL (5 seconds). 99 + 100 + Ratelimits are exposed in the form of sysfs attributes and configurable. 101 + See Documentation/ABI/testing/sysfs-bus-pci-devices-aer. 102 + 88 103 AER Statistics / Counters 89 104 ------------------------- 90 105 91 106 When PCIe AER errors are captured, the counters / statistics are also exposed 92 107 in the form of sysfs attributes which are documented at 93 - Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats 108 + Documentation/ABI/testing/sysfs-bus-pci-devices-aer. 94 109 95 110 Developer Guide 96 111 ===============
+26 -7
Documentation/devicetree/bindings/pci/apple,pcie.yaml
··· 17 17 implements its root ports. But the ATU found on most DesignWare 18 18 PCIe host bridges is absent. 19 19 20 + On systems derived from T602x, the PHY registers are in a region 21 + separate from the port registers. In that case, there is one PHY 22 + register range per port register range. 23 + 20 24 All root ports share a single ECAM space, but separate GPIOs are 21 25 used to take the PCI devices on those ports out of reset. Therefore 22 26 the standard "reset-gpios" and "max-link-speed" properties appear on ··· 34 30 35 31 properties: 36 32 compatible: 37 - items: 38 - - enum: 39 - - apple,t8103-pcie 40 - - apple,t8112-pcie 41 - - apple,t6000-pcie 42 - - const: apple,pcie 33 + oneOf: 34 + - items: 35 + - enum: 36 + - apple,t8103-pcie 37 + - apple,t8112-pcie 38 + - apple,t6000-pcie 39 + - const: apple,pcie 40 + - const: apple,t6020-pcie 43 41 44 42 reg: 45 43 minItems: 3 46 - maxItems: 6 44 + maxItems: 10 47 45 48 46 reg-names: 49 47 minItems: 3 ··· 56 50 - const: port1 57 51 - const: port2 58 52 - const: port3 53 + - const: phy0 54 + - const: phy1 55 + - const: phy2 56 + - const: phy3 59 57 60 58 ranges: 61 59 minItems: 2 ··· 108 98 maxItems: 5 109 99 interrupts: 110 100 maxItems: 3 101 + - if: 102 + properties: 103 + compatible: 104 + contains: 105 + const: apple,t6020-pcie 106 + then: 107 + properties: 108 + reg-names: 109 + minItems: 10 111 110 112 111 examples: 113 112 - |
+40 -41
Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
··· 186 186 #include <dt-bindings/interrupt-controller/arm-gic.h> 187 187 188 188 scb { 189 - #address-cells = <2>; 190 - #size-cells = <1>; 191 - pcie0: pcie@7d500000 { 192 - compatible = "brcm,bcm2711-pcie"; 193 - reg = <0x0 0x7d500000 0x9310>; 194 - device_type = "pci"; 195 - #address-cells = <3>; 196 - #size-cells = <2>; 197 - #interrupt-cells = <1>; 198 - interrupts = <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>, 199 - <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>; 200 - interrupt-names = "pcie", "msi"; 201 - interrupt-map-mask = <0x0 0x0 0x0 0x7>; 202 - interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH 203 - 0 0 0 2 &gicv2 GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH 204 - 0 0 0 3 &gicv2 GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH 205 - 0 0 0 4 &gicv2 GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>; 189 + #address-cells = <2>; 190 + #size-cells = <1>; 191 + pcie0: pcie@7d500000 { 192 + compatible = "brcm,bcm2711-pcie"; 193 + reg = <0x0 0x7d500000 0x9310>; 194 + device_type = "pci"; 195 + #address-cells = <3>; 196 + #size-cells = <2>; 197 + #interrupt-cells = <1>; 198 + interrupts = <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>, 199 + <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>; 200 + interrupt-names = "pcie", "msi"; 201 + interrupt-map-mask = <0x0 0x0 0x0 0x7>; 202 + interrupt-map = <0 0 0 1 &gicv2 GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH 203 + 0 0 0 2 &gicv2 GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH 204 + 0 0 0 3 &gicv2 GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH 205 + 0 0 0 4 &gicv2 GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>; 206 206 207 - msi-parent = <&pcie0>; 208 - msi-controller; 209 - ranges = <0x02000000 0x0 0xf8000000 0x6 0x00000000 0x0 0x04000000>; 210 - dma-ranges = <0x42000000 0x1 0x00000000 0x0 0x40000000 0x0 0x80000000>, 211 - <0x42000000 0x1 0x80000000 0x3 0x00000000 0x0 0x80000000>; 212 - brcm,enable-ssc; 213 - brcm,scb-sizes = <0x0000000080000000 0x0000000080000000>; 207 + msi-parent = <&pcie0>; 208 + msi-controller; 209 + ranges = <0x02000000 0x0 0xf8000000 0x6 0x00000000 0x0 0x04000000>; 210 + dma-ranges = <0x42000000 0x1 0x00000000 0x0 0x40000000 
0x0 0x80000000>, 211 + <0x42000000 0x1 0x80000000 0x3 0x00000000 0x0 0x80000000>; 212 + brcm,enable-ssc; 213 + brcm,scb-sizes = <0x0000000080000000 0x0000000080000000>; 214 214 215 - /* PCIe bridge, Root Port */ 216 - pci@0,0 { 217 - #address-cells = <3>; 218 - #size-cells = <2>; 219 - reg = <0x0 0x0 0x0 0x0 0x0>; 220 - compatible = "pciclass,0604"; 221 - device_type = "pci"; 222 - vpcie3v3-supply = <&vreg7>; 223 - ranges; 215 + /* PCIe bridge, Root Port */ 216 + pci@0,0 { 217 + #address-cells = <3>; 218 + #size-cells = <2>; 219 + reg = <0x0 0x0 0x0 0x0 0x0>; 220 + compatible = "pciclass,0604"; 221 + device_type = "pci"; 222 + vpcie3v3-supply = <&vreg7>; 223 + ranges; 224 224 225 - /* PCIe endpoint */ 226 - pci-ep@0,0 { 227 - assigned-addresses = 228 - <0x82010000 0x0 0xf8000000 0x6 0x00000000 0x0 0x2000>; 229 - reg = <0x0 0x0 0x0 0x0 0x0>; 230 - compatible = "pci14e4,1688"; 231 - }; 232 - }; 225 + /* PCIe endpoint */ 226 + pci-ep@0,0 { 227 + assigned-addresses = <0x82010000 0x0 0xf8000000 0x6 0x00000000 0x0 0x2000>; 228 + reg = <0x0 0x0 0x0 0x0 0x0>; 229 + compatible = "pci14e4,1688"; 230 + }; 233 231 }; 232 + }; 234 233 };
+8 -8
Documentation/devicetree/bindings/pci/cdns,cdns-pcie-ep.yaml
··· 37 37 #size-cells = <2>; 38 38 39 39 pcie-ep@fc000000 { 40 - compatible = "cdns,cdns-pcie-ep"; 41 - reg = <0x0 0xfc000000 0x0 0x01000000>, 42 - <0x0 0x80000000 0x0 0x40000000>; 43 - reg-names = "reg", "mem"; 44 - cdns,max-outbound-regions = <16>; 45 - max-functions = /bits/ 8 <8>; 46 - phys = <&pcie_phy0>; 47 - phy-names = "pcie-phy"; 40 + compatible = "cdns,cdns-pcie-ep"; 41 + reg = <0x0 0xfc000000 0x0 0x01000000>, 42 + <0x0 0x80000000 0x0 0x40000000>; 43 + reg-names = "reg", "mem"; 44 + cdns,max-outbound-regions = <16>; 45 + max-functions = /bits/ 8 <8>; 46 + phys = <&pcie_phy0>; 47 + phy-names = "pcie-phy"; 48 48 }; 49 49 }; 50 50 ...
+13 -13
Documentation/devicetree/bindings/pci/intel,keembay-pcie-ep.yaml
··· 53 53 #include <dt-bindings/interrupt-controller/arm-gic.h> 54 54 #include <dt-bindings/interrupt-controller/irq.h> 55 55 pcie-ep@37000000 { 56 - compatible = "intel,keembay-pcie-ep"; 57 - reg = <0x37000000 0x00001000>, 58 - <0x37100000 0x00001000>, 59 - <0x37300000 0x00001000>, 60 - <0x36000000 0x01000000>, 61 - <0x37800000 0x00000200>; 62 - reg-names = "dbi", "dbi2", "atu", "addr_space", "apb"; 63 - interrupts = <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>, 64 - <GIC_SPI 108 IRQ_TYPE_EDGE_RISING>, 65 - <GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>, 66 - <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>; 67 - interrupt-names = "pcie", "pcie_ev", "pcie_err", "pcie_mem_access"; 68 - num-lanes = <2>; 56 + compatible = "intel,keembay-pcie-ep"; 57 + reg = <0x37000000 0x00001000>, 58 + <0x37100000 0x00001000>, 59 + <0x37300000 0x00001000>, 60 + <0x36000000 0x01000000>, 61 + <0x37800000 0x00000200>; 62 + reg-names = "dbi", "dbi2", "atu", "addr_space", "apb"; 63 + interrupts = <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>, 64 + <GIC_SPI 108 IRQ_TYPE_EDGE_RISING>, 65 + <GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>, 66 + <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>; 67 + interrupt-names = "pcie", "pcie_ev", "pcie_err", "pcie_mem_access"; 68 + num-lanes = <2>; 69 69 };
+19 -19
Documentation/devicetree/bindings/pci/intel,keembay-pcie.yaml
··· 75 75 #define KEEM_BAY_A53_PCIE 76 76 #define KEEM_BAY_A53_AUX_PCIE 77 77 pcie@37000000 { 78 - compatible = "intel,keembay-pcie"; 79 - reg = <0x37000000 0x00001000>, 80 - <0x37300000 0x00001000>, 81 - <0x36e00000 0x00200000>, 82 - <0x37800000 0x00000200>; 83 - reg-names = "dbi", "atu", "config", "apb"; 84 - #address-cells = <3>; 85 - #size-cells = <2>; 86 - device_type = "pci"; 87 - ranges = <0x02000000 0 0x36000000 0x36000000 0 0x00e00000>; 88 - interrupts = <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>, 89 - <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, 90 - <GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>; 91 - interrupt-names = "pcie", "pcie_ev", "pcie_err"; 92 - clocks = <&scmi_clk KEEM_BAY_A53_PCIE>, 93 - <&scmi_clk KEEM_BAY_A53_AUX_PCIE>; 94 - clock-names = "master", "aux"; 95 - reset-gpios = <&pca2 9 GPIO_ACTIVE_LOW>; 96 - num-lanes = <2>; 78 + compatible = "intel,keembay-pcie"; 79 + reg = <0x37000000 0x00001000>, 80 + <0x37300000 0x00001000>, 81 + <0x36e00000 0x00200000>, 82 + <0x37800000 0x00000200>; 83 + reg-names = "dbi", "atu", "config", "apb"; 84 + #address-cells = <3>; 85 + #size-cells = <2>; 86 + device_type = "pci"; 87 + ranges = <0x02000000 0 0x36000000 0x36000000 0 0x00e00000>; 88 + interrupts = <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>, 89 + <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, 90 + <GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>; 91 + interrupt-names = "pcie", "pcie_ev", "pcie_err"; 92 + clocks = <&scmi_clk KEEM_BAY_A53_PCIE>, 93 + <&scmi_clk KEEM_BAY_A53_AUX_PCIE>; 94 + clock-names = "master", "aux"; 95 + reset-gpios = <&pca2 9 GPIO_ACTIVE_LOW>; 96 + num-lanes = <2>; 97 97 };
+100
Documentation/devicetree/bindings/pci/marvell,armada8k-pcie.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/marvell,armada8k-pcie.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Marvell Armada 7K/8K PCIe interface 8 + 9 + maintainers: 10 + - Thomas Petazzoni <thomas.petazzoni@bootlin.com> 11 + 12 + description: 13 + This PCIe host controller is based on the Synopsys DesignWare PCIe IP. 14 + 15 + select: 16 + properties: 17 + compatible: 18 + contains: 19 + enum: 20 + - marvell,armada8k-pcie 21 + required: 22 + - compatible 23 + 24 + allOf: 25 + - $ref: snps,dw-pcie.yaml# 26 + 27 + properties: 28 + compatible: 29 + items: 30 + - enum: 31 + - marvell,armada8k-pcie 32 + - const: snps,dw-pcie 33 + 34 + reg: 35 + maxItems: 2 36 + 37 + reg-names: 38 + items: 39 + - const: ctrl 40 + - const: config 41 + 42 + clocks: 43 + minItems: 1 44 + maxItems: 2 45 + 46 + clock-names: 47 + items: 48 + - const: core 49 + - const: reg 50 + 51 + interrupts: 52 + maxItems: 1 53 + 54 + msi-parent: 55 + maxItems: 1 56 + 57 + phys: 58 + minItems: 1 59 + maxItems: 4 60 + 61 + phy-names: 62 + minItems: 1 63 + maxItems: 4 64 + 65 + marvell,reset-gpio: 66 + maxItems: 1 67 + deprecated: true 68 + 69 + required: 70 + - interrupt-map 71 + - clocks 72 + - msi-parent 73 + 74 + unevaluatedProperties: false 75 + 76 + examples: 77 + - | 78 + #include <dt-bindings/interrupt-controller/arm-gic.h> 79 + #include <dt-bindings/interrupt-controller/irq.h> 80 + 81 + pcie@f2600000 { 82 + compatible = "marvell,armada8k-pcie", "snps,dw-pcie"; 83 + reg = <0xf2600000 0x10000>, <0xf6f00000 0x80000>; 84 + reg-names = "ctrl", "config"; 85 + #address-cells = <3>; 86 + #size-cells = <2>; 87 + #interrupt-cells = <1>; 88 + device_type = "pci"; 89 + dma-coherent; 90 + msi-parent = <&gic_v2m0>; 91 + 92 + ranges = <0x81000000 0 0xf9000000 0xf9000000 0 0x10000>, /* downstream I/O */ 93 + <0x82000000 0 0xf6000000 0xf6000000 0 0xf00000>; /* non-prefetchable memory */ 94 + interrupt-map-mask = <0 0 
0 0>; 95 + interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>; 96 + interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>; 97 + num-lanes = <1>; 98 + clocks = <&cpm_syscon0 1 13>; 99 + }; 100 + ...
+277
Documentation/devicetree/bindings/pci/marvell,kirkwood-pcie.yaml
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/marvell,kirkwood-pcie.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Marvell EBU PCIe interfaces
+
+maintainers:
+  - Thomas Petazzoni <thomas.petazzoni@bootlin.com>
+  - Pali Rohár <pali@kernel.org>
+
+allOf:
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
+
+properties:
+  compatible:
+    enum:
+      - marvell,armada-370-pcie
+      - marvell,armada-xp-pcie
+      - marvell,dove-pcie
+      - marvell,kirkwood-pcie
+
+  ranges:
+    description: >
+      The ranges describing the MMIO registers have the following layout:
+
+        0x82000000 0 r MBUS_ID(0xf0, 0x01) r 0 s
+
+      where:
+
+      * r is a 32-bits value that gives the offset of the MMIO registers of
+        this PCIe interface, from the base of the internal registers.
+
+      * s is a 32-bits value that give the size of this MMIO registers area.
+        This range entry translates the '0x82000000 0 r' PCI address into the
+        'MBUS_ID(0xf0, 0x01) r' CPU address, which is part of the internal
+        register window (as identified by MBUS_ID(0xf0, 0x01)).
+
+      The ranges describing the MBus windows have the following layout:
+
+        0x8t000000 s 0 MBUS_ID(w, a) 0 1 0
+
+      where:
+
+      * t is the type of the MBus window (as defined by the standard PCI DT
+        bindings), 1 for I/O and 2 for memory.
+
+      * s is the PCI slot that corresponds to this PCIe interface
+
+      * w is the 'target ID' value for the MBus window
+
+      * a the 'attribute' value for the MBus window.
+
+      Since the location and size of the different MBus windows is not fixed in
+      hardware, and only determined in runtime, those ranges cover the full first
+      4 GB of the physical address space, and do not translate into a valid CPU
+      address.
+
+  msi-parent:
+    maxItems: 1
+
+patternProperties:
+  '^pcie@':
+    type: object
+    allOf:
+      - $ref: /schemas/pci/pci-bus-common.yaml#
+      - $ref: /schemas/pci/pci-device.yaml#
+    unevaluatedProperties: false
+
+    properties:
+      clocks:
+        maxItems: 1
+
+      interrupts:
+        minItems: 1
+        maxItems: 2
+
+      interrupt-names:
+        minItems: 1
+        items:
+          - const: intx
+          - const: error
+
+      reset-delay-us:
+        default: 100000
+        description: todo
+
+      marvell,pcie-port:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        maximum: 3
+        description: todo
+
+      marvell,pcie-lane:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        maximum: 3
+        description: todo
+
+      interrupt-controller:
+        type: object
+        additionalProperties: false
+
+        properties:
+          interrupt-controller: true
+
+          '#interrupt-cells':
+            const: 1
+
+    required:
+      - assigned-addresses
+      - clocks
+      - interrupt-map
+      - marvell,pcie-port
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #define MBUS_ID(target,attributes) (((target) << 24) | ((attributes) << 16))
+
+    soc {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        pcie@f001000000000000 {
+            compatible = "marvell,armada-xp-pcie";
+            device_type = "pci";
+
+            #address-cells = <3>;
+            #size-cells = <2>;
+
+            bus-range = <0x00 0xff>;
+            msi-parent = <&mpic>;
+
+            ranges =
+                <0x82000000 0 0x40000 MBUS_ID(0xf0, 0x01) 0x40000 0 0x00002000   /* Port 0.0 registers */
+                 0x82000000 0 0x42000 MBUS_ID(0xf0, 0x01) 0x42000 0 0x00002000   /* Port 2.0 registers */
+                 0x82000000 0 0x44000 MBUS_ID(0xf0, 0x01) 0x44000 0 0x00002000   /* Port 0.1 registers */
+                 0x82000000 0 0x48000 MBUS_ID(0xf0, 0x01) 0x48000 0 0x00002000   /* Port 0.2 registers */
+                 0x82000000 0 0x4c000 MBUS_ID(0xf0, 0x01) 0x4c000 0 0x00002000   /* Port 0.3 registers */
+                 0x82000000 0 0x80000 MBUS_ID(0xf0, 0x01) 0x80000 0 0x00002000   /* Port 1.0 registers */
+                 0x82000000 0 0x82000 MBUS_ID(0xf0, 0x01) 0x82000 0 0x00002000   /* Port 3.0 registers */
+                 0x82000000 0 0x84000 MBUS_ID(0xf0, 0x01) 0x84000 0 0x00002000   /* Port 1.1 registers */
+                 0x82000000 0 0x88000 MBUS_ID(0xf0, 0x01) 0x88000 0 0x00002000   /* Port 1.2 registers */
+                 0x82000000 0 0x8c000 MBUS_ID(0xf0, 0x01) 0x8c000 0 0x00002000   /* Port 1.3 registers */
+                 0x82000000 0x1 0 MBUS_ID(0x04, 0xe8) 0 1 0 /* Port 0.0 MEM */
+                 0x81000000 0x1 0 MBUS_ID(0x04, 0xe0) 0 1 0 /* Port 0.0 IO */
+                 0x82000000 0x2 0 MBUS_ID(0x04, 0xd8) 0 1 0 /* Port 0.1 MEM */
+                 0x81000000 0x2 0 MBUS_ID(0x04, 0xd0) 0 1 0 /* Port 0.1 IO */
+                 0x82000000 0x3 0 MBUS_ID(0x04, 0xb8) 0 1 0 /* Port 0.2 MEM */
+                 0x81000000 0x3 0 MBUS_ID(0x04, 0xb0) 0 1 0 /* Port 0.2 IO */
+                 0x82000000 0x4 0 MBUS_ID(0x04, 0x78) 0 1 0 /* Port 0.3 MEM */
+                 0x81000000 0x4 0 MBUS_ID(0x04, 0x70) 0 1 0 /* Port 0.3 IO */
+
+                 0x82000000 0x5 0 MBUS_ID(0x08, 0xe8) 0 1 0 /* Port 1.0 MEM */
+                 0x81000000 0x5 0 MBUS_ID(0x08, 0xe0) 0 1 0 /* Port 1.0 IO */
+                 0x82000000 0x6 0 MBUS_ID(0x08, 0xd8) 0 1 0 /* Port 1.1 MEM */
+                 0x81000000 0x6 0 MBUS_ID(0x08, 0xd0) 0 1 0 /* Port 1.1 IO */
+                 0x82000000 0x7 0 MBUS_ID(0x08, 0xb8) 0 1 0 /* Port 1.2 MEM */
+                 0x81000000 0x7 0 MBUS_ID(0x08, 0xb0) 0 1 0 /* Port 1.2 IO */
+                 0x82000000 0x8 0 MBUS_ID(0x08, 0x78) 0 1 0 /* Port 1.3 MEM */
+                 0x81000000 0x8 0 MBUS_ID(0x08, 0x70) 0 1 0 /* Port 1.3 IO */
+
+                 0x82000000 0x9 0 MBUS_ID(0x04, 0xf8) 0 1 0 /* Port 2.0 MEM */
+                 0x81000000 0x9 0 MBUS_ID(0x04, 0xf0) 0 1 0 /* Port 2.0 IO */
+
+                 0x82000000 0xa 0 MBUS_ID(0x08, 0xf8) 0 1 0 /* Port 3.0 MEM */
+                 0x81000000 0xa 0 MBUS_ID(0x08, 0xf0) 0 1 0 /* Port 3.0 IO */>;
+
+            pcie@1,0 {
+                device_type = "pci";
+                assigned-addresses = <0x82000800 0 0x40000 0 0x2000>;
+                reg = <0x0800 0 0 0 0>;
+                #address-cells = <3>;
+                #size-cells = <2>;
+                #interrupt-cells = <1>;
+                ranges = <0x82000000 0 0 0x82000000 0x1 0 1 0
+                          0x81000000 0 0 0x81000000 0x1 0 1 0>;
+                interrupt-map-mask = <0 0 0 0>;
+                interrupt-map = <0 0 0 0 &mpic 58>;
+                marvell,pcie-port = <0>;
+                marvell,pcie-lane = <0>;
+                num-lanes = <1>;
+                /* low-active PERST# reset on GPIO 25 */
+                reset-gpios = <&gpio0 25 1>;
+                /* wait 20ms for device settle after reset deassertion */
+                reset-delay-us = <20000>;
+                clocks = <&gateclk 5>;
+            };
+
+            pcie@2,0 {
+                device_type = "pci";
+                assigned-addresses = <0x82001000 0 0x44000 0 0x2000>;
+                reg = <0x1000 0 0 0 0>;
+                #address-cells = <3>;
+                #size-cells = <2>;
+                #interrupt-cells = <1>;
+                ranges = <0x82000000 0 0 0x82000000 0x2 0 1 0
+                          0x81000000 0 0 0x81000000 0x2 0 1 0>;
+                interrupt-map-mask = <0 0 0 0>;
+                interrupt-map = <0 0 0 0 &mpic 59>;
+                marvell,pcie-port = <0>;
+                marvell,pcie-lane = <1>;
+                num-lanes = <1>;
+                clocks = <&gateclk 6>;
+            };
+
+            pcie@3,0 {
+                device_type = "pci";
+                assigned-addresses = <0x82001800 0 0x48000 0 0x2000>;
+                reg = <0x1800 0 0 0 0>;
+                #address-cells = <3>;
+                #size-cells = <2>;
+                #interrupt-cells = <1>;
+                ranges = <0x82000000 0 0 0x82000000 0x3 0 1 0
+                          0x81000000 0 0 0x81000000 0x3 0 1 0>;
+                interrupt-map-mask = <0 0 0 0>;
+                interrupt-map = <0 0 0 0 &mpic 60>;
+                marvell,pcie-port = <0>;
+                marvell,pcie-lane = <2>;
+                num-lanes = <1>;
+                clocks = <&gateclk 7>;
+            };
+
+            pcie@4,0 {
+                device_type = "pci";
+                assigned-addresses = <0x82002000 0 0x4c000 0 0x2000>;
+                reg = <0x2000 0 0 0 0>;
+                #address-cells = <3>;
+                #size-cells = <2>;
+                #interrupt-cells = <1>;
+                ranges = <0x82000000 0 0 0x82000000 0x4 0 1 0
+                          0x81000000 0 0 0x81000000 0x4 0 1 0>;
+                interrupt-map-mask = <0 0 0 0>;
+                interrupt-map = <0 0 0 0 &mpic 61>;
+                marvell,pcie-port = <0>;
+                marvell,pcie-lane = <3>;
+                num-lanes = <1>;
+                clocks = <&gateclk 8>;
+            };
+
+            pcie@5,0 {
+                device_type = "pci";
+                assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
+                reg = <0x2800 0 0 0 0>;
+                #address-cells = <3>;
+                #size-cells = <2>;
+                #interrupt-cells = <1>;
+                ranges = <0x82000000 0 0 0x82000000 0x5 0 1 0
+                          0x81000000 0 0 0x81000000 0x5 0 1 0>;
+                interrupt-map-mask = <0 0 0 0>;
+                interrupt-map = <0 0 0 0 &mpic 62>;
+                marvell,pcie-port = <1>;
+                marvell,pcie-lane = <0>;
+                num-lanes = <1>;
+                clocks = <&gateclk 9>;
+            };
+
+            pcie@6,0 {
+                device_type = "pci";
+                assigned-addresses = <0x82003000 0 0x84000 0 0x2000>;
+                reg = <0x3000 0 0 0 0>;
+                #address-cells = <3>;
+                #size-cells = <2>;
+                #interrupt-cells = <1>;
+                ranges = <0x82000000 0 0 0x82000000 0x6 0 1 0
+                          0x81000000 0 0 0x81000000 0x6 0 1 0>;
+                interrupt-map-mask = <0 0 0 0>;
+                interrupt-map = <0 0 0 0 &mpic 63>;
+                marvell,pcie-port = <1>;
+                marvell,pcie-lane = <1>;
+                num-lanes = <1>;
+                clocks = <&gateclk 10>;
+            };
+        };
+    };
+...
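As an aside, the `MBUS_ID()` macro in the examples above simply packs the MBus target and attribute into the top two bytes of the CPU-side address cell. A quick Python transcription of the macro from the DT example (the function name is mine, the constants come from the binding's own windows):

```python
def mbus_id(target: int, attributes: int) -> int:
    """(((target) << 24) | ((attributes) << 16)), as in the DT example."""
    return (target << 24) | (attributes << 16)

# Port 0.0 MEM window from the example: target 0x04, attribute 0xe8.
assert mbus_id(0x04, 0xe8) == 0x04e80000
# The internal-register window used for the per-port control registers.
assert mbus_id(0xf0, 0x01) == 0xf0010000
```

This makes it easy to see why the register ranges all translate into the `MBUS_ID(0xf0, 0x01)` window while each port's MEM/IO windows use distinct target/attribute pairs.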
Documentation/devicetree/bindings/pci/microchip,pcie-host.yaml (+28 -28)
···
     items:
       pattern: '^fic[0-3]$'
 
-  dma-coherent: true
+  dma-noncoherent: true
 
   ranges:
     minItems: 1
···
 examples:
   - |
     soc {
-        #address-cells = <2>;
+      #address-cells = <2>;
+      #size-cells = <2>;
+      pcie0: pcie@2030000000 {
+        compatible = "microchip,pcie-host-1.0";
+        reg = <0x0 0x70000000 0x0 0x08000000>,
+              <0x0 0x43008000 0x0 0x00002000>,
+              <0x0 0x4300a000 0x0 0x00002000>;
+        reg-names = "cfg", "bridge", "ctrl";
+        device_type = "pci";
+        #address-cells = <3>;
         #size-cells = <2>;
-        pcie0: pcie@2030000000 {
-            compatible = "microchip,pcie-host-1.0";
-            reg = <0x0 0x70000000 0x0 0x08000000>,
-                  <0x0 0x43008000 0x0 0x00002000>,
-                  <0x0 0x4300a000 0x0 0x00002000>;
-            reg-names = "cfg", "bridge", "ctrl";
-            device_type = "pci";
-            #address-cells = <3>;
-            #size-cells = <2>;
-            #interrupt-cells = <1>;
-            interrupts = <119>;
-            interrupt-map-mask = <0x0 0x0 0x0 0x7>;
-            interrupt-map = <0 0 0 1 &pcie_intc0 0>,
-                            <0 0 0 2 &pcie_intc0 1>,
-                            <0 0 0 3 &pcie_intc0 2>,
-                            <0 0 0 4 &pcie_intc0 3>;
-            interrupt-parent = <&plic0>;
-            msi-parent = <&pcie0>;
-            msi-controller;
-            bus-range = <0x00 0x7f>;
-            ranges = <0x03000000 0x0 0x78000000 0x0 0x78000000 0x0 0x04000000>;
-            pcie_intc0: interrupt-controller {
-                #address-cells = <0>;
-                #interrupt-cells = <1>;
-                interrupt-controller;
-            };
+        #interrupt-cells = <1>;
+        interrupts = <119>;
+        interrupt-map-mask = <0x0 0x0 0x0 0x7>;
+        interrupt-map = <0 0 0 1 &pcie_intc0 0>,
+                        <0 0 0 2 &pcie_intc0 1>,
+                        <0 0 0 3 &pcie_intc0 2>,
+                        <0 0 0 4 &pcie_intc0 3>;
+        interrupt-parent = <&plic0>;
+        msi-parent = <&pcie0>;
+        msi-controller;
+        bus-range = <0x00 0x7f>;
+        ranges = <0x03000000 0x0 0x78000000 0x0 0x78000000 0x0 0x04000000>;
+        pcie_intc0: interrupt-controller {
+          #address-cells = <0>;
+          #interrupt-cells = <1>;
+          interrupt-controller;
         };
+
+      };
     };
Documentation/devicetree/bindings/pci/mvebu-pci.txt (deleted, -310)
-* Marvell EBU PCIe interfaces
-
-Mandatory properties:
-
-- compatible: one of the following values:
-    marvell,armada-370-pcie
-    marvell,armada-xp-pcie
-    marvell,dove-pcie
-    marvell,kirkwood-pcie
-- #address-cells, set to <3>
-- #size-cells, set to <2>
-- #interrupt-cells, set to <1>
-- bus-range: PCI bus numbers covered
-- device_type, set to "pci"
-- ranges: ranges describing the MMIO registers to control the PCIe
-  interfaces, and ranges describing the MBus windows needed to access
-  the memory and I/O regions of each PCIe interface.
-- msi-parent: Link to the hardware entity that serves as the Message
-  Signaled Interrupt controller for this PCI controller.
-
-The ranges describing the MMIO registers have the following layout:
-
-    0x82000000 0 r MBUS_ID(0xf0, 0x01) r 0 s
-
-where:
-
-  * r is a 32-bits value that gives the offset of the MMIO
-    registers of this PCIe interface, from the base of the internal
-    registers.
-
-  * s is a 32-bits value that give the size of this MMIO
-    registers area. This range entry translates the '0x82000000 0 r' PCI
-    address into the 'MBUS_ID(0xf0, 0x01) r' CPU address, which is part
-    of the internal register window (as identified by MBUS_ID(0xf0,
-    0x01)).
-
-The ranges describing the MBus windows have the following layout:
-
-    0x8t000000 s 0 MBUS_ID(w, a) 0 1 0
-
-where:
-
-  * t is the type of the MBus window (as defined by the standard PCI DT
-    bindings), 1 for I/O and 2 for memory.
-
-  * s is the PCI slot that corresponds to this PCIe interface
-
-  * w is the 'target ID' value for the MBus window
-
-  * a the 'attribute' value for the MBus window.
-
-Since the location and size of the different MBus windows is not fixed in
-hardware, and only determined in runtime, those ranges cover the full first
-4 GB of the physical address space, and do not translate into a valid CPU
-address.
-
-In addition, the device tree node must have sub-nodes describing each
-PCIe interface, having the following mandatory properties:
-
-- reg: used only for interrupt mapping, so only the first four bytes
-  are used to refer to the correct bus number and device number.
-- assigned-addresses: reference to the MMIO registers used to control
-  this PCIe interface.
-- clocks: the clock associated to this PCIe interface
-- marvell,pcie-port: the physical PCIe port number
-- status: either "disabled" or "okay"
-- device_type, set to "pci"
-- #address-cells, set to <3>
-- #size-cells, set to <2>
-- #interrupt-cells, set to <1>
-- ranges, translating the MBus windows ranges of the parent node into
-  standard PCI addresses.
-- interrupt-map-mask and interrupt-map, standard PCI properties to
-  define the mapping of the PCIe interface to interrupt numbers.
-
-and the following optional properties:
-- marvell,pcie-lane: the physical PCIe lane number, for ports having
-  multiple lanes. If this property is not found, we assume that the
-  value is 0.
-- num-lanes: number of SerDes PCIe lanes for this link (1 or 4)
-- reset-gpios: optional GPIO to PERST#
-- reset-delay-us: delay in us to wait after reset de-assertion, if not
-  specified will default to 100ms, as required by the PCIe specification.
-- interrupt-names: list of interrupt names, supported are:
-    - "intx" - interrupt line triggered by one of the legacy interrupt
-- interrupts or interrupts-extended: List of the interrupt sources which
-  corresponding to the "interrupt-names". If non-empty then also additional
-  'interrupt-controller' subnode must be defined.
-
-Example:
-
-pcie-controller {
-    compatible = "marvell,armada-xp-pcie";
-    device_type = "pci";
-
-    #address-cells = <3>;
-    #size-cells = <2>;
-
-    bus-range = <0x00 0xff>;
-    msi-parent = <&mpic>;
-
-    ranges =
-        <0x82000000 0 0x40000 MBUS_ID(0xf0, 0x01) 0x40000 0 0x00002000   /* Port 0.0 registers */
-         0x82000000 0 0x42000 MBUS_ID(0xf0, 0x01) 0x42000 0 0x00002000   /* Port 2.0 registers */
-         0x82000000 0 0x44000 MBUS_ID(0xf0, 0x01) 0x44000 0 0x00002000   /* Port 0.1 registers */
-         0x82000000 0 0x48000 MBUS_ID(0xf0, 0x01) 0x48000 0 0x00002000   /* Port 0.2 registers */
-         0x82000000 0 0x4c000 MBUS_ID(0xf0, 0x01) 0x4c000 0 0x00002000   /* Port 0.3 registers */
-         0x82000000 0 0x80000 MBUS_ID(0xf0, 0x01) 0x80000 0 0x00002000   /* Port 1.0 registers */
-         0x82000000 0 0x82000 MBUS_ID(0xf0, 0x01) 0x82000 0 0x00002000   /* Port 3.0 registers */
-         0x82000000 0 0x84000 MBUS_ID(0xf0, 0x01) 0x84000 0 0x00002000   /* Port 1.1 registers */
-         0x82000000 0 0x88000 MBUS_ID(0xf0, 0x01) 0x88000 0 0x00002000   /* Port 1.2 registers */
-         0x82000000 0 0x8c000 MBUS_ID(0xf0, 0x01) 0x8c000 0 0x00002000   /* Port 1.3 registers */
-         0x82000000 0x1 0 MBUS_ID(0x04, 0xe8) 0 1 0 /* Port 0.0 MEM */
-         0x81000000 0x1 0 MBUS_ID(0x04, 0xe0) 0 1 0 /* Port 0.0 IO */
-         0x82000000 0x2 0 MBUS_ID(0x04, 0xd8) 0 1 0 /* Port 0.1 MEM */
-         0x81000000 0x2 0 MBUS_ID(0x04, 0xd0) 0 1 0 /* Port 0.1 IO */
-         0x82000000 0x3 0 MBUS_ID(0x04, 0xb8) 0 1 0 /* Port 0.2 MEM */
-         0x81000000 0x3 0 MBUS_ID(0x04, 0xb0) 0 1 0 /* Port 0.2 IO */
-         0x82000000 0x4 0 MBUS_ID(0x04, 0x78) 0 1 0 /* Port 0.3 MEM */
-         0x81000000 0x4 0 MBUS_ID(0x04, 0x70) 0 1 0 /* Port 0.3 IO */
-
-         0x82000000 0x5 0 MBUS_ID(0x08, 0xe8) 0 1 0 /* Port 1.0 MEM */
-         0x81000000 0x5 0 MBUS_ID(0x08, 0xe0) 0 1 0 /* Port 1.0 IO */
-         0x82000000 0x6 0 MBUS_ID(0x08, 0xd8) 0 1 0 /* Port 1.1 MEM */
-         0x81000000 0x6 0 MBUS_ID(0x08, 0xd0) 0 1 0 /* Port 1.1 IO */
-         0x82000000 0x7 0 MBUS_ID(0x08, 0xb8) 0 1 0 /* Port 1.2 MEM */
-         0x81000000 0x7 0 MBUS_ID(0x08, 0xb0) 0 1 0 /* Port 1.2 IO */
-         0x82000000 0x8 0 MBUS_ID(0x08, 0x78) 0 1 0 /* Port 1.3 MEM */
-         0x81000000 0x8 0 MBUS_ID(0x08, 0x70) 0 1 0 /* Port 1.3 IO */
-
-         0x82000000 0x9 0 MBUS_ID(0x04, 0xf8) 0 1 0 /* Port 2.0 MEM */
-         0x81000000 0x9 0 MBUS_ID(0x04, 0xf0) 0 1 0 /* Port 2.0 IO */
-
-         0x82000000 0xa 0 MBUS_ID(0x08, 0xf8) 0 1 0 /* Port 3.0 MEM */
-         0x81000000 0xa 0 MBUS_ID(0x08, 0xf0) 0 1 0 /* Port 3.0 IO */>;
-
-    pcie@1,0 {
-        device_type = "pci";
-        assigned-addresses = <0x82000800 0 0x40000 0 0x2000>;
-        reg = <0x0800 0 0 0 0>;
-        #address-cells = <3>;
-        #size-cells = <2>;
-        #interrupt-cells = <1>;
-        ranges = <0x82000000 0 0 0x82000000 0x1 0 1 0
-                  0x81000000 0 0 0x81000000 0x1 0 1 0>;
-        interrupt-map-mask = <0 0 0 0>;
-        interrupt-map = <0 0 0 0 &mpic 58>;
-        marvell,pcie-port = <0>;
-        marvell,pcie-lane = <0>;
-        num-lanes = <1>;
-        /* low-active PERST# reset on GPIO 25 */
-        reset-gpios = <&gpio0 25 1>;
-        /* wait 20ms for device settle after reset deassertion */
-        reset-delay-us = <20000>;
-        clocks = <&gateclk 5>;
-    };
-
-    pcie@2,0 {
-        device_type = "pci";
-        assigned-addresses = <0x82001000 0 0x44000 0 0x2000>;
-        reg = <0x1000 0 0 0 0>;
-        #address-cells = <3>;
-        #size-cells = <2>;
-        #interrupt-cells = <1>;
-        ranges = <0x82000000 0 0 0x82000000 0x2 0 1 0
-                  0x81000000 0 0 0x81000000 0x2 0 1 0>;
-        interrupt-map-mask = <0 0 0 0>;
-        interrupt-map = <0 0 0 0 &mpic 59>;
-        marvell,pcie-port = <0>;
-        marvell,pcie-lane = <1>;
-        num-lanes = <1>;
-        clocks = <&gateclk 6>;
-    };
-
-    pcie@3,0 {
-        device_type = "pci";
-        assigned-addresses = <0x82001800 0 0x48000 0 0x2000>;
-        reg = <0x1800 0 0 0 0>;
-        #address-cells = <3>;
-        #size-cells = <2>;
-        #interrupt-cells = <1>;
-        ranges = <0x82000000 0 0 0x82000000 0x3 0 1 0
-                  0x81000000 0 0 0x81000000 0x3 0 1 0>;
-        interrupt-map-mask = <0 0 0 0>;
-        interrupt-map = <0 0 0 0 &mpic 60>;
-        marvell,pcie-port = <0>;
-        marvell,pcie-lane = <2>;
-        num-lanes = <1>;
-        clocks = <&gateclk 7>;
-    };
-
-    pcie@4,0 {
-        device_type = "pci";
-        assigned-addresses = <0x82002000 0 0x4c000 0 0x2000>;
-        reg = <0x2000 0 0 0 0>;
-        #address-cells = <3>;
-        #size-cells = <2>;
-        #interrupt-cells = <1>;
-        ranges = <0x82000000 0 0 0x82000000 0x4 0 1 0
-                  0x81000000 0 0 0x81000000 0x4 0 1 0>;
-        interrupt-map-mask = <0 0 0 0>;
-        interrupt-map = <0 0 0 0 &mpic 61>;
-        marvell,pcie-port = <0>;
-        marvell,pcie-lane = <3>;
-        num-lanes = <1>;
-        clocks = <&gateclk 8>;
-    };
-
-    pcie@5,0 {
-        device_type = "pci";
-        assigned-addresses = <0x82002800 0 0x80000 0 0x2000>;
-        reg = <0x2800 0 0 0 0>;
-        #address-cells = <3>;
-        #size-cells = <2>;
-        #interrupt-cells = <1>;
-        ranges = <0x82000000 0 0 0x82000000 0x5 0 1 0
-                  0x81000000 0 0 0x81000000 0x5 0 1 0>;
-        interrupt-map-mask = <0 0 0 0>;
-        interrupt-map = <0 0 0 0 &mpic 62>;
-        marvell,pcie-port = <1>;
-        marvell,pcie-lane = <0>;
-        num-lanes = <1>;
-        clocks = <&gateclk 9>;
-    };
-
-    pcie@6,0 {
-        device_type = "pci";
-        assigned-addresses = <0x82003000 0 0x84000 0 0x2000>;
-        reg = <0x3000 0 0 0 0>;
-        #address-cells = <3>;
-        #size-cells = <2>;
-        #interrupt-cells = <1>;
-        ranges = <0x82000000 0 0 0x82000000 0x6 0 1 0
-                  0x81000000 0 0 0x81000000 0x6 0 1 0>;
-        interrupt-map-mask = <0 0 0 0>;
-        interrupt-map = <0 0 0 0 &mpic 63>;
-        marvell,pcie-port = <1>;
-        marvell,pcie-lane = <1>;
-        num-lanes = <1>;
-        clocks = <&gateclk 10>;
-    };
-
-    pcie@7,0 {
-        device_type = "pci";
-        assigned-addresses = <0x82003800 0 0x88000 0 0x2000>;
-        reg = <0x3800 0 0 0 0>;
-        #address-cells = <3>;
-        #size-cells = <2>;
-        #interrupt-cells = <1>;
-        ranges = <0x82000000 0 0 0x82000000 0x7 0 1 0
-                  0x81000000 0 0 0x81000000 0x7 0 1 0>;
-        interrupt-map-mask = <0 0 0 0>;
-        interrupt-map = <0 0 0 0 &mpic 64>;
-        marvell,pcie-port = <1>;
-        marvell,pcie-lane = <2>;
-        num-lanes = <1>;
-        clocks = <&gateclk 11>;
-    };
-
-    pcie@8,0 {
-        device_type = "pci";
-        assigned-addresses = <0x82004000 0 0x8c000 0 0x2000>;
-        reg = <0x4000 0 0 0 0>;
-        #address-cells = <3>;
-        #size-cells = <2>;
-        #interrupt-cells = <1>;
-        ranges = <0x82000000 0 0 0x82000000 0x8 0 1 0
-                  0x81000000 0 0 0x81000000 0x8 0 1 0>;
-        interrupt-map-mask = <0 0 0 0>;
-        interrupt-map = <0 0 0 0 &mpic 65>;
-        marvell,pcie-port = <1>;
-        marvell,pcie-lane = <3>;
-        num-lanes = <1>;
-        clocks = <&gateclk 12>;
-    };
-
-    pcie@9,0 {
-        device_type = "pci";
-        assigned-addresses = <0x82004800 0 0x42000 0 0x2000>;
-        reg = <0x4800 0 0 0 0>;
-        #address-cells = <3>;
-        #size-cells = <2>;
-        #interrupt-cells = <1>;
-        ranges = <0x82000000 0 0 0x82000000 0x9 0 1 0
-                  0x81000000 0 0 0x81000000 0x9 0 1 0>;
-        interrupt-map-mask = <0 0 0 0>;
-        interrupt-map = <0 0 0 0 &mpic 99>;
-        marvell,pcie-port = <2>;
-        marvell,pcie-lane = <0>;
-        num-lanes = <1>;
-        clocks = <&gateclk 26>;
-    };
-
-    pcie@a,0 {
-        device_type = "pci";
-        assigned-addresses = <0x82005000 0 0x82000 0 0x2000>;
-        reg = <0x5000 0 0 0 0>;
-        #address-cells = <3>;
-        #size-cells = <2>;
-        #interrupt-cells = <1>;
-        ranges = <0x82000000 0 0 0x82000000 0xa 0 1 0
-                  0x81000000 0 0 0x81000000 0xa 0 1 0>;
-        interrupt-map-mask = <0 0 0 0>;
-        interrupt-map = <0 0 0 0 &mpic 103>;
-        marvell,pcie-port = <3>;
-        marvell,pcie-lane = <0>;
-        num-lanes = <1>;
-        clocks = <&gateclk 27>;
-    };
-};
Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie-ep.yaml (+1 -1)
···
 
   reset-gpios:
     description: Must contain a phandle to a GPIO controller followed by GPIO
-      that is being used as PERST input signal. Please refer to pci.txt.
+      that is being used as PERST input signal.
 
   phys:
     minItems: 1
Documentation/devicetree/bindings/pci/pci-armada8k.txt (deleted, -48)
-* Marvell Armada 7K/8K PCIe interface
-
-This PCIe host controller is based on the Synopsys DesignWare PCIe IP
-and thus inherits all the common properties defined in snps,dw-pcie.yaml.
-
-Required properties:
-- compatible: "marvell,armada8k-pcie"
-- reg: must contain two register regions
-  - the control register region
-  - the config space region
-- reg-names:
-  - "ctrl" for the control register region
-  - "config" for the config space region
-- interrupts: Interrupt specifier for the PCIe controller
-- clocks: reference to the PCIe controller clocks
-- clock-names: mandatory if there is a second clock, in this case the
-  name must be "core" for the first clock and "reg" for the second
-  one
-
-Optional properties:
-- phys: phandle(s) to PHY node(s) following the generic PHY bindings.
-  Either 1, 2 or 4 PHYs might be needed depending on the number of
-  PCIe lanes.
-- phy-names: names of the PHYs corresponding to the number of lanes.
-  Must be "cp0-pcie0-x4-lane0-phy", "cp0-pcie0-x4-lane1-phy" for
-  2 PHYs.
-
-Example:
-
-pcie@f2600000 {
-    compatible = "marvell,armada8k-pcie", "snps,dw-pcie";
-    reg = <0 0xf2600000 0 0x10000>, <0 0xf6f00000 0 0x80000>;
-    reg-names = "ctrl", "config";
-    #address-cells = <3>;
-    #size-cells = <2>;
-    #interrupt-cells = <1>;
-    device_type = "pci";
-    dma-coherent;
-
-    bus-range = <0 0xff>;
-    ranges = <0x81000000 0 0xf9000000 0 0xf9000000 0 0x10000    /* downstream I/O */
-              0x82000000 0 0xf6000000 0 0xf6000000 0 0xf00000>; /* non-prefetchable memory */
-    interrupt-map-mask = <0 0 0 0>;
-    interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
-    interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
-    num-lanes = <1>;
-    clocks = <&cpm_syscon0 1 13>;
-};
Documentation/devicetree/bindings/pci/pci-iommu.txt (deleted, -171)
-This document describes the generic device tree binding for describing the
-relationship between PCI(e) devices and IOMMU(s).
-
-Each PCI(e) device under a root complex is uniquely identified by its Requester
-ID (AKA RID). A Requester ID is a triplet of a Bus number, Device number, and
-Function number.
-
-For the purpose of this document, when treated as a numeric value, a RID is
-formatted such that:
-
-* Bits [15:8] are the Bus number.
-* Bits [7:3] are the Device number.
-* Bits [2:0] are the Function number.
-* Any other bits required for padding must be zero.
-
-IOMMUs may distinguish PCI devices through sideband data derived from the
-Requester ID. While a given PCI device can only master through one IOMMU, a
-root complex may split masters across a set of IOMMUs (e.g. with one IOMMU per
-bus).
-
-The generic 'iommus' property is insufficient to describe this relationship,
-and a mechanism is required to map from a PCI device to its IOMMU and sideband
-data.
-
-For generic IOMMU bindings, see
-Documentation/devicetree/bindings/iommu/iommu.txt.
-
-
-PCI root complex
-================
-
-Optional properties
--------------------
-
-- iommu-map: Maps a Requester ID to an IOMMU and associated IOMMU specifier
-  data.
-
-  The property is an arbitrary number of tuples of
-  (rid-base,iommu,iommu-base,length).
-
-  Any RID r in the interval [rid-base, rid-base + length) is associated with
-  the listed IOMMU, with the IOMMU specifier (r - rid-base + iommu-base).
-
-- iommu-map-mask: A mask to be applied to each Requester ID prior to being
-  mapped to an IOMMU specifier per the iommu-map property.
-
-
-Example (1)
-===========
-
-/ {
-    #address-cells = <1>;
-    #size-cells = <1>;
-
-    iommu: iommu@a {
-        reg = <0xa 0x1>;
-        compatible = "vendor,some-iommu";
-        #iommu-cells = <1>;
-    };
-
-    pci: pci@f {
-        reg = <0xf 0x1>;
-        compatible = "vendor,pcie-root-complex";
-        device_type = "pci";
-
-        /*
-         * The sideband data provided to the IOMMU is the RID,
-         * identity-mapped.
-         */
-        iommu-map = <0x0 &iommu 0x0 0x10000>;
-    };
-};
-
-
-Example (2)
-===========
-
-/ {
-    #address-cells = <1>;
-    #size-cells = <1>;
-
-    iommu: iommu@a {
-        reg = <0xa 0x1>;
-        compatible = "vendor,some-iommu";
-        #iommu-cells = <1>;
-    };
-
-    pci: pci@f {
-        reg = <0xf 0x1>;
-        compatible = "vendor,pcie-root-complex";
-        device_type = "pci";
-
-        /*
-         * The sideband data provided to the IOMMU is the RID with the
-         * function bits masked out.
-         */
-        iommu-map = <0x0 &iommu 0x0 0x10000>;
-        iommu-map-mask = <0xfff8>;
-    };
-};
-
-
-Example (3)
-===========
-
-/ {
-    #address-cells = <1>;
-    #size-cells = <1>;
-
-    iommu: iommu@a {
-        reg = <0xa 0x1>;
-        compatible = "vendor,some-iommu";
-        #iommu-cells = <1>;
-    };
-
-    pci: pci@f {
-        reg = <0xf 0x1>;
-        compatible = "vendor,pcie-root-complex";
-        device_type = "pci";
-
-        /*
-         * The sideband data provided to the IOMMU is the RID,
-         * but the high bits of the bus number are flipped.
-         */
-        iommu-map = <0x0000 &iommu 0x8000 0x8000>,
-                    <0x8000 &iommu 0x0000 0x8000>;
-    };
-};
-
-
-Example (4)
-===========
-
-/ {
-    #address-cells = <1>;
-    #size-cells = <1>;
-
-    iommu_a: iommu@a {
-        reg = <0xa 0x1>;
-        compatible = "vendor,some-iommu";
-        #iommu-cells = <1>;
-    };
-
-    iommu_b: iommu@b {
-        reg = <0xb 0x1>;
-        compatible = "vendor,some-iommu";
-        #iommu-cells = <1>;
-    };
-
-    iommu_c: iommu@c {
-        reg = <0xc 0x1>;
-        compatible = "vendor,some-iommu";
-        #iommu-cells = <1>;
-    };
-
-    pci: pci@f {
-        reg = <0xf 0x1>;
-        compatible = "vendor,pcie-root-complex";
-        device_type = "pci";
-
-        /*
-         * Devices with bus number 0-127 are mastered via IOMMU
-         * a, with sideband data being RID[14:0].
-         * Devices with bus number 128-255 are mastered via
-         * IOMMU b, with sideband data being RID[14:0].
-         * No devices master via IOMMU c.
-         */
-        iommu-map = <0x0000 &iommu_a 0x0000 0x8000>,
-                    <0x8000 &iommu_b 0x0000 0x8000>;
-    };
-};
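The iommu-map lookup the removed text describes is mechanical enough to sketch in a few lines of Python. This is an illustration only; `map_rid` and the entry tuples are my own naming, not a kernel interface. Each entry mirrors one `(rid-base, iommu, iommu-base, length)` tuple, and the mask mirrors `iommu-map-mask`:

```python
def map_rid(rid, mask, entries):
    """Resolve a Requester ID to (iommu, specifier) per the iommu-map rules.

    entries: list of (rid_base, iommu, iommu_base, length) tuples.
    Returns None when no entry matches (no IOMMU translation).
    """
    r = rid & mask  # apply iommu-map-mask first
    for rid_base, iommu, iommu_base, length in entries:
        if rid_base <= r < rid_base + length:
            # specifier is (r - rid-base + iommu-base)
            return iommu, r - rid_base + iommu_base
    return None

# Example (3) above: the high bit of the bus number is flipped.
entries = [(0x0000, "iommu", 0x8000, 0x8000),
           (0x8000, "iommu", 0x0000, 0x8000)]
assert map_rid(0x0012, 0xffff, entries) == ("iommu", 0x8012)
assert map_rid(0x8012, 0xffff, entries) == ("iommu", 0x0012)
```

The same interval arithmetic is what the OF core performs when it walks `iommu-map` for a probing PCI device.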
Documentation/devicetree/bindings/pci/pci-msi.txt (deleted, -220)
··· 1 - This document describes the generic device tree binding for describing the 2 - relationship between PCI devices and MSI controllers. 3 - 4 - Each PCI device under a root complex is uniquely identified by its Requester ID 5 - (AKA RID). A Requester ID is a triplet of a Bus number, Device number, and 6 - Function number. 7 - 8 - For the purpose of this document, when treated as a numeric value, a RID is 9 - formatted such that: 10 - 11 - * Bits [15:8] are the Bus number. 12 - * Bits [7:3] are the Device number. 13 - * Bits [2:0] are the Function number. 14 - * Any other bits required for padding must be zero. 15 - 16 - MSIs may be distinguished in part through the use of sideband data accompanying 17 - writes. In the case of PCI devices, this sideband data may be derived from the 18 - Requester ID. A mechanism is required to associate a device with both the MSI 19 - controllers it can address, and the sideband data that will be associated with 20 - its writes to those controllers. 21 - 22 - For generic MSI bindings, see 23 - Documentation/devicetree/bindings/interrupt-controller/msi.txt. 24 - 25 - 26 - PCI root complex 27 - ================ 28 - 29 - Optional properties 30 - ------------------- 31 - 32 - - msi-map: Maps a Requester ID to an MSI controller and associated 33 - msi-specifier data. The property is an arbitrary number of tuples of 34 - (rid-base,msi-controller,msi-base,length), where: 35 - 36 - * rid-base is a single cell describing the first RID matched by the entry. 37 - 38 - * msi-controller is a single phandle to an MSI controller 39 - 40 - * msi-base is an msi-specifier describing the msi-specifier produced for the 41 - first RID matched by the entry. 42 - 43 - * length is a single cell describing how many consecutive RIDs are matched 44 - following the rid-base. 45 - 46 - Any RID r in the interval [rid-base, rid-base + length) is associated with 47 - the listed msi-controller, with the msi-specifier (r - rid-base + msi-base). 
48 - 49 - - msi-map-mask: A mask to be applied to each Requester ID prior to being mapped 50 - to an msi-specifier per the msi-map property. 51 - 52 - - msi-parent: Describes the MSI parent of the root complex itself. Where 53 - the root complex and MSI controller do not pass sideband data with MSI 54 - writes, this property may be used to describe the MSI controller(s) 55 - used by PCI devices under the root complex, if defined as such in the 56 - binding for the root complex. 57 - 58 - 59 - Example (1) 60 - =========== 61 - 62 - / { 63 - #address-cells = <1>; 64 - #size-cells = <1>; 65 - 66 - msi: msi-controller@a { 67 - reg = <0xa 0x1>; 68 - compatible = "vendor,some-controller"; 69 - msi-controller; 70 - #msi-cells = <1>; 71 - }; 72 - 73 - pci: pci@f { 74 - reg = <0xf 0x1>; 75 - compatible = "vendor,pcie-root-complex"; 76 - device_type = "pci"; 77 - 78 - /* 79 - * The sideband data provided to the MSI controller is 80 - * the RID, identity-mapped. 81 - */ 82 - msi-map = <0x0 &msi_a 0x0 0x10000>, 83 - }; 84 - }; 85 - 86 - 87 - Example (2) 88 - =========== 89 - 90 - / { 91 - #address-cells = <1>; 92 - #size-cells = <1>; 93 - 94 - msi: msi-controller@a { 95 - reg = <0xa 0x1>; 96 - compatible = "vendor,some-controller"; 97 - msi-controller; 98 - #msi-cells = <1>; 99 - }; 100 - 101 - pci: pci@f { 102 - reg = <0xf 0x1>; 103 - compatible = "vendor,pcie-root-complex"; 104 - device_type = "pci"; 105 - 106 - /* 107 - * The sideband data provided to the MSI controller is 108 - * the RID, masked to only the device and function bits. 
109 - */ 110 - msi-map = <0x0 &msi_a 0x0 0x100>, 111 - msi-map-mask = <0xff> 112 - }; 113 - }; 114 - 115 - 116 - Example (3) 117 - =========== 118 - 119 - / { 120 - #address-cells = <1>; 121 - #size-cells = <1>; 122 - 123 - msi: msi-controller@a { 124 - reg = <0xa 0x1>; 125 - compatible = "vendor,some-controller"; 126 - msi-controller; 127 - #msi-cells = <1>; 128 - }; 129 - 130 - pci: pci@f { 131 - reg = <0xf 0x1>; 132 - compatible = "vendor,pcie-root-complex"; 133 - device_type = "pci"; 134 - 135 - /* 136 - * The sideband data provided to the MSI controller is 137 - * the RID, but the high bit of the bus number is 138 - * ignored. 139 - */ 140 - msi-map = <0x0000 &msi 0x0000 0x8000>, 141 - <0x8000 &msi 0x0000 0x8000>; 142 - }; 143 - }; 144 - 145 - 146 - Example (4) 147 - =========== 148 - 149 - / { 150 - #address-cells = <1>; 151 - #size-cells = <1>; 152 - 153 - msi: msi-controller@a { 154 - reg = <0xa 0x1>; 155 - compatible = "vendor,some-controller"; 156 - msi-controller; 157 - #msi-cells = <1>; 158 - }; 159 - 160 - pci: pci@f { 161 - reg = <0xf 0x1>; 162 - compatible = "vendor,pcie-root-complex"; 163 - device_type = "pci"; 164 - 165 - /* 166 - * The sideband data provided to the MSI controller is 167 - * the RID, but the high bit of the bus number is 168 - * negated. 
169 - */ 170 - msi-map = <0x0000 &msi 0x8000 0x8000>, 171 - <0x8000 &msi 0x0000 0x8000>; 172 - }; 173 - }; 174 - 175 - 176 - Example (5) 177 - =========== 178 - 179 - / { 180 - #address-cells = <1>; 181 - #size-cells = <1>; 182 - 183 - msi_a: msi-controller@a { 184 - reg = <0xa 0x1>; 185 - compatible = "vendor,some-controller"; 186 - msi-controller; 187 - #msi-cells = <1>; 188 - }; 189 - 190 - msi_b: msi-controller@b { 191 - reg = <0xb 0x1>; 192 - compatible = "vendor,some-controller"; 193 - msi-controller; 194 - #msi-cells = <1>; 195 - }; 196 - 197 - msi_c: msi-controller@c { 198 - reg = <0xc 0x1>; 199 - compatible = "vendor,some-controller"; 200 - msi-controller; 201 - #msi-cells = <1>; 202 - }; 203 - 204 - pci: pci@f { 205 - reg = <0xf 0x1>; 206 - compatible = "vendor,pcie-root-complex"; 207 - device_type = "pci"; 208 - 209 - /* 210 - * The sideband data provided to MSI controller a is the 211 - * RID, but the high bit of the bus number is negated. 212 - * The sideband data provided to MSI controller b is the 213 - * RID, identity-mapped. 214 - * MSI controller c is not addressable. 215 - */ 216 - msi-map = <0x0000 &msi_a 0x8000 0x08000>, 217 - <0x8000 &msi_a 0x0000 0x08000>, 218 - <0x0000 &msi_b 0x0000 0x10000>; 219 - }; 220 - };
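The removed pci-msi.txt above defines the Requester ID layout (bus in bits [15:8], device in [7:3], function in [2:0]) and the msi-map rule: a masked RID r in [rid-base, rid-base + length) yields msi-specifier (r - rid-base + msi-base). As a hedged illustration of those semantics only (not kernel code; the function names are invented), the lookup can be sketched as:

```python
# Sketch of the msi-map / msi-map-mask semantics from the removed
# pci-msi.txt. Each map entry is (rid_base, controller, msi_base, length);
# the first entry whose range contains the masked RID wins.

def rid(bus, dev, fn):
    """Pack a bus/device/function triplet into a 16-bit Requester ID."""
    return (bus << 8) | (dev << 3) | fn

def map_rid_to_msi(r, msi_map, msi_map_mask=0xffff):
    """Return (controller, msi_specifier) for the first matching entry."""
    r &= msi_map_mask
    for rid_base, controller, msi_base, length in msi_map:
        if rid_base <= r < rid_base + length:
            return controller, r - rid_base + msi_base
    return None

# Example (4) above: the high bit of the bus number is negated.
example4_map = [(0x0000, "msi", 0x8000, 0x8000),
                (0x8000, "msi", 0x0000, 0x8000)]
```

Under Example (2)'s mask of 0xff, only the device and function bits survive before the range match, which is exactly why a length of 0x100 covers every bus.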
-84
Documentation/devicetree/bindings/pci/pci.txt
··· 1 - PCI bus bridges have standardized Device Tree bindings: 2 - 3 - PCI Bus Binding to: IEEE Std 1275-1994 4 - https://www.devicetree.org/open-firmware/bindings/pci/pci2_1.pdf 5 - 6 - And for the interrupt mapping part: 7 - 8 - Open Firmware Recommended Practice: Interrupt Mapping 9 - https://www.devicetree.org/open-firmware/practice/imap/imap0_9d.pdf 10 - 11 - Additionally to the properties specified in the above standards a host bridge 12 - driver implementation may support the following properties: 13 - 14 - - linux,pci-domain: 15 - If present this property assigns a fixed PCI domain number to a host bridge, 16 - otherwise an unstable (across boots) unique number will be assigned. 17 - It is required to either not set this property at all or set it for all 18 - host bridges in the system, otherwise potentially conflicting domain numbers 19 - may be assigned to root buses behind different host bridges. The domain 20 - number for each host bridge in the system must be unique. 21 - - max-link-speed: 22 - If present this property specifies PCI gen for link capability. Host 23 - drivers could add this as a strategy to avoid unnecessary operation for 24 - unsupported link speed, for instance, trying to do training for 25 - unsupported link speed, etc. Must be '4' for gen4, '3' for gen3, '2' 26 - for gen2, and '1' for gen1. Any other values are invalid. 27 - - reset-gpios: 28 - If present this property specifies PERST# GPIO. Host drivers can parse the 29 - GPIO and apply fundamental reset to endpoints. 30 - - supports-clkreq: 31 - If present this property specifies that CLKREQ signal routing exists from 32 - root port to downstream device and host bridge drivers can do programming 33 - which depends on CLKREQ signal existence. For example, programming root port 34 - not to advertise ASPM L1 Sub-States support if there is no CLKREQ signal. 
35 - 36 - PCI-PCI Bridge properties 37 - ------------------------- 38 - 39 - PCIe root ports and switch ports may be described explicitly in the device 40 - tree, as children of the host bridge node. Even though those devices are 41 - discoverable by probing, it might be necessary to describe properties that 42 - aren't provided by standard PCIe capabilities. 43 - 44 - Required properties: 45 - 46 - - reg: 47 - Identifies the PCI-PCI bridge. As defined in the IEEE Std 1275-1994 48 - document, it is a five-cell address encoded as (phys.hi phys.mid 49 - phys.lo size.hi size.lo). phys.hi should contain the device's BDF as 50 - 0b00000000 bbbbbbbb dddddfff 00000000. The other cells should be zero. 51 - 52 - The bus number is defined by firmware, through the standard bridge 53 - configuration mechanism. If this port is a switch port, then firmware 54 - allocates the bus number and writes it into the Secondary Bus Number 55 - register of the bridge directly above this port. Otherwise, the bus 56 - number of a root port is the first number in the bus-range property, 57 - defaulting to zero. 58 - 59 - If firmware leaves the ARI Forwarding Enable bit set in the bridge 60 - above this port, then phys.hi contains the 8-bit function number as 61 - 0b00000000 bbbbbbbb ffffffff 00000000. Note that the PCIe specification 62 - recommends that firmware only leaves ARI enabled when it knows that the 63 - OS is ARI-aware. 64 - 65 - Optional properties: 66 - 67 - - external-facing: 68 - When present, the port is external-facing. All bridges and endpoints 69 - downstream of this port are external to the machine. The OS can, for 70 - example, use this information to identify devices that cannot be 71 - trusted with relaxed DMA protection, as users could easily attach 72 - malicious devices to this port. 73 - 74 - Example: 75 - 76 - pcie@10000000 { 77 - compatible = "pci-host-ecam-generic"; 78 - ... 
79 - pcie@0008 { 80 - /* Root port 00:01.0 is external-facing */ 81 - reg = <0x00000800 0 0 0 0>; 82 - external-facing; 83 - }; 84 - };
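The removed pci.txt spells out the phys.hi encoding for a bridge's reg property: 0b00000000 bbbbbbbb dddddfff 00000000, with an 8-bit function field replacing device/function when ARI Forwarding is enabled. A minimal sketch of that bit layout (illustrative only; the helper name is invented):

```python
# Sketch of the phys.hi cell encoding described in the removed pci.txt:
# bus in bits [23:16], device in [15:11], function in [10:8]; under ARI,
# an 8-bit function number occupies bits [15:8] instead.

def phys_hi(bus, dev, fn, ari=False):
    """Encode bus/device/function into the first cell of a bridge 'reg'."""
    if ari:
        return (bus << 16) | (fn << 8)
    return (bus << 16) | (dev << 11) | (fn << 8)

# Root port 00:01.0 from the example: reg = <0x00000800 0 0 0 0>
```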
+7 -3
Documentation/devicetree/bindings/pci/qcom,pcie-sa8775p.yaml
··· 45 45 46 46 interrupts: 47 47 minItems: 8 48 - maxItems: 8 48 + maxItems: 9 49 49 50 50 interrupt-names: 51 + minItems: 8 51 52 items: 52 53 - const: msi0 53 54 - const: msi1 ··· 58 57 - const: msi5 59 58 - const: msi6 60 59 - const: msi7 60 + - const: global 61 61 62 62 resets: 63 63 maxItems: 1 ··· 131 129 <GIC_SPI 313 IRQ_TYPE_LEVEL_HIGH>, 132 130 <GIC_SPI 314 IRQ_TYPE_LEVEL_HIGH>, 133 131 <GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH>, 134 - <GIC_SPI 375 IRQ_TYPE_LEVEL_HIGH>; 132 + <GIC_SPI 375 IRQ_TYPE_LEVEL_HIGH>, 133 + <GIC_SPI 306 IRQ_TYPE_LEVEL_HIGH>; 135 134 interrupt-names = "msi0", 136 135 "msi1", 137 136 "msi2", ··· 140 137 "msi4", 141 138 "msi5", 142 139 "msi6", 143 - "msi7"; 140 + "msi7", 141 + "global"; 144 142 #interrupt-cells = <1>; 145 143 interrupt-map-mask = <0 0 0 0x7>; 146 144 interrupt-map = <0 0 0 1 &intc GIC_SPI 434 IRQ_TYPE_LEVEL_HIGH>,
+6 -3
Documentation/devicetree/bindings/pci/qcom,pcie-sc7280.yaml
··· 54 54 55 55 interrupts: 56 56 minItems: 8 57 - maxItems: 8 57 + maxItems: 9 58 58 59 59 interrupt-names: 60 + minItems: 8 60 61 items: 61 62 - const: msi0 62 63 - const: msi1 ··· 67 66 - const: msi5 68 67 - const: msi6 69 68 - const: msi7 69 + - const: global 70 70 71 71 resets: 72 72 maxItems: 1 ··· 151 149 <GIC_SPI 313 IRQ_TYPE_LEVEL_HIGH>, 152 150 <GIC_SPI 314 IRQ_TYPE_LEVEL_HIGH>, 153 151 <GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH>, 154 - <GIC_SPI 375 IRQ_TYPE_LEVEL_HIGH>; 152 + <GIC_SPI 375 IRQ_TYPE_LEVEL_HIGH>, 153 + <GIC_SPI 306 IRQ_TYPE_LEVEL_HIGH>; 155 154 interrupt-names = "msi0", "msi1", "msi2", "msi3", 156 - "msi4", "msi5", "msi6", "msi7"; 155 + "msi4", "msi5", "msi6", "msi7", "global"; 157 156 #interrupt-cells = <1>; 158 157 interrupt-map-mask = <0 0 0 0x7>; 159 158 interrupt-map = <0 0 0 1 &intc 0 0 0 434 IRQ_TYPE_LEVEL_HIGH>,
+7 -3
Documentation/devicetree/bindings/pci/qcom,pcie-sc8180x.yaml
··· 49 49 50 50 interrupts: 51 51 minItems: 8 52 - maxItems: 8 52 + maxItems: 9 53 53 54 54 interrupt-names: 55 + minItems: 8 55 56 items: 56 57 - const: msi0 57 58 - const: msi1 ··· 62 61 - const: msi5 63 62 - const: msi6 64 63 - const: msi7 64 + - const: global 65 65 66 66 resets: 67 67 maxItems: 1 ··· 138 136 <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>, 139 137 <GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>, 140 138 <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>, 141 - <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>; 139 + <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>, 140 + <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>; 142 141 interrupt-names = "msi0", 143 142 "msi1", 144 143 "msi2", ··· 147 144 "msi4", 148 145 "msi5", 149 146 "msi6", 150 - "msi7"; 147 + "msi7", 148 + "global"; 151 149 #interrupt-cells = <1>; 152 150 interrupt-map-mask = <0 0 0 0x7>; 153 151 interrupt-map = <0 0 0 1 &intc 0 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+6 -3
Documentation/devicetree/bindings/pci/qcom,pcie-sm8150.yaml
··· 49 49 50 50 interrupts: 51 51 minItems: 8 52 - maxItems: 8 52 + maxItems: 9 53 53 54 54 interrupt-names: 55 + minItems: 8 55 56 items: 56 57 - const: msi0 57 58 - const: msi1 ··· 62 61 - const: msi5 63 62 - const: msi6 64 63 - const: msi7 64 + - const: global 65 65 66 66 resets: 67 67 maxItems: 1 ··· 130 128 <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>, 131 129 <GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>, 132 130 <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>, 133 - <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>; 131 + <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>, 132 + <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>; 134 133 interrupt-names = "msi0", "msi1", "msi2", "msi3", 135 - "msi4", "msi5", "msi6", "msi7"; 134 + "msi4", "msi5", "msi6", "msi7", "global"; 136 135 #interrupt-cells = <1>; 137 136 interrupt-map-mask = <0 0 0 0x7>; 138 137 interrupt-map = <0 0 0 1 &intc 0 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+6 -3
Documentation/devicetree/bindings/pci/qcom,pcie-sm8250.yaml
··· 61 61 62 62 interrupts: 63 63 minItems: 8 64 - maxItems: 8 64 + maxItems: 9 65 65 66 66 interrupt-names: 67 + minItems: 8 67 68 items: 68 69 - const: msi0 69 70 - const: msi1 ··· 74 73 - const: msi5 75 74 - const: msi6 76 75 - const: msi7 76 + - const: global 77 77 78 78 resets: 79 79 maxItems: 1 ··· 145 143 <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>, 146 144 <GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>, 147 145 <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>, 148 - <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>; 146 + <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>, 147 + <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>; 149 148 interrupt-names = "msi0", "msi1", "msi2", "msi3", 150 - "msi4", "msi5", "msi6", "msi7"; 149 + "msi4", "msi5", "msi6", "msi7", "global"; 151 150 #interrupt-cells = <1>; 152 151 interrupt-map-mask = <0 0 0 0x7>; 153 152 interrupt-map = <0 0 0 1 &intc 0 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+6 -3
Documentation/devicetree/bindings/pci/qcom,pcie-sm8350.yaml
··· 51 51 52 52 interrupts: 53 53 minItems: 8 54 - maxItems: 8 54 + maxItems: 9 55 55 56 56 interrupt-names: 57 + minItems: 8 57 58 items: 58 59 - const: msi0 59 60 - const: msi1 ··· 64 63 - const: msi5 65 64 - const: msi6 66 65 - const: msi7 66 + - const: global 67 67 68 68 resets: 69 69 maxItems: 1 ··· 134 132 <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>, 135 133 <GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>, 136 134 <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>, 137 - <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>; 135 + <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>, 136 + <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>; 138 137 interrupt-names = "msi0", "msi1", "msi2", "msi3", 139 - "msi4", "msi5", "msi6", "msi7"; 138 + "msi4", "msi5", "msi6", "msi7", "global"; 140 139 #interrupt-cells = <1>; 141 140 interrupt-map-mask = <0 0 0 0x7>; 142 141 interrupt-map = <0 0 0 1 &intc 0 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+60 -5
Documentation/devicetree/bindings/pci/qcom,pcie.yaml
··· 21 21 - qcom,pcie-apq8064 22 22 - qcom,pcie-apq8084 23 23 - qcom,pcie-ipq4019 24 + - qcom,pcie-ipq5018 24 25 - qcom,pcie-ipq6018 25 26 - qcom,pcie-ipq8064 26 27 - qcom,pcie-ipq8064-v2 ··· 169 168 compatible: 170 169 contains: 171 170 enum: 171 + - qcom,pcie-ipq5018 172 172 - qcom,pcie-ipq6018 173 173 - qcom,pcie-ipq8074-gen3 174 174 - qcom,pcie-ipq9574 ··· 177 175 properties: 178 176 reg: 179 177 minItems: 5 180 - maxItems: 5 178 + maxItems: 6 181 179 reg-names: 180 + minItems: 5 182 181 items: 183 182 - const: dbi # DesignWare PCIe registers 184 183 - const: elbi # External local bus interface registers 185 184 - const: atu # ATU address space 186 185 - const: parf # Qualcomm specific registers 187 186 - const: config # PCIe configuration space 187 + - const: mhi # MHI registers 188 188 189 189 - if: 190 190 properties: ··· 325 321 - const: pwr # PWR reset 326 322 - const: ahb # AHB reset 327 323 - const: phy_ahb # PHY AHB reset 324 + 325 + - if: 326 + properties: 327 + compatible: 328 + contains: 329 + enum: 330 + - qcom,pcie-ipq5018 331 + then: 332 + properties: 333 + clocks: 334 + minItems: 6 335 + maxItems: 6 336 + clock-names: 337 + items: 338 + - const: iface # PCIe to SysNOC BIU clock 339 + - const: axi_m # AXI Master clock 340 + - const: axi_s # AXI Slave clock 341 + - const: ahb # AHB clock 342 + - const: aux # Auxiliary clock 343 + - const: axi_bridge # AXI bridge clock 344 + resets: 345 + minItems: 8 346 + maxItems: 8 347 + reset-names: 348 + items: 349 + - const: pipe # PIPE reset 350 + - const: sleep # Sleep reset 351 + - const: sticky # Core sticky reset 352 + - const: axi_m # AXI master reset 353 + - const: axi_s # AXI slave reset 354 + - const: ahb # AHB reset 355 + - const: axi_m_sticky # AXI master sticky reset 356 + - const: axi_s_sticky # AXI slave sticky reset 357 + interrupts: 358 + minItems: 9 359 + maxItems: 9 360 + interrupt-names: 361 + items: 362 + - const: msi0 363 + - const: msi1 364 + - const: msi2 365 + - const: msi3 366 + - 
const: msi4 367 + - const: msi5 368 + - const: msi6 369 + - const: msi7 370 + - const: global 328 371 329 372 - if: 330 373 properties: ··· 613 562 enum: 614 563 - qcom,pcie-apq8064 615 564 - qcom,pcie-ipq4019 565 + - qcom,pcie-ipq5018 616 566 - qcom,pcie-ipq8064 617 567 - qcom,pcie-ipq8064v2 618 568 - qcom,pcie-ipq8074 ··· 641 589 compatible: 642 590 contains: 643 591 enum: 592 + - qcom,pcie-ipq6018 593 + - qcom,pcie-ipq8074 594 + - qcom,pcie-ipq8074-gen3 644 595 - qcom,pcie-msm8996 596 + - qcom,pcie-msm8998 645 597 - qcom,pcie-sdm845 646 598 then: 647 599 oneOf: ··· 658 602 - properties: 659 603 interrupts: 660 604 minItems: 8 661 - maxItems: 8 605 + maxItems: 9 662 606 interrupt-names: 607 + minItems: 8 663 608 items: 664 609 - const: msi0 665 610 - const: msi1 ··· 670 613 - const: msi5 671 614 - const: msi6 672 615 - const: msi7 616 + - const: global 673 617 674 618 - if: 675 619 properties: ··· 680 622 - qcom,pcie-apq8064 681 623 - qcom,pcie-apq8084 682 624 - qcom,pcie-ipq4019 683 - - qcom,pcie-ipq6018 684 625 - qcom,pcie-ipq8064 685 626 - qcom,pcie-ipq8064-v2 686 - - qcom,pcie-ipq8074 687 - - qcom,pcie-ipq8074-gen3 688 627 - qcom,pcie-qcs404 689 628 then: 690 629 properties:
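The Qualcomm binding changes above follow one pattern: interrupts gains maxItems: 9 while interrupt-names gains minItems: 8, so the trailing "global" interrupt is optional and the eight MSI vectors remain mandatory. A hedged sketch of what that constraint expresses (illustrative only; real validation is done by the dt-schema tooling, not by code like this):

```python
# Sketch of the minItems: 8 / maxItems: 9 rule added across the Qualcomm
# PCIe bindings: names must be a prefix of the expected ordered list, at
# least the eight msiN entries, at most those plus "global".

EXPECTED = ["msi0", "msi1", "msi2", "msi3",
            "msi4", "msi5", "msi6", "msi7", "global"]

def interrupt_names_valid(names, min_items=8, max_items=9):
    return (min_items <= len(names) <= max_items
            and list(names) == EXPECTED[:len(names)])
```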
+17 -17
Documentation/devicetree/bindings/pci/rcar-pci-ep.yaml
··· 73 73 #include <dt-bindings/interrupt-controller/arm-gic.h> 74 74 #include <dt-bindings/power/r8a774c0-sysc.h> 75 75 76 - pcie0_ep: pcie-ep@fe000000 { 77 - compatible = "renesas,r8a774c0-pcie-ep", 78 - "renesas,rcar-gen3-pcie-ep"; 79 - reg = <0xfe000000 0x80000>, 80 - <0xfe100000 0x100000>, 81 - <0xfe200000 0x200000>, 82 - <0x30000000 0x8000000>, 83 - <0x38000000 0x8000000>; 84 - reg-names = "apb-base", "memory0", "memory1", "memory2", "memory3"; 85 - interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>, 86 - <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>, 87 - <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>; 88 - resets = <&cpg 319>; 89 - power-domains = <&sysc R8A774C0_PD_ALWAYS_ON>; 90 - clocks = <&cpg CPG_MOD 319>; 91 - clock-names = "pcie"; 92 - max-functions = /bits/ 8 <1>; 76 + pcie0_ep: pcie-ep@fe000000 { 77 + compatible = "renesas,r8a774c0-pcie-ep", 78 + "renesas,rcar-gen3-pcie-ep"; 79 + reg = <0xfe000000 0x80000>, 80 + <0xfe100000 0x100000>, 81 + <0xfe200000 0x200000>, 82 + <0x30000000 0x8000000>, 83 + <0x38000000 0x8000000>; 84 + reg-names = "apb-base", "memory0", "memory1", "memory2", "memory3"; 85 + interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>, 86 + <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>, 87 + <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>; 88 + resets = <&cpg 319>; 89 + power-domains = <&sysc R8A774C0_PD_ALWAYS_ON>; 90 + clocks = <&cpg CPG_MOD 319>; 91 + clock-names = "pcie"; 92 + max-functions = /bits/ 8 <1>; 93 93 };
+23 -23
Documentation/devicetree/bindings/pci/rcar-pci-host.yaml
··· 113 113 pcie: pcie@fe000000 { 114 114 compatible = "renesas,pcie-r8a7791", "renesas,pcie-rcar-gen2"; 115 115 reg = <0 0xfe000000 0 0x80000>; 116 - #address-cells = <3>; 117 - #size-cells = <2>; 118 - bus-range = <0x00 0xff>; 119 - device_type = "pci"; 120 - ranges = <0x01000000 0 0x00000000 0 0xfe100000 0 0x00100000>, 121 - <0x02000000 0 0xfe200000 0 0xfe200000 0 0x00200000>, 122 - <0x02000000 0 0x30000000 0 0x30000000 0 0x08000000>, 123 - <0x42000000 0 0x38000000 0 0x38000000 0 0x08000000>; 124 - dma-ranges = <0x42000000 0 0x40000000 0 0x40000000 0 0x40000000>, 125 - <0x42000000 2 0x00000000 2 0x00000000 0 0x40000000>; 126 - interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>, 127 - <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>, 128 - <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>; 129 - #interrupt-cells = <1>; 130 - interrupt-map-mask = <0 0 0 0>; 131 - interrupt-map = <0 0 0 0 &gic GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>; 132 - clocks = <&cpg CPG_MOD 319>, <&pcie_bus_clk>; 133 - clock-names = "pcie", "pcie_bus"; 134 - power-domains = <&sysc R8A7791_PD_ALWAYS_ON>; 135 - resets = <&cpg 319>; 136 - vpcie3v3-supply = <&pcie_3v3>; 137 - vpcie12v-supply = <&pcie_12v>; 138 - }; 116 + #address-cells = <3>; 117 + #size-cells = <2>; 118 + bus-range = <0x00 0xff>; 119 + device_type = "pci"; 120 + ranges = <0x01000000 0 0x00000000 0 0xfe100000 0 0x00100000>, 121 + <0x02000000 0 0xfe200000 0 0xfe200000 0 0x00200000>, 122 + <0x02000000 0 0x30000000 0 0x30000000 0 0x08000000>, 123 + <0x42000000 0 0x38000000 0 0x38000000 0 0x08000000>; 124 + dma-ranges = <0x42000000 0 0x40000000 0 0x40000000 0 0x40000000>, 125 + <0x42000000 2 0x00000000 2 0x00000000 0 0x40000000>; 126 + interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>, 127 + <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>, 128 + <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>; 129 + #interrupt-cells = <1>; 130 + interrupt-map-mask = <0 0 0 0>; 131 + interrupt-map = <0 0 0 0 &gic GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>; 132 + clocks = <&cpg CPG_MOD 319>, <&pcie_bus_clk>; 133 + clock-names = 
"pcie", "pcie_bus"; 134 + power-domains = <&sysc R8A7791_PD_ALWAYS_ON>; 135 + resets = <&cpg 319>; 136 + vpcie3v3-supply = <&pcie_3v3>; 137 + vpcie12v-supply = <&pcie_12v>; 138 + }; 139 139 };
+8 -2
Documentation/devicetree/bindings/pci/rockchip-dw-pcie-common.yaml
··· 65 65 tx_cpl_timeout, cor_err_sent, nf_err_sent, f_err_sent, cor_err_rx, 66 66 nf_err_rx, f_err_rx, radm_qoverflow 67 67 - description: 68 - eDMA write channel 0 interrupt 68 + If the matching interrupt name is "msi", then this is the combined 69 + MSI line interrupt, which is to support MSI interrupts output to GIC 70 + controller via GIC SPI interrupt instead of GIC ITS interrupt. 71 + If the matching interrupt name is "dma0", then this is the eDMA write 72 + channel 0 interrupt. 69 73 - description: 70 74 eDMA write channel 1 interrupt 71 75 - description: ··· 85 81 - const: msg 86 82 - const: legacy 87 83 - const: err 88 - - const: dma0 84 + - enum: 85 + - msi 86 + - dma0 89 87 - const: dma1 90 88 - const: dma2 91 89 - const: dma3
+54 -6
Documentation/devicetree/bindings/pci/rockchip-dw-pcie.yaml
··· 16 16 PCIe IP and thus inherits all the common properties defined in 17 17 snps,dw-pcie.yaml. 18 18 19 - allOf: 20 - - $ref: /schemas/pci/snps,dw-pcie.yaml# 21 - - $ref: /schemas/pci/rockchip-dw-pcie-common.yaml# 22 - 23 19 properties: 24 20 compatible: 25 21 oneOf: 26 22 - const: rockchip,rk3568-pcie 27 23 - items: 28 24 - enum: 25 + - rockchip,rk3562-pcie 26 + - rockchip,rk3576-pcie 29 27 - rockchip,rk3588-pcie 30 28 - const: rockchip,rk3568-pcie 31 29 ··· 69 71 70 72 vpcie3v3-supply: true 71 73 72 - required: 73 - - msi-map 74 + allOf: 75 + - $ref: /schemas/pci/snps,dw-pcie.yaml# 76 + - $ref: /schemas/pci/rockchip-dw-pcie-common.yaml# 77 + - if: 78 + not: 79 + properties: 80 + compatible: 81 + contains: 82 + enum: 83 + - rockchip,rk3562-pcie 84 + - rockchip,rk3576-pcie 85 + then: 86 + required: 87 + - msi-map 88 + 89 + - if: 90 + properties: 91 + compatible: 92 + contains: 93 + enum: 94 + - rockchip,rk3562-pcie 95 + - rockchip,rk3576-pcie 96 + then: 97 + properties: 98 + interrupts: 99 + minItems: 6 100 + maxItems: 6 101 + interrupt-names: 102 + items: 103 + - const: sys 104 + - const: pmc 105 + - const: msg 106 + - const: legacy 107 + - const: err 108 + - const: msi 109 + else: 110 + properties: 111 + interrupts: 112 + minItems: 5 113 + interrupt-names: 114 + minItems: 5 115 + items: 116 + - const: sys 117 + - const: pmc 118 + - const: msg 119 + - const: legacy 120 + - const: err 121 + - const: dma0 122 + - const: dma1 123 + - const: dma2 124 + - const: dma3 125 + 74 126 75 127 unevaluatedProperties: false 76 128
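The rockchip-dw-pcie change above makes msi-map conditional: rk3562/rk3576 deliver MSIs through a dedicated "msi" interrupt line to the GIC, so only the other variants still require an ITS msi-map. A hedged sketch of that conditional (illustrative; the set name and helper are invented, and dt-schema enforces the real rule):

```python
# Sketch of the new conditional in rockchip-dw-pcie.yaml: msi-map is
# required unless the compatible list contains one of the variants that
# use a combined "msi" interrupt instead of GIC ITS.

MSI_INTC_COMPATIBLES = {"rockchip,rk3562-pcie", "rockchip,rk3576-pcie"}

def msi_map_required(compatibles):
    return not (set(compatibles) & MSI_INTC_COMPATIBLES)
```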
+1 -1
Documentation/devicetree/bindings/pci/sifive,fu740-pcie.yaml
··· 81 81 82 82 examples: 83 83 - | 84 + #include <dt-bindings/clock/sifive-fu740-prci.h> 84 85 bus { 85 86 #address-cells = <2>; 86 87 #size-cells = <2>; 87 - #include <dt-bindings/clock/sifive-fu740-prci.h> 88 88 89 89 pcie@e00000000 { 90 90 compatible = "sifive,fu740-pcie";
+2 -1
Documentation/devicetree/bindings/pci/snps,dw-pcie-common.yaml
··· 115 115 above for new bindings. 116 116 oneOf: 117 117 - description: See native 'dbi' clock for details 118 - enum: [ pcie, pcie_apb_sys, aclk_dbi ] 118 + enum: [ pcie, pcie_apb_sys, aclk_dbi, reg ] 119 119 - description: See native 'mstr/slv' clock for details 120 120 enum: [ pcie_bus, pcie_inbound_axi, pcie_aclk, aclk_mst, aclk_slv ] 121 121 - description: See native 'pipe' clock for details ··· 201 201 oneOf: 202 202 - pattern: '^pcie(-?phy[0-9]*)?$' 203 203 - pattern: '^p2u-[0-7]$' 204 + - pattern: '^cp[01]-pcie[0-2]-x[124](-lane[0-3])?-phy$' # marvell,armada8k-pcie 204 205 205 206 reset-gpio: 206 207 deprecated: true
+3 -1
Documentation/devicetree/bindings/pci/snps,dw-pcie.yaml
··· 105 105 Vendor-specific CSR names. Consider using the generic names above 106 106 for new bindings. 107 107 oneOf: 108 + - description: See native 'dbi' CSR region for details. 109 + enum: [ ctrl ] 108 110 - description: See native 'elbi/app' CSR region for details. 109 111 enum: [ apb, mgmt, link, ulreg, appl ] 110 112 - description: See native 'atu' CSR region for details. ··· 119 117 const: slcr 120 118 allOf: 121 119 - contains: 122 - const: dbi 120 + enum: [ dbi, ctrl ] 123 121 - contains: 124 122 const: config 125 123
+100
Documentation/devicetree/bindings/pci/v3,v360epc-pci.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/v3,v360epc-pci.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: V3 Semiconductor V360 EPC PCI bridge 8 + 9 + maintainers: 10 + - Linus Walleij <linus.walleij@linaro.org> 11 + 12 + description: 13 + This bridge is found in the ARM Integrator/AP (Application Platform) 14 + 15 + allOf: 16 + - $ref: /schemas/pci/pci-host-bridge.yaml# 17 + 18 + properties: 19 + compatible: 20 + items: 21 + - const: arm,integrator-ap-pci 22 + - const: v3,v360epc-pci 23 + 24 + reg: 25 + items: 26 + - description: V3 host bridge controller 27 + - description: Configuration space 28 + 29 + clocks: 30 + maxItems: 1 31 + 32 + dma-ranges: 33 + maxItems: 2 34 + description: 35 + The inbound ranges must be aligned to a 1MB boundary, and may be 1MB, 2MB, 36 + 4MB, 8MB, 16MB, 32MB, 64MB, 128MB, 256MB, 512MB, 1GB or 2GB in size. The 37 + memory should be marked as pre-fetchable. 38 + 39 + interrupts: 40 + description: Bus Error IRQ 41 + maxItems: 1 42 + 43 + ranges: 44 + description: 45 + The non-prefetchable and prefetchable memory windows must each be exactly 46 + 256MB (0x10000000) in size. The prefetchable memory window must be 47 + immediately adjacent to the non-prefetchable memory window. 
48 + 49 + required: 50 + - compatible 51 + - reg 52 + - clocks 53 + - dma-ranges 54 + - "#interrupt-cells" 55 + - interrupt-map 56 + - interrupt-map-mask 57 + 58 + unevaluatedProperties: false 59 + 60 + examples: 61 + - | 62 + pci@62000000 { 63 + compatible = "arm,integrator-ap-pci", "v3,v360epc-pci"; 64 + #interrupt-cells = <1>; 65 + #size-cells = <2>; 66 + #address-cells = <3>; 67 + reg = <0x62000000 0x10000>, <0x61000000 0x01000000>; 68 + device_type = "pci"; 69 + interrupt-parent = <&pic>; 70 + interrupts = <17>; /* Bus error IRQ */ 71 + clocks = <&pciclk>; 72 + ranges = <0x01000000 0 0x00000000 0x60000000 0 0x01000000>, /* 16 MiB @ LB 60000000 */ 73 + <0x02000000 0 0x40000000 0x40000000 0 0x10000000>, /* 256 MiB @ LB 40000000 1:1 */ 74 + <0x42000000 0 0x50000000 0x50000000 0 0x10000000>; /* 256 MiB @ LB 50000000 1:1 */ 75 + dma-ranges = <0x02000000 0 0x20000000 0x20000000 0 0x20000000>, /* EBI: 512 MB @ LB 20000000 1:1 */ 76 + <0x02000000 0 0x80000000 0x80000000 0 0x40000000>; /* CM alias: 1GB @ LB 80000000 */ 77 + interrupt-map-mask = <0xf800 0 0 0x7>; 78 + interrupt-map = 79 + /* IDSEL 9 */ 80 + <0x4800 0 0 1 &pic 13>, /* INT A on slot 9 is irq 13 */ 81 + <0x4800 0 0 2 &pic 14>, /* INT B on slot 9 is irq 14 */ 82 + <0x4800 0 0 3 &pic 15>, /* INT C on slot 9 is irq 15 */ 83 + <0x4800 0 0 4 &pic 16>, /* INT D on slot 9 is irq 16 */ 84 + /* IDSEL 10 */ 85 + <0x5000 0 0 1 &pic 14>, /* INT A on slot 10 is irq 14 */ 86 + <0x5000 0 0 2 &pic 15>, /* INT B on slot 10 is irq 15 */ 87 + <0x5000 0 0 3 &pic 16>, /* INT C on slot 10 is irq 16 */ 88 + <0x5000 0 0 4 &pic 13>, /* INT D on slot 10 is irq 13 */ 89 + /* IDSEL 11 */ 90 + <0x5800 0 0 1 &pic 15>, /* INT A on slot 11 is irq 15 */ 91 + <0x5800 0 0 2 &pic 16>, /* INT B on slot 11 is irq 16 */ 92 + <0x5800 0 0 3 &pic 13>, /* INT C on slot 11 is irq 13 */ 93 + <0x5800 0 0 4 &pic 14>, /* INT D on slot 11 is irq 14 */ 94 + /* IDSEL 12 */ 95 + <0x6000 0 0 1 &pic 16>, /* INT A on slot 12 is irq 16 */ 96 + <0x6000 0 0 2 
&pic 13>, /* INT B on slot 12 is irq 13 */ 97 + <0x6000 0 0 3 &pic 14>, /* INT C on slot 12 is irq 14 */ 98 + <0x6000 0 0 4 &pic 15>; /* INT D on slot 12 is irq 15 */ 99 + }; 100 + ...
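The interrupt-map in the new v3,v360epc-pci example follows the classic rotating INTx swizzle: slots 9 through 12 rotate INTA..INTD over irqs 13 through 16. A sketch of the pattern, assuming only what the example's comments state (the authoritative mapping is the interrupt-map property itself):

```python
# Sketch of the INTx rotation visible in the example's interrupt-map:
# each successive slot shifts the INTA..INTD assignment by one irq,
# wrapping within irqs 13..16.

def v360_irq(slot, pin):
    """pin is 1..4 for INTA..INTD; slot is 9..12 as in the example."""
    return 13 + ((slot - 9 + pin - 1) % 4)
```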
-76
Documentation/devicetree/bindings/pci/v3-v360epc-pci.txt
··· 1 - V3 Semiconductor V360 EPC PCI bridge 2 - 3 - This bridge is found in the ARM Integrator/AP (Application Platform) 4 - 5 - Required properties: 6 - - compatible: should be one of: 7 - "v3,v360epc-pci" 8 - "arm,integrator-ap-pci", "v3,v360epc-pci" 9 - - reg: should contain two register areas: 10 - first the base address of the V3 host bridge controller, 64KB 11 - second the configuration area register space, 16MB 12 - - interrupts: should contain a reference to the V3 error interrupt 13 - as routed on the system. 14 - - bus-range: see pci.txt 15 - - ranges: this follows the standard PCI bindings in the IEEE Std 16 - 1275-1994 (see pci.txt) with the following restriction: 17 - - The non-prefetchable and prefetchable memory windows must 18 - each be exactly 256MB (0x10000000) in size. 19 - - The prefetchable memory window must be immediately adjacent 20 - to the non-prefetcable memory window 21 - - dma-ranges: three ranges for the inbound memory region. The ranges must 22 - be aligned to a 1MB boundary, and may be 1MB, 2MB, 4MB, 8MB, 16MB, 32MB, 23 - 64MB, 128MB, 256MB, 512MB, 1GB or 2GB in size. The memory should be marked 24 - as pre-fetchable. Two ranges are supported by the hardware. 25 - 26 - Integrator-specific required properties: 27 - - syscon: should contain a link to the syscon device node, since 28 - on the Integrator, some registers in the syscon are required to 29 - operate the V3 host bridge. 
30 - 31 - Example: 32 - 33 - pci: pciv3@62000000 { 34 - compatible = "arm,integrator-ap-pci", "v3,v360epc-pci"; 35 - #interrupt-cells = <1>; 36 - #size-cells = <2>; 37 - #address-cells = <3>; 38 - reg = <0x62000000 0x10000>, <0x61000000 0x01000000>; 39 - interrupt-parent = <&pic>; 40 - interrupts = <17>; /* Bus error IRQ */ 41 - clocks = <&pciclk>; 42 - bus-range = <0x00 0xff>; 43 - ranges = 0x01000000 0 0x00000000 /* I/O space @00000000 */ 44 - 0x60000000 0 0x01000000 /* 16 MiB @ LB 60000000 */ 45 - 0x02000000 0 0x40000000 /* non-prefectable memory @40000000 */ 46 - 0x40000000 0 0x10000000 /* 256 MiB @ LB 40000000 1:1 */ 47 - 0x42000000 0 0x50000000 /* prefetchable memory @50000000 */ 48 - 0x50000000 0 0x10000000>; /* 256 MiB @ LB 50000000 1:1 */ 49 - dma-ranges = <0x02000000 0 0x20000000 /* EBI memory space */ 50 - 0x20000000 0 0x20000000 /* 512 MB @ LB 20000000 1:1 */ 51 - 0x02000000 0 0x80000000 /* Core module alias memory */ 52 - 0x80000000 0 0x40000000>; /* 1GB @ LB 80000000 */ 53 - interrupt-map-mask = <0xf800 0 0 0x7>; 54 - interrupt-map = < 55 - /* IDSEL 9 */ 56 - 0x4800 0 0 1 &pic 13 /* INT A on slot 9 is irq 13 */ 57 - 0x4800 0 0 2 &pic 14 /* INT B on slot 9 is irq 14 */ 58 - 0x4800 0 0 3 &pic 15 /* INT C on slot 9 is irq 15 */ 59 - 0x4800 0 0 4 &pic 16 /* INT D on slot 9 is irq 16 */ 60 - /* IDSEL 10 */ 61 - 0x5000 0 0 1 &pic 14 /* INT A on slot 10 is irq 14 */ 62 - 0x5000 0 0 2 &pic 15 /* INT B on slot 10 is irq 15 */ 63 - 0x5000 0 0 3 &pic 16 /* INT C on slot 10 is irq 16 */ 64 - 0x5000 0 0 4 &pic 13 /* INT D on slot 10 is irq 13 */ 65 - /* IDSEL 11 */ 66 - 0x5800 0 0 1 &pic 15 /* INT A on slot 11 is irq 15 */ 67 - 0x5800 0 0 2 &pic 16 /* INT B on slot 11 is irq 16 */ 68 - 0x5800 0 0 3 &pic 13 /* INT C on slot 11 is irq 13 */ 69 - 0x5800 0 0 4 &pic 14 /* INT D on slot 11 is irq 14 */ 70 - /* IDSEL 12 */ 71 - 0x6000 0 0 1 &pic 16 /* INT A on slot 12 is irq 16 */ 72 - 0x6000 0 0 2 &pic 13 /* INT B on slot 12 is irq 13 */ 73 - 0x6000 0 0 3 &pic 14 /* INT 
C on slot 12 is irq 14 */ 74 - 0x6000 0 0 4 &pic 15 /* INT D on slot 12 is irq 15 */ 75 - >; 76 - };
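Both the removed text binding and the new YAML state the inbound dma-ranges constraint: a 1 MB aligned base and a power-of-two size between 1 MB and 2 GB. That check can be sketched as follows (an illustrative helper, not part of any binding or driver):

```python
# Sketch of the inbound dma-ranges constraint for the V360 EPC: base
# aligned to 1 MB, size a power of two in 1 MB .. 2 GB.

MB = 1 << 20

def inbound_range_ok(base, size):
    aligned = base % MB == 0
    power_of_two = size & (size - 1) == 0
    return aligned and power_of_two and MB <= size <= 2048 * MB
```

The example's two windows (512 MB of EBI memory at 0x20000000 and a 1 GB core-module alias at 0x80000000) both satisfy this rule.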
+55 -57
Documentation/devicetree/bindings/pci/xilinx-versal-cpm.yaml
···
     examples:
       - |
-
         versal {
-            #address-cells = <2>;
-            #size-cells = <2>;
-            cpm_pcie: pcie@fca10000 {
-                compatible = "xlnx,versal-cpm-host-1.00";
-                device_type = "pci";
-                #address-cells = <3>;
-                #interrupt-cells = <1>;
-                #size-cells = <2>;
-                interrupts = <0 72 4>;
-                interrupt-parent = <&gic>;
-                interrupt-map-mask = <0 0 0 7>;
-                interrupt-map = <0 0 0 1 &pcie_intc_0 0>,
-                                <0 0 0 2 &pcie_intc_0 1>,
-                                <0 0 0 3 &pcie_intc_0 2>,
-                                <0 0 0 4 &pcie_intc_0 3>;
-                bus-range = <0x00 0xff>;
-                ranges = <0x02000000 0x0 0xe0010000 0x0 0xe0010000 0x0 0x10000000>,
-                         <0x43000000 0x80 0x00000000 0x80 0x00000000 0x0 0x80000000>;
-                msi-map = <0x0 &its_gic 0x0 0x10000>;
-                reg = <0x0 0xfca10000 0x0 0x1000>,
-                      <0x6 0x00000000 0x0 0x10000000>;
-                reg-names = "cpm_slcr", "cfg";
-                pcie_intc_0: interrupt-controller {
-                    #address-cells = <0>;
-                    #interrupt-cells = <1>;
-                    interrupt-controller;
-                };
-            };
+          #address-cells = <2>;
+          #size-cells = <2>;
+          pcie@fca10000 {
+              compatible = "xlnx,versal-cpm-host-1.00";
+              device_type = "pci";
+              #address-cells = <3>;
+              #interrupt-cells = <1>;
+              #size-cells = <2>;
+              interrupts = <0 72 4>;
+              interrupt-parent = <&gic>;
+              interrupt-map-mask = <0 0 0 7>;
+              interrupt-map = <0 0 0 1 &pcie_intc_0 0>,
+                              <0 0 0 2 &pcie_intc_0 1>,
+                              <0 0 0 3 &pcie_intc_0 2>,
+                              <0 0 0 4 &pcie_intc_0 3>;
+              bus-range = <0x00 0xff>;
+              ranges = <0x02000000 0x0 0xe0010000 0x0 0xe0010000 0x0 0x10000000>,
+                       <0x43000000 0x80 0x00000000 0x80 0x00000000 0x0 0x80000000>;
+              msi-map = <0x0 &its_gic 0x0 0x10000>;
+              reg = <0x0 0xfca10000 0x0 0x1000>,
+                    <0x6 0x00000000 0x0 0x10000000>;
+              reg-names = "cpm_slcr", "cfg";
+              pcie_intc_0: interrupt-controller {
+                  #address-cells = <0>;
+                  #interrupt-cells = <1>;
+                  interrupt-controller;
+              };
+          };
 
-            cpm5_pcie: pcie@fcdd0000 {
-                compatible = "xlnx,versal-cpm5-host";
-                device_type = "pci";
-                #address-cells = <3>;
-                #interrupt-cells = <1>;
-                #size-cells = <2>;
-                interrupts = <0 72 4>;
-                interrupt-parent = <&gic>;
-                interrupt-map-mask = <0 0 0 7>;
-                interrupt-map = <0 0 0 1 &pcie_intc_1 0>,
-                                <0 0 0 2 &pcie_intc_1 1>,
-                                <0 0 0 3 &pcie_intc_1 2>,
-                                <0 0 0 4 &pcie_intc_1 3>;
-                bus-range = <0x00 0xff>;
-                ranges = <0x02000000 0x0 0xe0000000 0x0 0xe0000000 0x0 0x10000000>,
-                         <0x43000000 0x80 0x00000000 0x80 0x00000000 0x0 0x80000000>;
-                msi-map = <0x0 &its_gic 0x0 0x10000>;
-                reg = <0x00 0xfcdd0000 0x00 0x1000>,
-                      <0x06 0x00000000 0x00 0x1000000>,
-                      <0x00 0xfce20000 0x00 0x1000000>;
-                reg-names = "cpm_slcr", "cfg", "cpm_csr";
+          pcie@fcdd0000 {
+              compatible = "xlnx,versal-cpm5-host";
+              device_type = "pci";
+              #address-cells = <3>;
+              #interrupt-cells = <1>;
+              #size-cells = <2>;
+              interrupts = <0 72 4>;
+              interrupt-parent = <&gic>;
+              interrupt-map-mask = <0 0 0 7>;
+              interrupt-map = <0 0 0 1 &pcie_intc_1 0>,
+                              <0 0 0 2 &pcie_intc_1 1>,
+                              <0 0 0 3 &pcie_intc_1 2>,
+                              <0 0 0 4 &pcie_intc_1 3>;
+              bus-range = <0x00 0xff>;
+              ranges = <0x02000000 0x0 0xe0000000 0x0 0xe0000000 0x0 0x10000000>,
+                       <0x43000000 0x80 0x00000000 0x80 0x00000000 0x0 0x80000000>;
+              msi-map = <0x0 &its_gic 0x0 0x10000>;
+              reg = <0x00 0xfcdd0000 0x00 0x1000>,
+                    <0x06 0x00000000 0x00 0x1000000>,
+                    <0x00 0xfce20000 0x00 0x1000000>;
+              reg-names = "cpm_slcr", "cfg", "cpm_csr";
 
-                pcie_intc_1: interrupt-controller {
-                    #address-cells = <0>;
-                    #interrupt-cells = <1>;
-                    interrupt-controller;
-                };
-            };
-
+              pcie_intc_1: interrupt-controller {
+                  #address-cells = <0>;
+                  #interrupt-cells = <1>;
+                  interrupt-controller;
+              };
+          };
     };
Documentation/driver-api/driver-model/devres.rst (+1 -2)
···
   devm_pci_remap_cfgspace()     : ioremap PCI configuration space
   devm_pci_remap_cfg_resource() : ioremap PCI configuration space resource
 
-  pcim_enable_device()          : after success, some PCI ops become managed
+  pcim_enable_device()          : after success, the PCI device gets disabled automatically on driver detach
   pcim_iomap()                  : do iomap() on a single BAR
   pcim_iomap_regions()          : do request_region() and iomap() on multiple BARs
   pcim_iomap_table()            : array of mapped addresses indexed by BAR
   pcim_iounmap()                : do iounmap() on a single BAR
-  pcim_iounmap_regions()        : do iounmap() and release_region() on multiple BARs
   pcim_pin_device()             : keep PCI device enabled after release
   pcim_set_mwi()                : enable Memory-Write-Invalidate PCI transaction
MAINTAINERS (+26 -24)
···
 ARM/ACTIONS SEMI ARCHITECTURE
 M:	Andreas Färber <afaerber@suse.de>
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	linux-actions@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
···
 F:	arch/arm/mach-axxia/
 
 ARM/BITMAIN ARCHITECTURE
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	Documentation/devicetree/bindings/arm/bitmain.yaml
···
 F:	include/soc/qcom/
 
 ARM/RDA MICRO ARCHITECTURE
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	linux-unisoc@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
···
 F:	drivers/block/aoe/
 
 ATC260X PMIC MFD DRIVER
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 M:	Cristian Ciocaltea <cristian.ciocaltea@gmail.com>
 L:	linux-actions@lists.infradead.org
 S:	Maintained
···
 F:	drivers/mtd/nand/raw/denali*
 
 DESIGNWARE EDMA CORE IP DRIVER
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	dmaengine@vger.kernel.org
 S:	Maintained
 F:	drivers/dma/dw-edma/
···
 F:	drivers/edac/pnd2_edac.[ch]
 
 EDAC-QCOM
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	linux-arm-msm@vger.kernel.org
 L:	linux-edac@vger.kernel.org
 S:	Maintained
···
 L:	linux-kernel@vger.kernel.org
 S:	Supported
 F:	arch/x86/include/asm/intel-mid.h
-F:	arch/x86/pci/intel_mid_pci.c
+F:	arch/x86/pci/intel_mid.c
 F:	arch/x86/platform/intel-mid/
 F:	drivers/dma/hsu/
 F:	drivers/extcon/extcon-intel-mrfld.c
···
 MCP251XFD SPI-CAN NETWORK DRIVER
 M:	Marc Kleine-Budde <mkl@pengutronix.de>
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 R:	Thomas Kopp <thomas.kopp@microchip.com>
 L:	linux-can@vger.kernel.org
 S:	Maintained
···
 F:	arch/arm64/boot/dts/marvell/armada-3720-uDPU.*
 
 MHI BUS
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	mhi@lists.linux.dev
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
···
 L:	linux-pci@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
-F:	Documentation/devicetree/bindings/pci/pci-armada8k.txt
+F:	Documentation/devicetree/bindings/pci/marvell,armada8k-pcie.yaml
 F:	drivers/pci/controller/dwc/pcie-armada8k.c
 
 PCI DRIVER FOR CADENCE PCIE IP
···
 L:	linux-pci@vger.kernel.org
 L:	linux-renesas-soc@vger.kernel.org
 S:	Maintained
+F:	Documentation/PCI/controller/rcar-pcie-firmware.rst
 F:	Documentation/devicetree/bindings/pci/*rcar*
 F:	drivers/pci/controller/*rcar*
 F:	drivers/pci/controller/dwc/*rcar*
···
 PCI DRIVER FOR SYNOPSYS DESIGNWARE
 M:	Jingoo Han <jingoohan1@gmail.com>
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	linux-pci@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/pci/snps,dw-pcie-ep.yaml
···
 M:	Linus Walleij <linus.walleij@linaro.org>
 L:	linux-pci@vger.kernel.org
 S:	Maintained
-F:	Documentation/devicetree/bindings/pci/v3-v360epc-pci.txt
+F:	Documentation/devicetree/bindings/pci/v3,v360epc-pci.yaml
 F:	drivers/pci/controller/pci-v3-semi.c
 
 PCI DRIVER FOR XILINX VERSAL CPM
···
 F:	drivers/pci/controller/pcie-xilinx-cpm.c
 
 PCI ENDPOINT SUBSYSTEM
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
-M:	Krzysztof Wilczyński <kw@linux.com>
+M:	Manivannan Sadhasivam <mani@kernel.org>
+M:	Krzysztof Wilczyński <kwilczynski@kernel.org>
 R:	Kishon Vijay Abraham I <kishon@kernel.org>
 L:	linux-pci@vger.kernel.org
 S:	Supported
···
 PCI NATIVE HOST BRIDGE AND ENDPOINT DRIVERS
 M:	Lorenzo Pieralisi <lpieralisi@kernel.org>
-M:	Krzysztof Wilczyński <kw@linux.com>
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Krzysztof Wilczyński <kwilczynski@kernel.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 R:	Rob Herring <robh@kernel.org>
 L:	linux-pci@vger.kernel.org
 S:	Supported
···
 B:	https://bugzilla.kernel.org
 C:	irc://irc.oftc.net/linux-pci
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git
+F:	Documentation/ABI/testing/debugfs-pcie-ptm
 F:	Documentation/devicetree/bindings/pci/
 F:	drivers/pci/controller/
 F:	drivers/pci/pci-bridge-emul.c
···
 F:	drivers/pci/controller/plda/*microchip*
 
 PCIE DRIVER FOR QUALCOMM MSM
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	linux-pci@vger.kernel.org
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
···
 F:	drivers/pci/controller/plda/pcie-starfive.c
 
 PCIE ENDPOINT DRIVER FOR QUALCOMM
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	linux-pci@vger.kernel.org
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
···
 F:	drivers/iommu/msm_iommu*
 
 QUALCOMM IPC ROUTER (QRTR) DRIVER
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
 F:	include/trace/events/qrtr.h
···
 F:	net/qrtr/
 
 QUALCOMM IPCC MAILBOX DRIVER
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	linux-arm-msm@vger.kernel.org
 S:	Supported
 F:	Documentation/devicetree/bindings/mailbox/qcom-ipcc.yaml
···
 F:	drivers/media/platform/qcom/iris/
 
 QUALCOMM NAND CONTROLLER DRIVER
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	linux-mtd@lists.infradead.org
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
···
 F:	drivers/media/i2c/imx283.c
 
 SONY IMX290 SENSOR DRIVER
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	linux-media@vger.kernel.org
 S:	Maintained
 T:	git git://linuxtv.org/media.git
···
 SONY IMX296 SENSOR DRIVER
 M:	Laurent Pinchart <laurent.pinchart@ideasonboard.com>
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	linux-media@vger.kernel.org
 S:	Maintained
 T:	git git://linuxtv.org/media.git
···
 F:	drivers/ufs/host/ufs-mediatek*
 
 UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER QUALCOMM HOOKS
-M:	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+M:	Manivannan Sadhasivam <mani@kernel.org>
 L:	linux-arm-msm@vger.kernel.org
 L:	linux-scsi@vger.kernel.org
 S:	Maintained
arch/arm64/Kconfig.platforms (+1 -1)
···
 	bool "Qualcomm Platforms"
 	select GPIOLIB
 	select PINCTRL
-	select HAVE_PWRCTL if PCI
+	select HAVE_PWRCTRL if PCI
 	help
 	  This enables support for the ARMv8 based Qualcomm chipsets.
arch/x86/pci/Makefile (+3 -3)
···
 obj-$(CONFIG_PCI_XEN)		+= xen.o
 
 obj-y				+= fixup.o
-obj-$(CONFIG_X86_INTEL_CE)	+= ce4100.o
 obj-$(CONFIG_ACPI)		+= acpi.o
 obj-y				+= legacy.o irq.o
 
-obj-$(CONFIG_X86_NUMACHIP)	+= numachip.o
+obj-$(CONFIG_X86_INTEL_CE)	+= ce4100.o
+obj-$(CONFIG_X86_INTEL_MID)	+= intel_mid.o
 
-obj-$(CONFIG_X86_INTEL_MID)	+= intel_mid_pci.o
+obj-$(CONFIG_X86_NUMACHIP)	+= numachip.o
 
 obj-y				+= common.o early.o
 obj-y				+= bus_numa.o
arch/x86/pci/intel_mid_pci.c → arch/x86/pci/intel_mid.c (renamed)
drivers/accel/qaic/Kconfig (-1)
···
 	depends on DRM_ACCEL
 	depends on PCI && HAS_IOMEM
 	depends on MHI_BUS
-	depends on MMU
 	select CRC32
 	help
 	  Enables driver for Qualcomm's Cloud AI accelerator PCIe cards that are
drivers/block/mtip32xx/mtip32xx.c (+2 -5)
···
 	rv = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
 	if (rv) {
 		dev_warn(&pdev->dev, "64-bit DMA enable failed\n");
-		goto setmask_err;
+		goto iomap_err;
 	}
 
 	/* Copy the info we may need later into the private data structure. */
···
 	if (!dd->isr_workq) {
 		dev_warn(&pdev->dev, "Can't create wq %d\n", dd->instance);
 		rv = -ENOMEM;
-		goto setmask_err;
+		goto iomap_err;
 	}
 
 	memset(cpu_list, 0, sizeof(cpu_list));
···
 		drop_cpu(dd->work[1].cpu_binding);
 		drop_cpu(dd->work[2].cpu_binding);
 	}
-setmask_err:
-	pcim_iounmap_regions(pdev, 1 << MTIP_ABAR);
 
 iomap_err:
 	kfree(dd);
···
 
 	pci_disable_msi(pdev);
 
-	pcim_iounmap_regions(pdev, 1 << MTIP_ABAR);
 	pci_set_drvdata(pdev, NULL);
 
 	put_disk(dd->disk);
drivers/firewire/Kconfig (+1 -1)
···
 
 config FIREWIRE_OHCI
 	tristate "OHCI-1394 controllers"
-	depends on PCI && FIREWIRE && MMU
+	depends on PCI && FIREWIRE
 	help
 	  Enable this driver if you have a FireWire controller based
 	  on the OHCI specification. For all practical purposes, this
drivers/gpu/drm/Kconfig (+1 -1)
···
 
 config DRM_HYPERV
 	tristate "DRM Support for Hyper-V synthetic video device"
-	depends on DRM && PCI && MMU && HYPERV
+	depends on DRM && PCI && HYPERV
 	select DRM_CLIENT_SELECTION
 	select DRM_KMS_HELPER
 	select DRM_GEM_SHMEM_HELPER
drivers/gpu/drm/amd/amdgpu/Kconfig (+1 -2)
···
 
 config DRM_AMDGPU
 	tristate "AMD GPU"
-	depends on DRM && PCI && MMU
+	depends on DRM && PCI
 	depends on !UML
 	select FW_LOADER
 	select DRM_CLIENT
···
 config DRM_AMDGPU_USERPTR
 	bool "Always enable userptr write support"
 	depends on DRM_AMDGPU
-	depends on MMU
 	select HMM_MIRROR
 	select MMU_NOTIFIER
 	help
drivers/gpu/drm/ast/Kconfig (+1 -1)
···
 # SPDX-License-Identifier: GPL-2.0-only
 config DRM_AST
 	tristate "AST server chips"
-	depends on DRM && PCI && MMU
+	depends on DRM && PCI
 	select DRM_CLIENT_SELECTION
 	select DRM_GEM_SHMEM_HELPER
 	select DRM_KMS_HELPER
drivers/gpu/drm/gma500/Kconfig (+1 -1)
···
 # SPDX-License-Identifier: GPL-2.0-only
 config DRM_GMA500
 	tristate "Intel GMA500/600/3600/3650 KMS Framebuffer"
-	depends on DRM && PCI && X86 && MMU && HAS_IOPORT
+	depends on DRM && PCI && X86 && HAS_IOPORT
 	select DRM_CLIENT_SELECTION
 	select DRM_KMS_HELPER
 	select FB_IOMEM_HELPERS if DRM_FBDEV_EMULATION
drivers/gpu/drm/hisilicon/hibmc/Kconfig (-1)
···
 config DRM_HISI_HIBMC
 	tristate "DRM Support for Hisilicon Hibmc"
 	depends on DRM && PCI
-	depends on MMU
 	select DRM_CLIENT_SELECTION
 	select DRM_DISPLAY_HELPER
 	select DRM_DISPLAY_DP_HELPER
drivers/gpu/drm/loongson/Kconfig (+1 -1)
···
 
 config DRM_LOONGSON
 	tristate "DRM support for Loongson Graphics"
-	depends on DRM && PCI && MMU
+	depends on DRM && PCI
 	depends on LOONGARCH || MIPS || COMPILE_TEST
 	select DRM_CLIENT_SELECTION
 	select DRM_KMS_HELPER
drivers/gpu/drm/mgag200/Kconfig (+1 -1)
···
 # SPDX-License-Identifier: GPL-2.0-only
 config DRM_MGAG200
 	tristate "Matrox G200"
-	depends on DRM && PCI && MMU
+	depends on DRM && PCI
 	select DRM_CLIENT_SELECTION
 	select DRM_GEM_SHMEM_HELPER
 	select DRM_KMS_HELPER
drivers/gpu/drm/nouveau/Kconfig (+1 -2)
···
 # SPDX-License-Identifier: GPL-2.0-only
 config DRM_NOUVEAU
 	tristate "Nouveau (NVIDIA) cards"
-	depends on DRM && PCI && MMU
+	depends on DRM && PCI
 	select IOMMU_API
 	select FW_LOADER
 	select FW_CACHE if PM_SLEEP
···
 	bool "(EXPERIMENTAL) Enable SVM (Shared Virtual Memory) support"
 	depends on DEVICE_PRIVATE
 	depends on DRM_NOUVEAU
-	depends on MMU
 	depends on STAGING
 	select HMM_MIRROR
 	select MMU_NOTIFIER
drivers/gpu/drm/qxl/Kconfig (+1 -1)
···
 # SPDX-License-Identifier: GPL-2.0-only
 config DRM_QXL
 	tristate "QXL virtual GPU"
-	depends on DRM && PCI && MMU && HAS_IOPORT
+	depends on DRM && PCI && HAS_IOPORT
 	select DRM_CLIENT_SELECTION
 	select DRM_KMS_HELPER
 	select DRM_TTM
drivers/gpu/drm/radeon/Kconfig (+1 -1)
···
 
 config DRM_RADEON
 	tristate "ATI Radeon"
-	depends on DRM && PCI && MMU
+	depends on DRM && PCI
 	depends on AGP || !AGP
 	select FW_LOADER
 	select DRM_CLIENT_SELECTION
drivers/gpu/drm/tiny/Kconfig (+1 -1)
···
 
 config DRM_CIRRUS_QEMU
 	tristate "Cirrus driver for QEMU emulated device"
-	depends on DRM && PCI && MMU
+	depends on DRM && PCI
 	select DRM_CLIENT_SELECTION
 	select DRM_KMS_HELPER
 	select DRM_GEM_SHMEM_HELPER
drivers/gpu/drm/vmwgfx/Kconfig (+1 -1)
···
 # SPDX-License-Identifier: GPL-2.0
 config DRM_VMWGFX
 	tristate "DRM driver for VMware Virtual GPU"
-	depends on DRM && PCI && MMU
+	depends on DRM && PCI
 	depends on (X86 && HYPERVISOR_GUEST) || ARM64
 	select DRM_CLIENT_SELECTION
 	select DRM_TTM
drivers/gpu/drm/xe/Kconfig (+1 -1)
···
 # SPDX-License-Identifier: GPL-2.0-only
 config DRM_XE
 	tristate "Intel Xe Graphics"
-	depends on DRM && PCI && MMU && (m || (y && KUNIT=y))
+	depends on DRM && PCI && (m || (y && KUNIT=y))
 	select INTERVAL_TREE
 	# we need shmfs for the swappable backing store, and in particular
 	# the shmem_readpage() which depends upon tmpfs
drivers/iommu/amd/init.c (-3)
···
 	if (!iommu->dev)
 		return -ENODEV;
 
-	/* Prevent binding other PCI device drivers to IOMMU devices */
-	iommu->dev->match_driver = false;
-
 	/* ACPI _PRT won't have an IRQ for IOMMU */
 	iommu->dev->irq_managed = 1;
drivers/net/ethernet/broadcom/Kconfig (-1)
···
 config CNIC
 	tristate "QLogic CNIC support"
 	depends on PCI && (IPV6 || IPV6=n)
-	depends on MMU
 	select BNX2
 	select UIO
 	help
drivers/net/wireless/ath/ath11k/Kconfig (+1 -1)
···
 	select MHI_BUS
 	select QRTR
 	select QRTR_MHI
-	select PCI_PWRCTL_PWRSEQ if HAVE_PWRCTL
+	select PCI_PWRCTRL_PWRSEQ if HAVE_PWRCTRL
 	help
 	  This module adds support for PCIE bus
drivers/net/wireless/ath/ath12k/Kconfig (+1 -1)
···
 	select MHI_BUS
 	select QRTR
 	select QRTR_MHI
-	select PCI_PWRCTL_PWRSEQ if HAVE_PWRCTL
+	select PCI_PWRCTRL_PWRSEQ if HAVE_PWRCTRL
 	help
 	  Enable support for Qualcomm Technologies Wi-Fi 7 (IEEE
 	  802.11be) family of chipsets, for example WCN7850 and
drivers/pci/Kconfig (+1)
···
 menuconfig PCI
 	bool "PCI support"
 	depends on HAVE_PCI
+	depends on MMU
 	help
 	  This option enables support for the PCI local bus, including
 	  support for PCI-X and the foundations for PCI Express support.
drivers/pci/bus.c (+3 -1)
···
 			 pdev->name);
 	}
 
-	dev->match_driver = !dn || of_device_is_available(dn);
+	if (!dn || of_device_is_available(dn))
+		pci_dev_allow_binding(dev);
+
 	retval = device_attach(&dev->dev);
 	if (retval < 0 && retval != -EPROBE_DEFER)
 		pci_warn(dev, "device attach failed (%d)\n", retval);
drivers/pci/controller/Kconfig (+4 -4)
···
 menu "PCI controller drivers"
 	depends on PCI
 
+config PCI_HOST_COMMON
+	tristate
+	select PCI_ECAM
+
 config PCI_AARDVARK
 	tristate "Aardvark PCIe controller"
 	depends on (ARCH_MVEBU && ARM64) || COMPILE_TEST
···
 	bool "Faraday Technology FTPCI100 PCI controller"
 	depends on OF
 	default ARCH_GEMINI
-
-config PCI_HOST_COMMON
-	tristate
-	select PCI_ECAM
 
 config PCI_HOST_GENERIC
 	tristate "Generic PCI host controller"
drivers/pci/controller/cadence/Kconfig (+8 -8)
···
 	depends on PCI
 
 config PCIE_CADENCE
-	bool
+	tristate
 
 config PCIE_CADENCE_HOST
-	bool
+	tristate
 	depends on OF
 	select IRQ_DOMAIN
 	select PCIE_CADENCE
 
 config PCIE_CADENCE_EP
-	bool
+	tristate
 	depends on OF
 	depends on PCI_ENDPOINT
 	select PCIE_CADENCE
···
 	  different vendors SoCs.
 
 config PCI_J721E
-	bool
+	tristate
+	select PCIE_CADENCE_HOST if PCI_J721E_HOST != n
+	select PCIE_CADENCE_EP if PCI_J721E_EP != n
 
 config PCI_J721E_HOST
-	bool "TI J721E PCIe controller (host mode)"
+	tristate "TI J721E PCIe controller (host mode)"
 	depends on ARCH_K3 || COMPILE_TEST
 	depends on OF
-	select PCIE_CADENCE_HOST
 	select PCI_J721E
 	help
 	  Say Y here if you want to support the TI J721E PCIe platform
···
 	  core.
 
 config PCI_J721E_EP
-	bool "TI J721E PCIe controller (endpoint mode)"
+	tristate "TI J721E PCIe controller (endpoint mode)"
 	depends on ARCH_K3 || COMPILE_TEST
 	depends on OF
 	depends on PCI_ENDPOINT
-	select PCIE_CADENCE_EP
 	select PCI_J721E
 	help
 	  Say Y here if you want to support the TI J721E PCIe platform
drivers/pci/controller/cadence/pci-j721e.c (+32 -8)
···
 #include <linux/irqchip/chained_irq.h>
 #include <linux/irqdomain.h>
 #include <linux/mfd/syscon.h>
+#include <linux/module.h>
 #include <linux/of.h>
 #include <linux/pci.h>
 #include <linux/platform_device.h>
···
 #define cdns_pcie_to_rc(p) container_of(p, struct cdns_pcie_rc, pcie)
 
 #define ENABLE_REG_SYS_2	0x108
+#define ENABLE_CLR_REG_SYS_2	0x308
 #define STATUS_REG_SYS_2	0x508
 #define STATUS_CLR_REG_SYS_2	0x708
 #define LINK_DOWN		BIT(1)
···
 	return IRQ_HANDLED;
 }
 
+static void j721e_pcie_disable_link_irq(struct j721e_pcie *pcie)
+{
+	u32 reg;
+
+	reg = j721e_pcie_intd_readl(pcie, ENABLE_CLR_REG_SYS_2);
+	reg |= pcie->linkdown_irq_regfield;
+	j721e_pcie_intd_writel(pcie, ENABLE_CLR_REG_SYS_2, reg);
+}
+
 static void j721e_pcie_config_link_irq(struct j721e_pcie *pcie)
 {
 	u32 reg;
···
 	u32 reg;
 
 	reg = j721e_pcie_user_readl(pcie, J721E_PCIE_USER_LINKSTATUS);
-	reg &= LINK_STATUS;
-	if (reg == LINK_UP_DL_COMPLETED)
-		return true;
-
-	return false;
+	return (reg & LINK_STATUS) == LINK_UP_DL_COMPLETED;
 }
 
 static const struct cdns_pcie_ops j721e_pcie_ops = {
···
 
 	switch (mode) {
 	case PCI_MODE_RC:
-		if (!IS_ENABLED(CONFIG_PCIE_CADENCE_HOST))
+		if (!IS_ENABLED(CONFIG_PCI_J721E_HOST))
 			return -ENODEV;
 
 		bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc));
···
 		pcie->cdns_pcie = cdns_pcie;
 		break;
 	case PCI_MODE_EP:
-		if (!IS_ENABLED(CONFIG_PCIE_CADENCE_EP))
+		if (!IS_ENABLED(CONFIG_PCI_J721E_EP))
 			return -ENODEV;
 
 		ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
···
 	struct j721e_pcie *pcie = platform_get_drvdata(pdev);
 	struct cdns_pcie *cdns_pcie = pcie->cdns_pcie;
 	struct device *dev = &pdev->dev;
+	struct cdns_pcie_ep *ep;
+	struct cdns_pcie_rc *rc;
+
+	if (pcie->mode == PCI_MODE_RC) {
+		rc = container_of(cdns_pcie, struct cdns_pcie_rc, pcie);
+		cdns_pcie_host_disable(rc);
+	} else {
+		ep = container_of(cdns_pcie, struct cdns_pcie_ep, pcie);
+		cdns_pcie_ep_disable(ep);
+	}
+
+	gpiod_set_value_cansleep(pcie->reset_gpio, 0);
 
 	clk_disable_unprepare(pcie->refclk);
 	cdns_pcie_disable_phy(cdns_pcie);
+	j721e_pcie_disable_link_irq(pcie);
 	pm_runtime_put(dev);
 	pm_runtime_disable(dev);
 }
···
 		.pm	= pm_sleep_ptr(&j721e_pcie_pm_ops),
 	},
 };
-builtin_platform_driver(j721e_pcie_driver);
+module_platform_driver(j721e_pcie_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("PCIe controller driver for TI's J721E and related SoCs");
+MODULE_AUTHOR("Kishon Vijay Abraham I <kishon@ti.com>");
drivers/pci/controller/cadence/pcie-cadence-ep.c (+27 -9)
···
 #include <linux/bitfield.h>
 #include <linux/delay.h>
 #include <linux/kernel.h>
+#include <linux/module.h>
 #include <linux/of.h>
 #include <linux/pci-epc.h>
 #include <linux/platform_device.h>
 #include <linux/sizes.h>
 
 #include "pcie-cadence.h"
+#include "../../pci.h"
 
 #define CDNS_PCIE_EP_MIN_APERTURE		128	/* 128 bytes */
 #define CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE		0x1
···
 	clear_bit(r, &ep->ob_region_map);
 }
 
-static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn, u8 mmc)
+static int cdns_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn, u8 nr_irqs)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 	struct cdns_pcie *pcie = &ep->pcie;
+	u8 mmc = order_base_2(nr_irqs);
 	u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET;
 	u16 flags;
···
 	 */
 	mme = FIELD_GET(PCI_MSI_FLAGS_QSIZE, flags);
 
-	return mme;
+	return 1 << mme;
 }
 
 static int cdns_pcie_ep_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
···
 
 	val &= PCI_MSIX_FLAGS_QSIZE;
 
-	return val;
+	return val + 1;
 }
 
 static int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u8 vfn,
-				 u16 interrupts, enum pci_barno bir,
-				 u32 offset)
+				 u16 nr_irqs, enum pci_barno bir, u32 offset)
 {
 	struct cdns_pcie_ep *ep = epc_get_drvdata(epc);
 	struct cdns_pcie *pcie = &ep->pcie;
···
 	reg = cap + PCI_MSIX_FLAGS;
 	val = cdns_pcie_ep_fn_readw(pcie, fn, reg);
 	val &= ~PCI_MSIX_FLAGS_QSIZE;
-	val |= interrupts;
+	val |= nr_irqs - 1; /* encoded as N-1 */
 	cdns_pcie_ep_fn_writew(pcie, fn, reg, val);
 
 	/* Set MSI-X BAR and offset */
···
 	/* Set PBA BAR and offset.  BAR must match MSI-X BAR */
 	reg = cap + PCI_MSIX_PBA;
-	val = (offset + (interrupts * PCI_MSIX_ENTRY_SIZE)) | bir;
+	val = (offset + (nr_irqs * PCI_MSIX_ENTRY_SIZE)) | bir;
 	cdns_pcie_ep_fn_writel(pcie, fn, reg, val);
 
 	return 0;
···
 
 	if (is_asserted) {
 		ep->irq_pending |= BIT(intx);
-		msg_code = MSG_CODE_ASSERT_INTA + intx;
+		msg_code = PCIE_MSG_CODE_ASSERT_INTA + intx;
 	} else {
 		ep->irq_pending &= ~BIT(intx);
-		msg_code = MSG_CODE_DEASSERT_INTA + intx;
+		msg_code = PCIE_MSG_CODE_DEASSERT_INTA + intx;
 	}
 
 	spin_lock_irqsave(&ep->lock, flags);
···
 	.get_features	= cdns_pcie_ep_get_features,
 };
 
+void cdns_pcie_ep_disable(struct cdns_pcie_ep *ep)
+{
+	struct device *dev = ep->pcie.dev;
+	struct pci_epc *epc = to_pci_epc(dev);
+
+	pci_epc_deinit_notify(epc);
+	pci_epc_mem_free_addr(epc, ep->irq_phys_addr, ep->irq_cpu_addr,
+			      SZ_128K);
+	pci_epc_mem_exit(epc);
+}
+EXPORT_SYMBOL_GPL(cdns_pcie_ep_disable);
 
 int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
 {
···
 
 	return ret;
 }
+EXPORT_SYMBOL_GPL(cdns_pcie_ep_setup);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Cadence PCIe endpoint controller driver");
+MODULE_AUTHOR("Cyrille Pitchen <cyrille.pitchen@free-electrons.com>");
+114 -10
drivers/pci/controller/cadence/pcie-cadence-host.c
··· 5 5 6 6 #include <linux/delay.h> 7 7 #include <linux/kernel.h> 8 + #include <linux/module.h> 8 9 #include <linux/list_sort.h> 9 10 #include <linux/of_address.h> 10 11 #include <linux/of_pci.h> ··· 73 72 74 73 return rc->cfg_base + (where & 0xfff); 75 74 } 75 + EXPORT_SYMBOL_GPL(cdns_pci_map_bus); 76 76 77 77 static struct pci_ops cdns_pcie_host_ops = { 78 78 .map_bus = cdns_pci_map_bus, ··· 152 150 return ret; 153 151 } 154 152 153 + static void cdns_pcie_host_disable_ptm_response(struct cdns_pcie *pcie) 154 + { 155 + u32 val; 156 + 157 + val = cdns_pcie_readl(pcie, CDNS_PCIE_LM_PTM_CTRL); 158 + cdns_pcie_writel(pcie, CDNS_PCIE_LM_PTM_CTRL, val & ~CDNS_PCIE_LM_TPM_CTRL_PTMRSEN); 159 + } 160 + 155 161 static void cdns_pcie_host_enable_ptm_response(struct cdns_pcie *pcie) 156 162 { 157 163 u32 val; ··· 183 173 ret = cdns_pcie_retrain(pcie); 184 174 185 175 return ret; 176 + } 177 + 178 + static void cdns_pcie_host_deinit_root_port(struct cdns_pcie_rc *rc) 179 + { 180 + struct cdns_pcie *pcie = &rc->pcie; 181 + u32 value, ctrl; 182 + 183 + cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, 0xffff); 184 + cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0xff); 185 + cdns_pcie_rp_writeb(pcie, PCI_CLASS_REVISION, 0xff); 186 + cdns_pcie_writel(pcie, CDNS_PCIE_LM_ID, 0xffffffff); 187 + cdns_pcie_rp_writew(pcie, PCI_DEVICE_ID, 0xffff); 188 + ctrl = CDNS_PCIE_LM_BAR_CFG_CTRL_DISABLED; 189 + value = ~(CDNS_PCIE_LM_RC_BAR_CFG_BAR0_CTRL(ctrl) | 190 + CDNS_PCIE_LM_RC_BAR_CFG_BAR1_CTRL(ctrl) | 191 + CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_ENABLE | 192 + CDNS_PCIE_LM_RC_BAR_CFG_PREFETCH_MEM_64BITS | 193 + CDNS_PCIE_LM_RC_BAR_CFG_IO_ENABLE | 194 + CDNS_PCIE_LM_RC_BAR_CFG_IO_32BITS); 195 + cdns_pcie_writel(pcie, CDNS_PCIE_LM_RC_BAR_CFG, value); 186 196 } 187 197 188 198 static int cdns_pcie_host_init_root_port(struct cdns_pcie_rc *rc) ··· 421 391 return resource_size(entry2->res) - resource_size(entry1->res); 422 392 } 423 393 394 + static void cdns_pcie_host_unmap_dma_ranges(struct cdns_pcie_rc 
*rc) 395 + { 396 + struct cdns_pcie *pcie = &rc->pcie; 397 + enum cdns_pcie_rp_bar bar; 398 + u32 value; 399 + 400 + /* Reset inbound configuration for all BARs which were being used */ 401 + for (bar = RP_BAR0; bar <= RP_NO_BAR; bar++) { 402 + if (rc->avail_ib_bar[bar]) 403 + continue; 404 + 405 + cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_RP_BAR_ADDR0(bar), 0); 406 + cdns_pcie_writel(pcie, CDNS_PCIE_AT_IB_RP_BAR_ADDR1(bar), 0); 407 + 408 + if (bar == RP_NO_BAR) 409 + continue; 410 + 411 + value = ~(LM_RC_BAR_CFG_CTRL_MEM_64BITS(bar) | 412 + LM_RC_BAR_CFG_CTRL_PREF_MEM_64BITS(bar) | 413 + LM_RC_BAR_CFG_CTRL_MEM_32BITS(bar) | 414 + LM_RC_BAR_CFG_CTRL_PREF_MEM_32BITS(bar) | 415 + LM_RC_BAR_CFG_APERTURE(bar, bar_aperture_mask[bar] + 2)); 416 + cdns_pcie_writel(pcie, CDNS_PCIE_LM_RC_BAR_CFG, value); 417 + } 418 + } 419 + 424 420 static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc) 425 421 { 426 422 struct cdns_pcie *pcie = &rc->pcie; ··· 482 426 } 483 427 484 428 return 0; 429 + } 430 + 431 + static void cdns_pcie_host_deinit_address_translation(struct cdns_pcie_rc *rc) 432 + { 433 + struct cdns_pcie *pcie = &rc->pcie; 434 + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(rc); 435 + struct resource_entry *entry; 436 + int r; 437 + 438 + cdns_pcie_host_unmap_dma_ranges(rc); 439 + 440 + /* 441 + * Reset outbound region 0 which was reserved for configuration space 442 + * accesses. 
443 + */ 444 + cdns_pcie_reset_outbound_region(pcie, 0); 445 + 446 + /* Reset rest of the outbound regions */ 447 + r = 1; 448 + resource_list_for_each_entry(entry, &bridge->windows) { 449 + cdns_pcie_reset_outbound_region(pcie, r); 450 + r++; 451 + } 485 452 } 486 453 487 454 static int cdns_pcie_host_init_address_translation(struct cdns_pcie_rc *rc) ··· 564 485 return cdns_pcie_host_map_dma_ranges(rc); 565 486 } 566 487 488 + static void cdns_pcie_host_deinit(struct cdns_pcie_rc *rc) 489 + { 490 + cdns_pcie_host_deinit_address_translation(rc); 491 + cdns_pcie_host_deinit_root_port(rc); 492 + } 493 + 567 494 int cdns_pcie_host_init(struct cdns_pcie_rc *rc) 568 495 { 569 496 int err; ··· 579 494 return err; 580 495 581 496 return cdns_pcie_host_init_address_translation(rc); 497 + } 498 + EXPORT_SYMBOL_GPL(cdns_pcie_host_init); 499 + 500 + static void cdns_pcie_host_link_disable(struct cdns_pcie_rc *rc) 501 + { 502 + struct cdns_pcie *pcie = &rc->pcie; 503 + 504 + cdns_pcie_stop_link(pcie); 505 + cdns_pcie_host_disable_ptm_response(pcie); 582 506 } 583 507 584 508 int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc) ··· 613 519 614 520 return 0; 615 521 } 522 + EXPORT_SYMBOL_GPL(cdns_pcie_host_link_setup); 523 + 524 + void cdns_pcie_host_disable(struct cdns_pcie_rc *rc) 525 + { 526 + struct pci_host_bridge *bridge; 527 + 528 + bridge = pci_host_bridge_from_priv(rc); 529 + pci_stop_root_bus(bridge->bus); 530 + pci_remove_root_bus(bridge->bus); 531 + 532 + cdns_pcie_host_deinit(rc); 533 + cdns_pcie_host_link_disable(rc); 534 + } 535 + EXPORT_SYMBOL_GPL(cdns_pcie_host_disable); 616 536 617 537 int cdns_pcie_host_setup(struct cdns_pcie_rc *rc) 618 538 { ··· 678 570 if (!bridge->ops) 679 571 bridge->ops = &cdns_pcie_host_ops; 680 572 681 - ret = pci_host_probe(bridge); 682 - if (ret < 0) 683 - goto err_init; 684 - 685 - return 0; 686 - 687 - err_init: 688 - pm_runtime_put_sync(dev); 689 - 690 - return ret; 573 + return pci_host_probe(bridge); 691 574 } 575 + 
 EXPORT_SYMBOL_GPL(cdns_pcie_host_setup);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Cadence PCIe host controller driver");
+MODULE_AUTHOR("Cyrille Pitchen <cyrille.pitchen@free-electrons.com>");
+12
drivers/pci/controller/cadence/pcie-cadence.c
··· 4 4 // Author: Cyrille Pitchen <cyrille.pitchen@free-electrons.com> 5 5 6 6 #include <linux/kernel.h> 7 + #include <linux/module.h> 7 8 #include <linux/of.h> 8 9 9 10 #include "pcie-cadence.h" ··· 24 23 25 24 cdns_pcie_writel(pcie, CDNS_PCIE_LTSSM_CONTROL_CAP, ltssm_control_cap); 26 25 } 26 + EXPORT_SYMBOL_GPL(cdns_pcie_detect_quiet_min_delay_set); 27 27 28 28 void cdns_pcie_set_outbound_region(struct cdns_pcie *pcie, u8 busnr, u8 fn, 29 29 u32 r, bool is_io, ··· 102 100 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), addr0); 103 101 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), addr1); 104 102 } 103 + EXPORT_SYMBOL_GPL(cdns_pcie_set_outbound_region); 105 104 106 105 void cdns_pcie_set_outbound_region_for_normal_msg(struct cdns_pcie *pcie, 107 106 u8 busnr, u8 fn, ··· 137 134 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), addr0); 138 135 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), addr1); 139 136 } 137 + EXPORT_SYMBOL_GPL(cdns_pcie_set_outbound_region_for_normal_msg); 140 138 141 139 void cdns_pcie_reset_outbound_region(struct cdns_pcie *pcie, u32 r) 142 140 { ··· 150 146 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR0(r), 0); 151 147 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_CPU_ADDR1(r), 0); 152 148 } 149 + EXPORT_SYMBOL_GPL(cdns_pcie_reset_outbound_region); 153 150 154 151 void cdns_pcie_disable_phy(struct cdns_pcie *pcie) 155 152 { ··· 161 156 phy_exit(pcie->phy[i]); 162 157 } 163 158 } 159 + EXPORT_SYMBOL_GPL(cdns_pcie_disable_phy); 164 160 165 161 int cdns_pcie_enable_phy(struct cdns_pcie *pcie) 166 162 { ··· 190 184 191 185 return ret; 192 186 } 187 + EXPORT_SYMBOL_GPL(cdns_pcie_enable_phy); 193 188 194 189 int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie) 195 190 { ··· 250 243 251 244 return ret; 252 245 } 246 + EXPORT_SYMBOL_GPL(cdns_pcie_init_phy); 253 247 254 248 static int cdns_pcie_suspend_noirq(struct device *dev) 255 249 { ··· 279 271 
 	NOIRQ_SYSTEM_SLEEP_PM_OPS(cdns_pcie_suspend_noirq,
 				  cdns_pcie_resume_noirq)
 };
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Cadence PCIe controller driver");
+MODULE_AUTHOR("Cyrille Pitchen <cyrille.pitchen@free-electrons.com>");
+12 -13
drivers/pci/controller/cadence/pcie-cadence.h
···
 struct cdns_pcie;
 
-enum cdns_pcie_msg_code {
-	MSG_CODE_ASSERT_INTA	= 0x20,
-	MSG_CODE_ASSERT_INTB	= 0x21,
-	MSG_CODE_ASSERT_INTC	= 0x22,
-	MSG_CODE_ASSERT_INTD	= 0x23,
-	MSG_CODE_DEASSERT_INTA	= 0x24,
-	MSG_CODE_DEASSERT_INTB	= 0x25,
-	MSG_CODE_DEASSERT_INTC	= 0x26,
-	MSG_CODE_DEASSERT_INTD	= 0x27,
-};
-
 enum cdns_pcie_msg_routing {
 	/* Route to Root Complex */
 	MSG_ROUTING_TO_RC,
···
 	return true;
 }
 
-#ifdef CONFIG_PCIE_CADENCE_HOST
+#if IS_ENABLED(CONFIG_PCIE_CADENCE_HOST)
 int cdns_pcie_host_link_setup(struct cdns_pcie_rc *rc);
 int cdns_pcie_host_init(struct cdns_pcie_rc *rc);
 int cdns_pcie_host_setup(struct cdns_pcie_rc *rc);
+void cdns_pcie_host_disable(struct cdns_pcie_rc *rc);
 void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
 			       int where);
 #else
···
 	return 0;
 }
 
+static inline void cdns_pcie_host_disable(struct cdns_pcie_rc *rc)
+{
+}
+
 static inline void __iomem *cdns_pci_map_bus(struct pci_bus *bus, unsigned int devfn,
 					     int where)
 {
···
 }
 #endif
 
-#ifdef CONFIG_PCIE_CADENCE_EP
+#if IS_ENABLED(CONFIG_PCIE_CADENCE_EP)
 int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep);
+void cdns_pcie_ep_disable(struct cdns_pcie_ep *ep);
 #else
 static inline int cdns_pcie_ep_setup(struct cdns_pcie_ep *ep)
 {
 	return 0;
+}
+
+static inline void cdns_pcie_ep_disable(struct cdns_pcie_ep *ep)
+{
 }
 #endif
+2 -2
drivers/pci/controller/dwc/pci-dra7xx.c
···
 	return cpu_addr & DRA7XX_CPU_TO_BUS_ADDR;
 }
 
-static int dra7xx_pcie_link_up(struct dw_pcie *pci)
+static bool dra7xx_pcie_link_up(struct dw_pcie *pci)
 {
 	struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci);
 	u32 reg = dra7xx_pcie_readl(dra7xx, PCIECTRL_DRA7XX_CONF_PHY_CS);
 
-	return !!(reg & LINK_UP);
+	return reg & LINK_UP;
 }
 
 static void dra7xx_pcie_stop_link(struct dw_pcie *pci)
+2 -2
drivers/pci/controller/dwc/pci-exynos.c
···
 	.write = exynos_pcie_wr_own_conf,
 };
 
-static int exynos_pcie_link_up(struct dw_pcie *pci)
+static bool exynos_pcie_link_up(struct dw_pcie *pci)
 {
 	struct exynos_pcie *ep = to_exynos_pcie(pci);
 	u32 val = exynos_pcie_readl(ep->elbi_base, PCIE_ELBI_RDLH_LINKUP);
 
-	return (val & PCIE_ELBI_XMLH_LINKUP);
+	return val & PCIE_ELBI_XMLH_LINKUP;
 }
 
 static int exynos_pcie_host_init(struct dw_pcie_rp *pp)
+182 -31
drivers/pci/controller/dwc/pci-imx6.c
··· 45 45 #define IMX95_PCIE_PHY_GEN_CTRL 0x0 46 46 #define IMX95_PCIE_REF_USE_PAD BIT(17) 47 47 48 + #define IMX95_PCIE_PHY_MPLLA_CTRL 0x10 49 + #define IMX95_PCIE_PHY_MPLL_STATE BIT(30) 50 + 48 51 #define IMX95_PCIE_SS_RW_REG_0 0xf0 49 52 #define IMX95_PCIE_REF_CLKEN BIT(23) 50 53 #define IMX95_PCIE_PHY_CR_PARA_SEL BIT(9) 54 + #define IMX95_PCIE_SS_RW_REG_1 0xf4 55 + #define IMX95_PCIE_SYS_AUX_PWR_DET BIT(31) 51 56 52 57 #define IMX95_PE0_GEN_CTRL_1 0x1050 53 58 #define IMX95_PCIE_DEVICE_TYPE GENMASK(3, 0) ··· 76 71 #define IMX95_SID_MASK GENMASK(5, 0) 77 72 #define IMX95_MAX_LUT 32 78 73 74 + #define IMX95_PCIE_RST_CTRL 0x3010 75 + #define IMX95_PCIE_COLD_RST BIT(0) 76 + 79 77 #define to_imx_pcie(x) dev_get_drvdata((x)->dev) 80 78 81 79 enum imx_pcie_variants { ··· 99 91 }; 100 92 101 93 #define IMX_PCIE_FLAG_IMX_PHY BIT(0) 102 - #define IMX_PCIE_FLAG_IMX_SPEED_CHANGE BIT(1) 94 + #define IMX_PCIE_FLAG_SPEED_CHANGE_WORKAROUND BIT(1) 103 95 #define IMX_PCIE_FLAG_SUPPORTS_SUSPEND BIT(2) 104 96 #define IMX_PCIE_FLAG_HAS_PHYDRV BIT(3) 105 97 #define IMX_PCIE_FLAG_HAS_APP_RESET BIT(4) ··· 113 105 */ 114 106 #define IMX_PCIE_FLAG_BROKEN_SUSPEND BIT(9) 115 107 #define IMX_PCIE_FLAG_HAS_LUT BIT(10) 108 + #define IMX_PCIE_FLAG_8GT_ECN_ERR051586 BIT(11) 116 109 117 110 #define imx_check_flag(pci, val) (pci->drvdata->flags & val) 118 111 ··· 135 126 int (*init_phy)(struct imx_pcie *pcie); 136 127 int (*enable_ref_clk)(struct imx_pcie *pcie, bool enable); 137 128 int (*core_reset)(struct imx_pcie *pcie, bool assert); 129 + int (*wait_pll_lock)(struct imx_pcie *pcie); 138 130 const struct dw_pcie_host_ops *ops; 131 + }; 132 + 133 + struct imx_lut_data { 134 + u32 data1; 135 + u32 data2; 139 136 }; 140 137 141 138 struct imx_pcie { ··· 163 148 struct regulator *vph; 164 149 void __iomem *phy_base; 165 150 151 + /* LUT data for pcie */ 152 + struct imx_lut_data luts[IMX95_MAX_LUT]; 166 153 /* power domain for pcie */ 167 154 struct device *pd_pcie; 168 155 /* power domain for 
pcie phy */ ··· 241 224 242 225 static int imx95_pcie_init_phy(struct imx_pcie *imx_pcie) 243 226 { 227 + /* 228 + * ERR051624: The Controller Without Vaux Cannot Exit L23 Ready 229 + * Through Beacon or PERST# De-assertion 230 + * 231 + * When the auxiliary power is not available, the controller 232 + * cannot exit from L23 Ready with beacon or PERST# de-assertion 233 + * when main power is not removed. 234 + * 235 + * Workaround: Set SS_RW_REG_1[SYS_AUX_PWR_DET] to 1. 236 + */ 237 + regmap_set_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_SS_RW_REG_1, 238 + IMX95_PCIE_SYS_AUX_PWR_DET); 239 + 244 240 regmap_update_bits(imx_pcie->iomuxc_gpr, 245 241 IMX95_PCIE_SS_RW_REG_0, 246 242 IMX95_PCIE_PHY_CR_PARA_SEL, ··· 488 458 PHY_PLL_LOCK_WAIT_USLEEP_MAX, 489 459 PHY_PLL_LOCK_WAIT_TIMEOUT)) 490 460 dev_err(dev, "PCIe PLL lock timeout\n"); 461 + } 462 + 463 + static int imx95_pcie_wait_for_phy_pll_lock(struct imx_pcie *imx_pcie) 464 + { 465 + u32 val; 466 + struct device *dev = imx_pcie->pci->dev; 467 + 468 + if (regmap_read_poll_timeout(imx_pcie->iomuxc_gpr, 469 + IMX95_PCIE_PHY_MPLLA_CTRL, val, 470 + val & IMX95_PCIE_PHY_MPLL_STATE, 471 + PHY_PLL_LOCK_WAIT_USLEEP_MAX, 472 + PHY_PLL_LOCK_WAIT_TIMEOUT)) { 473 + dev_err(dev, "PCIe PLL lock timeout\n"); 474 + return -ETIMEDOUT; 475 + } 476 + 477 + return 0; 491 478 } 492 479 493 480 static int imx_setup_phy_mpll(struct imx_pcie *imx_pcie) ··· 820 773 return 0; 821 774 } 822 775 776 + static int imx95_pcie_core_reset(struct imx_pcie *imx_pcie, bool assert) 777 + { 778 + u32 val; 779 + 780 + if (assert) { 781 + /* 782 + * From i.MX95 PCIe PHY perspective, the COLD reset toggle 783 + * should be complete after power-up by the following sequence. 784 + * > 10us(at power-up) 785 + * > 10ns(warm reset) 786 + * |<------------>| 787 + * ______________ 788 + * phy_reset ____/ \________________ 789 + * ____________ 790 + * ref_clk_en_______________________/ 791 + * Toggle COLD reset aligned with this sequence for i.MX95 PCIe. 
792 + */ 793 + regmap_set_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_RST_CTRL, 794 + IMX95_PCIE_COLD_RST); 795 + /* 796 + * Make sure the write to IMX95_PCIE_RST_CTRL is flushed to the 797 + * hardware by doing a read. Otherwise, there is no guarantee 798 + * that the write has reached the hardware before udelay(). 799 + */ 800 + regmap_read_bypassed(imx_pcie->iomuxc_gpr, IMX95_PCIE_RST_CTRL, 801 + &val); 802 + udelay(15); 803 + regmap_clear_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_RST_CTRL, 804 + IMX95_PCIE_COLD_RST); 805 + regmap_read_bypassed(imx_pcie->iomuxc_gpr, IMX95_PCIE_RST_CTRL, 806 + &val); 807 + udelay(10); 808 + } 809 + 810 + return 0; 811 + } 812 + 823 813 static void imx_pcie_assert_core_reset(struct imx_pcie *imx_pcie) 824 814 { 825 815 reset_control_assert(imx_pcie->pciephy_reset); ··· 944 860 u32 tmp; 945 861 int ret; 946 862 863 + if (!(imx_pcie->drvdata->flags & 864 + IMX_PCIE_FLAG_SPEED_CHANGE_WORKAROUND)) { 865 + imx_pcie_ltssm_enable(dev); 866 + return 0; 867 + } 868 + 947 869 /* 948 870 * Force Gen1 operation when starting the link. In case the link is 949 871 * started in Gen2 mode, there is a possibility the devices on the ··· 965 875 /* Start LTSSM. */ 966 876 imx_pcie_ltssm_enable(dev); 967 877 968 - ret = dw_pcie_wait_for_link(pci); 969 - if (ret) 970 - goto err_reset_phy; 971 - 972 878 if (pci->max_link_speed > 1) { 879 + ret = dw_pcie_wait_for_link(pci); 880 + if (ret) 881 + goto err_reset_phy; 882 + 973 883 /* Allow faster modes after the link is up */ 974 884 dw_pcie_dbi_ro_wr_en(pci); 975 885 tmp = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP); ··· 986 896 dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, tmp); 987 897 dw_pcie_dbi_ro_wr_dis(pci); 988 898 989 - if (imx_pcie->drvdata->flags & 990 - IMX_PCIE_FLAG_IMX_SPEED_CHANGE) { 991 - 992 - /* 993 - * On i.MX7, DIRECT_SPEED_CHANGE behaves differently 994 - * from i.MX6 family when no link speed transition 995 - * occurs and we go Gen1 -> yep, Gen1. 
The difference 996 - * is that, in such case, it will not be cleared by HW 997 - * which will cause the following code to report false 998 - * failure. 999 - */ 1000 - ret = imx_pcie_wait_for_speed_change(imx_pcie); 1001 - if (ret) { 1002 - dev_err(dev, "Failed to bring link up!\n"); 1003 - goto err_reset_phy; 1004 - } 1005 - } 1006 - 1007 - /* Make sure link training is finished as well! */ 1008 - ret = dw_pcie_wait_for_link(pci); 1009 - if (ret) 899 + ret = imx_pcie_wait_for_speed_change(imx_pcie); 900 + if (ret) { 901 + dev_err(dev, "Failed to bring link up!\n"); 1010 902 goto err_reset_phy; 903 + } 1011 904 } else { 1012 905 dev_info(dev, "Link: Only Gen1 is enabled\n"); 1013 906 } 1014 907 1015 - tmp = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA); 1016 - dev_info(dev, "Link up, Gen%i\n", tmp & PCI_EXP_LNKSTA_CLS); 1017 908 return 0; 1018 909 1019 910 err_reset_phy: ··· 1253 1182 goto err_phy_off; 1254 1183 } 1255 1184 1185 + if (imx_pcie->drvdata->wait_pll_lock) { 1186 + ret = imx_pcie->drvdata->wait_pll_lock(imx_pcie); 1187 + if (ret < 0) 1188 + goto err_phy_off; 1189 + } 1190 + 1256 1191 imx_setup_phy_mpll(imx_pcie); 1257 1192 1258 1193 return 0; ··· 1291 1214 regulator_disable(imx_pcie->vpcie); 1292 1215 } 1293 1216 1217 + static void imx_pcie_host_post_init(struct dw_pcie_rp *pp) 1218 + { 1219 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 1220 + struct imx_pcie *imx_pcie = to_imx_pcie(pci); 1221 + u32 val; 1222 + 1223 + if (imx_pcie->drvdata->flags & IMX_PCIE_FLAG_8GT_ECN_ERR051586) { 1224 + /* 1225 + * ERR051586: Compliance with 8GT/s Receiver Impedance ECN 1226 + * 1227 + * The default value of GEN3_RELATED_OFF[GEN3_ZRXDC_NONCOMPL] 1228 + * is 1 which makes receiver non-compliant with the ZRX-DC 1229 + * parameter for 2.5 GT/s when operating at 8 GT/s or higher. 1230 + * It causes unnecessary timeout in L1. 1231 + * 1232 + * Workaround: Program GEN3_RELATED_OFF[GEN3_ZRXDC_NONCOMPL] 1233 + * to 0. 
1234 + */ 1235 + dw_pcie_dbi_ro_wr_en(pci); 1236 + val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); 1237 + val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL; 1238 + dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val); 1239 + dw_pcie_dbi_ro_wr_dis(pci); 1240 + } 1241 + } 1242 + 1294 1243 /* 1295 1244 * In old DWC implementations, PCIE_ATU_INHIBIT_PAYLOAD in iATU Ctrl2 1296 1245 * register is reserved, so the generic DWC implementation of sending the ··· 1342 1239 static const struct dw_pcie_host_ops imx_pcie_host_dw_pme_ops = { 1343 1240 .init = imx_pcie_host_init, 1344 1241 .deinit = imx_pcie_host_exit, 1242 + .post_init = imx_pcie_host_post_init, 1345 1243 }; 1346 1244 1347 1245 static const struct dw_pcie_ops dw_pcie_ops = { ··· 1454 1350 dev_err(dev, "failed to initialize endpoint\n"); 1455 1351 return ret; 1456 1352 } 1353 + imx_pcie_host_post_init(pp); 1457 1354 1458 1355 ret = dw_pcie_ep_init_registers(ep); 1459 1356 if (ret) { ··· 1491 1386 } 1492 1387 } 1493 1388 1389 + static void imx_pcie_lut_save(struct imx_pcie *imx_pcie) 1390 + { 1391 + u32 data1, data2; 1392 + int i; 1393 + 1394 + for (i = 0; i < IMX95_MAX_LUT; i++) { 1395 + regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_ACSCTRL, 1396 + IMX95_PEO_LUT_RWA | i); 1397 + regmap_read(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA1, &data1); 1398 + regmap_read(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA2, &data2); 1399 + if (data1 & IMX95_PE0_LUT_VLD) { 1400 + imx_pcie->luts[i].data1 = data1; 1401 + imx_pcie->luts[i].data2 = data2; 1402 + } else { 1403 + imx_pcie->luts[i].data1 = 0; 1404 + imx_pcie->luts[i].data2 = 0; 1405 + } 1406 + } 1407 + } 1408 + 1409 + static void imx_pcie_lut_restore(struct imx_pcie *imx_pcie) 1410 + { 1411 + int i; 1412 + 1413 + for (i = 0; i < IMX95_MAX_LUT; i++) { 1414 + if ((imx_pcie->luts[i].data1 & IMX95_PE0_LUT_VLD) == 0) 1415 + continue; 1416 + 1417 + regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA1, 1418 + imx_pcie->luts[i].data1); 1419 + regmap_write(imx_pcie->iomuxc_gpr, 
IMX95_PE0_LUT_DATA2, 1420 + imx_pcie->luts[i].data2); 1421 + regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_ACSCTRL, i); 1422 + } 1423 + } 1424 + 1494 1425 static int imx_pcie_suspend_noirq(struct device *dev) 1495 1426 { 1496 1427 struct imx_pcie *imx_pcie = dev_get_drvdata(dev); ··· 1535 1394 return 0; 1536 1395 1537 1396 imx_pcie_msi_save_restore(imx_pcie, true); 1397 + if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_HAS_LUT)) 1398 + imx_pcie_lut_save(imx_pcie); 1538 1399 if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_BROKEN_SUSPEND)) { 1539 1400 /* 1540 1401 * The minimum for a workaround would be to set PERST# and to ··· 1581 1438 if (ret) 1582 1439 return ret; 1583 1440 } 1441 + if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_HAS_LUT)) 1442 + imx_pcie_lut_restore(imx_pcie); 1584 1443 imx_pcie_msi_save_restore(imx_pcie, false); 1585 1444 1586 1445 return 0; ··· 1794 1649 [IMX6Q] = { 1795 1650 .variant = IMX6Q, 1796 1651 .flags = IMX_PCIE_FLAG_IMX_PHY | 1797 - IMX_PCIE_FLAG_IMX_SPEED_CHANGE | 1652 + IMX_PCIE_FLAG_SPEED_CHANGE_WORKAROUND | 1798 1653 IMX_PCIE_FLAG_BROKEN_SUSPEND | 1799 1654 IMX_PCIE_FLAG_SUPPORTS_SUSPEND, 1800 1655 .dbi_length = 0x200, ··· 1810 1665 [IMX6SX] = { 1811 1666 .variant = IMX6SX, 1812 1667 .flags = IMX_PCIE_FLAG_IMX_PHY | 1813 - IMX_PCIE_FLAG_IMX_SPEED_CHANGE | 1668 + IMX_PCIE_FLAG_SPEED_CHANGE_WORKAROUND | 1814 1669 IMX_PCIE_FLAG_SUPPORTS_SUSPEND, 1815 1670 .gpr = "fsl,imx6q-iomuxc-gpr", 1816 1671 .ltssm_off = IOMUXC_GPR12, ··· 1825 1680 [IMX6QP] = { 1826 1681 .variant = IMX6QP, 1827 1682 .flags = IMX_PCIE_FLAG_IMX_PHY | 1828 - IMX_PCIE_FLAG_IMX_SPEED_CHANGE | 1683 + IMX_PCIE_FLAG_SPEED_CHANGE_WORKAROUND | 1829 1684 IMX_PCIE_FLAG_SUPPORTS_SUSPEND, 1830 1685 .dbi_length = 0x200, 1831 1686 .gpr = "fsl,imx6q-iomuxc-gpr", ··· 1892 1747 .variant = IMX95, 1893 1748 .flags = IMX_PCIE_FLAG_HAS_SERDES | 1894 1749 IMX_PCIE_FLAG_HAS_LUT | 1750 + IMX_PCIE_FLAG_8GT_ECN_ERR051586 | 1895 1751 IMX_PCIE_FLAG_SUPPORTS_SUSPEND, 1896 1752 .ltssm_off = 
IMX95_PE0_GEN_CTRL_3, 1897 1753 .ltssm_mask = IMX95_PCIE_LTSSM_EN, 1898 1754 .mode_off[0] = IMX95_PE0_GEN_CTRL_1, 1899 1755 .mode_mask[0] = IMX95_PCIE_DEVICE_TYPE, 1756 + .core_reset = imx95_pcie_core_reset, 1900 1757 .init_phy = imx95_pcie_init_phy, 1758 + .wait_pll_lock = imx95_pcie_wait_for_phy_pll_lock, 1901 1759 }, 1902 1760 [IMX8MQ_EP] = { 1903 1761 .variant = IMX8MQ_EP, ··· 1947 1799 [IMX95_EP] = { 1948 1800 .variant = IMX95_EP, 1949 1801 .flags = IMX_PCIE_FLAG_HAS_SERDES | 1802 + IMX_PCIE_FLAG_8GT_ECN_ERR051586 | 1950 1803 IMX_PCIE_FLAG_SUPPORT_64BIT, 1951 1804 .ltssm_off = IMX95_PE0_GEN_CTRL_3, 1952 1805 .ltssm_mask = IMX95_PCIE_LTSSM_EN, 1953 1806 .mode_off[0] = IMX95_PE0_GEN_CTRL_1, 1954 1807 .mode_mask[0] = IMX95_PCIE_DEVICE_TYPE, 1955 1808 .init_phy = imx95_pcie_init_phy, 1809 + .core_reset = imx95_pcie_core_reset, 1810 + .wait_pll_lock = imx95_pcie_wait_for_phy_pll_lock, 1956 1811 .epc_features = &imx95_pcie_epc_features, 1957 1812 .mode = DW_PCIE_EP_TYPE, 1958 1813 },
+2 -3
drivers/pci/controller/dwc/pci-keystone.c
···
  * @pci: A pointer to the dw_pcie structure which holds the DesignWare PCIe host
  *	 controller driver information.
  */
-static int ks_pcie_link_up(struct dw_pcie *pci)
+static bool ks_pcie_link_up(struct dw_pcie *pci)
 {
 	u32 val;
 
 	val = dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG0);
-	val &= PORT_LOGIC_LTSSM_STATE_MASK;
-	return (val == PORT_LOGIC_LTSSM_STATE_L0);
+	return (val & PORT_LOGIC_LTSSM_STATE_MASK) == PORT_LOGIC_LTSSM_STATE_L0;
 }
 
 static void ks_pcie_stop_link(struct dw_pcie *pci)
+3 -3
drivers/pci/controller/dwc/pci-meson.c
···
 	.write = pci_generic_config_write,
 };
 
-static int meson_pcie_link_up(struct dw_pcie *pci)
+static bool meson_pcie_link_up(struct dw_pcie *pci)
 {
 	struct meson_pcie *mp = to_meson_pcie(pci);
 	struct device *dev = pci->dev;
···
 		dev_dbg(dev, "speed_okay\n");
 
 		if (smlh_up && rdlh_up && ltssm_up && speed_okay)
-			return 1;
+			return true;
 
 		cnt++;
···
 	} while (cnt < WAIT_LINKUP_TIMEOUT);
 
 	dev_err(dev, "error: wait linkup timeout\n");
-	return 0;
+	return false;
 }
 
 static int meson_pcie_host_init(struct dw_pcie_rp *pp)
+3 -3
drivers/pci/controller/dwc/pcie-armada8k.c
···
 	return ret;
 }
 
-static int armada8k_pcie_link_up(struct dw_pcie *pci)
+static bool armada8k_pcie_link_up(struct dw_pcie *pci)
 {
 	u32 reg;
 	u32 mask = PCIE_GLB_STS_RDLH_LINK_UP | PCIE_GLB_STS_PHY_LINK_UP;
···
 	reg = dw_pcie_readl_dbi(pci, PCIE_GLOBAL_STATUS_REG);
 
 	if ((reg & mask) == mask)
-		return 1;
+		return true;
 
 	dev_dbg(pci->dev, "No link detected (Global-Status: 0x%08x).\n", reg);
-	return 0;
+	return false;
 }
 
 static int armada8k_pcie_start_link(struct dw_pcie *pci)
+251 -1
drivers/pci/controller/dwc/pcie-designware-debugfs.c
··· 642 642 &dwc_pcie_ltssm_status_ops); 643 643 } 644 644 645 + static int dw_pcie_ptm_check_capability(void *drvdata) 646 + { 647 + struct dw_pcie *pci = drvdata; 648 + 649 + pci->ptm_vsec_offset = dw_pcie_find_ptm_capability(pci); 650 + 651 + return pci->ptm_vsec_offset; 652 + } 653 + 654 + static int dw_pcie_ptm_context_update_write(void *drvdata, u8 mode) 655 + { 656 + struct dw_pcie *pci = drvdata; 657 + u32 val; 658 + 659 + if (mode == PCIE_PTM_CONTEXT_UPDATE_AUTO) { 660 + val = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_RES_REQ_CTRL); 661 + val |= PTM_REQ_AUTO_UPDATE_ENABLED; 662 + dw_pcie_writel_dbi(pci, pci->ptm_vsec_offset + PTM_RES_REQ_CTRL, val); 663 + } else if (mode == PCIE_PTM_CONTEXT_UPDATE_MANUAL) { 664 + val = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_RES_REQ_CTRL); 665 + val &= ~PTM_REQ_AUTO_UPDATE_ENABLED; 666 + val |= PTM_REQ_START_UPDATE; 667 + dw_pcie_writel_dbi(pci, pci->ptm_vsec_offset + PTM_RES_REQ_CTRL, val); 668 + } else { 669 + return -EINVAL; 670 + } 671 + 672 + return 0; 673 + } 674 + 675 + static int dw_pcie_ptm_context_update_read(void *drvdata, u8 *mode) 676 + { 677 + struct dw_pcie *pci = drvdata; 678 + u32 val; 679 + 680 + val = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_RES_REQ_CTRL); 681 + if (FIELD_GET(PTM_REQ_AUTO_UPDATE_ENABLED, val)) 682 + *mode = PCIE_PTM_CONTEXT_UPDATE_AUTO; 683 + else 684 + /* 685 + * PTM_REQ_START_UPDATE is a self clearing register bit. So if 686 + * PTM_REQ_AUTO_UPDATE_ENABLED is not set, then it implies that 687 + * manual update is used. 
688 + */ 689 + *mode = PCIE_PTM_CONTEXT_UPDATE_MANUAL; 690 + 691 + return 0; 692 + } 693 + 694 + static int dw_pcie_ptm_context_valid_write(void *drvdata, bool valid) 695 + { 696 + struct dw_pcie *pci = drvdata; 697 + u32 val; 698 + 699 + if (valid) { 700 + val = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_RES_REQ_CTRL); 701 + val |= PTM_RES_CCONTEXT_VALID; 702 + dw_pcie_writel_dbi(pci, pci->ptm_vsec_offset + PTM_RES_REQ_CTRL, val); 703 + } else { 704 + val = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_RES_REQ_CTRL); 705 + val &= ~PTM_RES_CCONTEXT_VALID; 706 + dw_pcie_writel_dbi(pci, pci->ptm_vsec_offset + PTM_RES_REQ_CTRL, val); 707 + } 708 + 709 + return 0; 710 + } 711 + 712 + static int dw_pcie_ptm_context_valid_read(void *drvdata, bool *valid) 713 + { 714 + struct dw_pcie *pci = drvdata; 715 + u32 val; 716 + 717 + val = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_RES_REQ_CTRL); 718 + *valid = !!FIELD_GET(PTM_RES_CCONTEXT_VALID, val); 719 + 720 + return 0; 721 + } 722 + 723 + static int dw_pcie_ptm_local_clock_read(void *drvdata, u64 *clock) 724 + { 725 + struct dw_pcie *pci = drvdata; 726 + u32 msb, lsb; 727 + 728 + do { 729 + msb = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_LOCAL_MSB); 730 + lsb = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_LOCAL_LSB); 731 + } while (msb != dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_LOCAL_MSB)); 732 + 733 + *clock = ((u64) msb) << 32 | lsb; 734 + 735 + return 0; 736 + } 737 + 738 + static int dw_pcie_ptm_master_clock_read(void *drvdata, u64 *clock) 739 + { 740 + struct dw_pcie *pci = drvdata; 741 + u32 msb, lsb; 742 + 743 + do { 744 + msb = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_MASTER_MSB); 745 + lsb = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_MASTER_LSB); 746 + } while (msb != dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_MASTER_MSB)); 747 + 748 + *clock = ((u64) msb) << 32 | lsb; 749 + 750 + return 0; 751 + } 752 + 753 + static int dw_pcie_ptm_t1_read(void 
*drvdata, u64 *clock) 754 + { 755 + struct dw_pcie *pci = drvdata; 756 + u32 msb, lsb; 757 + 758 + do { 759 + msb = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_T1_T2_MSB); 760 + lsb = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_T1_T2_LSB); 761 + } while (msb != dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_T1_T2_MSB)); 762 + 763 + *clock = ((u64) msb) << 32 | lsb; 764 + 765 + return 0; 766 + } 767 + 768 + static int dw_pcie_ptm_t2_read(void *drvdata, u64 *clock) 769 + { 770 + struct dw_pcie *pci = drvdata; 771 + u32 msb, lsb; 772 + 773 + do { 774 + msb = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_T1_T2_MSB); 775 + lsb = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_T1_T2_LSB); 776 + } while (msb != dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_T1_T2_MSB)); 777 + 778 + *clock = ((u64) msb) << 32 | lsb; 779 + 780 + return 0; 781 + } 782 + 783 + static int dw_pcie_ptm_t3_read(void *drvdata, u64 *clock) 784 + { 785 + struct dw_pcie *pci = drvdata; 786 + u32 msb, lsb; 787 + 788 + do { 789 + msb = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_T3_T4_MSB); 790 + lsb = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_T3_T4_LSB); 791 + } while (msb != dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_T3_T4_MSB)); 792 + 793 + *clock = ((u64) msb) << 32 | lsb; 794 + 795 + return 0; 796 + } 797 + 798 + static int dw_pcie_ptm_t4_read(void *drvdata, u64 *clock) 799 + { 800 + struct dw_pcie *pci = drvdata; 801 + u32 msb, lsb; 802 + 803 + do { 804 + msb = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_T3_T4_MSB); 805 + lsb = dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_T3_T4_LSB); 806 + } while (msb != dw_pcie_readl_dbi(pci, pci->ptm_vsec_offset + PTM_T3_T4_MSB)); 807 + 808 + *clock = ((u64) msb) << 32 | lsb; 809 + 810 + return 0; 811 + } 812 + 813 + static bool dw_pcie_ptm_context_update_visible(void *drvdata) 814 + { 815 + struct dw_pcie *pci = drvdata; 816 + 817 + return (pci->mode == DW_PCIE_EP_TYPE) ? 
true : false; 818 + } 819 + 820 + static bool dw_pcie_ptm_context_valid_visible(void *drvdata) 821 + { 822 + struct dw_pcie *pci = drvdata; 823 + 824 + return (pci->mode == DW_PCIE_RC_TYPE) ? true : false; 825 + } 826 + 827 + static bool dw_pcie_ptm_local_clock_visible(void *drvdata) 828 + { 829 + /* PTM local clock is always visible */ 830 + return true; 831 + } 832 + 833 + static bool dw_pcie_ptm_master_clock_visible(void *drvdata) 834 + { 835 + struct dw_pcie *pci = drvdata; 836 + 837 + return (pci->mode == DW_PCIE_EP_TYPE) ? true : false; 838 + } 839 + 840 + static bool dw_pcie_ptm_t1_visible(void *drvdata) 841 + { 842 + struct dw_pcie *pci = drvdata; 843 + 844 + return (pci->mode == DW_PCIE_EP_TYPE) ? true : false; 845 + } 846 + 847 + static bool dw_pcie_ptm_t2_visible(void *drvdata) 848 + { 849 + struct dw_pcie *pci = drvdata; 850 + 851 + return (pci->mode == DW_PCIE_RC_TYPE) ? true : false; 852 + } 853 + 854 + static bool dw_pcie_ptm_t3_visible(void *drvdata) 855 + { 856 + struct dw_pcie *pci = drvdata; 857 + 858 + return (pci->mode == DW_PCIE_RC_TYPE) ? true : false; 859 + } 860 + 861 + static bool dw_pcie_ptm_t4_visible(void *drvdata) 862 + { 863 + struct dw_pcie *pci = drvdata; 864 + 865 + return (pci->mode == DW_PCIE_EP_TYPE) ? 
true : false; 866 + } 867 + 868 + const struct pcie_ptm_ops dw_pcie_ptm_ops = { 869 + .check_capability = dw_pcie_ptm_check_capability, 870 + .context_update_write = dw_pcie_ptm_context_update_write, 871 + .context_update_read = dw_pcie_ptm_context_update_read, 872 + .context_valid_write = dw_pcie_ptm_context_valid_write, 873 + .context_valid_read = dw_pcie_ptm_context_valid_read, 874 + .local_clock_read = dw_pcie_ptm_local_clock_read, 875 + .master_clock_read = dw_pcie_ptm_master_clock_read, 876 + .t1_read = dw_pcie_ptm_t1_read, 877 + .t2_read = dw_pcie_ptm_t2_read, 878 + .t3_read = dw_pcie_ptm_t3_read, 879 + .t4_read = dw_pcie_ptm_t4_read, 880 + .context_update_visible = dw_pcie_ptm_context_update_visible, 881 + .context_valid_visible = dw_pcie_ptm_context_valid_visible, 882 + .local_clock_visible = dw_pcie_ptm_local_clock_visible, 883 + .master_clock_visible = dw_pcie_ptm_master_clock_visible, 884 + .t1_visible = dw_pcie_ptm_t1_visible, 885 + .t2_visible = dw_pcie_ptm_t2_visible, 886 + .t3_visible = dw_pcie_ptm_t3_visible, 887 + .t4_visible = dw_pcie_ptm_t4_visible, 888 + }; 889 + 645 890 void dwc_pcie_debugfs_deinit(struct dw_pcie *pci) 646 891 { 647 892 if (!pci->debugfs) 648 893 return; 649 894 895 + pcie_ptm_destroy_debugfs(pci->ptm_debugfs); 650 896 dwc_pcie_rasdes_debugfs_deinit(pci); 651 897 debugfs_remove_recursive(pci->debugfs->debug_dir); 652 898 } 653 899 654 - void dwc_pcie_debugfs_init(struct dw_pcie *pci) 900 + void dwc_pcie_debugfs_init(struct dw_pcie *pci, enum dw_pcie_device_mode mode) 655 901 { 656 902 char dirname[DWC_DEBUGFS_BUF_MAX]; 657 903 struct device *dev = pci->dev; ··· 920 674 err); 921 675 922 676 dwc_pcie_ltssm_debugfs_init(pci, dir); 677 + 678 + pci->mode = mode; 679 + pci->ptm_debugfs = pcie_ptm_create_debugfs(pci->dev, pci, 680 + &dw_pcie_ptm_ops); 923 681 }
+15 -15
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 256 256 return offset; 257 257 258 258 reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 259 - nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >> PCI_REBAR_CTRL_NBAR_SHIFT; 259 + nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, reg); 260 260 261 261 for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL) { 262 262 reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 263 - bar_index = reg & PCI_REBAR_CTRL_BAR_IDX; 263 + bar_index = FIELD_GET(PCI_REBAR_CTRL_BAR_IDX, reg); 264 264 if (bar_index == bar) 265 265 return offset; 266 266 } ··· 532 532 533 533 val = FIELD_GET(PCI_MSI_FLAGS_QSIZE, val); 534 534 535 - return val; 535 + return 1 << val; 536 536 } 537 537 538 538 static int dw_pcie_ep_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 539 - u8 interrupts) 539 + u8 nr_irqs) 540 540 { 541 541 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 542 542 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 543 543 struct dw_pcie_ep_func *ep_func; 544 + u8 mmc = order_base_2(nr_irqs); 544 545 u32 val, reg; 545 546 546 547 ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no); ··· 551 550 reg = ep_func->msi_cap + PCI_MSI_FLAGS; 552 551 val = dw_pcie_ep_readw_dbi(ep, func_no, reg); 553 552 val &= ~PCI_MSI_FLAGS_QMASK; 554 - val |= FIELD_PREP(PCI_MSI_FLAGS_QMASK, interrupts); 553 + val |= FIELD_PREP(PCI_MSI_FLAGS_QMASK, mmc); 555 554 dw_pcie_dbi_ro_wr_en(pci); 556 555 dw_pcie_ep_writew_dbi(ep, func_no, reg, val); 557 556 dw_pcie_dbi_ro_wr_dis(pci); ··· 576 575 577 576 val &= PCI_MSIX_FLAGS_QSIZE; 578 577 579 - return val; 578 + return val + 1; 580 579 } 581 580 582 581 static int dw_pcie_ep_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 583 - u16 interrupts, enum pci_barno bir, u32 offset) 582 + u16 nr_irqs, enum pci_barno bir, u32 offset) 584 583 { 585 584 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 586 585 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); ··· 596 595 reg = ep_func->msix_cap + PCI_MSIX_FLAGS; 597 596 val = dw_pcie_ep_readw_dbi(ep, func_no, reg); 598 597 val &= 
~PCI_MSIX_FLAGS_QSIZE; 599 - val |= interrupts; 598 + val |= nr_irqs - 1; /* encoded as N-1 */ 600 599 dw_pcie_writew_dbi(pci, reg, val); 601 600 602 601 reg = ep_func->msix_cap + PCI_MSIX_TABLE; ··· 604 603 dw_pcie_ep_writel_dbi(ep, func_no, reg, val); 605 604 606 605 reg = ep_func->msix_cap + PCI_MSIX_PBA; 607 - val = (offset + (interrupts * PCI_MSIX_ENTRY_SIZE)) | bir; 606 + val = (offset + (nr_irqs * PCI_MSIX_ENTRY_SIZE)) | bir; 608 607 dw_pcie_ep_writel_dbi(ep, func_no, reg, val); 609 608 610 609 dw_pcie_dbi_ro_wr_dis(pci); ··· 672 671 * @ep: DWC EP device 673 672 * @func_no: Function number of the endpoint 674 673 * 675 - * Return: 0 if success, errono otherwise. 674 + * Return: 0 if success, errno otherwise. 676 675 */ 677 676 int dw_pcie_ep_raise_intx_irq(struct dw_pcie_ep *ep, u8 func_no) 678 677 { ··· 691 690 * @func_no: Function number of the endpoint 692 691 * @interrupt_num: Interrupt number to be raised 693 692 * 694 - * Return: 0 if success, errono otherwise. 693 + * Return: 0 if success, errno otherwise. 695 694 */ 696 695 int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no, 697 696 u8 interrupt_num) ··· 876 875 877 876 if (offset) { 878 877 reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 879 - nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >> 880 - PCI_REBAR_CTRL_NBAR_SHIFT; 878 + nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, reg); 881 879 882 880 /* 883 881 * PCIe r6.0, sec 7.8.6.2 require us to support at least one ··· 897 897 * is why RESBAR_CAP_REG is written here. 
898 898 */ 899 899 val = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 900 - bar = val & PCI_REBAR_CTRL_BAR_IDX; 900 + bar = FIELD_GET(PCI_REBAR_CTRL_BAR_IDX, val); 901 901 if (ep->epf_bar[bar]) 902 902 pci_epc_bar_size_to_rebar_cap(ep->epf_bar[bar]->size, &val); 903 903 else ··· 1013 1013 1014 1014 dw_pcie_ep_init_non_sticky_registers(pci); 1015 1015 1016 - dwc_pcie_debugfs_init(pci); 1016 + dwc_pcie_debugfs_init(pci, DW_PCIE_EP_TYPE); 1017 1017 1018 1018 return 0; 1019 1019
+80 -1
drivers/pci/controller/dwc/pcie-designware-host.c
··· 523 523 524 524 dw_pcie_iatu_detect(pci); 525 525 526 + if (pci->num_lanes < 1) 527 + pci->num_lanes = dw_pcie_link_get_max_link_width(pci); 528 + 529 + ret = of_pci_get_equalization_presets(dev, &pp->presets, pci->num_lanes); 530 + if (ret) 531 + goto err_free_msi; 532 + 526 533 /* 527 534 * Allocate the resource for MSG TLP before programming the iATU 528 535 * outbound window in dw_pcie_setup_rc(). Since the allocation depends ··· 574 567 if (pp->ops->post_init) 575 568 pp->ops->post_init(pp); 576 569 577 - dwc_pcie_debugfs_init(pci); 570 + dwc_pcie_debugfs_init(pci, DW_PCIE_RC_TYPE); 578 571 579 572 return 0; 580 573 ··· 835 828 return 0; 836 829 } 837 830 831 + static void dw_pcie_program_presets(struct dw_pcie_rp *pp, enum pci_bus_speed speed) 832 + { 833 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 834 + u8 lane_eq_offset, lane_reg_size, cap_id; 835 + u8 *presets; 836 + u32 cap; 837 + int i; 838 + 839 + if (speed == PCIE_SPEED_8_0GT) { 840 + presets = (u8 *)pp->presets.eq_presets_8gts; 841 + lane_eq_offset = PCI_SECPCI_LE_CTRL; 842 + cap_id = PCI_EXT_CAP_ID_SECPCI; 843 + /* For data rate of 8 GT/S each lane equalization control is 16bits wide*/ 844 + lane_reg_size = 0x2; 845 + } else if (speed == PCIE_SPEED_16_0GT) { 846 + presets = pp->presets.eq_presets_Ngts[EQ_PRESET_TYPE_16GTS - 1]; 847 + lane_eq_offset = PCI_PL_16GT_LE_CTRL; 848 + cap_id = PCI_EXT_CAP_ID_PL_16GT; 849 + lane_reg_size = 0x1; 850 + } else if (speed == PCIE_SPEED_32_0GT) { 851 + presets = pp->presets.eq_presets_Ngts[EQ_PRESET_TYPE_32GTS - 1]; 852 + lane_eq_offset = PCI_PL_32GT_LE_CTRL; 853 + cap_id = PCI_EXT_CAP_ID_PL_32GT; 854 + lane_reg_size = 0x1; 855 + } else if (speed == PCIE_SPEED_64_0GT) { 856 + presets = pp->presets.eq_presets_Ngts[EQ_PRESET_TYPE_64GTS - 1]; 857 + lane_eq_offset = PCI_PL_64GT_LE_CTRL; 858 + cap_id = PCI_EXT_CAP_ID_PL_64GT; 859 + lane_reg_size = 0x1; 860 + } else { 861 + return; 862 + } 863 + 864 + if (presets[0] == PCI_EQ_RESV) 865 + return; 866 + 867 + cap = 
dw_pcie_find_ext_capability(pci, cap_id); 868 + if (!cap) 869 + return; 870 + 871 + /* 872 + * Write preset values to the registers byte-by-byte for the given 873 + * number of lanes and register size. 874 + */ 875 + for (i = 0; i < pci->num_lanes * lane_reg_size; i++) 876 + dw_pcie_writeb_dbi(pci, cap + lane_eq_offset + i, presets[i]); 877 + } 878 + 879 + static void dw_pcie_config_presets(struct dw_pcie_rp *pp) 880 + { 881 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 882 + enum pci_bus_speed speed = pcie_link_speed[pci->max_link_speed]; 883 + 884 + /* 885 + * Lane equalization settings need to be applied for all data rates the 886 + * controller supports and for all supported lanes. 887 + */ 888 + 889 + if (speed >= PCIE_SPEED_8_0GT) 890 + dw_pcie_program_presets(pp, PCIE_SPEED_8_0GT); 891 + 892 + if (speed >= PCIE_SPEED_16_0GT) 893 + dw_pcie_program_presets(pp, PCIE_SPEED_16_0GT); 894 + 895 + if (speed >= PCIE_SPEED_32_0GT) 896 + dw_pcie_program_presets(pp, PCIE_SPEED_32_0GT); 897 + 898 + if (speed >= PCIE_SPEED_64_0GT) 899 + dw_pcie_program_presets(pp, PCIE_SPEED_64_0GT); 900 + } 901 + 838 902 int dw_pcie_setup_rc(struct dw_pcie_rp *pp) 839 903 { 840 904 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); ··· 959 881 PCI_COMMAND_MASTER | PCI_COMMAND_SERR; 960 882 dw_pcie_writel_dbi(pci, PCI_COMMAND, val); 961 883 884 + dw_pcie_config_presets(pp); 962 885 /* 963 886 * If the platform provides its own child bus config accesses, it means 964 887 * the platform uses its own address translation component rather than
+24 -5
drivers/pci/controller/dwc/pcie-designware.c
··· 54 54 [DW_PCIE_PWR_RST] = "pwr", 55 55 }; 56 56 57 + static const struct dwc_pcie_vsec_id dwc_pcie_ptm_vsec_ids[] = { 58 + { .vendor_id = PCI_VENDOR_ID_QCOM, /* EP */ 59 + .vsec_id = 0x03, .vsec_rev = 0x1 }, 60 + { .vendor_id = PCI_VENDOR_ID_QCOM, /* RC */ 61 + .vsec_id = 0x04, .vsec_rev = 0x1 }, 62 + { } 63 + }; 64 + 57 65 static int dw_pcie_get_clocks(struct dw_pcie *pci) 58 66 { 59 67 int i, ret; ··· 337 329 return dw_pcie_find_vsec_capability(pci, dwc_pcie_rasdes_vsec_ids); 338 330 } 339 331 EXPORT_SYMBOL_GPL(dw_pcie_find_rasdes_capability); 332 + 333 + u16 dw_pcie_find_ptm_capability(struct dw_pcie *pci) 334 + { 335 + return dw_pcie_find_vsec_capability(pci, dwc_pcie_ptm_vsec_ids); 336 + } 337 + EXPORT_SYMBOL_GPL(dw_pcie_find_ptm_capability); 340 338 341 339 int dw_pcie_read(void __iomem *addr, int size, u32 *val) 342 340 { ··· 725 711 } 726 712 EXPORT_SYMBOL_GPL(dw_pcie_wait_for_link); 727 713 728 - int dw_pcie_link_up(struct dw_pcie *pci) 714 + bool dw_pcie_link_up(struct dw_pcie *pci) 729 715 { 730 716 u32 val; 731 717 ··· 795 781 796 782 } 797 783 784 + int dw_pcie_link_get_max_link_width(struct dw_pcie *pci) 785 + { 786 + u8 cap = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 787 + u32 lnkcap = dw_pcie_readl_dbi(pci, cap + PCI_EXP_LNKCAP); 788 + 789 + return FIELD_GET(PCI_EXP_LNKCAP_MLW, lnkcap); 790 + } 791 + 798 792 static void dw_pcie_link_set_max_link_width(struct dw_pcie *pci, u32 num_lanes) 799 793 { 800 794 u32 lnkcap, lwsc, plc; ··· 819 797 /* Set link width speed control register */ 820 798 lwsc = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL); 821 799 lwsc &= ~PORT_LOGIC_LINK_WIDTH_MASK; 800 + lwsc |= PORT_LOGIC_LINK_WIDTH_1_LANES; 822 801 switch (num_lanes) { 823 802 case 1: 824 803 plc |= PORT_LINK_MODE_1_LANES; 825 - lwsc |= PORT_LOGIC_LINK_WIDTH_1_LANES; 826 804 break; 827 805 case 2: 828 806 plc |= PORT_LINK_MODE_2_LANES; 829 - lwsc |= PORT_LOGIC_LINK_WIDTH_2_LANES; 830 807 break; 831 808 case 4: 832 809 plc |= 
PORT_LINK_MODE_4_LANES; 833 - lwsc |= PORT_LOGIC_LINK_WIDTH_4_LANES; 834 810 break; 835 811 case 8: 836 812 plc |= PORT_LINK_MODE_8_LANES; 837 - lwsc |= PORT_LOGIC_LINK_WIDTH_8_LANES; 838 813 break; 839 814 default: 840 815 dev_err(pci->dev, "num-lanes %u: invalid value\n", num_lanes);
+28 -4
drivers/pci/controller/dwc/pcie-designware.h
··· 25 25 #include <linux/pci-epc.h> 26 26 #include <linux/pci-epf.h> 27 27 28 + #include "../../pci.h" 29 + 28 30 /* DWC PCIe IP-core versions (native support since v4.70a) */ 29 31 #define DW_PCIE_VER_365A 0x3336352a 30 32 #define DW_PCIE_VER_460A 0x3436302a ··· 262 260 263 261 #define PCIE_RAS_DES_EVENT_COUNTER_DATA 0xc 264 262 263 + /* PTM register definitions */ 264 + #define PTM_RES_REQ_CTRL 0x8 265 + #define PTM_RES_CCONTEXT_VALID BIT(0) 266 + #define PTM_REQ_AUTO_UPDATE_ENABLED BIT(0) 267 + #define PTM_REQ_START_UPDATE BIT(1) 268 + 269 + #define PTM_LOCAL_LSB 0x10 270 + #define PTM_LOCAL_MSB 0x14 271 + #define PTM_T1_T2_LSB 0x18 272 + #define PTM_T1_T2_MSB 0x1c 273 + #define PTM_T3_T4_LSB 0x28 274 + #define PTM_T3_T4_MSB 0x2c 275 + #define PTM_MASTER_LSB 0x38 276 + #define PTM_MASTER_MSB 0x3c 277 + 265 278 /* 266 279 * The default address offset between dbi_base and atu_base. Root controller 267 280 * drivers are not required to initialize atu_base if the offset matches this ··· 429 412 int msg_atu_index; 430 413 struct resource *msg_res; 431 414 bool use_linkup_irq; 415 + struct pci_eq_presets presets; 432 416 }; 433 417 434 418 struct dw_pcie_ep_ops { ··· 480 462 size_t size, u32 val); 481 463 void (*write_dbi2)(struct dw_pcie *pcie, void __iomem *base, u32 reg, 482 464 size_t size, u32 val); 483 - int (*link_up)(struct dw_pcie *pcie); 465 + bool (*link_up)(struct dw_pcie *pcie); 484 466 enum dw_pcie_ltssm (*get_ltssm)(struct dw_pcie *pcie); 485 467 int (*start_link)(struct dw_pcie *pcie); 486 468 void (*stop_link)(struct dw_pcie *pcie); ··· 521 503 struct gpio_desc *pe_rst; 522 504 bool suspended; 523 505 struct debugfs_info *debugfs; 506 + enum dw_pcie_device_mode mode; 507 + u16 ptm_vsec_offset; 508 + struct pci_ptm_debugfs *ptm_debugfs; 524 509 525 510 /* 526 511 * If iATU input addresses are offset from CPU physical addresses, ··· 551 530 u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap); 552 531 u16 dw_pcie_find_ext_capability(struct dw_pcie 
*pci, u8 cap); 553 532 u16 dw_pcie_find_rasdes_capability(struct dw_pcie *pci); 533 + u16 dw_pcie_find_ptm_capability(struct dw_pcie *pci); 554 534 555 535 int dw_pcie_read(void __iomem *addr, int size, u32 *val); 556 536 int dw_pcie_write(void __iomem *addr, int size, u32 val); ··· 559 537 u32 dw_pcie_read_dbi(struct dw_pcie *pci, u32 reg, size_t size); 560 538 void dw_pcie_write_dbi(struct dw_pcie *pci, u32 reg, size_t size, u32 val); 561 539 void dw_pcie_write_dbi2(struct dw_pcie *pci, u32 reg, size_t size, u32 val); 562 - int dw_pcie_link_up(struct dw_pcie *pci); 540 + bool dw_pcie_link_up(struct dw_pcie *pci); 563 541 void dw_pcie_upconfig_setup(struct dw_pcie *pci); 564 542 int dw_pcie_wait_for_link(struct dw_pcie *pci); 543 + int dw_pcie_link_get_max_link_width(struct dw_pcie *pci); 565 544 int dw_pcie_prog_outbound_atu(struct dw_pcie *pci, 566 545 const struct dw_pcie_ob_atu_cfg *atu); 567 546 int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type, ··· 894 871 #endif 895 872 896 873 #ifdef CONFIG_PCIE_DW_DEBUGFS 897 - void dwc_pcie_debugfs_init(struct dw_pcie *pci); 874 + void dwc_pcie_debugfs_init(struct dw_pcie *pci, enum dw_pcie_device_mode mode); 898 875 void dwc_pcie_debugfs_deinit(struct dw_pcie *pci); 899 876 #else 900 - static inline void dwc_pcie_debugfs_init(struct dw_pcie *pci) 877 + static inline void dwc_pcie_debugfs_init(struct dw_pcie *pci, 878 + enum dw_pcie_device_mode mode) 901 879 { 902 880 } 903 881 static inline void dwc_pcie_debugfs_deinit(struct dw_pcie *pci)
+63 -39
drivers/pci/controller/dwc/pcie-dw-rockchip.c
··· 8 8 * Author: Simon Xue <xxm@rock-chips.com> 9 9 */ 10 10 11 + #include <linux/bitfield.h> 11 12 #include <linux/clk.h> 12 13 #include <linux/gpio/consumer.h> 13 14 #include <linux/irqchip/chained_irq.h> ··· 22 21 #include <linux/regmap.h> 23 22 #include <linux/reset.h> 24 23 24 + #include "../../pci.h" 25 25 #include "pcie-designware.h" 26 26 27 27 /* ··· 35 33 36 34 #define to_rockchip_pcie(x) dev_get_drvdata((x)->dev) 37 35 38 - #define PCIE_CLIENT_RC_MODE HIWORD_UPDATE_BIT(0x40) 39 - #define PCIE_CLIENT_EP_MODE HIWORD_UPDATE(0xf0, 0x0) 40 - #define PCIE_CLIENT_ENABLE_LTSSM HIWORD_UPDATE_BIT(0xc) 41 - #define PCIE_CLIENT_DISABLE_LTSSM HIWORD_UPDATE(0x0c, 0x8) 42 - #define PCIE_CLIENT_INTR_STATUS_MISC 0x10 43 - #define PCIE_CLIENT_INTR_MASK_MISC 0x24 44 - #define PCIE_SMLH_LINKUP BIT(16) 45 - #define PCIE_RDLH_LINKUP BIT(17) 46 - #define PCIE_LINKUP (PCIE_SMLH_LINKUP | PCIE_RDLH_LINKUP) 47 - #define PCIE_RDLH_LINK_UP_CHGED BIT(1) 48 - #define PCIE_LINK_REQ_RST_NOT_INT BIT(2) 49 - #define PCIE_L0S_ENTRY 0x11 50 - #define PCIE_CLIENT_GENERAL_CONTROL 0x0 36 + /* General Control Register */ 37 + #define PCIE_CLIENT_GENERAL_CON 0x0 38 + #define PCIE_CLIENT_RC_MODE HIWORD_UPDATE_BIT(0x40) 39 + #define PCIE_CLIENT_EP_MODE HIWORD_UPDATE(0xf0, 0x0) 40 + #define PCIE_CLIENT_ENABLE_LTSSM HIWORD_UPDATE_BIT(0xc) 41 + #define PCIE_CLIENT_DISABLE_LTSSM HIWORD_UPDATE(0x0c, 0x8) 42 + 43 + /* Interrupt Status Register Related to Legacy Interrupt */ 51 44 #define PCIE_CLIENT_INTR_STATUS_LEGACY 0x8 45 + 46 + /* Interrupt Status Register Related to Miscellaneous Operation */ 47 + #define PCIE_CLIENT_INTR_STATUS_MISC 0x10 48 + #define PCIE_RDLH_LINK_UP_CHGED BIT(1) 49 + #define PCIE_LINK_REQ_RST_NOT_INT BIT(2) 50 + 51 + /* Interrupt Mask Register Related to Legacy Interrupt */ 52 52 #define PCIE_CLIENT_INTR_MASK_LEGACY 0x1c 53 - #define PCIE_CLIENT_GENERAL_DEBUG 0x104 53 + 54 + /* Interrupt Mask Register Related to Miscellaneous Operation */ 55 + #define PCIE_CLIENT_INTR_MASK_MISC 
0x24 56 + 57 + /* Hot Reset Control Register */ 54 58 #define PCIE_CLIENT_HOT_RESET_CTRL 0x180 59 + #define PCIE_LTSSM_ENABLE_ENHANCE BIT(4) 60 + 61 + /* LTSSM Status Register */ 55 62 #define PCIE_CLIENT_LTSSM_STATUS 0x300 56 - #define PCIE_LTSSM_ENABLE_ENHANCE BIT(4) 57 - #define PCIE_LTSSM_STATUS_MASK GENMASK(5, 0) 63 + #define PCIE_LINKUP 0x3 64 + #define PCIE_LINKUP_MASK GENMASK(17, 16) 65 + #define PCIE_LTSSM_STATUS_MASK GENMASK(5, 0) 58 66 59 67 struct rockchip_pcie { 60 68 struct dw_pcie pci; ··· 175 163 static void rockchip_pcie_enable_ltssm(struct rockchip_pcie *rockchip) 176 164 { 177 165 rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_ENABLE_LTSSM, 178 - PCIE_CLIENT_GENERAL_CONTROL); 166 + PCIE_CLIENT_GENERAL_CON); 179 167 } 180 168 181 169 static void rockchip_pcie_disable_ltssm(struct rockchip_pcie *rockchip) 182 170 { 183 171 rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_DISABLE_LTSSM, 184 - PCIE_CLIENT_GENERAL_CONTROL); 172 + PCIE_CLIENT_GENERAL_CON); 185 173 } 186 174 187 - static int rockchip_pcie_link_up(struct dw_pcie *pci) 175 + static bool rockchip_pcie_link_up(struct dw_pcie *pci) 188 176 { 189 177 struct rockchip_pcie *rockchip = to_rockchip_pcie(pci); 190 178 u32 val = rockchip_pcie_get_ltssm(rockchip); 191 179 192 - if ((val & PCIE_LINKUP) == PCIE_LINKUP && 193 - (val & PCIE_LTSSM_STATUS_MASK) == PCIE_L0S_ENTRY) 194 - return 1; 180 + return FIELD_GET(PCIE_LINKUP_MASK, val) == PCIE_LINKUP; 181 + } 195 182 196 - return 0; 183 + static void rockchip_pcie_enable_l0s(struct dw_pcie *pci) 184 + { 185 + u32 cap, lnkcap; 186 + 187 + /* Enable L0S capability for all SoCs */ 188 + cap = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 189 + if (cap) { 190 + lnkcap = dw_pcie_readl_dbi(pci, cap + PCI_EXP_LNKCAP); 191 + lnkcap |= PCI_EXP_LNKCAP_ASPM_L0S; 192 + dw_pcie_dbi_ro_wr_en(pci); 193 + dw_pcie_writel_dbi(pci, cap + PCI_EXP_LNKCAP, lnkcap); 194 + dw_pcie_dbi_ro_wr_dis(pci); 195 + } 197 196 } 198 197 199 198 static int rockchip_pcie_start_link(struct 
dw_pcie *pci) ··· 225 202 * We need more extra time as before, rather than setting just 226 203 * 100us as we don't know how long should the device need to reset. 227 204 */ 228 - msleep(100); 205 + msleep(PCIE_T_PVPERL_MS); 229 206 gpiod_set_value_cansleep(rockchip->rst_gpio, 1); 230 207 231 208 return 0; ··· 255 232 256 233 irq_set_chained_handler_and_data(irq, rockchip_pcie_intx_handler, 257 234 rockchip); 235 + 236 + rockchip_pcie_enable_l0s(pci); 258 237 259 238 return 0; 260 239 } ··· 288 263 dev_err(dev, "failed to hide ATS capability\n"); 289 264 } 290 265 291 - static void rockchip_pcie_ep_pre_init(struct dw_pcie_ep *ep) 292 - { 293 - rockchip_pcie_ep_hide_broken_ats_cap_rk3588(ep); 294 - } 295 - 296 266 static void rockchip_pcie_ep_init(struct dw_pcie_ep *ep) 297 267 { 298 268 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 299 269 enum pci_barno bar; 270 + 271 + rockchip_pcie_enable_l0s(pci); 272 + rockchip_pcie_ep_hide_broken_ats_cap_rk3588(ep); 300 273 301 274 for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 302 275 dw_pcie_ep_reset_bar(pci, bar); ··· 365 342 366 343 static const struct dw_pcie_ep_ops rockchip_pcie_ep_ops = { 367 344 .init = rockchip_pcie_ep_init, 368 - .pre_init = rockchip_pcie_ep_pre_init, 369 345 .raise_irq = rockchip_pcie_raise_irq, 370 346 .get_features = rockchip_pcie_get_features, 371 347 }; ··· 432 410 433 411 static void rockchip_pcie_phy_deinit(struct rockchip_pcie *rockchip) 434 412 { 435 - phy_exit(rockchip->phy); 436 413 phy_power_off(rockchip->phy); 414 + phy_exit(rockchip->phy); 437 415 } 438 416 439 417 static const struct dw_pcie_ops dw_pcie_ops = { ··· 448 426 struct dw_pcie *pci = &rockchip->pci; 449 427 struct dw_pcie_rp *pp = &pci->pp; 450 428 struct device *dev = pci->dev; 451 - u32 reg, val; 429 + u32 reg; 452 430 453 431 reg = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_INTR_STATUS_MISC); 454 432 rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC); ··· 457 435 dev_dbg(dev, "LTSSM_STATUS: %#x\n", 
rockchip_pcie_get_ltssm(rockchip)); 458 436 459 437 if (reg & PCIE_RDLH_LINK_UP_CHGED) { 460 - val = rockchip_pcie_get_ltssm(rockchip); 461 - if ((val & PCIE_LINKUP) == PCIE_LINKUP) { 438 + if (rockchip_pcie_link_up(pci)) { 462 439 dev_dbg(dev, "Received Link up event. Starting enumeration!\n"); 463 440 /* Rescan the bus to enumerate endpoint devices */ 464 441 pci_lock_rescan_remove(); ··· 474 453 struct rockchip_pcie *rockchip = arg; 475 454 struct dw_pcie *pci = &rockchip->pci; 476 455 struct device *dev = pci->dev; 477 - u32 reg, val; 456 + u32 reg; 478 457 479 458 reg = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_INTR_STATUS_MISC); 480 459 rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC); ··· 488 467 } 489 468 490 469 if (reg & PCIE_RDLH_LINK_UP_CHGED) { 491 - val = rockchip_pcie_get_ltssm(rockchip); 492 - if ((val & PCIE_LINKUP) == PCIE_LINKUP) { 470 + if (rockchip_pcie_link_up(pci)) { 493 471 dev_dbg(dev, "link up\n"); 494 472 dw_pcie_ep_linkup(&pci->ep); 495 473 } ··· 525 505 rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL); 526 506 527 507 rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_RC_MODE, 528 - PCIE_CLIENT_GENERAL_CONTROL); 508 + PCIE_CLIENT_GENERAL_CON); 529 509 530 510 pp = &rockchip->pci.pp; 531 511 pp->ops = &rockchip_pcie_host_ops; ··· 571 551 rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL); 572 552 573 553 rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_EP_MODE, 574 - PCIE_CLIENT_GENERAL_CONTROL); 554 + PCIE_CLIENT_GENERAL_CON); 575 555 576 556 rockchip->pci.ep.ops = &rockchip_pcie_ep_ops; 577 557 rockchip->pci.ep.page_size = SZ_64K; ··· 620 600 rockchip->pci.dev = dev; 621 601 rockchip->pci.ops = &dw_pcie_ops; 622 602 rockchip->data = data; 603 + 604 + /* Default N_FTS value (210) is broken, override it to 255 */ 605 + rockchip->pci.n_fts[0] = 255; /* Gen1 */ 606 + rockchip->pci.n_fts[1] = 255; /* Gen2+ */ 623 607 624 608 ret = rockchip_pcie_resource_get(pdev, rockchip); 625 609 if 
(ret)
+1
drivers/pci/controller/dwc/pcie-hisi.c
··· 15 15 #include <linux/pci-acpi.h> 16 16 #include <linux/pci-ecam.h> 17 17 #include "../../pci.h" 18 + #include "../pci-host-common.h" 18 19 19 20 #if defined(CONFIG_PCI_HISI) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)) 20 21
+3 -6
drivers/pci/controller/dwc/pcie-histb.c
··· 151 151 .write = histb_pcie_wr_own_conf, 152 152 }; 153 153 154 - static int histb_pcie_link_up(struct dw_pcie *pci) 154 + static bool histb_pcie_link_up(struct dw_pcie *pci) 155 155 { 156 156 struct histb_pcie *hipcie = to_histb_pcie(pci); 157 157 u32 regval; ··· 160 160 regval = histb_pcie_readl(hipcie, PCIE_SYS_STAT0); 161 161 status = histb_pcie_readl(hipcie, PCIE_SYS_STAT4); 162 162 status &= PCIE_LTSSM_STATE_MASK; 163 - if ((regval & PCIE_XMLH_LINK_UP) && (regval & PCIE_RDLH_LINK_UP) && 164 - (status == PCIE_LTSSM_STATE_ACTIVE)) 165 - return 1; 166 - 167 - return 0; 163 + return ((regval & PCIE_XMLH_LINK_UP) && (regval & PCIE_RDLH_LINK_UP) && 164 + (status == PCIE_LTSSM_STATE_ACTIVE)); 168 165 } 169 166 170 167 static int histb_pcie_start_link(struct dw_pcie *pci)
+1 -1
drivers/pci/controller/dwc/pcie-keembay.c
··· 101 101 writel(val, pcie->apb_base + PCIE_REGS_PCIE_APP_CNTRL); 102 102 } 103 103 104 - static int keembay_pcie_link_up(struct dw_pcie *pci) 104 + static bool keembay_pcie_link_up(struct dw_pcie *pci) 105 105 { 106 106 struct keembay_pcie *pcie = dev_get_drvdata(pci->dev); 107 107 u32 val;
+2 -5
drivers/pci/controller/dwc/pcie-kirin.c
··· 586 586 kirin_pcie_sideband_dbi_w_mode(kirin_pcie, false); 587 587 } 588 588 589 - static int kirin_pcie_link_up(struct dw_pcie *pci) 589 + static bool kirin_pcie_link_up(struct dw_pcie *pci) 590 590 { 591 591 struct kirin_pcie *kirin_pcie = to_kirin_pcie(pci); 592 592 u32 val; 593 593 594 594 regmap_read(kirin_pcie->apb, PCIE_APB_PHY_STATUS0, &val); 595 - if ((val & PCIE_LINKUP_ENABLE) == PCIE_LINKUP_ENABLE) 596 - return 1; 597 - 598 - return 0; 595 + return (val & PCIE_LINKUP_ENABLE) == PCIE_LINKUP_ENABLE; 599 596 } 600 597 601 598 static int kirin_pcie_start_link(struct dw_pcie *pci)
+9 -1
drivers/pci/controller/dwc/pcie-qcom-ep.c
··· 60 60 #define PARF_DEVICE_TYPE 0x1000 61 61 #define PARF_BDF_TO_SID_CFG 0x2c00 62 62 #define PARF_INT_ALL_5_MASK 0x2dcc 63 + #define PARF_INT_ALL_3_MASK 0x2e18 63 64 64 65 /* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */ 65 66 #define PARF_INT_ALL_LINK_DOWN BIT(1) ··· 132 131 133 132 /* PARF_INT_ALL_5_MASK fields */ 134 133 #define PARF_INT_ALL_5_MHI_RAM_DATA_PARITY_ERR BIT(0) 134 + 135 + /* PARF_INT_ALL_3_MASK fields */ 136 + #define PARF_INT_ALL_3_PTM_UPDATING BIT(4) 135 137 136 138 /* ELBI registers */ 137 139 #define ELBI_SYS_STTS 0x08 ··· 265 261 } 266 262 } 267 263 268 - static int qcom_pcie_dw_link_up(struct dw_pcie *pci) 264 + static bool qcom_pcie_dw_link_up(struct dw_pcie *pci) 269 265 { 270 266 struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci); 271 267 u32 reg; ··· 500 496 val &= ~PARF_INT_ALL_5_MHI_RAM_DATA_PARITY_ERR; 501 497 writel_relaxed(val, pcie_ep->parf + PARF_INT_ALL_5_MASK); 502 498 } 499 + 500 + val = readl_relaxed(pcie_ep->parf + PARF_INT_ALL_3_MASK); 501 + val &= ~PARF_INT_ALL_3_PTM_UPDATING; 502 + writel_relaxed(val, pcie_ep->parf + PARF_INT_ALL_3_MASK); 503 503 504 504 ret = dw_pcie_ep_init_registers(&pcie_ep->pci.ep); 505 505 if (ret) {
+4 -3
drivers/pci/controller/dwc/pcie-qcom.c
··· 289 289 static void qcom_ep_reset_deassert(struct qcom_pcie *pcie) 290 290 { 291 291 /* Ensure that PERST has been asserted for at least 100 ms */ 292 - msleep(100); 292 + msleep(PCIE_T_PVPERL_MS); 293 293 gpiod_set_value_cansleep(pcie->reset, 0); 294 294 usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500); 295 295 } ··· 1221 1221 return 0; 1222 1222 } 1223 1223 1224 - static int qcom_pcie_link_up(struct dw_pcie *pci) 1224 + static bool qcom_pcie_link_up(struct dw_pcie *pci) 1225 1225 { 1226 1226 u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 1227 1227 u16 val = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA); 1228 1228 1229 - return !!(val & PCI_EXP_LNKSTA_DLLLA); 1229 + return val & PCI_EXP_LNKSTA_DLLLA; 1230 1230 } 1231 1231 1232 1232 static int qcom_pcie_host_init(struct dw_pcie_rp *pp) ··· 1840 1840 { .compatible = "qcom,pcie-apq8064", .data = &cfg_2_1_0 }, 1841 1841 { .compatible = "qcom,pcie-apq8084", .data = &cfg_1_0_0 }, 1842 1842 { .compatible = "qcom,pcie-ipq4019", .data = &cfg_2_4_0 }, 1843 + { .compatible = "qcom,pcie-ipq5018", .data = &cfg_2_9_0 }, 1843 1844 { .compatible = "qcom,pcie-ipq6018", .data = &cfg_2_9_0 }, 1844 1845 { .compatible = "qcom,pcie-ipq8064", .data = &cfg_2_1_0 }, 1845 1846 { .compatible = "qcom,pcie-ipq8064-v2", .data = &cfg_2_1_0 },
+2 -1
drivers/pci/controller/dwc/pcie-rcar-gen4.c
··· 87 87 #define to_rcar_gen4_pcie(_dw) container_of(_dw, struct rcar_gen4_pcie, dw) 88 88 89 89 /* Common */ 90 - static int rcar_gen4_pcie_link_up(struct dw_pcie *dw) 90 + static bool rcar_gen4_pcie_link_up(struct dw_pcie *dw) 91 91 { 92 92 struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw); 93 93 u32 val, mask; ··· 403 403 .msix_capable = false, 404 404 .bar[BAR_1] = { .type = BAR_RESERVED, }, 405 405 .bar[BAR_3] = { .type = BAR_RESERVED, }, 406 + .bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = 256 }, 406 407 .bar[BAR_5] = { .type = BAR_RESERVED, }, 407 408 .align = SZ_1M, 408 409 };
+2 -5
drivers/pci/controller/dwc/pcie-spear13xx.c
··· 110 110 MSI_CTRL_INT, &app_reg->int_mask); 111 111 } 112 112 113 - static int spear13xx_pcie_link_up(struct dw_pcie *pci) 113 + static bool spear13xx_pcie_link_up(struct dw_pcie *pci) 114 114 { 115 115 struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pci); 116 116 struct pcie_app_reg __iomem *app_reg = spear13xx_pcie->app_base; 117 117 118 - if (readl(&app_reg->app_status_1) & XMLH_LINK_UP) 119 - return 1; 120 - 121 - return 0; 118 + return readl(&app_reg->app_status_1) & XMLH_LINK_UP; 122 119 } 123 120 124 121 static int spear13xx_pcie_host_init(struct dw_pcie_rp *pp)
+12 -11
drivers/pci/controller/dwc/pcie-tegra194.c
··· 713 713 714 714 static void init_debugfs(struct tegra_pcie_dw *pcie) 715 715 { 716 - debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt", pcie->debugfs, 716 + struct device *dev = pcie->dev; 717 + char *name; 718 + 719 + name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node); 720 + if (!name) 721 + return; 722 + 723 + pcie->debugfs = debugfs_create_dir(name, NULL); 724 + 725 + debugfs_create_devm_seqfile(dev, "aspm_state_cnt", pcie->debugfs, 717 726 aspm_state_cnt); 718 727 } 719 728 #else ··· 1036 1027 return 0; 1037 1028 } 1038 1029 1039 - static int tegra_pcie_dw_link_up(struct dw_pcie *pci) 1030 + static bool tegra_pcie_dw_link_up(struct dw_pcie *pci) 1040 1031 { 1041 1032 struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); 1042 1033 u32 val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA); 1043 1034 1044 - return !!(val & PCI_EXP_LNKSTA_DLLLA); 1035 + return val & PCI_EXP_LNKSTA_DLLLA; 1045 1036 } 1046 1037 1047 1038 static void tegra_pcie_dw_stop_link(struct dw_pcie *pci) ··· 1643 1634 static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie) 1644 1635 { 1645 1636 struct device *dev = pcie->dev; 1646 - char *name; 1647 1637 int ret; 1648 1638 1649 1639 pm_runtime_enable(dev); ··· 1672 1664 goto fail_host_init; 1673 1665 } 1674 1666 1675 - name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node); 1676 - if (!name) { 1677 - ret = -ENOMEM; 1678 - goto fail_host_init; 1679 - } 1680 - 1681 - pcie->debugfs = debugfs_create_dir(name, NULL); 1682 1667 init_debugfs(pcie); 1683 1668 1684 1669 return ret;
+1 -1
drivers/pci/controller/dwc/pcie-uniphier.c
··· 135 135 return 0; 136 136 } 137 137 138 - static int uniphier_pcie_link_up(struct dw_pcie *pci) 138 + static bool uniphier_pcie_link_up(struct dw_pcie *pci) 139 139 { 140 140 struct uniphier_pcie *pcie = to_uniphier_pcie(pci); 141 141 u32 val, mask;
+2 -2
drivers/pci/controller/dwc/pcie-visconti.c
··· 121 121 return readl_relaxed(pcie->mpu_base + reg); 122 122 } 123 123 124 - static int visconti_pcie_link_up(struct dw_pcie *pci) 124 + static bool visconti_pcie_link_up(struct dw_pcie *pci) 125 125 { 126 126 struct visconti_pcie *pcie = dev_get_drvdata(pci->dev); 127 127 void __iomem *addr = pcie->ulreg_base; 128 128 u32 val = readl_relaxed(addr + PCIE_UL_REG_V_PHY_ST_02); 129 129 130 - return !!(val & PCIE_UL_S_L0); 130 + return val & PCIE_UL_S_L0; 131 131 } 132 132 133 133 static int visconti_pcie_start_link(struct dw_pcie *pci)
+3 -9
drivers/pci/controller/mobiveil/pcie-layerscape-gen4.c
··· 53 53 iowrite32(val, pcie->pci.csr_axi_slave_base + PCIE_PF_OFF + off); 54 54 } 55 55 56 - static int ls_g4_pcie_link_up(struct mobiveil_pcie *pci) 56 + static bool ls_g4_pcie_link_up(struct mobiveil_pcie *pci) 57 57 { 58 58 struct ls_g4_pcie *pcie = to_ls_g4_pcie(pci); 59 59 u32 state; 60 60 61 61 state = ls_g4_pcie_pf_readl(pcie, PCIE_PF_DBG); 62 - state = state & PF_DBG_LTSSM_MASK; 63 - 64 - if (state == PF_DBG_LTSSM_L0) 65 - return 1; 66 - 67 - return 0; 62 + return (state & PF_DBG_LTSSM_MASK) == PF_DBG_LTSSM_L0; 68 63 } 69 64 70 65 static void ls_g4_pcie_disable_interrupt(struct ls_g4_pcie *pcie) ··· 169 174 170 175 static void ls_g4_pcie_reset(struct work_struct *work) 171 176 { 172 - struct delayed_work *dwork = container_of(work, struct delayed_work, 173 - work); 177 + struct delayed_work *dwork = to_delayed_work(work); 174 178 struct ls_g4_pcie *pcie = container_of(dwork, struct ls_g4_pcie, dwork); 175 179 struct mobiveil_pcie *mv_pci = &pcie->pci; 176 180 u16 ctrl;
+1 -1
drivers/pci/controller/mobiveil/pcie-mobiveil.h
··· 160 160 }; 161 161 162 162 struct mobiveil_pab_ops { 163 - int (*link_up)(struct mobiveil_pcie *pcie); 163 + bool (*link_up)(struct mobiveil_pcie *pcie); 164 164 }; 165 165 166 166 struct mobiveil_pcie {
+20 -10
drivers/pci/controller/pci-host-common.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Generic PCI host driver common code 3 + * Common library for PCI host controller drivers 4 4 * 5 5 * Copyright (C) 2014 ARM Limited 6 6 * ··· 14 14 #include <linux/of_pci.h> 15 15 #include <linux/pci-ecam.h> 16 16 #include <linux/platform_device.h> 17 + 18 + #include "pci-host-common.h" 17 19 18 20 static void gen_pci_unmap_cfg(void *ptr) 19 21 { ··· 51 49 return cfg; 52 50 } 53 51 54 - int pci_host_common_probe(struct platform_device *pdev) 52 + int pci_host_common_init(struct platform_device *pdev, 53 + const struct pci_ecam_ops *ops) 55 54 { 56 55 struct device *dev = &pdev->dev; 57 56 struct pci_host_bridge *bridge; 58 57 struct pci_config_window *cfg; 59 - const struct pci_ecam_ops *ops; 60 - 61 - ops = of_device_get_match_data(&pdev->dev); 62 - if (!ops) 63 - return -ENODEV; 64 58 65 59 bridge = devm_pci_alloc_host_bridge(dev, 0); 66 60 if (!bridge) 67 61 return -ENOMEM; 68 - 69 - platform_set_drvdata(pdev, bridge); 70 62 71 63 of_pci_check_probe_only(); 72 64 ··· 69 73 if (IS_ERR(cfg)) 70 74 return PTR_ERR(cfg); 71 75 76 + platform_set_drvdata(pdev, bridge); 77 + 72 78 bridge->sysdata = cfg; 73 79 bridge->ops = (struct pci_ops *)&ops->pci_ops; 74 80 bridge->enable_device = ops->enable_device; ··· 78 80 bridge->msi_domain = true; 79 81 80 82 return pci_host_probe(bridge); 83 + } 84 + EXPORT_SYMBOL_GPL(pci_host_common_init); 85 + 86 + int pci_host_common_probe(struct platform_device *pdev) 87 + { 88 + const struct pci_ecam_ops *ops; 89 + 90 + ops = of_device_get_match_data(&pdev->dev); 91 + if (!ops) 92 + return -ENODEV; 93 + 94 + return pci_host_common_init(pdev, ops); 81 95 } 82 96 EXPORT_SYMBOL_GPL(pci_host_common_probe); 83 97 ··· 104 94 } 105 95 EXPORT_SYMBOL_GPL(pci_host_common_remove); 106 96 107 - MODULE_DESCRIPTION("Generic PCI host common driver"); 97 + MODULE_DESCRIPTION("Common library for PCI host controller drivers"); 108 98 MODULE_LICENSE("GPL v2");
+20
drivers/pci/controller/pci-host-common.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Common library for PCI host controller drivers 4 + * 5 + * Copyright (C) 2014 ARM Limited 6 + * 7 + * Author: Will Deacon <will.deacon@arm.com> 8 + */ 9 + 10 + #ifndef _PCI_HOST_COMMON_H 11 + #define _PCI_HOST_COMMON_H 12 + 13 + struct pci_ecam_ops; 14 + 15 + int pci_host_common_probe(struct platform_device *pdev); 16 + int pci_host_common_init(struct platform_device *pdev, 17 + const struct pci_ecam_ops *ops); 18 + void pci_host_common_remove(struct platform_device *pdev); 19 + 20 + #endif
+2
drivers/pci/controller/pci-host-generic.c
··· 14 14 #include <linux/pci-ecam.h> 15 15 #include <linux/platform_device.h> 16 16 17 + #include "pci-host-common.h" 18 + 17 19 static const struct pci_ecam_ops gen_pci_cfg_cam_bus_ops = { 18 20 .bus_shift = 16, 19 21 .pci_ops = {
+9 -17
drivers/pci/controller/pci-mvebu.c
··· 1179 1179 unsigned int *tgt, 1180 1180 unsigned int *attr) 1181 1181 { 1182 - const int na = 3, ns = 2; 1183 - const __be32 *range; 1184 - int rlen, nranges, rangesz, pna, i; 1182 + struct of_range range; 1183 + struct of_range_parser parser; 1185 1184 1186 1185 *tgt = -1; 1187 1186 *attr = -1; 1188 1187 1189 - range = of_get_property(np, "ranges", &rlen); 1190 - if (!range) 1188 + if (of_pci_range_parser_init(&parser, np)) 1191 1189 return -EINVAL; 1192 1190 1193 - pna = of_n_addr_cells(np); 1194 - rangesz = pna + na + ns; 1195 - nranges = rlen / sizeof(__be32) / rangesz; 1196 - 1197 - for (i = 0; i < nranges; i++, range += rangesz) { 1198 - u32 flags = of_read_number(range, 1); 1199 - u32 slot = of_read_number(range + 1, 1); 1200 - u64 cpuaddr = of_read_number(range + na, pna); 1191 + for_each_of_range(&parser, &range) { 1201 1192 unsigned long rtype; 1193 + u32 slot = upper_32_bits(range.bus_addr); 1202 1194 1203 - if (DT_FLAGS_TO_TYPE(flags) == DT_TYPE_IO) 1195 + if (DT_FLAGS_TO_TYPE(range.flags) == DT_TYPE_IO) 1204 1196 rtype = IORESOURCE_IO; 1205 - else if (DT_FLAGS_TO_TYPE(flags) == DT_TYPE_MEM32) 1197 + else if (DT_FLAGS_TO_TYPE(range.flags) == DT_TYPE_MEM32) 1206 1198 rtype = IORESOURCE_MEM; 1207 1199 else 1208 1200 continue; 1209 1201 1210 1202 if (slot == PCI_SLOT(devfn) && type == rtype) { 1211 - *tgt = DT_CPUADDR_TO_TARGET(cpuaddr); 1212 - *attr = DT_CPUADDR_TO_ATTR(cpuaddr); 1203 + *tgt = DT_CPUADDR_TO_TARGET(range.cpu_addr); 1204 + *attr = DT_CPUADDR_TO_ATTR(range.cpu_addr); 1213 1205 return 0; 1214 1206 } 1215 1207 }
+2
drivers/pci/controller/pci-thunder-ecam.c
··· 11 11 #include <linux/pci-ecam.h> 12 12 #include <linux/platform_device.h> 13 13 14 + #include "pci-host-common.h" 15 + 14 16 #if defined(CONFIG_PCI_HOST_THUNDER_ECAM) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)) 15 17 16 18 static void set_val(u32 v, int where, int size, u32 *val)
+1
drivers/pci/controller/pci-thunder-pem.c
··· 14 14 #include <linux/platform_device.h> 15 15 #include <linux/io-64-nonatomic-lo-hi.h> 16 16 #include "../pci.h" 17 + #include "pci-host-common.h" 17 18 18 19 #if defined(CONFIG_PCI_HOST_THUNDER_PEM) || (defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)) 19 20
+178 -69
drivers/pci/controller/pcie-apple.c
··· 18 18 * Author: Marc Zyngier <maz@kernel.org> 19 19 */ 20 20 21 + #include <linux/bitfield.h> 21 22 #include <linux/gpio/consumer.h> 22 23 #include <linux/kernel.h> 23 24 #include <linux/iopoll.h> ··· 31 30 #include <linux/of_irq.h> 32 31 #include <linux/pci-ecam.h> 33 32 33 + #include "pci-host-common.h" 34 + 35 + /* T8103 (original M1) and related SoCs */ 34 36 #define CORE_RC_PHYIF_CTL 0x00024 35 37 #define CORE_RC_PHYIF_CTL_RUN BIT(0) 36 38 #define CORE_RC_PHYIF_STAT 0x00028 ··· 44 40 #define CORE_RC_STAT_READY BIT(0) 45 41 #define CORE_FABRIC_STAT 0x04000 46 42 #define CORE_FABRIC_STAT_MASK 0x001F001F 47 - #define CORE_LANE_CFG(port) (0x84000 + 0x4000 * (port)) 48 - #define CORE_LANE_CFG_REFCLK0REQ BIT(0) 49 - #define CORE_LANE_CFG_REFCLK1REQ BIT(1) 50 - #define CORE_LANE_CFG_REFCLK0ACK BIT(2) 51 - #define CORE_LANE_CFG_REFCLK1ACK BIT(3) 52 - #define CORE_LANE_CFG_REFCLKEN (BIT(9) | BIT(10)) 53 - #define CORE_LANE_CTL(port) (0x84004 + 0x4000 * (port)) 54 - #define CORE_LANE_CTL_CFGACC BIT(15) 43 + 44 + #define CORE_PHY_DEFAULT_BASE(port) (0x84000 + 0x4000 * (port)) 45 + 46 + #define PHY_LANE_CFG 0x00000 47 + #define PHY_LANE_CFG_REFCLK0REQ BIT(0) 48 + #define PHY_LANE_CFG_REFCLK1REQ BIT(1) 49 + #define PHY_LANE_CFG_REFCLK0ACK BIT(2) 50 + #define PHY_LANE_CFG_REFCLK1ACK BIT(3) 51 + #define PHY_LANE_CFG_REFCLKEN (BIT(9) | BIT(10)) 52 + #define PHY_LANE_CFG_REFCLKCGEN (BIT(30) | BIT(31)) 53 + #define PHY_LANE_CTL 0x00004 54 + #define PHY_LANE_CTL_CFGACC BIT(15) 55 55 56 56 #define PORT_LTSSMCTL 0x00080 57 57 #define PORT_LTSSMCTL_START BIT(0) ··· 109 101 #define PORT_REFCLK_CGDIS BIT(8) 110 102 #define PORT_PERST 0x00814 111 103 #define PORT_PERST_OFF BIT(0) 112 - #define PORT_RID2SID(i16) (0x00828 + 4 * (i16)) 104 + #define PORT_RID2SID 0x00828 113 105 #define PORT_RID2SID_VALID BIT(31) 114 106 #define PORT_RID2SID_SID_SHIFT 16 115 107 #define PORT_RID2SID_BUS_SHIFT 8 ··· 127 119 #define PORT_TUNSTAT_PERST_ACK_PEND BIT(1) 128 120 #define PORT_PREFMEM_ENABLE 
0x00994 129 121 130 - #define MAX_RID2SID 64 122 + /* T602x (M2-pro and co) */ 123 + #define PORT_T602X_MSIADDR 0x016c 124 + #define PORT_T602X_MSIADDR_HI 0x0170 125 + #define PORT_T602X_PERST 0x082c 126 + #define PORT_T602X_RID2SID 0x3000 127 + #define PORT_T602X_MSIMAP 0x3800 128 + 129 + #define PORT_MSIMAP_ENABLE BIT(31) 130 + #define PORT_MSIMAP_TARGET GENMASK(7, 0) 131 131 132 132 /* 133 133 * The doorbell address is set to 0xfffff000, which by convention ··· 146 130 */ 147 131 #define DOORBELL_ADDR CONFIG_PCIE_APPLE_MSI_DOORBELL_ADDR 148 132 133 + struct hw_info { 134 + u32 phy_lane_ctl; 135 + u32 port_msiaddr; 136 + u32 port_msiaddr_hi; 137 + u32 port_refclk; 138 + u32 port_perst; 139 + u32 port_rid2sid; 140 + u32 port_msimap; 141 + u32 max_rid2sid; 142 + }; 143 + 144 + static const struct hw_info t8103_hw = { 145 + .phy_lane_ctl = PHY_LANE_CTL, 146 + .port_msiaddr = PORT_MSIADDR, 147 + .port_msiaddr_hi = 0, 148 + .port_refclk = PORT_REFCLK, 149 + .port_perst = PORT_PERST, 150 + .port_rid2sid = PORT_RID2SID, 151 + .port_msimap = 0, 152 + .max_rid2sid = 64, 153 + }; 154 + 155 + static const struct hw_info t602x_hw = { 156 + .phy_lane_ctl = 0, 157 + .port_msiaddr = PORT_T602X_MSIADDR, 158 + .port_msiaddr_hi = PORT_T602X_MSIADDR_HI, 159 + .port_refclk = 0, 160 + .port_perst = PORT_T602X_PERST, 161 + .port_rid2sid = PORT_T602X_RID2SID, 162 + .port_msimap = PORT_T602X_MSIMAP, 163 + /* 16 on t602x, guess for autodetect on future HW */ 164 + .max_rid2sid = 512, 165 + }; 166 + 149 167 struct apple_pcie { 150 168 struct mutex lock; 151 169 struct device *dev; 152 170 void __iomem *base; 171 + const struct hw_info *hw; 153 172 unsigned long *bitmap; 154 173 struct list_head ports; 155 174 struct completion event; ··· 193 142 }; 194 143 195 144 struct apple_pcie_port { 145 + raw_spinlock_t lock; 196 146 struct apple_pcie *pcie; 197 147 struct device_node *np; 198 148 void __iomem *base; 149 + void __iomem *phy; 199 150 struct irq_domain *domain; 200 151 struct 
list_head entry; 201 - DECLARE_BITMAP(sid_map, MAX_RID2SID); 152 + unsigned long *sid_map; 202 153 int sid_map_sz; 203 154 int idx; 204 155 }; ··· 286 233 { 287 234 struct apple_pcie_port *port = irq_data_get_irq_chip_data(data); 288 235 289 - writel_relaxed(BIT(data->hwirq), port->base + PORT_INTMSKSET); 236 + guard(raw_spinlock_irqsave)(&port->lock); 237 + rmw_set(BIT(data->hwirq), port->base + PORT_INTMSK); 290 238 } 291 239 292 240 static void apple_port_irq_unmask(struct irq_data *data) 293 241 { 294 242 struct apple_pcie_port *port = irq_data_get_irq_chip_data(data); 295 243 296 - writel_relaxed(BIT(data->hwirq), port->base + PORT_INTMSKCLR); 244 + guard(raw_spinlock_irqsave)(&port->lock); 245 + rmw_clear(BIT(data->hwirq), port->base + PORT_INTMSK); 297 246 } 298 247 299 248 static bool hwirq_is_intx(unsigned int hwirq) ··· 399 344 static int apple_pcie_port_setup_irq(struct apple_pcie_port *port) 400 345 { 401 346 struct fwnode_handle *fwnode = &port->np->fwnode; 347 + struct apple_pcie *pcie = port->pcie; 402 348 unsigned int irq; 349 + u32 val = 0; 403 350 404 351 /* FIXME: consider moving each interrupt under each port */ 405 352 irq = irq_of_parse_and_map(to_of_node(dev_fwnode(port->pcie->dev)), ··· 416 359 return -ENOMEM; 417 360 418 361 /* Disable all interrupts */ 419 - writel_relaxed(~0, port->base + PORT_INTMSKSET); 362 + writel_relaxed(~0, port->base + PORT_INTMSK); 420 363 writel_relaxed(~0, port->base + PORT_INTSTAT); 364 + writel_relaxed(~0, port->base + PORT_LINKCMDSTS); 421 365 422 366 irq_set_chained_handler_and_data(irq, apple_port_irq_handler, port); 423 367 424 368 /* Configure MSI base address */ 425 369 BUILD_BUG_ON(upper_32_bits(DOORBELL_ADDR)); 426 - writel_relaxed(lower_32_bits(DOORBELL_ADDR), port->base + PORT_MSIADDR); 370 + writel_relaxed(lower_32_bits(DOORBELL_ADDR), 371 + port->base + pcie->hw->port_msiaddr); 372 + if (pcie->hw->port_msiaddr_hi) 373 + writel_relaxed(0, port->base + pcie->hw->port_msiaddr_hi); 427 374 428 375 /* 
Enable MSIs, shared between all ports */ 429 - writel_relaxed(0, port->base + PORT_MSIBASE); 430 - writel_relaxed((ilog2(port->pcie->nvecs) << PORT_MSICFG_L2MSINUM_SHIFT) | 431 - PORT_MSICFG_EN, port->base + PORT_MSICFG); 376 + if (pcie->hw->port_msimap) { 377 + for (int i = 0; i < pcie->nvecs; i++) 378 + writel_relaxed(FIELD_PREP(PORT_MSIMAP_TARGET, i) | 379 + PORT_MSIMAP_ENABLE, 380 + port->base + pcie->hw->port_msimap + 4 * i); 381 + } else { 382 + writel_relaxed(0, port->base + PORT_MSIBASE); 383 + val = ilog2(pcie->nvecs) << PORT_MSICFG_L2MSINUM_SHIFT; 384 + } 432 385 386 + writel_relaxed(val | PORT_MSICFG_EN, port->base + PORT_MSICFG); 433 387 return 0; 434 388 } 435 389 ··· 507 439 u32 stat; 508 440 int res; 509 441 510 - res = readl_relaxed_poll_timeout(pcie->base + CORE_RC_PHYIF_STAT, stat, 511 - stat & CORE_RC_PHYIF_STAT_REFCLK, 442 + if (pcie->hw->phy_lane_ctl) 443 + rmw_set(PHY_LANE_CTL_CFGACC, port->phy + pcie->hw->phy_lane_ctl); 444 + 445 + rmw_set(PHY_LANE_CFG_REFCLK0REQ, port->phy + PHY_LANE_CFG); 446 + 447 + res = readl_relaxed_poll_timeout(port->phy + PHY_LANE_CFG, 448 + stat, stat & PHY_LANE_CFG_REFCLK0ACK, 512 449 100, 50000); 513 450 if (res < 0) 514 451 return res; 515 452 516 - rmw_set(CORE_LANE_CTL_CFGACC, pcie->base + CORE_LANE_CTL(port->idx)); 517 - rmw_set(CORE_LANE_CFG_REFCLK0REQ, pcie->base + CORE_LANE_CFG(port->idx)); 518 - 519 - res = readl_relaxed_poll_timeout(pcie->base + CORE_LANE_CFG(port->idx), 520 - stat, stat & CORE_LANE_CFG_REFCLK0ACK, 521 - 100, 50000); 522 - if (res < 0) 523 - return res; 524 - 525 - rmw_set(CORE_LANE_CFG_REFCLK1REQ, pcie->base + CORE_LANE_CFG(port->idx)); 526 - res = readl_relaxed_poll_timeout(pcie->base + CORE_LANE_CFG(port->idx), 527 - stat, stat & CORE_LANE_CFG_REFCLK1ACK, 453 + rmw_set(PHY_LANE_CFG_REFCLK1REQ, port->phy + PHY_LANE_CFG); 454 + res = readl_relaxed_poll_timeout(port->phy + PHY_LANE_CFG, 455 + stat, stat & PHY_LANE_CFG_REFCLK1ACK, 528 456 100, 50000); 529 457 530 458 if (res < 0) 531 459 
return res; 532 460 533 - rmw_clear(CORE_LANE_CTL_CFGACC, pcie->base + CORE_LANE_CTL(port->idx)); 461 + if (pcie->hw->phy_lane_ctl) 462 + rmw_clear(PHY_LANE_CTL_CFGACC, port->phy + pcie->hw->phy_lane_ctl); 534 463 535 - rmw_set(CORE_LANE_CFG_REFCLKEN, pcie->base + CORE_LANE_CFG(port->idx)); 536 - rmw_set(PORT_REFCLK_EN, port->base + PORT_REFCLK); 464 + rmw_set(PHY_LANE_CFG_REFCLKEN, port->phy + PHY_LANE_CFG); 465 + 466 + if (pcie->hw->port_refclk) 467 + rmw_set(PORT_REFCLK_EN, port->base + pcie->hw->port_refclk); 537 468 538 469 return 0; 470 + } 471 + 472 + static void __iomem *port_rid2sid_addr(struct apple_pcie_port *port, int idx) 473 + { 474 + return port->base + port->pcie->hw->port_rid2sid + 4 * idx; 539 475 } 540 476 541 477 static u32 apple_pcie_rid2sid_write(struct apple_pcie_port *port, 542 478 int idx, u32 val) 543 479 { 544 - writel_relaxed(val, port->base + PORT_RID2SID(idx)); 480 + writel_relaxed(val, port_rid2sid_addr(port, idx)); 545 481 /* Read back to ensure completion of the write */ 546 - return readl_relaxed(port->base + PORT_RID2SID(idx)); 482 + return readl_relaxed(port_rid2sid_addr(port, idx)); 547 483 } 548 484 549 485 static int apple_pcie_setup_port(struct apple_pcie *pcie, ··· 556 484 struct platform_device *platform = to_platform_device(pcie->dev); 557 485 struct apple_pcie_port *port; 558 486 struct gpio_desc *reset; 487 + struct resource *res; 488 + char name[16]; 559 489 u32 stat, idx; 560 490 int ret, i; 561 491 ··· 570 496 if (!port) 571 497 return -ENOMEM; 572 498 499 + port->sid_map = devm_bitmap_zalloc(pcie->dev, pcie->hw->max_rid2sid, GFP_KERNEL); 500 + if (!port->sid_map) 501 + return -ENOMEM; 502 + 573 503 ret = of_property_read_u32_index(np, "reg", 0, &idx); 574 504 if (ret) 575 505 return ret; ··· 583 505 port->pcie = pcie; 584 506 port->np = np; 585 507 586 - port->base = devm_platform_ioremap_resource(platform, port->idx + 2); 508 + raw_spin_lock_init(&port->lock); 509 + 510 + snprintf(name, sizeof(name), "port%d", 
port->idx); 511 + res = platform_get_resource_byname(platform, IORESOURCE_MEM, name); 512 + if (!res) 513 + res = platform_get_resource(platform, IORESOURCE_MEM, port->idx + 2); 514 + 515 + port->base = devm_ioremap_resource(&platform->dev, res); 587 516 if (IS_ERR(port->base)) 588 517 return PTR_ERR(port->base); 518 + 519 + snprintf(name, sizeof(name), "phy%d", port->idx); 520 + res = platform_get_resource_byname(platform, IORESOURCE_MEM, name); 521 + if (res) 522 + port->phy = devm_ioremap_resource(&platform->dev, res); 523 + else 524 + port->phy = pcie->base + CORE_PHY_DEFAULT_BASE(port->idx); 589 525 590 526 rmw_set(PORT_APPCLK_EN, port->base + PORT_APPCLK); 591 527 592 528 /* Assert PERST# before setting up the clock */ 593 - gpiod_set_value(reset, 1); 529 + gpiod_set_value_cansleep(reset, 1); 594 530 595 531 ret = apple_pcie_setup_refclk(pcie, port); 596 532 if (ret < 0) ··· 614 522 usleep_range(100, 200); 615 523 616 524 /* Deassert PERST# */ 617 - rmw_set(PORT_PERST_OFF, port->base + PORT_PERST); 618 - gpiod_set_value(reset, 0); 525 + rmw_set(PORT_PERST_OFF, port->base + pcie->hw->port_perst); 526 + gpiod_set_value_cansleep(reset, 0); 619 527 620 528 /* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */ 621 529 msleep(100); ··· 627 535 return ret; 628 536 } 629 537 630 - rmw_clear(PORT_REFCLK_CGDIS, port->base + PORT_REFCLK); 538 + if (pcie->hw->port_refclk) 539 + rmw_clear(PORT_REFCLK_CGDIS, port->base + pcie->hw->port_refclk); 540 + else 541 + rmw_set(PHY_LANE_CFG_REFCLKCGEN, port->phy + PHY_LANE_CFG); 542 + 631 543 rmw_clear(PORT_APPCLK_CGDIS, port->base + PORT_APPCLK); 632 544 633 545 ret = apple_pcie_port_setup_irq(port); ··· 639 543 return ret; 640 544 641 545 /* Reset all RID/SID mappings, and check for RAZ/WI registers */ 642 - for (i = 0; i < MAX_RID2SID; i++) { 546 + for (i = 0; i < pcie->hw->max_rid2sid; i++) { 643 547 if (apple_pcie_rid2sid_write(port, i, 0xbad1d) != 0xbad1d) 644 548 break; 645 549 apple_pcie_rid2sid_write(port, i, 
0); ··· 651 555 652 556 list_add_tail(&port->entry, &pcie->ports); 653 557 init_completion(&pcie->event); 558 + 559 + /* In the success path, we keep a reference to np around */ 560 + of_node_get(np); 654 561 655 562 ret = apple_pcie_port_register_irqs(port); 656 563 WARN_ON(ret); ··· 792 693 for_each_set_bit(idx, port->sid_map, port->sid_map_sz) { 793 694 u32 val; 794 695 795 - val = readl_relaxed(port->base + PORT_RID2SID(idx)); 696 + val = readl_relaxed(port_rid2sid_addr(port, idx)); 796 697 if ((val & 0xffff) == rid) { 797 698 apple_pcie_rid2sid_write(port, idx, 0); 798 699 bitmap_release_region(port->sid_map, idx, 0); ··· 806 707 807 708 static int apple_pcie_init(struct pci_config_window *cfg) 808 709 { 710 + struct apple_pcie *pcie = cfg->priv; 809 711 struct device *dev = cfg->parent; 810 - struct platform_device *platform = to_platform_device(dev); 811 - struct apple_pcie *pcie; 812 712 int ret; 813 713 814 - pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 815 - if (!pcie) 816 - return -ENOMEM; 817 - 818 - pcie->dev = dev; 819 - 820 - mutex_init(&pcie->lock); 821 - 822 - pcie->base = devm_platform_ioremap_resource(platform, 1); 823 - if (IS_ERR(pcie->base)) 824 - return PTR_ERR(pcie->base); 825 - 826 - cfg->priv = pcie; 827 - INIT_LIST_HEAD(&pcie->ports); 828 - 829 - ret = apple_msi_init(pcie); 830 - if (ret) 831 - return ret; 832 - 833 - for_each_child_of_node_scoped(dev->of_node, of_port) { 714 + for_each_available_child_of_node_scoped(dev->of_node, of_port) { 834 715 ret = apple_pcie_setup_port(pcie, of_port); 835 716 if (ret) { 836 - dev_err(pcie->dev, "Port %pOF setup fail: %d\n", of_port, ret); 717 + dev_err(dev, "Port %pOF setup fail: %d\n", of_port, ret); 837 718 return ret; 838 719 } 839 720 } ··· 832 753 } 833 754 }; 834 755 756 + static int apple_pcie_probe(struct platform_device *pdev) 757 + { 758 + struct device *dev = &pdev->dev; 759 + struct apple_pcie *pcie; 760 + int ret; 761 + 762 + pcie = devm_kzalloc(dev, sizeof(*pcie), 
GFP_KERNEL); 763 + if (!pcie) 764 + return -ENOMEM; 765 + 766 + pcie->dev = dev; 767 + pcie->hw = of_device_get_match_data(dev); 768 + if (!pcie->hw) 769 + return -ENODEV; 770 + pcie->base = devm_platform_ioremap_resource(pdev, 1); 771 + if (IS_ERR(pcie->base)) 772 + return PTR_ERR(pcie->base); 773 + 774 + mutex_init(&pcie->lock); 775 + INIT_LIST_HEAD(&pcie->ports); 776 + dev_set_drvdata(dev, pcie); 777 + 778 + ret = apple_msi_init(pcie); 779 + if (ret) 780 + return ret; 781 + 782 + return pci_host_common_init(pdev, &apple_pcie_cfg_ecam_ops); 783 + } 784 + 835 785 static const struct of_device_id apple_pcie_of_match[] = { 836 - { .compatible = "apple,pcie", .data = &apple_pcie_cfg_ecam_ops }, 786 + { .compatible = "apple,t6020-pcie", .data = &t602x_hw }, 787 + { .compatible = "apple,pcie", .data = &t8103_hw }, 837 788 { } 838 789 }; 839 790 MODULE_DEVICE_TABLE(of, apple_pcie_of_match); 840 791 841 792 static struct platform_driver apple_pcie_driver = { 842 - .probe = pci_host_common_probe, 793 + .probe = apple_pcie_probe, 843 794 .driver = { 844 795 .name = "pcie-apple", 845 796 .of_match_table = apple_pcie_of_match,
+4 -4
drivers/pci/controller/pcie-rcar-ep.c
··· 256 256 clear_bit(atu_index + 1, ep->ib_window_map); 257 257 } 258 258 259 - static int rcar_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn, 260 - u8 interrupts) 259 + static int rcar_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn, u8 nr_irqs) 261 260 { 262 261 struct rcar_pcie_endpoint *ep = epc_get_drvdata(epc); 263 262 struct rcar_pcie *pcie = &ep->pcie; 263 + u8 mmc = order_base_2(nr_irqs); 264 264 u32 flags; 265 265 266 266 flags = rcar_pci_read_reg(pcie, MSICAP(fn)); 267 - flags |= interrupts << MSICAP0_MMESCAP_OFFSET; 267 + flags |= mmc << MSICAP0_MMESCAP_OFFSET; 268 268 rcar_pci_write_reg(pcie, flags, MSICAP(fn)); 269 269 270 270 return 0; ··· 280 280 if (!(flags & MSICAP0_MSIE)) 281 281 return -EINVAL; 282 282 283 - return ((flags & MSICAP0_MMESE_MASK) >> MSICAP0_MMESE_OFFSET); 283 + return 1 << ((flags & MSICAP0_MMESE_MASK) >> MSICAP0_MMESE_OFFSET); 284 284 } 285 285 286 286 static int rcar_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
+6 -4
drivers/pci/controller/pcie-rockchip-ep.c
··· 308 308 } 309 309 310 310 static int rockchip_pcie_ep_set_msi(struct pci_epc *epc, u8 fn, u8 vfn, 311 - u8 multi_msg_cap) 311 + u8 nr_irqs) 312 312 { 313 313 struct rockchip_pcie_ep *ep = epc_get_drvdata(epc); 314 314 struct rockchip_pcie *rockchip = &ep->rockchip; 315 + u8 mmc = order_base_2(nr_irqs); 315 316 u32 flags; 316 317 317 318 flags = rockchip_pcie_read(rockchip, ··· 320 319 ROCKCHIP_PCIE_EP_MSI_CTRL_REG); 321 320 flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK; 322 321 flags |= 323 - (multi_msg_cap << ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET) | 322 + (mmc << ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET) | 324 323 (PCI_MSI_FLAGS_64BIT << ROCKCHIP_PCIE_EP_MSI_FLAGS_OFFSET); 325 324 flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP; 326 325 rockchip_pcie_write(rockchip, flags, ··· 341 340 if (!(flags & ROCKCHIP_PCIE_EP_MSI_CTRL_ME)) 342 341 return -EINVAL; 343 342 344 - return ((flags & ROCKCHIP_PCIE_EP_MSI_CTRL_MME_MASK) >> 345 - ROCKCHIP_PCIE_EP_MSI_CTRL_MME_OFFSET); 343 + return 1 << ((flags & ROCKCHIP_PCIE_EP_MSI_CTRL_MME_MASK) >> 344 + ROCKCHIP_PCIE_EP_MSI_CTRL_MME_OFFSET); 346 345 } 347 346 348 347 static void rockchip_pcie_ep_assert_intx(struct rockchip_pcie_ep *ep, u8 fn, ··· 695 694 .linkup_notifier = true, 696 695 .msi_capable = true, 697 696 .msix_capable = false, 697 + .intx_capable = true, 698 698 .align = ROCKCHIP_PCIE_AT_SIZE_ALIGN, 699 699 }; 700 700
+4 -3
drivers/pci/controller/pcie-rockchip.h
··· 319 319 "aclk", 320 320 }; 321 321 322 + /* NOTE: Do not reorder the deassert sequence of the following reset pins */ 322 323 static const char * const rockchip_pci_core_rsts[] = { 323 - "mgmt-sticky", 324 - "core", 325 - "mgmt", 326 324 "pipe", 325 + "mgmt", 326 + "core", 327 + "mgmt-sticky", 327 328 }; 328 329 329 330 struct rockchip_pcie {
+1
drivers/pci/controller/plda/pcie-microchip-host.c
··· 23 23 #include <linux/wordpart.h> 24 24 25 25 #include "../../pci.h" 26 + #include "../pci-host-common.h" 26 27 #include "pcie-plda.h" 27 28 28 29 #define MC_MAX_NUM_INBOUND_WINDOWS 8
+31 -194
drivers/pci/devres.c
··· 6 6 /* 7 7 * On the state of PCI's devres implementation: 8 8 * 9 - * The older devres API for PCI has two significant problems: 9 + * The older PCI devres API has one significant problem: 10 10 * 11 - * 1. It is very strongly tied to the statically allocated mapping table in 12 - * struct pcim_iomap_devres below. This is mostly solved in the sense of the 13 - * pcim_ functions in this file providing things like ranged mapping by 14 - * bypassing this table, whereas the functions that were present in the old 15 - * API still enter the mapping addresses into the table for users of the old 16 - * API. 17 - * 18 - * 2. The region-request-functions in pci.c do become managed IF the device has 19 - * been enabled with pcim_enable_device() instead of pci_enable_device(). 20 - * This resulted in the API becoming inconsistent: Some functions have an 21 - * obviously managed counter-part (e.g., pci_iomap() <-> pcim_iomap()), 22 - * whereas some don't and are never managed, while others don't and are 23 - * _sometimes_ managed (e.g. pci_request_region()). 24 - * 25 - * Consequently, in the new API, region requests performed by the pcim_ 26 - * functions are automatically cleaned up through the devres callback 27 - * pcim_addr_resource_release(). 28 - * 29 - * Users of pcim_enable_device() + pci_*region*() are redirected in 30 - * pci.c to the managed functions here in this file. This isn't exactly 31 - * perfect, but the only alternative way would be to port ALL drivers 32 - * using said combination to pcim_ functions. 11 + * It is very strongly tied to the statically allocated mapping table in struct 12 + * pcim_iomap_devres below. This is mostly solved in the sense of the pcim_ 13 + * functions in this file providing things like ranged mapping by bypassing 14 + * this table, whereas the functions that were present in the old API still 15 + * enter the mapping addresses into the table for users of the old API. 
33 16 * 34 17 * TODO: 35 18 * Remove the legacy table entirely once all calls to pcim_iomap_table() in ··· 70 87 res->bar = -1; 71 88 } 72 89 73 - /* 74 - * The following functions, __pcim_*_region*, exist as counterparts to the 75 - * versions from pci.c - which, unfortunately, can be in "hybrid mode", i.e., 76 - * sometimes managed, sometimes not. 77 - * 78 - * To separate the APIs cleanly, we define our own, simplified versions here. 79 - */ 80 - 81 - /** 82 - * __pcim_request_region_range - Request a ranged region 83 - * @pdev: PCI device the region belongs to 84 - * @bar: BAR the range is within 85 - * @offset: offset from the BAR's start address 86 - * @maxlen: length in bytes, beginning at @offset 87 - * @name: name of the driver requesting the resource 88 - * @req_flags: flags for the request, e.g., for kernel-exclusive requests 89 - * 90 - * Returns: 0 on success, a negative error code on failure. 91 - * 92 - * Request a range within a device's PCI BAR. Sanity check the input. 93 - */ 94 - static int __pcim_request_region_range(struct pci_dev *pdev, int bar, 95 - unsigned long offset, 96 - unsigned long maxlen, 97 - const char *name, int req_flags) 98 - { 99 - resource_size_t start = pci_resource_start(pdev, bar); 100 - resource_size_t len = pci_resource_len(pdev, bar); 101 - unsigned long dev_flags = pci_resource_flags(pdev, bar); 102 - 103 - if (start == 0 || len == 0) /* Unused BAR. */ 104 - return 0; 105 - if (len <= offset) 106 - return -EINVAL; 107 - 108 - start += offset; 109 - len -= offset; 110 - 111 - if (len > maxlen && maxlen != 0) 112 - len = maxlen; 113 - 114 - if (dev_flags & IORESOURCE_IO) { 115 - if (!request_region(start, len, name)) 116 - return -EBUSY; 117 - } else if (dev_flags & IORESOURCE_MEM) { 118 - if (!__request_mem_region(start, len, name, req_flags)) 119 - return -EBUSY; 120 - } else { 121 - /* That's not a device we can request anything on. 
*/ 122 - return -ENODEV; 123 - } 124 - 125 - return 0; 126 - } 127 - 128 - static void __pcim_release_region_range(struct pci_dev *pdev, int bar, 129 - unsigned long offset, 130 - unsigned long maxlen) 131 - { 132 - resource_size_t start = pci_resource_start(pdev, bar); 133 - resource_size_t len = pci_resource_len(pdev, bar); 134 - unsigned long flags = pci_resource_flags(pdev, bar); 135 - 136 - if (len <= offset || start == 0) 137 - return; 138 - 139 - if (len == 0 || maxlen == 0) /* This an unused BAR. Do nothing. */ 140 - return; 141 - 142 - start += offset; 143 - len -= offset; 144 - 145 - if (len > maxlen) 146 - len = maxlen; 147 - 148 - if (flags & IORESOURCE_IO) 149 - release_region(start, len); 150 - else if (flags & IORESOURCE_MEM) 151 - release_mem_region(start, len); 152 - } 153 - 154 - static int __pcim_request_region(struct pci_dev *pdev, int bar, 155 - const char *name, int flags) 156 - { 157 - unsigned long offset = 0; 158 - unsigned long len = pci_resource_len(pdev, bar); 159 - 160 - return __pcim_request_region_range(pdev, bar, offset, len, name, flags); 161 - } 162 - 163 - static void __pcim_release_region(struct pci_dev *pdev, int bar) 164 - { 165 - unsigned long offset = 0; 166 - unsigned long len = pci_resource_len(pdev, bar); 167 - 168 - __pcim_release_region_range(pdev, bar, offset, len); 169 - } 170 - 171 90 static void pcim_addr_resource_release(struct device *dev, void *resource_raw) 172 91 { 173 92 struct pci_dev *pdev = to_pci_dev(dev); ··· 77 192 78 193 switch (res->type) { 79 194 case PCIM_ADDR_DEVRES_TYPE_REGION: 80 - __pcim_release_region(pdev, res->bar); 195 + pci_release_region(pdev, res->bar); 81 196 break; 82 197 case PCIM_ADDR_DEVRES_TYPE_REGION_MAPPING: 83 198 pci_iounmap(pdev, res->baseaddr); 84 - __pcim_release_region(pdev, res->bar); 199 + pci_release_region(pdev, res->bar); 85 200 break; 86 201 case PCIM_ADDR_DEVRES_TYPE_MAPPING: 87 202 pci_iounmap(pdev, res->baseaddr); ··· 620 735 res->type = 
PCIM_ADDR_DEVRES_TYPE_REGION_MAPPING; 621 736 res->bar = bar; 622 737 623 - ret = __pcim_request_region(pdev, bar, name, 0); 738 + ret = pci_request_region(pdev, bar, name); 624 739 if (ret != 0) 625 740 goto err_region; 626 741 ··· 634 749 return res->baseaddr; 635 750 636 751 err_iomap: 637 - __pcim_release_region(pdev, bar); 752 + pci_release_region(pdev, bar); 638 753 err_region: 639 754 pcim_addr_devres_free(res); 640 755 ··· 708 823 } 709 824 EXPORT_SYMBOL(pcim_iomap_regions); 710 825 711 - static int _pcim_request_region(struct pci_dev *pdev, int bar, const char *name, 712 - int request_flags) 713 - { 714 - int ret; 715 - struct pcim_addr_devres *res; 716 - 717 - if (!pci_bar_index_is_valid(bar)) 718 - return -EINVAL; 719 - 720 - res = pcim_addr_devres_alloc(pdev); 721 - if (!res) 722 - return -ENOMEM; 723 - res->type = PCIM_ADDR_DEVRES_TYPE_REGION; 724 - res->bar = bar; 725 - 726 - ret = __pcim_request_region(pdev, bar, name, request_flags); 727 - if (ret != 0) { 728 - pcim_addr_devres_free(res); 729 - return ret; 730 - } 731 - 732 - devres_add(&pdev->dev, res); 733 - return 0; 734 - } 735 - 736 826 /** 737 827 * pcim_request_region - Request a PCI BAR 738 828 * @pdev: PCI device to request region for ··· 723 863 */ 724 864 int pcim_request_region(struct pci_dev *pdev, int bar, const char *name) 725 865 { 726 - return _pcim_request_region(pdev, bar, name, 0); 866 + int ret; 867 + struct pcim_addr_devres *res; 868 + 869 + if (!pci_bar_index_is_valid(bar)) 870 + return -EINVAL; 871 + 872 + res = pcim_addr_devres_alloc(pdev); 873 + if (!res) 874 + return -ENOMEM; 875 + res->type = PCIM_ADDR_DEVRES_TYPE_REGION; 876 + res->bar = bar; 877 + 878 + ret = pci_request_region(pdev, bar, name); 879 + if (ret != 0) { 880 + pcim_addr_devres_free(res); 881 + return ret; 882 + } 883 + 884 + devres_add(&pdev->dev, res); 885 + return 0; 727 886 } 728 887 EXPORT_SYMBOL(pcim_request_region); 729 - 730 - /** 731 - * pcim_request_region_exclusive - Request a PCI BAR exclusively 
732 - * @pdev: PCI device to request region for 733 - * @bar: Index of BAR to request 734 - * @name: Name of the driver requesting the resource 735 - * 736 - * Returns: 0 on success, a negative error code on failure. 737 - * 738 - * Request region specified by @bar exclusively. 739 - * 740 - * The region will automatically be released on driver detach. If desired, 741 - * release manually only with pcim_release_region(). 742 - */ 743 - int pcim_request_region_exclusive(struct pci_dev *pdev, int bar, const char *name) 744 - { 745 - return _pcim_request_region(pdev, bar, name, IORESOURCE_EXCLUSIVE); 746 - } 747 888 748 889 /** 749 890 * pcim_release_region - Release a PCI BAR ··· 754 893 * Release a region manually that was previously requested by 755 894 * pcim_request_region(). 756 895 */ 757 - void pcim_release_region(struct pci_dev *pdev, int bar) 896 + static void pcim_release_region(struct pci_dev *pdev, int bar) 758 897 { 759 898 struct pcim_addr_devres res_searched; 760 899 ··· 815 954 return ret; 816 955 } 817 956 EXPORT_SYMBOL(pcim_request_all_regions); 818 - 819 - /** 820 - * pcim_iounmap_regions - Unmap and release PCI BARs (DEPRECATED) 821 - * @pdev: PCI device to map IO resources for 822 - * @mask: Mask of BARs to unmap and release 823 - * 824 - * Unmap and release regions specified by @mask. 825 - * 826 - * This function is DEPRECATED. Do not use it in new code. 827 - * Use pcim_iounmap_region() instead. 828 - */ 829 - void pcim_iounmap_regions(struct pci_dev *pdev, int mask) 830 - { 831 - int i; 832 - 833 - for (i = 0; i < PCI_STD_NUM_BARS; i++) { 834 - if (!mask_contains_bar(mask, i)) 835 - continue; 836 - 837 - pcim_iounmap_region(pdev, i); 838 - pcim_remove_bar_from_legacy_table(pdev, i); 839 - } 840 - } 841 - EXPORT_SYMBOL(pcim_iounmap_regions); 842 957 843 958 /** 844 959 * pcim_iomap_range - Create a ranged __iomap mapping within a PCI BAR
+2
drivers/pci/ecam.c
··· 84 84 goto err_exit_iomap; 85 85 } 86 86 87 + cfg->priv = dev_get_drvdata(dev); 88 + 87 89 if (ops->init) { 88 90 err = ops->init(cfg); 89 91 if (err)
+3 -23
drivers/pci/endpoint/functions/pci-epf-vntb.c
··· 408 408 */ 409 409 static int epf_ntb_config_spad_bar_alloc(struct epf_ntb *ntb) 410 410 { 411 - size_t align; 412 411 enum pci_barno barno; 413 412 struct epf_ntb_ctrl *ctrl; 414 413 u32 spad_size, ctrl_size; 415 - u64 size; 416 414 struct pci_epf *epf = ntb->epf; 417 415 struct device *dev = &epf->dev; 418 416 u32 spad_count; ··· 420 422 epf->func_no, 421 423 epf->vfunc_no); 422 424 barno = ntb->epf_ntb_bar[BAR_CONFIG]; 423 - size = epc_features->bar[barno].fixed_size; 424 - align = epc_features->align; 425 - 426 - if ((!IS_ALIGNED(size, align))) 427 - return -EINVAL; 428 - 429 425 spad_count = ntb->spad_count; 430 426 431 - ctrl_size = sizeof(struct epf_ntb_ctrl); 427 + ctrl_size = ALIGN(sizeof(struct epf_ntb_ctrl), sizeof(u32)); 432 428 spad_size = 2 * spad_count * sizeof(u32); 433 429 434 - if (!align) { 435 - ctrl_size = roundup_pow_of_two(ctrl_size); 436 - spad_size = roundup_pow_of_two(spad_size); 437 - } else { 438 - ctrl_size = ALIGN(ctrl_size, align); 439 - spad_size = ALIGN(spad_size, align); 440 - } 441 - 442 - if (!size) 443 - size = ctrl_size + spad_size; 444 - else if (size < ctrl_size + spad_size) 445 - return -EINVAL; 446 - 447 - base = pci_epf_alloc_space(epf, size, barno, epc_features, 0); 430 + base = pci_epf_alloc_space(epf, ctrl_size + spad_size, 431 + barno, epc_features, 0); 448 432 if (!base) { 449 433 dev_err(dev, "Config/Status/SPAD alloc region fail\n"); 450 434 return -ENOMEM;
+10 -16
drivers/pci/endpoint/pci-epc-core.c
··· 293 293 if (interrupt < 0) 294 294 return 0; 295 295 296 - interrupt = 1 << interrupt; 297 - 298 296 return interrupt; 299 297 } 300 298 EXPORT_SYMBOL_GPL(pci_epc_get_msi); ··· 302 304 * @epc: the EPC device on which MSI has to be configured 303 305 * @func_no: the physical endpoint function number in the EPC device 304 306 * @vfunc_no: the virtual endpoint function number in the physical function 305 - * @interrupts: number of MSI interrupts required by the EPF 307 + * @nr_irqs: number of MSI interrupts required by the EPF 306 308 * 307 309 * Invoke to set the required number of MSI interrupts. 308 310 */ 309 - int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no, u8 interrupts) 311 + int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no, u8 nr_irqs) 310 312 { 311 313 int ret; 312 - u8 encode_int; 313 314 314 315 if (!pci_epc_function_is_valid(epc, func_no, vfunc_no)) 315 316 return -EINVAL; 316 317 317 - if (interrupts < 1 || interrupts > 32) 318 + if (nr_irqs < 1 || nr_irqs > 32) 318 319 return -EINVAL; 319 320 320 321 if (!epc->ops->set_msi) 321 322 return 0; 322 323 323 - encode_int = order_base_2(interrupts); 324 - 325 324 mutex_lock(&epc->lock); 326 - ret = epc->ops->set_msi(epc, func_no, vfunc_no, encode_int); 325 + ret = epc->ops->set_msi(epc, func_no, vfunc_no, nr_irqs); 327 326 mutex_unlock(&epc->lock); 328 327 329 328 return ret; ··· 352 357 if (interrupt < 0) 353 358 return 0; 354 359 355 - return interrupt + 1; 360 + return interrupt; 356 361 } 357 362 EXPORT_SYMBOL_GPL(pci_epc_get_msix); 358 363
··· 361 366 * @epc: the EPC device on which MSI-X has to be configured 362 367 * @func_no: the physical endpoint function number in the EPC device 363 368 * @vfunc_no: the virtual endpoint function number in the physical function 364 - * @interrupts: number of MSI-X interrupts required by the EPF 369 + * @nr_irqs: number of MSI-X interrupts required by the EPF 365 370 * @bir: BAR where the MSI-X table resides 366 371 * @offset: Offset pointing to the start of MSI-X table 367 372 * 368 373 * Invoke to set the required number of MSI-X interrupts. 369 374 */ 370 - int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 371 - u16 interrupts, enum pci_barno bir, u32 offset) 375 + int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no, u16 nr_irqs, 376 + enum pci_barno bir, u32 offset) 372 377 { 373 378 int ret; 374 379 375 380 if (!pci_epc_function_is_valid(epc, func_no, vfunc_no)) 376 381 return -EINVAL; 377 382 378 - if (interrupts < 1 || interrupts > 2048) 383 + if (nr_irqs < 1 || nr_irqs > 2048) 379 384 return -EINVAL; 380 385 381 386 if (!epc->ops->set_msix) 382 387 return 0; 383 388 384 389 mutex_lock(&epc->lock); 385 - ret = epc->ops->set_msix(epc, func_no, vfunc_no, interrupts - 1, bir, 390 + ret = epc->ops->set_msix(epc, func_no, vfunc_no, nr_irqs, bir, offset); 386 - offset); 391 387 mutex_unlock(&epc->lock); 388 392 389 393 return ret;
drivers/pci/endpoint/pci-epf-core.c (+15 -7)
···
     }
 
     dev = epc->dev.parent;
-    dma_free_coherent(dev, epf_bar[bar].size, addr,
+    dma_free_coherent(dev, epf_bar[bar].aligned_size, addr,
                       epf_bar[bar].phys_addr);
 
     epf_bar[bar].phys_addr = 0;
     epf_bar[bar].addr = NULL;
     epf_bar[bar].size = 0;
+    epf_bar[bar].aligned_size = 0;
     epf_bar[bar].barno = 0;
     epf_bar[bar].flags = 0;
 }
···
                       enum pci_epc_interface_type type)
 {
     u64 bar_fixed_size = epc_features->bar[bar].fixed_size;
-    size_t align = epc_features->align;
+    size_t aligned_size, align = epc_features->align;
     struct pci_epf_bar *epf_bar;
     dma_addr_t phys_addr;
     struct pci_epc *epc;
···
             return NULL;
         }
         size = bar_fixed_size;
+    } else {
+        /* BAR size must be power of two */
+        size = roundup_pow_of_two(size);
     }
 
-    if (align)
-        size = ALIGN(size, align);
-    else
-        size = roundup_pow_of_two(size);
+    /*
+     * Allocate enough memory to accommodate the iATU alignment
+     * requirement. In most cases, this will be the same as .size but
+     * it might be different if, for example, the fixed size of a BAR
+     * is smaller than align.
+     */
+    aligned_size = align ? ALIGN(size, align) : size;
 
     if (type == PRIMARY_INTERFACE) {
         epc = epf->epc;
···
     }
 
     dev = epc->dev.parent;
-    space = dma_alloc_coherent(dev, size, &phys_addr, GFP_KERNEL);
+    space = dma_alloc_coherent(dev, aligned_size, &phys_addr, GFP_KERNEL);
     if (!space) {
         dev_err(dev, "failed to allocate mem space\n");
         return NULL;
···
     epf_bar[bar].phys_addr = phys_addr;
     epf_bar[bar].addr = space;
     epf_bar[bar].size = size;
+    epf_bar[bar].aligned_size = aligned_size;
     epf_bar[bar].barno = bar;
     if (upper_32_bits(size) || epc_features->bar[bar].only_64bit)
         epf_bar[bar].flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
drivers/pci/hotplug/pci_hotplug_core.c (+69 -4)
···
 #include <linux/types.h>
 #include <linux/kobject.h>
 #include <linux/sysfs.h>
-#include <linux/pagemap.h>
 #include <linux/init.h>
-#include <linux/mount.h>
-#include <linux/namei.h>
 #include <linux/pci.h>
 #include <linux/pci_hotplug.h>
-#include <linux/uaccess.h>
 #include "../pci.h"
 #include "cpci_hotplug.h"
···
     pci_destroy_slot(pci_slot);
 }
 EXPORT_SYMBOL_GPL(pci_hp_destroy);
+
+static DECLARE_WAIT_QUEUE_HEAD(pci_hp_link_change_wq);
+
+/**
+ * pci_hp_ignore_link_change - begin code section causing spurious link changes
+ * @pdev: PCI hotplug bridge
+ *
+ * Mark the beginning of a code section causing spurious link changes on the
+ * Secondary Bus of @pdev, e.g. as a side effect of a Secondary Bus Reset,
+ * D3cold transition, firmware update or FPGA reconfiguration.
+ *
+ * Hotplug drivers can thus check whether such a code section is executing
+ * concurrently, await it with pci_hp_spurious_link_change() and ignore the
+ * resulting link change events.
+ *
+ * Must be paired with pci_hp_unignore_link_change(). May be called both
+ * from the PCI core and from Endpoint drivers. May be called for bridges
+ * which are not hotplug-capable, in which case it has no effect because
+ * no hotplug driver is bound to the bridge.
+ */
+void pci_hp_ignore_link_change(struct pci_dev *pdev)
+{
+    set_bit(PCI_LINK_CHANGING, &pdev->priv_flags);
+    smp_mb__after_atomic(); /* pairs with implied barrier of wait_event() */
+}
+
+/**
+ * pci_hp_unignore_link_change - end code section causing spurious link changes
+ * @pdev: PCI hotplug bridge
+ *
+ * Mark the end of a code section causing spurious link changes on the
+ * Secondary Bus of @pdev. Must be paired with pci_hp_ignore_link_change().
+ */
+void pci_hp_unignore_link_change(struct pci_dev *pdev)
+{
+    set_bit(PCI_LINK_CHANGED, &pdev->priv_flags);
+    mb(); /* ensure pci_hp_spurious_link_change() sees either bit set */
+    clear_bit(PCI_LINK_CHANGING, &pdev->priv_flags);
+    wake_up_all(&pci_hp_link_change_wq);
+}
+
+/**
+ * pci_hp_spurious_link_change - check for spurious link changes
+ * @pdev: PCI hotplug bridge
+ *
+ * Check whether a code section is executing concurrently which is causing
+ * spurious link changes on the Secondary Bus of @pdev. Await the end of the
+ * code section if so.
+ *
+ * May be called by hotplug drivers to check whether a link change is spurious
+ * and can be ignored.
+ *
+ * Because a genuine link change may have occurred in-between a spurious link
+ * change and the invocation of this function, hotplug drivers should perform
+ * sanity checks such as retrieving the current link state and bringing down
+ * the slot if the link is down.
+ *
+ * Return: %true if such a code section has been executing concurrently,
+ * otherwise %false. Also return %true if such a code section has not been
+ * executing concurrently, but at least once since the last invocation of this
+ * function.
+ */
+bool pci_hp_spurious_link_change(struct pci_dev *pdev)
+{
+    wait_event(pci_hp_link_change_wq,
+               !test_bit(PCI_LINK_CHANGING, &pdev->priv_flags));
+
+    return test_and_clear_bit(PCI_LINK_CHANGED, &pdev->priv_flags);
+}
 
 static int __init pci_hotplug_init(void)
 {
drivers/pci/hotplug/pciehp.h (+1)
···
 int pciehp_card_present_or_link_active(struct controller *ctrl);
 int pciehp_check_link_status(struct controller *ctrl);
 int pciehp_check_link_active(struct controller *ctrl);
+bool pciehp_device_replaced(struct controller *ctrl);
 void pciehp_release_ctrl(struct controller *ctrl);
 
 int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot);
drivers/pci/hotplug/pciehp_core.c (-29)
···
     return 0;
 }
 
-static bool pciehp_device_replaced(struct controller *ctrl)
-{
-    struct pci_dev *pdev __free(pci_dev_put) = NULL;
-    u32 reg;
-
-    if (pci_dev_is_disconnected(ctrl->pcie->port))
-        return false;
-
-    pdev = pci_get_slot(ctrl->pcie->port->subordinate, PCI_DEVFN(0, 0));
-    if (!pdev)
-        return true;
-
-    if (pci_read_config_dword(pdev, PCI_VENDOR_ID, &reg) ||
-        reg != (pdev->vendor | (pdev->device << 16)) ||
-        pci_read_config_dword(pdev, PCI_CLASS_REVISION, &reg) ||
-        reg != (pdev->revision | (pdev->class << 8)))
-        return true;
-
-    if (pdev->hdr_type == PCI_HEADER_TYPE_NORMAL &&
-        (pci_read_config_dword(pdev, PCI_SUBSYSTEM_VENDOR_ID, &reg) ||
-         reg != (pdev->subsystem_vendor | (pdev->subsystem_device << 16))))
-        return true;
-
-    if (pci_get_dsn(pdev) != ctrl->dsn)
-        return true;
-
-    return false;
-}
-
 static int pciehp_resume_noirq(struct pcie_device *dev)
 {
     struct controller *ctrl = get_service_data(dev);
drivers/pci/hotplug/pciehp_ctrl.c (+1 -1)
···
                       INDICATOR_NOOP);
 
     /* Don't carry LBMS indications across */
-    pcie_reset_lbms_count(ctrl->pcie->port);
+    pcie_reset_lbms(ctrl->pcie->port);
 }
 
 static int pciehp_enable_slot(struct controller *ctrl);
drivers/pci/hotplug/pciehp_hpc.c (+51 -27)
···
               PCI_EXP_SLTCTL_PWR_OFF);
 }
 
-static void pciehp_ignore_dpc_link_change(struct controller *ctrl,
-                                          struct pci_dev *pdev, int irq)
+bool pciehp_device_replaced(struct controller *ctrl)
+{
+    struct pci_dev *pdev __free(pci_dev_put) = NULL;
+    u32 reg;
+
+    if (pci_dev_is_disconnected(ctrl->pcie->port))
+        return false;
+
+    pdev = pci_get_slot(ctrl->pcie->port->subordinate, PCI_DEVFN(0, 0));
+    if (!pdev)
+        return true;
+
+    if (pci_read_config_dword(pdev, PCI_VENDOR_ID, &reg) ||
+        reg != (pdev->vendor | (pdev->device << 16)) ||
+        pci_read_config_dword(pdev, PCI_CLASS_REVISION, &reg) ||
+        reg != (pdev->revision | (pdev->class << 8)))
+        return true;
+
+    if (pdev->hdr_type == PCI_HEADER_TYPE_NORMAL &&
+        (pci_read_config_dword(pdev, PCI_SUBSYSTEM_VENDOR_ID, &reg) ||
+         reg != (pdev->subsystem_vendor | (pdev->subsystem_device << 16))))
+        return true;
+
+    if (pci_get_dsn(pdev) != ctrl->dsn)
+        return true;
+
+    return false;
+}
+
+static void pciehp_ignore_link_change(struct controller *ctrl,
+                                      struct pci_dev *pdev, int irq,
+                                      u16 ignored_events)
 {
     /*
      * Ignore link changes which occurred while waiting for DPC recovery.
      * Could be several if DPC triggered multiple times consecutively.
+     * Also ignore link changes caused by Secondary Bus Reset, etc.
      */
     synchronize_hardirq(irq);
-    atomic_and(~PCI_EXP_SLTSTA_DLLSC, &ctrl->pending_events);
+    atomic_and(~ignored_events, &ctrl->pending_events);
     if (pciehp_poll_mode)
         pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
-                                   PCI_EXP_SLTSTA_DLLSC);
-    ctrl_info(ctrl, "Slot(%s): Link Down/Up ignored (recovered by DPC)\n",
-              slot_name(ctrl));
+                                   ignored_events);
+    ctrl_info(ctrl, "Slot(%s): Link Down/Up ignored\n", slot_name(ctrl));
 
     /*
      * If the link is unexpectedly down after successful recovery,
···
      * Synthesize it to ensure that it is acted on.
      */
     down_read_nested(&ctrl->reset_lock, ctrl->depth);
-    if (!pciehp_check_link_active(ctrl))
-        pciehp_request(ctrl, PCI_EXP_SLTSTA_DLLSC);
+    if (!pciehp_check_link_active(ctrl) || pciehp_device_replaced(ctrl))
+        pciehp_request(ctrl, ignored_events);
     up_read(&ctrl->reset_lock);
 }
···
 
     /*
      * Ignore Link Down/Up events caused by Downstream Port Containment
-     * if recovery from the error succeeded.
+     * if recovery succeeded, or caused by Secondary Bus Reset,
+     * suspend to D3cold, firmware update, FPGA reconfiguration, etc.
      */
-    if ((events & PCI_EXP_SLTSTA_DLLSC) && pci_dpc_recovered(pdev) &&
+    if ((events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC)) &&
+        (pci_dpc_recovered(pdev) || pci_hp_spurious_link_change(pdev)) &&
         ctrl->state == ON_STATE) {
-        events &= ~PCI_EXP_SLTSTA_DLLSC;
-        pciehp_ignore_dpc_link_change(ctrl, pdev, irq);
+        u16 ignored_events = PCI_EXP_SLTSTA_DLLSC;
+
+        if (!ctrl->inband_presence_disabled)
+            ignored_events |= events & PCI_EXP_SLTSTA_PDC;
+
+        events &= ~ignored_events;
+        pciehp_ignore_link_change(ctrl, pdev, irq, ignored_events);
     }
 
     /*
···
 {
     struct controller *ctrl = to_ctrl(hotplug_slot);
     struct pci_dev *pdev = ctrl_dev(ctrl);
-    u16 stat_mask = 0, ctrl_mask = 0;
     int rc;
 
     if (probe)
···
 
     down_write_nested(&ctrl->reset_lock, ctrl->depth);
 
-    if (!ATTN_BUTTN(ctrl)) {
-        ctrl_mask |= PCI_EXP_SLTCTL_PDCE;
-        stat_mask |= PCI_EXP_SLTSTA_PDC;
-    }
-    ctrl_mask |= PCI_EXP_SLTCTL_DLLSCE;
-    stat_mask |= PCI_EXP_SLTSTA_DLLSC;
-
-    pcie_write_cmd(ctrl, 0, ctrl_mask);
-    ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
-             pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, 0);
+    pci_hp_ignore_link_change(pdev);
 
     rc = pci_bridge_secondary_bus_reset(ctrl->pcie->port);
 
-    pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask);
-    pcie_write_cmd_nowait(ctrl, ctrl_mask, ctrl_mask);
-    ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
-             pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask);
+    pci_hp_unignore_link_change(pdev);
 
     up_write(&ctrl->reset_lock);
     return rc;
drivers/pci/iomap.c (-16)
···
  *
  * @maxlen specifies the maximum length to map. If you want to get access to
  * the complete BAR from offset to the end, pass %0 here.
- *
- * NOTE:
- * This function is never managed, even if you initialized with
- * pcim_enable_device().
  * */
 void __iomem *pci_iomap_range(struct pci_dev *dev,
                               int bar,
···
  *
  * @maxlen specifies the maximum length to map. If you want to get access to
  * the complete BAR from offset to the end, pass %0 here.
- *
- * NOTE:
- * This function is never managed, even if you initialized with
- * pcim_enable_device().
  * */
 void __iomem *pci_iomap_wc_range(struct pci_dev *dev,
                                  int bar,
···
  *
  * @maxlen specifies the maximum length to map. If you want to get access to
  * the complete BAR without checking for its length first, pass %0 here.
- *
- * NOTE:
- * This function is never managed, even if you initialized with
- * pcim_enable_device(). If you need automatic cleanup, use pcim_iomap().
  * */
 void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen)
 {
···
  *
  * @maxlen specifies the maximum length to map. If you want to get access to
  * the complete BAR without checking for its length first, pass %0 here.
- *
- * NOTE:
- * This function is never managed, even if you initialized with
- * pcim_enable_device().
  * */
 void __iomem *pci_iomap_wc(struct pci_dev *dev, int bar, unsigned long maxlen)
 {
drivers/pci/of.c (+44)
···
     return slot_power_limit_mw;
 }
 EXPORT_SYMBOL_GPL(of_pci_get_slot_power_limit);
+
+/**
+ * of_pci_get_equalization_presets - Parses the "eq-presets-Ngts" property.
+ *
+ * @dev: Device containing the properties.
+ * @presets: Pointer to store the parsed data.
+ * @num_lanes: Maximum number of lanes supported.
+ *
+ * If the property is present, read and store the data in the @presets
+ * structure. Else, assign a default value of PCI_EQ_RESV.
+ *
+ * Return: 0 if the property is absent or successfully parsed,
+ * errno otherwise.
+ */
+int of_pci_get_equalization_presets(struct device *dev,
+                                    struct pci_eq_presets *presets,
+                                    int num_lanes)
+{
+    char name[20];
+    int ret;
+
+    presets->eq_presets_8gts[0] = PCI_EQ_RESV;
+    ret = of_property_read_u16_array(dev->of_node, "eq-presets-8gts",
+                                     presets->eq_presets_8gts, num_lanes);
+    if (ret && ret != -EINVAL) {
+        dev_err(dev, "Error reading eq-presets-8gts: %d\n", ret);
+        return ret;
+    }
+
+    for (int i = 0; i < EQ_PRESET_TYPE_MAX - 1; i++) {
+        presets->eq_presets_Ngts[i][0] = PCI_EQ_RESV;
+        snprintf(name, sizeof(name), "eq-presets-%dgts", 8 << (i + 1));
+        ret = of_property_read_u8_array(dev->of_node, name,
+                                        presets->eq_presets_Ngts[i],
+                                        num_lanes);
+        if (ret && ret != -EINVAL) {
+            dev_err(dev, "Error reading %s: %d\n", name, ret);
+            return ret;
+        }
+    }
+
+    return 0;
+}
+EXPORT_SYMBOL_GPL(of_pci_get_equalization_presets);
drivers/pci/pci-acpi.c (+13 -10)
···
         return NULL;
 
     root_ops = kzalloc(sizeof(*root_ops), GFP_KERNEL);
-    if (!root_ops) {
-        kfree(ri);
-        return NULL;
-    }
+    if (!root_ops)
+        goto free_ri;
 
     ri->cfg = pci_acpi_setup_ecam_mapping(root);
-    if (!ri->cfg) {
-        kfree(ri);
-        kfree(root_ops);
-        return NULL;
-    }
+    if (!ri->cfg)
+        goto free_root_ops;
 
     root_ops->release_info = pci_acpi_generic_release_info;
     root_ops->prepare_resources = pci_acpi_root_prepare_resources;
     root_ops->pci_ops = (struct pci_ops *)&ri->cfg->ops->pci_ops;
     bus = acpi_pci_root_create(root, root_ops, &ri->common, ri->cfg);
     if (!bus)
-        return NULL;
+        goto free_cfg;
 
     /* If we must preserve the resource configuration, claim now */
     host = pci_find_host_bridge(bus);
···
         pcie_bus_configure_settings(child);
 
     return bus;
+
+free_cfg:
+    pci_ecam_free(ri->cfg);
+free_root_ops:
+    kfree(root_ops);
+free_ri:
+    kfree(ri);
+    return NULL;
 }
 
 void pcibios_add_bus(struct pci_bus *bus)
drivers/pci/pci-driver.c (+1 -7)
···
     pci_enable_wake(pci_dev, PCI_D0, false);
 }
 
-static void pci_pm_power_up_and_verify_state(struct pci_dev *pci_dev)
-{
-    pci_power_up(pci_dev);
-    pci_update_current_state(pci_dev, PCI_D0);
-}
-
 static void pci_pm_default_resume_early(struct pci_dev *pci_dev)
 {
     pci_pm_power_up_and_verify_state(pci_dev);
···
     struct pci_driver *pci_drv;
     const struct pci_device_id *found_id;
 
-    if (!pci_dev->match_driver)
+    if (pci_dev_binding_disallowed(pci_dev))
         return 0;
 
     pci_drv = (struct pci_driver *)to_pci_driver(drv);
drivers/pci/pci-sysfs.c (+4)
···
         return count;
     }
 
+    pm_runtime_get_sync(dev);
+    struct device *pmdev __free(pm_runtime_put) = dev;
+
     if (sysfs_streq(buf, "default")) {
         pci_init_reset_methods(pdev);
         return count;
···
     &pcie_dev_attr_group,
 #ifdef CONFIG_PCIEAER
     &aer_stats_attr_group,
+    &aer_attr_group,
 #endif
 #ifdef CONFIG_PCIEASPM
     &aspm_ctrl_attr_group,
drivers/pci/pci.c (+33 -55)
···
 }
 EXPORT_SYMBOL_GPL(pci_d3cold_disable);
 
+void pci_pm_power_up_and_verify_state(struct pci_dev *pci_dev)
+{
+    pci_power_up(pci_dev);
+    pci_update_current_state(pci_dev, PCI_D0);
+}
+
 /**
  * pci_pm_init - Initialize PM functions of given PCI device
  * @dev: PCI device to handle.
···
     u16 status;
     u16 pmc;
 
-    pm_runtime_forbid(&dev->dev);
-    pm_runtime_set_active(&dev->dev);
-    pm_runtime_enable(&dev->dev);
     device_enable_async_suspend(&dev->dev);
     dev->wakeup_prepared = false;
···
     pci_read_config_word(dev, PCI_STATUS, &status);
     if (status & PCI_STATUS_IMM_READY)
         dev->imm_ready = 1;
+    pci_pm_power_up_and_verify_state(dev);
+    pm_runtime_forbid(&dev->dev);
+    pm_runtime_set_active(&dev->dev);
+    pm_runtime_enable(&dev->dev);
 }
 
 static unsigned long pci_ea_flags(struct pci_dev *dev, u8 prop)
···
     if (!pci_bar_index_is_valid(bar))
         return;
 
-    /*
-     * This is done for backwards compatibility, because the old PCI devres
-     * API had a mode in which the function became managed if it had been
-     * enabled with pcim_enable_device() instead of pci_enable_device().
-     */
-    if (pci_is_managed(pdev)) {
-        pcim_release_region(pdev, bar);
-        return;
-    }
-
     if (pci_resource_len(pdev, bar) == 0)
         return;
     if (pci_resource_flags(pdev, bar) & IORESOURCE_IO)
···
     if (!pci_bar_index_is_valid(bar))
         return -EINVAL;
 
-    if (pci_is_managed(pdev)) {
-        if (exclusive == IORESOURCE_EXCLUSIVE)
-            return pcim_request_region_exclusive(pdev, bar, name);
-
-        return pcim_request_region(pdev, bar, name);
-    }
-
     if (pci_resource_len(pdev, bar) == 0)
         return 0;
···
  *
  * Returns 0 on success, or %EBUSY on error. A warning
  * message is also printed on failure.
- *
- * NOTE:
- * This is a "hybrid" function: It's normally unmanaged, but becomes managed
- * when pcim_enable_device() has been called in advance. This hybrid feature is
- * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
 int pci_request_region(struct pci_dev *pdev, int bar, const char *name)
 {
···
  * @name: Name of the driver requesting the resources
  *
  * Returns: 0 on success, negative error code on failure.
- *
- * NOTE:
- * This is a "hybrid" function: It's normally unmanaged, but becomes managed
- * when pcim_enable_device() has been called in advance. This hybrid feature is
- * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
 int pci_request_selected_regions(struct pci_dev *pdev, int bars,
                                  const char *name)
···
  * @name: name of the driver requesting the resources
  *
  * Returns: 0 on success, negative error code on failure.
- *
- * NOTE:
- * This is a "hybrid" function: It's normally unmanaged, but becomes managed
- * when pcim_enable_device() has been called in advance. This hybrid feature is
- * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
 int pci_request_selected_regions_exclusive(struct pci_dev *pdev, int bars,
                                            const char *name)
···
  *
  * Returns 0 on success, or %EBUSY on error. A warning
  * message is also printed on failure.
- *
- * NOTE:
- * This is a "hybrid" function: It's normally unmanaged, but becomes managed
- * when pcim_enable_device() has been called in advance. This hybrid feature is
- * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
 int pci_request_regions(struct pci_dev *pdev, const char *name)
 {
···
  *
  * Returns 0 on success, or %EBUSY on error. A warning message is also
  * printed on failure.
- *
- * NOTE:
- * This is a "hybrid" function: It's normally unmanaged, but becomes managed
- * when pcim_enable_device() has been called in advance. This hybrid feature is
- * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
 int pci_request_regions_exclusive(struct pci_dev *pdev, const char *name)
 {
···
 #ifndef pci_remap_iospace
 int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
 {
-#if defined(PCI_IOBASE) && defined(CONFIG_MMU)
+#if defined(PCI_IOBASE)
     unsigned long vaddr = (unsigned long)PCI_IOBASE + res->start;
 
     if (!(res->flags & IORESOURCE_IO))
···
  */
 void pci_unmap_iospace(struct resource *res)
 {
-#if defined(PCI_IOBASE) && defined(CONFIG_MMU)
+#if defined(PCI_IOBASE)
     unsigned long vaddr = (unsigned long)PCI_IOBASE + res->start;
 
     vunmap_range(vaddr, vaddr + resource_size(res));
···
  * @pdev: Device whose link to retrain.
  * @use_lt: Use the LT bit if TRUE, or the DLLLA bit if FALSE, for status.
  *
+ * Trigger retraining of the PCIe Link and wait for the completion of the
+ * retraining. As link retraining is known to assert LBMS and may change
+ * the Link Speed, LBMS is cleared after the retraining and the Link Speed
+ * of the subordinate bus is updated.
+ *
  * Retrain completion status is retrieved from the Link Status Register
  * according to @use_lt. It is not verified whether the use of the DLLLA
  * bit is valid.
···
      * to track link speed or width changes made by hardware itself
      * in attempt to correct unreliable link operation.
      */
-    pcie_reset_lbms_count(pdev);
+    pcie_reset_lbms(pdev);
+
+    /*
+     * Ensure the Link Speed updates after retraining in case the Link
+     * Speed was changed because of the retraining. While the bwctrl's
+     * IRQ handler normally picks up the new Link Speed, clearing LBMS
+     * races with the IRQ handler reading the Link Status register and
+     * can result in the handler returning early without updating the
+     * Link Speed.
+     */
+    if (pdev->subordinate)
+        pcie_update_link_speed(pdev->subordinate);
+
     return rc;
 }
···
              delay);
     if (!pcie_wait_for_link_delay(dev, true, delay)) {
         /* Did not train, no need to wait any further */
-        pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n");
+        pci_info(dev, "Data Link Layer Link Active not set in %d msec\n", delay);
         return -ENOTTY;
     }
···
             continue;
         if (dev->subordinate)
             pci_bus_unlock(dev->subordinate);
-        pci_dev_unlock(dev);
+        else
+            pci_dev_unlock(dev);
     }
 }
···
 {
     return 1;
 }
-
-void __weak pci_fixup_cardbus(struct pci_bus *bus)
-{
-}
-EXPORT_SYMBOL(pci_fixup_cardbus);
 
 static int __init pci_setup(char *str)
 {
drivers/pci/pci.h (+58 -17)
···
 /* Number of possible devfns: 0.0 to 1f.7 inclusive */
 #define MAX_NR_DEVFNS 256
 
+#define MAX_NR_LANES 16
+
 #define PCI_FIND_CAP_TTL 48
 
 #define PCI_VSEC_ID_INTEL_TBT 0x1234 /* Thunderbolt */
···
 void pci_dev_complete_resume(struct pci_dev *pci_dev);
 void pci_config_pm_runtime_get(struct pci_dev *dev);
 void pci_config_pm_runtime_put(struct pci_dev *dev);
+void pci_pm_power_up_and_verify_state(struct pci_dev *pci_dev);
 void pci_pm_init(struct pci_dev *dev);
 void pci_ea_init(struct pci_dev *dev);
 void pci_msi_init(struct pci_dev *dev);
···
 
 /* Functions for PCI Hotplug drivers to use */
 int pci_hp_add_bridge(struct pci_dev *dev);
+bool pci_hp_spurious_link_change(struct pci_dev *pdev);
 
 #if defined(CONFIG_SYSFS) && defined(HAVE_PCI_LEGACY)
 void pci_create_legacy_files(struct pci_bus *bus);
···
 #define PCI_DPC_RECOVERED 1
 #define PCI_DPC_RECOVERING 2
 #define PCI_DEV_REMOVED 3
+#define PCI_LINK_CHANGED 4
+#define PCI_LINK_CHANGING 5
+#define PCI_LINK_LBMS_SEEN 6
+#define PCI_DEV_ALLOW_BINDING 7
 
 static inline void pci_dev_assign_added(struct pci_dev *dev)
 {
···
     return test_and_set_bit(PCI_DEV_REMOVED, &dev->priv_flags);
 }
 
+static inline void pci_dev_allow_binding(struct pci_dev *dev)
+{
+    set_bit(PCI_DEV_ALLOW_BINDING, &dev->priv_flags);
+}
+
+static inline bool pci_dev_binding_disallowed(struct pci_dev *dev)
+{
+    return !test_bit(PCI_DEV_ALLOW_BINDING, &dev->priv_flags);
+}
+
 #ifdef CONFIG_PCIEAER
 #include <linux/aer.h>
 
···
 struct aer_err_info {
     struct pci_dev *dev[AER_MAX_MULTI_ERR_DEVICES];
+    int ratelimit_print[AER_MAX_MULTI_ERR_DEVICES];
     int error_dev_num;
+    const char *level;            /* printk level */
 
     unsigned int id:16;
 
     unsigned int severity:2;      /* 0:NONFATAL | 1:FATAL | 2:COR */
-    unsigned int __pad1:5;
+    unsigned int root_ratelimit_print:1;  /* 0=skip, 1=print */
+    unsigned int __pad1:4;
     unsigned int multi_error_valid:1;
 
     unsigned int first_error:5;
···
     struct pcie_tlp_log tlp;      /* TLP Header */
 };
 
-int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
-void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
+int aer_get_device_error_info(struct aer_err_info *info, int i);
+void aer_print_error(struct aer_err_info *info, int i);
 
 int pcie_read_tlp_log(struct pci_dev *dev, int where, int where2,
                       unsigned int tlp_len, bool flit,
                       struct pcie_tlp_log *log);
 unsigned int aer_tlp_log_len(struct pci_dev *dev, u32 aercc);
 void pcie_print_tlp_log(const struct pci_dev *dev,
-                        const struct pcie_tlp_log *log, const char *pfx);
+                        const struct pcie_tlp_log *log, const char *level,
+                        const char *pfx);
 #endif /* CONFIG_PCIEAER */
 
 #ifdef CONFIG_PCIEPORTBUS
···
 #endif
 
 #ifdef CONFIG_PCIEPORTBUS
-void pcie_reset_lbms_count(struct pci_dev *port);
-int pcie_lbms_count(struct pci_dev *port, unsigned long *val);
+void pcie_reset_lbms(struct pci_dev *port);
 #else
-static inline void pcie_reset_lbms_count(struct pci_dev *port) {}
-static inline int pcie_lbms_count(struct pci_dev *port, unsigned long *val)
-{
-    return -EOPNOTSUPP;
-}
+static inline void pcie_reset_lbms(struct pci_dev *port) {}
 #endif
 
 struct pci_dev_reset_methods {
···
 
 struct device_node;
 
+#define PCI_EQ_RESV 0xff
+
+enum equalization_preset_type {
+    EQ_PRESET_TYPE_8GTS,
+    EQ_PRESET_TYPE_16GTS,
+    EQ_PRESET_TYPE_32GTS,
+    EQ_PRESET_TYPE_64GTS,
+    EQ_PRESET_TYPE_MAX
+};
+
+struct pci_eq_presets {
+    u16 eq_presets_8gts[MAX_NR_LANES];
+    u8 eq_presets_Ngts[EQ_PRESET_TYPE_MAX - 1][MAX_NR_LANES];
+};
+
 #ifdef CONFIG_OF
 int of_get_pci_domain_nr(struct device_node *node);
 int of_pci_get_max_link_speed(struct device_node *node);
···
 
 int devm_of_pci_bridge_init(struct device *dev, struct pci_host_bridge *bridge);
 bool of_pci_supply_present(struct device_node *np);
-
+int of_pci_get_equalization_presets(struct device *dev,
+                                    struct pci_eq_presets *presets,
+                                    int num_lanes);
 #else
 static inline int
 of_get_pci_domain_nr(struct device_node *node)
···
 {
     return false;
 }
+
+static inline int of_pci_get_equalization_presets(struct device *dev,
+                                                  struct pci_eq_presets *presets,
+                                                  int num_lanes)
+{
+    presets->eq_presets_8gts[0] = PCI_EQ_RESV;
+    for (int i = 0; i < EQ_PRESET_TYPE_MAX - 1; i++)
+        presets->eq_presets_Ngts[i][0] = PCI_EQ_RESV;
+
+    return 0;
+}
 #endif /* CONFIG_OF */
 
 struct of_changeset;
···
 void pci_aer_init(struct pci_dev *dev);
 void pci_aer_exit(struct pci_dev *dev);
 extern const struct attribute_group aer_stats_attr_group;
+extern const struct attribute_group aer_attr_group;
 void pci_aer_clear_fatal_status(struct pci_dev *dev);
 int pci_aer_clear_status(struct pci_dev *dev);
 int pci_aer_raw_clear_status(struct pci_dev *dev);
···
     return PCI_UNKNOWN;
 }
 #endif
-
-int pcim_intx(struct pci_dev *dev, int enable);
-int pcim_request_region_exclusive(struct pci_dev *pdev, int bar,
-                                  const char *name);
-void pcim_release_region(struct pci_dev *pdev, int bar);
 
 #ifdef CONFIG_PCI_MSI
 int pci_msix_write_tph_tag(struct pci_dev *pdev, unsigned int index, u16 tag);
+309 -129
drivers/pci/pcie/aer.c
··· 28 28 #include <linux/interrupt.h> 29 29 #include <linux/delay.h> 30 30 #include <linux/kfifo.h> 31 + #include <linux/ratelimit.h> 31 32 #include <linux/slab.h> 32 33 #include <acpi/apei.h> 33 34 #include <acpi/ghes.h> ··· 55 54 DECLARE_KFIFO(aer_fifo, struct aer_err_source, AER_ERROR_SOURCES_MAX); 56 55 }; 57 56 58 - /* AER stats for the device */ 59 - struct aer_stats { 57 + /* AER info for the device */ 58 + struct aer_info { 60 59 61 60 /* 62 61 * Fields for all AER capable devices. They indicate the errors ··· 89 88 u64 rootport_total_cor_errs; 90 89 u64 rootport_total_fatal_errs; 91 90 u64 rootport_total_nonfatal_errs; 91 + 92 + /* Ratelimits for errors */ 93 + struct ratelimit_state correctable_ratelimit; 94 + struct ratelimit_state nonfatal_ratelimit; 92 95 }; 93 96 94 97 #define AER_LOG_TLP_MASKS (PCI_ERR_UNC_POISON_TLP| \ ··· 382 377 if (!dev->aer_cap) 383 378 return; 384 379 385 - dev->aer_stats = kzalloc(sizeof(struct aer_stats), GFP_KERNEL); 380 + dev->aer_info = kzalloc(sizeof(*dev->aer_info), GFP_KERNEL); 381 + 382 + ratelimit_state_init(&dev->aer_info->correctable_ratelimit, 383 + DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST); 384 + ratelimit_state_init(&dev->aer_info->nonfatal_ratelimit, 385 + DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST); 386 386 387 387 /* 388 388 * We save/restore PCI_ERR_UNCOR_MASK, PCI_ERR_UNCOR_SEVER, ··· 408 398 409 399 void pci_aer_exit(struct pci_dev *dev) 410 400 { 411 - kfree(dev->aer_stats); 412 - dev->aer_stats = NULL; 401 + kfree(dev->aer_info); 402 + dev->aer_info = NULL; 413 403 } 414 404 415 405 #define AER_AGENT_RECEIVER 0 ··· 547 537 { \ 548 538 unsigned int i; \ 549 539 struct pci_dev *pdev = to_pci_dev(dev); \ 550 - u64 *stats = pdev->aer_stats->stats_array; \ 540 + u64 *stats = pdev->aer_info->stats_array; \ 551 541 size_t len = 0; \ 552 542 \ 553 - for (i = 0; i < ARRAY_SIZE(pdev->aer_stats->stats_array); i++) {\ 543 + for (i = 0; i < ARRAY_SIZE(pdev->aer_info->stats_array); i++) { \ 554 544 
if (strings_array[i]) \ 555 545 len += sysfs_emit_at(buf, len, "%s %llu\n", \ 556 546 strings_array[i], \ ··· 561 551 i, stats[i]); \ 562 552 } \ 563 553 len += sysfs_emit_at(buf, len, "TOTAL_%s %llu\n", total_string, \ 564 - pdev->aer_stats->total_field); \ 554 + pdev->aer_info->total_field); \ 565 555 return len; \ 566 556 } \ 567 557 static DEVICE_ATTR_RO(name) ··· 582 572 char *buf) \ 583 573 { \ 584 574 struct pci_dev *pdev = to_pci_dev(dev); \ 585 - return sysfs_emit(buf, "%llu\n", pdev->aer_stats->field); \ 575 + return sysfs_emit(buf, "%llu\n", pdev->aer_info->field); \ 586 576 } \ 587 577 static DEVICE_ATTR_RO(name) 588 578 ··· 609 599 struct device *dev = kobj_to_dev(kobj); 610 600 struct pci_dev *pdev = to_pci_dev(dev); 611 601 612 - if (!pdev->aer_stats) 602 + if (!pdev->aer_info) 613 603 return 0; 614 604 615 605 if ((a == &dev_attr_aer_rootport_total_err_cor.attr || ··· 627 617 .is_visible = aer_stats_attrs_are_visible, 628 618 }; 629 619 620 + /* 621 + * Ratelimit interval 622 + * <=0: disabled with ratelimit.interval = 0 623 + * >0: enabled with ratelimit.interval in ms 624 + */ 625 + #define aer_ratelimit_interval_attr(name, ratelimit) \ 626 + static ssize_t \ 627 + name##_show(struct device *dev, struct device_attribute *attr, \ 628 + char *buf) \ 629 + { \ 630 + struct pci_dev *pdev = to_pci_dev(dev); \ 631 + \ 632 + return sysfs_emit(buf, "%d\n", \ 633 + pdev->aer_info->ratelimit.interval); \ 634 + } \ 635 + \ 636 + static ssize_t \ 637 + name##_store(struct device *dev, struct device_attribute *attr, \ 638 + const char *buf, size_t count) \ 639 + { \ 640 + struct pci_dev *pdev = to_pci_dev(dev); \ 641 + int interval; \ 642 + \ 643 + if (!capable(CAP_SYS_ADMIN)) \ 644 + return -EPERM; \ 645 + \ 646 + if (kstrtoint(buf, 0, &interval) < 0) \ 647 + return -EINVAL; \ 648 + \ 649 + if (interval <= 0) \ 650 + interval = 0; \ 651 + else \ 652 + interval = msecs_to_jiffies(interval); \ 653 + \ 654 + pdev->aer_info->ratelimit.interval = interval; \ 655 + 
\ 656 + return count; \ 657 + } \ 658 + static DEVICE_ATTR_RW(name); 659 + 660 + #define aer_ratelimit_burst_attr(name, ratelimit) \ 661 + static ssize_t \ 662 + name##_show(struct device *dev, struct device_attribute *attr, \ 663 + char *buf) \ 664 + { \ 665 + struct pci_dev *pdev = to_pci_dev(dev); \ 666 + \ 667 + return sysfs_emit(buf, "%d\n", \ 668 + pdev->aer_info->ratelimit.burst); \ 669 + } \ 670 + \ 671 + static ssize_t \ 672 + name##_store(struct device *dev, struct device_attribute *attr, \ 673 + const char *buf, size_t count) \ 674 + { \ 675 + struct pci_dev *pdev = to_pci_dev(dev); \ 676 + int burst; \ 677 + \ 678 + if (!capable(CAP_SYS_ADMIN)) \ 679 + return -EPERM; \ 680 + \ 681 + if (kstrtoint(buf, 0, &burst) < 0) \ 682 + return -EINVAL; \ 683 + \ 684 + pdev->aer_info->ratelimit.burst = burst; \ 685 + \ 686 + return count; \ 687 + } \ 688 + static DEVICE_ATTR_RW(name); 689 + 690 + #define aer_ratelimit_attrs(name) \ 691 + aer_ratelimit_interval_attr(name##_ratelimit_interval_ms, \ 692 + name##_ratelimit) \ 693 + aer_ratelimit_burst_attr(name##_ratelimit_burst, \ 694 + name##_ratelimit) 695 + 696 + aer_ratelimit_attrs(correctable) 697 + aer_ratelimit_attrs(nonfatal) 698 + 699 + static struct attribute *aer_attrs[] = { 700 + &dev_attr_correctable_ratelimit_interval_ms.attr, 701 + &dev_attr_correctable_ratelimit_burst.attr, 702 + &dev_attr_nonfatal_ratelimit_interval_ms.attr, 703 + &dev_attr_nonfatal_ratelimit_burst.attr, 704 + NULL 705 + }; 706 + 707 + static umode_t aer_attrs_are_visible(struct kobject *kobj, 708 + struct attribute *a, int n) 709 + { 710 + struct device *dev = kobj_to_dev(kobj); 711 + struct pci_dev *pdev = to_pci_dev(dev); 712 + 713 + if (!pdev->aer_info) 714 + return 0; 715 + 716 + return a->mode; 717 + } 718 + 719 + const struct attribute_group aer_attr_group = { 720 + .name = "aer", 721 + .attrs = aer_attrs, 722 + .is_visible = aer_attrs_are_visible, 723 + }; 724 + 630 725 static void pci_dev_aer_stats_incr(struct pci_dev *pdev, 
631 726 struct aer_err_info *info) 632 727 { 633 728 unsigned long status = info->status & ~info->mask; 634 729 int i, max = -1; 635 730 u64 *counter = NULL; 636 - struct aer_stats *aer_stats = pdev->aer_stats; 731 + struct aer_info *aer_info = pdev->aer_info; 637 732 638 - if (!aer_stats) 733 + if (!aer_info) 639 734 return; 640 735 641 736 switch (info->severity) { 642 737 case AER_CORRECTABLE: 643 - aer_stats->dev_total_cor_errs++; 644 - counter = &aer_stats->dev_cor_errs[0]; 738 + aer_info->dev_total_cor_errs++; 739 + counter = &aer_info->dev_cor_errs[0]; 645 740 max = AER_MAX_TYPEOF_COR_ERRS; 646 741 break; 647 742 case AER_NONFATAL: 648 - aer_stats->dev_total_nonfatal_errs++; 649 - counter = &aer_stats->dev_nonfatal_errs[0]; 743 + aer_info->dev_total_nonfatal_errs++; 744 + counter = &aer_info->dev_nonfatal_errs[0]; 650 745 max = AER_MAX_TYPEOF_UNCOR_ERRS; 651 746 break; 652 747 case AER_FATAL: 653 - aer_stats->dev_total_fatal_errs++; 654 - counter = &aer_stats->dev_fatal_errs[0]; 748 + aer_info->dev_total_fatal_errs++; 749 + counter = &aer_info->dev_fatal_errs[0]; 655 750 max = AER_MAX_TYPEOF_UNCOR_ERRS; 656 751 break; 657 752 } ··· 768 653 static void pci_rootport_aer_stats_incr(struct pci_dev *pdev, 769 654 struct aer_err_source *e_src) 770 655 { 771 - struct aer_stats *aer_stats = pdev->aer_stats; 656 + struct aer_info *aer_info = pdev->aer_info; 772 657 773 - if (!aer_stats) 658 + if (!aer_info) 774 659 return; 775 660 776 661 if (e_src->status & PCI_ERR_ROOT_COR_RCV) 777 - aer_stats->rootport_total_cor_errs++; 662 + aer_info->rootport_total_cor_errs++; 778 663 779 664 if (e_src->status & PCI_ERR_ROOT_UNCOR_RCV) { 780 665 if (e_src->status & PCI_ERR_ROOT_FATAL_RCV) 781 - aer_stats->rootport_total_fatal_errs++; 666 + aer_info->rootport_total_fatal_errs++; 782 667 else 783 - aer_stats->rootport_total_nonfatal_errs++; 668 + aer_info->rootport_total_nonfatal_errs++; 784 669 } 785 670 } 786 671 787 - static void __aer_print_error(struct pci_dev *dev, 788 - 
struct aer_err_info *info) 672 + static int aer_ratelimit(struct pci_dev *dev, unsigned int severity) 673 + { 674 + switch (severity) { 675 + case AER_NONFATAL: 676 + return __ratelimit(&dev->aer_info->nonfatal_ratelimit); 677 + case AER_CORRECTABLE: 678 + return __ratelimit(&dev->aer_info->correctable_ratelimit); 679 + default: 680 + return 1; /* Don't ratelimit fatal errors */ 681 + } 682 + } 683 + 684 + static void __aer_print_error(struct pci_dev *dev, struct aer_err_info *info) 789 685 { 790 686 const char **strings; 791 687 unsigned long status = info->status & ~info->mask; 792 - const char *level, *errmsg; 688 + const char *level = info->level; 689 + const char *errmsg; 793 690 int i; 794 691 795 - if (info->severity == AER_CORRECTABLE) { 692 + if (info->severity == AER_CORRECTABLE) 796 693 strings = aer_correctable_error_string; 797 - level = KERN_WARNING; 798 - } else { 694 + else 799 695 strings = aer_uncorrectable_error_string; 800 - level = KERN_ERR; 801 - } 802 696 803 697 for_each_set_bit(i, &status, 32) { 804 698 errmsg = strings[i]; ··· 817 693 aer_printk(level, dev, " [%2d] %-22s%s\n", i, errmsg, 818 694 info->first_error == i ? " (First)" : ""); 819 695 } 820 - pci_dev_aer_stats_incr(dev, info); 821 696 } 822 697 823 - void aer_print_error(struct pci_dev *dev, struct aer_err_info *info) 698 + static void aer_print_source(struct pci_dev *dev, struct aer_err_info *info, 699 + bool found) 824 700 { 825 - int layer, agent; 826 - int id = pci_dev_id(dev); 827 - const char *level; 701 + u16 source = info->id; 702 + 703 + pci_info(dev, "%s%s error message received from %04x:%02x:%02x.%d%s\n", 704 + info->multi_error_valid ? "Multiple " : "", 705 + aer_error_severity_string[info->severity], 706 + pci_domain_nr(dev->bus), PCI_BUS_NUM(source), 707 + PCI_SLOT(source), PCI_FUNC(source), 708 + found ? 
"" : " (no details found)"); 709 + } 710 + 711 + void aer_print_error(struct aer_err_info *info, int i) 712 + { 713 + struct pci_dev *dev; 714 + int layer, agent, id; 715 + const char *level = info->level; 716 + 717 + if (WARN_ON_ONCE(i >= AER_MAX_MULTI_ERR_DEVICES)) 718 + return; 719 + 720 + dev = info->dev[i]; 721 + id = pci_dev_id(dev); 722 + 723 + pci_dev_aer_stats_incr(dev, info); 724 + trace_aer_event(pci_name(dev), (info->status & ~info->mask), 725 + info->severity, info->tlp_header_valid, &info->tlp); 726 + 727 + if (!info->ratelimit_print[i]) 728 + return; 828 729 829 730 if (!info->status) { 830 731 pci_err(dev, "PCIe Bus Error: severity=%s, type=Inaccessible, (Unregistered Agent ID)\n",
"Multiple " : "", 890 - aer_error_severity_string[info->severity], 891 - pci_domain_nr(dev->bus), bus, PCI_SLOT(devfn), 892 - PCI_FUNC(devfn)); 893 731 } 894 732 895 733 #ifdef CONFIG_ACPI_APEI_PCIEAER ··· 897 765 { 898 766 int layer, agent, tlp_header_valid = 0; 899 767 u32 status, mask; 900 - struct aer_err_info info; 768 + struct aer_err_info info = { 769 + .severity = aer_severity, 770 + .first_error = PCI_ERR_CAP_FEP(aer->cap_control), 771 + }; 901 772 902 773 if (aer_severity == AER_CORRECTABLE) { 903 774 status = aer->cor_status; 904 775 mask = aer->cor_mask; 776 + info.level = KERN_WARNING; 905 777 } else { 906 778 status = aer->uncor_status; 907 779 mask = aer->uncor_mask; 780 + info.level = KERN_ERR; 908 781 tlp_header_valid = status & AER_LOG_TLP_MASKS; 909 782 } 783 + 784 + info.status = status; 785 + info.mask = mask; 786 + 787 + pci_dev_aer_stats_incr(dev, &info); 788 + trace_aer_event(pci_name(dev), (status & ~mask), 789 + aer_severity, tlp_header_valid, &aer->header_log); 790 + 791 + if (!aer_ratelimit(dev, info.severity)) 792 + return; 910 793 911 794 layer = AER_GET_LAYER_ERROR(aer_severity, status); 912 795 agent = AER_GET_AGENT(aer_severity, status); 913 796 914 - memset(&info, 0, sizeof(info)); 915 - info.severity = aer_severity; 916 - info.status = status; 917 - info.mask = mask; 918 - info.first_error = PCI_ERR_CAP_FEP(aer->cap_control); 919 - 920 - pci_err(dev, "aer_status: 0x%08x, aer_mask: 0x%08x\n", status, mask); 797 + aer_printk(info.level, dev, "aer_status: 0x%08x, aer_mask: 0x%08x\n", 798 + status, mask); 921 799 __aer_print_error(dev, &info); 922 - pci_err(dev, "aer_layer=%s, aer_agent=%s\n", 923 - aer_error_layer[layer], aer_agent_string[agent]); 800 + aer_printk(info.level, dev, "aer_layer=%s, aer_agent=%s\n", 801 + aer_error_layer[layer], aer_agent_string[agent]); 924 802 925 803 if (aer_severity != AER_CORRECTABLE) 926 - pci_err(dev, "aer_uncor_severity: 0x%08x\n", 927 - aer->uncor_severity); 804 + aer_printk(info.level, dev, 
"aer_uncor_severity: 0x%08x\n", 805 + aer->uncor_severity); 928 806 929 807 if (tlp_header_valid) 930 - pcie_print_tlp_log(dev, &aer->header_log, dev_fmt(" ")); 931 - 932 - trace_aer_event(dev_name(&dev->dev), (status & ~mask), 933 - aer_severity, tlp_header_valid, &aer->header_log); 808 + pcie_print_tlp_log(dev, &aer->header_log, info.level, 809 + dev_fmt(" ")); 934 810 } 935 811 EXPORT_SYMBOL_NS_GPL(pci_print_aer, "CXL"); 936 812 ··· 949 809 */ 950 810 static int add_error_device(struct aer_err_info *e_info, struct pci_dev *dev) 951 811 { 952 - if (e_info->error_dev_num < AER_MAX_MULTI_ERR_DEVICES) { 953 - e_info->dev[e_info->error_dev_num] = pci_dev_get(dev); 954 - e_info->error_dev_num++; 955 - return 0; 812 + int i = e_info->error_dev_num; 813 + 814 + if (i >= AER_MAX_MULTI_ERR_DEVICES) 815 + return -ENOSPC; 816 + 817 + e_info->dev[i] = pci_dev_get(dev); 818 + e_info->error_dev_num++; 819 + 820 + /* 821 + * Ratelimit AER log messages. "dev" is either the source 822 + * identified by the root's Error Source ID or it has an unmasked 823 + * error logged in its own AER Capability. Messages are emitted 824 + * when "ratelimit_print[i]" is non-zero. If we will print detail 825 + * for a downstream device, make sure we print the Error Source ID 826 + * from the root as well. 827 + */ 828 + if (aer_ratelimit(dev, e_info->severity)) { 829 + e_info->ratelimit_print[i] = 1; 830 + e_info->root_ratelimit_print = 1; 956 831 } 957 - return -ENOSPC; 832 + return 0; 958 833 } 959 834 960 835 /** ··· 1063 908 * e_info->error_dev_num and e_info->dev[], based on the given information. 
1064 909 */ 1065 910 static bool find_source_device(struct pci_dev *parent, 1066 - struct aer_err_info *e_info) 911 + struct aer_err_info *e_info) 1067 912 { 1068 913 struct pci_dev *dev = parent; 1069 914 int result; ··· 1081 926 else 1082 927 pci_walk_bus(parent->subordinate, find_device_iter, e_info); 1083 928 1084 - if (!e_info->error_dev_num) { 1085 - u8 bus = e_info->id >> 8; 1086 - u8 devfn = e_info->id & 0xff; 1087 - 1088 - pci_info(parent, "found no error details for %04x:%02x:%02x.%d\n", 1089 - pci_domain_nr(parent->bus), bus, PCI_SLOT(devfn), 1090 - PCI_FUNC(devfn)); 929 + if (!e_info->error_dev_num) 1091 930 return false; 1092 - } 1093 931 return true; 1094 932 } 1095 933 ··· 1289 1141 pdev = pci_get_domain_bus_and_slot(entry.domain, entry.bus, 1290 1142 entry.devfn); 1291 1143 if (!pdev) { 1292 - pr_err("no pci_dev for %04x:%02x:%02x.%x\n", 1293 - entry.domain, entry.bus, 1294 - PCI_SLOT(entry.devfn), PCI_FUNC(entry.devfn)); 1144 + pr_err_ratelimited("%04x:%02x:%02x.%x: no pci_dev found\n", 1145 + entry.domain, entry.bus, 1146 + PCI_SLOT(entry.devfn), 1147 + PCI_FUNC(entry.devfn)); 1295 1148 continue; 1296 1149 } 1297 1150 pci_print_aer(pdev, entry.severity, entry.regs); ··· 1348 1199 1349 1200 /** 1350 1201 * aer_get_device_error_info - read error status from dev and store it to info 1351 - * @dev: pointer to the device expected to have an error record 1352 1202 * @info: pointer to structure to store the error record 1203 + * @i: index into info->dev[] 1353 1204 * 1354 1205 * Return: 1 on success, 0 on error. 1355 1206 * 1356 1207 * Note that @info is reused among all error devices. Clear fields properly. 
1357 1208 */ 1358 - int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info) 1209 + int aer_get_device_error_info(struct aer_err_info *info, int i) 1359 1210 { 1360 - int type = pci_pcie_type(dev); 1361 - int aer = dev->aer_cap; 1211 + struct pci_dev *dev; 1212 + int type, aer; 1362 1213 u32 aercc; 1214 + 1215 + if (i >= AER_MAX_MULTI_ERR_DEVICES) 1216 + return 0; 1217 + 1218 + dev = info->dev[i]; 1219 + aer = dev->aer_cap; 1220 + type = pci_pcie_type(dev); 1363 1221 1364 1222 /* Must reset in this function */ 1365 1223 info->status = 0; ··· 1419 1263 1420 1264 /* Report all before handling them, to not lose records by reset etc. */ 1421 1265 for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) { 1422 - if (aer_get_device_error_info(e_info->dev[i], e_info)) 1423 - aer_print_error(e_info->dev[i], e_info); 1266 + if (aer_get_device_error_info(e_info, i)) 1267 + aer_print_error(e_info, i); 1424 1268 } 1425 1269 for (i = 0; i < e_info->error_dev_num && e_info->dev[i]; i++) { 1426 - if (aer_get_device_error_info(e_info->dev[i], e_info)) 1270 + if (aer_get_device_error_info(e_info, i)) 1427 1271 handle_error_source(e_info->dev[i], e_info); 1428 1272 } 1429 1273 } 1430 1274 1431 1275 /** 1432 - * aer_isr_one_error - consume an error detected by Root Port 1433 - * @rpc: pointer to the Root Port which holds an error 1276 + * aer_isr_one_error_type - consume a Correctable or Uncorrectable Error 1277 + * detected by Root Port or RCEC 1278 + * @root: pointer to Root Port or RCEC that signaled AER interrupt 1279 + * @info: pointer to AER error info 1280 + */ 1281 + static void aer_isr_one_error_type(struct pci_dev *root, 1282 + struct aer_err_info *info) 1283 + { 1284 + bool found; 1285 + 1286 + found = find_source_device(root, info); 1287 + 1288 + /* 1289 + * If we're going to log error messages, we've already set 1290 + * "info->root_ratelimit_print" and "info->ratelimit_print[i]" to 1291 + * non-zero (which enables printing) because this is 
either an 1292 + * ERR_FATAL or we found a device with an error logged in its AER 1293 + * Capability. 1294 + * 1295 + * If we didn't find the Error Source device, at least log the 1296 + * Requester ID from the ERR_* Message received by the Root Port or 1297 + * RCEC, ratelimited by the RP or RCEC. 1298 + */ 1299 + if (info->root_ratelimit_print || 1300 + (!found && aer_ratelimit(root, info->severity))) 1301 + aer_print_source(root, info, found); 1302 + 1303 + if (found) 1304 + aer_process_err_devices(info); 1305 + } 1306 + 1307 + /** 1308 + * aer_isr_one_error - consume error(s) signaled by an AER interrupt from 1309 + * Root Port or RCEC 1310 + * @root: pointer to Root Port or RCEC that signaled AER interrupt 1434 1311 * @e_src: pointer to an error source 1435 1312 */ 1436 - static void aer_isr_one_error(struct aer_rpc *rpc, 1437 - struct aer_err_source *e_src) 1313 + static void aer_isr_one_error(struct pci_dev *root, 1314 + struct aer_err_source *e_src) 1438 1315 { 1439 - struct pci_dev *pdev = rpc->rpd; 1440 - struct aer_err_info e_info; 1316 + u32 status = e_src->status; 1441 1317 1442 - pci_rootport_aer_stats_incr(pdev, e_src); 1318 + pci_rootport_aer_stats_incr(root, e_src); 1443 1319 1444 1320 /* 1445 1321 * There is a possibility that both correctable error and 1446 1322 * uncorrectable error being logged. Report correctable error first. 1447 1323 */ 1448 - if (e_src->status & PCI_ERR_ROOT_COR_RCV) { 1449 - e_info.id = ERR_COR_ID(e_src->id); 1450 - e_info.severity = AER_CORRECTABLE; 1324 + if (status & PCI_ERR_ROOT_COR_RCV) { 1325 + int multi = status & PCI_ERR_ROOT_MULTI_COR_RCV; 1326 + struct aer_err_info e_info = { 1327 + .id = ERR_COR_ID(e_src->id), 1328 + .severity = AER_CORRECTABLE, 1329 + .level = KERN_WARNING, 1330 + .multi_error_valid = multi ? 
1 : 0, 1331 + }; 1451 1332 1452 - if (e_src->status & PCI_ERR_ROOT_MULTI_COR_RCV) 1453 - e_info.multi_error_valid = 1; 1454 - else 1455 - e_info.multi_error_valid = 0; 1456 - aer_print_port_info(pdev, &e_info); 1457 - 1458 - if (find_source_device(pdev, &e_info)) 1459 - aer_process_err_devices(&e_info); 1333 + aer_isr_one_error_type(root, &e_info); 1460 1334 } 1461 1335 1462 - if (e_src->status & PCI_ERR_ROOT_UNCOR_RCV) { 1463 - e_info.id = ERR_UNCOR_ID(e_src->id); 1336 + if (status & PCI_ERR_ROOT_UNCOR_RCV) { 1337 + int fatal = status & PCI_ERR_ROOT_FATAL_RCV; 1338 + int multi = status & PCI_ERR_ROOT_MULTI_UNCOR_RCV; 1339 + struct aer_err_info e_info = { 1340 + .id = ERR_UNCOR_ID(e_src->id), 1341 + .severity = fatal ? AER_FATAL : AER_NONFATAL, 1342 + .level = KERN_ERR, 1343 + .multi_error_valid = multi ? 1 : 0, 1344 + }; 1464 1345 1465 - if (e_src->status & PCI_ERR_ROOT_FATAL_RCV) 1466 - e_info.severity = AER_FATAL; 1467 - else 1468 - e_info.severity = AER_NONFATAL; 1469 - 1470 - if (e_src->status & PCI_ERR_ROOT_MULTI_UNCOR_RCV) 1471 - e_info.multi_error_valid = 1; 1472 - else 1473 - e_info.multi_error_valid = 0; 1474 - 1475 - aer_print_port_info(pdev, &e_info); 1476 - 1477 - if (find_source_device(pdev, &e_info)) 1478 - aer_process_err_devices(&e_info); 1346 + aer_isr_one_error_type(root, &e_info); 1479 1347 } 1480 1348 } 1481 1349 ··· 1520 1340 return IRQ_NONE; 1521 1341 1522 1342 while (kfifo_get(&rpc->aer_fifo, &e_src)) 1523 - aer_isr_one_error(rpc, &e_src); 1343 + aer_isr_one_error(rpc->rpd, &e_src); 1524 1344 return IRQ_HANDLED; 1525 1345 } 1526 1346
+19 -67
drivers/pci/pcie/bwctrl.c
··· 38 38 /** 39 39 * struct pcie_bwctrl_data - PCIe bandwidth controller 40 40 * @set_speed_mutex: Serializes link speed changes 41 - * @lbms_count: Count for LBMS (since last reset) 42 41 * @cdev: Thermal cooling device associated with the port 43 42 */ 44 43 struct pcie_bwctrl_data { 45 44 struct mutex set_speed_mutex; 46 - atomic_t lbms_count; 47 45 struct thermal_cooling_device *cdev; 48 46 }; 49 47 50 - /* 51 - * Prevent port removal during LBMS count accessors and Link Speed changes. 52 - * 53 - * These have to be differentiated because pcie_bwctrl_change_speed() calls 54 - * pcie_retrain_link() which uses LBMS count reset accessor on success 55 - * (using just one rwsem triggers "possible recursive locking detected" 56 - * warning). 57 - */ 58 - static DECLARE_RWSEM(pcie_bwctrl_lbms_rwsem); 48 + /* Prevent port removal during Link Speed changes. */ 59 49 static DECLARE_RWSEM(pcie_bwctrl_setspeed_rwsem); 60 50 61 51 static bool pcie_valid_speed(enum pci_bus_speed speed) ··· 117 127 if (ret != PCIBIOS_SUCCESSFUL) 118 128 return pcibios_err_to_errno(ret); 119 129 120 - ret = pcie_retrain_link(port, use_lt); 121 - if (ret < 0) 122 - return ret; 123 - 124 - /* 125 - * Ensure link speed updates also with platforms that have problems 126 - * with notifications. 
127 - */ 128 - if (port->subordinate) 129 - pcie_update_link_speed(port->subordinate); 130 - 131 - return 0; 130 + return pcie_retrain_link(port, use_lt); 132 131 } 133 132 134 133 /** ··· 181 202 182 203 static void pcie_bwnotif_enable(struct pcie_device *srv) 183 204 { 184 - struct pcie_bwctrl_data *data = srv->port->link_bwctrl; 185 205 struct pci_dev *port = srv->port; 186 206 u16 link_status; 187 207 int ret; 188 208 189 - /* Count LBMS seen so far as one */ 209 + /* Note if LBMS has been seen so far */ 190 210 ret = pcie_capability_read_word(port, PCI_EXP_LNKSTA, &link_status); 191 211 if (ret == PCIBIOS_SUCCESSFUL && link_status & PCI_EXP_LNKSTA_LBMS) 192 - atomic_inc(&data->lbms_count); 212 + set_bit(PCI_LINK_LBMS_SEEN, &port->priv_flags); 193 213 194 214 pcie_capability_set_word(port, PCI_EXP_LNKCTL, 195 215 PCI_EXP_LNKCTL_LBMIE | PCI_EXP_LNKCTL_LABIE); ··· 211 233 static irqreturn_t pcie_bwnotif_irq(int irq, void *context) 212 234 { 213 235 struct pcie_device *srv = context; 214 - struct pcie_bwctrl_data *data = srv->port->link_bwctrl; 215 236 struct pci_dev *port = srv->port; 216 237 u16 link_status, events; 217 238 int ret; ··· 224 247 return IRQ_NONE; 225 248 226 249 if (events & PCI_EXP_LNKSTA_LBMS) 227 - atomic_inc(&data->lbms_count); 250 + set_bit(PCI_LINK_LBMS_SEEN, &port->priv_flags); 228 251 229 252 pcie_capability_write_word(port, PCI_EXP_LNKSTA, events); 230 253 ··· 239 262 return IRQ_HANDLED; 240 263 } 241 264 242 - void pcie_reset_lbms_count(struct pci_dev *port) 265 + void pcie_reset_lbms(struct pci_dev *port) 243 266 { 244 - struct pcie_bwctrl_data *data; 245 - 246 - guard(rwsem_read)(&pcie_bwctrl_lbms_rwsem); 247 - data = port->link_bwctrl; 248 - if (data) 249 - atomic_set(&data->lbms_count, 0); 250 - else 251 - pcie_capability_write_word(port, PCI_EXP_LNKSTA, 252 - PCI_EXP_LNKSTA_LBMS); 253 - } 254 - 255 - int pcie_lbms_count(struct pci_dev *port, unsigned long *val) 256 - { 257 - struct pcie_bwctrl_data *data; 258 - 259 - 
guard(rwsem_read)(&pcie_bwctrl_lbms_rwsem); 260 - data = port->link_bwctrl; 261 - if (!data) 262 - return -ENOTTY; 263 - 264 - *val = atomic_read(&data->lbms_count); 265 - 266 - return 0; 267 + clear_bit(PCI_LINK_LBMS_SEEN, &port->priv_flags); 268 + pcie_capability_write_word(port, PCI_EXP_LNKSTA, PCI_EXP_LNKSTA_LBMS); 267 269 } 268 270 269 271 static int pcie_bwnotif_probe(struct pcie_device *srv) ··· 264 308 return ret; 265 309 266 310 scoped_guard(rwsem_write, &pcie_bwctrl_setspeed_rwsem) { 267 - scoped_guard(rwsem_write, &pcie_bwctrl_lbms_rwsem) { 268 - port->link_bwctrl = data; 311 + port->link_bwctrl = data; 269 312 270 - ret = request_irq(srv->irq, pcie_bwnotif_irq, 271 - IRQF_SHARED, "PCIe bwctrl", srv); 272 - if (ret) { 273 - port->link_bwctrl = NULL; 274 - return ret; 275 - } 276 - 277 - pcie_bwnotif_enable(srv); 313 + ret = request_irq(srv->irq, pcie_bwnotif_irq, 314 + IRQF_SHARED, "PCIe bwctrl", srv); 315 + if (ret) { 316 + port->link_bwctrl = NULL; 317 + return ret; 278 318 } 319 + 320 + pcie_bwnotif_enable(srv); 279 321 } 280 322 281 323 pci_dbg(port, "enabled with IRQ %d\n", srv->irq); ··· 293 339 pcie_cooling_device_unregister(data->cdev); 294 340 295 341 scoped_guard(rwsem_write, &pcie_bwctrl_setspeed_rwsem) { 296 - scoped_guard(rwsem_write, &pcie_bwctrl_lbms_rwsem) { 297 - pcie_bwnotif_disable(srv->port); 342 + pcie_bwnotif_disable(srv->port); 298 343 299 - free_irq(srv->irq, srv); 344 + free_irq(srv->irq, srv); 300 345 301 - srv->port->link_bwctrl = NULL; 302 - } 346 + srv->port->link_bwctrl = NULL; 303 347 } 304 348 } 305 349
+43 -30
drivers/pci/pcie/dpc.c
··· 222 222 dpc_tlp_log_len(pdev), 223 223 pdev->subordinate->flit_mode, 224 224 &tlp_log); 225 - pcie_print_tlp_log(pdev, &tlp_log, dev_fmt("")); 225 + pcie_print_tlp_log(pdev, &tlp_log, KERN_ERR, dev_fmt("")); 226 226 227 227 if (pdev->dpc_rp_log_size < PCIE_STD_NUM_TLP_HEADERLOG + 1) 228 228 goto clear_status; ··· 252 252 else 253 253 info->severity = AER_NONFATAL; 254 254 255 + info->level = KERN_ERR; 256 + 257 + info->dev[0] = dev; 258 + info->error_dev_num = 1; 259 + 255 260 return 1; 256 261 } 257 262 258 263 void dpc_process_error(struct pci_dev *pdev) 259 264 { 260 265 u16 cap = pdev->dpc_cap, status, source, reason, ext_reason; 261 - struct aer_err_info info; 266 + struct aer_err_info info = {}; 262 267 263 268 pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status); 264 - pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source); 265 - 266 - pci_info(pdev, "containment event, status:%#06x source:%#06x\n", 267 - status, source); 268 269 269 270 reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN; 270 - ext_reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT; 271 - pci_warn(pdev, "%s detected\n", 272 - (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR) ? 273 - "unmasked uncorrectable error" : 274 - (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE) ? 275 - "ERR_NONFATAL" : 276 - (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE) ? 277 - "ERR_FATAL" : 278 - (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO) ? 279 - "RP PIO error" : 280 - (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_SW_TRIGGER) ? 
281 - "software trigger" : 282 - "reserved error"); 283 271 284 - /* show RP PIO error detail information */ 285 - if (pdev->dpc_rp_extensions && 286 - reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT && 287 - ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO) 288 - dpc_process_rp_pio_error(pdev); 289 - else if (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR && 290 - dpc_get_aer_uncorrect_severity(pdev, &info) && 291 - aer_get_device_error_info(pdev, &info)) { 292 - aer_print_error(pdev, &info); 293 - pci_aer_clear_nonfatal_status(pdev); 294 - pci_aer_clear_fatal_status(pdev); 272 + switch (reason) { 273 + case PCI_EXP_DPC_STATUS_TRIGGER_RSN_UNCOR: 274 + pci_warn(pdev, "containment event, status:%#06x: unmasked uncorrectable error detected\n", 275 + status); 276 + if (dpc_get_aer_uncorrect_severity(pdev, &info) && 277 + aer_get_device_error_info(&info, 0)) { 278 + aer_print_error(&info, 0); 279 + pci_aer_clear_nonfatal_status(pdev); 280 + pci_aer_clear_fatal_status(pdev); 281 + } 282 + break; 283 + case PCI_EXP_DPC_STATUS_TRIGGER_RSN_NFE: 284 + case PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE: 285 + pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, 286 + &source); 287 + pci_warn(pdev, "containment event, status:%#06x, %s received from %04x:%02x:%02x.%d\n", 288 + status, 289 + (reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_FE) ? 290 + "ERR_FATAL" : "ERR_NONFATAL", 291 + pci_domain_nr(pdev->bus), PCI_BUS_NUM(source), 292 + PCI_SLOT(source), PCI_FUNC(source)); 293 + break; 294 + case PCI_EXP_DPC_STATUS_TRIGGER_RSN_IN_EXT: 295 + ext_reason = status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT; 296 + pci_warn(pdev, "containment event, status:%#06x: %s detected\n", 297 + status, 298 + (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO) ? 299 + "RP PIO error" : 300 + (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_SW_TRIGGER) ? 
301 + "software trigger" : 302 + "reserved error"); 303 + /* show RP PIO error detail information */ 304 + if (ext_reason == PCI_EXP_DPC_STATUS_TRIGGER_RSN_RP_PIO && 305 + pdev->dpc_rp_extensions) 306 + dpc_process_rp_pio_error(pdev); 307 + break; 295 308 } 296 309 } 297 310
-1
drivers/pci/pcie/err.c
··· 271 271 272 272 pci_uevent_ers(bridge, PCI_ERS_RESULT_DISCONNECT); 273 273 274 - /* TODO: Should kernel panic here? */ 275 274 pci_info(bridge, "device recovery failed\n"); 276 275 277 276 return status;
+300
drivers/pci/pcie/ptm.c
··· 5 5 */ 6 6 7 7 #include <linux/bitfield.h> 8 + #include <linux/debugfs.h> 8 9 #include <linux/module.h> 9 10 #include <linux/init.h> 10 11 #include <linux/pci.h> ··· 253 252 return dev->ptm_enabled; 254 253 } 255 254 EXPORT_SYMBOL(pcie_ptm_enabled); 255 + 256 + static ssize_t context_update_write(struct file *file, const char __user *ubuf, 257 + size_t count, loff_t *ppos) 258 + { 259 + struct pci_ptm_debugfs *ptm_debugfs = file->private_data; 260 + char buf[7]; 261 + int ret; 262 + u8 mode; 263 + 264 + if (!ptm_debugfs->ops->context_update_write) 265 + return -EOPNOTSUPP; 266 + 267 + if (count < 1 || count >= sizeof(buf)) 268 + return -EINVAL; 269 + 270 + ret = copy_from_user(buf, ubuf, count); 271 + if (ret) 272 + return -EFAULT; 273 + 274 + buf[count] = '\0'; 275 + 276 + if (sysfs_streq(buf, "auto")) 277 + mode = PCIE_PTM_CONTEXT_UPDATE_AUTO; 278 + else if (sysfs_streq(buf, "manual")) 279 + mode = PCIE_PTM_CONTEXT_UPDATE_MANUAL; 280 + else 281 + return -EINVAL; 282 + 283 + mutex_lock(&ptm_debugfs->lock); 284 + ret = ptm_debugfs->ops->context_update_write(ptm_debugfs->pdata, mode); 285 + mutex_unlock(&ptm_debugfs->lock); 286 + if (ret) 287 + return ret; 288 + 289 + return count; 290 + } 291 + 292 + static ssize_t context_update_read(struct file *file, char __user *ubuf, 293 + size_t count, loff_t *ppos) 294 + { 295 + struct pci_ptm_debugfs *ptm_debugfs = file->private_data; 296 + char buf[8]; /* Extra space for NULL termination at the end */ 297 + ssize_t pos; 298 + u8 mode; 299 + 300 + if (!ptm_debugfs->ops->context_update_read) 301 + return -EOPNOTSUPP; 302 + 303 + mutex_lock(&ptm_debugfs->lock); 304 + ptm_debugfs->ops->context_update_read(ptm_debugfs->pdata, &mode); 305 + mutex_unlock(&ptm_debugfs->lock); 306 + 307 + if (mode == PCIE_PTM_CONTEXT_UPDATE_AUTO) 308 + pos = scnprintf(buf, sizeof(buf), "auto\n"); 309 + else 310 + pos = scnprintf(buf, sizeof(buf), "manual\n"); 311 + 312 + return simple_read_from_buffer(ubuf, count, ppos, buf, pos); 313 + } 314 + 
315 + static const struct file_operations context_update_fops = { 316 + .open = simple_open, 317 + .read = context_update_read, 318 + .write = context_update_write, 319 + }; 320 + 321 + static int context_valid_get(void *data, u64 *val) 322 + { 323 + struct pci_ptm_debugfs *ptm_debugfs = data; 324 + bool valid; 325 + int ret; 326 + 327 + if (!ptm_debugfs->ops->context_valid_read) 328 + return -EOPNOTSUPP; 329 + 330 + mutex_lock(&ptm_debugfs->lock); 331 + ret = ptm_debugfs->ops->context_valid_read(ptm_debugfs->pdata, &valid); 332 + mutex_unlock(&ptm_debugfs->lock); 333 + if (ret) 334 + return ret; 335 + 336 + *val = valid; 337 + 338 + return 0; 339 + } 340 + 341 + static int context_valid_set(void *data, u64 val) 342 + { 343 + struct pci_ptm_debugfs *ptm_debugfs = data; 344 + int ret; 345 + 346 + if (!ptm_debugfs->ops->context_valid_write) 347 + return -EOPNOTSUPP; 348 + 349 + mutex_lock(&ptm_debugfs->lock); 350 + ret = ptm_debugfs->ops->context_valid_write(ptm_debugfs->pdata, !!val); 351 + mutex_unlock(&ptm_debugfs->lock); 352 + 353 + return ret; 354 + } 355 + 356 + DEFINE_DEBUGFS_ATTRIBUTE(context_valid_fops, context_valid_get, 357 + context_valid_set, "%llu\n"); 358 + 359 + static int local_clock_get(void *data, u64 *val) 360 + { 361 + struct pci_ptm_debugfs *ptm_debugfs = data; 362 + u64 clock; 363 + int ret; 364 + 365 + if (!ptm_debugfs->ops->local_clock_read) 366 + return -EOPNOTSUPP; 367 + 368 + ret = ptm_debugfs->ops->local_clock_read(ptm_debugfs->pdata, &clock); 369 + if (ret) 370 + return ret; 371 + 372 + *val = clock; 373 + 374 + return 0; 375 + } 376 + 377 + DEFINE_DEBUGFS_ATTRIBUTE(local_clock_fops, local_clock_get, NULL, "%llu\n"); 378 + 379 + static int master_clock_get(void *data, u64 *val) 380 + { 381 + struct pci_ptm_debugfs *ptm_debugfs = data; 382 + u64 clock; 383 + int ret; 384 + 385 + if (!ptm_debugfs->ops->master_clock_read) 386 + return -EOPNOTSUPP; 387 + 388 + ret = ptm_debugfs->ops->master_clock_read(ptm_debugfs->pdata, &clock); 389 + if 
(ret) 390 + return ret; 391 + 392 + *val = clock; 393 + 394 + return 0; 395 + } 396 + 397 + DEFINE_DEBUGFS_ATTRIBUTE(master_clock_fops, master_clock_get, NULL, "%llu\n"); 398 + 399 + static int t1_get(void *data, u64 *val) 400 + { 401 + struct pci_ptm_debugfs *ptm_debugfs = data; 402 + u64 clock; 403 + int ret; 404 + 405 + if (!ptm_debugfs->ops->t1_read) 406 + return -EOPNOTSUPP; 407 + 408 + ret = ptm_debugfs->ops->t1_read(ptm_debugfs->pdata, &clock); 409 + if (ret) 410 + return ret; 411 + 412 + *val = clock; 413 + 414 + return 0; 415 + } 416 + 417 + DEFINE_DEBUGFS_ATTRIBUTE(t1_fops, t1_get, NULL, "%llu\n"); 418 + 419 + static int t2_get(void *data, u64 *val) 420 + { 421 + struct pci_ptm_debugfs *ptm_debugfs = data; 422 + u64 clock; 423 + int ret; 424 + 425 + if (!ptm_debugfs->ops->t2_read) 426 + return -EOPNOTSUPP; 427 + 428 + ret = ptm_debugfs->ops->t2_read(ptm_debugfs->pdata, &clock); 429 + if (ret) 430 + return ret; 431 + 432 + *val = clock; 433 + 434 + return 0; 435 + } 436 + 437 + DEFINE_DEBUGFS_ATTRIBUTE(t2_fops, t2_get, NULL, "%llu\n"); 438 + 439 + static int t3_get(void *data, u64 *val) 440 + { 441 + struct pci_ptm_debugfs *ptm_debugfs = data; 442 + u64 clock; 443 + int ret; 444 + 445 + if (!ptm_debugfs->ops->t3_read) 446 + return -EOPNOTSUPP; 447 + 448 + ret = ptm_debugfs->ops->t3_read(ptm_debugfs->pdata, &clock); 449 + if (ret) 450 + return ret; 451 + 452 + *val = clock; 453 + 454 + return 0; 455 + } 456 + 457 + DEFINE_DEBUGFS_ATTRIBUTE(t3_fops, t3_get, NULL, "%llu\n"); 458 + 459 + static int t4_get(void *data, u64 *val) 460 + { 461 + struct pci_ptm_debugfs *ptm_debugfs = data; 462 + u64 clock; 463 + int ret; 464 + 465 + if (!ptm_debugfs->ops->t4_read) 466 + return -EOPNOTSUPP; 467 + 468 + ret = ptm_debugfs->ops->t4_read(ptm_debugfs->pdata, &clock); 469 + if (ret) 470 + return ret; 471 + 472 + *val = clock; 473 + 474 + return 0; 475 + } 476 + 477 + DEFINE_DEBUGFS_ATTRIBUTE(t4_fops, t4_get, NULL, "%llu\n"); 478 + 479 + #define 
pcie_ptm_create_debugfs_file(pdata, mode, attr) \ 480 + do { \ 481 + if (ops->attr##_visible && ops->attr##_visible(pdata)) \ 482 + debugfs_create_file(#attr, mode, ptm_debugfs->debugfs, \ 483 + ptm_debugfs, &attr##_fops); \ 484 + } while (0) 485 + 486 + /* 487 + * pcie_ptm_create_debugfs() - Create debugfs entries for the PTM context 488 + * @dev: PTM capable component device 489 + * @pdata: Private data of the PTM capable component device 490 + * @ops: PTM callback structure 491 + * 492 + * Create debugfs entries for exposing the PTM context of the PTM capable 493 + * components such as Root Complex and Endpoint controllers. 494 + * 495 + * Return: Pointer to 'struct pci_ptm_debugfs' on success, NULL otherwise. 496 + */ 497 + struct pci_ptm_debugfs *pcie_ptm_create_debugfs(struct device *dev, void *pdata, 498 + const struct pcie_ptm_ops *ops) 499 + { 500 + struct pci_ptm_debugfs *ptm_debugfs; 501 + char *dirname; 502 + int ret; 503 + 504 + /* Caller must provide check_capability() callback */ 505 + if (!ops->check_capability) 506 + return NULL; 507 + 508 + /* Check for PTM capability before creating debugfs attributes */ 509 + ret = ops->check_capability(pdata); 510 + if (!ret) { 511 + dev_dbg(dev, "PTM capability not present\n"); 512 + return NULL; 513 + } 514 + 515 + ptm_debugfs = kzalloc(sizeof(*ptm_debugfs), GFP_KERNEL); 516 + if (!ptm_debugfs) 517 + return NULL; 518 + 519 + dirname = devm_kasprintf(dev, GFP_KERNEL, "pcie_ptm_%s", dev_name(dev)); 520 + if (!dirname) 521 + return NULL; 522 + 523 + ptm_debugfs->debugfs = debugfs_create_dir(dirname, NULL); 524 + ptm_debugfs->pdata = pdata; 525 + ptm_debugfs->ops = ops; 526 + mutex_init(&ptm_debugfs->lock); 527 + 528 + pcie_ptm_create_debugfs_file(pdata, 0644, context_update); 529 + pcie_ptm_create_debugfs_file(pdata, 0644, context_valid); 530 + pcie_ptm_create_debugfs_file(pdata, 0444, local_clock); 531 + pcie_ptm_create_debugfs_file(pdata, 0444, master_clock); 532 + pcie_ptm_create_debugfs_file(pdata, 0444, t1); 
533 + pcie_ptm_create_debugfs_file(pdata, 0444, t2); 534 + pcie_ptm_create_debugfs_file(pdata, 0444, t3); 535 + pcie_ptm_create_debugfs_file(pdata, 0444, t4); 536 + 537 + return ptm_debugfs; 538 + } 539 + EXPORT_SYMBOL_GPL(pcie_ptm_create_debugfs); 540 + 541 + /* 542 + * pcie_ptm_destroy_debugfs() - Destroy debugfs entries for the PTM context 543 + * @ptm_debugfs: Pointer to the PTM debugfs struct 544 + */ 545 + void pcie_ptm_destroy_debugfs(struct pci_ptm_debugfs *ptm_debugfs) 546 + { 547 + if (!ptm_debugfs) 548 + return; 549 + 550 + mutex_destroy(&ptm_debugfs->lock); 551 + debugfs_remove_recursive(ptm_debugfs->debugfs); 552 + } 553 + EXPORT_SYMBOL_GPL(pcie_ptm_destroy_debugfs);
+4 -2
drivers/pci/pcie/tlp.c
··· 98 98 * pcie_print_tlp_log - Print TLP Header / Prefix Log contents 99 99 * @dev: PCIe device 100 100 * @log: TLP Log structure 101 + * @level: Printk log level 101 102 * @pfx: String prefix 102 103 * 103 104 * Prints TLP Header and Prefix Log information held by @log. 104 105 */ 105 106 void pcie_print_tlp_log(const struct pci_dev *dev, 106 - const struct pcie_tlp_log *log, const char *pfx) 107 + const struct pcie_tlp_log *log, const char *level, 108 + const char *pfx) 107 109 { 108 110 /* EE_PREFIX_STR fits the extended DW space needed for the Flit mode */ 109 111 char buf[11 * PCIE_STD_MAX_TLP_HEADERLOG + 1]; ··· 132 130 } 133 131 } 134 132 135 - pci_err(dev, "%sTLP Header%s: %s\n", pfx, 133 + dev_printk(level, &dev->dev, "%sTLP Header%s: %s\n", pfx, 136 134 log->flit ? " (Flit)" : "", buf); 137 135 }
+1 -2
drivers/pci/probe.c
··· 2058 2058 if (class == PCI_CLASS_BRIDGE_PCI) 2059 2059 goto bad; 2060 2060 pci_read_irq(dev); 2061 - pci_read_bases(dev, 6, PCI_ROM_ADDRESS); 2061 + pci_read_bases(dev, PCI_STD_NUM_BARS, PCI_ROM_ADDRESS); 2062 2062 2063 2063 pci_subsystem_ids(dev, &dev->subsystem_vendor, &dev->subsystem_device); 2064 2064 ··· 2711 2711 pci_set_msi_domain(dev); 2712 2712 2713 2713 /* Notifier could use PCI capabilities */ 2714 - dev->match_driver = false; 2715 2714 ret = device_add(&dev->dev); 2716 2715 WARN_ON(ret < 0); 2717 2716
+16 -6
drivers/pci/pwrctrl/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 3 - config HAVE_PWRCTL 3 + config HAVE_PWRCTRL 4 4 bool 5 5 6 - config PCI_PWRCTL 6 + config PCI_PWRCTRL 7 7 tristate 8 8 9 - config PCI_PWRCTL_PWRSEQ 9 + config PCI_PWRCTRL_PWRSEQ 10 10 tristate 11 11 select POWER_SEQUENCING 12 - select PCI_PWRCTL 12 + select PCI_PWRCTRL 13 13 14 - config PCI_PWRCTL_SLOT 14 + config PCI_PWRCTRL_SLOT 15 15 tristate "PCI Power Control driver for PCI slots" 16 - select PCI_PWRCTL 16 + select PCI_PWRCTRL 17 17 help 18 18 Say Y here to enable the PCI Power Control driver to control the power 19 19 state of PCI slots. ··· 21 21 This is a generic driver that controls the power state of different 22 22 PCI slots. The voltage regulators powering the rails of the PCI slots 23 23 are expected to be defined in the devicetree node of the PCI bridge. 24 + 25 + # deprecated 26 + config HAVE_PWRCTL 27 + bool 28 + select HAVE_PWRCTRL 29 + 30 + # deprecated 31 + config PCI_PWRCTL_PWRSEQ 32 + tristate 33 + select PCI_PWRCTRL_PWRSEQ
+4 -4
drivers/pci/pwrctrl/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 3 - obj-$(CONFIG_PCI_PWRCTL) += pci-pwrctrl-core.o 3 + obj-$(CONFIG_PCI_PWRCTRL) += pci-pwrctrl-core.o 4 4 pci-pwrctrl-core-y := core.o 5 5 6 - obj-$(CONFIG_PCI_PWRCTL_PWRSEQ) += pci-pwrctrl-pwrseq.o 6 + obj-$(CONFIG_PCI_PWRCTRL_PWRSEQ) += pci-pwrctrl-pwrseq.o 7 7 8 - obj-$(CONFIG_PCI_PWRCTL_SLOT) += pci-pwrctl-slot.o 9 - pci-pwrctl-slot-y := slot.o 8 + obj-$(CONFIG_PCI_PWRCTRL_SLOT) += pci-pwrctrl-slot.o 9 + pci-pwrctrl-slot-y := slot.o
+2
drivers/pci/pwrctrl/core.c
··· 101 101 */ 102 102 void pci_pwrctrl_device_unset_ready(struct pci_pwrctrl *pwrctrl) 103 103 { 104 + cancel_work_sync(&pwrctrl->work); 105 + 104 106 /* 105 107 * We don't have to delete the link here. Typically, this function 106 108 * is only called when the power control device is being detached. If
+26 -7
drivers/pci/quirks.c
··· 38 38 39 39 static bool pcie_lbms_seen(struct pci_dev *dev, u16 lnksta) 40 40 { 41 - unsigned long count; 42 - int ret; 41 + if (test_bit(PCI_LINK_LBMS_SEEN, &dev->priv_flags)) 42 + return true; 43 43 44 - ret = pcie_lbms_count(dev, &count); 45 - if (ret < 0) 46 - return lnksta & PCI_EXP_LNKSTA_LBMS; 47 - 48 - return count > 0; 44 + return lnksta & PCI_EXP_LNKSTA_LBMS; 49 45 } 50 46 51 47 /* ··· 4991 4995 PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); 4992 4996 } 4993 4997 4998 + static int pci_quirk_loongson_acs(struct pci_dev *dev, u16 acs_flags) 4999 + { 5000 + /* 5001 + * Loongson PCIe Root Ports don't advertise an ACS capability, but 5002 + * they do not allow peer-to-peer transactions between Root Ports. 5003 + * Allow each Root Port to be in a separate IOMMU group by masking 5004 + * SV/RR/CR/UF bits. 5005 + */ 5006 + return pci_acs_ctrl_enabled(acs_flags, 5007 + PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); 5008 + } 5009 + 4994 5010 /* 4995 5011 * Wangxun 40G/25G/10G/1G NICs have no ACS capability, but on 4996 5012 * multi-function devices, the hardware isolates the functions by ··· 5136 5128 { PCI_VENDOR_ID_BROADCOM, 0x1762, pci_quirk_mf_endpoint_acs }, 5137 5129 { PCI_VENDOR_ID_BROADCOM, 0x1763, pci_quirk_mf_endpoint_acs }, 5138 5130 { PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs }, 5131 + /* Loongson PCIe Root Ports */ 5132 + { PCI_VENDOR_ID_LOONGSON, 0x3C09, pci_quirk_loongson_acs }, 5133 + { PCI_VENDOR_ID_LOONGSON, 0x3C19, pci_quirk_loongson_acs }, 5134 + { PCI_VENDOR_ID_LOONGSON, 0x3C29, pci_quirk_loongson_acs }, 5135 + { PCI_VENDOR_ID_LOONGSON, 0x7A09, pci_quirk_loongson_acs }, 5136 + { PCI_VENDOR_ID_LOONGSON, 0x7A19, pci_quirk_loongson_acs }, 5137 + { PCI_VENDOR_ID_LOONGSON, 0x7A29, pci_quirk_loongson_acs }, 5138 + { PCI_VENDOR_ID_LOONGSON, 0x7A39, pci_quirk_loongson_acs }, 5139 + { PCI_VENDOR_ID_LOONGSON, 0x7A49, pci_quirk_loongson_acs }, 5140 + { PCI_VENDOR_ID_LOONGSON, 0x7A59, pci_quirk_loongson_acs }, 5141 + { 
PCI_VENDOR_ID_LOONGSON, 0x7A69, pci_quirk_loongson_acs }, 5139 5142 /* Amazon Annapurna Labs */ 5140 5143 { PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, pci_quirk_al_acs }, 5141 5144 /* Zhaoxin multi-function devices */
+9 -7
drivers/pci/setup-bus.c
··· 776 776 { 777 777 struct pci_dev *bridge = bus->self; 778 778 779 - pci_info(bridge, "PCI bridge to %pR\n", 780 - &bus->busn_res); 779 + pci_info(bridge, "PCI bridge to %pR\n", &bus->busn_res); 781 780 782 781 if (type & IORESOURCE_IO) 783 782 pci_setup_bridge_io(bridge); ··· 2301 2302 2302 2303 /* Depth last, allocate resources and update the hardware. */ 2303 2304 __pci_bus_assign_resources(bus, add_list, &fail_head); 2304 - if (add_list) 2305 - BUG_ON(!list_empty(add_list)); 2305 + if (WARN_ON_ONCE(add_list && !list_empty(add_list))) 2306 + free_list(add_list); 2306 2307 tried_times++; 2307 2308 2308 2309 /* Any device complain? */ ··· 2364 2365 pci_bridge_distribute_available_resources(bridge, &add_list); 2365 2366 2366 2367 __pci_bridge_assign_resources(bridge, &add_list, &fail_head); 2367 - BUG_ON(!list_empty(&add_list)); 2368 + if (WARN_ON_ONCE(!list_empty(&add_list))) 2369 + free_list(&add_list); 2368 2370 tried_times++; 2369 2371 2370 2372 if (list_empty(&fail_head)) ··· 2441 2441 2442 2442 __pci_bus_size_bridges(bridge->subordinate, &added); 2443 2443 __pci_bridge_assign_resources(bridge, &added, &failed); 2444 - BUG_ON(!list_empty(&added)); 2444 + if (WARN_ON_ONCE(!list_empty(&added))) 2445 + free_list(&added); 2445 2446 2446 2447 if (!list_empty(&failed)) { 2447 2448 ret = -ENOSPC; ··· 2498 2497 __pci_bus_size_bridges(dev->subordinate, &add_list); 2499 2498 up_read(&pci_bus_sem); 2500 2499 __pci_bus_assign_resources(bus, &add_list, NULL); 2501 - BUG_ON(!list_empty(&add_list)); 2500 + if (WARN_ON_ONCE(!list_empty(&add_list))) 2501 + free_list(&add_list); 2502 2502 } 2503 2503 EXPORT_SYMBOL_GPL(pci_assign_unassigned_bus_resources);
-1
drivers/pcmcia/cardbus.c
··· 72 72 pci_lock_rescan_remove(); 73 73 74 74 s->functions = pci_scan_slot(bus, PCI_DEVFN(0, 0)); 75 - pci_fixup_cardbus(bus); 76 75 77 76 max = bus->busn_res.start; 78 77 for (pass = 0; pass < 2; pass++)
-1
drivers/scsi/bnx2fc/Kconfig
··· 5 5 depends on (IPV6 || IPV6=n) 6 6 depends on LIBFC 7 7 depends on LIBFCOE 8 - depends on MMU 9 8 select NETDEVICES 10 9 select ETHERNET 11 10 select NET_VENDOR_BROADCOM
-1
drivers/scsi/bnx2i/Kconfig
··· 4 4 depends on NET 5 5 depends on PCI 6 6 depends on (IPV6 || IPV6=n) 7 - depends on MMU 8 7 select SCSI_ISCSI_ATTRS 9 8 select NETDEVICES 10 9 select ETHERNET
+1 -1
drivers/vfio/pci/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 menu "VFIO support for PCI devices" 3 - depends on PCI && MMU 3 + depends on PCI 4 4 5 5 config VFIO_PCI_CORE 6 6 tristate
-6
include/linux/pci-ecam.h
··· 93 93 extern const struct pci_ecam_ops tegra194_pcie_ops; /* Tegra194 PCIe */ 94 94 extern const struct pci_ecam_ops loongson_pci_ecam_ops; /* Loongson PCIe */ 95 95 #endif 96 - 97 - #if IS_ENABLED(CONFIG_PCI_HOST_COMMON) 98 - /* for DT-based PCI controllers that support ECAM */ 99 - int pci_host_common_probe(struct platform_device *pdev); 100 - void pci_host_common_remove(struct platform_device *pdev); 101 - #endif 102 96 #endif
+5 -6
include/linux/pci-epc.h
··· 100 100 void (*unmap_addr)(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 101 101 phys_addr_t addr); 102 102 int (*set_msi)(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 103 - u8 interrupts); 103 + u8 nr_irqs); 104 104 int (*get_msi)(struct pci_epc *epc, u8 func_no, u8 vfunc_no); 105 105 int (*set_msix)(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 106 - u16 interrupts, enum pci_barno, u32 offset); 106 + u16 nr_irqs, enum pci_barno, u32 offset); 107 107 int (*get_msix)(struct pci_epc *epc, u8 func_no, u8 vfunc_no); 108 108 int (*raise_irq)(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 109 109 unsigned int type, u16 interrupt_num); ··· 286 286 u64 pci_addr, size_t size); 287 287 void pci_epc_unmap_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 288 288 phys_addr_t phys_addr); 289 - int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 290 - u8 interrupts); 289 + int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no, u8 nr_irqs); 291 290 int pci_epc_get_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no); 292 - int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 293 - u16 interrupts, enum pci_barno, u32 offset); 291 + int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no, u16 nr_irqs, 292 + enum pci_barno, u32 offset); 294 293 int pci_epc_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no); 295 294 int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 296 295 phys_addr_t phys_addr, u8 interrupt_num,
+3
include/linux/pci-epf.h
··· 114 114 * @phys_addr: physical address that should be mapped to the BAR 115 115 * @addr: virtual address corresponding to the @phys_addr 116 116 * @size: the size of the address space present in BAR 117 + * @aligned_size: the size actually allocated to accommodate the iATU alignment 118 + * requirement 117 119 * @barno: BAR number 118 120 * @flags: flags that are set for the BAR 119 121 */ ··· 123 121 dma_addr_t phys_addr; 124 122 void *addr; 125 123 size_t size; 124 + size_t aligned_size; 126 125 enum pci_barno barno; 127 126 int flags; 128 127 };
+54 -10
include/linux/pci.h
··· 348 348 u8 hdr_type; /* PCI header type (`multi' flag masked out) */ 349 349 #ifdef CONFIG_PCIEAER 350 350 u16 aer_cap; /* AER capability offset */ 351 - struct aer_stats *aer_stats; /* AER stats for this device */ 351 + struct aer_info *aer_info; /* AER info for this device */ 352 352 #endif 353 353 #ifdef CONFIG_PCIEPORTBUS 354 354 struct rcec_ea *rcec_ea; /* RCEC cached endpoint association */ ··· 424 424 unsigned int irq; 425 425 struct resource resource[DEVICE_COUNT_RESOURCE]; /* I/O and memory regions + expansion ROMs */ 426 426 struct resource driver_exclusive_resource; /* driver exclusive resource ranges */ 427 - 428 - bool match_driver; /* Skip attaching driver */ 429 427 430 428 unsigned int transparent:1; /* Subtractive decode bridge */ 431 429 unsigned int io_window:1; /* Bridge has I/O window */ ··· 1139 1141 resource_size_t, 1140 1142 resource_size_t); 1141 1143 1142 - /* Weak but can be overridden by arch */ 1143 - void pci_fixup_cardbus(struct pci_bus *); 1144 - 1145 1144 /* Generic PCI functions used internally */ 1146 1145 1147 1146 void pcibios_resource_to_bus(struct pci_bus *bus, struct pci_bus_region *region, ··· 1845 1850 static inline bool pcie_aspm_enabled(struct pci_dev *pdev) { return false; } 1846 1851 #endif 1847 1852 1853 + #ifdef CONFIG_HOTPLUG_PCI 1854 + void pci_hp_ignore_link_change(struct pci_dev *pdev); 1855 + void pci_hp_unignore_link_change(struct pci_dev *pdev); 1856 + #else 1857 + static inline void pci_hp_ignore_link_change(struct pci_dev *pdev) { } 1858 + static inline void pci_hp_unignore_link_change(struct pci_dev *pdev) { } 1859 + #endif 1860 + 1848 1861 #ifdef CONFIG_PCIEAER 1849 1862 bool pci_aer_available(void); 1850 1863 #else ··· 1860 1857 #endif 1861 1858 1862 1859 bool pci_ats_disabled(void); 1860 + 1861 + #define PCIE_PTM_CONTEXT_UPDATE_AUTO 0 1862 + #define PCIE_PTM_CONTEXT_UPDATE_MANUAL 1 1863 + 1864 + struct pcie_ptm_ops { 1865 + int (*check_capability)(void *drvdata); 1866 + int 
(*context_update_write)(void *drvdata, u8 mode); 1867 + int (*context_update_read)(void *drvdata, u8 *mode); 1868 + int (*context_valid_write)(void *drvdata, bool valid); 1869 + int (*context_valid_read)(void *drvdata, bool *valid); 1870 + int (*local_clock_read)(void *drvdata, u64 *clock); 1871 + int (*master_clock_read)(void *drvdata, u64 *clock); 1872 + int (*t1_read)(void *drvdata, u64 *clock); 1873 + int (*t2_read)(void *drvdata, u64 *clock); 1874 + int (*t3_read)(void *drvdata, u64 *clock); 1875 + int (*t4_read)(void *drvdata, u64 *clock); 1876 + 1877 + bool (*context_update_visible)(void *drvdata); 1878 + bool (*context_valid_visible)(void *drvdata); 1879 + bool (*local_clock_visible)(void *drvdata); 1880 + bool (*master_clock_visible)(void *drvdata); 1881 + bool (*t1_visible)(void *drvdata); 1882 + bool (*t2_visible)(void *drvdata); 1883 + bool (*t3_visible)(void *drvdata); 1884 + bool (*t4_visible)(void *drvdata); 1885 + }; 1886 + 1887 + struct pci_ptm_debugfs { 1888 + struct dentry *debugfs; 1889 + const struct pcie_ptm_ops *ops; 1890 + struct mutex lock; 1891 + void *pdata; 1892 + }; 1863 1893 1864 1894 #ifdef CONFIG_PCIE_PTM 1865 1895 int pci_enable_ptm(struct pci_dev *dev, u8 *granularity); ··· 1904 1868 static inline void pci_disable_ptm(struct pci_dev *dev) { } 1905 1869 static inline bool pcie_ptm_enabled(struct pci_dev *dev) 1906 1870 { return false; } 1871 + #endif 1872 + 1873 + #if IS_ENABLED(CONFIG_DEBUG_FS) && IS_ENABLED(CONFIG_PCIE_PTM) 1874 + struct pci_ptm_debugfs *pcie_ptm_create_debugfs(struct device *dev, void *pdata, 1875 + const struct pcie_ptm_ops *ops); 1876 + void pcie_ptm_destroy_debugfs(struct pci_ptm_debugfs *ptm_debugfs); 1877 + #else 1878 + static inline struct pci_ptm_debugfs 1879 + *pcie_ptm_create_debugfs(struct device *dev, void *pdata, 1880 + const struct pcie_ptm_ops *ops) { return NULL; } 1881 + static inline void 1882 + pcie_ptm_destroy_debugfs(struct pci_ptm_debugfs *ptm_debugfs) { } 1907 1883 #endif 1908 1884 1909 1885 
void pci_cfg_access_lock(struct pci_dev *dev); ··· 2372 2324 void __iomem * const *pcim_iomap_table(struct pci_dev *pdev); 2373 2325 int pcim_request_region(struct pci_dev *pdev, int bar, const char *name); 2374 2326 int pcim_iomap_regions(struct pci_dev *pdev, int mask, const char *name); 2375 - void pcim_iounmap_regions(struct pci_dev *pdev, int mask); 2376 2327 void __iomem *pcim_iomap_range(struct pci_dev *pdev, int bar, 2377 2328 unsigned long offset, unsigned long len); 2378 2329 ··· 2742 2695 #endif 2743 2696 2744 2697 #include <linux/dma-mapping.h> 2745 - 2746 - #define pci_printk(level, pdev, fmt, arg...) \ 2747 - dev_printk(level, &(pdev)->dev, fmt, ##arg) 2748 2698 2749 2699 #define pci_emerg(pdev, fmt, arg...) dev_emerg(&(pdev)->dev, fmt, ##arg) 2750 2700 #define pci_alert(pdev, fmt, arg...) dev_alert(&(pdev)->dev, fmt, ##arg)
+2
include/linux/pm_runtime.h
··· 470 470 return __pm_runtime_idle(dev, RPM_GET_PUT | RPM_ASYNC); 471 471 } 472 472 473 + DEFINE_FREE(pm_runtime_put, struct device *, if (_T) pm_runtime_put(_T)) 474 + 473 475 /** 474 476 * __pm_runtime_put_autosuspend - Drop device usage counter and queue autosuspend if 0. 475 477 * @dev: Target device.
+11 -1
include/uapi/linux/pci_regs.h
··· 750 750 #define PCI_EXT_CAP_ID_NPEM 0x29 /* Native PCIe Enclosure Management */ 751 751 #define PCI_EXT_CAP_ID_PL_32GT 0x2A /* Physical Layer 32.0 GT/s */ 752 752 #define PCI_EXT_CAP_ID_DOE 0x2E /* Data Object Exchange */ 753 - #define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_DOE 753 + #define PCI_EXT_CAP_ID_PL_64GT 0x31 /* Physical Layer 64.0 GT/s */ 754 + #define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_PL_64GT 754 755 755 756 #define PCI_EXT_CAP_DSN_SIZEOF 12 756 757 #define PCI_EXT_CAP_MCAST_ENDPOINT_SIZEOF 40 ··· 1145 1144 #define PCI_DLF_CAP 0x04 /* Capabilities Register */ 1146 1145 #define PCI_DLF_EXCHANGE_ENABLE 0x80000000 /* Data Link Feature Exchange Enable */ 1147 1146 1147 + /* Secondary PCIe Capability 8.0 GT/s */ 1148 + #define PCI_SECPCI_LE_CTRL 0x0c /* Lane Equalization Control Register */ 1149 + 1148 1150 /* Physical Layer 16.0 GT/s */ 1149 1151 #define PCI_PL_16GT_LE_CTRL 0x20 /* Lane Equalization Control Register */ 1150 1152 #define PCI_PL_16GT_LE_CTRL_DSP_TX_PRESET_MASK 0x0000000F 1151 1153 #define PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK 0x000000F0 1152 1154 #define PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT 4 1155 + 1156 + /* Physical Layer 32.0 GT/s */ 1157 + #define PCI_PL_32GT_LE_CTRL 0x20 /* Lane Equalization Control Register */ 1158 + 1159 + /* Physical Layer 64.0 GT/s */ 1160 + #define PCI_PL_64GT_LE_CTRL 0x20 /* Lane Equalization Control Register */ 1153 1161 1154 1162 /* Native PCIe Enclosure Management */ 1155 1163 #define PCI_NPEM_CAP 0x04 /* NPEM capability register */