Merge tag 'pci-v6.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:

- Add PCI_FIND_NEXT_CAP() and PCI_FIND_NEXT_EXT_CAP() macros that
take config space accessor functions.

Implement pci_find_capability(), pci_find_ext_capability(), and the
DWC host, DWC endpoint, and Cadence capability search interfaces
with them (Hans Zhang)
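
As background, a minimal sketch of the classic capability-list walk
that such macros abstract behind a config-space accessor (the typedef
and helper names below are illustrative, not the kernel's):

    #include <linux/types.h>
    #include <linux/pci_regs.h>

    /* Hypothetical accessor type: how a search can be parameterized
     * over host-, endpoint-, or driver-specific config reads.
     */
    typedef int (*cfg_read8_fn)(void *priv, int where, u8 *val);

    static u8 find_cap_sketch(cfg_read8_fn read8, void *priv, u8 cap_id)
    {
            int ttl = 48;   /* bound the walk in case of a corrupted list */
            u8 pos, id;

            if (read8(priv, PCI_CAPABILITY_LIST, &pos))
                    return 0;

            while (ttl--) {
                    pos &= ~3;              /* pointers are dword-aligned */
                    if (pos < 0x40)         /* list lives above the header */
                            break;
                    if (read8(priv, pos + PCI_CAP_LIST_ID, &id) || id == 0xff)
                            break;
                    if (id == cap_id)
                            return pos;
                    if (read8(priv, pos + PCI_CAP_LIST_NEXT, &pos))
                            break;
            }
            return 0;
    }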

- Leave parent unit address 0 in 'interrupt-map' so that when we
build devicetree nodes to describe PCI functions that contain
multiple peripherals, we can build this property even when
interrupt controllers lack 'reg' properties (Lorenzo Pieralisi)

- Add a Xeon 6 quirk to disable Extended Tags and limit Max Read
Request Size to 128B to avoid a performance issue (Ilpo Järvinen)

- Add sysfs 'serial_number' file to expose the Device Serial Number
(Matthew Wood)
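
Roughly what such an attribute looks like, sketched with the existing
pci_get_dsn() helper (an illustration, not the exact patch; the sysfs
visibility gating is omitted):

    #include <linux/pci.h>
    #include <linux/sysfs.h>

    static ssize_t serial_number_show(struct device *dev,
                                      struct device_attribute *attr, char *buf)
    {
            struct pci_dev *pdev = to_pci_dev(dev);
            u64 dsn = pci_get_dsn(pdev);    /* 0 if no DSN extended capability */

            if (!dsn)
                    return -EIO;

            return sysfs_emit(buf, "%016llx\n", dsn);
    }
    static DEVICE_ATTR_ADMIN_RO(serial_number); /* 0400: root-readable only */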

- Fix pci_acpi_preserve_config() memory leak (Nirmoy Das)

Resource management:

- Align m68k pcibios_enable_device() with other arches (Ilpo
Järvinen)

- Remove sparc pcibios_enable_device() implementations that don't do
anything beyond what pci_enable_resources() does (Ilpo Järvinen)

- Remove mips pcibios_enable_resources() and use
pci_enable_resources() instead (Ilpo Järvinen)

- Clean up bridge window sizing and assignment (Ilpo Järvinen),
including:

- Leave non-claimed bridge windows disabled

- Enable bridges even if a window wasn't assigned because not all
windows are required by downstream devices

- Preserve bridge window type when releasing the resource, since
the type is needed for reassignment

- Consolidate selection of bridge windows into two new
interfaces, pbus_select_window() and
pbus_select_window_for_type(), so this is done consistently

- Compute bridge window start and end earlier to avoid logging
stale information

MSI:

- Add quirk to disable MSI on RDC PCI to PCIe bridges (Marcos Del Sol
Vives)

Error handling:

- Align AER with EEH by allowing drivers to request a Bus Reset on
Non-Fatal Errors (in addition to the reset on Fatal Errors that we
already do) (Lukas Wunner)
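
From a driver's point of view, opting in looks roughly like this (the
foo_* names are hypothetical):

    static pci_ers_result_t foo_error_detected(struct pci_dev *pdev,
                                               pci_channel_state_t state)
    {
            if (state == pci_channel_io_perm_failure)
                    return PCI_ERS_RESULT_DISCONNECT;

            foo_stop_io(pdev);                /* hypothetical: quiesce the device */
            return PCI_ERS_RESULT_NEED_RESET; /* ask the core for a reset */
    }

    static pci_ers_result_t foo_slot_reset(struct pci_dev *pdev)
    {
            pci_restore_state(pdev);          /* reinitialize after the reset */
            return PCI_ERS_RESULT_RECOVERED;
    }

    static const struct pci_error_handlers foo_err_handlers = {
            .error_detected = foo_error_detected,
            .slot_reset     = foo_slot_reset,
    };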

- If error recovery fails, emit FAILED_RECOVERY uevents for the
devices, not for the bridge leading to them.

This makes them correspond to BEGIN_RECOVERY uevents (Lukas Wunner)

- Align AER with EEH by calling err_handler.error_detected()
callbacks to notify drivers if error recovery fails (Lukas Wunner)

- Align AER with EEH by restoring device error_state to
pci_channel_io_normal before the err_handler.slot_reset() callback.

This is earlier than before the err_handler.resume() callback
(Lukas Wunner)

- Emit a BEGIN_RECOVERY uevent when a driver's
err_handler.error_detected() requests a reset, as well as when it
says recovery is complete or can be done without a reset (Niklas
Schnelle)

- Align s390 with AER and EEH by emitting uevents during error
recovery (Niklas Schnelle)

- Align EEH with AER and s390 by emitting BEGIN_RECOVERY,
SUCCESSFUL_RECOVERY, or FAILED_RECOVERY uevents depending on the
result of err_handler.error_detected() (Niklas Schnelle)

- Fix a NULL pointer dereference in aer_ratelimit() when ACPI GHES
error information identifies a device without an AER Capability
(Breno Leitao)

- Update error decoding and TLP Log printing for new errors in
current PCIe base spec (Lukas Wunner)

- Update error recovery documentation to match the current code
and use consistent nomenclature (Lukas Wunner)

ASPM:

- Enable all ClockPM and ASPM states for devicetree platforms, since
there's typically no firmware that enables ASPM

This is a risky change that may uncover hardware or configuration
defects at boot-time rather than when users enable ASPM via sysfs
later. Booting with "pcie_aspm=off" prevents this enabling
(Manivannan Sadhasivam)

- Remove the qcom code that enabled ASPM (Manivannan Sadhasivam)

Power management:

- If a device has already been disconnected, e.g., by a hotplug
removal, don't bother trying to resume it to D0 when detaching the
driver.

This avoids annoying "Unable to change power state from D3cold to
D0" messages (Mario Limonciello)

- Ensure devices are powered up before config reads for
'max_link_width', 'current_link_speed', 'current_link_width',
'secondary_bus_number', and 'subordinate_bus_number' sysfs files.

This prevents using invalid data (~0) in drivers or lspci and,
depending on how the PCIe controller reports errors, may avoid
error interrupts or crashes (Brian Norris)
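
The pattern, sketched with generic runtime-PM calls (the PCI core uses
its own config-space PM helpers, and decoding of the speed field is
omitted here):

    static ssize_t current_link_speed_show(struct device *dev,
                                           struct device_attribute *attr,
                                           char *buf)
    {
            struct pci_dev *pdev = to_pci_dev(dev);
            u16 lnksta;
            int err;

            pm_runtime_get_sync(dev);  /* wake the device before the config read */
            err = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
            pm_runtime_put(dev);

            if (err)
                    return -EINVAL;

            /* Current Link Speed field; mapping to "x GT/s" strings omitted */
            return sysfs_emit(buf, "%u\n", lnksta & PCI_EXP_LNKSTA_CLS);
    }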

Virtualization:

- Add rescan/remove locking when enabling/disabling SR-IOV, which
avoids list corruption on s390, where disabling SR-IOV also
generates hotplug events (Niklas Schnelle)
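
The locking pattern, as a standalone sketch (the actual change takes
the lock in the PCI core's sriov_numvfs sysfs path; the helper below
is illustrative):

    #include <linux/pci.h>

    static int foo_sriov_configure(struct pci_dev *pdev, int numvfs)
    {
            int ret = 0;

            pci_lock_rescan_remove();  /* serialize vs. hotplug rescan/remove */
            if (numvfs)
                    ret = pci_enable_sriov(pdev, numvfs);
            else
                    pci_disable_sriov(pdev);
            pci_unlock_rescan_remove();

            return ret;
    }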

Peer-to-peer DMA:

- Free struct p2p_pgmap, not a member within it, in the
pci_p2pdma_add_resource() error path (Sungho Kim)
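
The bug class, sketched with an illustrative layout and a hypothetical
setup step: the error path must free the pointer returned by the
allocator, not a pointer to an embedded member:

    #include <linux/memremap.h>
    #include <linux/slab.h>

    struct p2p_pgmap {                      /* layout illustrative */
            struct pci_dev *provider;
            struct dev_pagemap pgmap;       /* embedded member, not at offset 0 */
    };

    static int add_resource_sketch(struct pci_dev *pdev)
    {
            struct p2p_pgmap *p2p_pgmap;
            struct dev_pagemap *pgmap;

            p2p_pgmap = kzalloc(sizeof(*p2p_pgmap), GFP_KERNEL);
            if (!p2p_pgmap)
                    return -ENOMEM;

            pgmap = &p2p_pgmap->pgmap;      /* points into the allocation */

            if (foo_setup(pgmap)) {         /* hypothetical setup step */
                    kfree(p2p_pgmap);       /* correct: free what kzalloc() returned */
                    /* kfree(pgmap) would hand a mid-object pointer to the allocator */
                    return -ENOMEM;
            }
            return 0;
    }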

Endpoint framework:

- Document sysfs interface for BAR assignment of vNTB endpoint
functions (Jerome Brunet)

- Fix array underflow in endpoint BAR test case (Dan Carpenter)

- Skip endpoint IRQ test if the IRQ is out of range to avoid false
errors (Christian Bruel)

- Fix endpoint test case for controllers with fixed-size BARs smaller
than requested by the test (Marek Vasut)

- Restore inbound translation when disabling doorbell so the endpoint
doorbell test case can be run more than once (Niklas Cassel)

- Avoid a NULL pointer dereference when releasing DMA channels in
endpoint DMA test case (Shin'ichiro Kawasaki)

- Convert tegra194 interrupt number to MSI vector to fix endpoint
Kselftest MSI_TEST test case (Niklas Cassel)

- Reset tegra194 BARs when running in endpoint mode so the BAR tests
don't overwrite the ATU settings in BAR4 (Niklas Cassel)

- Handle errors in tegra194 BPMP transactions so we don't mistakenly
skip future PERST# assertion (Vidya Sagar)

AMD MDB PCIe controller driver:

- Update DT binding example to separate PERST# to a Root Port stanza
to make multiple Root Ports possible in the future (Sai Krishna
Musham)

- Add driver support for PERST# being described in a Root Port
stanza, falling back to the host bridge if not found there (Sai
Krishna Musham)

Freescale i.MX6 PCIe controller driver:

- Enable the 3.3V Vaux supply if available so devices can request
wakeup with either Beacon or WAKE# (Richard Zhu)

MediaTek PCIe Gen3 controller driver:

- Add optional sys clock ready time setting to avoid sys_clk_rdy
signal glitching in MT6991 and MT8196 (AngeloGioacchino Del Regno)

- Add DT binding and driver support for MT6991 and MT8196
(AngeloGioacchino Del Regno)

NVIDIA Tegra PCIe controller driver:

- When asserting PERST#, disable the controller instead of mistakenly
disabling the PLL twice (Nagarjuna Kristam)

- Convert struct tegra_msi mask_lock to raw spinlock to avoid a lock
nesting error (Marek Vasut)
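
The same conversion pattern applies to the rcar-host change further
below; roughly (foo_* names and the register write are illustrative):

    #include <linux/irq.h>
    #include <linux/spinlock.h>

    struct foo_msi {
            raw_spinlock_t mask_lock;       /* was spinlock_t before the fix */
            u32 mask;
    };

    /* irq_chip callbacks run under the non-sleeping irq_desc raw lock,
     * so a nested lock must itself be a raw spinlock on PREEMPT_RT.
     */
    static void foo_msi_irq_mask(struct irq_data *d)
    {
            struct foo_msi *msi = irq_data_get_irq_chip_data(d);
            unsigned long flags;

            raw_spin_lock_irqsave(&msi->mask_lock, flags);
            msi->mask |= BIT(d->hwirq);
            foo_msi_sync_mask(msi);         /* hypothetical register write */
            raw_spin_unlock_irqrestore(&msi->mask_lock, flags);
    }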

Qualcomm PCIe controller driver:

- Select PCI Power Control Slot driver so slot voltage rails can be
turned on/off if described in Root Port devicetree node (Qiang Yu)

- Parse only PCI bridge child nodes in devicetree, skipping unrelated
nodes such as OPP (Operating Performance Points), which caused
probe failures (Krishna Chaitanya Chundru)

- Add 8.0 GT/s and 32.0 GT/s equalization settings (Ziyue Zhang)

- Consolidate Root Port 'phy' and 'reset' properties in struct
qcom_pcie_port, regardless of whether we got them from the Root
Port node or the host bridge node (Manivannan Sadhasivam)

- Fetch and map the ELBI register space in the DWC core rather than
in each driver individually (Krishna Chaitanya Chundru)

- Enable ECAM mechanism in DWC core by setting up iATU with 'CFG
Shift Feature' and use this in the qcom driver (Krishna Chaitanya
Chundru)

- Add SM8750 compatible to qcom,pcie-sm8550.yaml (Krishna Chaitanya
Chundru)

- Update qcom,pcie-x1e80100.yaml to allow fifth PCIe host on Qualcomm
Glymur, which is compatible with X1E80100 but doesn't have the
cnoc_sf_axi clock (Qiang Yu)

Renesas R-Car PCIe controller driver:

- Fix a typo that prevented correct PHY initialization (Marek Vasut)

- Add a missing 1ms delay after PWR reset assertion as required by
the V4H manual (Marek Vasut)

- Assure reset has completed before DBI access to avoid SError (Marek
Vasut)

- Fix inverted PHY initialization check, which sometimes led to
timeouts and failure to start the controller (Marek Vasut)

- Pass the correct IRQ domain to generic_handle_domain_irq() to fix a
regression when converting to msi_create_parent_irq_domain()
(Claudiu Beznea)

- Drop the spinlock protecting the PMSR register - it's no longer
required since pci_lock already serializes accesses (Marek Vasut)

- Convert struct rcar_msi mask_lock to raw spinlock to avoid a lock
nesting error (Marek Vasut)

SOPHGO PCIe controller driver:

- Check for existence of struct cdns_pcie.ops before using it to
allow Cadence drivers that don't need to supply ops (Chen Wang)

- Add DT binding and driver for the SOPHGO SG2042 PCIe controller
(Chen Wang)

STMicroelectronics STM32MP25 PCIe controller driver:

- Update pinctrl documentation of initial states and use in runtime
suspend/resume (Christian Bruel)

- Add pinctrl_pm_select_init_state() for use by stm32 driver, which
needs it during resume (Christian Bruel)

- Add devicetree bindings and drivers for the STMicroelectronics
STM32MP25 in host and endpoint modes (Christian Bruel)

Synopsys DesignWare PCIe controller driver:

- Add support for x16 in devicetree 'num-lanes' property (Konrad
Dybcio)

- Verify that if DT specifies a single IRQ for all eDMA channels, it
is named 'dma' (Niklas Cassel)

TI J721E PCIe driver:

- Add MODULE_DEVICE_TABLE() so driver can be autoloaded (Siddharth
Vadapalli)
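
Autoloading hinges on exporting the match table; roughly (table
abbreviated to two of the driver's compatibles):

    #include <linux/mod_devicetable.h>
    #include <linux/module.h>

    static const struct of_device_id foo_pcie_of_match[] = {
            { .compatible = "ti,j721e-pcie-host" },
            { .compatible = "ti,j721e-pcie-ep" },
            { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(of, foo_pcie_of_match);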

- Power controller off before configuring the glue layer so the
controller latches the correct values on power-on (Siddharth
Vadapalli)

TI Keystone PCIe controller driver:

- Use devm_request_irq() so 'ks-pcie-error-irq' is freed when driver
exits with error (Siddharth Vadapalli)
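
The devm_ pattern, as an illustrative probe fragment (the IRQ name
lookup and the handler are assumptions):

    static int foo_probe(struct platform_device *pdev)
    {
            struct device *dev = &pdev->dev;
            int irq, ret;

            irq = platform_get_irq_byname(pdev, "error_irq"); /* name illustrative */
            if (irq < 0)
                    return irq;

            /* Managed request: freed automatically on probe failure or unbind */
            ret = devm_request_irq(dev, irq, foo_err_irq_handler, IRQF_SHARED,
                                   "ks-pcie-error-irq", pdev);
            if (ret < 0)
                    dev_err(dev, "failed to request error IRQ %d\n", irq);

            return ret;
    }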

- Add Peripheral Virtualization Unit (PVU), which restricts DMA from
PCIe devices to specific regions of host memory, to the ti,am65
binding (Jan Kiszka)

Xilinx NWL PCIe controller driver:

- Clear bootloader E_ECAM_CONTROL before merging in the new driver
value to avoid writing invalid values (Jani Nurminen)"

* tag 'pci-v6.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (141 commits)
PCI/AER: Avoid NULL pointer dereference in aer_ratelimit()
MAINTAINERS: Add entry for ST STM32MP25 PCIe drivers
PCI: stm32-ep: Add PCIe Endpoint support for STM32MP25
dt-bindings: PCI: Add STM32MP25 PCIe Endpoint bindings
PCI: stm32: Add PCIe host support for STM32MP25
PCI: xilinx-nwl: Fix ECAM programming
PCI: j721e: Fix incorrect error message in probe()
PCI: keystone: Use devm_request_irq() to free "ks-pcie-error-irq" on exit
dt-bindings: PCI: qcom,pcie-x1e80100: Set clocks minItems for the fifth Glymur PCIe Controller
PCI: dwc: Support 16-lane operation
PCI: Add lockdep assertion in pci_stop_and_remove_bus_device()
PCI/IOV: Add PCI rescan-remove locking when enabling/disabling SR-IOV
PCI: rcar-host: Convert struct rcar_msi mask_lock into raw spinlock
PCI: tegra194: Rename 'root_bus' to 'root_port_bus' in tegra_pcie_downstream_dev_to_D0()
PCI: tegra: Convert struct tegra_msi mask_lock into raw spinlock
PCI: rcar-gen4: Fix inverted break condition in PHY initialization
PCI: rcar-gen4: Assure reset occurs before DBI access
PCI: rcar-gen4: Add missing 1ms delay after PWR reset assertion
PCI: Set up bridge resources earlier
PCI: rcar-host: Drop PMSR spinlock
...

+3166 -1298
+9
Documentation/ABI/testing/sysfs-bus-pci
···
 
   # ls doe_features
   0001:01   0001:02   doe_discovery
+
+What:		/sys/bus/pci/devices/.../serial_number
+Date:		December 2025
+Contact:	Matthew Wood <thepacketgeek@gmail.com>
+Description:
+		This is visible only for PCI devices that support the serial
+		number extended capability. The file is read only and due to
+		the possible sensitivity of accessible serial numbers, admin
+		only.
+7 -2
Documentation/PCI/endpoint/pci-vntb-howto.rst
···
 attributes that can be configured by the user::
 
   # ls functions/pci_epf_vntb/func1/pci_epf_vntb.0/
-  db_count    mw1        mw2       mw3        mw4       num_mws
-  spad_count
+  ctrl_bar    db_count   mw1_bar   mw2_bar    mw3_bar   mw4_bar   spad_count
+  db_bar      mw1        mw2       mw3        mw4       num_mws   vbus_number
+  vntb_vid    vntb_pid
 
 A sample configuration for NTB function is given below::
···
   # echo 128 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/spad_count
   # echo 1 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/num_mws
   # echo 0x100000 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/mw1
+
+By default, each construct is assigned a BAR, as needed and in order.
+Should a specific BAR setup be required by the platform, BAR may be assigned
+to each construct using the related ``XYZ_bar`` entry.
 
 A sample configuration for virtual NTB driver for virtual PCI bus::
+34 -9
Documentation/PCI/pci-error-recovery.rst
···
 Many PCI bus controllers are able to detect a variety of hardware
 PCI errors on the bus, such as parity errors on the data and address
 buses, as well as SERR and PERR errors. Some of the more advanced
-chipsets are able to deal with these errors; these include PCI-E chipsets,
+chipsets are able to deal with these errors; these include PCIe chipsets,
 and the PCI-host bridges found on IBM Power4, Power5 and Power6-based
 pSeries boxes. A typical action taken is to disconnect the affected device,
 halting all I/O to it. The goal of a disconnection is to avoid system
···
 if it implements any, it must implement error_detected(). If a callback
 is not implemented, the corresponding feature is considered unsupported.
 For example, if mmio_enabled() and resume() aren't there, then it
-is assumed that the driver is not doing any direct recovery and requires
-a slot reset. Typically a driver will want to know about
+is assumed that the driver does not need these callbacks
+for recovery. Typically a driver will want to know about
 a slot_reset().
 
 The actual steps taken by a platform to recover from a PCI error
···
 is isolated, in that all I/O is blocked: all reads return 0xffffffff,
 all writes are ignored.
 
+Similarly, on platforms supporting Downstream Port Containment
+(PCIe r7.0 sec 6.2.11), the link to the sub-hierarchy with the
+faulting device is disabled. Any device in the sub-hierarchy
+becomes inaccessible.
 
 STEP 1: Notification
 --------------------
···
 All drivers participating in this system must implement this call.
 The driver must return one of the following result codes:
 
+  - PCI_ERS_RESULT_RECOVERED
+    Driver returns this if it thinks the device is usable despite
+    the error and does not need further intervention.
   - PCI_ERS_RESULT_CAN_RECOVER
     Driver returns this if it thinks it might be able to recover
     the HW by just banging IOs or if it wants to be given
···
 all drivers on a segment agree that they can try to recover and if no automatic
 link reset was performed by the HW. If the platform can't just re-enable IOs
 without a slot reset or a link reset, it will not call this callback, and
-instead will have gone directly to STEP 3 (Link Reset) or STEP 4 (Slot Reset)
+instead will have gone directly to STEP 3 (Link Reset) or STEP 4 (Slot Reset).
+
+.. note::
+
+   On platforms supporting Advanced Error Reporting (PCIe r7.0 sec 6.2),
+   the faulting device may already be accessible in STEP 1 (Notification).
+   Drivers should nevertheless defer accesses to STEP 2 (MMIO Enabled)
+   to be compatible with EEH on powerpc and with s390 (where devices are
+   inaccessible until STEP 2).
+
+   On platforms supporting Downstream Port Containment, the link to the
+   sub-hierarchy with the faulting device is re-enabled in STEP 3 (Link
+   Reset). Hence devices in the sub-hierarchy are inaccessible until
+   STEP 4 (Slot Reset).
+
+   For errors such as Surprise Down (PCIe r7.0 sec 6.2.7), the device
+   may not even be accessible in STEP 4 (Slot Reset). Drivers can detect
+   accessibility by checking whether reads from the device return all 1's
+   (PCI_POSSIBLE_ERROR()).
 
 .. note::
···
 The next step taken depends on the results returned by the drivers.
 If all drivers returned PCI_ERS_RESULT_RECOVERED, then the platform
-proceeds to either STEP3 (Link Reset) or to STEP 5 (Resume Operations).
+proceeds to either STEP 3 (Link Reset) or to STEP 5 (Resume Operations).
 
 If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform
 proceeds to STEP 4 (Slot Reset)
 
 STEP 3: Link Reset
 ------------------
-The platform resets the link. This is a PCI-Express specific step
+The platform resets the link. This is a PCIe specific step
 and is done whenever a fatal error has been detected that can be
 "solved" by resetting the link.
···
 power-on followed by power-on BIOS/system firmware initialization.
 Soft reset is also known as hot-reset.
 
-Powerpc fundamental reset is supported by PCI Express cards only
+Powerpc fundamental reset is supported by PCIe cards only
 and results in device's state machines, hardware logic, port states and
 configuration registers to initialize to their default conditions.
 
 For most PCI devices, a soft reset will be sufficient for recovery.
 Optional fundamental reset is provided to support a limited number
-of PCI Express devices for which a soft reset is not sufficient
+of PCIe devices for which a soft reset is not sufficient
 for recovery.
 
 If the platform supports PCI hotplug, then the reset might be
···
   - PCI_ERS_RESULT_DISCONNECT
     Same as above.
 
-Drivers for PCI Express cards that require a fundamental reset must
+Drivers for PCIe cards that require a fundamental reset must
 set the needs_freset bit in the pci_dev structure in their probe function.
 For example, the QLogic qla2xxx driver sets the needs_freset bit for certain
 PCI card types::
+39 -44
Documentation/PCI/pcieaer-howto.rst
···
 ----------------
 
 When a PCIe AER error is captured, an error message will be output to
-console. If it's a correctable error, it is output as an info message.
+console. If it's a correctable error, it is output as a warning message.
 Otherwise, it is printed as an error. So users could choose different
 log level to filter out correctable error messages.
 
 Below shows an example::
 
-  0000:50:00.0: PCIe Bus Error: severity=Uncorrected (Fatal), type=Transaction Layer, id=0500(Requester ID)
+  0000:50:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Transaction Layer, (Requester ID)
   0000:50:00.0: device [8086:0329] error status/mask=00100000/00000000
-  0000:50:00.0:    [20] Unsupported Request    (First)
-  0000:50:00.0: TLP Header: 04000001 00200a03 05010000 00050100
+  0000:50:00.0:    [20] UnsupReq               (First)
+  0000:50:00.0: TLP Header: 0x04000001 0x00200a03 0x05010000 0x00050100
 
 In the example, 'Requester ID' means the ID of the device that sent
 the error message to the Root Port. Please refer to PCIe specs for other
···
 an error. The Root Port, upon receiving an error reporting message,
 internally processes and logs the error message in its AER
 Capability structure. Error information being logged includes storing
-the error reporting agent's requestor ID into the Error Source
+the error reporting agent's Requester ID into the Error Source
 Identification Registers and setting the error bits of the Root Error
 Status Register accordingly. If AER error reporting is enabled in the Root
 Error Command Register, the Root Port generates an interrupt when an
···
 Provide callbacks
 -----------------
 
-callback reset_link to reset PCIe link
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-This callback is used to reset the PCIe physical link when a
-fatal error happens. The Root Port AER service driver provides a
-default reset_link function, but different Upstream Ports might
-have different specifications to reset the PCIe link, so
-Upstream Port drivers may provide their own reset_link functions.
-
-Section 3.2.2.2 provides more detailed info on when to call
-reset_link.
-
 PCI error-recovery callbacks
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
···
 Data struct pci_driver has a pointer, err_handler, to point to
 pci_error_handlers who consists of a couple of callback function
 pointers. The AER driver follows the rules defined in
-pci-error-recovery.rst except PCIe-specific parts (e.g.
-reset_link). Please refer to pci-error-recovery.rst for detailed
+pci-error-recovery.rst except PCIe-specific parts (see
+below). Please refer to pci-error-recovery.rst for detailed
 definitions of the callbacks.
 
 The sections below specify when to call the error callback functions.
···
 require any recovery actions. The AER driver clears the device's
 correctable error status register accordingly and logs these errors.
 
-Non-correctable (non-fatal and fatal) errors
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Uncorrectable (non-fatal and fatal) errors
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-If an error message indicates a non-fatal error, performing link reset
+The AER driver performs a Secondary Bus Reset to recover from
+uncorrectable errors. The reset is applied at the port above
+the originating device: If the originating device is an Endpoint,
+only the Endpoint is reset. If on the other hand the originating
+device has subordinate devices, those are all affected by the
+reset as well.
+
+If the originating device is a Root Complex Integrated Endpoint,
+there's no port above where a Secondary Bus Reset could be applied.
+In this case, the AER driver instead applies a Function Level Reset.
+
+If an error message indicates a non-fatal error, performing a reset
 at upstream is not required. The AER driver calls error_detected(dev,
 pci_channel_io_normal) to all drivers associated within a hierarchy in
 question. For example::
···
 
 A driver may return PCI_ERS_RESULT_CAN_RECOVER,
 PCI_ERS_RESULT_DISCONNECT, or PCI_ERS_RESULT_NEED_RESET, depending on
-whether it can recover or the AER driver calls mmio_enabled as next.
+whether it can recover without a reset, considers the device unrecoverable
+or needs a reset for recovery. If all affected drivers agree that they can
+recover without a reset, it is skipped. Should one driver request a reset,
+it overrides all other drivers.
 
 If an error message indicates a fatal error, kernel will broadcast
 error_detected(dev, pci_channel_io_frozen) to all drivers within
-a hierarchy in question. Then, performing link reset at upstream is
-necessary. As different kinds of devices might use different approaches
-to reset link, AER port service driver is required to provide the
-function to reset link via callback parameter of pcie_do_recovery()
-function. If reset_link is not NULL, recovery function will use it
-to reset the link. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER
-and reset_link returns PCI_ERS_RESULT_RECOVERED, the error handling goes
-to mmio_enabled.
+a hierarchy in question. Then, performing a reset at upstream is
+necessary. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER
+to indicate that recovery without a reset is possible, the error
+handling goes to mmio_enabled, but afterwards a reset is still
+performed.
 
-Frequent Asked Questions
-------------------------
+In other words, for non-fatal errors, drivers may opt in to a reset.
+But for fatal errors, they cannot opt out of a reset, based on the
+assumption that the link is unreliable.
+
+Frequently Asked Questions
+--------------------------
 
 Q:
   What happens if a PCIe device driver does not provide an
   error recovery handler (pci_driver->err_handler is equal to NULL)?
 
 A:
-  The devices attached with the driver won't be recovered. If the
-  error is fatal, kernel will print out warning messages. Please refer
-  to section 3 for more information.
-
-Q:
-  What happens if an upstream port service driver does not provide
-  callback reset_link?
-
-A:
-  Fatal error recovery will fail if the errors are reported by the
-  upstream ports who are attached by the service driver.
+  The devices attached with the driver won't be recovered.
+  The kernel will print out informational messages to identify
+  unrecoverable devices.
 
 
 Software error injection
+23 -1
Documentation/devicetree/bindings/pci/amd,versal2-mdb-host.yaml
···
   - "#address-cells"
   - "#interrupt-cells"
 
+patternProperties:
+  '^pcie@[0-2],0$':
+    type: object
+    $ref: /schemas/pci/pci-pci-bridge.yaml#
+
+    properties:
+      reg:
+        maxItems: 1
+
+    unevaluatedProperties: false
+
 required:
   - reg
   - reg-names
···
   - |
     #include <dt-bindings/interrupt-controller/arm-gic.h>
     #include <dt-bindings/interrupt-controller/irq.h>
+    #include <dt-bindings/gpio/gpio.h>
 
     soc {
       #address-cells = <2>;
···
       #size-cells = <2>;
       #interrupt-cells = <1>;
       device_type = "pci";
+
+      pcie@0,0 {
+        device_type = "pci";
+        reg = <0x0 0x0 0x0 0x0 0x0>;
+        reset-gpios = <&tca6416_u37 7 GPIO_ACTIVE_LOW>;
+        #address-cells = <3>;
+        #size-cells = <2>;
+        ranges;
+      };
+
       pcie_intc_0: interrupt-controller {
         #address-cells = <0>;
         #interrupt-cells = <1>;
         interrupt-controller;
-        };
+      };
     };
   };
+35
Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml
···
             - mediatek,mt8188-pcie
             - mediatek,mt8195-pcie
         - const: mediatek,mt8192-pcie
+    - items:
+        - enum:
+            - mediatek,mt6991-pcie
+        - const: mediatek,mt8196-pcie
     - const: mediatek,mt8192-pcie
+    - const: mediatek,mt8196-pcie
     - const: airoha,en7581-pcie
 
   reg:
···
         reset-names:
           minItems: 1
           maxItems: 2
+
+        mediatek,pbus-csr: false
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - mediatek,mt8196-pcie
+    then:
+      properties:
+        clocks:
+          minItems: 6
+
+        clock-names:
+          items:
+            - const: pl_250m
+            - const: tl_26m
+            - const: bus
+            - const: low_power
+            - const: peri_26m
+            - const: peri_mem
+
+        resets:
+          minItems: 2
+
+        reset-names:
+          items:
+            - const: phy
+            - const: mac
 
         mediatek,pbus-csr: false
+37 -37
Documentation/devicetree/bindings/pci/qcom,pcie-sa8255p.yaml
···
     #size-cells = <2>;
 
     pci@1c00000 {
-          compatible = "qcom,pcie-sa8255p";
-          reg = <0x4 0x00000000 0 0x10000000>;
-          device_type = "pci";
-          #address-cells = <3>;
-          #size-cells = <2>;
-          ranges = <0x02000000 0x0 0x40100000 0x0 0x40100000 0x0 0x1ff00000>,
-                   <0x43000000 0x4 0x10100000 0x4 0x10100000 0x0 0x40000000>;
-          bus-range = <0x00 0xff>;
-          dma-coherent;
-          linux,pci-domain = <0>;
-          power-domains = <&scmi5_pd 0>;
-          iommu-map = <0x0 &pcie_smmu 0x0000 0x1>,
-                      <0x100 &pcie_smmu 0x0001 0x1>;
-          interrupt-parent = <&intc>;
-          interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>,
-                       <GIC_SPI 308 IRQ_TYPE_LEVEL_HIGH>,
-                       <GIC_SPI 309 IRQ_TYPE_LEVEL_HIGH>,
-                       <GIC_SPI 312 IRQ_TYPE_LEVEL_HIGH>,
-                       <GIC_SPI 313 IRQ_TYPE_LEVEL_HIGH>,
-                       <GIC_SPI 314 IRQ_TYPE_LEVEL_HIGH>,
-                       <GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH>,
-                       <GIC_SPI 375 IRQ_TYPE_LEVEL_HIGH>;
-          interrupt-names = "msi0", "msi1", "msi2", "msi3",
-                            "msi4", "msi5", "msi6", "msi7";
+      compatible = "qcom,pcie-sa8255p";
+      reg = <0x4 0x00000000 0 0x10000000>;
+      device_type = "pci";
+      #address-cells = <3>;
+      #size-cells = <2>;
+      ranges = <0x02000000 0x0 0x40100000 0x0 0x40100000 0x0 0x1ff00000>,
+               <0x43000000 0x4 0x10100000 0x4 0x10100000 0x0 0x40000000>;
+      bus-range = <0x00 0xff>;
+      dma-coherent;
+      linux,pci-domain = <0>;
+      power-domains = <&scmi5_pd 0>;
+      iommu-map = <0x0 &pcie_smmu 0x0000 0x1>,
+                  <0x100 &pcie_smmu 0x0001 0x1>;
+      interrupt-parent = <&intc>;
+      interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>,
+                   <GIC_SPI 308 IRQ_TYPE_LEVEL_HIGH>,
+                   <GIC_SPI 309 IRQ_TYPE_LEVEL_HIGH>,
+                   <GIC_SPI 312 IRQ_TYPE_LEVEL_HIGH>,
+                   <GIC_SPI 313 IRQ_TYPE_LEVEL_HIGH>,
+                   <GIC_SPI 314 IRQ_TYPE_LEVEL_HIGH>,
+                   <GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH>,
+                   <GIC_SPI 375 IRQ_TYPE_LEVEL_HIGH>;
+      interrupt-names = "msi0", "msi1", "msi2", "msi3",
+                        "msi4", "msi5", "msi6", "msi7";
 
-          #interrupt-cells = <1>;
-          interrupt-map-mask = <0 0 0 0x7>;
-          interrupt-map = <0 0 0 1 &intc GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
-                          <0 0 0 2 &intc GIC_SPI 149 IRQ_TYPE_LEVEL_HIGH>,
-                          <0 0 0 3 &intc GIC_SPI 150 IRQ_TYPE_LEVEL_HIGH>,
-                          <0 0 0 4 &intc GIC_SPI 151 IRQ_TYPE_LEVEL_HIGH>;
+      #interrupt-cells = <1>;
+      interrupt-map-mask = <0 0 0 0x7>;
+      interrupt-map = <0 0 0 1 &intc GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
+                      <0 0 0 2 &intc GIC_SPI 149 IRQ_TYPE_LEVEL_HIGH>,
+                      <0 0 0 3 &intc GIC_SPI 150 IRQ_TYPE_LEVEL_HIGH>,
+                      <0 0 0 4 &intc GIC_SPI 151 IRQ_TYPE_LEVEL_HIGH>;
 
-          pcie@0 {
-              device_type = "pci";
-              reg = <0x0 0x0 0x0 0x0 0x0>;
-              bus-range = <0x01 0xff>;
+      pcie@0 {
+        device_type = "pci";
+        reg = <0x0 0x0 0x0 0x0 0x0>;
+        bus-range = <0x01 0xff>;
 
-              #address-cells = <3>;
-              #size-cells = <2>;
-              ranges;
+        #address-cells = <3>;
+        #size-cells = <2>;
+        ranges;
       };
     };
   };
+1
Documentation/devicetree/bindings/pci/qcom,pcie-sm8550.yaml
···
       - enum:
           - qcom,sar2130p-pcie
           - qcom,pcie-sm8650
+          - qcom,pcie-sm8750
       - const: qcom,pcie-sm8550
 
   reg:
+2 -1
Documentation/devicetree/bindings/pci/qcom,pcie-x1e80100.yaml
···
       - const: mhi # MHI registers
 
   clocks:
-    minItems: 7
+    minItems: 6
     maxItems: 7
 
   clock-names:
+    minItems: 6
     items:
       - const: aux # Auxiliary clock
       - const: cfg # Configuration clock
+64
Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml
···
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/sophgo,sg2042-pcie-host.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Sophgo SG2042 PCIe Host (Cadence PCIe Wrapper)
+
+description:
+  Sophgo SG2042 PCIe host controller is based on the Cadence PCIe core.
+
+maintainers:
+  - Chen Wang <unicorn_wang@outlook.com>
+
+properties:
+  compatible:
+    const: sophgo,sg2042-pcie-host
+
+  reg:
+    maxItems: 2
+
+  reg-names:
+    items:
+      - const: reg
+      - const: cfg
+
+  vendor-id:
+    const: 0x1f1c
+
+  device-id:
+    const: 0x2042
+
+  msi-parent: true
+
+allOf:
+  - $ref: cdns-pcie-host.yaml#
+
+required:
+  - compatible
+  - reg
+  - reg-names
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/irq.h>
+
+    pcie@62000000 {
+      compatible = "sophgo,sg2042-pcie-host";
+      device_type = "pci";
+      reg = <0x62000000 0x00800000>,
+            <0x48000000 0x00001000>;
+      reg-names = "reg", "cfg";
+      #address-cells = <3>;
+      #size-cells = <2>;
+      ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>,
+               <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>;
+      bus-range = <0x00 0xff>;
+      vendor-id = <0x1f1c>;
+      device-id = <0x2042>;
+      cdns,no-bar-match-nbits = <48>;
+      msi-parent = <&msi>;
+    };
+33
Documentation/devicetree/bindings/pci/st,stm32-pcie-common.yaml
···
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/st,stm32-pcie-common.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: STM32MP25 PCIe RC/EP controller
+
+maintainers:
+  - Christian Bruel <christian.bruel@foss.st.com>
+
+description:
+  STM32MP25 PCIe RC/EP common properties
+
+properties:
+  clocks:
+    maxItems: 1
+    description: PCIe system clock
+
+  resets:
+    maxItems: 1
+
+  power-domains:
+    maxItems: 1
+
+  access-controllers:
+    maxItems: 1
+
+required:
+  - clocks
+  - resets
+
+additionalProperties: true
+73
Documentation/devicetree/bindings/pci/st,stm32-pcie-ep.yaml
···
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/st,stm32-pcie-ep.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: STMicroelectronics STM32MP25 PCIe Endpoint
+
+maintainers:
+  - Christian Bruel <christian.bruel@foss.st.com>
+
+description:
+  PCIe endpoint controller based on the Synopsys DesignWare PCIe core.
+
+allOf:
+  - $ref: /schemas/pci/snps,dw-pcie-ep.yaml#
+  - $ref: /schemas/pci/st,stm32-pcie-common.yaml#
+
+properties:
+  compatible:
+    const: st,stm32mp25-pcie-ep
+
+  reg:
+    items:
+      - description: Data Bus Interface (DBI) registers.
+      - description: Data Bus Interface (DBI) shadow registers.
+      - description: Internal Address Translation Unit (iATU) registers.
+      - description: PCIe configuration registers.
+
+  reg-names:
+    items:
+      - const: dbi
+      - const: dbi2
+      - const: atu
+      - const: addr_space
+
+  reset-gpios:
+    description: GPIO controlled connection to PERST# signal
+    maxItems: 1
+
+  phys:
+    maxItems: 1
+
+required:
+  - phys
+  - reset-gpios
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/st,stm32mp25-rcc.h>
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/phy/phy.h>
+    #include <dt-bindings/reset/st,stm32mp25-rcc.h>
+
+    pcie-ep@48400000 {
+      compatible = "st,stm32mp25-pcie-ep";
+      reg = <0x48400000 0x400000>,
+            <0x48500000 0x100000>,
+            <0x48700000 0x80000>,
+            <0x10000000 0x10000000>;
+      reg-names = "dbi", "dbi2", "atu", "addr_space";
+      clocks = <&rcc CK_BUS_PCIE>;
+      phys = <&combophy PHY_TYPE_PCIE>;
+      resets = <&rcc PCIE_R>;
+      pinctrl-names = "default", "init";
+      pinctrl-0 = <&pcie_pins_a>;
+      pinctrl-1 = <&pcie_init_pins_a>;
+      reset-gpios = <&gpioj 8 GPIO_ACTIVE_LOW>;
+      access-controllers = <&rifsc 68>;
+      power-domains = <&CLUSTER_PD>;
+    };
+112
Documentation/devicetree/bindings/pci/st,stm32-pcie-host.yaml
···
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/st,stm32-pcie-host.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: STMicroelectronics STM32MP25 PCIe Root Complex
+
+maintainers:
+  - Christian Bruel <christian.bruel@foss.st.com>
+
+description:
+  PCIe root complex controller based on the Synopsys DesignWare PCIe core.
+
+allOf:
+  - $ref: /schemas/pci/snps,dw-pcie.yaml#
+  - $ref: /schemas/pci/st,stm32-pcie-common.yaml#
+
+properties:
+  compatible:
+    const: st,stm32mp25-pcie-rc
+
+  reg:
+    items:
+      - description: Data Bus Interface (DBI) registers.
+      - description: PCIe configuration registers.
+
+  reg-names:
+    items:
+      - const: dbi
+      - const: config
+
+  msi-parent:
+    maxItems: 1
+
+patternProperties:
+  '^pcie@[0-2],0$':
+    type: object
+    $ref: /schemas/pci/pci-pci-bridge.yaml#
+
+    properties:
+      reg:
+        maxItems: 1
+
+      phys:
+        maxItems: 1
+
+      reset-gpios:
+        description: GPIO controlled connection to PERST# signal
+        maxItems: 1
+
+      wake-gpios:
+        description: GPIO used as WAKE# input signal
+        maxItems: 1
+
+    required:
+      - phys
+      - ranges
+
+    unevaluatedProperties: false
+
+required:
+  - interrupt-map
+  - interrupt-map-mask
+  - ranges
+  - dma-ranges
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/st,stm32mp25-rcc.h>
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/phy/phy.h>
+    #include <dt-bindings/reset/st,stm32mp25-rcc.h>
+
+    pcie@48400000 {
+      compatible = "st,stm32mp25-pcie-rc";
+      device_type = "pci";
+      reg = <0x48400000 0x400000>,
+            <0x10000000 0x10000>;
+      reg-names = "dbi", "config";
+      #interrupt-cells = <1>;
+      interrupt-map-mask = <0 0 0 7>;
+      interrupt-map = <0 0 0 1 &intc 0 0 GIC_SPI 264 IRQ_TYPE_LEVEL_HIGH>,
+                      <0 0 0 2 &intc 0 0 GIC_SPI 265 IRQ_TYPE_LEVEL_HIGH>,
+                      <0 0 0 3 &intc 0 0 GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>,
+                      <0 0 0 4 &intc 0 0 GIC_SPI 267 IRQ_TYPE_LEVEL_HIGH>;
+      #address-cells = <3>;
+      #size-cells = <2>;
+      ranges = <0x01000000 0x0 0x00000000 0x10010000 0x0 0x10000>,
+               <0x02000000 0x0 0x10020000 0x10020000 0x0 0x7fe0000>,
+               <0x42000000 0x0 0x18000000 0x18000000 0x0 0x8000000>;
+      dma-ranges = <0x42000000 0x0 0x80000000 0x80000000 0x0 0x80000000>;
+      clocks = <&rcc CK_BUS_PCIE>;
+      resets = <&rcc PCIE_R>;
+      msi-parent = <&v2m0>;
+      access-controllers = <&rifsc 68>;
+      power-domains = <&CLUSTER_PD>;
+
+      pcie@0,0 {
+        device_type = "pci";
+        reg = <0x0 0x0 0x0 0x0 0x0>;
+        phys = <&combophy PHY_TYPE_PCIE>;
+        wake-gpios = <&gpioh 5 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>;
+        reset-gpios = <&gpioj 8 GPIO_ACTIVE_LOW>;
+        #address-cells = <3>;
+        #size-cells = <2>;
+        ranges;
+      };
+    };
+25 -3
Documentation/devicetree/bindings/pci/ti,am65-pci-host.yaml
···
       - ti,keystone-pcie
 
   reg:
-    maxItems: 4
+    minItems: 4
+    maxItems: 6
 
   reg-names:
+    minItems: 4
     items:
       - const: app
       - const: dbics
       - const: config
       - const: atu
+      - const: vmap_lp
+      - const: vmap_hp
 
   interrupts:
     maxItems: 1
···
     items:
       pattern: '^pcie-phy[0-1]$'
 
+  memory-region:
+    maxItems: 1
+    description: |
+      phandle to a restricted DMA pool to be used for all devices behind
+      this controller. The regions should be defined according to
+      reserved-memory/shared-dma-pool.yaml.
+      Note that enforcement via the PVU will only be available to
+      ti,am654-pcie-rc devices.
+
 required:
   - compatible
   - reg
···
         - power-domains
         - msi-map
         - num-viewport
+    else:
+      properties:
+        reg:
+          maxItems: 4
+
+        reg-names:
+          maxItems: 4
 
 unevaluatedProperties: false
···
     reg = <0x5500000 0x1000>,
           <0x5501000 0x1000>,
           <0x10000000 0x2000>,
-          <0x5506000 0x1000>;
-    reg-names = "app", "dbics", "config", "atu";
+          <0x5506000 0x1000>,
+          <0x2900000 0x1000>,
+          <0x2908000 0x1000>;
+    reg-names = "app", "dbics", "config", "atu", "vmap_lp", "vmap_hp";
     power-domains = <&k3_pds 120 TI_SCI_PD_EXCLUSIVE>;
     #address-cells = <3>;
     #size-cells = <2>;
+55 -2
Documentation/driver-api/pin-control.rst
···
 Pin control requests from drivers
 =================================
 
-When a device driver is about to probe the device core will automatically
-attempt to issue ``pinctrl_get_select_default()`` on these devices.
+When a device driver is about to probe, the device core attaches the
+standard states if they are defined in the device tree by calling
+``pinctrl_bind_pins()`` on these devices.
+Possible standard state names are: "default", "init", "sleep" and "idle".
+
+- if ``default`` is defined in the device tree, it is selected before
+  device probe.
+
+- if ``init`` and ``default`` are defined in the device tree, the "init"
+  state is selected before the driver probe and the "default" state is
+  selected after the driver probe.
+
+- the ``sleep`` and ``idle`` states are for power management and can only
+  be selected with the PM API bellow.
+
+PM interfaces
+=================
+PM runtime suspend/resume might need to execute the same init sequence as
+during probe. Since the predefined states are already attached to the
+device, the driver can activate these states explicitly with the
+following helper functions:
+
+- ``pinctrl_pm_select_default_state()``
+- ``pinctrl_pm_select_init_state()``
+- ``pinctrl_pm_select_sleep_state()``
+- ``pinctrl_pm_select_idle_state()``
+
+For example, if resuming the device depend on certain pinmux states
+
+.. code-block:: c
+
+	foo_suspend()
+	{
+		/* suspend device */
+		...
+
+		pinctrl_pm_select_sleep_state(dev);
+	}
+
+	foo_resume()
+	{
+		pinctrl_pm_select_init_state(dev);
+
+		/* resuming device */
+		...
+
+		pinctrl_pm_select_default_state(dev);
+	}
+
 This way driver writers do not need to add any of the boilerplate code
 of the type found below. However when doing fine-grained state selection
 and not using the "default" state, you may have to do some device driver
···
 operation and going to sleep, moving from the ``PINCTRL_STATE_DEFAULT`` to
 ``PINCTRL_STATE_SLEEP`` at runtime, re-biasing or even re-muxing pins to save
 current in sleep mode.
+
+Another case is when the pinctrl needs to switch to a certain mode during
+probe and then revert to the default state at the end of probe. For example
+a PINMUX may need to be configured as a GPIO during probe. In this case, use
+``PINCTRL_STATE_INIT`` to switch state before probe, then move to
+``PINCTRL_STATE_DEFAULT`` at the end of probe for normal operation.
 
 A driver may request a certain control state to be activated, usually just the
 default state like this:
+7
MAINTAINERS
···
 S:	Maintained
 F:	drivers/pci/controller/dwc/pci-exynos.c
 
+PCI DRIVER FOR STM32MP25
+M:	Christian Bruel <christian.bruel@foss.st.com>
+L:	linux-pci@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/pci/st,stm32-pcie-*.yaml
+F:	drivers/pci/controller/dwc/*stm32*
+
 PCI DRIVER FOR SYNOPSYS DESIGNWARE
 M:	Jingoo Han <jingoohan1@gmail.com>
 M:	Manivannan Sadhasivam <mani@kernel.org>
+11 -28
arch/m68k/kernel/pcibios.c
···
  */
 int pcibios_enable_device(struct pci_dev *dev, int mask)
 {
-	struct resource *r;
 	u16 cmd, newcmd;
-	int idx;
+	int ret;
 
-	pci_read_config_word(dev, PCI_COMMAND, &cmd);
-	newcmd = cmd;
-
-	for (idx = 0; idx < 6; idx++) {
-		/* Only set up the requested stuff */
-		if (!(mask & (1 << idx)))
-			continue;
-
-		r = dev->resource + idx;
-		if (!r->start && r->end) {
-			pr_err("PCI: Device %s not available because of resource collisions\n",
-			       pci_name(dev));
-			return -EINVAL;
-		}
-		if (r->flags & IORESOURCE_IO)
-			newcmd |= PCI_COMMAND_IO;
-		if (r->flags & IORESOURCE_MEM)
-			newcmd |= PCI_COMMAND_MEMORY;
-	}
+	ret = pci_enable_resources(dev, mask);
+	if (ret < 0)
+		return ret;
 
 	/*
 	 * Bridges (eg, cardbus bridges) need to be fully enabled
 	 */
-	if ((dev->class >> 16) == PCI_BASE_CLASS_BRIDGE)
+	if ((dev->class >> 16) == PCI_BASE_CLASS_BRIDGE) {
+		pci_read_config_word(dev, PCI_COMMAND, &cmd);
 		newcmd |= PCI_COMMAND_IO | PCI_COMMAND_MEMORY;
-
-
-	if (newcmd != cmd) {
-		pr_info("PCI: enabling device %s (0x%04x -> 0x%04x)\n",
-			pci_name(dev), cmd, newcmd);
-		pci_write_config_word(dev, PCI_COMMAND, newcmd);
+		if (newcmd != cmd) {
+			pr_info("PCI: enabling bridge %s (0x%04x -> 0x%04x)\n",
+				pci_name(dev), cmd, newcmd);
+			pci_write_config_word(dev, PCI_COMMAND, newcmd);
+		}
 	}
 	return 0;
 }
+2 -36
arch/mips/pci/pci-legacy.c
···
 
 subsys_initcall(pcibios_init);
 
-static int pcibios_enable_resources(struct pci_dev *dev, int mask)
-{
-	u16 cmd, old_cmd;
-	int idx;
-	struct resource *r;
-
-	pci_read_config_word(dev, PCI_COMMAND, &cmd);
-	old_cmd = cmd;
-	pci_dev_for_each_resource(dev, r, idx) {
-		/* Only set up the requested stuff */
-		if (!(mask & (1<<idx)))
-			continue;
-
-		if (!(r->flags & (IORESOURCE_IO | IORESOURCE_MEM)))
-			continue;
-		if ((idx == PCI_ROM_RESOURCE) &&
-				(!(r->flags & IORESOURCE_ROM_ENABLE)))
-			continue;
-		if (!r->start && r->end) {
-			pci_err(dev,
-				"can't enable device: resource collisions\n");
-			return -EINVAL;
-		}
-		if (r->flags & IORESOURCE_IO)
-			cmd |= PCI_COMMAND_IO;
-		if (r->flags & IORESOURCE_MEM)
-			cmd |= PCI_COMMAND_MEMORY;
-	}
-	if (cmd != old_cmd) {
-		pci_info(dev, "enabling device (%04x -> %04x)\n", old_cmd, cmd);
-		pci_write_config_word(dev, PCI_COMMAND, cmd);
-	}
-	return 0;
-}
-
 int pcibios_enable_device(struct pci_dev *dev, int mask)
 {
-	int err = pcibios_enable_resources(dev, mask);
+	int err;
 
+	err = pci_enable_resources(dev, mask);
 	if (err < 0)
 		return err;
+1 -1
arch/powerpc/kernel/eeh_driver.c
···
 	rc = driver->err_handler->error_detected(pdev, pci_channel_io_frozen);
 
 	edev->in_error = true;
-	pci_uevent_ers(pdev, PCI_ERS_RESULT_NONE);
+	pci_uevent_ers(pdev, rc);
 	return rc;
 }
+3
arch/s390/pci/pci_event.c
···
 	pci_ers_result_t ers_res = PCI_ERS_RESULT_DISCONNECT;
 
 	ers_res = driver->err_handler->error_detected(pdev, pdev->error_state);
+	pci_uevent_ers(pdev, ers_res);
 	if (ers_result_indicates_abort(ers_res))
 		pr_info("%s: Automatic recovery failed after initial reporting\n", pci_name(pdev));
 	else if (ers_res == PCI_ERS_RESULT_NEED_RESET)
···
 		ers_res = PCI_ERS_RESULT_RECOVERED;
 
 	if (ers_res != PCI_ERS_RESULT_RECOVERED) {
+		pci_uevent_ers(pdev, PCI_ERS_RESULT_DISCONNECT);
 		pr_err("%s: Automatic recovery failed; operator intervention is required\n",
 		       pci_name(pdev));
 		status_str = "failed (driver can't recover)";
···
 	pr_info("%s: The device is ready to resume operations\n", pci_name(pdev));
 	if (driver->err_handler->resume)
 		driver->err_handler->resume(pdev);
+	pci_uevent_ers(pdev, PCI_ERS_RESULT_RECOVERED);
 out_unlock:
 	pci_dev_unlock(pdev);
 	zpci_report_status(zdev, "recovery", status_str);
-27
arch/sparc/kernel/leon_pci.c
···
 	pci_assign_unassigned_resources();
 	pci_bus_add_devices(root_bus);
 }
-
-int pcibios_enable_device(struct pci_dev *dev, int mask)
-{
-	struct resource *res;
-	u16 cmd, oldcmd;
-	int i;
-
-	pci_read_config_word(dev, PCI_COMMAND, &cmd);
-	oldcmd = cmd;
-
-	pci_dev_for_each_resource(dev, res, i) {
-		/* Only set up the requested stuff */
-		if (!(mask & (1<<i)))
-			continue;
-
-		if (res->flags & IORESOURCE_IO)
-			cmd |= PCI_COMMAND_IO;
-		if (res->flags & IORESOURCE_MEM)
-			cmd |= PCI_COMMAND_MEMORY;
-	}
-
-	if (cmd != oldcmd) {
-		pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd);
-		pci_write_config_word(dev, PCI_COMMAND, cmd);
-	}
-	return 0;
-}
-27
arch/sparc/kernel/pci.c
···
 	return bus;
 }
 
-int pcibios_enable_device(struct pci_dev *dev, int mask)
-{
-	struct resource *res;
-	u16 cmd, oldcmd;
-	int i;
-
-	pci_read_config_word(dev, PCI_COMMAND, &cmd);
-	oldcmd = cmd;
-
-	pci_dev_for_each_resource(dev, res, i) {
-		/* Only set up the requested stuff */
-		if (!(mask & (1<<i)))
-			continue;
-
-		if (res->flags & IORESOURCE_IO)
-			cmd |= PCI_COMMAND_IO;
-		if (res->flags & IORESOURCE_MEM)
-			cmd |= PCI_COMMAND_MEMORY;
-	}
-
-	if (cmd != oldcmd) {
-		pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd);
-		pci_write_config_word(dev, PCI_COMMAND, cmd);
-	}
-	return 0;
-}
-
 /* Platform support for /proc/bus/pci/X/Y mmap()s. */
 int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma)
 {
-27
arch/sparc/kernel/pcic.c
··· 642 } 643 } 644 645 - int pcibios_enable_device(struct pci_dev *dev, int mask) 646 - { 647 - struct resource *res; 648 - u16 cmd, oldcmd; 649 - int i; 650 - 651 - pci_read_config_word(dev, PCI_COMMAND, &cmd); 652 - oldcmd = cmd; 653 - 654 - pci_dev_for_each_resource(dev, res, i) { 655 - /* Only set up the requested stuff */ 656 - if (!(mask & (1<<i))) 657 - continue; 658 - 659 - if (res->flags & IORESOURCE_IO) 660 - cmd |= PCI_COMMAND_IO; 661 - if (res->flags & IORESOURCE_MEM) 662 - cmd |= PCI_COMMAND_MEMORY; 663 - } 664 - 665 - if (cmd != oldcmd) { 666 - pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd); 667 - pci_write_config_word(dev, PCI_COMMAND, cmd); 668 - } 669 - return 0; 670 - } 671 - 672 /* Makes compiler happy */ 673 static volatile int pcic_timer_dummy; 674
··· 642 } 643 } 644 645 /* Makes compiler happy */ 646 static volatile int pcic_timer_dummy; 647
+40
arch/x86/pci/fixup.c
··· 295 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PC1, pcie_rootport_aspm_quirk); 296 297 /* 298 * Fixup to mark boot BIOS video selected by BIOS before it changes 299 * 300 * From information provided by "Jon Smirl" <jonsmirl@gmail.com>
··· 295 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PC1, pcie_rootport_aspm_quirk); 296 297 /* 298 + * PCIe devices underneath Xeon 6 PCIe Root Port bifurcated to x2 have lower 299 + * performance with Extended Tags and MRRS > 128B. Work around the performance 300 + * problems by disabling Extended Tags and limiting MRRS to 128B. 301 + * 302 + * https://cdrdv2.intel.com/v1/dl/getContent/837176 303 + */ 304 + static int limit_mrrs_to_128(struct pci_host_bridge *b, struct pci_dev *pdev) 305 + { 306 + int readrq = pcie_get_readrq(pdev); 307 + 308 + if (readrq > 128) 309 + pcie_set_readrq(pdev, 128); 310 + 311 + return 0; 312 + } 313 + 314 + static void pci_xeon_x2_bifurc_quirk(struct pci_dev *pdev) 315 + { 316 + struct pci_host_bridge *bridge = pci_find_host_bridge(pdev->bus); 317 + u32 linkcap; 318 + 319 + pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &linkcap); 320 + if (FIELD_GET(PCI_EXP_LNKCAP_MLW, linkcap) != 0x2) 321 + return; 322 + 323 + bridge->no_ext_tags = 1; 324 + bridge->enable_device = limit_mrrs_to_128; 325 + pci_info(pdev, "Disabling Extended Tags and limiting MRRS to 128B (performance reasons due to x2 PCIe link)\n"); 326 + } 327 + 328 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db0, pci_xeon_x2_bifurc_quirk); 329 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db1, pci_xeon_x2_bifurc_quirk); 330 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db2, pci_xeon_x2_bifurc_quirk); 331 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db3, pci_xeon_x2_bifurc_quirk); 332 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db6, pci_xeon_x2_bifurc_quirk); 333 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db7, pci_xeon_x2_bifurc_quirk); 334 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db8, pci_xeon_x2_bifurc_quirk); 335 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db9, pci_xeon_x2_bifurc_quirk); 336 + 337 + /* 338 * Fixup to mark boot BIOS video selected by BIOS before it changes 339 * 340 * From information provided by "Jon Smirl" <jonsmirl@gmail.com>
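For reference (not part of the patch): the MRRS limit that limit_mrrs_to_128() requests via pcie_set_readrq() is encoded in the PCIe Device Control register as a 3-bit field (PCI_EXP_DEVCTL_READRQ, bits 14:12), with size = 128 << field. A self-contained user-space sketch of that encoding, for illustration only:

	#include <stdio.h>

	/* MRRS field of the Device Control register: 0 -> 128B ... 5 -> 4096B */
	static unsigned int mrrs_field_to_bytes(unsigned int field)
	{
		return 128U << field;
	}

	static unsigned int mrrs_bytes_to_field(unsigned int bytes)
	{
		unsigned int field = 0;

		while ((128U << field) < bytes)
			field++;	/* round up to the next encodable size */
		return field;
	}

	int main(void)
	{
		printf("128B -> field %u\n", mrrs_bytes_to_field(128));	/* 0 */
		printf("field 2 -> %uB\n", mrrs_field_to_bytes(2));	/* 512 */
		return 0;
	}

So limiting MRRS to 128B for the quirk means writing field value 0, the smallest size the encoding allows.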
+7 -9
drivers/misc/pci_endpoint_test.c
··· 436 { 437 struct pci_dev *pdev = test->pdev; 438 u32 val; 439 - int ret; 440 441 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, 442 msix ? PCITEST_IRQ_TYPE_MSIX : ··· 454 if (!val) 455 return -ETIMEDOUT; 456 457 - ret = pci_irq_vector(pdev, msi_num - 1); 458 - if (ret < 0) 459 - return ret; 460 - 461 - if (ret != test->last_irq) 462 return -EIO; 463 464 return 0; ··· 937 switch (cmd) { 938 case PCITEST_BAR: 939 bar = arg; 940 - if (bar > BAR_5) 941 goto ret; 942 if (is_am654_pci_dev(pdev) && bar == BAR_0) 943 goto ret; ··· 1020 if (!test) 1021 return -ENOMEM; 1022 1023 - test->test_reg_bar = 0; 1024 - test->alignment = 0; 1025 test->pdev = pdev; 1026 test->irq_type = PCITEST_IRQ_TYPE_UNDEFINED; 1027
··· 436 { 437 struct pci_dev *pdev = test->pdev; 438 u32 val; 439 + int irq; 440 + 441 + irq = pci_irq_vector(pdev, msi_num - 1); 442 + if (irq < 0) 443 + return irq; 444 445 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, 446 msix ? PCITEST_IRQ_TYPE_MSIX : ··· 450 if (!val) 451 return -ETIMEDOUT; 452 453 + if (irq != test->last_irq) 454 return -EIO; 455 456 return 0; ··· 937 switch (cmd) { 938 case PCITEST_BAR: 939 bar = arg; 940 + if (bar <= NO_BAR || bar > BAR_5) 941 goto ret; 942 if (is_am654_pci_dev(pdev) && bar == BAR_0) 943 goto ret; ··· 1020 if (!test) 1021 return -ENOMEM; 1022 1023 test->pdev = pdev; 1024 test->irq_type = PCITEST_IRQ_TYPE_UNDEFINED; 1025
-1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
··· 4215 struct qlcnic_adapter *adapter = pci_get_drvdata(pdev); 4216 int err = 0; 4217 4218 - pdev->error_state = pci_channel_io_normal; 4219 err = pci_enable_device(pdev); 4220 if (err) 4221 goto disconnect;
··· 4215 struct qlcnic_adapter *adapter = pci_get_drvdata(pdev); 4216 int err = 0; 4217 4218 err = pci_enable_device(pdev); 4219 if (err) 4220 goto disconnect;
-2
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
··· 3766 struct qlcnic_adapter *adapter = pci_get_drvdata(pdev); 3767 struct net_device *netdev = adapter->netdev; 3768 3769 - pdev->error_state = pci_channel_io_normal; 3770 - 3771 err = pci_enable_device(pdev); 3772 if (err) 3773 return err;
··· 3766 struct qlcnic_adapter *adapter = pci_get_drvdata(pdev); 3767 struct net_device *netdev = adapter->netdev; 3768 3769 err = pci_enable_device(pdev); 3770 if (err) 3771 return err;
-3
drivers/net/ethernet/sfc/efx_common.c
··· 1258 1259 /* For simplicity and reliability, we always require a slot reset and try to 1260 * reset the hardware when a pci error affecting the device is detected. 1261 - * We leave both the link_reset and mmio_enabled callback unimplemented: 1262 - * with our request for slot reset the mmio_enabled callback will never be 1263 - * called, and the link_reset callback is not used by AER or EEH mechanisms. 1264 */ 1265 const struct pci_error_handlers efx_err_handlers = { 1266 .error_detected = efx_io_error_detected,
··· 1258 1259 /* For simplicity and reliability, we always require a slot reset and try to 1260 * reset the hardware when a pci error affecting the device is detected. 1261 */ 1262 const struct pci_error_handlers efx_err_handlers = { 1263 .error_detected = efx_io_error_detected,
-3
drivers/net/ethernet/sfc/falcon/efx.c
··· 3127 3128 /* For simplicity and reliability, we always require a slot reset and try to 3129 * reset the hardware when a pci error affecting the device is detected. 3130 - * We leave both the link_reset and mmio_enabled callback unimplemented: 3131 - * with our request for slot reset the mmio_enabled callback will never be 3132 - * called, and the link_reset callback is not used by AER or EEH mechanisms. 3133 */ 3134 static const struct pci_error_handlers ef4_err_handlers = { 3135 .error_detected = ef4_io_error_detected,
··· 3127 3128 /* For simplicity and reliability, we always require a slot reset and try to 3129 * reset the hardware when a pci error affecting the device is detected. 3130 */ 3131 static const struct pci_error_handlers ef4_err_handlers = { 3132 .error_detected = ef4_io_error_detected,
-3
drivers/net/ethernet/sfc/siena/efx_common.c
··· 1285 1286 /* For simplicity and reliability, we always require a slot reset and try to 1287 * reset the hardware when a pci error affecting the device is detected. 1288 - * We leave both the link_reset and mmio_enabled callback unimplemented: 1289 - * with our request for slot reset the mmio_enabled callback will never be 1290 - * called, and the link_reset callback is not used by AER or EEH mechanisms. 1291 */ 1292 const struct pci_error_handlers efx_siena_err_handlers = { 1293 .error_detected = efx_io_error_detected,
··· 1285 1286 /* For simplicity and reliability, we always require a slot reset and try to 1287 * reset the hardware when a pci error affecting the device is detected. 1288 */ 1289 const struct pci_error_handlers efx_siena_err_handlers = { 1290 .error_detected = efx_io_error_detected,
+12 -5
drivers/pci/bus.c
··· 204 if (!r) 205 continue; 206 207 /* type_mask must match */ 208 if ((res->flags ^ r->flags) & type_mask) 209 continue; ··· 364 * before PCI client drivers. 365 */ 366 pdev = of_find_device_by_node(dn); 367 - if (pdev && of_pci_supply_present(dn)) { 368 - if (!device_link_add(&dev->dev, &pdev->dev, 369 - DL_FLAG_AUTOREMOVE_CONSUMER)) 370 - pci_err(dev, "failed to add device link to power control device %s\n", 371 - pdev->name); 372 } 373 374 if (!dn || of_device_is_available(dn))
··· 204 if (!r) 205 continue; 206 207 + if (r->flags & (IORESOURCE_UNSET|IORESOURCE_DISABLED)) 208 + continue; 209 + 210 /* type_mask must match */ 211 if ((res->flags ^ r->flags) & type_mask) 212 continue; ··· 361 * before PCI client drivers. 362 */ 363 pdev = of_find_device_by_node(dn); 364 + if (pdev) { 365 + if (of_pci_supply_present(dn)) { 366 + if (!device_link_add(&dev->dev, &pdev->dev, 367 + DL_FLAG_AUTOREMOVE_CONSUMER)) { 368 + pci_err(dev, "failed to add device link to power control device %s\n", 369 + pdev->name); 370 + } 371 + } 372 + put_device(&pdev->dev); 373 } 374 375 if (!dn || of_device_is_available(dn))
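For context (an illustrative pattern, not from the patch): the put_device() added above pairs with of_find_device_by_node(), which returns its platform device with an elevated reference count. The general shape is:

	struct platform_device *pdev = of_find_device_by_node(dn);

	if (pdev) {
		/* ... use pdev, e.g. create a device link to it ... */
		put_device(&pdev->dev);	/* drop of_find_device_by_node()'s reference */
	}

Without the put_device(), each lookup of a power control device would leak a reference.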
+10
drivers/pci/controller/cadence/Kconfig
··· 42 endpoint mode. This PCIe controller may be embedded into many 43 different vendors SoCs. 44 45 config PCI_J721E 46 tristate 47 select PCIE_CADENCE_HOST if PCI_J721E_HOST != n ··· 76 Say Y here if you want to support the TI J721E PCIe platform 77 controller in endpoint mode. TI J721E PCIe controller uses Cadence PCIe 78 core. 79 endmenu
··· 42 endpoint mode. This PCIe controller may be embedded into many 43 different vendors SoCs. 44 45 + config PCIE_SG2042_HOST 46 + tristate "Sophgo SG2042 PCIe controller (host mode)" 47 + depends on OF && (ARCH_SOPHGO || COMPILE_TEST) 48 + select PCIE_CADENCE_HOST 49 + help 50 + Say Y here if you want to support the Sophgo SG2042 PCIe platform 51 + controller in host mode. Sophgo SG2042 PCIe controller uses Cadence 52 + PCIe core. 53 + 54 config PCI_J721E 55 tristate 56 select PCIE_CADENCE_HOST if PCI_J721E_HOST != n ··· 67 Say Y here if you want to support the TI J721E PCIe platform 68 controller in endpoint mode. TI J721E PCIe controller uses Cadence PCIe 69 core. 70 + 71 endmenu
+1
drivers/pci/controller/cadence/Makefile
··· 4 obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o 5 obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o 6 obj-$(CONFIG_PCI_J721E) += pci-j721e.o
··· 4 obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o 5 obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o 6 obj-$(CONFIG_PCI_J721E) += pci-j721e.o 7 + obj-$(CONFIG_PCIE_SG2042_HOST) += pcie-sg2042.o
+27 -1
drivers/pci/controller/cadence/pci-j721e.c
··· 284 if (!ret) 285 offset = args.args[0]; 286 287 ret = j721e_pcie_set_mode(pcie, syscon, offset); 288 if (ret < 0) { 289 dev_err(dev, "Failed to set pci mode\n"); ··· 318 ret = j721e_pcie_set_lane_count(pcie, syscon, offset); 319 if (ret < 0) { 320 dev_err(dev, "Failed to set num-lanes\n"); 321 return ret; 322 } 323 ··· 465 }, 466 {}, 467 }; 468 469 static int j721e_pcie_probe(struct platform_device *pdev) 470 { ··· 575 576 ret = j721e_pcie_ctrl_init(pcie); 577 if (ret < 0) { 578 - dev_err_probe(dev, ret, "pm_runtime_get_sync failed\n"); 579 goto err_get_sync; 580 } 581
··· 284 if (!ret) 285 offset = args.args[0]; 286 287 + /* 288 + * The PCIe Controller's registers have different "reset-values" 289 + * depending on the "strap" settings programmed into the PCIEn_CTRL 290 + * register within the CTRL_MMR memory-mapped register space. 291 + * The registers latch onto a "reset-value" based on the "strap" 292 + * settings sampled after the PCIe Controller is powered on. 293 + * To ensure that the "reset-values" are sampled accurately, power 294 + * off the PCIe Controller before programming the "strap" settings 295 + * and power it on after that. The runtime PM APIs namely 296 + * pm_runtime_put_sync() and pm_runtime_get_sync() will decrement and 297 + * increment the usage counter respectively, causing GENPD to power off 298 + * and power on the PCIe Controller. 299 + */ 300 + ret = pm_runtime_put_sync(dev); 301 + if (ret < 0) { 302 + dev_err(dev, "Failed to power off PCIe Controller\n"); 303 + return ret; 304 + } 305 + 306 ret = j721e_pcie_set_mode(pcie, syscon, offset); 307 if (ret < 0) { 308 dev_err(dev, "Failed to set pci mode\n"); ··· 299 ret = j721e_pcie_set_lane_count(pcie, syscon, offset); 300 if (ret < 0) { 301 dev_err(dev, "Failed to set num-lanes\n"); 302 + return ret; 303 + } 304 + 305 + ret = pm_runtime_get_sync(dev); 306 + if (ret < 0) { 307 + dev_err(dev, "Failed to power on PCIe Controller\n"); 308 return ret; 309 } 310 ··· 440 }, 441 {}, 442 }; 443 + MODULE_DEVICE_TABLE(of, of_j721e_pcie_match); 444 445 static int j721e_pcie_probe(struct platform_device *pdev) 446 { ··· 549 550 ret = j721e_pcie_ctrl_init(pcie); 551 if (ret < 0) { 552 + dev_err_probe(dev, ret, "j721e_pcie_ctrl_init failed\n"); 553 goto err_get_sync; 554 } 555
+22 -18
drivers/pci/controller/cadence/pcie-cadence-ep.c
··· 21 22 static u8 cdns_pcie_get_fn_from_vfn(struct cdns_pcie *pcie, u8 fn, u8 vfn) 23 { 24 - u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET; 25 u32 first_vf_offset, stride; 26 27 if (vfn == 0) 28 return fn; 29 30 first_vf_offset = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_OFFSET); 31 stride = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_STRIDE); 32 fn = fn + first_vf_offset + ((vfn - 1) * stride); ··· 39 struct pci_epf_header *hdr) 40 { 41 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 42 - u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET; 43 struct cdns_pcie *pcie = &ep->pcie; 44 u32 reg; 45 46 if (vfn > 1) { 47 dev_err(&epc->dev, "Only Virtual Function #1 has deviceID\n"); 48 return -EINVAL; ··· 229 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 230 struct cdns_pcie *pcie = &ep->pcie; 231 u8 mmc = order_base_2(nr_irqs); 232 - u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET; 233 u16 flags; 234 235 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 236 237 /* ··· 252 { 253 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 254 struct cdns_pcie *pcie = &ep->pcie; 255 - u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET; 256 u16 flags, mme; 257 258 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 259 260 /* Validate that the MSI feature is actually enabled. */ ··· 276 { 277 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 278 struct cdns_pcie *pcie = &ep->pcie; 279 - u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET; 280 u32 val, reg; 281 282 func_no = cdns_pcie_get_fn_from_vfn(pcie, func_no, vfunc_no); 283 284 reg = cap + PCI_MSIX_FLAGS; ··· 297 { 298 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 299 struct cdns_pcie *pcie = &ep->pcie; 300 - u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET; 301 u32 val, reg; 302 303 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 304 305 reg = cap + PCI_MSIX_FLAGS; ··· 386 u8 interrupt_num) 387 { 388 struct cdns_pcie *pcie = &ep->pcie; 389 - u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET; 390 u16 flags, mme, data, data_mask; 391 - u8 msi_count; 392 u64 pci_addr, pci_addr_mask = 0xff; 393 394 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 395 396 /* Check whether the MSI feature has been enabled by the PCI host. */ ··· 438 u32 *msi_addr_offset) 439 { 440 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 441 - u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET; 442 struct cdns_pcie *pcie = &ep->pcie; 443 u64 pci_addr, pci_addr_mask = 0xff; 444 u16 flags, mme, data, data_mask; 445 - u8 msi_count; 446 int ret; 447 int i; 448 449 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 450 451 /* Check whether the MSI feature has been enabled by the PCI host. */ ··· 488 static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn, 489 u16 interrupt_num) 490 { 491 - u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET; 492 u32 tbl_offset, msg_data, reg; 493 struct cdns_pcie *pcie = &ep->pcie; 494 struct pci_epf_msix_tbl *msix_tbl; 495 struct cdns_pcie_epf *epf; 496 u64 pci_addr_mask = 0xff; 497 u64 msg_addr; 498 u16 flags; 499 - u8 bir; 500 501 epf = &ep->epf[fn]; 502 if (vfn > 0) 503 epf = &epf->epf[vfn - 1]; ··· 571 int max_epfs = sizeof(epc->function_num_map) * 8; 572 int ret, epf, last_fn; 573 u32 reg, value; 574 575 /* 576 * BIT(0) is hardwired to 1, hence function 0 is always enabled 577 * and can't be disabled anyway. 
··· 597 continue; 598 599 value = cdns_pcie_ep_fn_readl(pcie, epf, 600 - CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET + 601 - PCI_EXP_DEVCAP); 602 value &= ~PCI_EXP_DEVCAP_FLR; 603 cdns_pcie_ep_fn_writel(pcie, epf, 604 - CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET + 605 - PCI_EXP_DEVCAP, value); 606 } 607 } 608 ··· 614 } 615 616 static const struct pci_epc_features cdns_pcie_epc_vf_features = { 617 - .linkup_notifier = false, 618 .msi_capable = true, 619 .msix_capable = true, 620 .align = 65536, 621 }; 622 623 static const struct pci_epc_features cdns_pcie_epc_features = { 624 - .linkup_notifier = false, 625 .msi_capable = true, 626 .msix_capable = true, 627 .align = 256,
··· 21 22 static u8 cdns_pcie_get_fn_from_vfn(struct cdns_pcie *pcie, u8 fn, u8 vfn) 23 { 24 u32 first_vf_offset, stride; 25 + u16 cap; 26 27 if (vfn == 0) 28 return fn; 29 30 + cap = cdns_pcie_find_ext_capability(pcie, PCI_EXT_CAP_ID_SRIOV); 31 first_vf_offset = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_OFFSET); 32 stride = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_STRIDE); 33 fn = fn + first_vf_offset + ((vfn - 1) * stride); ··· 38 struct pci_epf_header *hdr) 39 { 40 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 41 struct cdns_pcie *pcie = &ep->pcie; 42 u32 reg; 43 + u16 cap; 44 45 + cap = cdns_pcie_find_ext_capability(pcie, PCI_EXT_CAP_ID_SRIOV); 46 if (vfn > 1) { 47 dev_err(&epc->dev, "Only Virtual Function #1 has deviceID\n"); 48 return -EINVAL; ··· 227 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 228 struct cdns_pcie *pcie = &ep->pcie; 229 u8 mmc = order_base_2(nr_irqs); 230 u16 flags; 231 + u8 cap; 232 233 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI); 234 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 235 236 /* ··· 249 { 250 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 251 struct cdns_pcie *pcie = &ep->pcie; 252 u16 flags, mme; 253 + u8 cap; 254 255 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI); 256 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 257 258 /* Validate that the MSI feature is actually enabled. */ ··· 272 { 273 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 274 struct cdns_pcie *pcie = &ep->pcie; 275 u32 val, reg; 276 + u8 cap; 277 278 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX); 279 func_no = cdns_pcie_get_fn_from_vfn(pcie, func_no, vfunc_no); 280 281 reg = cap + PCI_MSIX_FLAGS; ··· 292 { 293 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 294 struct cdns_pcie *pcie = &ep->pcie; 295 u32 val, reg; 296 + u8 cap; 297 298 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX); 299 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 300 301 reg = cap + PCI_MSIX_FLAGS; ··· 380 u8 interrupt_num) 381 { 382 struct cdns_pcie *pcie = &ep->pcie; 383 u16 flags, mme, data, data_mask; 384 u64 pci_addr, pci_addr_mask = 0xff; 385 + u8 msi_count, cap; 386 387 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI); 388 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 389 390 /* Check whether the MSI feature has been enabled by the PCI host. */ ··· 432 u32 *msi_addr_offset) 433 { 434 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 435 struct cdns_pcie *pcie = &ep->pcie; 436 u64 pci_addr, pci_addr_mask = 0xff; 437 u16 flags, mme, data, data_mask; 438 + u8 msi_count, cap; 439 int ret; 440 int i; 441 442 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI); 443 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 444 445 /* Check whether the MSI feature has been enabled by the PCI host.
*/ ··· 482 static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn, 483 u16 interrupt_num) 484 { 485 u32 tbl_offset, msg_data, reg; 486 struct cdns_pcie *pcie = &ep->pcie; 487 struct pci_epf_msix_tbl *msix_tbl; 488 struct cdns_pcie_epf *epf; 489 u64 pci_addr_mask = 0xff; 490 u64 msg_addr; 491 + u8 bir, cap; 492 u16 flags; 493 494 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX); 495 epf = &ep->epf[fn]; 496 if (vfn > 0) 497 epf = &epf->epf[vfn - 1]; ··· 565 int max_epfs = sizeof(epc->function_num_map) * 8; 566 int ret, epf, last_fn; 567 u32 reg, value; 568 + u8 cap; 569 570 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_EXP); 571 /* 572 * BIT(0) is hardwired to 1, hence function 0 is always enabled 573 * and can't be disabled anyway. ··· 589 continue; 590 591 value = cdns_pcie_ep_fn_readl(pcie, epf, 592 + cap + PCI_EXP_DEVCAP); 593 value &= ~PCI_EXP_DEVCAP_FLR; 594 cdns_pcie_ep_fn_writel(pcie, epf, 595 + cap + PCI_EXP_DEVCAP, value); 596 } 597 } 598 ··· 608 } 609 610 static const struct pci_epc_features cdns_pcie_epc_vf_features = { 611 .msi_capable = true, 612 .msix_capable = true, 613 .align = 65536, 614 }; 615 616 static const struct pci_epc_features cdns_pcie_epc_features = { 617 .msi_capable = true, 618 .msix_capable = true, 619 .align = 256,
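The cdns_pcie_get_fn_from_vfn() arithmetic above is the standard SR-IOV mapping: VF N of physical function fn lives at fn + First VF Offset + (N - 1) * VF Stride, both values now read from the SR-IOV capability located at runtime. A minimal user-space sketch with invented offset/stride values:

	#include <stdio.h>

	static unsigned int vf_to_fn(unsigned int fn, unsigned int vfn,
				     unsigned int first_vf_offset,
				     unsigned int stride)
	{
		if (vfn == 0)		/* vfn 0 denotes the PF itself */
			return fn;
		return fn + first_vf_offset + (vfn - 1) * stride;
	}

	int main(void)
	{
		/* hypothetical device: First VF Offset = 4, VF Stride = 1 */
		printf("PF0 VF1 -> fn %u\n", vf_to_fn(0, 1, 4, 1));	/* 4 */
		printf("PF0 VF3 -> fn %u\n", vf_to_fn(0, 3, 4, 1));	/* 6 */
		return 0;
	}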
+1 -1
drivers/pci/controller/cadence/pcie-cadence-host.c
··· 531 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(0), addr1); 532 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(0), desc1); 533 534 - if (pcie->ops->cpu_addr_fixup) 535 cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); 536 537 addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(12) |
··· 531 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(0), addr1); 532 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(0), desc1); 533 534 + if (pcie->ops && pcie->ops->cpu_addr_fixup) 535 cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); 536 537 addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(12) |
+16 -2
drivers/pci/controller/cadence/pcie-cadence.c
··· 8 #include <linux/of.h> 9 10 #include "pcie-cadence.h" 11 12 void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie) 13 { ··· 106 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), desc1); 107 108 /* Set the CPU address */ 109 - if (pcie->ops->cpu_addr_fixup) 110 cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); 111 112 addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) | ··· 137 } 138 139 /* Set the CPU address */ 140 - if (pcie->ops->cpu_addr_fixup) 141 cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); 142 143 addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(17) |
··· 8 #include <linux/of.h> 9 10 #include "pcie-cadence.h" 11 + #include "../../pci.h" 12 + 13 + u8 cdns_pcie_find_capability(struct cdns_pcie *pcie, u8 cap) 14 + { 15 + return PCI_FIND_NEXT_CAP(cdns_pcie_read_cfg, PCI_CAPABILITY_LIST, 16 + cap, pcie); 17 + } 18 + EXPORT_SYMBOL_GPL(cdns_pcie_find_capability); 19 + 20 + u16 cdns_pcie_find_ext_capability(struct cdns_pcie *pcie, u8 cap) 21 + { 22 + return PCI_FIND_NEXT_EXT_CAP(cdns_pcie_read_cfg, 0, cap, pcie); 23 + } 24 + EXPORT_SYMBOL_GPL(cdns_pcie_find_ext_capability); 25 26 void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie) 27 { ··· 92 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), desc1); 93 94 /* Set the CPU address */ 95 + if (pcie->ops && pcie->ops->cpu_addr_fixup) 96 cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); 97 98 addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) | ··· 123 } 124 125 /* Set the CPU address */ 126 + if (pcie->ops && pcie->ops->cpu_addr_fixup) 127 cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); 128 129 addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(17) |
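Both the cadence helpers above and the DWC conversion later in this series reduce to the same walk that pci_find_capability() performs: read the start pointer at PCI_CAPABILITY_LIST (0x34), then follow 8-bit next pointers, matching on the 8-bit capability ID. A self-contained user-space sketch over a synthetic config space (offsets invented for the demo; the kernel versions additionally bound the iteration count and mask the low pointer bits):

	#include <stdio.h>
	#include <stdint.h>

	#define PCI_CAPABILITY_LIST	0x34
	#define PCI_CAP_ID_MSI		0x05

	static uint8_t find_capability(const uint8_t *cfg, uint8_t cap)
	{
		uint8_t pos = cfg[PCI_CAPABILITY_LIST];
		int ttl = 48;			/* guard against looping chains */

		while (pos && ttl--) {
			if (cfg[pos] == cap)	/* byte 0 of a node: capability ID */
				return pos;
			pos = cfg[pos + 1];	/* byte 1 of a node: next pointer */
		}
		return 0;
	}

	int main(void)
	{
		uint8_t cfg[256] = { 0 };

		cfg[PCI_CAPABILITY_LIST] = 0x40;
		cfg[0x40] = 0x01;		/* PM capability ... */
		cfg[0x41] = 0x50;		/* ... chained to 0x50 */
		cfg[0x50] = PCI_CAP_ID_MSI;	/* MSI capability */
		cfg[0x51] = 0x00;		/* end of list */

		printf("MSI capability at 0x%02x\n",
		       find_capability(cfg, PCI_CAP_ID_MSI));
		return 0;
	}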
+37 -8
drivers/pci/controller/cadence/pcie-cadence.h
··· 125 */ 126 #define CDNS_PCIE_EP_FUNC_BASE(fn) (((fn) << 12) & GENMASK(19, 12)) 127 128 - #define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET 0x90 129 - #define CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET 0xb0 130 - #define CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET 0xc0 131 - #define CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET 0x200 132 - 133 /* 134 * Endpoint PF Registers 135 */ ··· 362 return readl(pcie->reg_base + reg); 363 } 364 365 static inline u32 cdns_pcie_read_sz(void __iomem *addr, int size) 366 { 367 void __iomem *aligned_addr = PTR_ALIGN_DOWN(addr, 0x4); ··· 494 495 static inline int cdns_pcie_start_link(struct cdns_pcie *pcie) 496 { 497 - if (pcie->ops->start_link) 498 return pcie->ops->start_link(pcie); 499 500 return 0; ··· 502 503 static inline void cdns_pcie_stop_link(struct cdns_pcie *pcie) 504 { 505 - if (pcie->ops->stop_link) 506 pcie->ops->stop_link(pcie); 507 } 508 509 static inline bool cdns_pcie_link_up(struct cdns_pcie *pcie) 510 { 511 - if (pcie->ops->link_up) 512 return pcie->ops->link_up(pcie); 513 514 return true; ··· 561 { 562 } 563 #endif 564 565 void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie); 566
··· 125 */ 126 #define CDNS_PCIE_EP_FUNC_BASE(fn) (((fn) << 12) & GENMASK(19, 12)) 127 128 /* 129 * Endpoint PF Registers 130 */ ··· 367 return readl(pcie->reg_base + reg); 368 } 369 370 + static inline u16 cdns_pcie_readw(struct cdns_pcie *pcie, u32 reg) 371 + { 372 + return readw(pcie->reg_base + reg); 373 + } 374 + 375 + static inline u8 cdns_pcie_readb(struct cdns_pcie *pcie, u32 reg) 376 + { 377 + return readb(pcie->reg_base + reg); 378 + } 379 + 380 + static inline int cdns_pcie_read_cfg_byte(struct cdns_pcie *pcie, int where, 381 + u8 *val) 382 + { 383 + *val = cdns_pcie_readb(pcie, where); 384 + return PCIBIOS_SUCCESSFUL; 385 + } 386 + 387 + static inline int cdns_pcie_read_cfg_word(struct cdns_pcie *pcie, int where, 388 + u16 *val) 389 + { 390 + *val = cdns_pcie_readw(pcie, where); 391 + return PCIBIOS_SUCCESSFUL; 392 + } 393 + 394 + static inline int cdns_pcie_read_cfg_dword(struct cdns_pcie *pcie, int where, 395 + u32 *val) 396 + { 397 + *val = cdns_pcie_readl(pcie, where); 398 + return PCIBIOS_SUCCESSFUL; 399 + } 400 + 401 static inline u32 cdns_pcie_read_sz(void __iomem *addr, int size) 402 { 403 void __iomem *aligned_addr = PTR_ALIGN_DOWN(addr, 0x4); ··· 468 469 static inline int cdns_pcie_start_link(struct cdns_pcie *pcie) 470 { 471 + if (pcie->ops && pcie->ops->start_link) 472 return pcie->ops->start_link(pcie); 473 474 return 0; ··· 476 477 static inline void cdns_pcie_stop_link(struct cdns_pcie *pcie) 478 { 479 + if (pcie->ops && pcie->ops->stop_link) 480 pcie->ops->stop_link(pcie); 481 } 482 483 static inline bool cdns_pcie_link_up(struct cdns_pcie *pcie) 484 { 485 + if (pcie->ops && pcie->ops->link_up) 486 return pcie->ops->link_up(pcie); 487 488 return true; ··· 535 { 536 } 537 #endif 538 + 539 + u8 cdns_pcie_find_capability(struct cdns_pcie *pcie, u8 cap); 540 + u16 cdns_pcie_find_ext_capability(struct cdns_pcie *pcie, u8 cap); 541 542 void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie); 543
+134
drivers/pci/controller/cadence/pcie-sg2042.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * pcie-sg2042 - PCIe controller driver for Sophgo SG2042 SoC 4 + * 5 + * Copyright (C) 2025 Sophgo Technology Inc. 6 + * Copyright (C) 2025 Chen Wang <unicorn_wang@outlook.com> 7 + */ 8 + 9 + #include <linux/mod_devicetable.h> 10 + #include <linux/pci.h> 11 + #include <linux/platform_device.h> 12 + #include <linux/pm_runtime.h> 13 + 14 + #include "pcie-cadence.h" 15 + 16 + /* 17 + * SG2042 only supports 4-byte aligned access, so for the rootbus (i.e. to 18 + * read/write the Root Port itself), read32/write32 is required. For 19 + * non-rootbus (i.e. to read/write the PCIe peripheral registers), 1/2/4 20 + * byte aligned access is supported, so plain read/write is fine. 21 + */ 22 + 23 + static struct pci_ops sg2042_pcie_root_ops = { 24 + .map_bus = cdns_pci_map_bus, 25 + .read = pci_generic_config_read32, 26 + .write = pci_generic_config_write32, 27 + }; 28 + 29 + static struct pci_ops sg2042_pcie_child_ops = { 30 + .map_bus = cdns_pci_map_bus, 31 + .read = pci_generic_config_read, 32 + .write = pci_generic_config_write, 33 + }; 34 + 35 + static int sg2042_pcie_probe(struct platform_device *pdev) 36 + { 37 + struct device *dev = &pdev->dev; 38 + struct pci_host_bridge *bridge; 39 + struct cdns_pcie *pcie; 40 + struct cdns_pcie_rc *rc; 41 + int ret; 42 + 43 + bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc)); 44 + if (!bridge) 45 + return dev_err_probe(dev, -ENOMEM, "Failed to alloc host bridge!\n"); 46 + 47 + bridge->ops = &sg2042_pcie_root_ops; 48 + bridge->child_ops = &sg2042_pcie_child_ops; 49 + 50 + rc = pci_host_bridge_priv(bridge); 51 + pcie = &rc->pcie; 52 + pcie->dev = dev; 53 + 54 + platform_set_drvdata(pdev, pcie); 55 + 56 + pm_runtime_set_active(dev); 57 + pm_runtime_no_callbacks(dev); 58 + devm_pm_runtime_enable(dev); 59 + 60 + ret = cdns_pcie_init_phy(dev, pcie); 61 + if (ret) 62 + return dev_err_probe(dev, ret, "Failed to init phy!\n"); 63 + 64 + ret = cdns_pcie_host_setup(rc); 65 + if (ret) { 66 + dev_err_probe(dev, ret, "Failed to setup host!\n"); 67 + cdns_pcie_disable_phy(pcie); 68 + return ret; 69 + } 70 + 71 + return 0; 72 + } 73 + 74 + static void sg2042_pcie_remove(struct platform_device *pdev) 75 + { 76 + struct cdns_pcie *pcie = platform_get_drvdata(pdev); 77 + struct device *dev = &pdev->dev; 78 + struct cdns_pcie_rc *rc; 79 + 80 + rc = container_of(pcie, struct cdns_pcie_rc, pcie); 81 + cdns_pcie_host_disable(rc); 82 + 83 + cdns_pcie_disable_phy(pcie); 84 + 85 + pm_runtime_disable(dev); 86 + } 87 + 88 + static int sg2042_pcie_suspend_noirq(struct device *dev) 89 + { 90 + struct cdns_pcie *pcie = dev_get_drvdata(dev); 91 + 92 + cdns_pcie_disable_phy(pcie); 93 + 94 + return 0; 95 + } 96 + 97 + static int sg2042_pcie_resume_noirq(struct device *dev) 98 + { 99 + struct cdns_pcie *pcie = dev_get_drvdata(dev); 100 + int ret; 101 + 102 + ret = cdns_pcie_enable_phy(pcie); 103 + if (ret) { 104 + dev_err(dev, "failed to enable PHY\n"); 105 + return ret; 106 + } 107 + 108 + return 0; 109 + } 110 + 111 + static DEFINE_NOIRQ_DEV_PM_OPS(sg2042_pcie_pm_ops, 112 + sg2042_pcie_suspend_noirq, 113 + sg2042_pcie_resume_noirq); 114 + 115 + static const struct of_device_id sg2042_pcie_of_match[] = { 116 + { .compatible = "sophgo,sg2042-pcie-host" }, 117 + {}, 118 + }; 119 + MODULE_DEVICE_TABLE(of, sg2042_pcie_of_match); 120 + 121 + static struct platform_driver sg2042_pcie_driver = { 122 + .driver = { 123 + .name = "sg2042-pcie", 124 + .of_match_table = sg2042_pcie_of_match, 125 + .pm = pm_sleep_ptr(&sg2042_pcie_pm_ops), 126 + }, 127 + .probe = sg2042_pcie_probe, 128 + .remove = sg2042_pcie_remove, 129 + }; 130 + module_platform_driver(sg2042_pcie_driver); 131 + 132 + MODULE_LICENSE("GPL"); 133 + MODULE_DESCRIPTION("PCIe controller driver for SG2042 SoCs"); 134 + MODULE_AUTHOR("Chen Wang <unicorn_wang@outlook.com>");
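The root-bus restriction in the comment above is why sg2042_pcie_root_ops uses pci_generic_config_read32()/write32(): those helpers only ever issue aligned 32-bit accesses and shift/mask out the byte or word the caller asked for. A user-space sketch of the read-side extraction (illustrative only):

	#include <stdio.h>
	#include <stdint.h>

	/* stand-in for a 32-bit-only MMIO read of config space */
	static uint32_t read32(const uint32_t *cfg, unsigned int where)
	{
		return cfg[where / 4];
	}

	static uint32_t read_cfg(const uint32_t *cfg, unsigned int where, int size)
	{
		uint32_t val = read32(cfg, where & ~3U);

		val >>= 8 * (where & 3);		/* align requested offset */
		if (size < 4)
			val &= (1U << (size * 8)) - 1;	/* keep 1 or 2 bytes */
		return val;
	}

	int main(void)
	{
		uint32_t cfg[2] = { 0x12345678, 0xdeadbeef };

		printf("byte @1 = 0x%02x\n", read_cfg(cfg, 1, 1));	/* 0x56 */
		printf("word @2 = 0x%04x\n", read_cfg(cfg, 2, 2));	/* 0x1234 */
		return 0;
	}

The write path is the trade-off: a sub-dword write has to read-modify-write the whole dword, which is why pci_generic_config_write32() warns that it may corrupt write-1-to-clear bits in the same dword.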
+26
drivers/pci/controller/dwc/Kconfig
··· 20 bool 21 select PCIE_DW 22 select IRQ_MSI_LIB 23 24 config PCIE_DW_EP 25 bool ··· 299 select CRC8 300 select PCIE_QCOM_COMMON 301 select PCI_HOST_COMMON 302 help 303 Say Y here to enable PCIe controller support on Qualcomm SoCs. The 304 PCIe controller uses the DesignWare core plus Qualcomm-specific ··· 423 select PCIE_DW_HOST 424 help 425 Say Y here if you want PCIe support on SPEAr13XX SoCs. 426 427 config PCI_DRA7XX 428 tristate
··· 20 bool 21 select PCIE_DW 22 select IRQ_MSI_LIB 23 + select PCI_HOST_COMMON 24 25 config PCIE_DW_EP 26 bool ··· 298 select CRC8 299 select PCIE_QCOM_COMMON 300 select PCI_HOST_COMMON 301 + select PCI_PWRCTRL_SLOT 302 help 303 Say Y here to enable PCIe controller support on Qualcomm SoCs. The 304 PCIe controller uses the DesignWare core plus Qualcomm-specific ··· 421 select PCIE_DW_HOST 422 help 423 Say Y here if you want PCIe support on SPEAr13XX SoCs. 424 + 425 + config PCIE_STM32_HOST 426 + tristate "STMicroelectronics STM32MP25 PCIe Controller (host mode)" 427 + depends on ARCH_STM32 || COMPILE_TEST 428 + depends on PCI_MSI 429 + select PCIE_DW_HOST 430 + help 431 + Enables Root Complex (RC) support for the DesignWare core based PCIe 432 + controller found in STM32MP25 SoC. 433 + 434 + This driver can also be built as a module. If so, the module 435 + will be called pcie-stm32. 436 + 437 + config PCIE_STM32_EP 438 + tristate "STMicroelectronics STM32MP25 PCIe Controller (endpoint mode)" 439 + depends on ARCH_STM32 || COMPILE_TEST 440 + depends on PCI_ENDPOINT 441 + select PCIE_DW_EP 442 + help 443 + Enables Endpoint (EP) support for the DesignWare core based PCIe 444 + controller found in STM32MP25 SoC. 445 + 446 + This driver can also be built as a module. If so, the module 447 + will be called pcie-stm32-ep. 448 449 config PCI_DRA7XX 450 tristate
+2
drivers/pci/controller/dwc/Makefile
··· 31 obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o 32 obj-$(CONFIG_PCIE_VISCONTI_HOST) += pcie-visconti.o 33 obj-$(CONFIG_PCIE_RCAR_GEN4) += pcie-rcar-gen4.o 34 35 # The following drivers are for devices that use the generic ACPI 36 # pci_root.c driver but don't support standard ECAM config access.
··· 31 obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o 32 obj-$(CONFIG_PCIE_VISCONTI_HOST) += pcie-visconti.o 33 obj-$(CONFIG_PCIE_RCAR_GEN4) += pcie-rcar-gen4.o 34 + obj-$(CONFIG_PCIE_STM32_HOST) += pcie-stm32.o 35 + obj-$(CONFIG_PCIE_STM32_EP) += pcie-stm32-ep.o 36 37 # The following drivers are for devices that use the generic ACPI 38 # pci_root.c driver but don't support standard ECAM config access.
-1
drivers/pci/controller/dwc/pci-dra7xx.c
··· 426 static const struct pci_epc_features dra7xx_pcie_epc_features = { 427 .linkup_notifier = true, 428 .msi_capable = true, 429 - .msix_capable = false, 430 }; 431 432 static const struct pci_epc_features*
··· 426 static const struct pci_epc_features dra7xx_pcie_epc_features = { 427 .linkup_notifier = true, 428 .msi_capable = true, 429 }; 430 431 static const struct pci_epc_features*
+31 -31
drivers/pci/controller/dwc/pci-exynos.c
··· 53 54 struct exynos_pcie { 55 struct dw_pcie pci; 56 - void __iomem *elbi_base; 57 struct clk_bulk_data *clks; 58 struct phy *phy; 59 struct regulator_bulk_data supplies[2]; ··· 70 71 static void exynos_pcie_sideband_dbi_w_mode(struct exynos_pcie *ep, bool on) 72 { 73 u32 val; 74 75 - val = exynos_pcie_readl(ep->elbi_base, PCIE_ELBI_SLV_AWMISC); 76 if (on) 77 val |= PCIE_ELBI_SLV_DBI_ENABLE; 78 else 79 val &= ~PCIE_ELBI_SLV_DBI_ENABLE; 80 - exynos_pcie_writel(ep->elbi_base, val, PCIE_ELBI_SLV_AWMISC); 81 } 82 83 static void exynos_pcie_sideband_dbi_r_mode(struct exynos_pcie *ep, bool on) 84 { 85 u32 val; 86 87 - val = exynos_pcie_readl(ep->elbi_base, PCIE_ELBI_SLV_ARMISC); 88 if (on) 89 val |= PCIE_ELBI_SLV_DBI_ENABLE; 90 else 91 val &= ~PCIE_ELBI_SLV_DBI_ENABLE; 92 - exynos_pcie_writel(ep->elbi_base, val, PCIE_ELBI_SLV_ARMISC); 93 } 94 95 static void exynos_pcie_assert_core_reset(struct exynos_pcie *ep) 96 { 97 u32 val; 98 99 - val = exynos_pcie_readl(ep->elbi_base, PCIE_CORE_RESET); 100 val &= ~PCIE_CORE_RESET_ENABLE; 101 - exynos_pcie_writel(ep->elbi_base, val, PCIE_CORE_RESET); 102 - exynos_pcie_writel(ep->elbi_base, 0, PCIE_STICKY_RESET); 103 - exynos_pcie_writel(ep->elbi_base, 0, PCIE_NONSTICKY_RESET); 104 } 105 106 static void exynos_pcie_deassert_core_reset(struct exynos_pcie *ep) 107 { 108 u32 val; 109 110 - val = exynos_pcie_readl(ep->elbi_base, PCIE_CORE_RESET); 111 val |= PCIE_CORE_RESET_ENABLE; 112 113 - exynos_pcie_writel(ep->elbi_base, val, PCIE_CORE_RESET); 114 - exynos_pcie_writel(ep->elbi_base, 1, PCIE_STICKY_RESET); 115 - exynos_pcie_writel(ep->elbi_base, 1, PCIE_NONSTICKY_RESET); 116 - exynos_pcie_writel(ep->elbi_base, 1, PCIE_APP_INIT_RESET); 117 - exynos_pcie_writel(ep->elbi_base, 0, PCIE_APP_INIT_RESET); 118 } 119 120 static int exynos_pcie_start_link(struct dw_pcie *pci) 121 { 122 - struct exynos_pcie *ep = to_exynos_pcie(pci); 123 u32 val; 124 125 - val = exynos_pcie_readl(ep->elbi_base, PCIE_SW_WAKE); 126 val &= ~PCIE_BUS_EN; 127 - exynos_pcie_writel(ep->elbi_base, val, PCIE_SW_WAKE); 128 129 /* assert LTSSM enable */ 130 - exynos_pcie_writel(ep->elbi_base, PCIE_ELBI_LTSSM_ENABLE, 131 PCIE_APP_LTSSM_ENABLE); 132 return 0; 133 } 134 135 static void exynos_pcie_clear_irq_pulse(struct exynos_pcie *ep) 136 { 137 - u32 val = exynos_pcie_readl(ep->elbi_base, PCIE_IRQ_PULSE); 138 139 - exynos_pcie_writel(ep->elbi_base, val, PCIE_IRQ_PULSE); 140 } 141 142 static irqreturn_t exynos_pcie_irq_handler(int irq, void *arg) ··· 154 155 static void exynos_pcie_enable_irq_pulse(struct exynos_pcie *ep) 156 { 157 u32 val = IRQ_INTA_ASSERT | IRQ_INTB_ASSERT | 158 IRQ_INTC_ASSERT | IRQ_INTD_ASSERT; 159 160 - exynos_pcie_writel(ep->elbi_base, val, PCIE_IRQ_EN_PULSE); 161 - exynos_pcie_writel(ep->elbi_base, 0, PCIE_IRQ_EN_LEVEL); 162 - exynos_pcie_writel(ep->elbi_base, 0, PCIE_IRQ_EN_SPECIAL); 163 } 164 165 static u32 exynos_pcie_read_dbi(struct dw_pcie *pci, void __iomem *base, ··· 217 218 static bool exynos_pcie_link_up(struct dw_pcie *pci) 219 { 220 - struct exynos_pcie *ep = to_exynos_pcie(pci); 221 - u32 val = exynos_pcie_readl(ep->elbi_base, PCIE_ELBI_RDLH_LINKUP); 222 223 return val & PCIE_ELBI_XMLH_LINKUP; 224 } ··· 299 ep->phy = devm_of_phy_get(dev, np, NULL); 300 if (IS_ERR(ep->phy)) 301 return PTR_ERR(ep->phy); 302 - 303 - /* External Local Bus interface (ELBI) registers */ 304 - ep->elbi_base = devm_platform_ioremap_resource_byname(pdev, "elbi"); 305 - if (IS_ERR(ep->elbi_base)) 306 - return PTR_ERR(ep->elbi_base); 307 308 ret = devm_clk_bulk_get_all_enabled(dev, 
&ep->clks); 309 if (ret < 0)
··· 53 54 struct exynos_pcie { 55 struct dw_pcie pci; 56 struct clk_bulk_data *clks; 57 struct phy *phy; 58 struct regulator_bulk_data supplies[2]; ··· 71 72 static void exynos_pcie_sideband_dbi_w_mode(struct exynos_pcie *ep, bool on) 73 { 74 + struct dw_pcie *pci = &ep->pci; 75 u32 val; 76 77 + val = exynos_pcie_readl(pci->elbi_base, PCIE_ELBI_SLV_AWMISC); 78 if (on) 79 val |= PCIE_ELBI_SLV_DBI_ENABLE; 80 else 81 val &= ~PCIE_ELBI_SLV_DBI_ENABLE; 82 + exynos_pcie_writel(pci->elbi_base, val, PCIE_ELBI_SLV_AWMISC); 83 } 84 85 static void exynos_pcie_sideband_dbi_r_mode(struct exynos_pcie *ep, bool on) 86 { 87 + struct dw_pcie *pci = &ep->pci; 88 u32 val; 89 90 + val = exynos_pcie_readl(pci->elbi_base, PCIE_ELBI_SLV_ARMISC); 91 if (on) 92 val |= PCIE_ELBI_SLV_DBI_ENABLE; 93 else 94 val &= ~PCIE_ELBI_SLV_DBI_ENABLE; 95 + exynos_pcie_writel(pci->elbi_base, val, PCIE_ELBI_SLV_ARMISC); 96 } 97 98 static void exynos_pcie_assert_core_reset(struct exynos_pcie *ep) 99 { 100 + struct dw_pcie *pci = &ep->pci; 101 u32 val; 102 103 + val = exynos_pcie_readl(pci->elbi_base, PCIE_CORE_RESET); 104 val &= ~PCIE_CORE_RESET_ENABLE; 105 + exynos_pcie_writel(pci->elbi_base, val, PCIE_CORE_RESET); 106 + exynos_pcie_writel(pci->elbi_base, 0, PCIE_STICKY_RESET); 107 + exynos_pcie_writel(pci->elbi_base, 0, PCIE_NONSTICKY_RESET); 108 } 109 110 static void exynos_pcie_deassert_core_reset(struct exynos_pcie *ep) 111 { 112 + struct dw_pcie *pci = &ep->pci; 113 u32 val; 114 115 + val = exynos_pcie_readl(pci->elbi_base, PCIE_CORE_RESET); 116 val |= PCIE_CORE_RESET_ENABLE; 117 118 + exynos_pcie_writel(pci->elbi_base, val, PCIE_CORE_RESET); 119 + exynos_pcie_writel(pci->elbi_base, 1, PCIE_STICKY_RESET); 120 + exynos_pcie_writel(pci->elbi_base, 1, PCIE_NONSTICKY_RESET); 121 + exynos_pcie_writel(pci->elbi_base, 1, PCIE_APP_INIT_RESET); 122 + exynos_pcie_writel(pci->elbi_base, 0, PCIE_APP_INIT_RESET); 123 } 124 125 static int exynos_pcie_start_link(struct dw_pcie *pci) 126 { 127 u32 val; 128 129 + val = exynos_pcie_readl(pci->elbi_base, PCIE_SW_WAKE); 130 val &= ~PCIE_BUS_EN; 131 + exynos_pcie_writel(pci->elbi_base, val, PCIE_SW_WAKE); 132 133 /* assert LTSSM enable */ 134 + exynos_pcie_writel(pci->elbi_base, PCIE_ELBI_LTSSM_ENABLE, 135 PCIE_APP_LTSSM_ENABLE); 136 return 0; 137 } 138 139 static void exynos_pcie_clear_irq_pulse(struct exynos_pcie *ep) 140 { 141 + struct dw_pcie *pci = &ep->pci; 142 143 + u32 val = exynos_pcie_readl(pci->elbi_base, PCIE_IRQ_PULSE); 144 + 145 + exynos_pcie_writel(pci->elbi_base, val, PCIE_IRQ_PULSE); 146 } 147 148 static irqreturn_t exynos_pcie_irq_handler(int irq, void *arg) ··· 150 151 static void exynos_pcie_enable_irq_pulse(struct exynos_pcie *ep) 152 { 153 + struct dw_pcie *pci = &ep->pci; 154 + 155 u32 val = IRQ_INTA_ASSERT | IRQ_INTB_ASSERT | 156 IRQ_INTC_ASSERT | IRQ_INTD_ASSERT; 157 158 + exynos_pcie_writel(pci->elbi_base, val, PCIE_IRQ_EN_PULSE); 159 + exynos_pcie_writel(pci->elbi_base, 0, PCIE_IRQ_EN_LEVEL); 160 + exynos_pcie_writel(pci->elbi_base, 0, PCIE_IRQ_EN_SPECIAL); 161 } 162 163 static u32 exynos_pcie_read_dbi(struct dw_pcie *pci, void __iomem *base, ··· 211 212 static bool exynos_pcie_link_up(struct dw_pcie *pci) 213 { 214 + u32 val = exynos_pcie_readl(pci->elbi_base, PCIE_ELBI_RDLH_LINKUP); 215 216 return val & PCIE_ELBI_XMLH_LINKUP; 217 } ··· 294 ep->phy = devm_of_phy_get(dev, np, NULL); 295 if (IS_ERR(ep->phy)) 296 return PTR_ERR(ep->phy); 297 298 ret = devm_clk_bulk_get_all_enabled(dev, &ep->clks); 299 if (ret < 0)
+4 -4
drivers/pci/controller/dwc/pci-imx6.c
··· 1387 } 1388 1389 static const struct pci_epc_features imx8m_pcie_epc_features = { 1390 - .linkup_notifier = false, 1391 .msi_capable = true, 1392 - .msix_capable = false, 1393 .bar[BAR_1] = { .type = BAR_RESERVED, }, 1394 .bar[BAR_3] = { .type = BAR_RESERVED, }, 1395 .bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = SZ_256, }, ··· 1396 }; 1397 1398 static const struct pci_epc_features imx8q_pcie_epc_features = { 1399 - .linkup_notifier = false, 1400 .msi_capable = true, 1401 - .msix_capable = false, 1402 .bar[BAR_1] = { .type = BAR_RESERVED, }, 1403 .bar[BAR_3] = { .type = BAR_RESERVED, }, 1404 .bar[BAR_5] = { .type = BAR_RESERVED, }, ··· 1740 /* Limit link speed */ 1741 pci->max_link_speed = 1; 1742 of_property_read_u32(node, "fsl,max-link-speed", &pci->max_link_speed); 1743 1744 imx_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie"); 1745 if (IS_ERR(imx_pcie->vpcie)) {
··· 1387 } 1388 1389 static const struct pci_epc_features imx8m_pcie_epc_features = { 1390 .msi_capable = true, 1391 .bar[BAR_1] = { .type = BAR_RESERVED, }, 1392 .bar[BAR_3] = { .type = BAR_RESERVED, }, 1393 .bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = SZ_256, }, ··· 1398 }; 1399 1400 static const struct pci_epc_features imx8q_pcie_epc_features = { 1401 .msi_capable = true, 1402 .bar[BAR_1] = { .type = BAR_RESERVED, }, 1403 .bar[BAR_3] = { .type = BAR_RESERVED, }, 1404 .bar[BAR_5] = { .type = BAR_RESERVED, }, ··· 1744 /* Limit link speed */ 1745 pci->max_link_speed = 1; 1746 of_property_read_u32(node, "fsl,max-link-speed", &pci->max_link_speed); 1747 + 1748 + ret = devm_regulator_get_enable_optional(&pdev->dev, "vpcie3v3aux"); 1749 + if (ret < 0 && ret != -ENODEV) 1750 + return dev_err_probe(dev, ret, "failed to enable Vaux supply\n"); 1751 1752 imx_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie"); 1753 if (IS_ERR(imx_pcie->vpcie)) {
+4 -5
drivers/pci/controller/dwc/pci-keystone.c
··· 960 } 961 962 static const struct pci_epc_features ks_pcie_am654_epc_features = { 963 - .linkup_notifier = false, 964 .msi_capable = true, 965 .msix_capable = true, 966 .bar[BAR_0] = { .type = BAR_RESERVED, }, ··· 1200 if (irq < 0) 1201 return irq; 1202 1203 - ret = request_irq(irq, ks_pcie_err_irq_handler, IRQF_SHARED, 1204 - "ks-pcie-error-irq", ks_pcie); 1205 if (ret < 0) { 1206 dev_err(dev, "failed to request error IRQ %d\n", 1207 irq); ··· 1212 if (ret) 1213 num_lanes = 1; 1214 1215 - phy = devm_kzalloc(dev, sizeof(*phy) * num_lanes, GFP_KERNEL); 1216 if (!phy) 1217 return -ENOMEM; 1218 1219 - link = devm_kzalloc(dev, sizeof(*link) * num_lanes, GFP_KERNEL); 1220 if (!link) 1221 return -ENOMEM; 1222
··· 960 } 961 962 static const struct pci_epc_features ks_pcie_am654_epc_features = { 963 .msi_capable = true, 964 .msix_capable = true, 965 .bar[BAR_0] = { .type = BAR_RESERVED, }, ··· 1201 if (irq < 0) 1202 return irq; 1203 1204 + ret = devm_request_irq(dev, irq, ks_pcie_err_irq_handler, IRQF_SHARED, 1205 + "ks-pcie-error-irq", ks_pcie); 1206 if (ret < 0) { 1207 dev_err(dev, "failed to request error IRQ %d\n", 1208 irq); ··· 1213 if (ret) 1214 num_lanes = 1; 1215 1216 + phy = devm_kcalloc(dev, num_lanes, sizeof(*phy), GFP_KERNEL); 1217 if (!phy) 1218 return -ENOMEM; 1219 1220 + link = devm_kcalloc(dev, num_lanes, sizeof(*link), GFP_KERNEL); 1221 if (!link) 1222 return -ENOMEM; 1223
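The devm_kcalloc() conversions above are more than style: calloc-style allocators refuse an n * size product that would overflow instead of quietly allocating a short buffer. A user-space illustration of the overflow check such helpers perform:

	#include <stdio.h>
	#include <stdint.h>
	#include <stdlib.h>

	static void *safe_alloc_array(size_t n, size_t size)
	{
		if (size && n > SIZE_MAX / size)
			return NULL;		/* n * size would overflow */
		return calloc(n, size);		/* zeroed, like kcalloc() */
	}

	int main(void)
	{
		void *ok = safe_alloc_array(4, 16);
		void *bad = safe_alloc_array(SIZE_MAX / 2, 4);

		printf("ok=%p bad=%p\n", ok, bad);	/* bad is NULL */
		free(ok);
		return 0;
	}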
+1
drivers/pci/controller/dwc/pcie-al.c
··· 352 return -ENOENT; 353 } 354 al_pcie->ecam_size = resource_size(ecam_res); 355 356 controller_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 357 "controller");
··· 352 return -ENOENT; 353 } 354 al_pcie->ecam_size = resource_size(ecam_res); 355 + pci->pp.native_ecam = true; 356 357 controller_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 358 "controller");
+51 -1
drivers/pci/controller/dwc/pcie-amd-mdb.c
··· 18 #include <linux/resource.h> 19 #include <linux/types.h> 20 21 #include "pcie-designware.h" 22 23 #define AMD_MDB_TLP_IR_STATUS_MISC 0x4C0 ··· 57 * @slcr: MDB System Level Control and Status Register (SLCR) base 58 * @intx_domain: INTx IRQ domain pointer 59 * @mdb_domain: MDB IRQ domain pointer 60 * @intx_irq: INTx IRQ interrupt number 61 */ 62 struct amd_mdb_pcie { ··· 65 void __iomem *slcr; 66 struct irq_domain *intx_domain; 67 struct irq_domain *mdb_domain; 68 int intx_irq; 69 }; 70 ··· 287 struct device_node *pcie_intc_node; 288 int err; 289 290 - pcie_intc_node = of_get_next_child(node, NULL); 291 if (!pcie_intc_node) { 292 dev_err(dev, "No PCIe Intc node found\n"); 293 return -ENODEV; ··· 405 return 0; 406 } 407 408 static int amd_mdb_add_pcie_port(struct amd_mdb_pcie *pcie, 409 struct platform_device *pdev) 410 { ··· 451 452 pp->ops = &amd_mdb_pcie_host_ops; 453 454 err = dw_pcie_host_init(pp); 455 if (err) { 456 dev_err(dev, "Failed to initialize host, err=%d\n", err); ··· 475 struct device *dev = &pdev->dev; 476 struct amd_mdb_pcie *pcie; 477 struct dw_pcie *pci; 478 479 pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 480 if (!pcie) ··· 485 pci->dev = dev; 486 487 platform_set_drvdata(pdev, pcie); 488 489 return amd_mdb_add_pcie_port(pcie, pdev); 490 }
··· 18 #include <linux/resource.h> 19 #include <linux/types.h> 20 21 + #include "../../pci.h" 22 #include "pcie-designware.h" 23 24 #define AMD_MDB_TLP_IR_STATUS_MISC 0x4C0 ··· 56 * @slcr: MDB System Level Control and Status Register (SLCR) base 57 * @intx_domain: INTx IRQ domain pointer 58 * @mdb_domain: MDB IRQ domain pointer 59 + * @perst_gpio: GPIO descriptor for PERST# signal handling 60 * @intx_irq: INTx IRQ interrupt number 61 */ 62 struct amd_mdb_pcie { ··· 63 void __iomem *slcr; 64 struct irq_domain *intx_domain; 65 struct irq_domain *mdb_domain; 66 + struct gpio_desc *perst_gpio; 67 int intx_irq; 68 }; 69 ··· 284 struct device_node *pcie_intc_node; 285 int err; 286 287 + pcie_intc_node = of_get_child_by_name(node, "interrupt-controller"); 288 if (!pcie_intc_node) { 289 dev_err(dev, "No PCIe Intc node found\n"); 290 return -ENODEV; ··· 402 return 0; 403 } 404 405 + static int amd_mdb_parse_pcie_port(struct amd_mdb_pcie *pcie) 406 + { 407 + struct device *dev = pcie->pci.dev; 408 + struct device_node *pcie_port_node __maybe_unused; 409 + 410 + /* 411 + * This platform currently supports only one Root Port, so the loop 412 + * will execute only once. 413 + * TODO: Enhance the driver to handle multiple Root Ports in the future. 414 + */ 415 + for_each_child_of_node_with_prefix(dev->of_node, pcie_port_node, "pcie") { 416 + pcie->perst_gpio = devm_fwnode_gpiod_get(dev, of_fwnode_handle(pcie_port_node), 417 + "reset", GPIOD_OUT_HIGH, NULL); 418 + if (IS_ERR(pcie->perst_gpio)) 419 + return dev_err_probe(dev, PTR_ERR(pcie->perst_gpio), 420 + "Failed to request reset GPIO\n"); 421 + return 0; 422 + } 423 + 424 + return -ENODEV; 425 + } 426 + 427 static int amd_mdb_add_pcie_port(struct amd_mdb_pcie *pcie, 428 struct platform_device *pdev) 429 { ··· 426 427 pp->ops = &amd_mdb_pcie_host_ops; 428 429 + if (pcie->perst_gpio) { 430 + mdelay(PCIE_T_PVPERL_MS); 431 + gpiod_set_value_cansleep(pcie->perst_gpio, 0); 432 + mdelay(PCIE_RESET_CONFIG_WAIT_MS); 433 + } 434 + 435 err = dw_pcie_host_init(pp); 436 if (err) { 437 dev_err(dev, "Failed to initialize host, err=%d\n", err); ··· 444 struct device *dev = &pdev->dev; 445 struct amd_mdb_pcie *pcie; 446 struct dw_pcie *pci; 447 + int ret; 448 449 pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 450 if (!pcie) ··· 453 pci->dev = dev; 454 455 platform_set_drvdata(pdev, pcie); 456 + 457 + ret = amd_mdb_parse_pcie_port(pcie); 458 + /* 459 + * If amd_mdb_parse_pcie_port returns -ENODEV, it indicates that the 460 + * PCIe Bridge node was not found in the device tree. This is not 461 + * considered a fatal error and will trigger a fallback where the 462 + * reset GPIO is acquired directly from the PCIe Host Bridge node. 463 + */ 464 + if (ret) { 465 + if (ret != -ENODEV) 466 + return ret; 467 + 468 + pcie->perst_gpio = devm_gpiod_get_optional(dev, "reset", 469 + GPIOD_OUT_HIGH); 470 + if (IS_ERR(pcie->perst_gpio)) 471 + return dev_err_probe(dev, PTR_ERR(pcie->perst_gpio), 472 + "Failed to request reset GPIO\n"); 473 + } 474 475 return amd_mdb_add_pcie_port(pcie, pdev); 476 }
-2
drivers/pci/controller/dwc/pcie-artpec6.c
··· 370 } 371 372 static const struct pci_epc_features artpec6_pcie_epc_features = { 373 - .linkup_notifier = false, 374 .msi_capable = true, 375 - .msix_capable = false, 376 }; 377 378 static const struct pci_epc_features *
··· 370 } 371 372 static const struct pci_epc_features artpec6_pcie_epc_features = { 373 .msi_capable = true, 374 }; 375 376 static const struct pci_epc_features *
+2 -29
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 69 } 70 EXPORT_SYMBOL_GPL(dw_pcie_ep_reset_bar); 71 72 - static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie_ep *ep, u8 func_no, 73 - u8 cap_ptr, u8 cap) 74 - { 75 - u8 cap_id, next_cap_ptr; 76 - u16 reg; 77 - 78 - if (!cap_ptr) 79 - return 0; 80 - 81 - reg = dw_pcie_ep_readw_dbi(ep, func_no, cap_ptr); 82 - cap_id = (reg & 0x00ff); 83 - 84 - if (cap_id > PCI_CAP_ID_MAX) 85 - return 0; 86 - 87 - if (cap_id == cap) 88 - return cap_ptr; 89 - 90 - next_cap_ptr = (reg & 0xff00) >> 8; 91 - return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap); 92 - } 93 - 94 static u8 dw_pcie_ep_find_capability(struct dw_pcie_ep *ep, u8 func_no, u8 cap) 95 { 96 - u8 next_cap_ptr; 97 - u16 reg; 98 - 99 - reg = dw_pcie_ep_readw_dbi(ep, func_no, PCI_CAPABILITY_LIST); 100 - next_cap_ptr = (reg & 0x00ff); 101 - 102 - return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap); 103 } 104 105 /**
··· 69 } 70 EXPORT_SYMBOL_GPL(dw_pcie_ep_reset_bar); 71 72 static u8 dw_pcie_ep_find_capability(struct dw_pcie_ep *ep, u8 func_no, u8 cap) 73 { 74 + return PCI_FIND_NEXT_CAP(dw_pcie_ep_read_cfg, PCI_CAPABILITY_LIST, 75 + cap, ep, func_no); 76 } 77 78 /**
+134 -14
drivers/pci/controller/dwc/pcie-designware-host.c
··· 8 * Author: Jingoo Han <jg1.han@samsung.com> 9 */ 10 11 #include <linux/iopoll.h> 12 #include <linux/irqchip/chained_irq.h> 13 #include <linux/irqchip/irq-msi-lib.h> ··· 32 #define DW_PCIE_MSI_FLAGS_SUPPORTED (MSI_FLAG_MULTI_PCI_MSI | \ 33 MSI_FLAG_PCI_MSIX | \ 34 MSI_GENERIC_FLAGS_MASK) 35 36 static const struct msi_parent_ops dw_pcie_msi_parent_ops = { 37 .required_flags = DW_PCIE_MSI_FLAGS_REQUIRED, ··· 416 } 417 } 418 419 static int dw_pcie_host_get_resources(struct dw_pcie_rp *pp) 420 { 421 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); ··· 513 struct resource_entry *win; 514 struct resource *res; 515 int ret; 516 - 517 - ret = dw_pcie_get_resources(pci); 518 - if (ret) 519 - return ret; 520 521 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config"); 522 if (!res) { ··· 523 pp->cfg0_size = resource_size(res); 524 pp->cfg0_base = res->start; 525 526 - pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res); 527 - if (IS_ERR(pp->va_cfg0_base)) 528 - return PTR_ERR(pp->va_cfg0_base); 529 530 /* Get the I/O range from DT */ 531 win = resource_list_first_type(&pp->bridge->windows, IORESOURCE_IO); ··· 587 if (ret) 588 return ret; 589 590 - /* Set default bus ops */ 591 - bridge->ops = &dw_pcie_ops; 592 - bridge->child_ops = &dw_child_pcie_ops; 593 - 594 if (pp->ops->init) { 595 ret = pp->ops->init(pp); 596 if (ret) 597 - return ret; 598 } 599 600 if (pci_msi_enabled()) { ··· 632 if (ret) 633 goto err_free_msi; 634 635 /* 636 * Allocate the resource for MSG TLP before programming the iATU 637 * outbound window in dw_pcie_setup_rc(). Since the allocation depends ··· 675 /* Ignore errors, the link may come up later */ 676 dw_pcie_wait_for_link(pci); 677 678 - bridge->sysdata = pp; 679 - 680 ret = pci_host_probe(bridge); 681 if (ret) 682 goto err_stop_link; ··· 700 if (pp->ops->deinit) 701 pp->ops->deinit(pp); 702 703 return ret; 704 } 705 EXPORT_SYMBOL_GPL(dw_pcie_host_init); ··· 726 727 if (pp->ops->deinit) 728 pp->ops->deinit(pp); 729 } 730 EXPORT_SYMBOL_GPL(dw_pcie_host_deinit); 731
··· 8 * Author: Jingoo Han <jg1.han@samsung.com> 9 */ 10 11 + #include <linux/align.h> 12 #include <linux/iopoll.h> 13 #include <linux/irqchip/chained_irq.h> 14 #include <linux/irqchip/irq-msi-lib.h> ··· 31 #define DW_PCIE_MSI_FLAGS_SUPPORTED (MSI_FLAG_MULTI_PCI_MSI | \ 32 MSI_FLAG_PCI_MSIX | \ 33 MSI_GENERIC_FLAGS_MASK) 34 + 35 + #define IS_256MB_ALIGNED(x) IS_ALIGNED(x, SZ_256M) 36 37 static const struct msi_parent_ops dw_pcie_msi_parent_ops = { 38 .required_flags = DW_PCIE_MSI_FLAGS_REQUIRED, ··· 413 } 414 } 415 416 + static int dw_pcie_config_ecam_iatu(struct dw_pcie_rp *pp) 417 + { 418 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 419 + struct dw_pcie_ob_atu_cfg atu = {0}; 420 + resource_size_t bus_range_max; 421 + struct resource_entry *bus; 422 + int ret; 423 + 424 + bus = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS); 425 + 426 + /* 427 + * Root bus under the host bridge doesn't require any iATU configuration 428 + * as DBI region will be used to access root bus config space. 429 + * Immediate bus under Root Bus, needs type 0 iATU configuration and 430 + * remaining buses need type 1 iATU configuration. 431 + */ 432 + atu.index = 0; 433 + atu.type = PCIE_ATU_TYPE_CFG0; 434 + atu.parent_bus_addr = pp->cfg0_base + SZ_1M; 435 + /* 1MiB is to cover 1 (bus) * 32 (devices) * 8 (functions) */ 436 + atu.size = SZ_1M; 437 + atu.ctrl2 = PCIE_ATU_CFG_SHIFT_MODE_ENABLE; 438 + ret = dw_pcie_prog_outbound_atu(pci, &atu); 439 + if (ret) 440 + return ret; 441 + 442 + bus_range_max = resource_size(bus->res); 443 + 444 + if (bus_range_max < 2) 445 + return 0; 446 + 447 + /* Configure remaining buses in type 1 iATU configuration */ 448 + atu.index = 1; 449 + atu.type = PCIE_ATU_TYPE_CFG1; 450 + atu.parent_bus_addr = pp->cfg0_base + SZ_2M; 451 + atu.size = (SZ_1M * bus_range_max) - SZ_2M; 452 + atu.ctrl2 = PCIE_ATU_CFG_SHIFT_MODE_ENABLE; 453 + 454 + return dw_pcie_prog_outbound_atu(pci, &atu); 455 + } 456 + 457 + static int dw_pcie_create_ecam_window(struct dw_pcie_rp *pp, struct resource *res) 458 + { 459 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 460 + struct device *dev = pci->dev; 461 + struct resource_entry *bus; 462 + 463 + bus = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS); 464 + if (!bus) 465 + return -ENODEV; 466 + 467 + pp->cfg = pci_ecam_create(dev, res, bus->res, &pci_generic_ecam_ops); 468 + if (IS_ERR(pp->cfg)) 469 + return PTR_ERR(pp->cfg); 470 + 471 + pci->dbi_base = pp->cfg->win; 472 + pci->dbi_phys_addr = res->start; 473 + 474 + return 0; 475 + } 476 + 477 + static bool dw_pcie_ecam_enabled(struct dw_pcie_rp *pp, struct resource *config_res) 478 + { 479 + struct resource *bus_range; 480 + u64 nr_buses; 481 + 482 + /* Vendor glue drivers may implement their own ECAM mechanism */ 483 + if (pp->native_ecam) 484 + return false; 485 + 486 + /* 487 + * PCIe spec r6.0, sec 7.2.2 mandates the base address used for ECAM to 488 + * be aligned on a 2^(n+20) byte boundary, where n is the number of bits 489 + * used for representing 'bus' in BDF. Since the DWC cores always use 8 490 + * bits for representing 'bus', the base address has to be aligned to 491 + * 2^28 byte boundary, which is 256 MiB. 
492 + */ 493 + if (!IS_256MB_ALIGNED(config_res->start)) 494 + return false; 495 + 496 + bus_range = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS)->res; 497 + if (!bus_range) 498 + return false; 499 + 500 + nr_buses = resource_size(config_res) >> PCIE_ECAM_BUS_SHIFT; 501 + 502 + return nr_buses >= resource_size(bus_range); 503 + } 504 + 505 static int dw_pcie_host_get_resources(struct dw_pcie_rp *pp) 506 { 507 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); ··· 421 struct resource_entry *win; 422 struct resource *res; 423 int ret; 424 425 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config"); 426 if (!res) { ··· 435 pp->cfg0_size = resource_size(res); 436 pp->cfg0_base = res->start; 437 438 + pp->ecam_enabled = dw_pcie_ecam_enabled(pp, res); 439 + if (pp->ecam_enabled) { 440 + ret = dw_pcie_create_ecam_window(pp, res); 441 + if (ret) 442 + return ret; 443 + 444 + pp->bridge->ops = (struct pci_ops *)&pci_generic_ecam_ops.pci_ops; 445 + pp->bridge->sysdata = pp->cfg; 446 + pp->cfg->priv = pp; 447 + } else { 448 + pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res); 449 + if (IS_ERR(pp->va_cfg0_base)) 450 + return PTR_ERR(pp->va_cfg0_base); 451 + 452 + /* Set default bus ops */ 453 + pp->bridge->ops = &dw_pcie_ops; 454 + pp->bridge->child_ops = &dw_child_pcie_ops; 455 + pp->bridge->sysdata = pp; 456 + } 457 + 458 + ret = dw_pcie_get_resources(pci); 459 + if (ret) { 460 + if (pp->cfg) 461 + pci_ecam_free(pp->cfg); 462 + return ret; 463 + } 464 465 /* Get the I/O range from DT */ 466 win = resource_list_first_type(&pp->bridge->windows, IORESOURCE_IO); ··· 476 if (ret) 477 return ret; 478 479 if (pp->ops->init) { 480 ret = pp->ops->init(pp); 481 if (ret) 482 + goto err_free_ecam; 483 } 484 485 if (pci_msi_enabled()) { ··· 525 if (ret) 526 goto err_free_msi; 527 528 + if (pp->ecam_enabled) { 529 + ret = dw_pcie_config_ecam_iatu(pp); 530 + if (ret) { 531 + dev_err(dev, "Failed to configure iATU in ECAM mode\n"); 532 + goto err_free_msi; 533 + } 534 + } 535 + 536 /* 537 * Allocate the resource for MSG TLP before programming the iATU 538 * outbound window in dw_pcie_setup_rc(). Since the allocation depends ··· 560 /* Ignore errors, the link may come up later */ 561 dw_pcie_wait_for_link(pci); 562 563 ret = pci_host_probe(bridge); 564 if (ret) 565 goto err_stop_link; ··· 587 if (pp->ops->deinit) 588 pp->ops->deinit(pp); 589 590 + err_free_ecam: 591 + if (pp->cfg) 592 + pci_ecam_free(pp->cfg); 593 + 594 return ret; 595 } 596 EXPORT_SYMBOL_GPL(dw_pcie_host_init); ··· 609 610 if (pp->ops->deinit) 611 pp->ops->deinit(pp); 612 + 613 + if (pp->cfg) 614 + pci_ecam_free(pp->cfg); 615 } 616 EXPORT_SYMBOL_GPL(dw_pcie_host_deinit); 617
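The 256 MiB alignment test above falls out of the ECAM layout (PCIe r6.0, sec 7.2.2, as cited in the comment): offset = (bus << 20) | (dev << 15) | (fn << 12) | reg, so 8 bits of bus number span exactly 2^28 bytes and each bus occupies the 1 MiB the iATU comment relies on. A quick user-space check of the arithmetic:

	#include <stdio.h>
	#include <stdint.h>

	static uint64_t ecam_offset(unsigned int bus, unsigned int dev,
				    unsigned int fn, unsigned int reg)
	{
		return ((uint64_t)bus << 20) | (dev << 15) | (fn << 12) | reg;
	}

	int main(void)
	{
		/* bus 1, device 0, function 0: exactly 1 MiB into the window */
		printf("01:00.0 -> 0x%llx\n",
		       (unsigned long long)ecam_offset(1, 0, 0, 0));
		/* last register of bus 255, plus one = total window size */
		printf("window = 0x%llx bytes\n",
		       (unsigned long long)(ecam_offset(255, 31, 7, 0xfff) + 1));
		return 0;
	}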
-1
drivers/pci/controller/dwc/pcie-designware-plat.c
··· 61 } 62 63 static const struct pci_epc_features dw_plat_pcie_epc_features = { 64 - .linkup_notifier = false, 65 .msi_capable = true, 66 .msix_capable = true, 67 };
··· 61 } 62 63 static const struct pci_epc_features dw_plat_pcie_epc_features = { 64 .msi_capable = true, 65 .msix_capable = true, 66 };
+18 -76
drivers/pci/controller/dwc/pcie-designware.c
··· 167 } 168 } 169 170 /* LLDD is supposed to manually switch the clocks and resets state */ 171 if (dw_pcie_cap_is(pci, REQ_RES)) { 172 ret = dw_pcie_get_clocks(pci); ··· 221 pci->type = ver; 222 } 223 224 - /* 225 - * These interfaces resemble the pci_find_*capability() interfaces, but these 226 - * are for configuring host controllers, which are bridges *to* PCI devices but 227 - * are not PCI devices themselves. 228 - */ 229 - static u8 __dw_pcie_find_next_cap(struct dw_pcie *pci, u8 cap_ptr, 230 - u8 cap) 231 - { 232 - u8 cap_id, next_cap_ptr; 233 - u16 reg; 234 - 235 - if (!cap_ptr) 236 - return 0; 237 - 238 - reg = dw_pcie_readw_dbi(pci, cap_ptr); 239 - cap_id = (reg & 0x00ff); 240 - 241 - if (cap_id > PCI_CAP_ID_MAX) 242 - return 0; 243 - 244 - if (cap_id == cap) 245 - return cap_ptr; 246 - 247 - next_cap_ptr = (reg & 0xff00) >> 8; 248 - return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap); 249 - } 250 - 251 u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap) 252 { 253 - u8 next_cap_ptr; 254 - u16 reg; 255 - 256 - reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST); 257 - next_cap_ptr = (reg & 0x00ff); 258 - 259 - return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap); 260 } 261 EXPORT_SYMBOL_GPL(dw_pcie_find_capability); 262 263 - static u16 dw_pcie_find_next_ext_capability(struct dw_pcie *pci, u16 start, 264 - u8 cap) 265 - { 266 - u32 header; 267 - int ttl; 268 - int pos = PCI_CFG_SPACE_SIZE; 269 - 270 - /* minimum 8 bytes per capability */ 271 - ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8; 272 - 273 - if (start) 274 - pos = start; 275 - 276 - header = dw_pcie_readl_dbi(pci, pos); 277 - /* 278 - * If we have no capabilities, this is indicated by cap ID, 279 - * cap version and next pointer all being 0. 280 - */ 281 - if (header == 0) 282 - return 0; 283 - 284 - while (ttl-- > 0) { 285 - if (PCI_EXT_CAP_ID(header) == cap && pos != start) 286 - return pos; 287 - 288 - pos = PCI_EXT_CAP_NEXT(header); 289 - if (pos < PCI_CFG_SPACE_SIZE) 290 - break; 291 - 292 - header = dw_pcie_readl_dbi(pci, pos); 293 - } 294 - 295 - return 0; 296 - } 297 - 298 u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap) 299 { 300 - return dw_pcie_find_next_ext_capability(pci, 0, cap); 301 } 302 EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability); 303 ··· 243 if (vendor_id != dw_pcie_readw_dbi(pci, PCI_VENDOR_ID)) 244 return 0; 245 246 - while ((vsec = dw_pcie_find_next_ext_capability(pci, vsec, 247 - PCI_EXT_CAP_ID_VNDR))) { 248 header = dw_pcie_readl_dbi(pci, vsec + PCI_VNDR_HEADER); 249 if (PCI_VNDR_HEADER_ID(header) == vsec_id) 250 return vsec; ··· 508 val = dw_pcie_enable_ecrc(val); 509 dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_REGION_CTRL1, val); 510 511 - val = PCIE_ATU_ENABLE; 512 if (atu->type == PCIE_ATU_TYPE_MSG) { 513 /* The data-less messages only for now */ 514 val |= PCIE_ATU_INHIBIT_PAYLOAD | atu->code; ··· 782 case 8: 783 plc |= PORT_LINK_MODE_8_LANES; 784 break; 785 default: 786 dev_err(pci->dev, "num-lanes %u: invalid value\n", num_lanes); 787 return; ··· 989 char name[15]; 990 int ret; 991 992 - if (pci->edma.nr_irqs == 1) 993 - return 0; 994 - else if (pci->edma.nr_irqs > 1) 995 return pci->edma.nr_irqs != ch_cnt ? -EINVAL : 0; 996 997 ret = platform_get_irq_byname_optional(pdev, "dma");
··· 167 } 168 } 169 170 + /* ELBI is an optional resource */ 171 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi"); 172 + if (res) { 173 + pci->elbi_base = devm_ioremap_resource(pci->dev, res); 174 + if (IS_ERR(pci->elbi_base)) 175 + return PTR_ERR(pci->elbi_base); 176 + } 177 + 178 /* LLDD is supposed to manually switch the clocks and resets state */ 179 if (dw_pcie_cap_is(pci, REQ_RES)) { 180 ret = dw_pcie_get_clocks(pci); ··· 213 pci->type = ver; 214 } 215 216 u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap) 217 { 218 + return PCI_FIND_NEXT_CAP(dw_pcie_read_cfg, PCI_CAPABILITY_LIST, cap, 219 + pci); 220 } 221 EXPORT_SYMBOL_GPL(dw_pcie_find_capability); 222 223 u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap) 224 { 225 + return PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, pci); 226 } 227 EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability); 228 ··· 302 if (vendor_id != dw_pcie_readw_dbi(pci, PCI_VENDOR_ID)) 303 return 0; 304 305 + while ((vsec = PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, vsec, 306 + PCI_EXT_CAP_ID_VNDR, pci))) { 307 header = dw_pcie_readl_dbi(pci, vsec + PCI_VNDR_HEADER); 308 if (PCI_VNDR_HEADER_ID(header) == vsec_id) 309 return vsec; ··· 567 val = dw_pcie_enable_ecrc(val); 568 dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_REGION_CTRL1, val); 569 570 + val = PCIE_ATU_ENABLE | atu->ctrl2; 571 if (atu->type == PCIE_ATU_TYPE_MSG) { 572 /* The data-less messages only for now */ 573 val |= PCIE_ATU_INHIBIT_PAYLOAD | atu->code; ··· 841 case 8: 842 plc |= PORT_LINK_MODE_8_LANES; 843 break; 844 + case 16: 845 + plc |= PORT_LINK_MODE_16_LANES; 846 + break; 847 default: 848 dev_err(pci->dev, "num-lanes %u: invalid value\n", num_lanes); 849 return; ··· 1045 char name[15]; 1046 int ret; 1047 1048 + if (pci->edma.nr_irqs > 1) 1049 return pci->edma.nr_irqs != ch_cnt ? -EINVAL : 0; 1050 1051 ret = platform_get_irq_byname_optional(pdev, "dma");
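The deleted walkers were the textbook capability-list traversals, now centralized behind the PCI_FIND_NEXT_CAP()/PCI_FIND_NEXT_EXT_CAP() macros with dw_pcie_read_cfg as the config accessor. For reference, the walk itself in self-contained form, over a flat config-space array (standard offsets; the capability contents are invented):

  #include <stdint.h>
  #include <stdio.h>

  #define PCI_CAPABILITY_LIST 0x34        /* head pointer of the cap list */

  static uint8_t find_cap(const uint8_t *cfg, uint8_t cap)
  {
          uint8_t pos = cfg[PCI_CAPABILITY_LIST] & ~3;
          int ttl = 48;                   /* bound the walk against loops */

          while (pos && ttl--) {
                  if (cfg[pos] == cap)    /* byte 0 of each entry: cap ID */
                          return pos;
                  pos = cfg[pos + 1] & ~3; /* byte 1: next pointer */
          }
          return 0;
  }

  int main(void)
  {
          uint8_t cfg[256] = {0};

          cfg[PCI_CAPABILITY_LIST] = 0x40;
          cfg[0x40] = 0x01; cfg[0x41] = 0x50;  /* PM capability -> next at 0x50 */
          cfg[0x50] = 0x05; cfg[0x51] = 0x00;  /* MSI capability, end of list */
          printf("MSI capability at 0x%02x\n", find_cap(cfg, 0x05));
          return 0;
  }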
+52 -3
drivers/pci/controller/dwc/pcie-designware.h
··· 20 #include <linux/irq.h> 21 #include <linux/msi.h> 22 #include <linux/pci.h> 23 #include <linux/reset.h> 24 25 #include <linux/pci-epc.h> ··· 91 #define PORT_LINK_MODE_2_LANES PORT_LINK_MODE(0x3) 92 #define PORT_LINK_MODE_4_LANES PORT_LINK_MODE(0x7) 93 #define PORT_LINK_MODE_8_LANES PORT_LINK_MODE(0xf) 94 95 #define PCIE_PORT_LANE_SKEW 0x714 96 #define PORT_LANE_SKEW_INSERT_MASK GENMASK(23, 0) ··· 125 #define GEN3_RELATED_OFF_GEN3_EQ_DISABLE BIT(16) 126 #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT 24 127 #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK GENMASK(25, 24) 128 - #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_16_0GT 0x1 129 130 #define GEN3_EQ_CONTROL_OFF 0x8A8 131 #define GEN3_EQ_CONTROL_OFF_FB_MODE GENMASK(3, 0) ··· 135 #define GEN3_EQ_FB_MODE_DIR_CHANGE_OFF 0x8AC 136 #define GEN3_EQ_FMDC_T_MIN_PHASE23 GENMASK(4, 0) 137 #define GEN3_EQ_FMDC_N_EVALS GENMASK(9, 5) 138 - #define GEN3_EQ_FMDC_MAX_PRE_CUSROR_DELTA GENMASK(13, 10) 139 - #define GEN3_EQ_FMDC_MAX_POST_CUSROR_DELTA GENMASK(17, 14) 140 141 #define PCIE_PORT_MULTI_LANE_CTRL 0x8C0 142 #define PORT_MLTI_UPCFG_SUPPORT BIT(7) ··· 170 #define PCIE_ATU_REGION_CTRL2 0x004 171 #define PCIE_ATU_ENABLE BIT(31) 172 #define PCIE_ATU_BAR_MODE_ENABLE BIT(30) 173 #define PCIE_ATU_INHIBIT_PAYLOAD BIT(22) 174 #define PCIE_ATU_FUNC_NUM_MATCH_EN BIT(19) 175 #define PCIE_ATU_LOWER_BASE 0x008 ··· 389 u8 func_no; 390 u8 code; 391 u8 routing; 392 u64 parent_bus_addr; 393 u64 pci_addr; 394 u64 size; ··· 428 struct resource *msg_res; 429 bool use_linkup_irq; 430 struct pci_eq_presets presets; 431 }; 432 433 struct dw_pcie_ep_ops { ··· 498 resource_size_t dbi_phys_addr; 499 void __iomem *dbi_base2; 500 void __iomem *atu_base; 501 resource_size_t atu_phys_addr; 502 size_t atu_size; 503 resource_size_t parent_bus_offset; ··· 616 dw_pcie_write_dbi2(pci, reg, 0x4, val); 617 } 618 619 static inline unsigned int dw_pcie_ep_get_dbi_offset(struct dw_pcie_ep *ep, 620 u8 func_no) 621 { ··· 700 u32 reg) 701 { 702 return dw_pcie_ep_read_dbi(ep, func_no, reg, 0x1); 703 } 704 705 static inline unsigned int dw_pcie_ep_get_dbi2_offset(struct dw_pcie_ep *ep,
··· 20 #include <linux/irq.h> 21 #include <linux/msi.h> 22 #include <linux/pci.h> 23 + #include <linux/pci-ecam.h> 24 #include <linux/reset.h> 25 26 #include <linux/pci-epc.h> ··· 90 #define PORT_LINK_MODE_2_LANES PORT_LINK_MODE(0x3) 91 #define PORT_LINK_MODE_4_LANES PORT_LINK_MODE(0x7) 92 #define PORT_LINK_MODE_8_LANES PORT_LINK_MODE(0xf) 93 + #define PORT_LINK_MODE_16_LANES PORT_LINK_MODE(0x1f) 94 95 #define PCIE_PORT_LANE_SKEW 0x714 96 #define PORT_LANE_SKEW_INSERT_MASK GENMASK(23, 0) ··· 123 #define GEN3_RELATED_OFF_GEN3_EQ_DISABLE BIT(16) 124 #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT 24 125 #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK GENMASK(25, 24) 126 127 #define GEN3_EQ_CONTROL_OFF 0x8A8 128 #define GEN3_EQ_CONTROL_OFF_FB_MODE GENMASK(3, 0) ··· 134 #define GEN3_EQ_FB_MODE_DIR_CHANGE_OFF 0x8AC 135 #define GEN3_EQ_FMDC_T_MIN_PHASE23 GENMASK(4, 0) 136 #define GEN3_EQ_FMDC_N_EVALS GENMASK(9, 5) 137 + #define GEN3_EQ_FMDC_MAX_PRE_CURSOR_DELTA GENMASK(13, 10) 138 + #define GEN3_EQ_FMDC_MAX_POST_CURSOR_DELTA GENMASK(17, 14) 139 140 #define PCIE_PORT_MULTI_LANE_CTRL 0x8C0 141 #define PORT_MLTI_UPCFG_SUPPORT BIT(7) ··· 169 #define PCIE_ATU_REGION_CTRL2 0x004 170 #define PCIE_ATU_ENABLE BIT(31) 171 #define PCIE_ATU_BAR_MODE_ENABLE BIT(30) 172 + #define PCIE_ATU_CFG_SHIFT_MODE_ENABLE BIT(28) 173 #define PCIE_ATU_INHIBIT_PAYLOAD BIT(22) 174 #define PCIE_ATU_FUNC_NUM_MATCH_EN BIT(19) 175 #define PCIE_ATU_LOWER_BASE 0x008 ··· 387 u8 func_no; 388 u8 code; 389 u8 routing; 390 + u32 ctrl2; 391 u64 parent_bus_addr; 392 u64 pci_addr; 393 u64 size; ··· 425 struct resource *msg_res; 426 bool use_linkup_irq; 427 struct pci_eq_presets presets; 428 + struct pci_config_window *cfg; 429 + bool ecam_enabled; 430 + bool native_ecam; 431 }; 432 433 struct dw_pcie_ep_ops { ··· 492 resource_size_t dbi_phys_addr; 493 void __iomem *dbi_base2; 494 void __iomem *atu_base; 495 + void __iomem *elbi_base; 496 resource_size_t atu_phys_addr; 497 size_t atu_size; 498 resource_size_t parent_bus_offset; ··· 609 dw_pcie_write_dbi2(pci, reg, 0x4, val); 610 } 611 612 + static inline int dw_pcie_read_cfg_byte(struct dw_pcie *pci, int where, 613 + u8 *val) 614 + { 615 + *val = dw_pcie_readb_dbi(pci, where); 616 + return PCIBIOS_SUCCESSFUL; 617 + } 618 + 619 + static inline int dw_pcie_read_cfg_word(struct dw_pcie *pci, int where, 620 + u16 *val) 621 + { 622 + *val = dw_pcie_readw_dbi(pci, where); 623 + return PCIBIOS_SUCCESSFUL; 624 + } 625 + 626 + static inline int dw_pcie_read_cfg_dword(struct dw_pcie *pci, int where, 627 + u32 *val) 628 + { 629 + *val = dw_pcie_readl_dbi(pci, where); 630 + return PCIBIOS_SUCCESSFUL; 631 + } 632 + 633 static inline unsigned int dw_pcie_ep_get_dbi_offset(struct dw_pcie_ep *ep, 634 u8 func_no) 635 { ··· 672 u32 reg) 673 { 674 return dw_pcie_ep_read_dbi(ep, func_no, reg, 0x1); 675 + } 676 + 677 + static inline int dw_pcie_ep_read_cfg_byte(struct dw_pcie_ep *ep, u8 func_no, 678 + int where, u8 *val) 679 + { 680 + *val = dw_pcie_ep_readb_dbi(ep, func_no, where); 681 + return PCIBIOS_SUCCESSFUL; 682 + } 683 + 684 + static inline int dw_pcie_ep_read_cfg_word(struct dw_pcie_ep *ep, u8 func_no, 685 + int where, u16 *val) 686 + { 687 + *val = dw_pcie_ep_readw_dbi(ep, func_no, where); 688 + return PCIBIOS_SUCCESSFUL; 689 + } 690 + 691 + static inline int dw_pcie_ep_read_cfg_dword(struct dw_pcie_ep *ep, u8 func_no, 692 + int where, u32 *val) 693 + { 694 + *val = dw_pcie_ep_readl_dbi(ep, func_no, where); 695 + return PCIBIOS_SUCCESSFUL; 696 } 697 698 static inline unsigned int 
dw_pcie_ep_get_dbi2_offset(struct dw_pcie_ep *ep,
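The new dw_pcie_read_cfg_*() and dw_pcie_ep_read_cfg_*() inlines exist only to adapt the DBI readers to the accessor signature those macros expect; DBI reads cannot fail, so they unconditionally return PCIBIOS_SUCCESSFUL. The register additions are plain bits.h arithmetic, restated here in C99 for reference (not the kernel's <linux/bits.h> implementation, and assuming PORT_LINK_MODE's field mask is GENMASK(21, 16) as defined earlier in this header):

  #include <stdint.h>
  #include <stdio.h>

  #define BIT(n)        (1ULL << (n))
  #define GENMASK(h, l) ((~0ULL >> (63 - (h))) & (~0ULL << (l)))

  /* FIELD_PREP(): place a value into a contiguous bit field */
  static uint64_t field_prep(uint64_t mask, uint64_t val)
  {
          return (val << __builtin_ctzll(mask)) & mask;
  }

  int main(void)
  {
          /* PCIE_ATU_CFG_SHIFT_MODE_ENABLE */
          printf("0x%llx\n", (unsigned long long)BIT(28));         /* 0x10000000 */
          /* PORT_LINK_MODE_16_LANES = FIELD_PREP(mask, 0x1f) */
          printf("0x%llx\n",
                 (unsigned long long)field_prep(GENMASK(21, 16), 0x1f)); /* 0x1f0000 */
          return 0;
  }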
-2
drivers/pci/controller/dwc/pcie-dw-rockchip.c
··· 331 .linkup_notifier = true, 332 .msi_capable = true, 333 .msix_capable = true, 334 - .intx_capable = false, 335 .align = SZ_64K, 336 .bar[BAR_0] = { .type = BAR_RESIZABLE, }, 337 .bar[BAR_1] = { .type = BAR_RESIZABLE, }, ··· 351 .linkup_notifier = true, 352 .msi_capable = true, 353 .msix_capable = true, 354 - .intx_capable = false, 355 .align = SZ_64K, 356 .bar[BAR_0] = { .type = BAR_RESIZABLE, }, 357 .bar[BAR_1] = { .type = BAR_RESIZABLE, },
··· 331 .linkup_notifier = true, 332 .msi_capable = true, 333 .msix_capable = true, 334 .align = SZ_64K, 335 .bar[BAR_0] = { .type = BAR_RESIZABLE, }, 336 .bar[BAR_1] = { .type = BAR_RESIZABLE, }, ··· 352 .linkup_notifier = true, 353 .msi_capable = true, 354 .msix_capable = true, 355 .align = SZ_64K, 356 .bar[BAR_0] = { .type = BAR_RESIZABLE, }, 357 .bar[BAR_1] = { .type = BAR_RESIZABLE, },
-1
drivers/pci/controller/dwc/pcie-keembay.c
··· 309 } 310 311 static const struct pci_epc_features keembay_pcie_epc_features = { 312 - .linkup_notifier = false, 313 .msi_capable = true, 314 .msix_capable = true, 315 .bar[BAR_0] = { .only_64bit = true, },
··· 309 } 310 311 static const struct pci_epc_features keembay_pcie_epc_features = { 312 .msi_capable = true, 313 .msix_capable = true, 314 .bar[BAR_0] = { .only_64bit = true, },
+34 -24
drivers/pci/controller/dwc/pcie-qcom-common.c
··· 8 #include "pcie-designware.h" 9 #include "pcie-qcom-common.h" 10 11 - void qcom_pcie_common_set_16gt_equalization(struct dw_pcie *pci) 12 { 13 u32 reg; 14 15 /* 16 * GEN3_RELATED_OFF register is repurposed to apply equalization ··· 21 * determines the data rate for which these equalization settings are 22 * applied. 23 */ 24 - reg = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); 25 - reg &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL; 26 - reg &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK; 27 - reg |= FIELD_PREP(GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK, 28 - GEN3_RELATED_OFF_RATE_SHADOW_SEL_16_0GT); 29 - dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, reg); 30 31 - reg = dw_pcie_readl_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF); 32 - reg &= ~(GEN3_EQ_FMDC_T_MIN_PHASE23 | 33 - GEN3_EQ_FMDC_N_EVALS | 34 - GEN3_EQ_FMDC_MAX_PRE_CUSROR_DELTA | 35 - GEN3_EQ_FMDC_MAX_POST_CUSROR_DELTA); 36 - reg |= FIELD_PREP(GEN3_EQ_FMDC_T_MIN_PHASE23, 0x1) | 37 - FIELD_PREP(GEN3_EQ_FMDC_N_EVALS, 0xd) | 38 - FIELD_PREP(GEN3_EQ_FMDC_MAX_PRE_CUSROR_DELTA, 0x5) | 39 - FIELD_PREP(GEN3_EQ_FMDC_MAX_POST_CUSROR_DELTA, 0x5); 40 - dw_pcie_writel_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF, reg); 41 42 - reg = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF); 43 - reg &= ~(GEN3_EQ_CONTROL_OFF_FB_MODE | 44 - GEN3_EQ_CONTROL_OFF_PHASE23_EXIT_MODE | 45 - GEN3_EQ_CONTROL_OFF_FOM_INC_INITIAL_EVAL | 46 - GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC); 47 - dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, reg); 48 } 49 - EXPORT_SYMBOL_GPL(qcom_pcie_common_set_16gt_equalization); 50 51 void qcom_pcie_common_set_16gt_lane_margining(struct dw_pcie *pci) 52 {
··· 8 #include "pcie-designware.h" 9 #include "pcie-qcom-common.h" 10 11 + void qcom_pcie_common_set_equalization(struct dw_pcie *pci) 12 { 13 + struct device *dev = pci->dev; 14 u32 reg; 15 + u16 speed; 16 17 /* 18 * GEN3_RELATED_OFF register is repurposed to apply equalization ··· 19 * determines the data rate for which these equalization settings are 20 * applied. 21 */ 22 23 + for (speed = PCIE_SPEED_8_0GT; speed <= pcie_link_speed[pci->max_link_speed]; speed++) { 24 + if (speed > PCIE_SPEED_32_0GT) { 25 + dev_warn(dev, "Skipped equalization settings for unsupported data rate\n"); 26 + break; 27 + } 28 29 + reg = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); 30 + reg &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL; 31 + reg &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK; 32 + reg |= FIELD_PREP(GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK, 33 + speed - PCIE_SPEED_8_0GT); 34 + dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, reg); 35 + 36 + reg = dw_pcie_readl_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF); 37 + reg &= ~(GEN3_EQ_FMDC_T_MIN_PHASE23 | 38 + GEN3_EQ_FMDC_N_EVALS | 39 + GEN3_EQ_FMDC_MAX_PRE_CURSOR_DELTA | 40 + GEN3_EQ_FMDC_MAX_POST_CURSOR_DELTA); 41 + reg |= FIELD_PREP(GEN3_EQ_FMDC_T_MIN_PHASE23, 0x1) | 42 + FIELD_PREP(GEN3_EQ_FMDC_N_EVALS, 0xd) | 43 + FIELD_PREP(GEN3_EQ_FMDC_MAX_PRE_CURSOR_DELTA, 0x5) | 44 + FIELD_PREP(GEN3_EQ_FMDC_MAX_POST_CURSOR_DELTA, 0x5); 45 + dw_pcie_writel_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF, reg); 46 + 47 + reg = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF); 48 + reg &= ~(GEN3_EQ_CONTROL_OFF_FB_MODE | 49 + GEN3_EQ_CONTROL_OFF_PHASE23_EXIT_MODE | 50 + GEN3_EQ_CONTROL_OFF_FOM_INC_INITIAL_EVAL | 51 + GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC); 52 + dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, reg); 53 + } 54 } 55 + EXPORT_SYMBOL_GPL(qcom_pcie_common_set_equalization); 56 57 void qcom_pcie_common_set_16gt_lane_margining(struct dw_pcie *pci) 58 {
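The key change is that equalization settings are no longer programmed only for 16 GT/s: the loop walks every supported rate from 8 GT/s up to the port's maximum, and RATE_SHADOW_SEL (written as speed - PCIE_SPEED_8_0GT) selects which rate the shadowed GEN3_* registers apply to. A sketch of that mapping, with enum values mirroring the kernel's enum pci_bus_speed (assumed here):

  #include <stdio.h>

  /* values as in the kernel's enum pci_bus_speed (assumed for this sketch) */
  enum pcie_speed {
          PCIE_SPEED_8_0GT  = 0x16,
          PCIE_SPEED_16_0GT = 0x17,
          PCIE_SPEED_32_0GT = 0x18,
  };

  int main(void)
  {
          int max = PCIE_SPEED_32_0GT;    /* e.g. a Gen5-capable port */

          for (int speed = PCIE_SPEED_8_0GT; speed <= max; speed++)
                  printf("%2d GT/s -> RATE_SHADOW_SEL %d\n",
                         8 << (speed - PCIE_SPEED_8_0GT),
                         speed - PCIE_SPEED_8_0GT);
          return 0;
  }

(RATE_SHADOW_SEL itself is the two-bit field at bits 25:24 of GEN3_RELATED_OFF; rates past 32 GT/s are skipped with a warning rather than programmed.)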
+1 -1
drivers/pci/controller/dwc/pcie-qcom-common.h
··· 8 9 struct dw_pcie; 10 11 - void qcom_pcie_common_set_16gt_equalization(struct dw_pcie *pci); 12 void qcom_pcie_common_set_16gt_lane_margining(struct dw_pcie *pci); 13 14 #endif
··· 8 9 struct dw_pcie; 10 11 + void qcom_pcie_common_set_equalization(struct dw_pcie *pci); 12 void qcom_pcie_common_set_16gt_lane_margining(struct dw_pcie *pci); 13 14 #endif
+6 -17
drivers/pci/controller/dwc/pcie-qcom-ep.c
··· 179 * struct qcom_pcie_ep - Qualcomm PCIe Endpoint Controller 180 * @pci: Designware PCIe controller struct 181 * @parf: Qualcomm PCIe specific PARF register base 182 - * @elbi: Designware PCIe specific ELBI register base 183 * @mmio: MMIO register base 184 * @perst_map: PERST regmap 185 * @mmio_res: MMIO region resource ··· 201 struct dw_pcie pci; 202 203 void __iomem *parf; 204 - void __iomem *elbi; 205 void __iomem *mmio; 206 struct regmap *perst_map; 207 struct resource *mmio_res; ··· 265 266 static bool qcom_pcie_dw_link_up(struct dw_pcie *pci) 267 { 268 - struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci); 269 u32 reg; 270 271 - reg = readl_relaxed(pcie_ep->elbi + ELBI_SYS_STTS); 272 273 return reg & XMLH_LINK_UP; 274 } ··· 291 static void qcom_pcie_dw_write_dbi2(struct dw_pcie *pci, void __iomem *base, 292 u32 reg, size_t size, u32 val) 293 { 294 - struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci); 295 int ret; 296 297 - writel(1, pcie_ep->elbi + ELBI_CS2_ENABLE); 298 299 ret = dw_pcie_write(pci->dbi_base2 + reg, size, val); 300 if (ret) 301 dev_err(pci->dev, "Failed to write DBI2 register (0x%x): %d\n", reg, ret); 302 303 - writel(0, pcie_ep->elbi + ELBI_CS2_ENABLE); 304 } 305 306 static void qcom_pcie_ep_icc_update(struct qcom_pcie_ep *pcie_ep) ··· 507 goto err_disable_resources; 508 } 509 510 - if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) { 511 - qcom_pcie_common_set_16gt_equalization(pci); 512 qcom_pcie_common_set_16gt_lane_margining(pci); 513 - } 514 515 /* 516 * The physical address of the MMIO region which is exposed as the BAR ··· 578 if (IS_ERR(pci->dbi_base)) 579 return PTR_ERR(pci->dbi_base); 580 pci->dbi_base2 = pci->dbi_base; 581 - 582 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi"); 583 - pcie_ep->elbi = devm_pci_remap_cfg_resource(dev, res); 584 - if (IS_ERR(pcie_ep->elbi)) 585 - return PTR_ERR(pcie_ep->elbi); 586 587 pcie_ep->mmio_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 588 "mmio"); ··· 822 static const struct pci_epc_features qcom_pcie_epc_features = { 823 .linkup_notifier = true, 824 .msi_capable = true, 825 - .msix_capable = false, 826 .align = SZ_4K, 827 .bar[BAR_0] = { .only_64bit = true, }, 828 .bar[BAR_1] = { .type = BAR_RESERVED, }, ··· 864 pcie_ep->pci.dev = dev; 865 pcie_ep->pci.ops = &pci_ops; 866 pcie_ep->pci.ep.ops = &pci_ep_ops; 867 - pcie_ep->pci.edma.nr_irqs = 1; 868 869 pcie_ep->cfg = of_device_get_match_data(dev); 870 if (pcie_ep->cfg && pcie_ep->cfg->hdma_support) {
··· 179 * struct qcom_pcie_ep - Qualcomm PCIe Endpoint Controller 180 * @pci: Designware PCIe controller struct 181 * @parf: Qualcomm PCIe specific PARF register base 182 * @mmio: MMIO register base 183 * @perst_map: PERST regmap 184 * @mmio_res: MMIO region resource ··· 202 struct dw_pcie pci; 203 204 void __iomem *parf; 205 void __iomem *mmio; 206 struct regmap *perst_map; 207 struct resource *mmio_res; ··· 267 268 static bool qcom_pcie_dw_link_up(struct dw_pcie *pci) 269 { 270 u32 reg; 271 272 + reg = readl_relaxed(pci->elbi_base + ELBI_SYS_STTS); 273 274 return reg & XMLH_LINK_UP; 275 } ··· 294 static void qcom_pcie_dw_write_dbi2(struct dw_pcie *pci, void __iomem *base, 295 u32 reg, size_t size, u32 val) 296 { 297 int ret; 298 299 + writel(1, pci->elbi_base + ELBI_CS2_ENABLE); 300 301 ret = dw_pcie_write(pci->dbi_base2 + reg, size, val); 302 if (ret) 303 dev_err(pci->dev, "Failed to write DBI2 register (0x%x): %d\n", reg, ret); 304 305 + writel(0, pci->elbi_base + ELBI_CS2_ENABLE); 306 } 307 308 static void qcom_pcie_ep_icc_update(struct qcom_pcie_ep *pcie_ep) ··· 511 goto err_disable_resources; 512 } 513 514 + qcom_pcie_common_set_equalization(pci); 515 + 516 + if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) 517 qcom_pcie_common_set_16gt_lane_margining(pci); 518 519 /* 520 * The physical address of the MMIO region which is exposed as the BAR ··· 582 if (IS_ERR(pci->dbi_base)) 583 return PTR_ERR(pci->dbi_base); 584 pci->dbi_base2 = pci->dbi_base; 585 586 pcie_ep->mmio_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 587 "mmio"); ··· 831 static const struct pci_epc_features qcom_pcie_epc_features = { 832 .linkup_notifier = true, 833 .msi_capable = true, 834 .align = SZ_4K, 835 .bar[BAR_0] = { .only_64bit = true, }, 836 .bar[BAR_1] = { .type = BAR_RESERVED, }, ··· 874 pcie_ep->pci.dev = dev; 875 pcie_ep->pci.ops = &pci_ops; 876 pcie_ep->pci.ep.ops = &pci_ep_ops; 877 878 pcie_ep->cfg = of_device_get_match_data(dev); 879 if (pcie_ep->cfg && pcie_ep->cfg->hdma_support) {
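With the ELBI mapping moved into the core as pci->elbi_base, the endpoint driver keeps only the CS2 bracketing around DBI2 writes: ELBI_CS2_ENABLE is set so the write lands in the shadow register bank, then cleared again. A toy restatement of that bracket, with plain variables standing in for the MMIO registers and a hypothetical BAR mask as the payload:

  #include <stdint.h>
  #include <stdio.h>

  static uint32_t elbi_cs2;               /* stand-in for ELBI_CS2_ENABLE */
  static uint32_t bar0_shadow;            /* stand-in for a DBI2 register */

  static void dbi2_write(uint32_t val)
  {
          elbi_cs2 = 1;                   /* route writes to the shadow bank */
          bar0_shadow = val;              /* e.g. a BAR mask register */
          elbi_cs2 = 0;                   /* restore the plain DBI view */
  }

  int main(void)
  {
          dbi2_write(0xfffff000);         /* hypothetical 4K BAR mask */
          printf("shadow = 0x%x, cs2 = %u\n", bar0_shadow, elbi_cs2);
          return 0;
  }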
+116 -95
drivers/pci/controller/dwc/pcie-qcom.c
··· 55 #define PARF_AXI_MSTR_WR_ADDR_HALT_V2 0x1a8 56 #define PARF_Q2A_FLUSH 0x1ac 57 #define PARF_LTSSM 0x1b0 58 #define PARF_INT_ALL_STATUS 0x224 59 #define PARF_INT_ALL_CLEAR 0x228 60 #define PARF_INT_ALL_MASK 0x22c ··· 65 #define PARF_DBI_BASE_ADDR_V2_HI 0x354 66 #define PARF_SLV_ADDR_SPACE_SIZE_V2 0x358 67 #define PARF_SLV_ADDR_SPACE_SIZE_V2_HI 0x35c 68 #define PARF_NO_SNOOP_OVERRIDE 0x3d4 69 #define PARF_ATU_BASE_ADDR 0x634 70 #define PARF_ATU_BASE_ADDR_HI 0x638 ··· 98 99 /* PARF_SYS_CTRL register fields */ 100 #define MAC_PHY_POWERDOWN_IN_P2_D_MUX_EN BIT(29) 101 #define MST_WAKEUP_EN BIT(13) 102 #define SLV_WAKEUP_EN BIT(12) 103 #define MSTR_ACLK_CGC_DIS BIT(10) ··· 145 146 /* PARF_LTSSM register fields */ 147 #define LTSSM_EN BIT(8) 148 149 /* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */ 150 #define PARF_INT_ALL_LINK_UP BIT(13) ··· 262 int (*get_resources)(struct qcom_pcie *pcie); 263 int (*init)(struct qcom_pcie *pcie); 264 int (*post_init)(struct qcom_pcie *pcie); 265 - void (*host_post_init)(struct qcom_pcie *pcie); 266 void (*deinit)(struct qcom_pcie *pcie); 267 void (*ltssm_enable)(struct qcom_pcie *pcie); 268 int (*config_sid)(struct qcom_pcie *pcie); ··· 290 struct qcom_pcie { 291 struct dw_pcie *pci; 292 void __iomem *parf; /* DT parf */ 293 - void __iomem *elbi; /* DT elbi */ 294 void __iomem *mhi; 295 union qcom_pcie_resources res; 296 - struct phy *phy; 297 - struct gpio_desc *reset; 298 struct icc_path *icc_mem; 299 struct icc_path *icc_cpu; 300 const struct qcom_pcie_cfg *cfg; ··· 308 struct qcom_pcie_port *port; 309 int val = assert ? 1 : 0; 310 311 - if (list_empty(&pcie->ports)) 312 - gpiod_set_value_cansleep(pcie->reset, val); 313 - else 314 - list_for_each_entry(port, &pcie->ports, list) 315 - gpiod_set_value_cansleep(port->reset, val); 316 317 usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500); 318 } ··· 326 qcom_perst_assert(pcie, false); 327 } 328 329 static int qcom_pcie_start_link(struct dw_pcie *pci) 330 { 331 struct qcom_pcie *pcie = to_qcom_pcie(pci); 332 333 - if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) { 334 - qcom_pcie_common_set_16gt_equalization(pci); 335 qcom_pcie_common_set_16gt_lane_margining(pci); 336 - } 337 338 /* Enable Link Training state machine */ 339 if (pcie->cfg->ops->ltssm_enable) ··· 463 464 static void qcom_pcie_2_1_0_ltssm_enable(struct qcom_pcie *pcie) 465 { 466 u32 val; 467 468 /* enable link training */ 469 - val = readl(pcie->elbi + ELBI_SYS_CTRL); 470 val |= ELBI_SYS_CTRL_LT_ENABLE; 471 - writel(val, pcie->elbi + ELBI_SYS_CTRL); 472 } 473 474 static int qcom_pcie_get_resources_2_1_0(struct qcom_pcie *pcie) ··· 1094 return 0; 1095 } 1096 1097 - static int qcom_pcie_enable_aspm(struct pci_dev *pdev, void *userdata) 1098 - { 1099 - /* 1100 - * Downstream devices need to be in D0 state before enabling PCI PM 1101 - * substates. 
1102 - */ 1103 - pci_set_power_state_locked(pdev, PCI_D0); 1104 - pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL); 1105 - 1106 - return 0; 1107 - } 1108 - 1109 - static void qcom_pcie_host_post_init_2_7_0(struct qcom_pcie *pcie) 1110 - { 1111 - struct dw_pcie_rp *pp = &pcie->pci->pp; 1112 - 1113 - pci_walk_bus(pp->bridge->bus, qcom_pcie_enable_aspm, NULL); 1114 - } 1115 - 1116 static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie) 1117 { 1118 struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0; ··· 1288 return val & PCI_EXP_LNKSTA_DLLLA; 1289 } 1290 1291 - static void qcom_pcie_phy_exit(struct qcom_pcie *pcie) 1292 - { 1293 - struct qcom_pcie_port *port; 1294 - 1295 - if (list_empty(&pcie->ports)) 1296 - phy_exit(pcie->phy); 1297 - else 1298 - list_for_each_entry(port, &pcie->ports, list) 1299 - phy_exit(port->phy); 1300 - } 1301 - 1302 static void qcom_pcie_phy_power_off(struct qcom_pcie *pcie) 1303 { 1304 struct qcom_pcie_port *port; 1305 1306 - if (list_empty(&pcie->ports)) { 1307 - phy_power_off(pcie->phy); 1308 - } else { 1309 - list_for_each_entry(port, &pcie->ports, list) 1310 - phy_power_off(port->phy); 1311 - } 1312 } 1313 1314 static int qcom_pcie_phy_power_on(struct qcom_pcie *pcie) 1315 { 1316 struct qcom_pcie_port *port; 1317 - int ret = 0; 1318 1319 - if (list_empty(&pcie->ports)) { 1320 - ret = phy_set_mode_ext(pcie->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC); 1321 if (ret) 1322 return ret; 1323 1324 - ret = phy_power_on(pcie->phy); 1325 - if (ret) 1326 return ret; 1327 - } else { 1328 - list_for_each_entry(port, &pcie->ports, list) { 1329 - ret = phy_set_mode_ext(port->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC); 1330 - if (ret) 1331 - return ret; 1332 - 1333 - ret = phy_power_on(port->phy); 1334 - if (ret) { 1335 - qcom_pcie_phy_power_off(pcie); 1336 - return ret; 1337 - } 1338 } 1339 } 1340 1341 - return ret; 1342 } 1343 1344 static int qcom_pcie_host_init(struct dw_pcie_rp *pp) 1345 { 1346 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 1347 struct qcom_pcie *pcie = to_qcom_pcie(pci); 1348 int ret; 1349 1350 qcom_ep_reset_assert(pcie); ··· 1328 ret = pcie->cfg->ops->init(pcie); 1329 if (ret) 1330 return ret; 1331 1332 ret = qcom_pcie_phy_power_on(pcie); 1333 if (ret) ··· 1380 pcie->cfg->ops->deinit(pcie); 1381 } 1382 1383 - static void qcom_pcie_host_post_init(struct dw_pcie_rp *pp) 1384 - { 1385 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 1386 - struct qcom_pcie *pcie = to_qcom_pcie(pci); 1387 - 1388 - if (pcie->cfg->ops->host_post_init) 1389 - pcie->cfg->ops->host_post_init(pcie); 1390 - } 1391 - 1392 static const struct dw_pcie_host_ops qcom_pcie_dw_ops = { 1393 .init = qcom_pcie_host_init, 1394 .deinit = qcom_pcie_host_deinit, 1395 - .post_init = qcom_pcie_host_post_init, 1396 }; 1397 1398 /* Qcom IP rev.: 2.1.0 Synopsys IP rev.: 4.01a */ ··· 1444 .get_resources = qcom_pcie_get_resources_2_7_0, 1445 .init = qcom_pcie_init_2_7_0, 1446 .post_init = qcom_pcie_post_init_2_7_0, 1447 - .host_post_init = qcom_pcie_host_post_init_2_7_0, 1448 .deinit = qcom_pcie_deinit_2_7_0, 1449 .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, 1450 .config_sid = qcom_pcie_config_sid_1_9_0, ··· 1454 .get_resources = qcom_pcie_get_resources_2_7_0, 1455 .init = qcom_pcie_init_2_7_0, 1456 .post_init = qcom_pcie_post_init_2_7_0, 1457 - .host_post_init = qcom_pcie_host_post_init_2_7_0, 1458 .deinit = qcom_pcie_deinit_2_7_0, 1459 .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, 1460 }; ··· 1750 int ret = -ENOENT; 1751 1752 for_each_available_child_of_node_scoped(dev->of_node, of_port) { 1753 
ret = qcom_pcie_parse_port(pcie, of_port); 1754 if (ret) 1755 goto err_port_del; ··· 1760 return ret; 1761 1762 err_port_del: 1763 - list_for_each_entry_safe(port, tmp, &pcie->ports, list) 1764 list_del(&port->list); 1765 1766 return ret; 1767 } ··· 1771 static int qcom_pcie_parse_legacy_binding(struct qcom_pcie *pcie) 1772 { 1773 struct device *dev = pcie->pci->dev; 1774 int ret; 1775 1776 - pcie->phy = devm_phy_optional_get(dev, "pciephy"); 1777 - if (IS_ERR(pcie->phy)) 1778 - return PTR_ERR(pcie->phy); 1779 1780 - pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH); 1781 - if (IS_ERR(pcie->reset)) 1782 - return PTR_ERR(pcie->reset); 1783 1784 - ret = phy_init(pcie->phy); 1785 if (ret) 1786 return ret; 1787 1788 return 0; 1789 } ··· 1884 pcie->parf = devm_platform_ioremap_resource_byname(pdev, "parf"); 1885 if (IS_ERR(pcie->parf)) { 1886 ret = PTR_ERR(pcie->parf); 1887 - goto err_pm_runtime_put; 1888 - } 1889 - 1890 - pcie->elbi = devm_platform_ioremap_resource_byname(pdev, "elbi"); 1891 - if (IS_ERR(pcie->elbi)) { 1892 - ret = PTR_ERR(pcie->elbi); 1893 goto err_pm_runtime_put; 1894 } 1895 ··· 2004 err_host_deinit: 2005 dw_pcie_host_deinit(pp); 2006 err_phy_exit: 2007 - qcom_pcie_phy_exit(pcie); 2008 - list_for_each_entry_safe(port, tmp, &pcie->ports, list) 2009 list_del(&port->list); 2010 err_pm_runtime_put: 2011 pm_runtime_put(dev); 2012 pm_runtime_disable(dev);
··· 55 #define PARF_AXI_MSTR_WR_ADDR_HALT_V2 0x1a8 56 #define PARF_Q2A_FLUSH 0x1ac 57 #define PARF_LTSSM 0x1b0 58 + #define PARF_SLV_DBI_ELBI 0x1b4 59 #define PARF_INT_ALL_STATUS 0x224 60 #define PARF_INT_ALL_CLEAR 0x228 61 #define PARF_INT_ALL_MASK 0x22c ··· 64 #define PARF_DBI_BASE_ADDR_V2_HI 0x354 65 #define PARF_SLV_ADDR_SPACE_SIZE_V2 0x358 66 #define PARF_SLV_ADDR_SPACE_SIZE_V2_HI 0x35c 67 + #define PARF_BLOCK_SLV_AXI_WR_BASE 0x360 68 + #define PARF_BLOCK_SLV_AXI_WR_BASE_HI 0x364 69 + #define PARF_BLOCK_SLV_AXI_WR_LIMIT 0x368 70 + #define PARF_BLOCK_SLV_AXI_WR_LIMIT_HI 0x36c 71 + #define PARF_BLOCK_SLV_AXI_RD_BASE 0x370 72 + #define PARF_BLOCK_SLV_AXI_RD_BASE_HI 0x374 73 + #define PARF_BLOCK_SLV_AXI_RD_LIMIT 0x378 74 + #define PARF_BLOCK_SLV_AXI_RD_LIMIT_HI 0x37c 75 + #define PARF_ECAM_BASE 0x380 76 + #define PARF_ECAM_BASE_HI 0x384 77 #define PARF_NO_SNOOP_OVERRIDE 0x3d4 78 #define PARF_ATU_BASE_ADDR 0x634 79 #define PARF_ATU_BASE_ADDR_HI 0x638 ··· 87 88 /* PARF_SYS_CTRL register fields */ 89 #define MAC_PHY_POWERDOWN_IN_P2_D_MUX_EN BIT(29) 90 + #define PCIE_ECAM_BLOCKER_EN BIT(26) 91 #define MST_WAKEUP_EN BIT(13) 92 #define SLV_WAKEUP_EN BIT(12) 93 #define MSTR_ACLK_CGC_DIS BIT(10) ··· 133 134 /* PARF_LTSSM register fields */ 135 #define LTSSM_EN BIT(8) 136 + 137 + /* PARF_SLV_DBI_ELBI */ 138 + #define SLV_DBI_ELBI_ADDR_BASE GENMASK(11, 0) 139 140 /* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */ 141 #define PARF_INT_ALL_LINK_UP BIT(13) ··· 247 int (*get_resources)(struct qcom_pcie *pcie); 248 int (*init)(struct qcom_pcie *pcie); 249 int (*post_init)(struct qcom_pcie *pcie); 250 void (*deinit)(struct qcom_pcie *pcie); 251 void (*ltssm_enable)(struct qcom_pcie *pcie); 252 int (*config_sid)(struct qcom_pcie *pcie); ··· 276 struct qcom_pcie { 277 struct dw_pcie *pci; 278 void __iomem *parf; /* DT parf */ 279 void __iomem *mhi; 280 union qcom_pcie_resources res; 281 struct icc_path *icc_mem; 282 struct icc_path *icc_cpu; 283 const struct qcom_pcie_cfg *cfg; ··· 297 struct qcom_pcie_port *port; 298 int val = assert ? 1 : 0; 299 300 + list_for_each_entry(port, &pcie->ports, list) 301 + gpiod_set_value_cansleep(port->reset, val); 302 303 usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500); 304 } ··· 318 qcom_perst_assert(pcie, false); 319 } 320 321 + static void qcom_pci_config_ecam(struct dw_pcie_rp *pp) 322 + { 323 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 324 + struct qcom_pcie *pcie = to_qcom_pcie(pci); 325 + u64 addr, addr_end; 326 + u32 val; 327 + 328 + writel_relaxed(lower_32_bits(pci->dbi_phys_addr), pcie->parf + PARF_ECAM_BASE); 329 + writel_relaxed(upper_32_bits(pci->dbi_phys_addr), pcie->parf + PARF_ECAM_BASE_HI); 330 + 331 + /* 332 + * The only device on the root bus is a single Root Port. If we try to 333 + * access any devices other than Device/Function 00.0 on Bus 0, the TLP 334 + * will go outside of the controller to the PCI bus. But with CFG Shift 335 + * Feature (ECAM) enabled in iATU, there is no guarantee that the 336 + * response is going to be all F's. Hence, to make sure that the 337 + * requester gets all F's response for accesses other than the Root 338 + * Port, configure iATU to block the transactions starting from 339 + * function 1 of the root bus to the end of the root bus (i.e., from 340 + * dbi_base + 4KB to dbi_base + 1MB). 
341 + */ 342 + addr = pci->dbi_phys_addr + SZ_4K; 343 + writel_relaxed(lower_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_WR_BASE); 344 + writel_relaxed(upper_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_WR_BASE_HI); 345 + 346 + writel_relaxed(lower_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_RD_BASE); 347 + writel_relaxed(upper_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_RD_BASE_HI); 348 + 349 + addr_end = pci->dbi_phys_addr + SZ_1M - 1; 350 + 351 + writel_relaxed(lower_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_WR_LIMIT); 352 + writel_relaxed(upper_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_WR_LIMIT_HI); 353 + 354 + writel_relaxed(lower_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_RD_LIMIT); 355 + writel_relaxed(upper_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_RD_LIMIT_HI); 356 + 357 + val = readl_relaxed(pcie->parf + PARF_SYS_CTRL); 358 + val |= PCIE_ECAM_BLOCKER_EN; 359 + writel_relaxed(val, pcie->parf + PARF_SYS_CTRL); 360 + } 361 + 362 static int qcom_pcie_start_link(struct dw_pcie *pci) 363 { 364 struct qcom_pcie *pcie = to_qcom_pcie(pci); 365 366 + qcom_pcie_common_set_equalization(pci); 367 + 368 + if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) 369 qcom_pcie_common_set_16gt_lane_margining(pci); 370 371 /* Enable Link Training state machine */ 372 if (pcie->cfg->ops->ltssm_enable) ··· 414 415 static void qcom_pcie_2_1_0_ltssm_enable(struct qcom_pcie *pcie) 416 { 417 + struct dw_pcie *pci = pcie->pci; 418 u32 val; 419 420 + if (!pci->elbi_base) { 421 + dev_err(pci->dev, "ELBI is not present\n"); 422 + return; 423 + } 424 /* enable link training */ 425 + val = readl(pci->elbi_base + ELBI_SYS_CTRL); 426 val |= ELBI_SYS_CTRL_LT_ENABLE; 427 + writel(val, pci->elbi_base + ELBI_SYS_CTRL); 428 } 429 430 static int qcom_pcie_get_resources_2_1_0(struct qcom_pcie *pcie) ··· 1040 return 0; 1041 } 1042 1043 static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie) 1044 { 1045 struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0; ··· 1253 return val & PCI_EXP_LNKSTA_DLLLA; 1254 } 1255 1256 static void qcom_pcie_phy_power_off(struct qcom_pcie *pcie) 1257 { 1258 struct qcom_pcie_port *port; 1259 1260 + list_for_each_entry(port, &pcie->ports, list) 1261 + phy_power_off(port->phy); 1262 } 1263 1264 static int qcom_pcie_phy_power_on(struct qcom_pcie *pcie) 1265 { 1266 struct qcom_pcie_port *port; 1267 + int ret; 1268 1269 + list_for_each_entry(port, &pcie->ports, list) { 1270 + ret = phy_set_mode_ext(port->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC); 1271 if (ret) 1272 return ret; 1273 1274 + ret = phy_power_on(port->phy); 1275 + if (ret) { 1276 + qcom_pcie_phy_power_off(pcie); 1277 return ret; 1278 } 1279 } 1280 1281 + return 0; 1282 } 1283 1284 static int qcom_pcie_host_init(struct dw_pcie_rp *pp) 1285 { 1286 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 1287 struct qcom_pcie *pcie = to_qcom_pcie(pci); 1288 + u16 offset; 1289 int ret; 1290 1291 qcom_ep_reset_assert(pcie); ··· 1317 ret = pcie->cfg->ops->init(pcie); 1318 if (ret) 1319 return ret; 1320 + 1321 + if (pp->ecam_enabled) { 1322 + /* 1323 + * Override ELBI when ECAM is enabled, as when ECAM is enabled, 1324 + * ELBI moves under the 'config' space. 
1325 + */ 1326 + offset = FIELD_GET(SLV_DBI_ELBI_ADDR_BASE, readl(pcie->parf + PARF_SLV_DBI_ELBI)); 1327 + pci->elbi_base = pci->dbi_base + offset; 1328 + 1329 + qcom_pci_config_ecam(pp); 1330 + } 1331 1332 ret = qcom_pcie_phy_power_on(pcie); 1333 if (ret) ··· 1358 pcie->cfg->ops->deinit(pcie); 1359 } 1360 1361 static const struct dw_pcie_host_ops qcom_pcie_dw_ops = { 1362 .init = qcom_pcie_host_init, 1363 .deinit = qcom_pcie_host_deinit, 1364 }; 1365 1366 /* Qcom IP rev.: 2.1.0 Synopsys IP rev.: 4.01a */ ··· 1432 .get_resources = qcom_pcie_get_resources_2_7_0, 1433 .init = qcom_pcie_init_2_7_0, 1434 .post_init = qcom_pcie_post_init_2_7_0, 1435 .deinit = qcom_pcie_deinit_2_7_0, 1436 .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, 1437 .config_sid = qcom_pcie_config_sid_1_9_0, ··· 1443 .get_resources = qcom_pcie_get_resources_2_7_0, 1444 .init = qcom_pcie_init_2_7_0, 1445 .post_init = qcom_pcie_post_init_2_7_0, 1446 .deinit = qcom_pcie_deinit_2_7_0, 1447 .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, 1448 }; ··· 1740 int ret = -ENOENT; 1741 1742 for_each_available_child_of_node_scoped(dev->of_node, of_port) { 1743 + if (!of_node_is_type(of_port, "pci")) 1744 + continue; 1745 ret = qcom_pcie_parse_port(pcie, of_port); 1746 if (ret) 1747 goto err_port_del; ··· 1748 return ret; 1749 1750 err_port_del: 1751 + list_for_each_entry_safe(port, tmp, &pcie->ports, list) { 1752 + phy_exit(port->phy); 1753 list_del(&port->list); 1754 + } 1755 1756 return ret; 1757 } ··· 1757 static int qcom_pcie_parse_legacy_binding(struct qcom_pcie *pcie) 1758 { 1759 struct device *dev = pcie->pci->dev; 1760 + struct qcom_pcie_port *port; 1761 + struct gpio_desc *reset; 1762 + struct phy *phy; 1763 int ret; 1764 1765 + phy = devm_phy_optional_get(dev, "pciephy"); 1766 + if (IS_ERR(phy)) 1767 + return PTR_ERR(phy); 1768 1769 + reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH); 1770 + if (IS_ERR(reset)) 1771 + return PTR_ERR(reset); 1772 1773 + ret = phy_init(phy); 1774 if (ret) 1775 return ret; 1776 + 1777 + port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL); 1778 + if (!port) 1779 + return -ENOMEM; 1780 + 1781 + port->reset = reset; 1782 + port->phy = phy; 1783 + INIT_LIST_HEAD(&port->list); 1784 + list_add_tail(&port->list, &pcie->ports); 1785 1786 return 0; 1787 } ··· 1858 pcie->parf = devm_platform_ioremap_resource_byname(pdev, "parf"); 1859 if (IS_ERR(pcie->parf)) { 1860 ret = PTR_ERR(pcie->parf); 1861 goto err_pm_runtime_put; 1862 } 1863 ··· 1984 err_host_deinit: 1985 dw_pcie_host_deinit(pp); 1986 err_phy_exit: 1987 + list_for_each_entry_safe(port, tmp, &pcie->ports, list) { 1988 + phy_exit(port->phy); 1989 list_del(&port->list); 1990 + } 1991 err_pm_runtime_put: 1992 pm_runtime_put(dev); 1993 pm_runtime_disable(dev);
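Two address-math details in the qcom changes are easy to miss: the ELBI offset is a 12-bit value read back from PARF_SLV_DBI_ELBI and applied on top of dbi_base, and the AXI blocker windows span dbi + 4 KiB through dbi + 1 MiB - 1 so that root-bus config accesses to anything other than function 00.0 come back as all F's. A sketch of that arithmetic (the register readback value is made up):

  #include <stdint.h>
  #include <stdio.h>

  #define SLV_DBI_ELBI_ADDR_BASE 0xfffULL /* GENMASK(11, 0) */
  #define SZ_4K 0x1000ULL
  #define SZ_1M 0x100000ULL

  int main(void)
  {
          uint64_t dbi_phys = 0x60000000ULL;  /* hypothetical DBI/ECAM base */
          uint32_t slv_dbi_elbi = 0x0e00;     /* made-up PARF readback */

          uint64_t elbi = dbi_phys + (slv_dbi_elbi & SLV_DBI_ELBI_ADDR_BASE);
          uint64_t blk_base = dbi_phys + SZ_4K;     /* first non-00.0 function */
          uint64_t blk_end  = dbi_phys + SZ_1M - 1; /* end of the root bus */

          printf("elbi @ 0x%llx, blocked 0x%llx-0x%llx\n",
                 (unsigned long long)elbi,
                 (unsigned long long)blk_base,
                 (unsigned long long)blk_end);
          return 0;
  }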
+25 -5
drivers/pci/controller/dwc/pcie-rcar-gen4.c
··· 182 return ret; 183 } 184 185 - if (!reset_control_status(dw->core_rsts[DW_PCIE_PWR_RST].rstc)) 186 reset_control_assert(dw->core_rsts[DW_PCIE_PWR_RST].rstc); 187 188 val = readl(rcar->base + PCIEMSR0); 189 if (rcar->drvdata->mode == DW_PCIE_RC_TYPE) { ··· 212 ret = reset_control_deassert(dw->core_rsts[DW_PCIE_PWR_RST].rstc); 213 if (ret) 214 goto err_unprepare; 215 216 if (rcar->drvdata->additional_common_init) 217 rcar->drvdata->additional_common_init(rcar); ··· 420 } 421 422 static const struct pci_epc_features rcar_gen4_pcie_epc_features = { 423 - .linkup_notifier = false, 424 .msi_capable = true, 425 - .msix_capable = false, 426 .bar[BAR_1] = { .type = BAR_RESERVED, }, 427 .bar[BAR_3] = { .type = BAR_RESERVED, }, 428 .bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = 256 }, ··· 721 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(23, 22), BIT(22)); 722 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(18, 16), GENMASK(17, 16)); 723 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(7, 6), BIT(6)); 724 - rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(2, 0), GENMASK(11, 0)); 725 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x1d4, GENMASK(16, 15), GENMASK(16, 15)); 726 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x514, BIT(26), BIT(26)); 727 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x0f8, BIT(16), 0); ··· 731 val &= ~APP_HOLD_PHY_RST; 732 writel(val, rcar->base + PCIERSTCTRL1); 733 734 - ret = readl_poll_timeout(rcar->phy_base + 0x0f8, val, !(val & BIT(18)), 100, 10000); 735 if (ret < 0) 736 return ret; 737
··· 182 return ret; 183 } 184 185 + if (!reset_control_status(dw->core_rsts[DW_PCIE_PWR_RST].rstc)) { 186 reset_control_assert(dw->core_rsts[DW_PCIE_PWR_RST].rstc); 187 + /* 188 + * R-Car V4H Reference Manual R19UH0186EJ0130 Rev.1.30 Apr. 189 + * 21, 2025 page 585 Figure 9.3.2 Software Reset flow (B) 190 + * indicates that for peripherals in HSC domain, after 191 + * reset has been asserted by writing a matching reset bit 192 + * into register SRCR, it is mandatory to wait 1ms. 193 + */ 194 + fsleep(1000); 195 + } 196 197 val = readl(rcar->base + PCIEMSR0); 198 if (rcar->drvdata->mode == DW_PCIE_RC_TYPE) { ··· 203 ret = reset_control_deassert(dw->core_rsts[DW_PCIE_PWR_RST].rstc); 204 if (ret) 205 goto err_unprepare; 206 + 207 + /* 208 + * Assure the reset is latched and the core is ready for DBI access. 209 + * On R-Car V4H, the PCIe reset is asynchronous and does not take 210 + * effect immediately, but needs a short time to complete. In case 211 + * DBI access happens in that short time, that access generates an 212 + * SError. To make sure that condition can never happen, read back the 213 + * state of the reset, which should turn the asynchronous reset into 214 + * synchronous one, and wait a little over 1ms to add additional 215 + * safety margin. 216 + */ 217 + reset_control_status(dw->core_rsts[DW_PCIE_PWR_RST].rstc); 218 + fsleep(1000); 219 220 if (rcar->drvdata->additional_common_init) 221 rcar->drvdata->additional_common_init(rcar); ··· 398 } 399 400 static const struct pci_epc_features rcar_gen4_pcie_epc_features = { 401 .msi_capable = true, 402 .bar[BAR_1] = { .type = BAR_RESERVED, }, 403 .bar[BAR_3] = { .type = BAR_RESERVED, }, 404 .bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = 256 }, ··· 701 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(23, 22), BIT(22)); 702 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(18, 16), GENMASK(17, 16)); 703 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(7, 6), BIT(6)); 704 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(2, 0), GENMASK(1, 0)); 705 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x1d4, GENMASK(16, 15), GENMASK(16, 15)); 706 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x514, BIT(26), BIT(26)); 707 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x0f8, BIT(16), 0); ··· 711 val &= ~APP_HOLD_PHY_RST; 712 writel(val, rcar->base + PCIERSTCTRL1); 713 714 + ret = readl_poll_timeout(rcar->phy_base + 0x0f8, val, val & BIT(18), 100, 10000); 715 if (ret < 0) 716 return ret; 717
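Apart from the documented 1 ms settling waits, note the polarity fix at the end of the hunk: the PHY poll now succeeds when BIT(18) becomes set, not clear. The poll has the usual readl_poll_timeout() shape; a busy-wait restatement with a simulated register (the real helper sleeps between reads):

  #include <stdint.h>
  #include <stdio.h>

  /* toy stand-in for the PHY status register: ready after a few reads */
  static uint32_t phy_read(void)
  {
          static int reads;

          return (++reads >= 3) ? (1u << 18) : 0;
  }

  int main(void)
  {
          uint32_t val = 0;
          int tries = 10000 / 100;        /* timeout_us / delay_us */

          while (tries--) {
                  val = phy_read();
                  if (val & (1u << 18))   /* the "val & BIT(18)" condition */
                          break;
                  /* usleep(100) here in a faithful version */
          }
          printf("%s\n", (val & (1u << 18)) ? "PHY ready" : "timed out");
          return 0;
  }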
+364
drivers/pci/controller/dwc/pcie-stm32-ep.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * STMicroelectronics STM32MP25 PCIe endpoint driver. 4 + * 5 + * Copyright (C) 2025 STMicroelectronics 6 + * Author: Christian Bruel <christian.bruel@foss.st.com> 7 + */ 8 + 9 + #include <linux/clk.h> 10 + #include <linux/mfd/syscon.h> 11 + #include <linux/of_platform.h> 12 + #include <linux/of_gpio.h> 13 + #include <linux/phy/phy.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/pm_runtime.h> 16 + #include <linux/regmap.h> 17 + #include <linux/reset.h> 18 + #include "pcie-designware.h" 19 + #include "pcie-stm32.h" 20 + 21 + struct stm32_pcie { 22 + struct dw_pcie pci; 23 + struct regmap *regmap; 24 + struct reset_control *rst; 25 + struct phy *phy; 26 + struct clk *clk; 27 + struct gpio_desc *perst_gpio; 28 + unsigned int perst_irq; 29 + }; 30 + 31 + static void stm32_pcie_ep_init(struct dw_pcie_ep *ep) 32 + { 33 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 34 + enum pci_barno bar; 35 + 36 + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 37 + dw_pcie_ep_reset_bar(pci, bar); 38 + } 39 + 40 + static int stm32_pcie_enable_link(struct dw_pcie *pci) 41 + { 42 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 43 + 44 + regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, 45 + STM32MP25_PCIECR_LTSSM_EN, 46 + STM32MP25_PCIECR_LTSSM_EN); 47 + 48 + return dw_pcie_wait_for_link(pci); 49 + } 50 + 51 + static void stm32_pcie_disable_link(struct dw_pcie *pci) 52 + { 53 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 54 + 55 + regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, STM32MP25_PCIECR_LTSSM_EN, 0); 56 + } 57 + 58 + static int stm32_pcie_start_link(struct dw_pcie *pci) 59 + { 60 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 61 + int ret; 62 + 63 + dev_dbg(pci->dev, "Enable link\n"); 64 + 65 + ret = stm32_pcie_enable_link(pci); 66 + if (ret) { 67 + dev_err(pci->dev, "PCIe cannot establish link: %d\n", ret); 68 + return ret; 69 + } 70 + 71 + enable_irq(stm32_pcie->perst_irq); 72 + 73 + return 0; 74 + } 75 + 76 + static void stm32_pcie_stop_link(struct dw_pcie *pci) 77 + { 78 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 79 + 80 + dev_dbg(pci->dev, "Disable link\n"); 81 + 82 + disable_irq(stm32_pcie->perst_irq); 83 + 84 + stm32_pcie_disable_link(pci); 85 + } 86 + 87 + static int stm32_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no, 88 + unsigned int type, u16 interrupt_num) 89 + { 90 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 91 + 92 + switch (type) { 93 + case PCI_IRQ_INTX: 94 + return dw_pcie_ep_raise_intx_irq(ep, func_no); 95 + case PCI_IRQ_MSI: 96 + return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num); 97 + default: 98 + dev_err(pci->dev, "UNKNOWN IRQ type\n"); 99 + return -EINVAL; 100 + } 101 + } 102 + 103 + static const struct pci_epc_features stm32_pcie_epc_features = { 104 + .msi_capable = true, 105 + .align = SZ_64K, 106 + }; 107 + 108 + static const struct pci_epc_features* 109 + stm32_pcie_get_features(struct dw_pcie_ep *ep) 110 + { 111 + return &stm32_pcie_epc_features; 112 + } 113 + 114 + static const struct dw_pcie_ep_ops stm32_pcie_ep_ops = { 115 + .init = stm32_pcie_ep_init, 116 + .raise_irq = stm32_pcie_raise_irq, 117 + .get_features = stm32_pcie_get_features, 118 + }; 119 + 120 + static const struct dw_pcie_ops dw_pcie_ops = { 121 + .start_link = stm32_pcie_start_link, 122 + .stop_link = stm32_pcie_stop_link, 123 + }; 124 + 125 + static int stm32_pcie_enable_resources(struct stm32_pcie *stm32_pcie) 126 + { 127 + int ret; 128 + 129 + ret = 
phy_init(stm32_pcie->phy); 130 + if (ret) 131 + return ret; 132 + 133 + ret = clk_prepare_enable(stm32_pcie->clk); 134 + if (ret) 135 + phy_exit(stm32_pcie->phy); 136 + 137 + return ret; 138 + } 139 + 140 + static void stm32_pcie_disable_resources(struct stm32_pcie *stm32_pcie) 141 + { 142 + clk_disable_unprepare(stm32_pcie->clk); 143 + 144 + phy_exit(stm32_pcie->phy); 145 + } 146 + 147 + static void stm32_pcie_perst_assert(struct dw_pcie *pci) 148 + { 149 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 150 + struct dw_pcie_ep *ep = &stm32_pcie->pci.ep; 151 + struct device *dev = pci->dev; 152 + 153 + dev_dbg(dev, "PERST asserted by host\n"); 154 + 155 + pci_epc_deinit_notify(ep->epc); 156 + 157 + stm32_pcie_disable_resources(stm32_pcie); 158 + 159 + pm_runtime_put_sync(dev); 160 + } 161 + 162 + static void stm32_pcie_perst_deassert(struct dw_pcie *pci) 163 + { 164 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 165 + struct device *dev = pci->dev; 166 + struct dw_pcie_ep *ep = &pci->ep; 167 + int ret; 168 + 169 + dev_dbg(dev, "PERST de-asserted by host\n"); 170 + 171 + ret = pm_runtime_resume_and_get(dev); 172 + if (ret < 0) { 173 + dev_err(dev, "Failed to resume runtime PM: %d\n", ret); 174 + return; 175 + } 176 + 177 + ret = stm32_pcie_enable_resources(stm32_pcie); 178 + if (ret) { 179 + dev_err(dev, "Failed to enable resources: %d\n", ret); 180 + goto err_pm_put_sync; 181 + } 182 + 183 + /* 184 + * Reprogram the configuration space registers here because the DBI 185 + * registers were reset by the PHY RCC during phy_init(). 186 + */ 187 + ret = dw_pcie_ep_init_registers(ep); 188 + if (ret) { 189 + dev_err(dev, "Failed to complete initialization: %d\n", ret); 190 + goto err_disable_resources; 191 + } 192 + 193 + pci_epc_init_notify(ep->epc); 194 + 195 + return; 196 + 197 + err_disable_resources: 198 + stm32_pcie_disable_resources(stm32_pcie); 199 + 200 + err_pm_put_sync: 201 + pm_runtime_put_sync(dev); 202 + } 203 + 204 + static irqreturn_t stm32_pcie_ep_perst_irq_thread(int irq, void *data) 205 + { 206 + struct stm32_pcie *stm32_pcie = data; 207 + struct dw_pcie *pci = &stm32_pcie->pci; 208 + u32 perst; 209 + 210 + perst = gpiod_get_value(stm32_pcie->perst_gpio); 211 + if (perst) 212 + stm32_pcie_perst_assert(pci); 213 + else 214 + stm32_pcie_perst_deassert(pci); 215 + 216 + irq_set_irq_type(gpiod_to_irq(stm32_pcie->perst_gpio), 217 + (perst ? 
IRQF_TRIGGER_HIGH : IRQF_TRIGGER_LOW)); 218 + 219 + return IRQ_HANDLED; 220 + } 221 + 222 + static int stm32_add_pcie_ep(struct stm32_pcie *stm32_pcie, 223 + struct platform_device *pdev) 224 + { 225 + struct dw_pcie_ep *ep = &stm32_pcie->pci.ep; 226 + struct device *dev = &pdev->dev; 227 + int ret; 228 + 229 + ret = regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, 230 + STM32MP25_PCIECR_TYPE_MASK, 231 + STM32MP25_PCIECR_EP); 232 + if (ret) 233 + return ret; 234 + 235 + reset_control_assert(stm32_pcie->rst); 236 + reset_control_deassert(stm32_pcie->rst); 237 + 238 + ep->ops = &stm32_pcie_ep_ops; 239 + 240 + ret = dw_pcie_ep_init(ep); 241 + if (ret) { 242 + dev_err(dev, "Failed to initialize ep: %d\n", ret); 243 + return ret; 244 + } 245 + 246 + ret = stm32_pcie_enable_resources(stm32_pcie); 247 + if (ret) { 248 + dev_err(dev, "Failed to enable resources: %d\n", ret); 249 + dw_pcie_ep_deinit(ep); 250 + return ret; 251 + } 252 + 253 + return 0; 254 + } 255 + 256 + static int stm32_pcie_probe(struct platform_device *pdev) 257 + { 258 + struct stm32_pcie *stm32_pcie; 259 + struct device *dev = &pdev->dev; 260 + int ret; 261 + 262 + stm32_pcie = devm_kzalloc(dev, sizeof(*stm32_pcie), GFP_KERNEL); 263 + if (!stm32_pcie) 264 + return -ENOMEM; 265 + 266 + stm32_pcie->pci.dev = dev; 267 + stm32_pcie->pci.ops = &dw_pcie_ops; 268 + 269 + stm32_pcie->regmap = syscon_regmap_lookup_by_compatible("st,stm32mp25-syscfg"); 270 + if (IS_ERR(stm32_pcie->regmap)) 271 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->regmap), 272 + "No syscfg specified\n"); 273 + 274 + stm32_pcie->phy = devm_phy_get(dev, NULL); 275 + if (IS_ERR(stm32_pcie->phy)) 276 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->phy), 277 + "failed to get pcie-phy\n"); 278 + 279 + stm32_pcie->clk = devm_clk_get(dev, NULL); 280 + if (IS_ERR(stm32_pcie->clk)) 281 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->clk), 282 + "Failed to get PCIe clock source\n"); 283 + 284 + stm32_pcie->rst = devm_reset_control_get_exclusive(dev, NULL); 285 + if (IS_ERR(stm32_pcie->rst)) 286 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->rst), 287 + "Failed to get PCIe reset\n"); 288 + 289 + stm32_pcie->perst_gpio = devm_gpiod_get(dev, "reset", GPIOD_IN); 290 + if (IS_ERR(stm32_pcie->perst_gpio)) 291 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->perst_gpio), 292 + "Failed to get reset GPIO\n"); 293 + 294 + ret = phy_set_mode(stm32_pcie->phy, PHY_MODE_PCIE); 295 + if (ret) 296 + return ret; 297 + 298 + platform_set_drvdata(pdev, stm32_pcie); 299 + 300 + pm_runtime_get_noresume(dev); 301 + 302 + ret = devm_pm_runtime_enable(dev); 303 + if (ret < 0) { 304 + pm_runtime_put_noidle(&pdev->dev); 305 + return dev_err_probe(dev, ret, "Failed to enable runtime PM\n"); 306 + } 307 + 308 + stm32_pcie->perst_irq = gpiod_to_irq(stm32_pcie->perst_gpio); 309 + 310 + /* Will be enabled in start_link when device is initialized. 
*/ 311 + irq_set_status_flags(stm32_pcie->perst_irq, IRQ_NOAUTOEN); 312 + 313 + ret = devm_request_threaded_irq(dev, stm32_pcie->perst_irq, NULL, 314 + stm32_pcie_ep_perst_irq_thread, 315 + IRQF_TRIGGER_HIGH | IRQF_ONESHOT, 316 + "perst_irq", stm32_pcie); 317 + if (ret) { 318 + pm_runtime_put_noidle(&pdev->dev); 319 + return dev_err_probe(dev, ret, "Failed to request PERST IRQ\n"); 320 + } 321 + 322 + ret = stm32_add_pcie_ep(stm32_pcie, pdev); 323 + if (ret) 324 + pm_runtime_put_noidle(&pdev->dev); 325 + 326 + return ret; 327 + } 328 + 329 + static void stm32_pcie_remove(struct platform_device *pdev) 330 + { 331 + struct stm32_pcie *stm32_pcie = platform_get_drvdata(pdev); 332 + struct dw_pcie *pci = &stm32_pcie->pci; 333 + struct dw_pcie_ep *ep = &pci->ep; 334 + 335 + dw_pcie_stop_link(pci); 336 + 337 + pci_epc_deinit_notify(ep->epc); 338 + dw_pcie_ep_deinit(ep); 339 + 340 + stm32_pcie_disable_resources(stm32_pcie); 341 + 342 + pm_runtime_put_sync(&pdev->dev); 343 + } 344 + 345 + static const struct of_device_id stm32_pcie_ep_of_match[] = { 346 + { .compatible = "st,stm32mp25-pcie-ep" }, 347 + {}, 348 + }; 349 + 350 + static struct platform_driver stm32_pcie_ep_driver = { 351 + .probe = stm32_pcie_probe, 352 + .remove = stm32_pcie_remove, 353 + .driver = { 354 + .name = "stm32-ep-pcie", 355 + .of_match_table = stm32_pcie_ep_of_match, 356 + }, 357 + }; 358 + 359 + module_platform_driver(stm32_pcie_ep_driver); 360 + 361 + MODULE_AUTHOR("Christian Bruel <christian.bruel@foss.st.com>"); 362 + MODULE_DESCRIPTION("STM32MP25 PCIe Endpoint Controller driver"); 363 + MODULE_LICENSE("GPL"); 364 + MODULE_DEVICE_TABLE(of, stm32_pcie_ep_of_match);
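The endpoint driver tracks PERST# with a single threaded GPIO interrupt and retargets the trigger level from the handler after each event, so assert and deassert each fire exactly once. A toy model of that alternating level-watch (the real code re-arms via irq_set_irq_type() and the line's polarity comes from the devicetree):

  #include <stdbool.h>
  #include <stdio.h>

  /* simulated PERST# level over time, as sampled from the GPIO */
  static const bool perst[] = { true, true, false, false, true, false };

  int main(void)
  {
          bool armed_for = true;          /* initially waits for "asserted" */

          for (unsigned int i = 0; i < sizeof(perst) / sizeof(perst[0]); i++) {
                  if (perst[i] != armed_for)
                          continue;       /* armed level not hit: no IRQ */
                  printf("PERST %s: %s\n",
                         armed_for ? "asserted" : "deasserted",
                         armed_for ? "tear down EPC, drop clocks/PHY"
                                   : "re-enable resources, reinit DBI");
                  armed_for = !armed_for; /* re-arm for the opposite level */
          }
          return 0;
  }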
+358
drivers/pci/controller/dwc/pcie-stm32.c
···
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * STMicroelectronics STM32MP25 PCIe root complex driver. 4 + * 5 + * Copyright (C) 2025 STMicroelectronics 6 + * Author: Christian Bruel <christian.bruel@foss.st.com> 7 + */ 8 + 9 + #include <linux/clk.h> 10 + #include <linux/mfd/syscon.h> 11 + #include <linux/of_platform.h> 12 + #include <linux/phy/phy.h> 13 + #include <linux/pinctrl/consumer.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/pm_runtime.h> 16 + #include <linux/pm_wakeirq.h> 17 + #include <linux/regmap.h> 18 + #include <linux/reset.h> 19 + #include "pcie-designware.h" 20 + #include "pcie-stm32.h" 21 + #include "../../pci.h" 22 + 23 + struct stm32_pcie { 24 + struct dw_pcie pci; 25 + struct regmap *regmap; 26 + struct reset_control *rst; 27 + struct phy *phy; 28 + struct clk *clk; 29 + struct gpio_desc *perst_gpio; 30 + struct gpio_desc *wake_gpio; 31 + }; 32 + 33 + static void stm32_pcie_deassert_perst(struct stm32_pcie *stm32_pcie) 34 + { 35 + if (stm32_pcie->perst_gpio) { 36 + msleep(PCIE_T_PVPERL_MS); 37 + gpiod_set_value(stm32_pcie->perst_gpio, 0); 38 + } 39 + 40 + msleep(PCIE_RESET_CONFIG_WAIT_MS); 41 + } 42 + 43 + static void stm32_pcie_assert_perst(struct stm32_pcie *stm32_pcie) 44 + { 45 + gpiod_set_value(stm32_pcie->perst_gpio, 1); 46 + } 47 + 48 + static int stm32_pcie_start_link(struct dw_pcie *pci) 49 + { 50 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 51 + 52 + return regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, 53 + STM32MP25_PCIECR_LTSSM_EN, 54 + STM32MP25_PCIECR_LTSSM_EN); 55 + } 56 + 57 + static void stm32_pcie_stop_link(struct dw_pcie *pci) 58 + { 59 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 60 + 61 + regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, 62 + STM32MP25_PCIECR_LTSSM_EN, 0); 63 + } 64 + 65 + static int stm32_pcie_suspend_noirq(struct device *dev) 66 + { 67 + struct stm32_pcie *stm32_pcie = dev_get_drvdata(dev); 68 + int ret; 69 + 70 + ret = dw_pcie_suspend_noirq(&stm32_pcie->pci); 71 + if (ret) 72 + return ret; 73 + 74 + stm32_pcie_assert_perst(stm32_pcie); 75 + 76 + clk_disable_unprepare(stm32_pcie->clk); 77 + 78 + if (!device_wakeup_path(dev)) 79 + phy_exit(stm32_pcie->phy); 80 + 81 + return pinctrl_pm_select_sleep_state(dev); 82 + } 83 + 84 + static int stm32_pcie_resume_noirq(struct device *dev) 85 + { 86 + struct stm32_pcie *stm32_pcie = dev_get_drvdata(dev); 87 + int ret; 88 + 89 + /* 90 + * The core clock is gated with CLKREQ# from the COMBOPHY REFCLK, 91 + * thus if no device is present, must deassert it with a GPIO from 92 + * pinctrl pinmux before accessing the DBI registers. 
93 + */ 94 + ret = pinctrl_pm_select_init_state(dev); 95 + if (ret) { 96 + dev_err(dev, "Failed to activate pinctrl pm state: %d\n", ret); 97 + return ret; 98 + } 99 + 100 + if (!device_wakeup_path(dev)) { 101 + ret = phy_init(stm32_pcie->phy); 102 + if (ret) { 103 + pinctrl_pm_select_default_state(dev); 104 + return ret; 105 + } 106 + } 107 + 108 + ret = clk_prepare_enable(stm32_pcie->clk); 109 + if (ret) 110 + goto err_phy_exit; 111 + 112 + stm32_pcie_deassert_perst(stm32_pcie); 113 + 114 + ret = dw_pcie_resume_noirq(&stm32_pcie->pci); 115 + if (ret) 116 + goto err_disable_clk; 117 + 118 + pinctrl_pm_select_default_state(dev); 119 + 120 + return 0; 121 + 122 + err_disable_clk: 123 + stm32_pcie_assert_perst(stm32_pcie); 124 + clk_disable_unprepare(stm32_pcie->clk); 125 + 126 + err_phy_exit: 127 + phy_exit(stm32_pcie->phy); 128 + pinctrl_pm_select_default_state(dev); 129 + 130 + return ret; 131 + } 132 + 133 + static const struct dev_pm_ops stm32_pcie_pm_ops = { 134 + NOIRQ_SYSTEM_SLEEP_PM_OPS(stm32_pcie_suspend_noirq, 135 + stm32_pcie_resume_noirq) 136 + }; 137 + 138 + static const struct dw_pcie_host_ops stm32_pcie_host_ops = { 139 + }; 140 + 141 + static const struct dw_pcie_ops dw_pcie_ops = { 142 + .start_link = stm32_pcie_start_link, 143 + .stop_link = stm32_pcie_stop_link 144 + }; 145 + 146 + static int stm32_add_pcie_port(struct stm32_pcie *stm32_pcie) 147 + { 148 + struct device *dev = stm32_pcie->pci.dev; 149 + unsigned int wake_irq; 150 + int ret; 151 + 152 + ret = phy_set_mode(stm32_pcie->phy, PHY_MODE_PCIE); 153 + if (ret) 154 + return ret; 155 + 156 + ret = phy_init(stm32_pcie->phy); 157 + if (ret) 158 + return ret; 159 + 160 + ret = regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, 161 + STM32MP25_PCIECR_TYPE_MASK, 162 + STM32MP25_PCIECR_RC); 163 + if (ret) 164 + goto err_phy_exit; 165 + 166 + stm32_pcie_deassert_perst(stm32_pcie); 167 + 168 + if (stm32_pcie->wake_gpio) { 169 + wake_irq = gpiod_to_irq(stm32_pcie->wake_gpio); 170 + ret = dev_pm_set_dedicated_wake_irq(dev, wake_irq); 171 + if (ret) { 172 + dev_err(dev, "Failed to enable wakeup irq %d\n", ret); 173 + goto err_assert_perst; 174 + } 175 + irq_set_irq_type(wake_irq, IRQ_TYPE_EDGE_FALLING); 176 + } 177 + 178 + return 0; 179 + 180 + err_assert_perst: 181 + stm32_pcie_assert_perst(stm32_pcie); 182 + 183 + err_phy_exit: 184 + phy_exit(stm32_pcie->phy); 185 + 186 + return ret; 187 + } 188 + 189 + static void stm32_remove_pcie_port(struct stm32_pcie *stm32_pcie) 190 + { 191 + dev_pm_clear_wake_irq(stm32_pcie->pci.dev); 192 + 193 + stm32_pcie_assert_perst(stm32_pcie); 194 + 195 + phy_exit(stm32_pcie->phy); 196 + } 197 + 198 + static int stm32_pcie_parse_port(struct stm32_pcie *stm32_pcie) 199 + { 200 + struct device *dev = stm32_pcie->pci.dev; 201 + struct device_node *root_port; 202 + 203 + root_port = of_get_next_available_child(dev->of_node, NULL); 204 + 205 + stm32_pcie->phy = devm_of_phy_get(dev, root_port, NULL); 206 + if (IS_ERR(stm32_pcie->phy)) { 207 + of_node_put(root_port); 208 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->phy), 209 + "Failed to get pcie-phy\n"); 210 + } 211 + 212 + stm32_pcie->perst_gpio = devm_fwnode_gpiod_get(dev, of_fwnode_handle(root_port), 213 + "reset", GPIOD_OUT_HIGH, NULL); 214 + if (IS_ERR(stm32_pcie->perst_gpio)) { 215 + if (PTR_ERR(stm32_pcie->perst_gpio) != -ENOENT) { 216 + of_node_put(root_port); 217 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->perst_gpio), 218 + "Failed to get reset GPIO\n"); 219 + } 220 + stm32_pcie->perst_gpio = NULL; 221 + } 222 + 223 + 
stm32_pcie->wake_gpio = devm_fwnode_gpiod_get(dev, of_fwnode_handle(root_port), 224 + "wake", GPIOD_IN, NULL); 225 + 226 + if (IS_ERR(stm32_pcie->wake_gpio)) { 227 + if (PTR_ERR(stm32_pcie->wake_gpio) != -ENOENT) { 228 + of_node_put(root_port); 229 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->wake_gpio), 230 + "Failed to get wake GPIO\n"); 231 + } 232 + stm32_pcie->wake_gpio = NULL; 233 + } 234 + 235 + of_node_put(root_port); 236 + 237 + return 0; 238 + } 239 + 240 + static int stm32_pcie_probe(struct platform_device *pdev) 241 + { 242 + struct stm32_pcie *stm32_pcie; 243 + struct device *dev = &pdev->dev; 244 + int ret; 245 + 246 + stm32_pcie = devm_kzalloc(dev, sizeof(*stm32_pcie), GFP_KERNEL); 247 + if (!stm32_pcie) 248 + return -ENOMEM; 249 + 250 + stm32_pcie->pci.dev = dev; 251 + stm32_pcie->pci.ops = &dw_pcie_ops; 252 + stm32_pcie->pci.pp.ops = &stm32_pcie_host_ops; 253 + 254 + stm32_pcie->regmap = syscon_regmap_lookup_by_compatible("st,stm32mp25-syscfg"); 255 + if (IS_ERR(stm32_pcie->regmap)) 256 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->regmap), 257 + "No syscfg specified\n"); 258 + 259 + stm32_pcie->clk = devm_clk_get(dev, NULL); 260 + if (IS_ERR(stm32_pcie->clk)) 261 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->clk), 262 + "Failed to get PCIe clock source\n"); 263 + 264 + stm32_pcie->rst = devm_reset_control_get_exclusive(dev, NULL); 265 + if (IS_ERR(stm32_pcie->rst)) 266 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->rst), 267 + "Failed to get PCIe reset\n"); 268 + 269 + ret = stm32_pcie_parse_port(stm32_pcie); 270 + if (ret) 271 + return ret; 272 + 273 + platform_set_drvdata(pdev, stm32_pcie); 274 + 275 + ret = stm32_add_pcie_port(stm32_pcie); 276 + if (ret) 277 + return ret; 278 + 279 + reset_control_assert(stm32_pcie->rst); 280 + reset_control_deassert(stm32_pcie->rst); 281 + 282 + ret = clk_prepare_enable(stm32_pcie->clk); 283 + if (ret) { 284 + dev_err(dev, "Core clock enable failed %d\n", ret); 285 + goto err_remove_port; 286 + } 287 + 288 + ret = pm_runtime_set_active(dev); 289 + if (ret < 0) { 290 + dev_err_probe(dev, ret, "Failed to activate runtime PM\n"); 291 + goto err_disable_clk; 292 + } 293 + 294 + pm_runtime_no_callbacks(dev); 295 + 296 + ret = devm_pm_runtime_enable(dev); 297 + if (ret < 0) { 298 + dev_err_probe(dev, ret, "Failed to enable runtime PM\n"); 299 + goto err_disable_clk; 300 + } 301 + 302 + ret = dw_pcie_host_init(&stm32_pcie->pci.pp); 303 + if (ret) 304 + goto err_disable_clk; 305 + 306 + if (stm32_pcie->wake_gpio) 307 + device_init_wakeup(dev, true); 308 + 309 + return 0; 310 + 311 + err_disable_clk: 312 + clk_disable_unprepare(stm32_pcie->clk); 313 + 314 + err_remove_port: 315 + stm32_remove_pcie_port(stm32_pcie); 316 + 317 + return ret; 318 + } 319 + 320 + static void stm32_pcie_remove(struct platform_device *pdev) 321 + { 322 + struct stm32_pcie *stm32_pcie = platform_get_drvdata(pdev); 323 + struct dw_pcie_rp *pp = &stm32_pcie->pci.pp; 324 + 325 + if (stm32_pcie->wake_gpio) 326 + device_init_wakeup(&pdev->dev, false); 327 + 328 + dw_pcie_host_deinit(pp); 329 + 330 + clk_disable_unprepare(stm32_pcie->clk); 331 + 332 + stm32_remove_pcie_port(stm32_pcie); 333 + 334 + pm_runtime_put_noidle(&pdev->dev); 335 + } 336 + 337 + static const struct of_device_id stm32_pcie_of_match[] = { 338 + { .compatible = "st,stm32mp25-pcie-rc" }, 339 + {}, 340 + }; 341 + 342 + static struct platform_driver stm32_pcie_driver = { 343 + .probe = stm32_pcie_probe, 344 + .remove = stm32_pcie_remove, 345 + .driver = { 346 + .name = "stm32-pcie", 347 + 
.of_match_table = stm32_pcie_of_match, 348 + .pm = &stm32_pcie_pm_ops, 349 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 350 + }, 351 + }; 352 + 353 + module_platform_driver(stm32_pcie_driver); 354 + 355 + MODULE_AUTHOR("Christian Bruel <christian.bruel@foss.st.com>"); 356 + MODULE_DESCRIPTION("STM32MP25 PCIe Controller driver"); 357 + MODULE_LICENSE("GPL"); 358 + MODULE_DEVICE_TABLE(of, stm32_pcie_of_match);
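The STM32MP25 glue driver above keeps link control in SoC SYSCFG space rather than in the DWC core: start_link()/stop_link() only toggle the LTSSM enable bit through a syscon regmap, while PERST# sequencing (PCIE_T_PVPERL_MS before deassert, PCIE_RESET_CONFIG_WAIT_MS after) lives in the probe and resume paths. A minimal sketch of that glue contract; glue_pcie, MY_SYSCFG_CR and MY_CR_LTSSM_EN are illustrative names, not the real identifiers:

#include <linux/bits.h>
#include <linux/regmap.h>
#include "pcie-designware.h"

#define MY_SYSCFG_CR	0x6000
#define MY_CR_LTSSM_EN	BIT(2)

struct glue_pcie {
	struct dw_pcie pci;
	struct regmap *regmap;
};

/* start_link() only kicks off LTSSM training and returns; link-up is
 * observed later by the DWC core's link polling.
 */
static int glue_start_link(struct dw_pcie *pci)
{
	struct glue_pcie *glue = dev_get_drvdata(pci->dev);

	/* Let the LTSSM leave Detect and begin training */
	return regmap_update_bits(glue->regmap, MY_SYSCFG_CR,
				  MY_CR_LTSSM_EN, MY_CR_LTSSM_EN);
}

static void glue_stop_link(struct dw_pcie *pci)
{
	struct glue_pcie *glue = dev_get_drvdata(pci->dev);

	regmap_update_bits(glue->regmap, MY_SYSCFG_CR, MY_CR_LTSSM_EN, 0);
}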
+16
drivers/pci/controller/dwc/pcie-stm32.h
···
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * ST PCIe driver definitions for STM32-MP25 SoC 4 + * 5 + * Copyright (C) 2025 STMicroelectronics - All Rights Reserved 6 + * Author: Christian Bruel <christian.bruel@foss.st.com> 7 + */ 8 + 9 + #define to_stm32_pcie(x) dev_get_drvdata((x)->dev) 10 + 11 + #define STM32MP25_PCIECR_TYPE_MASK GENMASK(11, 8) 12 + #define STM32MP25_PCIECR_EP 0 13 + #define STM32MP25_PCIECR_LTSSM_EN BIT(2) 14 + #define STM32MP25_PCIECR_RC BIT(10) 15 + 16 + #define SYSCFG_PCIECR 0x6000
+37 -14
drivers/pci/controller/dwc/pcie-tegra194.c
··· 1214 struct mrq_uphy_response resp; 1215 struct tegra_bpmp_message msg; 1216 struct mrq_uphy_request req; 1217 1218 /* 1219 * Controller-5 doesn't need to have its state set by BPMP-FW in ··· 1237 msg.rx.data = &resp; 1238 msg.rx.size = sizeof(resp); 1239 1240 - return tegra_bpmp_transfer(pcie->bpmp, &msg); 1241 } 1242 1243 static int tegra_pcie_bpmp_set_pll_state(struct tegra_pcie_dw *pcie, ··· 1252 struct mrq_uphy_response resp; 1253 struct tegra_bpmp_message msg; 1254 struct mrq_uphy_request req; 1255 1256 memset(&req, 0, sizeof(req)); 1257 memset(&resp, 0, sizeof(resp)); ··· 1272 msg.rx.data = &resp; 1273 msg.rx.size = sizeof(resp); 1274 1275 - return tegra_bpmp_transfer(pcie->bpmp, &msg); 1276 } 1277 1278 static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie) 1279 { 1280 struct dw_pcie_rp *pp = &pcie->pci.pp; 1281 - struct pci_bus *child, *root_bus = NULL; 1282 struct pci_dev *pdev; 1283 1284 /* ··· 1297 */ 1298 1299 list_for_each_entry(child, &pp->bridge->bus->children, node) { 1300 - /* Bring downstream devices to D0 if they are not already in */ 1301 if (child->parent == pp->bridge->bus) { 1302 - root_bus = child; 1303 break; 1304 } 1305 } 1306 1307 - if (!root_bus) { 1308 - dev_err(pcie->dev, "Failed to find downstream devices\n"); 1309 return; 1310 } 1311 1312 - list_for_each_entry(pdev, &root_bus->devices, bus_list) { 1313 if (PCI_SLOT(pdev->devfn) == 0) { 1314 if (pci_set_power_state(pdev, PCI_D0)) 1315 dev_err(pcie->dev, ··· 1736 ret); 1737 } 1738 1739 - ret = tegra_pcie_bpmp_set_pll_state(pcie, false); 1740 if (ret) 1741 - dev_err(pcie->dev, "Failed to turn off UPHY: %d\n", ret); 1742 1743 pcie->ep_state = EP_STATE_DISABLED; 1744 dev_dbg(pcie->dev, "Uninitialization of endpoint is completed\n"); ··· 1955 return IRQ_HANDLED; 1956 } 1957 1958 static int tegra_pcie_ep_raise_intx_irq(struct tegra_pcie_dw *pcie, u16 irq) 1959 { 1960 /* Tegra194 supports only INTA */ ··· 1978 1979 static int tegra_pcie_ep_raise_msi_irq(struct tegra_pcie_dw *pcie, u16 irq) 1980 { 1981 - if (unlikely(irq > 31)) 1982 return -EINVAL; 1983 1984 - appl_writel(pcie, BIT(irq), APPL_MSI_CTRL_1); 1985 1986 return 0; 1987 } ··· 2021 2022 static const struct pci_epc_features tegra_pcie_epc_features = { 2023 .linkup_notifier = true, 2024 - .msi_capable = false, 2025 - .msix_capable = false, 2026 .bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M, 2027 .only_64bit = true, }, 2028 .bar[BAR_1] = { .type = BAR_RESERVED, }, ··· 2039 } 2040 2041 static const struct dw_pcie_ep_ops pcie_ep_ops = { 2042 .raise_irq = tegra_pcie_ep_raise_irq, 2043 .get_features = tegra_pcie_ep_get_features, 2044 };
··· 1214 struct mrq_uphy_response resp; 1215 struct tegra_bpmp_message msg; 1216 struct mrq_uphy_request req; 1217 + int err; 1218 1219 /* 1220 * Controller-5 doesn't need to have its state set by BPMP-FW in ··· 1236 msg.rx.data = &resp; 1237 msg.rx.size = sizeof(resp); 1238 1239 + err = tegra_bpmp_transfer(pcie->bpmp, &msg); 1240 + if (err) 1241 + return err; 1242 + if (msg.rx.ret) 1243 + return -EINVAL; 1244 + 1245 + return 0; 1246 } 1247 1248 static int tegra_pcie_bpmp_set_pll_state(struct tegra_pcie_dw *pcie, ··· 1245 struct mrq_uphy_response resp; 1246 struct tegra_bpmp_message msg; 1247 struct mrq_uphy_request req; 1248 + int err; 1249 1250 memset(&req, 0, sizeof(req)); 1251 memset(&resp, 0, sizeof(resp)); ··· 1264 msg.rx.data = &resp; 1265 msg.rx.size = sizeof(resp); 1266 1267 + err = tegra_bpmp_transfer(pcie->bpmp, &msg); 1268 + if (err) 1269 + return err; 1270 + if (msg.rx.ret) 1271 + return -EINVAL; 1272 + 1273 + return 0; 1274 } 1275 1276 static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie) 1277 { 1278 struct dw_pcie_rp *pp = &pcie->pci.pp; 1279 + struct pci_bus *child, *root_port_bus = NULL; 1280 struct pci_dev *pdev; 1281 1282 /* ··· 1283 */ 1284 1285 list_for_each_entry(child, &pp->bridge->bus->children, node) { 1286 if (child->parent == pp->bridge->bus) { 1287 + root_port_bus = child; 1288 break; 1289 } 1290 } 1291 1292 + if (!root_port_bus) { 1293 + dev_err(pcie->dev, "Failed to find downstream bus of Root Port\n"); 1294 return; 1295 } 1296 1297 + /* Bring downstream devices to D0 if they are not already in */ 1298 + list_for_each_entry(pdev, &root_port_bus->devices, bus_list) { 1299 if (PCI_SLOT(pdev->devfn) == 0) { 1300 if (pci_set_power_state(pdev, PCI_D0)) 1301 dev_err(pcie->dev, ··· 1722 ret); 1723 } 1724 1725 + ret = tegra_pcie_bpmp_set_ctrl_state(pcie, false); 1726 if (ret) 1727 + dev_err(pcie->dev, "Failed to disable controller: %d\n", ret); 1728 1729 pcie->ep_state = EP_STATE_DISABLED; 1730 dev_dbg(pcie->dev, "Uninitialization of endpoint is completed\n"); ··· 1941 return IRQ_HANDLED; 1942 } 1943 1944 + static void tegra_pcie_ep_init(struct dw_pcie_ep *ep) 1945 + { 1946 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 1947 + enum pci_barno bar; 1948 + 1949 + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 1950 + dw_pcie_ep_reset_bar(pci, bar); 1951 + }; 1952 + 1953 static int tegra_pcie_ep_raise_intx_irq(struct tegra_pcie_dw *pcie, u16 irq) 1954 { 1955 /* Tegra194 supports only INTA */ ··· 1955 1956 static int tegra_pcie_ep_raise_msi_irq(struct tegra_pcie_dw *pcie, u16 irq) 1957 { 1958 + if (unlikely(irq > 32)) 1959 return -EINVAL; 1960 1961 + appl_writel(pcie, BIT(irq - 1), APPL_MSI_CTRL_1); 1962 1963 return 0; 1964 } ··· 1998 1999 static const struct pci_epc_features tegra_pcie_epc_features = { 2000 .linkup_notifier = true, 2001 + .msi_capable = true, 2002 .bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M, 2003 .only_64bit = true, }, 2004 .bar[BAR_1] = { .type = BAR_RESERVED, }, ··· 2017 } 2018 2019 static const struct dw_pcie_ep_ops pcie_ep_ops = { 2020 + .init = tegra_pcie_ep_init, 2021 .raise_irq = tegra_pcie_ep_raise_irq, 2022 .get_features = tegra_pcie_ep_get_features, 2023 };
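Both tegra_bpmp_transfer() fixes above share one shape: the call can return 0 for a successful transport exchange while the firmware still reports failure in msg.rx.ret, so both statuses need checking. A hedged sketch of a helper that could fold the two call sites together (the helper is hypothetical; the driver open-codes the check):

static int tegra_pcie_bpmp_transfer_checked(struct tegra_bpmp *bpmp,
					    struct tegra_bpmp_message *msg)
{
	int err;

	err = tegra_bpmp_transfer(bpmp, msg);
	if (err)		/* transport-level failure */
		return err;
	if (msg->rx.ret)	/* firmware rejected the request */
		return -EINVAL;

	return 0;
}

The raise_msi_irq change follows from MSI vector numbers being 1-based at this interface, hence the bound of 32 and BIT(irq - 1).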
+2 -6
drivers/pci/controller/pci-hyperv.c
··· 1680 /** 1681 * hv_msi_free() - Free the MSI. 1682 * @domain: The interrupt domain pointer 1683 - * @info: Extra MSI-related context 1684 * @irq: Identifies the IRQ. 1685 * 1686 * The Hyper-V parent partition and hypervisor are tracking the ··· 1687 * table up to date. This callback sends a message that frees 1688 * the IRT entry and related tracking nonsense. 1689 */ 1690 - static void hv_msi_free(struct irq_domain *domain, struct msi_domain_info *info, 1691 - unsigned int irq) 1692 { 1693 struct hv_pcibus_device *hbus; 1694 struct hv_pci_dev *hpdev; ··· 2179 2180 static void hv_pcie_domain_free(struct irq_domain *d, unsigned int virq, unsigned int nr_irqs) 2181 { 2182 - struct msi_domain_info *info = d->host_data; 2183 - 2184 for (int i = 0; i < nr_irqs; i++) 2185 - hv_msi_free(d, info, virq + i); 2186 2187 irq_domain_free_irqs_top(d, virq, nr_irqs); 2188 }
··· 1680 /** 1681 * hv_msi_free() - Free the MSI. 1682 * @domain: The interrupt domain pointer 1683 * @irq: Identifies the IRQ. 1684 * 1685 * The Hyper-V parent partition and hypervisor are tracking the ··· 1688 * table up to date. This callback sends a message that frees 1689 * the IRT entry and related tracking nonsense. 1690 */ 1691 + static void hv_msi_free(struct irq_domain *domain, unsigned int irq) 1692 { 1693 struct hv_pcibus_device *hbus; 1694 struct hv_pci_dev *hpdev; ··· 2181 2182 static void hv_pcie_domain_free(struct irq_domain *d, unsigned int virq, unsigned int nr_irqs) 2183 { 2184 for (int i = 0; i < nr_irqs; i++) 2185 + hv_msi_free(d, virq + i); 2186 2187 irq_domain_free_irqs_top(d, virq, nr_irqs); 2188 }
+14 -15
drivers/pci/controller/pci-tegra.c
··· 14 */ 15 16 #include <linux/clk.h> 17 #include <linux/debugfs.h> 18 #include <linux/delay.h> 19 #include <linux/export.h> ··· 271 DECLARE_BITMAP(used, INT_PCI_MSI_NR); 272 struct irq_domain *domain; 273 struct mutex map_lock; 274 - spinlock_t mask_lock; 275 void *virt; 276 dma_addr_t phys; 277 int irq; ··· 1345 unsigned int i; 1346 int err; 1347 1348 - port->phys = devm_kcalloc(dev, sizeof(phy), port->lanes, GFP_KERNEL); 1349 if (!port->phys) 1350 return -ENOMEM; 1351 ··· 1582 struct tegra_msi *msi = irq_data_get_irq_chip_data(d); 1583 struct tegra_pcie *pcie = msi_to_pcie(msi); 1584 unsigned int index = d->hwirq / 32; 1585 - unsigned long flags; 1586 u32 value; 1587 1588 - spin_lock_irqsave(&msi->mask_lock, flags); 1589 - value = afi_readl(pcie, AFI_MSI_EN_VEC(index)); 1590 - value &= ~BIT(d->hwirq % 32); 1591 - afi_writel(pcie, value, AFI_MSI_EN_VEC(index)); 1592 - spin_unlock_irqrestore(&msi->mask_lock, flags); 1593 } 1594 1595 static void tegra_msi_irq_unmask(struct irq_data *d) ··· 1596 struct tegra_msi *msi = irq_data_get_irq_chip_data(d); 1597 struct tegra_pcie *pcie = msi_to_pcie(msi); 1598 unsigned int index = d->hwirq / 32; 1599 - unsigned long flags; 1600 u32 value; 1601 1602 - spin_lock_irqsave(&msi->mask_lock, flags); 1603 - value = afi_readl(pcie, AFI_MSI_EN_VEC(index)); 1604 - value |= BIT(d->hwirq % 32); 1605 - afi_writel(pcie, value, AFI_MSI_EN_VEC(index)); 1606 - spin_unlock_irqrestore(&msi->mask_lock, flags); 1607 } 1608 1609 static void tegra_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) ··· 1710 int err; 1711 1712 mutex_init(&msi->map_lock); 1713 - spin_lock_init(&msi->mask_lock); 1714 1715 if (IS_ENABLED(CONFIG_PCI_MSI)) { 1716 err = tegra_allocate_domains(msi);
··· 14 */ 15 16 #include <linux/clk.h> 17 + #include <linux/cleanup.h> 18 #include <linux/debugfs.h> 19 #include <linux/delay.h> 20 #include <linux/export.h> ··· 270 DECLARE_BITMAP(used, INT_PCI_MSI_NR); 271 struct irq_domain *domain; 272 struct mutex map_lock; 273 + raw_spinlock_t mask_lock; 274 void *virt; 275 dma_addr_t phys; 276 int irq; ··· 1344 unsigned int i; 1345 int err; 1346 1347 + port->phys = devm_kcalloc(dev, port->lanes, sizeof(phy), GFP_KERNEL); 1348 if (!port->phys) 1349 return -ENOMEM; 1350 ··· 1581 struct tegra_msi *msi = irq_data_get_irq_chip_data(d); 1582 struct tegra_pcie *pcie = msi_to_pcie(msi); 1583 unsigned int index = d->hwirq / 32; 1584 u32 value; 1585 1586 + scoped_guard(raw_spinlock_irqsave, &msi->mask_lock) { 1587 + value = afi_readl(pcie, AFI_MSI_EN_VEC(index)); 1588 + value &= ~BIT(d->hwirq % 32); 1589 + afi_writel(pcie, value, AFI_MSI_EN_VEC(index)); 1590 + } 1591 } 1592 1593 static void tegra_msi_irq_unmask(struct irq_data *d) ··· 1596 struct tegra_msi *msi = irq_data_get_irq_chip_data(d); 1597 struct tegra_pcie *pcie = msi_to_pcie(msi); 1598 unsigned int index = d->hwirq / 32; 1599 u32 value; 1600 1601 + scoped_guard(raw_spinlock_irqsave, &msi->mask_lock) { 1602 + value = afi_readl(pcie, AFI_MSI_EN_VEC(index)); 1603 + value |= BIT(d->hwirq % 32); 1604 + afi_writel(pcie, value, AFI_MSI_EN_VEC(index)); 1605 + } 1606 } 1607 1608 static void tegra_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) ··· 1711 int err; 1712 1713 mutex_init(&msi->map_lock); 1714 + raw_spin_lock_init(&msi->mask_lock); 1715 1716 if (IS_ENABLED(CONFIG_PCI_MSI)) { 1717 err = tegra_allocate_domains(msi);
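The mask/unmask conversion replaces an open-coded spin_lock_irqsave()/spin_unlock_irqrestore() pair with scoped_guard() from <linux/cleanup.h> and promotes the lock to a raw spinlock, since irq_chip callbacks can run in hard-IRQ context where a sleeping lock is not allowed on PREEMPT_RT. The same conversion is applied to pcie-rcar-host.c below. A minimal sketch of the pattern, with an illustrative shadow variable standing in for the hardware access:

#include <linux/bits.h>
#include <linux/cleanup.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(mask_lock);
static u32 msi_en_shadow;	/* stand-in for the MSI enable register */

static void msi_mask_bit(unsigned int hwirq)
{
	/*
	 * The lock is taken on entry to the braced scope and released
	 * automatically when control leaves it, so no early return can
	 * skip the unlock.
	 */
	scoped_guard(raw_spinlock_irqsave, &mask_lock) {
		msi_en_shadow &= ~BIT(hwirq % 32);
	}
}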
+1 -1
drivers/pci/controller/pci-xgene-msi.c
··· 311 msi_val = xgene_msi_int_read(xgene_msi, i); 312 if (msi_val) { 313 dev_err(&pdev->dev, "Failed to clear spurious IRQ\n"); 314 - return EINVAL; 315 } 316 317 irq = platform_get_irq(pdev, i);
··· 311 msi_val = xgene_msi_int_read(xgene_msi, i); 312 if (msi_val) { 313 dev_err(&pdev->dev, "Failed to clear spurious IRQ\n"); 314 + return -EINVAL; 315 } 316 317 irq = platform_get_irq(pdev, i);
+23
drivers/pci/controller/pcie-mediatek-gen3.c
··· 102 #define PCIE_MSI_SET_ADDR_HI_BASE 0xc80 103 #define PCIE_MSI_SET_ADDR_HI_OFFSET 0x04 104 105 #define PCIE_ICMD_PM_REG 0x198 106 #define PCIE_TURN_OFF_LINK BIT(4) 107 ··· 152 * struct mtk_gen3_pcie_pdata - differentiate between host generations 153 * @power_up: pcie power_up callback 154 * @phy_resets: phy reset lines SoC data. 155 * @flags: pcie device flags. 156 */ 157 struct mtk_gen3_pcie_pdata { ··· 161 const char *id[MAX_NUM_PHY_RESETS]; 162 int num_resets; 163 } phy_resets; 164 u32 flags; 165 }; 166 ··· 438 val &= ~PCIE_CONF_LINK2_LCR2_LINK_SPEED; 439 val |= FIELD_PREP(PCIE_CONF_LINK2_LCR2_LINK_SPEED, pcie->max_link_speed); 440 writel_relaxed(val, pcie->base + PCIE_CONF_LINK2_CTL_STS); 441 } 442 443 /* Set class code */ ··· 1340 }, 1341 }; 1342 1343 static const struct mtk_gen3_pcie_pdata mtk_pcie_soc_en7581 = { 1344 .power_up = mtk_pcie_en7581_power_up, 1345 .phy_resets = { ··· 1363 static const struct of_device_id mtk_pcie_of_match[] = { 1364 { .compatible = "airoha,en7581-pcie", .data = &mtk_pcie_soc_en7581 }, 1365 { .compatible = "mediatek,mt8192-pcie", .data = &mtk_pcie_soc_mt8192 }, 1366 {}, 1367 }; 1368 MODULE_DEVICE_TABLE(of, mtk_pcie_of_match);
··· 102 #define PCIE_MSI_SET_ADDR_HI_BASE 0xc80 103 #define PCIE_MSI_SET_ADDR_HI_OFFSET 0x04 104 105 + #define PCIE_RESOURCE_CTRL_REG 0xd2c 106 + #define PCIE_RSRC_SYS_CLK_RDY_TIME_MASK GENMASK(7, 0) 107 + 108 #define PCIE_ICMD_PM_REG 0x198 109 #define PCIE_TURN_OFF_LINK BIT(4) 110 ··· 149 * struct mtk_gen3_pcie_pdata - differentiate between host generations 150 * @power_up: pcie power_up callback 151 * @phy_resets: phy reset lines SoC data. 152 + * @sys_clk_rdy_time_us: System clock ready time override (microseconds) 153 * @flags: pcie device flags. 154 */ 155 struct mtk_gen3_pcie_pdata { ··· 157 const char *id[MAX_NUM_PHY_RESETS]; 158 int num_resets; 159 } phy_resets; 160 + u8 sys_clk_rdy_time_us; 161 u32 flags; 162 }; 163 ··· 433 val &= ~PCIE_CONF_LINK2_LCR2_LINK_SPEED; 434 val |= FIELD_PREP(PCIE_CONF_LINK2_LCR2_LINK_SPEED, pcie->max_link_speed); 435 writel_relaxed(val, pcie->base + PCIE_CONF_LINK2_CTL_STS); 436 + } 437 + 438 + /* If parameter is present, adjust SYS_CLK_RDY_TIME to avoid glitching */ 439 + if (pcie->soc->sys_clk_rdy_time_us) { 440 + val = readl_relaxed(pcie->base + PCIE_RESOURCE_CTRL_REG); 441 + FIELD_MODIFY(PCIE_RSRC_SYS_CLK_RDY_TIME_MASK, &val, 442 + pcie->soc->sys_clk_rdy_time_us); 443 + writel_relaxed(val, pcie->base + PCIE_RESOURCE_CTRL_REG); 444 } 445 446 /* Set class code */ ··· 1327 }, 1328 }; 1329 1330 + static const struct mtk_gen3_pcie_pdata mtk_pcie_soc_mt8196 = { 1331 + .power_up = mtk_pcie_power_up, 1332 + .phy_resets = { 1333 + .id[0] = "phy", 1334 + .num_resets = 1, 1335 + }, 1336 + .sys_clk_rdy_time_us = 10, 1337 + }; 1338 + 1339 static const struct mtk_gen3_pcie_pdata mtk_pcie_soc_en7581 = { 1340 .power_up = mtk_pcie_en7581_power_up, 1341 .phy_resets = { ··· 1341 static const struct of_device_id mtk_pcie_of_match[] = { 1342 { .compatible = "airoha,en7581-pcie", .data = &mtk_pcie_soc_en7581 }, 1343 { .compatible = "mediatek,mt8192-pcie", .data = &mtk_pcie_soc_mt8192 }, 1344 + { .compatible = "mediatek,mt8196-pcie", .data = &mtk_pcie_soc_mt8196 }, 1345 {}, 1346 }; 1347 MODULE_DEVICE_TABLE(of, mtk_pcie_of_match);
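FIELD_MODIFY() is the in-place counterpart of FIELD_PREP(): it clears the masked field inside an existing register value and writes the new field contents, which is exactly the read-modify-write the MT8196 clock-ready-time override needs. A hedged sketch of the equivalent open-coded form (the mask name is illustrative):

#include <linux/bitfield.h>
#include <linux/bits.h>

#define CLK_RDY_TIME_MASK	GENMASK(7, 0)

static u32 set_clk_rdy_time(u32 regval, u8 us)
{
	/* Same effect as FIELD_MODIFY(CLK_RDY_TIME_MASK, &regval, us) */
	regval &= ~CLK_RDY_TIME_MASK;
	regval |= FIELD_PREP(CLK_RDY_TIME_MASK, us);
	return regval;
}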
-2
drivers/pci/controller/pcie-rcar-ep.c
··· 436 } 437 438 static const struct pci_epc_features rcar_pcie_epc_features = { 439 - .linkup_notifier = false, 440 .msi_capable = true, 441 - .msix_capable = false, 442 /* use 64-bit BARs so mark BAR[1,3,5] as reserved */ 443 .bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = 128, 444 .only_64bit = true, },
··· 436 } 437 438 static const struct pci_epc_features rcar_pcie_epc_features = { 439 .msi_capable = true, 440 /* use 64-bit BARs so mark BAR[1,3,5] as reserved */ 441 .bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = 128, 442 .only_64bit = true, },
+16 -26
drivers/pci/controller/pcie-rcar-host.c
··· 12 */ 13 14 #include <linux/bitops.h> 15 #include <linux/clk.h> 16 #include <linux/clk-provider.h> 17 #include <linux/delay.h> ··· 39 DECLARE_BITMAP(used, INT_PCI_MSI_NR); 40 struct irq_domain *domain; 41 struct mutex map_lock; 42 - spinlock_t mask_lock; 43 int irq1; 44 int irq2; 45 }; ··· 53 int (*phy_init_fn)(struct rcar_pcie_host *host); 54 }; 55 56 - static DEFINE_SPINLOCK(pmsr_lock); 57 - 58 static int rcar_pcie_wakeup(struct device *pcie_dev, void __iomem *pcie_base) 59 { 60 - unsigned long flags; 61 u32 pmsr, val; 62 int ret = 0; 63 64 - spin_lock_irqsave(&pmsr_lock, flags); 65 - 66 - if (!pcie_base || pm_runtime_suspended(pcie_dev)) { 67 - ret = -EINVAL; 68 - goto unlock_exit; 69 - } 70 71 pmsr = readl(pcie_base + PMSR); 72 ··· 81 writel(L1FAEG | PMEL1RX, pcie_base + PMSR); 82 } 83 84 - unlock_exit: 85 - spin_unlock_irqrestore(&pmsr_lock, flags); 86 return ret; 87 } 88 ··· 576 unsigned int index = find_first_bit(&reg, 32); 577 int ret; 578 579 - ret = generic_handle_domain_irq(msi->domain->parent, index); 580 if (ret) { 581 /* Unknown MSI, just clear it */ 582 dev_dbg(dev, "unexpected MSI\n"); ··· 603 { 604 struct rcar_msi *msi = irq_data_get_irq_chip_data(d); 605 struct rcar_pcie *pcie = &msi_to_host(msi)->pcie; 606 - unsigned long flags; 607 u32 value; 608 609 - spin_lock_irqsave(&msi->mask_lock, flags); 610 - value = rcar_pci_read_reg(pcie, PCIEMSIIER); 611 - value &= ~BIT(d->hwirq); 612 - rcar_pci_write_reg(pcie, value, PCIEMSIIER); 613 - spin_unlock_irqrestore(&msi->mask_lock, flags); 614 } 615 616 static void rcar_msi_irq_unmask(struct irq_data *d) 617 { 618 struct rcar_msi *msi = irq_data_get_irq_chip_data(d); 619 struct rcar_pcie *pcie = &msi_to_host(msi)->pcie; 620 - unsigned long flags; 621 u32 value; 622 623 - spin_lock_irqsave(&msi->mask_lock, flags); 624 - value = rcar_pci_read_reg(pcie, PCIEMSIIER); 625 - value |= BIT(d->hwirq); 626 - rcar_pci_write_reg(pcie, value, PCIEMSIIER); 627 - spin_unlock_irqrestore(&msi->mask_lock, flags); 628 } 629 630 static void rcar_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) ··· 735 int err; 736 737 mutex_init(&msi->map_lock); 738 - spin_lock_init(&msi->mask_lock); 739 740 err = of_address_to_resource(dev->of_node, 0, &res); 741 if (err)
··· 12 */ 13 14 #include <linux/bitops.h> 15 + #include <linux/cleanup.h> 16 #include <linux/clk.h> 17 #include <linux/clk-provider.h> 18 #include <linux/delay.h> ··· 38 DECLARE_BITMAP(used, INT_PCI_MSI_NR); 39 struct irq_domain *domain; 40 struct mutex map_lock; 41 + raw_spinlock_t mask_lock; 42 int irq1; 43 int irq2; 44 }; ··· 52 int (*phy_init_fn)(struct rcar_pcie_host *host); 53 }; 54 55 static int rcar_pcie_wakeup(struct device *pcie_dev, void __iomem *pcie_base) 56 { 57 u32 pmsr, val; 58 int ret = 0; 59 60 + if (!pcie_base || pm_runtime_suspended(pcie_dev)) 61 + return -EINVAL; 62 63 pmsr = readl(pcie_base + PMSR); 64 ··· 87 writel(L1FAEG | PMEL1RX, pcie_base + PMSR); 88 } 89 90 return ret; 91 } 92 ··· 584 unsigned int index = find_first_bit(&reg, 32); 585 int ret; 586 587 + ret = generic_handle_domain_irq(msi->domain, index); 588 if (ret) { 589 /* Unknown MSI, just clear it */ 590 dev_dbg(dev, "unexpected MSI\n"); ··· 611 { 612 struct rcar_msi *msi = irq_data_get_irq_chip_data(d); 613 struct rcar_pcie *pcie = &msi_to_host(msi)->pcie; 614 u32 value; 615 616 + scoped_guard(raw_spinlock_irqsave, &msi->mask_lock) { 617 + value = rcar_pci_read_reg(pcie, PCIEMSIIER); 618 + value &= ~BIT(d->hwirq); 619 + rcar_pci_write_reg(pcie, value, PCIEMSIIER); 620 + } 621 } 622 623 static void rcar_msi_irq_unmask(struct irq_data *d) 624 { 625 struct rcar_msi *msi = irq_data_get_irq_chip_data(d); 626 struct rcar_pcie *pcie = &msi_to_host(msi)->pcie; 627 u32 value; 628 629 + scoped_guard(raw_spinlock_irqsave, &msi->mask_lock) { 630 + value = rcar_pci_read_reg(pcie, PCIEMSIIER); 631 + value |= BIT(d->hwirq); 632 + rcar_pci_write_reg(pcie, value, PCIEMSIIER); 633 + } 634 } 635 636 static void rcar_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) ··· 745 int err; 746 747 mutex_init(&msi->map_lock); 748 + raw_spin_lock_init(&msi->mask_lock); 749 750 err = of_address_to_resource(dev->of_node, 0, &res); 751 if (err)
-1
drivers/pci/controller/pcie-rockchip-ep.c
··· 694 static const struct pci_epc_features rockchip_pcie_epc_features = { 695 .linkup_notifier = true, 696 .msi_capable = true, 697 - .msix_capable = false, 698 .intx_capable = true, 699 .align = ROCKCHIP_PCIE_AT_SIZE_ALIGN, 700 };
··· 694 static const struct pci_epc_features rockchip_pcie_epc_features = { 695 .linkup_notifier = true, 696 .msi_capable = true, 697 .intx_capable = true, 698 .align = ROCKCHIP_PCIE_AT_SIZE_ALIGN, 699 };
+4 -3
drivers/pci/controller/pcie-xilinx-nwl.c
··· 718 nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, E_ECAM_CONTROL) | 719 E_ECAM_CR_ENABLE, E_ECAM_CONTROL); 720 721 - nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, E_ECAM_CONTROL) | 722 - (NWL_ECAM_MAX_SIZE << E_ECAM_SIZE_SHIFT), 723 - E_ECAM_CONTROL); 724 725 nwl_bridge_writel(pcie, lower_32_bits(pcie->phys_ecam_base), 726 E_ECAM_BASE_LO);
··· 718 nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, E_ECAM_CONTROL) | 719 E_ECAM_CR_ENABLE, E_ECAM_CONTROL); 720 721 + ecam_val = nwl_bridge_readl(pcie, E_ECAM_CONTROL); 722 + ecam_val &= ~E_ECAM_SIZE_LOC; 723 + ecam_val |= NWL_ECAM_MAX_SIZE << E_ECAM_SIZE_SHIFT; 724 + nwl_bridge_writel(pcie, ecam_val, E_ECAM_CONTROL); 725 726 nwl_bridge_writel(pcie, lower_32_bits(pcie->phys_ecam_base), 727 E_ECAM_BASE_LO);
+1 -2
drivers/pci/controller/plda/pcie-plda-host.c
··· 599 600 bridge = devm_pci_alloc_host_bridge(dev, 0); 601 if (!bridge) 602 - return dev_err_probe(dev, -ENOMEM, 603 - "failed to alloc bridge\n"); 604 605 if (port->host_ops && port->host_ops->host_init) { 606 ret = port->host_ops->host_init(port);
··· 599 600 bridge = devm_pci_alloc_host_bridge(dev, 0); 601 if (!bridge) 602 + return -ENOMEM; 603 604 if (port->host_ops && port->host_ops->host_init) { 605 ret = port->host_ops->host_init(port);
+30 -8
drivers/pci/endpoint/functions/pci-epf-test.c
··· 301 if (!epf_test->dma_supported) 302 return; 303 304 - dma_release_channel(epf_test->dma_chan_tx); 305 - if (epf_test->dma_chan_tx == epf_test->dma_chan_rx) { 306 epf_test->dma_chan_tx = NULL; 307 - epf_test->dma_chan_rx = NULL; 308 - return; 309 } 310 311 - dma_release_channel(epf_test->dma_chan_rx); 312 - epf_test->dma_chan_rx = NULL; 313 } 314 315 static void pci_epf_test_print_rate(struct pci_epf_test *epf_test, ··· 777 u32 status = le32_to_cpu(reg->status); 778 struct pci_epf *epf = epf_test->epf; 779 struct pci_epc *epc = epf->epc; 780 781 if (bar < BAR_0) 782 goto set_status_err; 783 784 pci_epf_test_doorbell_cleanup(epf_test); 785 - pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, &epf_test->db_bar); 786 787 status |= STATUS_DOORBELL_DISABLE_SUCCESS; 788 reg->status = cpu_to_le32(status); ··· 1067 if (bar == test_reg_bar) 1068 continue; 1069 1070 - base = pci_epf_alloc_space(epf, bar_size[bar], bar, 1071 epc_features, PRIMARY_INTERFACE); 1072 if (!base) 1073 dev_err(dev, "Failed to allocate space for BAR%d\n",
··· 301 if (!epf_test->dma_supported) 302 return; 303 304 + if (epf_test->dma_chan_tx) { 305 + dma_release_channel(epf_test->dma_chan_tx); 306 + if (epf_test->dma_chan_tx == epf_test->dma_chan_rx) { 307 + epf_test->dma_chan_tx = NULL; 308 + epf_test->dma_chan_rx = NULL; 309 + return; 310 + } 311 epf_test->dma_chan_tx = NULL; 312 } 313 314 + if (epf_test->dma_chan_rx) { 315 + dma_release_channel(epf_test->dma_chan_rx); 316 + epf_test->dma_chan_rx = NULL; 317 + } 318 } 319 320 static void pci_epf_test_print_rate(struct pci_epf_test *epf_test, ··· 772 u32 status = le32_to_cpu(reg->status); 773 struct pci_epf *epf = epf_test->epf; 774 struct pci_epc *epc = epf->epc; 775 + int ret; 776 777 if (bar < BAR_0) 778 goto set_status_err; 779 780 pci_epf_test_doorbell_cleanup(epf_test); 781 + 782 + /* 783 + * The doorbell feature temporarily overrides the inbound translation 784 + * to point to the address stored in epf_test->db_bar.phys_addr, i.e., 785 + * it calls set_bar() twice without ever calling clear_bar(), as 786 + * calling clear_bar() would clear the BAR's PCI address assigned by 787 + * the host. Thus, when disabling the doorbell, restore the inbound 788 + * translation to point to the memory allocated for the BAR. 789 + */ 790 + ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, &epf->bar[bar]); 791 + if (ret) 792 + goto set_status_err; 793 794 status |= STATUS_DOORBELL_DISABLE_SUCCESS; 795 reg->status = cpu_to_le32(status); ··· 1050 if (bar == test_reg_bar) 1051 continue; 1052 1053 + if (epc_features->bar[bar].type == BAR_FIXED) 1054 + test_reg_size = epc_features->bar[bar].fixed_size; 1055 + else 1056 + test_reg_size = bar_size[bar]; 1057 + 1058 + base = pci_epf_alloc_space(epf, test_reg_size, bar, 1059 epc_features, PRIMARY_INTERFACE); 1060 if (!base) 1061 dev_err(dev, "Failed to allocate space for BAR%d\n",
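Two independent fixes in pci-epf-test above. The DMA teardown now tolerates partially-initialized state by testing each channel pointer before releasing it, and the BAR allocation loop no longer asks pci_epf_alloc_space() for an arbitrary size when the controller declares a BAR as BAR_FIXED. The size-selection rule reduces to this sketch (the helper name is hypothetical; the driver inlines the logic):

static size_t epf_bar_alloc_size(const struct pci_epc_features *feat,
				 enum pci_barno bar, size_t requested)
{
	/* A BAR_FIXED BAR can only ever be its hardware-fixed size */
	if (feat->bar[bar].type == BAR_FIXED)
		return feat->bar[bar].fixed_size;

	return requested;
}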
+1 -1
drivers/pci/endpoint/pci-ep-msi.c
··· 24 struct pci_epf *epf; 25 26 epc = pci_epc_get(dev_name(msi_desc_to_dev(desc))); 27 - if (!epc) 28 return; 29 30 epf = list_first_entry_or_null(&epc->pci_epf, struct pci_epf, list);
··· 24 struct pci_epf *epf; 25 26 epc = pci_epc_get(dev_name(msi_desc_to_dev(desc))); 27 + if (IS_ERR(epc)) 28 return; 29 30 epf = list_first_entry_or_null(&epc->pci_epf, struct pci_epf, list);
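pci_epc_get() returns an ERR_PTR() on failure, never NULL, so the old '!epc' test let error pointers escape to be dereferenced later. The general rule for ERR_PTR-returning lookups, as a sketch (the wrapper function is illustrative):

#include <linux/err.h>
#include <linux/pci-epc.h>

static int epc_lookup_example(const char *name, struct pci_epc **out)
{
	struct pci_epc *epc = pci_epc_get(name);

	/* Failure is encoded in the pointer; '!epc' cannot catch it */
	if (IS_ERR(epc))
		return PTR_ERR(epc);

	*out = epc;
	return 0;
}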
+4 -4
drivers/pci/hotplug/cpqphp_pci.c
··· 1302 1303 dbg("found io_node(base, length) = %x, %x\n", 1304 io_node->base, io_node->length); 1305 - dbg("populated slot =%d \n", populated_slot); 1306 if (!populated_slot) { 1307 io_node->next = ctrl->io_head; 1308 ctrl->io_head = io_node; ··· 1325 1326 dbg("found mem_node(base, length) = %x, %x\n", 1327 mem_node->base, mem_node->length); 1328 - dbg("populated slot =%d \n", populated_slot); 1329 if (!populated_slot) { 1330 mem_node->next = ctrl->mem_head; 1331 ctrl->mem_head = mem_node; ··· 1349 p_mem_node->length = pre_mem_length << 16; 1350 dbg("found p_mem_node(base, length) = %x, %x\n", 1351 p_mem_node->base, p_mem_node->length); 1352 - dbg("populated slot =%d \n", populated_slot); 1353 1354 if (!populated_slot) { 1355 p_mem_node->next = ctrl->p_mem_head; ··· 1373 bus_node->length = max_bus - secondary_bus + 1; 1374 dbg("found bus_node(base, length) = %x, %x\n", 1375 bus_node->base, bus_node->length); 1376 - dbg("populated slot =%d \n", populated_slot); 1377 if (!populated_slot) { 1378 bus_node->next = ctrl->bus_head; 1379 ctrl->bus_head = bus_node;
··· 1302 1303 dbg("found io_node(base, length) = %x, %x\n", 1304 io_node->base, io_node->length); 1305 + dbg("populated slot = %d\n", populated_slot); 1306 if (!populated_slot) { 1307 io_node->next = ctrl->io_head; 1308 ctrl->io_head = io_node; ··· 1325 1326 dbg("found mem_node(base, length) = %x, %x\n", 1327 mem_node->base, mem_node->length); 1328 + dbg("populated slot = %d\n", populated_slot); 1329 if (!populated_slot) { 1330 mem_node->next = ctrl->mem_head; 1331 ctrl->mem_head = mem_node; ··· 1349 p_mem_node->length = pre_mem_length << 16; 1350 dbg("found p_mem_node(base, length) = %x, %x\n", 1351 p_mem_node->base, p_mem_node->length); 1352 + dbg("populated slot = %d\n", populated_slot); 1353 1354 if (!populated_slot) { 1355 p_mem_node->next = ctrl->p_mem_head; ··· 1373 bus_node->length = max_bus - secondary_bus + 1; 1374 dbg("found bus_node(base, length) = %x, %x\n", 1375 bus_node->base, bus_node->length); 1376 + dbg("populated slot = %d\n", populated_slot); 1377 if (!populated_slot) { 1378 bus_node->next = ctrl->bus_head; 1379 ctrl->bus_head = bus_node;
+3 -3
drivers/pci/hotplug/ibmphp_hpc.c
··· 124 unsigned long ultemp; 125 unsigned long data; // actual data HILO format 126 127 - debug_polling("%s - Entry WPGBbar[%p] index[%x] \n", __func__, WPGBbar, index); 128 129 //-------------------------------------------------------------------- 130 // READ - step 1 ··· 147 ultemp = ultemp << 8; 148 data |= ultemp; 149 } else { 150 - err("this controller type is not supported \n"); 151 return HPC_ERROR; 152 } 153 ··· 258 ultemp = ultemp << 8; 259 data |= ultemp; 260 } else { 261 - err("this controller type is not supported \n"); 262 return HPC_ERROR; 263 } 264
··· 124 unsigned long ultemp; 125 unsigned long data; // actual data HILO format 126 127 + debug_polling("%s - Entry WPGBbar[%p] index[%x]\n", __func__, WPGBbar, index); 128 129 //-------------------------------------------------------------------- 130 // READ - step 1 ··· 147 ultemp = ultemp << 8; 148 data |= ultemp; 149 } else { 150 + err("this controller type is not supported\n"); 151 return HPC_ERROR; 152 } 153 ··· 258 ultemp = ultemp << 8; 259 data |= ultemp; 260 } else { 261 + err("this controller type is not supported\n"); 262 return HPC_ERROR; 263 } 264
+5
drivers/pci/iov.c
··· 629 if (dev->no_vf_scan) 630 return 0; 631 632 for (i = 0; i < num_vfs; i++) { 633 rc = pci_iov_add_virtfn(dev, i); 634 if (rc) 635 goto failed; 636 } 637 return 0; 638 failed: 639 while (i--) 640 pci_iov_remove_virtfn(dev, i); 641 642 return rc; 643 } ··· 765 struct pci_sriov *iov = dev->sriov; 766 int i; 767 768 for (i = 0; i < iov->num_VFs; i++) 769 pci_iov_remove_virtfn(dev, i); 770 } 771 772 static void sriov_disable(struct pci_dev *dev)
··· 629 if (dev->no_vf_scan) 630 return 0; 631 632 + pci_lock_rescan_remove(); 633 for (i = 0; i < num_vfs; i++) { 634 rc = pci_iov_add_virtfn(dev, i); 635 if (rc) 636 goto failed; 637 } 638 + pci_unlock_rescan_remove(); 639 return 0; 640 failed: 641 while (i--) 642 pci_iov_remove_virtfn(dev, i); 643 + pci_unlock_rescan_remove(); 644 645 return rc; 646 } ··· 762 struct pci_sriov *iov = dev->sriov; 763 int i; 764 765 + pci_lock_rescan_remove(); 766 for (i = 0; i < iov->num_VFs; i++) 767 pci_iov_remove_virtfn(dev, i); 768 + pci_unlock_rescan_remove(); 769 } 770 771 static void sriov_disable(struct pci_dev *dev)
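pci_iov_add_virtfn() and pci_iov_remove_virtfn() create and delete pci_dev instances, so they now run under the global rescan/remove lock like any other hotplug path; note that the lock stays held across the unwind loop in the error path, so a concurrent rescan cannot observe a half-built set of VFs. The pairing, condensed from the hunk above:

pci_lock_rescan_remove();
for (i = 0; i < num_vfs; i++) {
	rc = pci_iov_add_virtfn(dev, i);
	if (rc) {
		while (i--)		/* unwind under the same lock */
			pci_iov_remove_virtfn(dev, i);
		break;
	}
}
pci_unlock_rescan_remove();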
+15 -7
drivers/pci/of_property.c
··· 279 mapp++; 280 *mapp = out_irq[i].np->phandle; 281 mapp++; 282 - if (addr_sz[i]) { 283 - ret = of_property_read_u32_array(out_irq[i].np, 284 - "reg", mapp, 285 - addr_sz[i]); 286 - if (ret) 287 - goto failed; 288 - } 289 mapp += addr_sz[i]; 290 memcpy(mapp, out_irq[i].args, 291 out_irq[i].args_count * sizeof(u32));
··· 279 mapp++; 280 *mapp = out_irq[i].np->phandle; 281 mapp++; 282 + 283 + /* 284 + * A device address does not affect the device <-> 285 + * interrupt-controller HW connection for all 286 + * modern interrupt controllers; moreover, the 287 + * kernel (i.e., of_irq_parse_raw()) ignores the 288 + * values in the parent unit address cells while 289 + * parsing the interrupt-map property because they 290 + * are irrelevant for interrupt mapping in modern 291 + * systems. 292 + * 293 + * Leave the parent unit address initialized to 0 -- 294 + * just take into account the #address-cells size 295 + * to build the property properly. 296 + */ 297 mapp += addr_sz[i]; 298 memcpy(mapp, out_irq[i].args, 299 out_irq[i].args_count * sizeof(u32));
+2 -3
drivers/pci/p2pdma.c
··· 360 pages_free: 361 devm_memunmap_pages(&pdev->dev, pgmap); 362 pgmap_free: 363 - devm_kfree(&pdev->dev, pgmap); 364 return error; 365 } 366 EXPORT_SYMBOL_GPL(pci_p2pdma_add_resource); ··· 738 * pci_has_p2pmem - check if a given PCI device has published any p2pmem 739 * @pdev: PCI device to check 740 */ 741 - bool pci_has_p2pmem(struct pci_dev *pdev) 742 { 743 struct pci_p2pdma *p2pdma; 744 bool res; ··· 750 751 return res; 752 } 753 - EXPORT_SYMBOL_GPL(pci_has_p2pmem); 754 755 /** 756 * pci_p2pmem_find_many - find a peer-to-peer DMA memory device compatible with
··· 360 pages_free: 361 devm_memunmap_pages(&pdev->dev, pgmap); 362 pgmap_free: 363 + devm_kfree(&pdev->dev, p2p_pgmap); 364 return error; 365 } 366 EXPORT_SYMBOL_GPL(pci_p2pdma_add_resource); ··· 738 * pci_has_p2pmem - check if a given PCI device has published any p2pmem 739 * @pdev: PCI device to check 740 */ 741 + static bool pci_has_p2pmem(struct pci_dev *pdev) 742 { 743 struct pci_p2pdma *p2pdma; 744 bool res; ··· 750 751 return res; 752 } 753 754 /** 755 * pci_p2pmem_find_many - find a peer-to-peer DMA memory device compatible with
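The devm_kfree() fix matters because devm_kfree() must be handed exactly the pointer the devm allocation returned. Here pgmap points at the dev_pagemap member inside the devm-allocated pci_p2pdma_pagemap wrapper, not at the allocation itself, so freeing it is wrong even if that member happens to sit at offset zero. Sketch of the hazard with an abbreviated, illustrative layout:

struct wrapper {
	u64 cookie;			/* pgmap need not be the first member */
	struct dev_pagemap pgmap;
};

struct wrapper *w = devm_kzalloc(dev, sizeof(*w), GFP_KERNEL);
struct dev_pagemap *pgmap = &w->pgmap;

devm_kfree(dev, pgmap);	/* WRONG: a member pointer, not the allocation */
devm_kfree(dev, w);	/* correct */

The pci_has_p2pmem() change is unrelated cleanup: the helper lost its last external user, so it becomes static and the export goes away.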
+4 -2
drivers/pci/pci-acpi.c
··· 122 123 bool pci_acpi_preserve_config(struct pci_host_bridge *host_bridge) 124 { 125 if (ACPI_HANDLE(&host_bridge->dev)) { 126 union acpi_object *obj; 127 ··· 137 1, DSM_PCI_PRESERVE_BOOT_CONFIG, 138 NULL, ACPI_TYPE_INTEGER); 139 if (obj && obj->integer.value == 0) 140 - return true; 141 ACPI_FREE(obj); 142 } 143 144 - return false; 145 } 146 147 /* _HPX PCI Setting Record (Type 0); same as _HPP */
··· 122 123 bool pci_acpi_preserve_config(struct pci_host_bridge *host_bridge) 124 { 125 + bool ret = false; 126 + 127 if (ACPI_HANDLE(&host_bridge->dev)) { 128 union acpi_object *obj; 129 ··· 135 1, DSM_PCI_PRESERVE_BOOT_CONFIG, 136 NULL, ACPI_TYPE_INTEGER); 137 if (obj && obj->integer.value == 0) 138 + ret = true; 139 ACPI_FREE(obj); 140 } 141 142 + return ret; 143 } 144 145 /* _HPX PCI Setting Record (Type 0); same as _HPP */
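The leak was on the early-return path: when the _DSM evaluated to 0, the function returned true before reaching ACPI_FREE(). Routing every path through a single exit keeps the free unconditional; ACPI_FREE() on a NULL object is a no-op, so the lookup-failed case is covered too. Condensed sketch (handle, guid, rev and func stand in for the real arguments):

bool ret = false;
union acpi_object *obj;

obj = acpi_evaluate_dsm_typed(handle, &guid, rev, func,
			      NULL, ACPI_TYPE_INTEGER);
if (obj && obj->integer.value == 0)
	ret = true;
ACPI_FREE(obj);			/* now runs on every path */

return ret;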
+2 -1
drivers/pci/pci-driver.c
··· 1582 return 0; 1583 } 1584 1585 - #if defined(CONFIG_PCIEAER) || defined(CONFIG_EEH) 1586 /** 1587 * pci_uevent_ers - emit a uevent during recovery path of PCI device 1588 * @pdev: PCI device undergoing error recovery ··· 1596 switch (err_type) { 1597 case PCI_ERS_RESULT_NONE: 1598 case PCI_ERS_RESULT_CAN_RECOVER: 1599 envp[idx++] = "ERROR_EVENT=BEGIN_RECOVERY"; 1600 envp[idx++] = "DEVICE_ONLINE=0"; 1601 break;
··· 1582 return 0; 1583 } 1584 1585 + #if defined(CONFIG_PCIEAER) || defined(CONFIG_EEH) || defined(CONFIG_S390) 1586 /** 1587 * pci_uevent_ers - emit a uevent during recovery path of PCI device 1588 * @pdev: PCI device undergoing error recovery ··· 1596 switch (err_type) { 1597 case PCI_ERS_RESULT_NONE: 1598 case PCI_ERS_RESULT_CAN_RECOVER: 1599 + case PCI_ERS_RESULT_NEED_RESET: 1600 envp[idx++] = "ERROR_EVENT=BEGIN_RECOVERY"; 1601 envp[idx++] = "DEVICE_ONLINE=0"; 1602 break;
+60 -8
drivers/pci/pci-sysfs.c
··· 30 #include <linux/msi.h> 31 #include <linux/of.h> 32 #include <linux/aperture.h> 33 #include "pci.h" 34 35 #ifndef ARCH_PCI_DEV_GROUPS ··· 178 179 for (i = 0; i < max; i++) { 180 struct resource *res = &pci_dev->resource[i]; 181 pci_resource_to_user(pci_dev, i, res, &start, &end); 182 len += sysfs_emit_at(buf, len, "0x%016llx 0x%016llx 0x%016llx\n", 183 (unsigned long long)start, ··· 209 struct device_attribute *attr, char *buf) 210 { 211 struct pci_dev *pdev = to_pci_dev(dev); 212 213 - return sysfs_emit(buf, "%u\n", pcie_get_width_cap(pdev)); 214 } 215 static DEVICE_ATTR_RO(max_link_width); 216 ··· 228 int err; 229 enum pci_bus_speed speed; 230 231 err = pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &linkstat); 232 if (err) 233 return -EINVAL; 234 ··· 248 u16 linkstat; 249 int err; 250 251 err = pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &linkstat); 252 if (err) 253 return -EINVAL; 254 ··· 267 u8 sec_bus; 268 int err; 269 270 err = pci_read_config_byte(pci_dev, PCI_SECONDARY_BUS, &sec_bus); 271 if (err) 272 return -EINVAL; 273 ··· 286 u8 sub_bus; 287 int err; 288 289 err = pci_read_config_byte(pci_dev, PCI_SUBORDINATE_BUS, &sub_bus); 290 if (err) 291 return -EINVAL; 292 ··· 719 IORESOURCE_ROM_SHADOW)); 720 } 721 static DEVICE_ATTR_RO(boot_vga); 722 723 static ssize_t pci_read_config(struct file *filp, struct kobject *kobj, 724 const struct bin_attribute *bin_attr, char *buf, ··· 1597 const char *buf, size_t count) 1598 { 1599 struct pci_dev *pdev = to_pci_dev(dev); 1600 - unsigned long size, flags; 1601 int ret, i; 1602 u16 cmd; 1603 1604 if (kstrtoul(buf, 0, &size) < 0) 1605 return -EINVAL; 1606 1607 device_lock(dev); ··· 1629 pci_write_config_word(pdev, PCI_COMMAND, 1630 cmd & ~PCI_COMMAND_MEMORY); 1631 1632 - flags = pci_resource_flags(pdev, n); 1633 - 1634 pci_remove_resource_files(pdev); 1635 1636 - for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) { 1637 - if (pci_resource_len(pdev, i) && 1638 - pci_resource_flags(pdev, i) == flags) 1639 pci_release_resource(pdev, i); 1640 } 1641 1642 ret = pci_resize_resource(pdev, n, size); 1643 1644 - pci_assign_unassigned_bus_resources(pdev->bus); 1645 1646 if (pci_create_resource_files(pdev)) 1647 pci_warn(pdev, "Failed to recreate resource files after BAR resizing\n"); ··· 1746 1747 static struct attribute *pci_dev_dev_attrs[] = { 1748 &dev_attr_boot_vga.attr, 1749 NULL, 1750 }; 1751 ··· 1757 struct pci_dev *pdev = to_pci_dev(dev); 1758 1759 if (a == &dev_attr_boot_vga.attr && pci_is_vga(pdev)) 1760 return a->mode; 1761 1762 return 0;
··· 30 #include <linux/msi.h> 31 #include <linux/of.h> 32 #include <linux/aperture.h> 33 + #include <linux/unaligned.h> 34 #include "pci.h" 35 36 #ifndef ARCH_PCI_DEV_GROUPS ··· 177 178 for (i = 0; i < max; i++) { 179 struct resource *res = &pci_dev->resource[i]; 180 + struct resource zerores = {}; 181 + 182 + /* For backwards compatibility */ 183 + if (i >= PCI_BRIDGE_RESOURCES && i <= PCI_BRIDGE_RESOURCE_END && 184 + res->flags & (IORESOURCE_UNSET | IORESOURCE_DISABLED)) 185 + res = &zerores; 186 + 187 pci_resource_to_user(pci_dev, i, res, &start, &end); 188 len += sysfs_emit_at(buf, len, "0x%016llx 0x%016llx 0x%016llx\n", 189 (unsigned long long)start, ··· 201 struct device_attribute *attr, char *buf) 202 { 203 struct pci_dev *pdev = to_pci_dev(dev); 204 + ssize_t ret; 205 206 + /* We read PCI_EXP_LNKCAP, so we need the device to be accessible. */ 207 + pci_config_pm_runtime_get(pdev); 208 + ret = sysfs_emit(buf, "%u\n", pcie_get_width_cap(pdev)); 209 + pci_config_pm_runtime_put(pdev); 210 + 211 + return ret; 212 } 213 static DEVICE_ATTR_RO(max_link_width); 214 ··· 214 int err; 215 enum pci_bus_speed speed; 216 217 + pci_config_pm_runtime_get(pci_dev); 218 err = pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &linkstat); 219 + pci_config_pm_runtime_put(pci_dev); 220 + 221 if (err) 222 return -EINVAL; 223 ··· 231 u16 linkstat; 232 int err; 233 234 + pci_config_pm_runtime_get(pci_dev); 235 err = pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &linkstat); 236 + pci_config_pm_runtime_put(pci_dev); 237 + 238 if (err) 239 return -EINVAL; 240 ··· 247 u8 sec_bus; 248 int err; 249 250 + pci_config_pm_runtime_get(pci_dev); 251 err = pci_read_config_byte(pci_dev, PCI_SECONDARY_BUS, &sec_bus); 252 + pci_config_pm_runtime_put(pci_dev); 253 + 254 if (err) 255 return -EINVAL; 256 ··· 263 u8 sub_bus; 264 int err; 265 266 + pci_config_pm_runtime_get(pci_dev); 267 err = pci_read_config_byte(pci_dev, PCI_SUBORDINATE_BUS, &sub_bus); 268 + pci_config_pm_runtime_put(pci_dev); 269 + 270 if (err) 271 return -EINVAL; 272 ··· 693 IORESOURCE_ROM_SHADOW)); 694 } 695 static DEVICE_ATTR_RO(boot_vga); 696 + 697 + static ssize_t serial_number_show(struct device *dev, 698 + struct device_attribute *attr, char *buf) 699 + { 700 + struct pci_dev *pci_dev = to_pci_dev(dev); 701 + u64 dsn; 702 + u8 bytes[8]; 703 + 704 + dsn = pci_get_dsn(pci_dev); 705 + if (!dsn) 706 + return -EIO; 707 + 708 + put_unaligned_be64(dsn, bytes); 709 + return sysfs_emit(buf, "%8phD\n", bytes); 710 + } 711 + static DEVICE_ATTR_ADMIN_RO(serial_number); 712 713 static ssize_t pci_read_config(struct file *filp, struct kobject *kobj, 714 const struct bin_attribute *bin_attr, char *buf, ··· 1555 const char *buf, size_t count) 1556 { 1557 struct pci_dev *pdev = to_pci_dev(dev); 1558 + struct pci_bus *bus = pdev->bus; 1559 + struct resource *b_win, *res; 1560 + unsigned long size; 1561 int ret, i; 1562 u16 cmd; 1563 1564 if (kstrtoul(buf, 0, &size) < 0) 1565 + return -EINVAL; 1566 + 1567 + b_win = pbus_select_window(bus, pci_resource_n(pdev, n)); 1568 + if (!b_win) 1569 return -EINVAL; 1570 1571 device_lock(dev); ··· 1581 pci_write_config_word(pdev, PCI_COMMAND, 1582 cmd & ~PCI_COMMAND_MEMORY); 1583 1584 pci_remove_resource_files(pdev); 1585 1586 + pci_dev_for_each_resource(pdev, res, i) { 1587 + if (i >= PCI_BRIDGE_RESOURCES) 1588 + break; 1589 + 1590 + if (b_win == pbus_select_window(bus, res)) 1591 pci_release_resource(pdev, i); 1592 } 1593 1594 ret = pci_resize_resource(pdev, n, size); 1595 1596 + pci_assign_unassigned_bus_resources(bus); 
1597 1598 if (pci_create_resource_files(pdev)) 1599 pci_warn(pdev, "Failed to recreate resource files after BAR resizing\n"); ··· 1698 1699 static struct attribute *pci_dev_dev_attrs[] = { 1700 &dev_attr_boot_vga.attr, 1701 + &dev_attr_serial_number.attr, 1702 NULL, 1703 }; 1704 ··· 1708 struct pci_dev *pdev = to_pci_dev(dev); 1709 1710 if (a == &dev_attr_boot_vga.attr && pci_is_vga(pdev)) 1711 + return a->mode; 1712 + 1713 + if (a == &dev_attr_serial_number.attr && pci_get_dsn(pdev)) 1714 return a->mode; 1715 1716 return 0;
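Two themes in the pci-sysfs changes. Attributes that touch config space (max_link_width, link speed and width, secondary/subordinate bus numbers) now bracket the access with pci_config_pm_runtime_get()/put(), because a config read on a runtime-suspended device may fault or return all-ones. And the new serial_number attribute is only made visible when pci_get_dsn() finds a Device Serial Number capability. The runtime-PM bracketing in isolation, as a sketch:

static ssize_t link_width_show(struct device *dev,
			       struct device_attribute *attr, char *buf)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	u16 linkstat;
	int err;

	pci_config_pm_runtime_get(pdev);	/* make config space reachable */
	err = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &linkstat);
	pci_config_pm_runtime_put(pdev);

	if (err)
		return -EINVAL;

	return sysfs_emit(buf, "%u\n",
			  FIELD_GET(PCI_EXP_LNKSTA_NLW, linkstat));
}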
+15 -66
drivers/pci/pci.c
··· 423 return 1; 424 } 425 426 - static u8 __pci_find_next_cap_ttl(struct pci_bus *bus, unsigned int devfn, 427 - u8 pos, int cap, int *ttl) 428 - { 429 - u8 id; 430 - u16 ent; 431 - 432 - pci_bus_read_config_byte(bus, devfn, pos, &pos); 433 - 434 - while ((*ttl)--) { 435 - if (pos < 0x40) 436 - break; 437 - pos &= ~3; 438 - pci_bus_read_config_word(bus, devfn, pos, &ent); 439 - 440 - id = ent & 0xff; 441 - if (id == 0xff) 442 - break; 443 - if (id == cap) 444 - return pos; 445 - pos = (ent >> 8); 446 - } 447 - return 0; 448 - } 449 - 450 static u8 __pci_find_next_cap(struct pci_bus *bus, unsigned int devfn, 451 u8 pos, int cap) 452 { 453 - int ttl = PCI_FIND_CAP_TTL; 454 - 455 - return __pci_find_next_cap_ttl(bus, devfn, pos, cap, &ttl); 456 } 457 458 u8 pci_find_next_capability(struct pci_dev *dev, u8 pos, int cap) ··· 527 */ 528 u16 pci_find_next_ext_capability(struct pci_dev *dev, u16 start, int cap) 529 { 530 - u32 header; 531 - int ttl; 532 - u16 pos = PCI_CFG_SPACE_SIZE; 533 - 534 - /* minimum 8 bytes per capability */ 535 - ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8; 536 - 537 if (dev->cfg_size <= PCI_CFG_SPACE_SIZE) 538 return 0; 539 540 - if (start) 541 - pos = start; 542 - 543 - if (pci_read_config_dword(dev, pos, &header) != PCIBIOS_SUCCESSFUL) 544 - return 0; 545 - 546 - /* 547 - * If we have no capabilities, this is indicated by cap ID, 548 - * cap version and next pointer all being 0. 549 - */ 550 - if (header == 0) 551 - return 0; 552 - 553 - while (ttl-- > 0) { 554 - if (PCI_EXT_CAP_ID(header) == cap && pos != start) 555 - return pos; 556 - 557 - pos = PCI_EXT_CAP_NEXT(header); 558 - if (pos < PCI_CFG_SPACE_SIZE) 559 - break; 560 - 561 - if (pci_read_config_dword(dev, pos, &header) != PCIBIOS_SUCCESSFUL) 562 - break; 563 - } 564 - 565 - return 0; 566 } 567 EXPORT_SYMBOL_GPL(pci_find_next_ext_capability); 568 ··· 591 592 static u8 __pci_find_next_ht_cap(struct pci_dev *dev, u8 pos, int ht_cap) 593 { 594 - int rc, ttl = PCI_FIND_CAP_TTL; 595 u8 cap, mask; 596 597 if (ht_cap == HT_CAPTYPE_SLAVE || ht_cap == HT_CAPTYPE_HOST) ··· 599 else 600 mask = HT_5BIT_CAP_MASK; 601 602 - pos = __pci_find_next_cap_ttl(dev->bus, dev->devfn, pos, 603 - PCI_CAP_ID_HT, &ttl); 604 while (pos) { 605 rc = pci_read_config_byte(dev, pos + 3, &cap); 606 if (rc != PCIBIOS_SUCCESSFUL) ··· 609 if ((cap & mask) == ht_cap) 610 return pos; 611 612 - pos = __pci_find_next_cap_ttl(dev->bus, dev->devfn, 613 - pos + PCI_CAP_LIST_NEXT, 614 - PCI_CAP_ID_HT, &ttl); 615 } 616 617 return 0; ··· 1315 else 1316 dev->current_state = state; 1317 1318 return -EIO; 1319 } 1320
··· 423 return 1; 424 } 425 426 static u8 __pci_find_next_cap(struct pci_bus *bus, unsigned int devfn, 427 u8 pos, int cap) 428 { 429 + return PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, cap, bus, devfn); 430 } 431 432 u8 pci_find_next_capability(struct pci_dev *dev, u8 pos, int cap) ··· 553 */ 554 u16 pci_find_next_ext_capability(struct pci_dev *dev, u16 start, int cap) 555 { 556 if (dev->cfg_size <= PCI_CFG_SPACE_SIZE) 557 return 0; 558 559 + return PCI_FIND_NEXT_EXT_CAP(pci_bus_read_config, start, cap, 560 + dev->bus, dev->devfn); 561 } 562 EXPORT_SYMBOL_GPL(pci_find_next_ext_capability); 563 ··· 648 649 static u8 __pci_find_next_ht_cap(struct pci_dev *dev, u8 pos, int ht_cap) 650 { 651 + int rc; 652 u8 cap, mask; 653 654 if (ht_cap == HT_CAPTYPE_SLAVE || ht_cap == HT_CAPTYPE_HOST) ··· 656 else 657 mask = HT_5BIT_CAP_MASK; 658 659 + pos = PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, 660 + PCI_CAP_ID_HT, dev->bus, dev->devfn); 661 while (pos) { 662 rc = pci_read_config_byte(dev, pos + 3, &cap); 663 if (rc != PCIBIOS_SUCCESSFUL) ··· 666 if ((cap & mask) == ht_cap) 667 return pos; 668 669 + pos = PCI_FIND_NEXT_CAP(pci_bus_read_config, 670 + pos + PCI_CAP_LIST_NEXT, 671 + PCI_CAP_ID_HT, dev->bus, 672 + dev->devfn); 673 } 674 675 return 0; ··· 1371 else 1372 dev->current_state = state; 1373 1374 + return -EIO; 1375 + } 1376 + 1377 + if (pci_dev_is_disconnected(dev)) { 1378 + dev->current_state = PCI_D3cold; 1379 return -EIO; 1380 } 1381
+95 -1
drivers/pci/pci.h
··· 2 #ifndef DRIVERS_PCI_H 3 #define DRIVERS_PCI_H 4 5 #include <linux/pci.h> 6 7 struct pcie_tlp_log; 8 9 /* Number of possible devfns: 0.0 to 1f.7 inclusive */ 10 #define MAX_NR_DEVFNS 256 11 12 #define MAX_NR_LANES 16 13 ··· 84 #define PCIE_MSG_CODE_DEASSERT_INTC 0x26 85 #define PCIE_MSG_CODE_DEASSERT_INTD 0x27 86 87 extern const unsigned char pcie_link_speed[]; 88 extern bool pci_early_dump; 89 90 bool pcie_cap_has_lnkctl(const struct pci_dev *dev); 91 bool pcie_cap_has_lnkctl2(const struct pci_dev *dev); 92 bool pcie_cap_has_rtctl(const struct pci_dev *dev); 93 94 /* Functions internal to the PCI core code */ 95 ··· 422 void pci_put_host_bridge_device(struct device *dev); 423 424 unsigned int pci_rescan_bus_bridge_resize(struct pci_dev *bridge); 425 - int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type); 426 int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align); 427 428 int pci_configure_extended_tags(struct pci_dev *dev, void *ign); ··· 473 return resno; 474 } 475 476 void pci_reassigndev_resource_alignment(struct pci_dev *dev); 477 void pci_disable_bridge_window(struct pci_dev *dev); 478 struct pci_bus *pci_bus_get(struct pci_bus *bus);
··· 2 #ifndef DRIVERS_PCI_H 3 #define DRIVERS_PCI_H 4 5 + #include <linux/align.h> 6 + #include <linux/bitfield.h> 7 #include <linux/pci.h> 8 9 struct pcie_tlp_log; 10 11 /* Number of possible devfns: 0.0 to 1f.7 inclusive */ 12 #define MAX_NR_DEVFNS 256 13 + #define PCI_MAX_NR_DEVS 32 14 15 #define MAX_NR_LANES 16 16 ··· 81 #define PCIE_MSG_CODE_DEASSERT_INTC 0x26 82 #define PCIE_MSG_CODE_DEASSERT_INTD 0x27 83 84 + #define PCI_BUS_BRIDGE_IO_WINDOW 0 85 + #define PCI_BUS_BRIDGE_MEM_WINDOW 1 86 + #define PCI_BUS_BRIDGE_PREF_MEM_WINDOW 2 87 + 88 extern const unsigned char pcie_link_speed[]; 89 extern bool pci_early_dump; 90 + 91 + extern struct mutex pci_rescan_remove_lock; 92 93 bool pcie_cap_has_lnkctl(const struct pci_dev *dev); 94 bool pcie_cap_has_lnkctl2(const struct pci_dev *dev); 95 bool pcie_cap_has_rtctl(const struct pci_dev *dev); 96 + 97 + /* Standard Capability finder */ 98 + /** 99 + * PCI_FIND_NEXT_CAP - Find a PCI standard capability 100 + * @read_cfg: Function pointer for reading PCI config space 101 + * @start: Starting position to begin search 102 + * @cap: Capability ID to find 103 + * @args: Arguments to pass to read_cfg function 104 + * 105 + * Search the capability list in PCI config space to find @cap. 106 + * Implements TTL (time-to-live) protection against infinite loops. 107 + * 108 + * Return: Position of the capability if found, 0 otherwise. 109 + */ 110 + #define PCI_FIND_NEXT_CAP(read_cfg, start, cap, args...) \ 111 + ({ \ 112 + int __ttl = PCI_FIND_CAP_TTL; \ 113 + u8 __id, __found_pos = 0; \ 114 + u8 __pos = (start); \ 115 + u16 __ent; \ 116 + \ 117 + read_cfg##_byte(args, __pos, &__pos); \ 118 + \ 119 + while (__ttl--) { \ 120 + if (__pos < PCI_STD_HEADER_SIZEOF) \ 121 + break; \ 122 + \ 123 + __pos = ALIGN_DOWN(__pos, 4); \ 124 + read_cfg##_word(args, __pos, &__ent); \ 125 + \ 126 + __id = FIELD_GET(PCI_CAP_ID_MASK, __ent); \ 127 + if (__id == 0xff) \ 128 + break; \ 129 + \ 130 + if (__id == (cap)) { \ 131 + __found_pos = __pos; \ 132 + break; \ 133 + } \ 134 + \ 135 + __pos = FIELD_GET(PCI_CAP_LIST_NEXT_MASK, __ent); \ 136 + } \ 137 + __found_pos; \ 138 + }) 139 + 140 + /* Extended Capability finder */ 141 + /** 142 + * PCI_FIND_NEXT_EXT_CAP - Find a PCI extended capability 143 + * @read_cfg: Function pointer for reading PCI config space 144 + * @start: Starting position to begin search (0 for initial search) 145 + * @cap: Extended capability ID to find 146 + * @args: Arguments to pass to read_cfg function 147 + * 148 + * Search the extended capability list in PCI config space to find @cap. 149 + * Implements TTL protection against infinite loops using a calculated 150 + * maximum search count. 151 + * 152 + * Return: Position of the capability if found, 0 otherwise. 153 + */ 154 + #define PCI_FIND_NEXT_EXT_CAP(read_cfg, start, cap, args...) 
\ 155 + ({ \ 156 + u16 __pos = (start) ?: PCI_CFG_SPACE_SIZE; \ 157 + u16 __found_pos = 0; \ 158 + int __ttl, __ret; \ 159 + u32 __header; \ 160 + \ 161 + __ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8; \ 162 + while (__ttl-- > 0 && __pos >= PCI_CFG_SPACE_SIZE) { \ 163 + __ret = read_cfg##_dword(args, __pos, &__header); \ 164 + if (__ret != PCIBIOS_SUCCESSFUL) \ 165 + break; \ 166 + \ 167 + if (__header == 0) \ 168 + break; \ 169 + \ 170 + if (PCI_EXT_CAP_ID(__header) == (cap) && __pos != start) {\ 171 + __found_pos = __pos; \ 172 + break; \ 173 + } \ 174 + \ 175 + __pos = PCI_EXT_CAP_NEXT(__header); \ 176 + } \ 177 + __found_pos; \ 178 + }) 179 180 /* Functions internal to the PCI core code */ 181 ··· 330 void pci_put_host_bridge_device(struct device *dev); 331 332 unsigned int pci_rescan_bus_bridge_resize(struct pci_dev *bridge); 333 + int pbus_reassign_bridge_resources(struct pci_bus *bus, struct resource *res); 334 int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align); 335 336 int pci_configure_extended_tags(struct pci_dev *dev, void *ign); ··· 381 return resno; 382 } 383 384 + struct resource *pbus_select_window(struct pci_bus *bus, 385 + const struct resource *res); 386 void pci_reassigndev_resource_alignment(struct pci_dev *dev); 387 void pci_disable_bridge_window(struct pci_dev *dev); 388 struct pci_bus *pci_bus_get(struct pci_bus *bus);
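These capability-walk macros take the config accessor as a function-name prefix: the macro token-pastes _byte/_word/_dword onto read_cfg and passes args through first, so the same TTL-guarded list walk serves struct pci_dev lookups and native controller drivers alike. A hedged usage sketch for a driver with memory-mapped config space; struct my_pcie and the my_cfg_read_* wrappers are hypothetical, only their shape matters:

struct my_pcie {
	void __iomem *cfg_base;
};

static int my_cfg_read_byte(struct my_pcie *p, int where, u8 *val)
{
	*val = readb(p->cfg_base + where);
	return PCIBIOS_SUCCESSFUL;
}

static int my_cfg_read_word(struct my_pcie *p, int where, u16 *val)
{
	*val = readw(p->cfg_base + where);
	return PCIBIOS_SUCCESSFUL;
}

static u8 my_find_capability(struct my_pcie *p, u8 cap)
{
	/*
	 * 'start' is the offset holding the next-capability pointer;
	 * the macro dereferences it first, then walks the list with a
	 * TTL guard against malformed, looping capability chains.
	 */
	return PCI_FIND_NEXT_CAP(my_cfg_read, PCI_CAPABILITY_LIST, cap, p);
}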
+40 -9
drivers/pci/pcie/aer.c
··· 43 #define AER_ERROR_SOURCES_MAX 128 44 45 #define AER_MAX_TYPEOF_COR_ERRS 16 /* as per PCI_ERR_COR_STATUS */ 46 - #define AER_MAX_TYPEOF_UNCOR_ERRS 27 /* as per PCI_ERR_UNCOR_STATUS*/ 47 48 struct aer_err_source { 49 u32 status; /* PCI_ERR_ROOT_STATUS */ ··· 96 }; 97 98 #define AER_LOG_TLP_MASKS (PCI_ERR_UNC_POISON_TLP| \ 99 PCI_ERR_UNC_ECRC| \ 100 PCI_ERR_UNC_UNSUP| \ 101 PCI_ERR_UNC_COMP_ABORT| \ 102 PCI_ERR_UNC_UNX_COMP| \ 103 - PCI_ERR_UNC_MALF_TLP) 104 105 #define SYSTEM_ERROR_INTR_ON_MESG_MASK (PCI_EXP_RTCTL_SECEE| \ 106 PCI_EXP_RTCTL_SENFEE| \ ··· 393 return; 394 395 dev->aer_info = kzalloc(sizeof(*dev->aer_info), GFP_KERNEL); 396 397 ratelimit_state_init(&dev->aer_info->correctable_ratelimit, 398 DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST); ··· 539 "AtomicOpBlocked", /* Bit Position 24 */ 540 "TLPBlockedErr", /* Bit Position 25 */ 541 "PoisonTLPBlocked", /* Bit Position 26 */ 542 - NULL, /* Bit Position 27 */ 543 - NULL, /* Bit Position 28 */ 544 - NULL, /* Bit Position 29 */ 545 - NULL, /* Bit Position 30 */ 546 - NULL, /* Bit Position 31 */ 547 }; 548 549 static const char *aer_agent_string[] = { ··· 800 801 static int aer_ratelimit(struct pci_dev *dev, unsigned int severity) 802 { 803 switch (severity) { 804 case AER_NONFATAL: 805 return __ratelimit(&dev->aer_info->nonfatal_ratelimit); ··· 811 default: 812 return 1; /* Don't ratelimit fatal errors */ 813 } 814 } 815 816 static void __aer_print_error(struct pci_dev *dev, struct aer_err_info *info) ··· 941 status = aer->uncor_status; 942 mask = aer->uncor_mask; 943 info.level = KERN_ERR; 944 - tlp_header_valid = status & AER_LOG_TLP_MASKS; 945 } 946 947 info.status = status; ··· 1432 pci_read_config_dword(dev, aer + PCI_ERR_CAP, &aercc); 1433 info->first_error = PCI_ERR_CAP_FEP(aercc); 1434 1435 - if (info->status & AER_LOG_TLP_MASKS) { 1436 info->tlp_header_valid = 1; 1437 pcie_read_tlp_log(dev, aer + PCI_ERR_HEADER_LOG, 1438 aer + PCI_ERR_PREFIX_LOG,
··· 43 #define AER_ERROR_SOURCES_MAX 128 44 45 #define AER_MAX_TYPEOF_COR_ERRS 16 /* as per PCI_ERR_COR_STATUS */ 46 + #define AER_MAX_TYPEOF_UNCOR_ERRS 32 /* as per PCI_ERR_UNCOR_STATUS*/ 47 48 struct aer_err_source { 49 u32 status; /* PCI_ERR_ROOT_STATUS */ ··· 96 }; 97 98 #define AER_LOG_TLP_MASKS (PCI_ERR_UNC_POISON_TLP| \ 99 + PCI_ERR_UNC_POISON_BLK | \ 100 PCI_ERR_UNC_ECRC| \ 101 PCI_ERR_UNC_UNSUP| \ 102 PCI_ERR_UNC_COMP_ABORT| \ 103 PCI_ERR_UNC_UNX_COMP| \ 104 + PCI_ERR_UNC_ACSV | \ 105 + PCI_ERR_UNC_MCBTLP | \ 106 + PCI_ERR_UNC_ATOMEG | \ 107 + PCI_ERR_UNC_DMWR_BLK | \ 108 + PCI_ERR_UNC_XLAT_BLK | \ 109 + PCI_ERR_UNC_TLPPRE | \ 110 + PCI_ERR_UNC_MALF_TLP | \ 111 + PCI_ERR_UNC_IDE_CHECK | \ 112 + PCI_ERR_UNC_MISR_IDE | \ 113 + PCI_ERR_UNC_PCRC_CHECK) 114 115 #define SYSTEM_ERROR_INTR_ON_MESG_MASK (PCI_EXP_RTCTL_SECEE| \ 116 PCI_EXP_RTCTL_SENFEE| \ ··· 383 return; 384 385 dev->aer_info = kzalloc(sizeof(*dev->aer_info), GFP_KERNEL); 386 + if (!dev->aer_info) { 387 + dev->aer_cap = 0; 388 + return; 389 + } 390 391 ratelimit_state_init(&dev->aer_info->correctable_ratelimit, 392 DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST); ··· 525 "AtomicOpBlocked", /* Bit Position 24 */ 526 "TLPBlockedErr", /* Bit Position 25 */ 527 "PoisonTLPBlocked", /* Bit Position 26 */ 528 + "DMWrReqBlocked", /* Bit Position 27 */ 529 + "IDECheck", /* Bit Position 28 */ 530 + "MisIDETLP", /* Bit Position 29 */ 531 + "PCRC_CHECK", /* Bit Position 30 */ 532 + "TLPXlatBlocked", /* Bit Position 31 */ 533 }; 534 535 static const char *aer_agent_string[] = { ··· 786 787 static int aer_ratelimit(struct pci_dev *dev, unsigned int severity) 788 { 789 + if (!dev->aer_info) 790 + return 1; 791 + 792 switch (severity) { 793 case AER_NONFATAL: 794 return __ratelimit(&dev->aer_info->nonfatal_ratelimit); ··· 794 default: 795 return 1; /* Don't ratelimit fatal errors */ 796 } 797 + } 798 + 799 + static bool tlp_header_logged(u32 status, u32 capctl) 800 + { 801 + /* Errors for which a header is always logged (PCIe r7.0 sec 6.2.7) */ 802 + if (status & AER_LOG_TLP_MASKS) 803 + return true; 804 + 805 + /* Completion Timeout header is only logged on capable devices */ 806 + if (status & PCI_ERR_UNC_COMP_TIME && 807 + capctl & PCI_ERR_CAP_COMP_TIME_LOG) 808 + return true; 809 + 810 + return false; 811 } 812 813 static void __aer_print_error(struct pci_dev *dev, struct aer_err_info *info) ··· 910 status = aer->uncor_status; 911 mask = aer->uncor_mask; 912 info.level = KERN_ERR; 913 + tlp_header_valid = tlp_header_logged(status, aer->cap_control); 914 } 915 916 info.status = status; ··· 1401 pci_read_config_dword(dev, aer + PCI_ERR_CAP, &aercc); 1402 info->first_error = PCI_ERR_CAP_FEP(aercc); 1403 1404 + if (tlp_header_logged(info->status, aercc)) { 1405 info->tlp_header_valid = 1; 1406 pcie_read_tlp_log(dev, aer + PCI_ERR_HEADER_LOG, 1407 aer + PCI_ERR_PREFIX_LOG,
drivers/pci/pcie/aspm.c (+43 -2)
··· 15 #include <linux/math.h> 16 #include <linux/module.h> 17 #include <linux/moduleparam.h> 18 #include <linux/pci.h> 19 #include <linux/pci_regs.h> 20 #include <linux/errno.h> ··· 236 u32 aspm_support:7; /* Supported ASPM state */ 237 u32 aspm_enabled:7; /* Enabled ASPM state */ 238 u32 aspm_capable:7; /* Capable ASPM state with latency */ 239 - u32 aspm_default:7; /* Default ASPM state by BIOS */ 240 u32 aspm_disable:7; /* Disabled ASPM state */ 241 242 /* Clock PM state */ 243 u32 clkpm_capable:1; /* Clock PM capable? */ 244 u32 clkpm_enabled:1; /* Current Clock PM state */ 245 - u32 clkpm_default:1; /* Default Clock PM state by BIOS */ 246 u32 clkpm_disable:1; /* Clock PM disabled */ 247 }; 248 ··· 376 pcie_set_clkpm_nocheck(link, enable); 377 } 378 379 static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist) 380 { 381 int capable = 1, enabled = 1; ··· 410 } 411 link->clkpm_enabled = enabled; 412 link->clkpm_default = enabled; 413 link->clkpm_capable = capable; 414 link->clkpm_disable = blacklist ? 1 : 0; 415 } ··· 804 aspm_calc_l12_info(link, parent_l1ss_cap, child_l1ss_cap); 805 } 806 807 static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist) 808 { 809 struct pci_dev *child = link->downstream, *parent = link->pdev; ··· 906 907 /* Save default state */ 908 link->aspm_default = link->aspm_enabled; 909 910 /* Setup initial capable state. Will be updated later */ 911 link->aspm_capable = link->aspm_support;
··· 15 #include <linux/math.h> 16 #include <linux/module.h> 17 #include <linux/moduleparam.h> 18 + #include <linux/of.h> 19 #include <linux/pci.h> 20 #include <linux/pci_regs.h> 21 #include <linux/errno.h> ··· 235 u32 aspm_support:7; /* Supported ASPM state */ 236 u32 aspm_enabled:7; /* Enabled ASPM state */ 237 u32 aspm_capable:7; /* Capable ASPM state with latency */ 238 + u32 aspm_default:7; /* Default ASPM state by BIOS or 239 + override */ 240 u32 aspm_disable:7; /* Disabled ASPM state */ 241 242 /* Clock PM state */ 243 u32 clkpm_capable:1; /* Clock PM capable? */ 244 u32 clkpm_enabled:1; /* Current Clock PM state */ 245 + u32 clkpm_default:1; /* Default Clock PM state by BIOS or 246 + override */ 247 u32 clkpm_disable:1; /* Clock PM disabled */ 248 }; 249 ··· 373 pcie_set_clkpm_nocheck(link, enable); 374 } 375 376 + static void pcie_clkpm_override_default_link_state(struct pcie_link_state *link, 377 + int enabled) 378 + { 379 + struct pci_dev *pdev = link->downstream; 380 + 381 + /* For devicetree platforms, enable ClockPM by default */ 382 + if (of_have_populated_dt() && !enabled) { 383 + link->clkpm_default = 1; 384 + pci_info(pdev, "ASPM: DT platform, enabling ClockPM\n"); 385 + } 386 + } 387 + 388 static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist) 389 { 390 int capable = 1, enabled = 1; ··· 395 } 396 link->clkpm_enabled = enabled; 397 link->clkpm_default = enabled; 398 + pcie_clkpm_override_default_link_state(link, enabled); 399 link->clkpm_capable = capable; 400 link->clkpm_disable = blacklist ? 1 : 0; 401 } ··· 788 aspm_calc_l12_info(link, parent_l1ss_cap, child_l1ss_cap); 789 } 790 791 + #define FLAG(x, y, d) (((x) & (PCIE_LINK_STATE_##y)) ? d : "") 792 + 793 + static void pcie_aspm_override_default_link_state(struct pcie_link_state *link) 794 + { 795 + struct pci_dev *pdev = link->downstream; 796 + u32 override; 797 + 798 + /* For devicetree platforms, enable all ASPM states by default */ 799 + if (of_have_populated_dt()) { 800 + link->aspm_default = PCIE_LINK_STATE_ASPM_ALL; 801 + override = link->aspm_default & ~link->aspm_enabled; 802 + if (override) 803 + pci_info(pdev, "ASPM: DT platform, enabling%s%s%s%s%s%s%s\n", 804 + FLAG(override, L0S_UP, " L0s-up"), 805 + FLAG(override, L0S_DW, " L0s-dw"), 806 + FLAG(override, L1, " L1"), 807 + FLAG(override, L1_1, " ASPM-L1.1"), 808 + FLAG(override, L1_2, " ASPM-L1.2"), 809 + FLAG(override, L1_1_PCIPM, " PCI-PM-L1.1"), 810 + FLAG(override, L1_2_PCIPM, " PCI-PM-L1.2")); 811 + } 812 + } 813 + 814 static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist) 815 { 816 struct pci_dev *child = link->downstream, *parent = link->pdev; ··· 867 868 /* Save default state */ 869 link->aspm_default = link->aspm_enabled; 870 + 871 + pcie_aspm_override_default_link_state(link); 872 873 /* Setup initial capable state. Will be updated later */ 874 link->aspm_capable = link->aspm_support;
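Note that the devicetree override adjusts aspm_default only and logs just
the delta against what firmware had left enabled; explicit disables
(aspm_disable) still win later. The bit math, condensed (a sketch against
aspm.c internals; the PCIE_LINK_STATE_* constants come from
include/linux/pci.h):

    /* Sketch: states the DT default newly enables (off in BIOS, on now) */
    static u32 dt_aspm_newly_enabled(struct pcie_link_state *link)
    {
            return PCIE_LINK_STATE_ASPM_ALL & ~link->aspm_enabled;
    }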
drivers/pci/pcie/err.c (+31 -9)
··· 108 return report_error_detected(dev, pci_channel_io_normal, data); 109 } 110 111 static int report_mmio_enabled(struct pci_dev *dev, void *data) 112 { 113 struct pci_driver *pdrv; ··· 153 154 device_lock(&dev->dev); 155 pdrv = dev->driver; 156 - if (!pdrv || !pdrv->err_handler || !pdrv->err_handler->slot_reset) 157 goto out; 158 159 err_handler = pdrv->err_handler; ··· 236 pci_walk_bridge(bridge, pci_pm_runtime_get_sync, NULL); 237 238 pci_dbg(bridge, "broadcast error_detected message\n"); 239 - if (state == pci_channel_io_frozen) { 240 pci_walk_bridge(bridge, report_frozen_detected, &status); 241 - if (reset_subordinates(bridge) != PCI_ERS_RESULT_RECOVERED) { 242 - pci_warn(bridge, "subordinate device reset failed\n"); 243 - goto failed; 244 - } 245 - } else { 246 pci_walk_bridge(bridge, report_normal_detected, &status); 247 - } 248 249 if (status == PCI_ERS_RESULT_CAN_RECOVER) { 250 status = PCI_ERS_RESULT_RECOVERED; 251 pci_dbg(bridge, "broadcast mmio_enabled message\n"); 252 pci_walk_bridge(bridge, report_mmio_enabled, &status); 253 } 254 255 if (status == PCI_ERS_RESULT_NEED_RESET) { ··· 291 failed: 292 pci_walk_bridge(bridge, pci_pm_runtime_put, NULL); 293 294 - pci_uevent_ers(bridge, PCI_ERS_RESULT_DISCONNECT); 295 296 pci_info(bridge, "device recovery failed\n"); 297
··· 108 return report_error_detected(dev, pci_channel_io_normal, data); 109 } 110 111 + static int report_perm_failure_detected(struct pci_dev *dev, void *data) 112 + { 113 + struct pci_driver *pdrv; 114 + const struct pci_error_handlers *err_handler; 115 + 116 + device_lock(&dev->dev); 117 + pdrv = dev->driver; 118 + if (!pdrv || !pdrv->err_handler || !pdrv->err_handler->error_detected) 119 + goto out; 120 + 121 + err_handler = pdrv->err_handler; 122 + err_handler->error_detected(dev, pci_channel_io_perm_failure); 123 + out: 124 + pci_uevent_ers(dev, PCI_ERS_RESULT_DISCONNECT); 125 + device_unlock(&dev->dev); 126 + return 0; 127 + } 128 + 129 static int report_mmio_enabled(struct pci_dev *dev, void *data) 130 { 131 struct pci_driver *pdrv; ··· 135 136 device_lock(&dev->dev); 137 pdrv = dev->driver; 138 + if (!pci_dev_set_io_state(dev, pci_channel_io_normal) || 139 + !pdrv || !pdrv->err_handler || !pdrv->err_handler->slot_reset) 140 goto out; 141 142 err_handler = pdrv->err_handler; ··· 217 pci_walk_bridge(bridge, pci_pm_runtime_get_sync, NULL); 218 219 pci_dbg(bridge, "broadcast error_detected message\n"); 220 + if (state == pci_channel_io_frozen) 221 pci_walk_bridge(bridge, report_frozen_detected, &status); 222 + else 223 pci_walk_bridge(bridge, report_normal_detected, &status); 224 225 if (status == PCI_ERS_RESULT_CAN_RECOVER) { 226 status = PCI_ERS_RESULT_RECOVERED; 227 pci_dbg(bridge, "broadcast mmio_enabled message\n"); 228 pci_walk_bridge(bridge, report_mmio_enabled, &status); 229 + } 230 + 231 + if (status == PCI_ERS_RESULT_NEED_RESET || 232 + state == pci_channel_io_frozen) { 233 + if (reset_subordinates(bridge) != PCI_ERS_RESULT_RECOVERED) { 234 + pci_warn(bridge, "subordinate device reset failed\n"); 235 + goto failed; 236 + } 237 } 238 239 if (status == PCI_ERS_RESULT_NEED_RESET) { ··· 269 failed: 270 pci_walk_bridge(bridge, pci_pm_runtime_put, NULL); 271 272 + pci_walk_bridge(bridge, report_perm_failure_detected, NULL); 273 274 pci_info(bridge, "device recovery failed\n"); 275
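From a driver's point of view, the reworked flow means error_detected()
can now request a reset even for non-fatal errors, and is called one
final time with pci_channel_io_perm_failure if recovery fails. A minimal
handler sketch (the foo_* names are hypothetical):

    static pci_ers_result_t foo_error_detected(struct pci_dev *pdev,
                                               pci_channel_state_t state)
    {
            if (state == pci_channel_io_perm_failure)
                    return PCI_ERS_RESULT_DISCONNECT;

            /* Quiesce; ask the core for a bus reset even on non-fatal */
            return PCI_ERS_RESULT_NEED_RESET;
    }

    static pci_ers_result_t foo_slot_reset(struct pci_dev *pdev)
    {
            /* error_state is back to pci_channel_io_normal here */
            pci_restore_state(pdev);
            return PCI_ERS_RESULT_RECOVERED;
    }

    static const struct pci_error_handlers foo_err_handler = {
            .error_detected = foo_error_detected,
            .slot_reset     = foo_slot_reset,
    };

Note the ordering change: reset_subordinates() now runs after the
error_detected() broadcast in the frozen case too, so a NEED_RESET answer
from a driver and a frozen link funnel through the same reset path.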
drivers/pci/probe.c (+63 -25)
··· 3 * PCI detection and setup code 4 */ 5 6 #include <linux/kernel.h> 7 #include <linux/delay.h> 8 #include <linux/init.h> ··· 420 limit |= ((unsigned long) io_limit_hi << 16); 421 } 422 423 if (base <= limit) { 424 - res->flags = (io_base_lo & PCI_IO_RANGE_TYPE_MASK) | IORESOURCE_IO; 425 region.start = base; 426 region.end = limit + io_granularity - 1; 427 pcibios_bus_to_resource(dev->bus, res, &region); 428 if (log) 429 pci_info(dev, " bridge window %pR\n", res); 430 } 431 } 432 ··· 445 pci_read_config_word(dev, PCI_MEMORY_LIMIT, &mem_limit_lo); 446 base = ((unsigned long) mem_base_lo & PCI_MEMORY_RANGE_MASK) << 16; 447 limit = ((unsigned long) mem_limit_lo & PCI_MEMORY_RANGE_MASK) << 16; 448 if (base <= limit) { 449 - res->flags = (mem_base_lo & PCI_MEMORY_RANGE_TYPE_MASK) | IORESOURCE_MEM; 450 region.start = base; 451 region.end = limit + 0xfffff; 452 pcibios_bus_to_resource(dev->bus, res, &region); 453 if (log) 454 pci_info(dev, " bridge window %pR\n", res); 455 } 456 } 457 ··· 499 return; 500 } 501 502 if (base <= limit) { 503 - res->flags = (mem_base_lo & PCI_PREF_RANGE_TYPE_MASK) | 504 - IORESOURCE_MEM | IORESOURCE_PREFETCH; 505 - if (res->flags & PCI_PREF_RANGE_TYPE_64) 506 - res->flags |= IORESOURCE_MEM_64; 507 region.start = base; 508 region.end = limit + 0xfffff; 509 pcibios_bus_to_resource(dev->bus, res, &region); 510 if (log) 511 pci_info(dev, " bridge window %pR\n", res); 512 } 513 } 514 ··· 538 } 539 if (io) { 540 bridge->io_window = 1; 541 - pci_read_bridge_io(bridge, &res, true); 542 } 543 544 - pci_read_bridge_mmio(bridge, &res, true); 545 546 /* 547 * DECchip 21050 pass 2 errata: the bridge may miss an address ··· 583 bridge->pref_64_window = 1; 584 } 585 586 - pci_read_bridge_mmio_pref(bridge, &res, true); 587 } 588 589 void pci_read_bridge_bases(struct pci_bus *child) ··· 606 for (i = 0; i < PCI_BRIDGE_RESOURCE_NUM; i++) 607 child->resource[i] = &dev->resource[PCI_BRIDGE_RESOURCES+i]; 608 609 - pci_read_bridge_io(child->self, child->resource[0], false); 610 - pci_read_bridge_mmio(child->self, child->resource[1], false); 611 - pci_read_bridge_mmio_pref(child->self, child->resource[2], false); 612 613 if (!dev->transparent) 614 return; ··· 1937 1938 static void early_dump_pci_device(struct pci_dev *pdev) 1939 { 1940 - u32 value[256 / 4]; 1941 int i; 1942 1943 pci_info(pdev, "config space:\n"); 1944 1945 - for (i = 0; i < 256; i += 4) 1946 - pci_read_config_dword(pdev, i, &value[i / 4]); 1947 1948 print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET, 16, 1, 1949 - value, 256, false); 1950 } 1951 1952 static const char *pci_type_str(struct pci_dev *dev) ··· 2010 dev->sysdata = dev->bus->sysdata; 2011 dev->dev.parent = dev->bus->bridge; 2012 dev->dev.bus = &pci_bus_type; 2013 - dev->hdr_type = hdr_type & 0x7f; 2014 - dev->multifunction = !!(hdr_type & 0x80); 2015 dev->error_state = pci_channel_io_normal; 2016 set_pcie_port_type(dev); 2017 ··· 2541 struct device_node *np; 2542 2543 np = of_pci_find_child_device(dev_of_node(&bus->dev), devfn); 2544 - if (!np || of_find_device_by_node(np)) 2545 return NULL; 2546 2547 /* 2548 * First check whether the pwrctrl device really needs to be created or ··· 2557 */ 2558 if (!of_pci_supply_present(np)) { 2559 pr_debug("PCI/pwrctrl: Skipping OF node: %s\n", np->name); 2560 - return NULL; 2561 } 2562 2563 /* Now create the pwrctrl device */ 2564 pdev = of_platform_device_create(np, NULL, &host->dev); 2565 if (!pdev) { 2566 pr_err("PCI/pwrctrl: Failed to create pwrctrl device for node: %s\n", np->name); 2567 - return NULL; 2568 } 2569 2570 
return pdev; 2571 } 2572 #else 2573 static struct platform_device *pci_pwrctrl_create_device(struct pci_bus *bus, int devfn) ··· 3083 { 3084 unsigned int used_buses, normal_bridges = 0, hotplug_bridges = 0; 3085 unsigned int start = bus->busn_res.start; 3086 - unsigned int devfn, cmax, max = start; 3087 struct pci_dev *dev; 3088 3089 dev_dbg(&bus->dev, "scanning bus\n"); 3090 3091 /* Go find them, Rover! */ 3092 - for (devfn = 0; devfn < 256; devfn += 8) 3093 - pci_scan_slot(bus, devfn); 3094 3095 /* Reserve buses for SR-IOV capability */ 3096 used_buses = pci_iov_bus_range(bus); ··· 3507 * pci_rescan_bus(), pci_rescan_bus_bridge_resize() and PCI device removal 3508 * routines should always be executed under this mutex. 3509 */ 3510 - static DEFINE_MUTEX(pci_rescan_remove_lock); 3511 3512 void pci_lock_rescan_remove(void) 3513 {
··· 3 * PCI detection and setup code 4 */ 5 6 + #include <linux/array_size.h> 7 #include <linux/kernel.h> 8 #include <linux/delay.h> 9 #include <linux/init.h> ··· 419 limit |= ((unsigned long) io_limit_hi << 16); 420 } 421 422 + res->flags = (io_base_lo & PCI_IO_RANGE_TYPE_MASK) | IORESOURCE_IO; 423 + 424 if (base <= limit) { 425 region.start = base; 426 region.end = limit + io_granularity - 1; 427 pcibios_bus_to_resource(dev->bus, res, &region); 428 if (log) 429 pci_info(dev, " bridge window %pR\n", res); 430 + } else { 431 + resource_set_range(res, 0, 0); 432 + res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED; 433 } 434 } 435 ··· 440 pci_read_config_word(dev, PCI_MEMORY_LIMIT, &mem_limit_lo); 441 base = ((unsigned long) mem_base_lo & PCI_MEMORY_RANGE_MASK) << 16; 442 limit = ((unsigned long) mem_limit_lo & PCI_MEMORY_RANGE_MASK) << 16; 443 + 444 + res->flags = (mem_base_lo & PCI_MEMORY_RANGE_TYPE_MASK) | IORESOURCE_MEM; 445 + 446 if (base <= limit) { 447 region.start = base; 448 region.end = limit + 0xfffff; 449 pcibios_bus_to_resource(dev->bus, res, &region); 450 if (log) 451 pci_info(dev, " bridge window %pR\n", res); 452 + } else { 453 + resource_set_range(res, 0, 0); 454 + res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED; 455 } 456 } 457 ··· 489 return; 490 } 491 492 + res->flags = (mem_base_lo & PCI_PREF_RANGE_TYPE_MASK) | IORESOURCE_MEM | 493 + IORESOURCE_PREFETCH; 494 + if (res->flags & PCI_PREF_RANGE_TYPE_64) 495 + res->flags |= IORESOURCE_MEM_64; 496 + 497 if (base <= limit) { 498 region.start = base; 499 region.end = limit + 0xfffff; 500 pcibios_bus_to_resource(dev->bus, res, &region); 501 if (log) 502 pci_info(dev, " bridge window %pR\n", res); 503 + } else { 504 + resource_set_range(res, 0, 0); 505 + res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED; 506 } 507 } 508 ··· 524 } 525 if (io) { 526 bridge->io_window = 1; 527 + pci_read_bridge_io(bridge, 528 + pci_resource_n(bridge, PCI_BRIDGE_IO_WINDOW), 529 + true); 530 } 531 532 + pci_read_bridge_mmio(bridge, 533 + pci_resource_n(bridge, PCI_BRIDGE_MEM_WINDOW), 534 + true); 535 536 /* 537 * DECchip 21050 pass 2 errata: the bridge may miss an address ··· 565 bridge->pref_64_window = 1; 566 } 567 568 + pci_read_bridge_mmio_pref(bridge, 569 + pci_resource_n(bridge, 570 + PCI_BRIDGE_PREF_MEM_WINDOW), 571 + true); 572 } 573 574 void pci_read_bridge_bases(struct pci_bus *child) ··· 585 for (i = 0; i < PCI_BRIDGE_RESOURCE_NUM; i++) 586 child->resource[i] = &dev->resource[PCI_BRIDGE_RESOURCES+i]; 587 588 + pci_read_bridge_io(child->self, 589 + child->resource[PCI_BUS_BRIDGE_IO_WINDOW], false); 590 + pci_read_bridge_mmio(child->self, 591 + child->resource[PCI_BUS_BRIDGE_MEM_WINDOW], false); 592 + pci_read_bridge_mmio_pref(child->self, 593 + child->resource[PCI_BUS_BRIDGE_PREF_MEM_WINDOW], 594 + false); 595 596 if (!dev->transparent) 597 return; ··· 1912 1913 static void early_dump_pci_device(struct pci_dev *pdev) 1914 { 1915 + u32 value[PCI_CFG_SPACE_SIZE / sizeof(u32)]; 1916 int i; 1917 1918 pci_info(pdev, "config space:\n"); 1919 1920 + for (i = 0; i < ARRAY_SIZE(value); i++) 1921 + pci_read_config_dword(pdev, i * sizeof(u32), &value[i]); 1922 1923 print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET, 16, 1, 1924 + value, ARRAY_SIZE(value) * sizeof(u32), false); 1925 } 1926 1927 static const char *pci_type_str(struct pci_dev *dev) ··· 1985 dev->sysdata = dev->bus->sysdata; 1986 dev->dev.parent = dev->bus->bridge; 1987 dev->dev.bus = &pci_bus_type; 1988 + dev->hdr_type = FIELD_GET(PCI_HEADER_TYPE_MASK, hdr_type); 1989 + 
dev->multifunction = FIELD_GET(PCI_HEADER_TYPE_MFD, hdr_type); 1990 dev->error_state = pci_channel_io_normal; 1991 set_pcie_port_type(dev); 1992 ··· 2516 struct device_node *np; 2517 2518 np = of_pci_find_child_device(dev_of_node(&bus->dev), devfn); 2519 + if (!np) 2520 return NULL; 2521 + 2522 + pdev = of_find_device_by_node(np); 2523 + if (pdev) { 2524 + put_device(&pdev->dev); 2525 + goto err_put_of_node; 2526 + } 2527 2528 /* 2529 * First check whether the pwrctrl device really needs to be created or ··· 2526 */ 2527 if (!of_pci_supply_present(np)) { 2528 pr_debug("PCI/pwrctrl: Skipping OF node: %s\n", np->name); 2529 + goto err_put_of_node; 2530 } 2531 2532 /* Now create the pwrctrl device */ 2533 pdev = of_platform_device_create(np, NULL, &host->dev); 2534 if (!pdev) { 2535 pr_err("PCI/pwrctrl: Failed to create pwrctrl device for node: %s\n", np->name); 2536 + goto err_put_of_node; 2537 } 2538 2539 + of_node_put(np); 2540 + 2541 return pdev; 2542 + 2543 + err_put_of_node: 2544 + of_node_put(np); 2545 + 2546 + return NULL; 2547 } 2548 #else 2549 static struct platform_device *pci_pwrctrl_create_device(struct pci_bus *bus, int devfn) ··· 3045 { 3046 unsigned int used_buses, normal_bridges = 0, hotplug_bridges = 0; 3047 unsigned int start = bus->busn_res.start; 3048 + unsigned int devnr, cmax, max = start; 3049 struct pci_dev *dev; 3050 3051 dev_dbg(&bus->dev, "scanning bus\n"); 3052 3053 /* Go find them, Rover! */ 3054 + for (devnr = 0; devnr < PCI_MAX_NR_DEVS; devnr++) 3055 + pci_scan_slot(bus, PCI_DEVFN(devnr, 0)); 3056 3057 /* Reserve buses for SR-IOV capability */ 3058 used_buses = pci_iov_bus_range(bus); ··· 3469 * pci_rescan_bus(), pci_rescan_bus_bridge_resize() and PCI device removal 3470 * routines should always be executed under this mutex. 3471 */ 3472 + DEFINE_MUTEX(pci_rescan_remove_lock); 3473 3474 void pci_lock_rescan_remove(void) 3475 {
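Two probe.c idioms worth calling out: config-space sizing now derives
from PCI_CFG_SPACE_SIZE and ARRAY_SIZE() instead of bare 256s, and Header
Type decoding uses FIELD_GET() with the named masks. The latter in
isolation (a sketch; PCI_HEADER_TYPE_MASK covers the low 7 bits,
PCI_HEADER_TYPE_MFD is bit 7):

    #include <linux/bitfield.h>

    /* Sketch: FIELD_GET() replaces the open-coded 0x7f/0x80 masks */
    static void decode_header_type(u8 hdr_type, u8 *type, bool *mfd)
    {
            *type = FIELD_GET(PCI_HEADER_TYPE_MASK, hdr_type);
            *mfd  = FIELD_GET(PCI_HEADER_TYPE_MFD, hdr_type);
    }

The pwrctrl hunk also fixes two leaks: of_find_device_by_node() takes a
device reference that is now dropped with put_device(), and the OF node
reference is released on every exit path via of_node_put().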
drivers/pci/pwrctrl/slot.c (+3 -9)
··· 49 ret = regulator_bulk_enable(slot->num_supplies, slot->supplies); 50 if (ret < 0) { 51 dev_err_probe(dev, ret, "Failed to enable slot regulators\n"); 52 - goto err_regulator_free; 53 } 54 55 ret = devm_add_action_or_reset(dev, devm_pci_pwrctrl_slot_power_off, 56 slot); 57 if (ret) 58 - goto err_regulator_disable; 59 60 clk = devm_clk_get_optional_enabled(dev, NULL); 61 if (IS_ERR(clk)) { ··· 71 return dev_err_probe(dev, ret, "Failed to register pwrctrl driver\n"); 72 73 return 0; 74 - 75 - err_regulator_disable: 76 - regulator_bulk_disable(slot->num_supplies, slot->supplies); 77 - err_regulator_free: 78 - regulator_bulk_free(slot->num_supplies, slot->supplies); 79 - 80 - return ret; 81 } 82 83 static const struct of_device_id pci_pwrctrl_slot_of_match[] = {
··· 49 ret = regulator_bulk_enable(slot->num_supplies, slot->supplies); 50 if (ret < 0) { 51 dev_err_probe(dev, ret, "Failed to enable slot regulators\n"); 52 + regulator_bulk_free(slot->num_supplies, slot->supplies); 53 + return ret; 54 } 55 56 ret = devm_add_action_or_reset(dev, devm_pci_pwrctrl_slot_power_off, 57 slot); 58 if (ret) 59 + return ret; 60 61 clk = devm_clk_get_optional_enabled(dev, NULL); 62 if (IS_ERR(clk)) { ··· 70 return dev_err_probe(dev, ret, "Failed to register pwrctrl driver\n"); 71 72 return 0; 73 } 74 75 static const struct of_device_id pci_pwrctrl_slot_of_match[] = {
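The rework relies on a devm guarantee: devm_add_action_or_reset() runs
the action itself when registration fails, so only the step before a
successful registration still needs manual unwinding. Reduced to the
pattern (undo_setup/ctx are hypothetical):

    static void undo_setup(void *ctx);      /* releases what setup acquired */

    static int example_probe_step(struct device *dev, void *ctx)
    {
            int ret = devm_add_action_or_reset(dev, undo_setup, ctx);

            if (ret)
                    return ret;     /* undo_setup(ctx) has already run */

            /* cleanup is now automatic on later failure or unbind */
            return 0;
    }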
drivers/pci/quirks.c (+1)
··· 2717 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8131_BRIDGE, quirk_disable_msi); 2718 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, 0xa238, quirk_disable_msi); 2719 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x5a3f, quirk_disable_msi); 2720 2721 /* 2722 * The APC bridge device in AMD 780 family northbridges has some random
··· 2717 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8131_BRIDGE, quirk_disable_msi); 2718 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, 0xa238, quirk_disable_msi); 2719 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x5a3f, quirk_disable_msi); 2720 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_RDC, 0x1031, quirk_disable_msi); 2721 2722 /* 2723 * The APC bridge device in AMD 780 family northbridges has some random
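quirk_disable_msi(), defined earlier in quirks.c, flags the bridge's
subordinate bus with PCI_BUS_FLAGS_NO_MSI so devices below it fall back
to INTx. Registering it for another device is a one-liner; with made-up
IDs for illustration:

    /* Sketch only: hypothetical vendor/device, same shape as the RDC entry */
    DECLARE_PCI_FIXUP_FINAL(0x1234, 0x5678, quirk_disable_msi);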
drivers/pci/remove.c (+3)
··· 31 return; 32 33 of_device_unregister(pdev); 34 of_node_clear_flag(np, OF_POPULATED); 35 } 36 ··· 140 */ 141 void pci_stop_and_remove_bus_device(struct pci_dev *dev) 142 { 143 pci_stop_bus_device(dev); 144 pci_remove_bus_device(dev); 145 }
··· 31 return; 32 33 of_device_unregister(pdev); 34 + put_device(&pdev->dev); 35 + 36 of_node_clear_flag(np, OF_POPULATED); 37 } 38 ··· 138 */ 139 void pci_stop_and_remove_bus_device(struct pci_dev *dev) 140 { 141 + lockdep_assert_held(&pci_rescan_remove_lock); 142 pci_stop_bus_device(dev); 143 pci_remove_bus_device(dev); 144 }
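The put_device() balances the reference taken when the platform device
was looked up earlier in this (elided) function, and the
lockdep_assert_held() is what motivated un-static-ing
pci_rescan_remove_lock in probe.c. The caller contract it makes explicit,
as a sketch (essentially what the existing
pci_stop_and_remove_bus_device_locked() wrapper already does):

    static void example_remove(struct pci_dev *dev)
    {
            pci_lock_rescan_remove();
            pci_stop_and_remove_bus_device(dev);    /* now lockdep-checked */
            pci_unlock_rescan_remove();
    }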
drivers/pci/setup-bus.c (+441 -414)
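The hunk below consolidates per-caller type/mask juggling into two
selectors, pbus_select_window_for_type() and pbus_select_window(), and
switches unused bridge windows to IORESOURCE_DISABLED bookkeeping instead
of zeroing their flags. As a usage sketch (illustrative helper, not in
the patch):

    /* Sketch: would @res land in an enabled bridge window on @bus? */
    static bool res_has_usable_window(struct pci_bus *bus, struct resource *res)
    {
            struct resource *win = pbus_select_window(bus, res);

            return win && !(win->flags & IORESOURCE_DISABLED);
    }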
··· 28 #include <linux/acpi.h> 29 #include "pci.h" 30 31 unsigned int pci_flags; 32 EXPORT_SYMBOL_GPL(pci_flags); 33 ··· 140 res->flags = dev_res->flags; 141 } 142 143 static bool pdev_resources_assignable(struct pci_dev *dev) 144 { 145 u16 class = dev->class >> 8, command; ··· 291 return true; 292 } 293 294 /* Sort resources by alignment */ 295 static void pdev_sort_resources(struct pci_dev *dev, struct list_head *head) 296 { ··· 331 resource_size_t r_align; 332 struct list_head *n; 333 334 - if (r->flags & IORESOURCE_PCI_FIXED) 335 - continue; 336 - 337 - if (!(r->flags) || r->parent) 338 continue; 339 340 r_align = pci_resource_alignment(dev, r); ··· 380 return false; 381 } 382 383 - static inline void reset_resource(struct resource *res) 384 { 385 res->start = 0; 386 res->end = 0; 387 res->flags = 0; ··· 550 } 551 552 /* Return: @true if assignment of a required resource failed. */ 553 - static bool pci_required_resource_failed(struct list_head *fail_head) 554 { 555 struct pci_dev_resource *fail_res; 556 557 list_for_each_entry(fail_res, fail_head, list) { 558 int idx = pci_resource_num(fail_res->dev, fail_res->res); 559 560 if (!pci_resource_is_optional(fail_res->dev, idx)) 561 return true; ··· 603 struct pci_dev_resource *dev_res, *tmp_res, *dev_res2; 604 struct resource *res; 605 struct pci_dev *dev; 606 - const char *res_name; 607 - int idx; 608 unsigned long fail_type; 609 resource_size_t add_align, align; 610 ··· 674 } 675 676 /* Without realloc_head and only optional fails, nothing more to do. */ 677 - if (!pci_required_resource_failed(&local_fail_head) && 678 list_empty(realloc_head)) { 679 list_for_each_entry(save_res, &save_head, list) { 680 struct resource *res = save_res->res; ··· 710 res = dev_res->res; 711 dev = dev_res->dev; 712 713 - if (!res->parent) 714 - continue; 715 - 716 - idx = pci_resource_num(dev, res); 717 - res_name = pci_resource_name(dev, idx); 718 - pci_dbg(dev, "%s %pR: releasing\n", res_name, res); 719 - 720 - release_resource(res); 721 restore_dev_resource(dev_res); 722 } 723 /* Restore start/end/flags from saved list */ ··· 740 0 /* don't care */); 741 } 742 743 - reset_resource(res); 744 } 745 746 free_list(head); ··· 781 782 res = bus->resource[0]; 783 pcibios_resource_to_bus(bridge->bus, &region, res); 784 - if (res->flags & IORESOURCE_IO) { 785 /* 786 * The IO resource is allocated a range twice as large as it 787 * would normally need. This allows us to set both IO regs. 
··· 795 796 res = bus->resource[1]; 797 pcibios_resource_to_bus(bridge->bus, &region, res); 798 - if (res->flags & IORESOURCE_IO) { 799 pci_info(bridge, " bridge window %pR\n", res); 800 pci_write_config_dword(bridge, PCI_CB_IO_BASE_1, 801 region.start); ··· 805 806 res = bus->resource[2]; 807 pcibios_resource_to_bus(bridge->bus, &region, res); 808 - if (res->flags & IORESOURCE_MEM) { 809 pci_info(bridge, " bridge window %pR\n", res); 810 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_0, 811 region.start); ··· 815 816 res = bus->resource[3]; 817 pcibios_resource_to_bus(bridge->bus, &region, res); 818 - if (res->flags & IORESOURCE_MEM) { 819 pci_info(bridge, " bridge window %pR\n", res); 820 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_1, 821 region.start); ··· 856 res = &bridge->resource[PCI_BRIDGE_IO_WINDOW]; 857 res_name = pci_resource_name(bridge, PCI_BRIDGE_IO_WINDOW); 858 pcibios_resource_to_bus(bridge->bus, &region, res); 859 - if (res->flags & IORESOURCE_IO) { 860 pci_read_config_word(bridge, PCI_IO_BASE, &l); 861 io_base_lo = (region.start >> 8) & io_mask; 862 io_limit_lo = (region.end >> 8) & io_mask; ··· 888 res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW]; 889 res_name = pci_resource_name(bridge, PCI_BRIDGE_MEM_WINDOW); 890 pcibios_resource_to_bus(bridge->bus, &region, res); 891 - if (res->flags & IORESOURCE_MEM) { 892 l = (region.start >> 16) & 0xfff0; 893 l |= region.end & 0xfff00000; 894 pci_info(bridge, " %s %pR\n", res_name, res); ··· 917 res = &bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW]; 918 res_name = pci_resource_name(bridge, PCI_BRIDGE_PREF_MEM_WINDOW); 919 pcibios_resource_to_bus(bridge->bus, &region, res); 920 - if (res->flags & IORESOURCE_PREFETCH) { 921 l = (region.start >> 16) & 0xfff0; 922 l |= region.end & 0xfff00000; 923 if (res->flags & IORESOURCE_MEM_64) { ··· 953 pci_write_config_word(bridge, PCI_BRIDGE_CONTROL, bus->bridge_ctl); 954 } 955 956 void __weak pcibios_setup_bridge(struct pci_bus *bus, unsigned long type) 957 { 958 } ··· 986 987 int pci_claim_bridge_resource(struct pci_dev *bridge, int i) 988 { 989 if (i < PCI_BRIDGE_RESOURCES || i > PCI_BRIDGE_RESOURCE_END) 990 return 0; 991 ··· 997 if ((bridge->class >> 8) != PCI_CLASS_BRIDGE_PCI) 998 return 0; 999 1000 - if (!pci_bus_clip_resource(bridge, i)) 1001 - return -EINVAL; /* Clipping didn't change anything */ 1002 - 1003 - switch (i) { 1004 - case PCI_BRIDGE_IO_WINDOW: 1005 - pci_setup_bridge_io(bridge); 1006 - break; 1007 - case PCI_BRIDGE_MEM_WINDOW: 1008 - pci_setup_bridge_mmio(bridge); 1009 - break; 1010 - case PCI_BRIDGE_PREF_MEM_WINDOW: 1011 - pci_setup_bridge_mmio_pref(bridge); 1012 - break; 1013 - default: 1014 return -EINVAL; 1015 - } 1016 1017 - if (pci_claim_resource(bridge, i) == 0) 1018 - return 0; /* Claimed a smaller window */ 1019 1020 - return -EINVAL; 1021 } 1022 1023 /* ··· 1035 PCI_PREF_RANGE_TYPE_64; 1036 } 1037 } 1038 - } 1039 - 1040 - /* 1041 - * Helper function for sizing routines. Assigned resources have non-NULL 1042 - * parent resource. 1043 - * 1044 - * Return first unassigned resource of the correct type. If there is none, 1045 - * return first assigned resource of the correct type. If none of the 1046 - * above, return NULL. 1047 - * 1048 - * Returning an assigned resource of the correct type allows the caller to 1049 - * distinguish between already assigned and no resource of the correct type. 
1050 - */ 1051 - static struct resource *find_bus_resource_of_type(struct pci_bus *bus, 1052 - unsigned long type_mask, 1053 - unsigned long type) 1054 - { 1055 - struct resource *r, *r_assigned = NULL; 1056 - 1057 - pci_bus_for_each_resource(bus, r) { 1058 - if (r == &ioport_resource || r == &iomem_resource) 1059 - continue; 1060 - if (r && (r->flags & type_mask) == type && !r->parent) 1061 - return r; 1062 - if (r && (r->flags & type_mask) == type && !r_assigned) 1063 - r_assigned = r; 1064 - } 1065 - return r_assigned; 1066 } 1067 1068 static resource_size_t calculate_iosize(resource_size_t size, ··· 1127 struct list_head *realloc_head) 1128 { 1129 struct pci_dev *dev; 1130 - struct resource *b_res = find_bus_resource_of_type(bus, IORESOURCE_IO, 1131 - IORESOURCE_IO); 1132 resource_size_t size = 0, size0 = 0, size1 = 0; 1133 resource_size_t children_add_size = 0; 1134 resource_size_t min_align, align; ··· 1148 1149 if (r->parent || !(r->flags & IORESOURCE_IO)) 1150 continue; 1151 - r_size = resource_size(r); 1152 1153 if (r_size < SZ_1K) 1154 /* Might be re-aligned for ISA */ 1155 size += r_size; ··· 1171 size0 = calculate_iosize(size, min_size, size1, 0, 0, 1172 resource_size(b_res), min_align); 1173 1174 size1 = size0; 1175 if (realloc_head && (add_size > 0 || children_add_size > 0)) { 1176 size1 = calculate_iosize(size, min_size, size1, add_size, ··· 1185 if (bus->self && (b_res->start || b_res->end)) 1186 pci_info(bus->self, "disabling bridge window %pR to %pR (unused)\n", 1187 b_res, &bus->busn_res); 1188 - b_res->flags = 0; 1189 return; 1190 } 1191 1192 resource_set_range(b_res, min_align, size0); 1193 b_res->flags |= IORESOURCE_STARTALIGN; 1194 if (bus->self && size1 > size0 && realloc_head) { 1195 add_to_list(realloc_head, bus->self, b_res, size1-size0, 1196 min_align); 1197 pci_info(bus->self, "bridge window %pR to %pR add_size %llx\n", ··· 1226 /** 1227 * pbus_upstream_space_available - Check no upstream resource limits allocation 1228 * @bus: The bus 1229 - * @mask: Mask the resource flag, then compare it with type 1230 - * @type: The type of resource from bridge 1231 * @size: The size required from the bridge window 1232 * @align: Required alignment for the resource 1233 * 1234 - * Checks that @size can fit inside the upstream bridge resources that are 1235 - * already assigned. 1236 * 1237 * Return: %true if enough space is available on all assigned upstream 1238 * resources. 
1239 */ 1240 - static bool pbus_upstream_space_available(struct pci_bus *bus, unsigned long mask, 1241 - unsigned long type, resource_size_t size, 1242 resource_size_t align) 1243 { 1244 struct resource_constraint constraint = { ··· 1247 .align = align, 1248 }; 1249 struct pci_bus *downstream = bus; 1250 - struct resource *r; 1251 1252 while ((bus = bus->parent)) { 1253 if (pci_is_root_bus(bus)) 1254 break; 1255 1256 - pci_bus_for_each_resource(bus, r) { 1257 - if (!r || !r->parent || (r->flags & mask) != type) 1258 - continue; 1259 - 1260 - if (resource_size(r) >= size) { 1261 - struct resource gap = {}; 1262 - 1263 - if (find_resource_space(r, &gap, size, &constraint) == 0) { 1264 - gap.flags = type; 1265 - pci_dbg(bus->self, 1266 - "Assigned bridge window %pR to %pR free space at %pR\n", 1267 - r, &bus->busn_res, &gap); 1268 - return true; 1269 - } 1270 - } 1271 - 1272 - if (bus->self) { 1273 - pci_info(bus->self, 1274 - "Assigned bridge window %pR to %pR cannot fit 0x%llx required for %s bridging to %pR\n", 1275 - r, &bus->busn_res, 1276 - (unsigned long long)size, 1277 - pci_name(downstream->self), 1278 - &downstream->busn_res); 1279 - } 1280 - 1281 return false; 1282 } 1283 } 1284 1285 return true; ··· 1289 * pbus_size_mem() - Size the memory window of a given bus 1290 * 1291 * @bus: The bus 1292 - * @mask: Mask the resource flag, then compare it with type 1293 - * @type: The type of free resource from bridge 1294 - * @type2: Second match type 1295 - * @type3: Third match type 1296 * @min_size: The minimum memory window that must be allocated 1297 * @add_size: Additional optional memory window 1298 * @realloc_head: Track the additional memory window on this list 1299 * 1300 - * Calculate the size of the bus and minimal alignment which guarantees 1301 - * that all child resources fit in this size. 1302 * 1303 - * Return -ENOSPC if there's no available bus resource of the desired 1304 - * type. Otherwise, set the bus resource start/end to indicate the 1305 - * required size, add things to realloc_head (if supplied), and return 0. 
1306 */ 1307 - static int pbus_size_mem(struct pci_bus *bus, unsigned long mask, 1308 - unsigned long type, unsigned long type2, 1309 - unsigned long type3, resource_size_t min_size, 1310 resource_size_t add_size, 1311 struct list_head *realloc_head) 1312 { ··· 1312 resource_size_t min_align, win_align, align, size, size0, size1 = 0; 1313 resource_size_t aligns[28]; /* Alignments from 1MB to 128TB */ 1314 int order, max_order; 1315 - struct resource *b_res = find_bus_resource_of_type(bus, 1316 - mask | IORESOURCE_PREFETCH, type); 1317 resource_size_t children_add_size = 0; 1318 resource_size_t children_add_align = 0; 1319 resource_size_t add_align = 0; 1320 1321 if (!b_res) 1322 - return -ENOSPC; 1323 1324 /* If resource is already assigned, nothing more to do */ 1325 if (b_res->parent) 1326 - return 0; 1327 1328 memset(aligns, 0, sizeof(aligns)); 1329 max_order = 0; ··· 1338 const char *r_name = pci_resource_name(dev, i); 1339 resource_size_t r_size; 1340 1341 - if (r->parent || (r->flags & IORESOURCE_PCI_FIXED) || 1342 - ((r->flags & mask) != type && 1343 - (r->flags & mask) != type2 && 1344 - (r->flags & mask) != type3)) 1345 continue; 1346 r_size = resource_size(r); 1347 1348 /* Put SRIOV requested res to the optional list */ ··· 1388 } 1389 } 1390 1391 win_align = window_alignment(bus, b_res->flags); 1392 min_align = calculate_mem_align(aligns, max_order); 1393 min_align = max(min_align, win_align); 1394 - size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), min_align); 1395 1396 if (bus->self && size0 && 1397 - !pbus_upstream_space_available(bus, mask | IORESOURCE_PREFETCH, type, 1398 - size0, min_align)) { 1399 - min_align = 1ULL << (max_order + __ffs(SZ_1M)); 1400 - min_align = max(min_align, win_align); 1401 - size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), win_align); 1402 pci_info(bus->self, "bridge window %pR to %pR requires relaxed alignment rules\n", 1403 b_res, &bus->busn_res); 1404 } ··· 1413 if (realloc_head && (add_size > 0 || children_add_size > 0)) { 1414 add_align = max(min_align, add_align); 1415 size1 = calculate_memsize(size, min_size, add_size, children_add_size, 1416 - resource_size(b_res), add_align); 1417 1418 if (bus->self && size1 && 1419 - !pbus_upstream_space_available(bus, mask | IORESOURCE_PREFETCH, type, 1420 - size1, add_align)) { 1421 - min_align = 1ULL << (max_order + __ffs(SZ_1M)); 1422 - min_align = max(min_align, win_align); 1423 size1 = calculate_memsize(size, min_size, add_size, children_add_size, 1424 - resource_size(b_res), win_align); 1425 pci_info(bus->self, 1426 "bridge window %pR to %pR requires relaxed alignment rules\n", 1427 b_res, &bus->busn_res); ··· 1432 if (bus->self && (b_res->start || b_res->end)) 1433 pci_info(bus->self, "disabling bridge window %pR to %pR (unused)\n", 1434 b_res, &bus->busn_res); 1435 - b_res->flags = 0; 1436 - return 0; 1437 } 1438 1439 resource_set_range(b_res, min_align, size0); 1440 b_res->flags |= IORESOURCE_STARTALIGN; 1441 if (bus->self && size1 > size0 && realloc_head) { 1442 add_to_list(realloc_head, bus->self, b_res, size1-size0, add_align); 1443 pci_info(bus->self, "bridge window %pR to %pR add_size %llx add_align %llx\n", 1444 b_res, &bus->busn_res, 1445 (unsigned long long) (size1 - size0), 1446 (unsigned long long) add_align); 1447 } 1448 - return 0; 1449 } 1450 1451 unsigned long pci_cardbus_resource_alignment(struct resource *res) ··· 1550 void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head) 1551 { 1552 struct pci_dev *dev; 1553 - 
unsigned long mask, prefmask, type2 = 0, type3 = 0; 1554 resource_size_t additional_io_size = 0, additional_mmio_size = 0, 1555 additional_mmio_pref_size = 0; 1556 struct resource *pref; 1557 struct pci_host_bridge *host; 1558 - int hdr_type, ret; 1559 1560 list_for_each_entry(dev, &bus->devices, bus_list) { 1561 struct pci_bus *b = dev->subordinate; ··· 1604 pbus_size_io(bus, realloc_head ? 0 : additional_io_size, 1605 additional_io_size, realloc_head); 1606 1607 - /* 1608 - * If there's a 64-bit prefetchable MMIO window, compute 1609 - * the size required to put all 64-bit prefetchable 1610 - * resources in it. 1611 - */ 1612 - mask = IORESOURCE_MEM; 1613 - prefmask = IORESOURCE_MEM | IORESOURCE_PREFETCH; 1614 - if (pref && (pref->flags & IORESOURCE_MEM_64)) { 1615 - prefmask |= IORESOURCE_MEM_64; 1616 - ret = pbus_size_mem(bus, prefmask, prefmask, 1617 - prefmask, prefmask, 1618 - realloc_head ? 0 : additional_mmio_pref_size, 1619 - additional_mmio_pref_size, realloc_head); 1620 - 1621 - /* 1622 - * If successful, all non-prefetchable resources 1623 - * and any 32-bit prefetchable resources will go in 1624 - * the non-prefetchable window. 1625 - */ 1626 - if (ret == 0) { 1627 - mask = prefmask; 1628 - type2 = prefmask & ~IORESOURCE_MEM_64; 1629 - type3 = prefmask & ~IORESOURCE_PREFETCH; 1630 - } 1631 } 1632 1633 - /* 1634 - * If there is no 64-bit prefetchable window, compute the 1635 - * size required to put all prefetchable resources in the 1636 - * 32-bit prefetchable window (if there is one). 1637 - */ 1638 - if (!type2) { 1639 - prefmask &= ~IORESOURCE_MEM_64; 1640 - ret = pbus_size_mem(bus, prefmask, prefmask, 1641 - prefmask, prefmask, 1642 - realloc_head ? 0 : additional_mmio_pref_size, 1643 - additional_mmio_pref_size, realloc_head); 1644 - 1645 - /* 1646 - * If successful, only non-prefetchable resources 1647 - * will go in the non-prefetchable window. 1648 - */ 1649 - if (ret == 0) 1650 - mask = prefmask; 1651 - else 1652 - additional_mmio_size += additional_mmio_pref_size; 1653 - 1654 - type2 = type3 = IORESOURCE_MEM; 1655 - } 1656 - 1657 - /* 1658 - * Compute the size required to put everything else in the 1659 - * non-prefetchable window. This includes: 1660 - * 1661 - * - all non-prefetchable resources 1662 - * - 32-bit prefetchable resources if there's a 64-bit 1663 - * prefetchable window or no prefetchable window at all 1664 - * - 64-bit prefetchable resources if there's no prefetchable 1665 - * window at all 1666 - * 1667 - * Note that the strategy in __pci_assign_resource() must match 1668 - * that used here. Specifically, we cannot put a 32-bit 1669 - * prefetchable resource in a 64-bit prefetchable window. 1670 - */ 1671 - pbus_size_mem(bus, mask, IORESOURCE_MEM, type2, type3, 1672 realloc_head ? 0 : additional_mmio_size, 1673 additional_mmio_size, realloc_head); 1674 break; ··· 1804 } 1805 } 1806 1807 - #define PCI_RES_TYPE_MASK \ 1808 - (IORESOURCE_IO | IORESOURCE_MEM | IORESOURCE_PREFETCH |\ 1809 - IORESOURCE_MEM_64) 1810 - 1811 static void pci_bridge_release_resources(struct pci_bus *bus, 1812 - unsigned long type) 1813 { 1814 struct pci_dev *dev = bus->self; 1815 - struct resource *r; 1816 - unsigned int old_flags; 1817 - struct resource *b_res; 1818 - int idx = 1; 1819 1820 - b_res = &dev->resource[PCI_BRIDGE_RESOURCES]; 1821 - 1822 - /* 1823 - * 1. If IO port assignment fails, release bridge IO port. 1824 - * 2. If non pref MMIO assignment fails, release bridge nonpref MMIO. 1825 - * 3. 
If 64bit pref MMIO assignment fails, and bridge pref is 64bit, 1826 - * release bridge pref MMIO. 1827 - * 4. If pref MMIO assignment fails, and bridge pref is 32bit, 1828 - * release bridge pref MMIO. 1829 - * 5. If pref MMIO assignment fails, and bridge pref is not 1830 - * assigned, release bridge nonpref MMIO. 1831 - */ 1832 - if (type & IORESOURCE_IO) 1833 - idx = 0; 1834 - else if (!(type & IORESOURCE_PREFETCH)) 1835 - idx = 1; 1836 - else if ((type & IORESOURCE_MEM_64) && 1837 - (b_res[2].flags & IORESOURCE_MEM_64)) 1838 - idx = 2; 1839 - else if (!(b_res[2].flags & IORESOURCE_MEM_64) && 1840 - (b_res[2].flags & IORESOURCE_PREFETCH)) 1841 - idx = 2; 1842 - else 1843 - idx = 1; 1844 - 1845 - r = &b_res[idx]; 1846 - 1847 - if (!r->parent) 1848 return; 1849 1850 - /* If there are children, release them all */ 1851 - release_child_resources(r); 1852 - if (!release_resource(r)) { 1853 - type = old_flags = r->flags & PCI_RES_TYPE_MASK; 1854 - pci_info(dev, "resource %d %pR released\n", 1855 - PCI_BRIDGE_RESOURCES + idx, r); 1856 - /* Keep the old size */ 1857 - resource_set_range(r, 0, resource_size(r)); 1858 - r->flags = 0; 1859 1860 - /* Avoiding touch the one without PREF */ 1861 - if (type & IORESOURCE_PREFETCH) 1862 - type = IORESOURCE_PREFETCH; 1863 - __pci_setup_bridge(bus, type); 1864 - /* For next child res under same bridge */ 1865 - r->flags = old_flags; 1866 - } 1867 } 1868 1869 enum release_type { ··· 1835 * a larger window later. 1836 */ 1837 static void pci_bus_release_bridge_resources(struct pci_bus *bus, 1838 - unsigned long type, 1839 enum release_type rel_type) 1840 { 1841 struct pci_dev *dev; ··· 1843 1844 list_for_each_entry(dev, &bus->devices, bus_list) { 1845 struct pci_bus *b = dev->subordinate; 1846 if (!b) 1847 continue; 1848 ··· 1853 if ((dev->class >> 8) != PCI_CLASS_BRIDGE_PCI) 1854 continue; 1855 1856 - if (rel_type == whole_subtree) 1857 - pci_bus_release_bridge_resources(b, type, 1858 - whole_subtree); 1859 } 1860 1861 if (pci_is_root_bus(bus)) ··· 1871 return; 1872 1873 if ((rel_type == whole_subtree) || is_leaf_bridge) 1874 - pci_bridge_release_resources(bus, type); 1875 } 1876 1877 static void pci_bus_dump_res(struct pci_bus *bus) ··· 2046 avail->start = min(avail->start + tmp, avail->end + 1); 2047 } 2048 2049 - static void remove_dev_resources(struct pci_dev *dev, struct resource *io, 2050 - struct resource *mmio, 2051 - struct resource *mmio_pref) 2052 { 2053 - struct resource *res; 2054 2055 pci_dev_for_each_resource(dev, res) { 2056 - if (resource_type(res) == IORESOURCE_IO) { 2057 - remove_dev_resource(io, dev, res); 2058 - } else if (resource_type(res) == IORESOURCE_MEM) { 2059 2060 - /* 2061 - * Make sure prefetchable memory is reduced from 2062 - * the correct resource. Specifically we put 32-bit 2063 - * prefetchable memory in non-prefetchable window 2064 - * if there is a 64-bit prefetchable window. 2065 - * 2066 - * See comments in __pci_bus_size_bridges() for 2067 - * more information. 2068 - */ 2069 - if ((res->flags & IORESOURCE_PREFETCH) && 2070 - ((res->flags & IORESOURCE_MEM_64) == 2071 - (mmio_pref->flags & IORESOURCE_MEM_64))) 2072 - remove_dev_resource(mmio_pref, dev, res); 2073 - else 2074 - remove_dev_resource(mmio, dev, res); 2075 - } 2076 } 2077 } 2078 ··· 2074 * shared with the bridges. 
2075 */ 2076 static void pci_bus_distribute_available_resources(struct pci_bus *bus, 2077 - struct list_head *add_list, 2078 - struct resource io, 2079 - struct resource mmio, 2080 - struct resource mmio_pref) 2081 { 2082 unsigned int normal_bridges = 0, hotplug_bridges = 0; 2083 - struct resource *io_res, *mmio_res, *mmio_pref_res; 2084 struct pci_dev *dev, *bridge = bus->self; 2085 - resource_size_t io_per_b, mmio_per_b, mmio_pref_per_b, align; 2086 2087 - io_res = &bridge->resource[PCI_BRIDGE_IO_WINDOW]; 2088 - mmio_res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW]; 2089 - mmio_pref_res = &bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW]; 2090 2091 - /* 2092 - * The alignment of this bridge is yet to be considered, hence it must 2093 - * be done now before extending its bridge window. 2094 - */ 2095 - align = pci_resource_alignment(bridge, io_res); 2096 - if (!io_res->parent && align) 2097 - io.start = min(ALIGN(io.start, align), io.end + 1); 2098 2099 - align = pci_resource_alignment(bridge, mmio_res); 2100 - if (!mmio_res->parent && align) 2101 - mmio.start = min(ALIGN(mmio.start, align), mmio.end + 1); 2102 2103 - align = pci_resource_alignment(bridge, mmio_pref_res); 2104 - if (!mmio_pref_res->parent && align) 2105 - mmio_pref.start = min(ALIGN(mmio_pref.start, align), 2106 - mmio_pref.end + 1); 2107 - 2108 - /* 2109 - * Now that we have adjusted for alignment, update the bridge window 2110 - * resources to fill as much remaining resource space as possible. 2111 - */ 2112 - adjust_bridge_window(bridge, io_res, add_list, resource_size(&io)); 2113 - adjust_bridge_window(bridge, mmio_res, add_list, resource_size(&mmio)); 2114 - adjust_bridge_window(bridge, mmio_pref_res, add_list, 2115 - resource_size(&mmio_pref)); 2116 2117 /* 2118 * Calculate how many hotplug bridges and normal bridges there ··· 2130 */ 2131 list_for_each_entry(dev, &bus->devices, bus_list) { 2132 if (!dev->is_virtfn) 2133 - remove_dev_resources(dev, &io, &mmio, &mmio_pref); 2134 } 2135 2136 /* ··· 2142 * split between non-hotplug bridges. This is to allow possible 2143 * hotplug bridges below them to get the extra space as well. 2144 */ 2145 - if (hotplug_bridges) { 2146 - io_per_b = div64_ul(resource_size(&io), hotplug_bridges); 2147 - mmio_per_b = div64_ul(resource_size(&mmio), hotplug_bridges); 2148 - mmio_pref_per_b = div64_ul(resource_size(&mmio_pref), 2149 - hotplug_bridges); 2150 - } else { 2151 - io_per_b = div64_ul(resource_size(&io), normal_bridges); 2152 - mmio_per_b = div64_ul(resource_size(&mmio), normal_bridges); 2153 - mmio_pref_per_b = div64_ul(resource_size(&mmio_pref), 2154 - normal_bridges); 2155 } 2156 2157 for_each_pci_bridge(dev, bus) { ··· 2157 if (hotplug_bridges && !dev->is_hotplug_bridge) 2158 continue; 2159 2160 - res = &dev->resource[PCI_BRIDGE_IO_WINDOW]; 2161 2162 - /* 2163 - * Make sure the split resource space is properly aligned 2164 - * for bridge windows (align it down to avoid going above 2165 - * what is available). 2166 - */ 2167 - align = pci_resource_alignment(dev, res); 2168 - resource_set_size(&io, ALIGN_DOWN_IF_NONZERO(io_per_b, align)); 2169 2170 - /* 2171 - * The x_per_b holds the extra resource space that can be 2172 - * added for each bridge but there is the minimal already 2173 - * reserved as well so adjust x.start down accordingly to 2174 - * cover the whole space. 
2175 - */ 2176 - io.start -= resource_size(res); 2177 2178 - res = &dev->resource[PCI_BRIDGE_MEM_WINDOW]; 2179 - align = pci_resource_alignment(dev, res); 2180 - resource_set_size(&mmio, 2181 - ALIGN_DOWN_IF_NONZERO(mmio_per_b,align)); 2182 - mmio.start -= resource_size(res); 2183 2184 - res = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW]; 2185 - align = pci_resource_alignment(dev, res); 2186 - resource_set_size(&mmio_pref, 2187 - ALIGN_DOWN_IF_NONZERO(mmio_pref_per_b, align)); 2188 - mmio_pref.start -= resource_size(res); 2189 - 2190 - pci_bus_distribute_available_resources(b, add_list, io, mmio, 2191 - mmio_pref); 2192 - 2193 - io.start += io.end + 1; 2194 - mmio.start += mmio.end + 1; 2195 - mmio_pref.start += mmio_pref.end + 1; 2196 } 2197 } 2198 2199 static void pci_bridge_distribute_available_resources(struct pci_dev *bridge, 2200 struct list_head *add_list) 2201 { 2202 - struct resource available_io, available_mmio, available_mmio_pref; 2203 2204 if (!bridge->is_hotplug_bridge) 2205 return; ··· 2199 pci_dbg(bridge, "distributing available resources\n"); 2200 2201 /* Take the initial extra resources from the hotplug port */ 2202 - available_io = bridge->resource[PCI_BRIDGE_IO_WINDOW]; 2203 - available_mmio = bridge->resource[PCI_BRIDGE_MEM_WINDOW]; 2204 - available_mmio_pref = bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW]; 2205 2206 pci_bus_distribute_available_resources(bridge->subordinate, 2207 - add_list, available_io, 2208 - available_mmio, 2209 - available_mmio_pref); 2210 } 2211 2212 static bool pci_bridge_resources_not_assigned(struct pci_dev *dev) ··· 2268 * enough to contain child device resources. 2269 */ 2270 list_for_each_entry(fail_res, fail_head, list) { 2271 - pci_bus_release_bridge_resources(fail_res->dev->bus, 2272 - fail_res->flags & PCI_RES_TYPE_MASK, 2273 - rel_type); 2274 } 2275 2276 /* Restore size and flags */ 2277 - list_for_each_entry(fail_res, fail_head, list) { 2278 - struct resource *res = fail_res->res; 2279 - struct pci_dev *dev = fail_res->dev; 2280 - int idx = pci_resource_num(dev, res); 2281 - 2282 restore_dev_resource(fail_res); 2283 - 2284 - if (!pci_is_bridge(dev)) 2285 - continue; 2286 - 2287 - if (idx >= PCI_BRIDGE_RESOURCES && 2288 - idx <= PCI_BRIDGE_RESOURCE_END) 2289 - res->flags = 0; 2290 - } 2291 2292 free_list(fail_head); 2293 } ··· 2414 } 2415 EXPORT_SYMBOL_GPL(pci_assign_unassigned_bridge_resources); 2416 2417 - int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type) 2418 { 2419 struct pci_dev_resource *dev_res; 2420 - struct pci_dev *next; 2421 LIST_HEAD(saved); 2422 LIST_HEAD(added); 2423 LIST_HEAD(failed); ··· 2432 2433 down_read(&pci_bus_sem); 2434 2435 - /* Walk to the root hub, releasing bridge BARs when possible */ 2436 - next = bridge; 2437 - do { 2438 - bridge = next; 2439 - for (i = PCI_BRIDGE_RESOURCES; i < PCI_BRIDGE_RESOURCE_END; 2440 - i++) { 2441 - struct resource *res = &bridge->resource[i]; 2442 - const char *res_name = pci_resource_name(bridge, i); 2443 2444 - if ((res->flags ^ type) & PCI_RES_TYPE_MASK) 2445 - continue; 2446 2447 - /* Ignore BARs which are still in use */ 2448 - if (res->child) 2449 - continue; 2450 - 2451 ret = add_to_list(&saved, bridge, res, 0, 0); 2452 if (ret) 2453 goto cleanup; 2454 2455 - pci_info(bridge, "%s %pR: releasing\n", res_name, res); 2456 2457 - if (res->parent) 2458 - release_resource(res); 2459 - res->start = 0; 2460 - res->end = 0; 2461 - break; 2462 } 2463 - if (i == PCI_BRIDGE_RESOURCE_END) 2464 - break; 2465 2466 - next = bridge->bus ? 
bridge->bus->self : NULL; 2467 - } while (next); 2468 2469 if (list_empty(&saved)) { 2470 up_read(&pci_bus_sem); ··· 2469 free_list(&added); 2470 2471 if (!list_empty(&failed)) { 2472 - ret = -ENOSPC; 2473 - goto cleanup; 2474 } 2475 2476 list_for_each_entry(dev_res, &saved, list) {
··· 28 #include <linux/acpi.h> 29 #include "pci.h" 30 31 + #define PCI_RES_TYPE_MASK \ 32 + (IORESOURCE_IO | IORESOURCE_MEM | IORESOURCE_PREFETCH |\ 33 + IORESOURCE_MEM_64) 34 + 35 unsigned int pci_flags; 36 EXPORT_SYMBOL_GPL(pci_flags); 37 ··· 136 res->flags = dev_res->flags; 137 } 138 139 + /* 140 + * Helper function for sizing routines. Assigned resources have non-NULL 141 + * parent resource. 142 + * 143 + * Return first unassigned resource of the correct type. If there is none, 144 + * return first assigned resource of the correct type. If none of the 145 + * above, return NULL. 146 + * 147 + * Returning an assigned resource of the correct type allows the caller to 148 + * distinguish between already assigned and no resource of the correct type. 149 + */ 150 + static struct resource *find_bus_resource_of_type(struct pci_bus *bus, 151 + unsigned long type_mask, 152 + unsigned long type) 153 + { 154 + struct resource *r, *r_assigned = NULL; 155 + 156 + pci_bus_for_each_resource(bus, r) { 157 + if (!r || r == &ioport_resource || r == &iomem_resource) 158 + continue; 159 + 160 + if ((r->flags & type_mask) != type) 161 + continue; 162 + 163 + if (!r->parent) 164 + return r; 165 + if (!r_assigned) 166 + r_assigned = r; 167 + } 168 + return r_assigned; 169 + } 170 + 171 + /** 172 + * pbus_select_window_for_type - Select bridge window for a resource type 173 + * @bus: PCI bus 174 + * @type: Resource type (resource flags can be passed as is) 175 + * 176 + * Select the bridge window based on a resource @type. 177 + * 178 + * For memory resources, the selection is done as follows: 179 + * 180 + * Any non-prefetchable resource is put into the non-prefetchable window. 181 + * 182 + * If there is no prefetchable MMIO window, put all memory resources into the 183 + * non-prefetchable window. 184 + * 185 + * If there's a 64-bit prefetchable MMIO window, put all 64-bit prefetchable 186 + * resources into it and place 32-bit prefetchable memory into the 187 + * non-prefetchable window. 188 + * 189 + * Otherwise, put all prefetchable resources into the prefetchable window. 190 + * 191 + * Return: the bridge window resource or NULL if no bridge window is found. 
192 + */ 193 + static struct resource *pbus_select_window_for_type(struct pci_bus *bus, 194 + unsigned long type) 195 + { 196 + int iores_type = type & IORESOURCE_TYPE_BITS; /* w/o 64bit & pref */ 197 + struct resource *mmio, *mmio_pref, *win; 198 + 199 + type &= PCI_RES_TYPE_MASK; /* with 64bit & pref */ 200 + 201 + if ((iores_type != IORESOURCE_IO) && (iores_type != IORESOURCE_MEM)) 202 + return NULL; 203 + 204 + if (pci_is_root_bus(bus)) { 205 + win = find_bus_resource_of_type(bus, type, type); 206 + if (win) 207 + return win; 208 + 209 + type &= ~IORESOURCE_MEM_64; 210 + win = find_bus_resource_of_type(bus, type, type); 211 + if (win) 212 + return win; 213 + 214 + type &= ~IORESOURCE_PREFETCH; 215 + return find_bus_resource_of_type(bus, type, type); 216 + } 217 + 218 + switch (iores_type) { 219 + case IORESOURCE_IO: 220 + return pci_bus_resource_n(bus, PCI_BUS_BRIDGE_IO_WINDOW); 221 + 222 + case IORESOURCE_MEM: 223 + mmio = pci_bus_resource_n(bus, PCI_BUS_BRIDGE_MEM_WINDOW); 224 + mmio_pref = pci_bus_resource_n(bus, PCI_BUS_BRIDGE_PREF_MEM_WINDOW); 225 + 226 + if (!(type & IORESOURCE_PREFETCH) || 227 + !(mmio_pref->flags & IORESOURCE_MEM)) 228 + return mmio; 229 + 230 + if ((type & IORESOURCE_MEM_64) || 231 + !(mmio_pref->flags & IORESOURCE_MEM_64)) 232 + return mmio_pref; 233 + 234 + return mmio; 235 + default: 236 + return NULL; 237 + } 238 + } 239 + 240 + /** 241 + * pbus_select_window - Select bridge window for a resource 242 + * @bus: PCI bus 243 + * @res: Resource 244 + * 245 + * Select the bridge window for @res. If the resource is already assigned, 246 + * return the current bridge window. 247 + * 248 + * For memory resources, the selection is done as follows: 249 + * 250 + * Any non-prefetchable resource is put into the non-prefetchable window. 251 + * 252 + * If there is no prefetchable MMIO window, put all memory resources into the 253 + * non-prefetchable window. 254 + * 255 + * If there's a 64-bit prefetchable MMIO window, put all 64-bit prefetchable 256 + * resources into it and place 32-bit prefetchable memory into the 257 + * non-prefetchable window. 258 + * 259 + * Otherwise, put all prefetchable resources into the prefetchable window. 260 + * 261 + * Return: the bridge window resource or NULL if no bridge window is found. 
262 + */ 263 + struct resource *pbus_select_window(struct pci_bus *bus, 264 + const struct resource *res) 265 + { 266 + if (res->parent) 267 + return res->parent; 268 + 269 + return pbus_select_window_for_type(bus, res->flags); 270 + } 271 + 272 static bool pdev_resources_assignable(struct pci_dev *dev) 273 { 274 u16 class = dev->class >> 8, command; ··· 154 return true; 155 } 156 157 + static bool pdev_resource_assignable(struct pci_dev *dev, struct resource *res) 158 + { 159 + int idx = pci_resource_num(dev, res); 160 + 161 + if (!res->flags) 162 + return false; 163 + 164 + if (idx >= PCI_BRIDGE_RESOURCES && idx <= PCI_BRIDGE_RESOURCE_END && 165 + res->flags & IORESOURCE_DISABLED) 166 + return false; 167 + 168 + return true; 169 + } 170 + 171 + static bool pdev_resource_should_fit(struct pci_dev *dev, struct resource *res) 172 + { 173 + if (res->parent) 174 + return false; 175 + 176 + if (res->flags & IORESOURCE_PCI_FIXED) 177 + return false; 178 + 179 + return pdev_resource_assignable(dev, res); 180 + } 181 + 182 /* Sort resources by alignment */ 183 static void pdev_sort_resources(struct pci_dev *dev, struct list_head *head) 184 { ··· 169 resource_size_t r_align; 170 struct list_head *n; 171 172 + if (!pdev_resource_should_fit(dev, r)) 173 continue; 174 175 r_align = pci_resource_alignment(dev, r); ··· 221 return false; 222 } 223 224 + static inline void reset_resource(struct pci_dev *dev, struct resource *res) 225 { 226 + int idx = pci_resource_num(dev, res); 227 + 228 + if (idx >= PCI_BRIDGE_RESOURCES && idx <= PCI_BRIDGE_RESOURCE_END) { 229 + res->flags |= IORESOURCE_UNSET; 230 + return; 231 + } 232 + 233 res->start = 0; 234 res->end = 0; 235 res->flags = 0; ··· 384 } 385 386 /* Return: @true if assignment of a required resource failed. */ 387 + static bool pci_required_resource_failed(struct list_head *fail_head, 388 + unsigned long type) 389 { 390 struct pci_dev_resource *fail_res; 391 392 + type &= PCI_RES_TYPE_MASK; 393 + 394 list_for_each_entry(fail_res, fail_head, list) { 395 int idx = pci_resource_num(fail_res->dev, fail_res->res); 396 + 397 + if (type && (fail_res->flags & PCI_RES_TYPE_MASK) != type) 398 + continue; 399 400 if (!pci_resource_is_optional(fail_res->dev, idx)) 401 return true; ··· 431 struct pci_dev_resource *dev_res, *tmp_res, *dev_res2; 432 struct resource *res; 433 struct pci_dev *dev; 434 unsigned long fail_type; 435 resource_size_t add_align, align; 436 ··· 504 } 505 506 /* Without realloc_head and only optional fails, nothing more to do. */ 507 + if (!pci_required_resource_failed(&local_fail_head, 0) && 508 list_empty(realloc_head)) { 509 list_for_each_entry(save_res, &save_head, list) { 510 struct resource *res = save_res->res; ··· 540 res = dev_res->res; 541 dev = dev_res->dev; 542 543 + pci_release_resource(dev, pci_resource_num(dev, res)); 544 restore_dev_resource(dev_res); 545 } 546 /* Restore start/end/flags from saved list */ ··· 577 0 /* don't care */); 578 } 579 580 + reset_resource(dev, res); 581 } 582 583 free_list(head); ··· 618 619 res = bus->resource[0]; 620 pcibios_resource_to_bus(bridge->bus, &region, res); 621 + if (res->parent && res->flags & IORESOURCE_IO) { 622 /* 623 * The IO resource is allocated a range twice as large as it 624 * would normally need. This allows us to set both IO regs. 
··· 632 633 res = bus->resource[1]; 634 pcibios_resource_to_bus(bridge->bus, &region, res); 635 + if (res->parent && res->flags & IORESOURCE_IO) { 636 pci_info(bridge, " bridge window %pR\n", res); 637 pci_write_config_dword(bridge, PCI_CB_IO_BASE_1, 638 region.start); ··· 642 643 res = bus->resource[2]; 644 pcibios_resource_to_bus(bridge->bus, &region, res); 645 + if (res->parent && res->flags & IORESOURCE_MEM) { 646 pci_info(bridge, " bridge window %pR\n", res); 647 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_0, 648 region.start); ··· 652 653 res = bus->resource[3]; 654 pcibios_resource_to_bus(bridge->bus, &region, res); 655 + if (res->parent && res->flags & IORESOURCE_MEM) { 656 pci_info(bridge, " bridge window %pR\n", res); 657 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_1, 658 region.start); ··· 693 res = &bridge->resource[PCI_BRIDGE_IO_WINDOW]; 694 res_name = pci_resource_name(bridge, PCI_BRIDGE_IO_WINDOW); 695 pcibios_resource_to_bus(bridge->bus, &region, res); 696 + if (res->parent && res->flags & IORESOURCE_IO) { 697 pci_read_config_word(bridge, PCI_IO_BASE, &l); 698 io_base_lo = (region.start >> 8) & io_mask; 699 io_limit_lo = (region.end >> 8) & io_mask; ··· 725 res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW]; 726 res_name = pci_resource_name(bridge, PCI_BRIDGE_MEM_WINDOW); 727 pcibios_resource_to_bus(bridge->bus, &region, res); 728 + if (res->parent && res->flags & IORESOURCE_MEM) { 729 l = (region.start >> 16) & 0xfff0; 730 l |= region.end & 0xfff00000; 731 pci_info(bridge, " %s %pR\n", res_name, res); ··· 754 res = &bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW]; 755 res_name = pci_resource_name(bridge, PCI_BRIDGE_PREF_MEM_WINDOW); 756 pcibios_resource_to_bus(bridge->bus, &region, res); 757 + if (res->parent && res->flags & IORESOURCE_PREFETCH) { 758 l = (region.start >> 16) & 0xfff0; 759 l |= region.end & 0xfff00000; 760 if (res->flags & IORESOURCE_MEM_64) { ··· 790 pci_write_config_word(bridge, PCI_BRIDGE_CONTROL, bus->bridge_ctl); 791 } 792 793 + static void pci_setup_one_bridge_window(struct pci_dev *bridge, int resno) 794 + { 795 + switch (resno) { 796 + case PCI_BRIDGE_IO_WINDOW: 797 + pci_setup_bridge_io(bridge); 798 + break; 799 + case PCI_BRIDGE_MEM_WINDOW: 800 + pci_setup_bridge_mmio(bridge); 801 + break; 802 + case PCI_BRIDGE_PREF_MEM_WINDOW: 803 + pci_setup_bridge_mmio_pref(bridge); 804 + break; 805 + default: 806 + return; 807 + } 808 + } 809 + 810 void __weak pcibios_setup_bridge(struct pci_bus *bus, unsigned long type) 811 { 812 } ··· 806 807 int pci_claim_bridge_resource(struct pci_dev *bridge, int i) 808 { 809 + int ret = -EINVAL; 810 + 811 if (i < PCI_BRIDGE_RESOURCES || i > PCI_BRIDGE_RESOURCE_END) 812 return 0; 813 ··· 815 if ((bridge->class >> 8) != PCI_CLASS_BRIDGE_PCI) 816 return 0; 817 818 + if (i > PCI_BRIDGE_PREF_MEM_WINDOW) 819 return -EINVAL; 820 821 + /* Try to clip the resource and claim the smaller window */ 822 + if (pci_bus_clip_resource(bridge, i)) 823 + ret = pci_claim_resource(bridge, i); 824 825 + pci_setup_one_bridge_window(bridge, i); 826 + 827 + return ret; 828 } 829 830 /* ··· 864 PCI_PREF_RANGE_TYPE_64; 865 } 866 } 867 } 868 869 static resource_size_t calculate_iosize(resource_size_t size, ··· 984 struct list_head *realloc_head) 985 { 986 struct pci_dev *dev; 987 + struct resource *b_res = pbus_select_window_for_type(bus, IORESOURCE_IO); 988 resource_size_t size = 0, size0 = 0, size1 = 0; 989 resource_size_t children_add_size = 0; 990 resource_size_t min_align, align; ··· 1006 1007 if (r->parent || !(r->flags & 
IORESOURCE_IO)) 1008 continue; 1009 1010 + if (!pdev_resource_assignable(dev, r)) 1011 + continue; 1012 + 1013 + r_size = resource_size(r); 1014 if (r_size < SZ_1K) 1015 /* Might be re-aligned for ISA */ 1016 size += r_size; ··· 1026 size0 = calculate_iosize(size, min_size, size1, 0, 0, 1027 resource_size(b_res), min_align); 1028 1029 + if (size0) 1030 + b_res->flags &= ~IORESOURCE_DISABLED; 1031 + 1032 size1 = size0; 1033 if (realloc_head && (add_size > 0 || children_add_size > 0)) { 1034 size1 = calculate_iosize(size, min_size, size1, add_size, ··· 1037 if (bus->self && (b_res->start || b_res->end)) 1038 pci_info(bus->self, "disabling bridge window %pR to %pR (unused)\n", 1039 b_res, &bus->busn_res); 1040 + b_res->flags |= IORESOURCE_DISABLED; 1041 return; 1042 } 1043 1044 resource_set_range(b_res, min_align, size0); 1045 b_res->flags |= IORESOURCE_STARTALIGN; 1046 if (bus->self && size1 > size0 && realloc_head) { 1047 + b_res->flags &= ~IORESOURCE_DISABLED; 1048 add_to_list(realloc_head, bus->self, b_res, size1-size0, 1049 min_align); 1050 pci_info(bus->self, "bridge window %pR to %pR add_size %llx\n", ··· 1077 /** 1078 * pbus_upstream_space_available - Check no upstream resource limits allocation 1079 * @bus: The bus 1080 + * @res: The resource to help select the correct bridge window 1081 * @size: The size required from the bridge window 1082 * @align: Required alignment for the resource 1083 * 1084 + * Check that @size can fit inside the upstream bridge resources that are 1085 + * already assigned. Select the upstream bridge window based on the type of 1086 + * @res. 1087 * 1088 * Return: %true if enough space is available on all assigned upstream 1089 * resources. 1090 */ 1091 + static bool pbus_upstream_space_available(struct pci_bus *bus, 1092 + struct resource *res, 1093 + resource_size_t size, 1094 resource_size_t align) 1095 { 1096 struct resource_constraint constraint = { ··· 1097 .align = align, 1098 }; 1099 struct pci_bus *downstream = bus; 1100 1101 while ((bus = bus->parent)) { 1102 if (pci_is_root_bus(bus)) 1103 break; 1104 1105 + res = pbus_select_window(bus, res); 1106 + if (!res) 1107 return false; 1108 + if (!res->parent) 1109 + continue; 1110 + 1111 + if (resource_size(res) >= size) { 1112 + struct resource gap = {}; 1113 + 1114 + if (find_resource_space(res, &gap, size, &constraint) == 0) { 1115 + gap.flags = res->flags; 1116 + pci_dbg(bus->self, 1117 + "Assigned bridge window %pR to %pR free space at %pR\n", 1118 + res, &bus->busn_res, &gap); 1119 + return true; 1120 + } 1121 } 1122 + 1123 + if (bus->self) { 1124 + pci_info(bus->self, 1125 + "Assigned bridge window %pR to %pR cannot fit 0x%llx required for %s bridging to %pR\n", 1126 + res, &bus->busn_res, 1127 + (unsigned long long)size, 1128 + pci_name(downstream->self), 1129 + &downstream->busn_res); 1130 + } 1131 + 1132 + return false; 1133 } 1134 1135 return true; ··· 1139 * pbus_size_mem() - Size the memory window of a given bus 1140 * 1141 * @bus: The bus 1142 + * @type: The type of bridge resource 1143 * @min_size: The minimum memory window that must be allocated 1144 * @add_size: Additional optional memory window 1145 * @realloc_head: Track the additional memory window on this list 1146 * 1147 + * Calculate the size of the bus resource for @type and minimal alignment 1148 + * which guarantees that all child resources fit in this size. 1149 * 1150 + * Set the bus resource start/end to indicate the required size if there is an 1151 + * available unassigned bus resource of the desired @type.
1152 + * 1153 + * Add optional resource requests to the @realloc_head list if it is 1154 + * supplied. 1155 */ 1156 + static void pbus_size_mem(struct pci_bus *bus, unsigned long type, 1157 + resource_size_t min_size, 1158 resource_size_t add_size, 1159 struct list_head *realloc_head) 1160 { ··· 1164 resource_size_t min_align, win_align, align, size, size0, size1 = 0; 1165 resource_size_t aligns[28]; /* Alignments from 1MB to 128TB */ 1166 int order, max_order; 1167 + struct resource *b_res = pbus_select_window_for_type(bus, type); 1168 resource_size_t children_add_size = 0; 1169 resource_size_t children_add_align = 0; 1170 resource_size_t add_align = 0; 1171 + resource_size_t relaxed_align; 1172 + resource_size_t old_size; 1173 1174 if (!b_res) 1175 + return; 1176 1177 /* If resource is already assigned, nothing more to do */ 1178 if (b_res->parent) 1179 + return; 1180 1181 memset(aligns, 0, sizeof(aligns)); 1182 max_order = 0; ··· 1189 const char *r_name = pci_resource_name(dev, i); 1190 resource_size_t r_size; 1191 1192 + if (!pdev_resources_assignable(dev) || 1193 + !pdev_resource_should_fit(dev, r)) 1194 continue; 1195 + if (b_res != pbus_select_window(bus, r)) 1196 + continue; 1197 + 1198 r_size = resource_size(r); 1199 1200 /* Put SRIOV requested res to the optional list */ ··· 1238 } 1239 } 1240 1241 + old_size = resource_size(b_res); 1242 win_align = window_alignment(bus, b_res->flags); 1243 min_align = calculate_mem_align(aligns, max_order); 1244 min_align = max(min_align, win_align); 1245 + size0 = calculate_memsize(size, min_size, 0, 0, old_size, min_align); 1246 + 1247 + if (size0) { 1248 + resource_set_range(b_res, min_align, size0); 1249 + b_res->flags &= ~IORESOURCE_DISABLED; 1250 + } 1251 1252 if (bus->self && size0 && 1253 + !pbus_upstream_space_available(bus, b_res, size0, min_align)) { 1254 + relaxed_align = 1ULL << (max_order + __ffs(SZ_1M)); 1255 + relaxed_align = max(relaxed_align, win_align); 1256 + min_align = min(min_align, relaxed_align); 1257 + size0 = calculate_memsize(size, min_size, 0, 0, old_size, win_align); 1258 + resource_set_range(b_res, min_align, size0); 1259 pci_info(bus->self, "bridge window %pR to %pR requires relaxed alignment rules\n", 1260 b_res, &bus->busn_res); 1261 } ··· 1256 if (realloc_head && (add_size > 0 || children_add_size > 0)) { 1257 add_align = max(min_align, add_align); 1258 size1 = calculate_memsize(size, min_size, add_size, children_add_size, 1259 + old_size, add_align); 1260 1261 if (bus->self && size1 && 1262 + !pbus_upstream_space_available(bus, b_res, size1, add_align)) { 1263 + relaxed_align = 1ULL << (max_order + __ffs(SZ_1M)); 1264 + relaxed_align = max(relaxed_align, win_align); 1265 + min_align = min(min_align, relaxed_align); 1266 size1 = calculate_memsize(size, min_size, add_size, children_add_size, 1267 + old_size, win_align); 1268 pci_info(bus->self, 1269 "bridge window %pR to %pR requires relaxed alignment rules\n", 1270 b_res, &bus->busn_res); ··· 1275 if (bus->self && (b_res->start || b_res->end)) 1276 pci_info(bus->self, "disabling bridge window %pR to %pR (unused)\n", 1277 b_res, &bus->busn_res); 1278 + b_res->flags |= IORESOURCE_DISABLED; 1279 + return; 1280 } 1281 1282 resource_set_range(b_res, min_align, size0); 1283 b_res->flags |= IORESOURCE_STARTALIGN; 1284 if (bus->self && size1 > size0 && realloc_head) { 1285 + b_res->flags &= ~IORESOURCE_DISABLED; 1286 add_to_list(realloc_head, bus->self, b_res, size1-size0, add_align); 1287 pci_info(bus->self, "bridge window %pR to %pR add_size %llx add_align %llx\n", 
1288 b_res, &bus->busn_res, 1289 (unsigned long long) (size1 - size0), 1290 (unsigned long long) add_align); 1291 } 1292 } 1293 1294 unsigned long pci_cardbus_resource_alignment(struct resource *res) ··· 1393 void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head) 1394 { 1395 struct pci_dev *dev; 1396 resource_size_t additional_io_size = 0, additional_mmio_size = 0, 1397 additional_mmio_pref_size = 0; 1398 struct resource *pref; 1399 struct pci_host_bridge *host; 1400 + int hdr_type; 1401 1402 list_for_each_entry(dev, &bus->devices, bus_list) { 1403 struct pci_bus *b = dev->subordinate; ··· 1448 pbus_size_io(bus, realloc_head ? 0 : additional_io_size, 1449 additional_io_size, realloc_head); 1450 1451 + if (pref) { 1452 + pbus_size_mem(bus, 1453 + IORESOURCE_MEM | IORESOURCE_PREFETCH | 1454 + (pref->flags & IORESOURCE_MEM_64), 1455 + realloc_head ? 0 : additional_mmio_pref_size, 1456 + additional_mmio_pref_size, realloc_head); 1457 } 1458 1459 + pbus_size_mem(bus, IORESOURCE_MEM, 1460 realloc_head ? 0 : additional_mmio_size, 1461 additional_mmio_size, realloc_head); 1462 break; ··· 1704 } 1705 } 1706 1707 static void pci_bridge_release_resources(struct pci_bus *bus, 1708 + struct resource *b_win) 1709 { 1710 struct pci_dev *dev = bus->self; 1711 + int idx, ret; 1712 1713 + if (!b_win->parent) 1714 return; 1715 1716 + idx = pci_resource_num(dev, b_win); 1717 1718 + /* If there are children, release them all */ 1719 + release_child_resources(b_win); 1720 + 1721 + ret = pci_release_resource(dev, idx); 1722 + if (ret) 1723 + return; 1724 + 1725 + pci_setup_one_bridge_window(dev, idx); 1726 } 1727 1728 enum release_type { ··· 1776 * a larger window later. 1777 */ 1778 static void pci_bus_release_bridge_resources(struct pci_bus *bus, 1779 + struct resource *b_win, 1780 enum release_type rel_type) 1781 { 1782 struct pci_dev *dev; ··· 1784 1785 list_for_each_entry(dev, &bus->devices, bus_list) { 1786 struct pci_bus *b = dev->subordinate; 1787 + struct resource *res; 1788 + 1789 if (!b) 1790 continue; 1791 ··· 1792 if ((dev->class >> 8) != PCI_CLASS_BRIDGE_PCI) 1793 continue; 1794 1795 + if (rel_type != whole_subtree) 1796 + continue; 1797 + 1798 + pci_bus_for_each_resource(b, res) { 1799 + if (res->parent != b_win) 1800 + continue; 1801 + 1802 + pci_bus_release_bridge_resources(b, res, rel_type); 1803 + } 1804 } 1805 1806 if (pci_is_root_bus(bus)) ··· 1804 return; 1805 1806 if ((rel_type == whole_subtree) || is_leaf_bridge) 1807 + pci_bridge_release_resources(bus, b_win); 1808 } 1809 1810 static void pci_bus_dump_res(struct pci_bus *bus) ··· 1979 avail->start = min(avail->start + tmp, avail->end + 1); 1980 } 1981 1982 + static void remove_dev_resources(struct pci_dev *dev, 1983 + struct resource available[PCI_P2P_BRIDGE_RESOURCE_NUM]) 1984 { 1985 + struct resource *res, *b_win; 1986 + int idx; 1987 1988 pci_dev_for_each_resource(dev, res) { 1989 + b_win = pbus_select_window(dev->bus, res); 1990 + if (!b_win) 1991 + continue; 1992 1993 + idx = pci_resource_num(dev->bus->self, b_win); 1994 + idx -= PCI_BRIDGE_RESOURCES; 1995 + 1996 + remove_dev_resource(&available[idx], dev, res); 1997 } 1998 } 1999 ··· 2019 * shared with the bridges. 
2020 */ 2021 static void pci_bus_distribute_available_resources(struct pci_bus *bus, 2022 + struct list_head *add_list, 2023 + struct resource available_in[PCI_P2P_BRIDGE_RESOURCE_NUM]) 2024 { 2025 + struct resource available[PCI_P2P_BRIDGE_RESOURCE_NUM]; 2026 unsigned int normal_bridges = 0, hotplug_bridges = 0; 2027 struct pci_dev *dev, *bridge = bus->self; 2028 + resource_size_t per_bridge[PCI_P2P_BRIDGE_RESOURCE_NUM]; 2029 + resource_size_t align; 2030 + int i; 2031 2032 + for (i = 0; i < PCI_P2P_BRIDGE_RESOURCE_NUM; i++) { 2033 + struct resource *res = pci_bus_resource_n(bus, i); 2034 2035 + available[i] = available_in[i]; 2036 2037 + /* 2038 + * The alignment of this bridge is yet to be considered, 2039 + * hence it must be done now before extending its bridge 2040 + * window. 2041 + */ 2042 + align = pci_resource_alignment(bridge, res); 2043 + if (!res->parent && align) 2044 + available[i].start = min(ALIGN(available[i].start, align), 2045 + available[i].end + 1); 2046 2047 + /* 2048 + * Now that we have adjusted for alignment, update the 2049 + * bridge window resources to fill as much remaining 2050 + * resource space as possible. 2051 + */ 2052 + adjust_bridge_window(bridge, res, add_list, 2053 + resource_size(&available[i])); 2054 + } 2055 2056 /* 2057 * Calculate how many hotplug bridges and normal bridges there ··· 2081 */ 2082 list_for_each_entry(dev, &bus->devices, bus_list) { 2083 if (!dev->is_virtfn) 2084 + remove_dev_resources(dev, available); 2085 } 2086 2087 /* ··· 2093 * split between non-hotplug bridges. This is to allow possible 2094 * hotplug bridges below them to get the extra space as well. 2095 */ 2096 + for (i = 0; i < PCI_P2P_BRIDGE_RESOURCE_NUM; i++) { 2097 + per_bridge[i] = div64_ul(resource_size(&available[i]), 2098 + hotplug_bridges ?: normal_bridges); 2099 } 2100 2101 for_each_pci_bridge(dev, bus) { ··· 2115 if (hotplug_bridges && !dev->is_hotplug_bridge) 2116 continue; 2117 2118 + for (i = 0; i < PCI_P2P_BRIDGE_RESOURCE_NUM; i++) { 2119 + res = pci_bus_resource_n(bus, i); 2120 2121 + /* 2122 + * Make sure the split resource space is properly 2123 + * aligned for bridge windows (align it down to 2124 + * avoid going above what is available). 2125 + */ 2126 + align = pci_resource_alignment(dev, res); 2127 + resource_set_size(&available[i], 2128 + ALIGN_DOWN_IF_NONZERO(per_bridge[i], 2129 + align)); 2130 2131 + /* 2132 + * per_bridge[] holds the extra resource space 2133 + * that can be added for each bridge, but the 2134 + * minimal amount is already reserved as well, so 2135 + * adjust available[i].start down accordingly to 2136 + * cover the whole space.
2137 + */ 2138 + available[i].start -= resource_size(res); 2139 + } 2140 2141 + pci_bus_distribute_available_resources(b, add_list, available); 2142 2143 + for (i = 0; i < PCI_P2P_BRIDGE_RESOURCE_NUM; i++) 2144 + available[i].start = available[i].end + 1; 2145 } 2146 } 2147 2148 static void pci_bridge_distribute_available_resources(struct pci_dev *bridge, 2149 struct list_head *add_list) 2150 { 2151 + struct resource *res, available[PCI_P2P_BRIDGE_RESOURCE_NUM]; 2152 + unsigned int i; 2153 2154 if (!bridge->is_hotplug_bridge) 2155 return; ··· 2165 pci_dbg(bridge, "distributing available resources\n"); 2166 2167 /* Take the initial extra resources from the hotplug port */ 2168 + for (i = 0; i < PCI_P2P_BRIDGE_RESOURCE_NUM; i++) { 2169 + res = pci_resource_n(bridge, PCI_BRIDGE_RESOURCES + i); 2170 + available[i] = *res; 2171 + } 2172 2173 pci_bus_distribute_available_resources(bridge->subordinate, 2174 + add_list, available); 2175 } 2176 2177 static bool pci_bridge_resources_not_assigned(struct pci_dev *dev) ··· 2235 * enough to contain child device resources. 2236 */ 2237 list_for_each_entry(fail_res, fail_head, list) { 2238 + struct pci_bus *bus = fail_res->dev->bus; 2239 + struct resource *b_win; 2240 + 2241 + b_win = pbus_select_window_for_type(bus, fail_res->flags); 2242 + if (!b_win) 2243 + continue; 2244 + pci_bus_release_bridge_resources(bus, b_win, rel_type); 2245 } 2246 2247 /* Restore size and flags */ 2248 + list_for_each_entry(fail_res, fail_head, list) 2249 restore_dev_resource(fail_res); 2250 2251 free_list(fail_head); 2252 } ··· 2389 } 2390 EXPORT_SYMBOL_GPL(pci_assign_unassigned_bridge_resources); 2391 2392 + /* 2393 + * Walk to the root bus, find the bridge window relevant for @res and 2394 + * release it when possible. If the bridge window contains assigned 2395 + * resources, it cannot be released. 2396 + */ 2397 + int pbus_reassign_bridge_resources(struct pci_bus *bus, struct resource *res) 2398 { 2399 + unsigned long type = res->flags; 2400 struct pci_dev_resource *dev_res; 2401 + struct pci_dev *bridge; 2402 LIST_HEAD(saved); 2403 LIST_HEAD(added); 2404 LIST_HEAD(failed); ··· 2401 2402 down_read(&pci_bus_sem); 2403 2404 + while (!pci_is_root_bus(bus)) { 2405 + bridge = bus->self; 2406 + res = pbus_select_window(bus, res); 2407 + if (!res) 2408 + break; 2409 2410 + i = pci_resource_num(bridge, res); 2411 2412 + /* Ignore BARs which are still in use */ 2413 + if (!res->child) { 2414 ret = add_to_list(&saved, bridge, res, 0, 0); 2415 if (ret) 2416 goto cleanup; 2417 2418 + pci_release_resource(bridge, i); 2419 + } else { 2420 + const char *res_name = pci_resource_name(bridge, i); 2421 2422 + pci_warn(bridge, 2423 + "%s %pR: was not released (still contains assigned resources)\n", 2424 + res_name, res); 2425 } 2426 2427 + bus = bus->parent; 2428 + } 2429 2430 if (list_empty(&saved)) { 2431 up_read(&pci_bus_sem); ··· 2446 free_list(&added); 2447 2448 if (!list_empty(&failed)) { 2449 + if (pci_required_resource_failed(&failed, type)) { 2450 + ret = -ENOSPC; 2451 + goto cleanup; 2452 + } 2453 + /* Only resources with unrelated types failed (again) */ 2454 + free_list(&failed); 2455 } 2456 2457 list_for_each_entry(dev_res, &saved, list) {
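[Editor's note] The pbus_select_window_for_type()/pbus_select_window() pair above centralizes the mapping from a resource's type to the bridge window that must host it, which the sizing, release, and distribution paths previously open-coded. As a rough orientation, here is a toy userspace sketch of the rule being centralized; the names, hard-coded flag values, and simplified prefetch fallback are illustrative assumptions, not the kernel implementation (which walks bus->resource[] and also considers IORESOURCE_MEM_64):

	#include <stdio.h>

	#define IORESOURCE_IO		0x00000100
	#define IORESOURCE_MEM		0x00000200
	#define IORESOURCE_PREFETCH	0x00002000
	#define RES_TYPE_MASK		(IORESOURCE_IO | IORESOURCE_MEM | IORESOURCE_PREFETCH)

	/* Toy model: pick the bridge window a resource type must live in */
	static const char *select_window(unsigned long flags, int has_pref_window)
	{
		switch (flags & RES_TYPE_MASK) {
		case IORESOURCE_IO:
			return "I/O window";
		case IORESOURCE_MEM | IORESOURCE_PREFETCH:
			/* Fall back to the memory window if no prefetchable one */
			return has_pref_window ? "prefetchable window" : "memory window";
		case IORESOURCE_MEM:
			return "memory window";
		default:
			return "no suitable window";
		}
	}

	int main(void)
	{
		printf("%s\n", select_window(IORESOURCE_MEM | IORESOURCE_PREFETCH, 0));
		return 0;
	}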
+29 -17
drivers/pci/setup-res.c
··· 359 360 res->flags &= ~IORESOURCE_UNSET; 361 res->flags &= ~IORESOURCE_STARTALIGN; 362 pci_info(dev, "%s %pR: assigned\n", res_name, res); 363 if (resno < PCI_BRIDGE_RESOURCES) 364 pci_update_resource(dev, resno); ··· 409 return 0; 410 } 411 412 - void pci_release_resource(struct pci_dev *dev, int resno) 413 { 414 struct resource *res = pci_resource_n(dev, resno); 415 const char *res_name = pci_resource_name(dev, resno); 416 417 if (!res->parent) 418 - return; 419 420 pci_info(dev, "%s %pR: releasing\n", res_name, res); 421 422 - release_resource(res); 423 res->end = resource_size(res) - 1; 424 res->start = 0; 425 res->flags |= IORESOURCE_UNSET; 426 } 427 EXPORT_SYMBOL(pci_release_resource); 428 ··· 496 497 /* Check if the new config works by trying to assign everything. */ 498 if (dev->bus->self) { 499 - ret = pci_reassign_bridge_resources(dev->bus->self, res->flags); 500 if (ret) 501 goto error_resize; 502 } ··· 530 if (pci_resource_is_optional(dev, i)) 531 continue; 532 533 - if (r->flags & IORESOURCE_UNSET) { 534 - pci_err(dev, "%s %pR: not assigned; can't enable device\n", 535 - r_name, r); 536 - return -EINVAL; 537 } 538 539 - if (!r->parent) { 540 - pci_err(dev, "%s %pR: not claimed; can't enable device\n", 541 - r_name, r); 542 - return -EINVAL; 543 } 544 - 545 - if (r->flags & IORESOURCE_IO) 546 - cmd |= PCI_COMMAND_IO; 547 - if (r->flags & IORESOURCE_MEM) 548 - cmd |= PCI_COMMAND_MEMORY; 549 } 550 551 if (cmd != old_cmd) {
··· 359 360 res->flags &= ~IORESOURCE_UNSET; 361 res->flags &= ~IORESOURCE_STARTALIGN; 362 + if (resno >= PCI_BRIDGE_RESOURCES && resno <= PCI_BRIDGE_RESOURCE_END) 363 + res->flags &= ~IORESOURCE_DISABLED; 364 + 365 pci_info(dev, "%s %pR: assigned\n", res_name, res); 366 if (resno < PCI_BRIDGE_RESOURCES) 367 pci_update_resource(dev, resno); ··· 406 return 0; 407 } 408 409 + int pci_release_resource(struct pci_dev *dev, int resno) 410 { 411 struct resource *res = pci_resource_n(dev, resno); 412 const char *res_name = pci_resource_name(dev, resno); 413 + int ret; 414 415 if (!res->parent) 416 + return 0; 417 418 pci_info(dev, "%s %pR: releasing\n", res_name, res); 419 420 + ret = release_resource(res); 421 + if (ret) 422 + return ret; 423 res->end = resource_size(res) - 1; 424 res->start = 0; 425 res->flags |= IORESOURCE_UNSET; 426 + 427 + return 0; 428 } 429 EXPORT_SYMBOL(pci_release_resource); 430 ··· 488 489 /* Check if the new config works by trying to assign everything. */ 490 if (dev->bus->self) { 491 + ret = pbus_reassign_bridge_resources(dev->bus, res); 492 if (ret) 493 goto error_resize; 494 } ··· 522 if (pci_resource_is_optional(dev, i)) 523 continue; 524 525 + if (i < PCI_BRIDGE_RESOURCES) { 526 + if (r->flags & IORESOURCE_UNSET) { 527 + pci_err(dev, "%s %pR: not assigned; can't enable device\n", 528 + r_name, r); 529 + return -EINVAL; 530 + } 531 + 532 + if (!r->parent) { 533 + pci_err(dev, "%s %pR: not claimed; can't enable device\n", 534 + r_name, r); 535 + return -EINVAL; 536 + } 537 } 538 539 + if (r->parent) { 540 + if (r->flags & IORESOURCE_IO) 541 + cmd |= PCI_COMMAND_IO; 542 + if (r->flags & IORESOURCE_MEM) 543 + cmd |= PCI_COMMAND_MEMORY; 544 } 545 } 546 547 if (cmd != old_cmd) {
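[Editor's note] pci_release_resource() now reports failure instead of silently doing nothing when release_resource() cannot detach the region, and callers reprogram the bridge only on success. A hedged sketch of the new calling convention, mirroring the pci_bridge_release_resources() hunk further up (the variable names and enclosing function are assumed context, not a quote):

	/* Sketch: only rewrite the window registers if the release worked */
	ret = pci_release_resource(bridge, idx);
	if (ret)
		return;

	pci_setup_one_bridge_window(bridge, idx);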
+11 -12
drivers/pci/switch/switchtec.c
··· 269 270 dev_dbg(&stdev->dev, "%s\n", __func__); 271 272 - mutex_lock(&stdev->mrpc_mutex); 273 cancel_delayed_work(&stdev->mrpc_timeout); 274 mrpc_complete_cmd(stdev); 275 - mutex_unlock(&stdev->mrpc_mutex); 276 } 277 278 static void mrpc_error_complete_cmd(struct switchtec_dev *stdev) ··· 1321 cancel_delayed_work_sync(&stdev->mrpc_timeout); 1322 1323 /* Mark the hardware as unavailable and complete all completions */ 1324 - mutex_lock(&stdev->mrpc_mutex); 1325 - stdev->alive = false; 1326 1327 - /* Wake up and kill any users waiting on an MRPC request */ 1328 - list_for_each_entry_safe(stuser, tmpuser, &stdev->mrpc_queue, list) { 1329 - stuser->cmd_done = true; 1330 - wake_up_interruptible(&stuser->cmd_comp); 1331 - list_del_init(&stuser->list); 1332 - stuser_put(stuser); 1333 } 1334 - 1335 - mutex_unlock(&stdev->mrpc_mutex); 1336 1337 /* Wake up any users waiting on event_wq */ 1338 wake_up_interruptible(&stdev->event_wq);
··· 269 270 dev_dbg(&stdev->dev, "%s\n", __func__); 271 272 + guard(mutex)(&stdev->mrpc_mutex); 273 cancel_delayed_work(&stdev->mrpc_timeout); 274 mrpc_complete_cmd(stdev); 275 } 276 277 static void mrpc_error_complete_cmd(struct switchtec_dev *stdev) ··· 1322 cancel_delayed_work_sync(&stdev->mrpc_timeout); 1323 1324 /* Mark the hardware as unavailable and complete all completions */ 1325 + scoped_guard (mutex, &stdev->mrpc_mutex) { 1326 + stdev->alive = false; 1327 1328 + /* Wake up and kill any users waiting on an MRPC request */ 1329 + list_for_each_entry_safe(stuser, tmpuser, &stdev->mrpc_queue, list) { 1330 + stuser->cmd_done = true; 1331 + wake_up_interruptible(&stuser->cmd_comp); 1332 + list_del_init(&stuser->list); 1333 + stuser_put(stuser); 1334 + } 1335 + 1336 } 1337 1338 /* Wake up any users waiting on event_wq */ 1339 wake_up_interruptible(&stdev->event_wq);
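[Editor's note] The switchtec conversion uses guard() and scoped_guard() from the kernel's <linux/cleanup.h>, which drop the mutex automatically when the scope ends and so remove the unlock calls on every exit path. For readers unfamiliar with the pattern, a rough userspace analogue built on the compiler cleanup attribute; pthreads and the GUARD macro here are illustrative assumptions, not the kernel implementation:

	#include <pthread.h>
	#include <stdio.h>

	/* Called automatically when the guard variable leaves scope */
	static void unlock_cleanup(pthread_mutex_t **m)
	{
		pthread_mutex_unlock(*m);
	}

	#define CONCAT_(a, b)	a##b
	#define CONCAT(a, b)	CONCAT_(a, b)
	#define GUARD(m)						\
		pthread_mutex_t *CONCAT(guard_, __LINE__)		\
			__attribute__((cleanup(unlock_cleanup))) =	\
			(pthread_mutex_lock(m), (m))

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static int shared;

	static void bump(void)
	{
		GUARD(&lock);	/* unlocked on every return path */
		shared++;
		printf("shared = %d\n", shared);
	}

	int main(void)
	{
		bump();
		return 0;
	}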
+13
drivers/pinctrl/core.c
··· 1656 EXPORT_SYMBOL_GPL(pinctrl_pm_select_default_state); 1657 1658 /** 1659 * pinctrl_pm_select_sleep_state() - select sleep pinctrl state for PM 1660 * @dev: device to select sleep state for 1661 */
··· 1656 EXPORT_SYMBOL_GPL(pinctrl_pm_select_default_state); 1657 1658 /** 1659 + * pinctrl_pm_select_init_state() - select init pinctrl state for PM 1660 + * @dev: device to select init state for 1661 + */ 1662 + int pinctrl_pm_select_init_state(struct device *dev) 1663 + { 1664 + if (!dev->pins) 1665 + return 0; 1666 + 1667 + return pinctrl_select_bound_state(dev, dev->pins->init_state); 1668 + } 1669 + EXPORT_SYMBOL_GPL(pinctrl_pm_select_init_state); 1670 + 1671 + /** 1672 * pinctrl_pm_select_sleep_state() - select sleep pinctrl state for PM 1673 * @dev: device to select sleep state for 1674 */
+1 -1
drivers/scsi/lpfc/lpfc_init.c
··· 14367 * as desired. 14368 * 14369 * Return codes 14370 - * PCI_ERS_RESULT_CAN_RECOVER - can be recovered with reset_link 14371 * PCI_ERS_RESULT_NEED_RESET - need to reset before recovery 14372 * PCI_ERS_RESULT_DISCONNECT - device could not be recovered 14373 **/
··· 14367 * as desired. 14368 * 14369 * Return codes 14370 + * PCI_ERS_RESULT_CAN_RECOVER - can be recovered without reset 14371 * PCI_ERS_RESULT_NEED_RESET - need to reset before recovery 14372 * PCI_ERS_RESULT_DISCONNECT - device could not be recovered 14373 **/
-5
drivers/scsi/qla2xxx/qla_os.c
··· 7884 "Slot Reset.\n"); 7885 7886 ha->pci_error_state = QLA_PCI_SLOT_RESET; 7887 - /* Workaround: qla2xxx driver which access hardware earlier 7888 - * needs error state to be pci_channel_io_online. 7889 - * Otherwise mailbox command timesout. 7890 - */ 7891 - pdev->error_state = pci_channel_io_normal; 7892 7893 pci_restore_state(pdev); 7894
··· 7884 "Slot Reset.\n"); 7885 7886 ha->pci_error_state = QLA_PCI_SLOT_RESET; 7887 7888 pci_restore_state(pdev); 7889
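[Editor's note] This deletion is possible because, per the error-handling changes summarized earlier, the PCI core now restores pdev->error_state to pci_channel_io_normal before invoking err_handler.slot_reset(), so drivers no longer need to fake the channel state to keep mailbox commands from timing out. For orientation, a skeleton of the callback set involved; the "foo" names are hypothetical and the bodies deliberately minimal:

	static pci_ers_result_t foo_error_detected(struct pci_dev *pdev,
						   pci_channel_state_t state)
	{
		if (state == pci_channel_io_perm_failure)
			return PCI_ERS_RESULT_DISCONNECT;
		return PCI_ERS_RESULT_NEED_RESET;
	}

	static pci_ers_result_t foo_slot_reset(struct pci_dev *pdev)
	{
		/* error_state is already pci_channel_io_normal here */
		pci_restore_state(pdev);
		return PCI_ERS_RESULT_RECOVERED;
	}

	static void foo_resume(struct pci_dev *pdev)
	{
		/* resume normal operation */
	}

	static const struct pci_error_handlers foo_err_handler = {
		.error_detected	= foo_error_detected,
		.slot_reset	= foo_slot_reset,
		.resume		= foo_resume,
	};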
-5
include/linux/pci-p2pdma.h
··· 21 u64 offset); 22 int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients, 23 int num_clients, bool verbose); 24 - bool pci_has_p2pmem(struct pci_dev *pdev); 25 struct pci_dev *pci_p2pmem_find_many(struct device **clients, int num_clients); 26 void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size); 27 void pci_free_p2pmem(struct pci_dev *pdev, void *addr, size_t size); ··· 43 struct device **clients, int num_clients, bool verbose) 44 { 45 return -1; 46 - } 47 - static inline bool pci_has_p2pmem(struct pci_dev *pdev) 48 - { 49 - return false; 50 } 51 static inline struct pci_dev *pci_p2pmem_find_many(struct device **clients, 52 int num_clients)
··· 21 u64 offset); 22 int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients, 23 int num_clients, bool verbose); 24 struct pci_dev *pci_p2pmem_find_many(struct device **clients, int num_clients); 25 void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size); 26 void pci_free_p2pmem(struct pci_dev *pdev, void *addr, size_t size); ··· 44 struct device **clients, int num_clients, bool verbose) 45 { 46 return -1; 47 } 48 static inline struct pci_dev *pci_p2pmem_find_many(struct device **clients, 49 int num_clients)
+4 -3
include/linux/pci.h
··· 119 #define PCI_CB_BRIDGE_MEM_1_WINDOW (PCI_BRIDGE_RESOURCES + 3) 120 121 /* Total number of bridge resources for P2P and CardBus */ 122 - #define PCI_BRIDGE_RESOURCE_NUM 4 123 124 /* Resources assigned to buses behind the bridge */ 125 PCI_BRIDGE_RESOURCES, ··· 1418 void pcibios_reset_secondary_bus(struct pci_dev *dev); 1419 void pci_update_resource(struct pci_dev *dev, int resno); 1420 int __must_check pci_assign_resource(struct pci_dev *dev, int i); 1421 - void pci_release_resource(struct pci_dev *dev, int resno); 1422 static inline int pci_rebar_bytes_to_size(u64 bytes) 1423 { 1424 bytes = roundup_pow_of_two(bytes); ··· 2765 return false; 2766 } 2767 2768 - #if defined(CONFIG_PCIEPORTBUS) || defined(CONFIG_EEH) 2769 void pci_uevent_ers(struct pci_dev *pdev, enum pci_ers_result err_type); 2770 #endif 2771
··· 119 #define PCI_CB_BRIDGE_MEM_1_WINDOW (PCI_BRIDGE_RESOURCES + 3) 120 121 /* Total number of bridge resources for P2P and CardBus */ 122 + #define PCI_P2P_BRIDGE_RESOURCE_NUM 3 123 + #define PCI_BRIDGE_RESOURCE_NUM 4 124 125 /* Resources assigned to buses behind the bridge */ 126 PCI_BRIDGE_RESOURCES, ··· 1417 void pcibios_reset_secondary_bus(struct pci_dev *dev); 1418 void pci_update_resource(struct pci_dev *dev, int resno); 1419 int __must_check pci_assign_resource(struct pci_dev *dev, int i); 1420 + int pci_release_resource(struct pci_dev *dev, int resno); 1421 static inline int pci_rebar_bytes_to_size(u64 bytes) 1422 { 1423 bytes = roundup_pow_of_two(bytes); ··· 2764 return false; 2765 } 2766 2767 + #if defined(CONFIG_PCIEPORTBUS) || defined(CONFIG_EEH) || defined(CONFIG_S390) 2768 void pci_uevent_ers(struct pci_dev *pdev, enum pci_ers_result err_type); 2769 #endif 2770
+10
include/linux/pinctrl/consumer.h
··· 48 49 #ifdef CONFIG_PM 50 int pinctrl_pm_select_default_state(struct device *dev); 51 int pinctrl_pm_select_sleep_state(struct device *dev); 52 int pinctrl_pm_select_idle_state(struct device *dev); 53 #else 54 static inline int pinctrl_pm_select_default_state(struct device *dev) 55 { 56 return 0; 57 } ··· 144 } 145 146 static inline int pinctrl_pm_select_default_state(struct device *dev) 147 { 148 return 0; 149 }
··· 48 49 #ifdef CONFIG_PM 50 int pinctrl_pm_select_default_state(struct device *dev); 51 + int pinctrl_pm_select_init_state(struct device *dev); 52 int pinctrl_pm_select_sleep_state(struct device *dev); 53 int pinctrl_pm_select_idle_state(struct device *dev); 54 #else 55 static inline int pinctrl_pm_select_default_state(struct device *dev) 56 + { 57 + return 0; 58 + } 59 + static inline int pinctrl_pm_select_init_state(struct device *dev) 60 { 61 return 0; 62 } ··· 139 } 140 141 static inline int pinctrl_pm_select_default_state(struct device *dev) 142 + { 143 + return 0; 144 + } 145 + 146 + static inline int pinctrl_pm_select_init_state(struct device *dev) 147 { 148 return 0; 149 }
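[Editor's note] pinctrl_pm_select_init_state() sits alongside the existing default/sleep/idle selectors and exposes the device's "init" pin state to PM code. A hedged usage sketch; the driver name and the exact callback are assumptions:

	/* Park the pins in "init" before reprogramming the hardware,
	 * then switch to "default" once setup is complete.
	 */
	static int foo_resume(struct device *dev)
	{
		int ret;

		ret = pinctrl_pm_select_init_state(dev);
		if (ret)
			return ret;

		/* ... reinitialize the controller ... */

		return pinctrl_pm_select_default_state(dev);
	}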
+10
include/uapi/linux/pci_regs.h
··· 207 208 /* Capability lists */ 209 210 #define PCI_CAP_LIST_ID 0 /* Capability ID */ 211 #define PCI_CAP_ID_PM 0x01 /* Power Management */ 212 #define PCI_CAP_ID_AGP 0x02 /* Accelerated Graphics Port */ ··· 779 #define PCI_ERR_UNC_MCBTLP 0x00800000 /* MC blocked TLP */ 780 #define PCI_ERR_UNC_ATOMEG 0x01000000 /* Atomic egress blocked */ 781 #define PCI_ERR_UNC_TLPPRE 0x02000000 /* TLP prefix blocked */ 782 #define PCI_ERR_UNCOR_MASK 0x08 /* Uncorrectable Error Mask */ 783 /* Same bits as above */ 784 #define PCI_ERR_UNCOR_SEVER 0x0c /* Uncorrectable Error Severity */ ··· 807 #define PCI_ERR_CAP_ECRC_CHKC 0x00000080 /* ECRC Check Capable */ 808 #define PCI_ERR_CAP_ECRC_CHKE 0x00000100 /* ECRC Check Enable */ 809 #define PCI_ERR_CAP_PREFIX_LOG_PRESENT 0x00000800 /* TLP Prefix Log Present */ 810 #define PCI_ERR_CAP_TLP_LOG_FLIT 0x00040000 /* TLP was logged in Flit Mode */ 811 #define PCI_ERR_CAP_TLP_LOG_SIZE 0x00f80000 /* Logged TLP Size (only in Flit mode) */ 812 #define PCI_ERR_HEADER_LOG 0x1c /* Header Log Register (16 bytes) */
··· 207 208 /* Capability lists */ 209 210 + #define PCI_CAP_ID_MASK 0x00ff /* Capability ID mask */ 211 + #define PCI_CAP_LIST_NEXT_MASK 0xff00 /* Next Capability Pointer mask */ 212 + 213 #define PCI_CAP_LIST_ID 0 /* Capability ID */ 214 #define PCI_CAP_ID_PM 0x01 /* Power Management */ 215 #define PCI_CAP_ID_AGP 0x02 /* Accelerated Graphics Port */ ··· 776 #define PCI_ERR_UNC_MCBTLP 0x00800000 /* MC blocked TLP */ 777 #define PCI_ERR_UNC_ATOMEG 0x01000000 /* Atomic egress blocked */ 778 #define PCI_ERR_UNC_TLPPRE 0x02000000 /* TLP prefix blocked */ 779 + #define PCI_ERR_UNC_POISON_BLK 0x04000000 /* Poisoned TLP Egress Blocked */ 780 + #define PCI_ERR_UNC_DMWR_BLK 0x08000000 /* DMWr Request Egress Blocked */ 781 + #define PCI_ERR_UNC_IDE_CHECK 0x10000000 /* IDE Check Failed */ 782 + #define PCI_ERR_UNC_MISR_IDE 0x20000000 /* Misrouted IDE TLP */ 783 + #define PCI_ERR_UNC_PCRC_CHECK 0x40000000 /* PCRC Check Failed */ 784 + #define PCI_ERR_UNC_XLAT_BLK 0x80000000 /* TLP Translation Egress Blocked */ 785 #define PCI_ERR_UNCOR_MASK 0x08 /* Uncorrectable Error Mask */ 786 /* Same bits as above */ 787 #define PCI_ERR_UNCOR_SEVER 0x0c /* Uncorrectable Error Severity */ ··· 798 #define PCI_ERR_CAP_ECRC_CHKC 0x00000080 /* ECRC Check Capable */ 799 #define PCI_ERR_CAP_ECRC_CHKE 0x00000100 /* ECRC Check Enable */ 800 #define PCI_ERR_CAP_PREFIX_LOG_PRESENT 0x00000800 /* TLP Prefix Log Present */ 801 + #define PCI_ERR_CAP_COMP_TIME_LOG 0x00001000 /* Completion Timeout Prefix/Header Log Capable */ 802 #define PCI_ERR_CAP_TLP_LOG_FLIT 0x00040000 /* TLP was logged in Flit Mode */ 803 #define PCI_ERR_CAP_TLP_LOG_SIZE 0x00f80000 /* Logged TLP Size (only in Flit mode) */ 804 #define PCI_ERR_HEADER_LOG 0x1c /* Header Log Register (16 bytes) */
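[Editor's note] The six new PCI_ERR_UNC_* definitions fill bits 26-31 of the AER Uncorrectable Error Status register and back the updated error decoding mentioned in the summary. A small standalone decoder over just the new bits; the values are copied from the hunk above, the program itself is only an illustration:

	#include <stdio.h>

	static const struct {
		unsigned int bit;
		const char *name;
	} new_unc_bits[] = {
		{ 0x04000000, "Poisoned TLP Egress Blocked" },
		{ 0x08000000, "DMWr Request Egress Blocked" },
		{ 0x10000000, "IDE Check Failed" },
		{ 0x20000000, "Misrouted IDE TLP" },
		{ 0x40000000, "PCRC Check Failed" },
		{ 0x80000000, "TLP Translation Egress Blocked" },
	};

	/* Print the names of any of the six new status bits that are set */
	static void decode_new_unc(unsigned int status)
	{
		for (size_t i = 0; i < sizeof(new_unc_bits) / sizeof(new_unc_bits[0]); i++) {
			if (status & new_unc_bits[i].bit)
				printf("%s\n", new_unc_bits[i].name);
		}
	}

	int main(void)
	{
		decode_new_unc(0x50000000);	/* IDE Check + PCRC Check failed */
		return 0;
	}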
+4
tools/testing/selftests/pci_endpoint/pci_endpoint_test.c
··· 121 122 for (i = 1; i <= 32; i++) { 123 pci_ep_ioctl(PCITEST_MSI, i); 124 EXPECT_FALSE(ret) TH_LOG("Test failed for MSI%d", i); 125 } 126 } ··· 139 140 for (i = 1; i <= 2048; i++) { 141 pci_ep_ioctl(PCITEST_MSIX, i); 142 EXPECT_FALSE(ret) TH_LOG("Test failed for MSI-X%d", i); 143 } 144 }
··· 121 122 for (i = 1; i <= 32; i++) { 123 pci_ep_ioctl(PCITEST_MSI, i); 124 + if (ret == -EINVAL) 125 + SKIP(return, "MSI%d is disabled", i); 126 EXPECT_FALSE(ret) TH_LOG("Test failed for MSI%d", i); 127 } 128 } ··· 137 138 for (i = 1; i <= 2048; i++) { 139 pci_ep_ioctl(PCITEST_MSIX, i); 140 + if (ret == -EINVAL) 141 + SKIP(return, "MSI-X%d is disabled", i); 142 EXPECT_FALSE(ret) TH_LOG("Test failed for MSI-X%d", i); 143 } 144 }
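[Editor's note] With these hunks, an endpoint controller that rejects a vector count with -EINVAL makes the test report as skipped rather than failed. SKIP() comes from the kselftest harness; it records the reason and executes the given statement to leave the test body. A minimal standalone shape of the pattern; the fabricated condition stands in for the ioctl result:

	#include <errno.h>

	#include "../kselftest_harness.h"

	TEST(skip_when_unsupported)
	{
		int ret = -EINVAL;	/* stand-in for the ioctl result */

		if (ret == -EINVAL)
			SKIP(return, "feature is disabled");
		EXPECT_FALSE(ret);
	}

	TEST_HARNESS_MAIN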