Merge tag 'pci-v6.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:

- Add PCI_FIND_NEXT_CAP() and PCI_FIND_NEXT_EXT_CAP() macros that
take config space accessor functions.

Reimplement pci_find_capability(), pci_find_ext_capability(), and
the dwc, dwc endpoint, and cadence capability search interfaces in
terms of them (Hans Zhang)
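
A minimal sketch of the classic list walk these macros generalize,
written against the ordinary pci_read_config_*() accessors (the
macros' actual parameters may differ):

    static u8 find_cap(struct pci_dev *dev, u8 cap_id)
    {
            u16 ent;
            u8 pos;
            int ttl = 48;           /* bound the walk; a bad list may loop */

            pci_read_config_byte(dev, PCI_CAPABILITY_LIST, &pos);
            while (ttl-- && pos >= 0x40) {
                    pos &= ~3;      /* entries are dword-aligned */
                    pci_read_config_word(dev, pos, &ent);
                    if ((ent & 0xff) == cap_id)
                            return pos;
                    pos = ent >> 8; /* next pointer in the upper byte */
            }
            return 0;
    }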

- Leave parent unit address 0 in 'interrupt-map' so that when we
build devicetree nodes to describe PCI functions that contain
multiple peripherals, we can build this property even when
interrupt controllers lack 'reg' properties (Lorenzo Pieralisi)

- Add a Xeon 6 quirk to disable Extended Tags and limit Max Read
Request Size to 128B to avoid a performance issue (Ilpo Järvinen)
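
The quirk's enable_device hook boils down to clamping MRRS with the
existing pcie_get_readrq()/pcie_set_readrq() helpers (Extended Tags
are disabled separately via the host bridge's no_ext_tags flag):

    static int limit_mrrs_to_128(struct pci_host_bridge *b,
                                 struct pci_dev *pdev)
    {
            if (pcie_get_readrq(pdev) > 128)
                    pcie_set_readrq(pdev, 128);     /* clamp MRRS to 128B */

            return 0;
    }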

- Add sysfs 'serial_number' file to expose the Device Serial Number
(Matthew Wood)
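
A hedged sketch of such a show routine, built on the existing
pci_get_dsn() helper (attribute plumbing elided, naming illustrative):

    static ssize_t serial_number_show(struct device *dev,
                                      struct device_attribute *attr,
                                      char *buf)
    {
            struct pci_dev *pdev = to_pci_dev(dev);
            u64 dsn = pci_get_dsn(pdev);    /* 0 if no DSN capability */

            if (!dsn)
                    return -EIO;

            return sysfs_emit(buf, "%016llx\n", dsn);
    }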

- Fix pci_acpi_preserve_config() memory leak (Nirmoy Das)

Resource management:

- Align m68k pcibios_enable_device() with other arches (Ilpo
Järvinen)

- Remove sparc pcibios_enable_device() implementations that don't do
anything beyond what pci_enable_resources() does (Ilpo Järvinen)

- Remove mips pcibios_enable_resources() and use
pci_enable_resources() instead (Ilpo Järvinen)

- Clean up bridge window sizing and assignment (Ilpo Järvinen),
including:

- Leave non-claimed bridge windows disabled

- Enable bridges even when some windows weren't assigned, since not
every window is required by downstream devices

- Preserve bridge window type when releasing the resource, since
the type is needed for reassignment

- Consolidate selection of bridge windows into two new
interfaces, pbus_select_window() and
pbus_select_window_for_type(), so this is done consistently

- Compute bridge window start and end earlier to avoid logging
stale information

MSI:

- Add quirk to disable MSI on RDC PCI to PCIe bridges (Marcos Del Sol
Vives)

Error handling:

- Align AER with EEH by allowing drivers to request a Bus Reset on
Non-Fatal Errors (in addition to the reset on Fatal Errors that we
already do) (Lukas Wunner)
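
With this, a driver opts in by returning PCI_ERS_RESULT_NEED_RESET
from error_detected() even for Non-Fatal Errors; a sketch with
hypothetical foo_* names:

    static pci_ers_result_t foo_error_detected(struct pci_dev *pdev,
                                               pci_channel_state_t state)
    {
            if (state == pci_channel_io_perm_failure)
                    return PCI_ERS_RESULT_DISCONNECT;

            /* request a bus reset whether the error was fatal or not */
            return PCI_ERS_RESULT_NEED_RESET;
    }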

- If error recovery fails, emit FAILED_RECOVERY uevents for the
devices, not for the bridge leading to them.

This makes them correspond to BEGIN_RECOVERY uevents (Lukas Wunner)

- Align AER with EEH by calling err_handler.error_detected()
callbacks to notify drivers if error recovery fails (Lukas Wunner)

- Align AER with EEH by restoring device error_state to
pci_channel_io_normal before the err_handler.slot_reset() callback.

This is earlier than before the err_handler.resume() callback
(Lukas Wunner)

- Emit a BEGIN_RECOVERY uevent when a driver's
err_handler.error_detected() requests a reset, as well as when it
says recovery is complete or can be done without a reset (Niklas
Schnelle)

- Align s390 with AER and EEH by emitting uevents during error
recovery (Niklas Schnelle)

- Align EEH with AER and s390 by emitting BEGIN_RECOVERY,
SUCCESSFUL_RECOVERY, or FAILED_RECOVERY uevents depending on the
result of err_handler.error_detected() (Niklas Schnelle)

- Fix a NULL pointer dereference in aer_ratelimit() when ACPI GHES
error information identifies a device without an AER Capability
(Breno Leitao)

- Update error decoding and TLP Log printing for new errors in
current PCIe base spec (Lukas Wunner)

- Update error recovery documentation to match the current code
and use consistent nomenclature (Lukas Wunner)

ASPM:

- Enable all ClockPM and ASPM states for devicetree platforms, since
there's typically no firmware that enables ASPM

This is a risky change that may uncover hardware or configuration
defects at boot-time rather than when users enable ASPM via sysfs
later. Booting with "pcie_aspm=off" prevents this enabling
(Manivannan Sadhasivam)

- Remove the qcom code that enabled ASPM (Manivannan Sadhasivam)

Power management:

- If a device has already been disconnected, e.g., by a hotplug
removal, don't bother trying to resume it to D0 when detaching the
driver.

This avoids annoying "Unable to change power state from D3cold to
D0" messages (Mario Limonciello)

- Ensure devices are powered up before config reads for
'max_link_width', 'current_link_speed', 'current_link_width',
'secondary_bus_number', and 'subordinate_bus_number' sysfs files.

This prevents using invalid data (~0) in drivers or lspci and,
depending on how the PCIe controller reports errors, may avoid
error interrupts or crashes (Brian Norris)
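
The pattern, sketched for a link-status attribute using the
drivers/pci-internal pci_config_pm_runtime_get()/put() helpers (the
real files print decoded strings rather than the raw field):

    static ssize_t current_link_speed_show(struct device *dev,
                                           struct device_attribute *attr,
                                           char *buf)
    {
            struct pci_dev *pdev = to_pci_dev(dev);
            u16 linkstat;
            int err;

            pci_config_pm_runtime_get(pdev);        /* wake the device */
            err = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &linkstat);
            pci_config_pm_runtime_put(pdev);
            if (err)
                    return -EINVAL;

            return sysfs_emit(buf, "0x%x\n", linkstat & PCI_EXP_LNKSTA_CLS);
    }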

Virtualization:

- Add rescan/remove locking when enabling/disabling SR-IOV, which
avoids list corruption on s390, where disabling SR-IOV also
generates hotplug events (Niklas Schnelle)
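
Conceptually (a sketch; where exactly the merged change takes the
lock is not shown here):

    static int foo_set_numvfs(struct pci_dev *pdev, int num_vfs)
    {
            int ret = 0;

            pci_lock_rescan_remove();
            if (num_vfs)
                    ret = pci_enable_sriov(pdev, num_vfs);
            else
                    pci_disable_sriov(pdev);
            pci_unlock_rescan_remove();

            return ret;
    }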

Peer-to-peer DMA:

- Free struct p2p_pgmap, not a member within it, in the
pci_p2pdma_add_resource() error path (Sungho Kim)

Endpoint framework:

- Document sysfs interface for BAR assignment of vNTB endpoint
functions (Jerome Brunet)

- Fix array underflow in endpoint BAR test case (Dan Carpenter)

- Skip endpoint IRQ test if the IRQ is out of range to avoid false
errors (Christian Bruel)

- Fix endpoint test case for controllers with fixed-size BARs smaller
than requested by the test (Marek Vasut)

- Restore inbound translation when disabling doorbell so the endpoint
doorbell test case can be run more than once (Niklas Cassel)

- Avoid a NULL pointer dereference when releasing DMA channels in
endpoint DMA test case (Shin'ichiro Kawasaki)

- Convert tegra194 interrupt number to MSI vector to fix endpoint
Kselftest MSI_TEST test case (Niklas Cassel)

- Reset tegra194 BARs when running in endpoint mode so the BAR tests
don't overwrite the ATU settings in BAR4 (Niklas Cassel)

- Handle errors in tegra194 BPMP transactions so we don't mistakenly
skip future PERST# assertion (Vidya Sagar)

AMD MDB PCIe controller driver:

- Update DT binding example to move PERST# into a Root Port stanza
to make multiple Root Ports possible in the future (Sai Krishna
Musham)

- Add driver support for PERST# being described in a Root Port
stanza, falling back to the host bridge if not found there (Sai
Krishna Musham)

Freescale i.MX6 PCIe controller driver:

- Enable the 3.3V Vaux supply if available so devices can request
wakeup with either Beacon or WAKE# (Richard Zhu)

MediaTek PCIe Gen3 controller driver:

- Add optional sys clock ready time setting to avoid sys_clk_rdy
signal glitching in MT6991 and MT8196 (AngeloGioacchino Del Regno)

- Add DT binding and driver support for MT6991 and MT8196
(AngeloGioacchino Del Regno)

NVIDIA Tegra PCIe controller driver:

- When asserting PERST#, disable the controller instead of mistakenly
disabling the PLL twice (Nagarjuna Kristam)

- Convert struct tegra_msi mask_lock to raw spinlock to avoid a lock
nesting error (Marek Vasut)
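
A sketch of the conversion: spinlock_t becomes a sleeping lock under
PREEMPT_RT, which must not be taken from the raw irq-chip callbacks,
so the mask bookkeeping moves to a raw_spinlock_t (names illustrative):

    #include <linux/irq.h>
    #include <linux/spinlock.h>

    static DEFINE_RAW_SPINLOCK(msi_mask_lock);

    static void foo_msi_mask(struct irq_data *d)
    {
            unsigned long flags;

            raw_spin_lock_irqsave(&msi_mask_lock, flags);
            /* read-modify-write the MSI mask register here */
            raw_spin_unlock_irqrestore(&msi_mask_lock, flags);
    }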

Qualcomm PCIe controller driver:

- Select PCI Power Control Slot driver so slot voltage rails can be
turned on/off if described in Root Port devicetree node (Qiang Yu)

- Parse only PCI bridge child nodes in devicetree, skipping unrelated
nodes such as OPP (Operating Performance Points), which caused
probe failures (Krishna Chaitanya Chundru)

- Add 8.0 GT/s and 32.0 GT/s equalization settings (Ziyue Zhang)

- Consolidate Root Port 'phy' and 'reset' properties in struct
qcom_pcie_port, regardless of whether we got them from the Root
Port node or the host bridge node (Manivannan Sadhasivam)

- Fetch and map the ELBI register space in the DWC core rather than
in each driver individually (Krishna Chaitanya Chundru)

- Enable ECAM mechanism in DWC core by setting up iATU with 'CFG
Shift Feature' and use this in the qcom driver (Krishna Chaitanya
Chundru)

- Add SM8750 compatible to qcom,pcie-sm8550.yaml (Krishna Chaitanya
Chundru)

- Update qcom,pcie-x1e80100.yaml to allow fifth PCIe host on Qualcomm
Glymur, which is compatible with X1E80100 but doesn't have the
cnoc_sf_axi clock (Qiang Yu)

Renesas R-Car PCIe controller driver:

- Fix a typo that prevented correct PHY initialization (Marek Vasut)

- Add a missing 1ms delay after PWR reset assertion as required by
the V4H manual (Marek Vasut)

- Ensure reset has completed before DBI access to avoid an SError
(Marek Vasut)

- Fix inverted PHY initialization check, which sometimes led to
timeouts and failure to start the controller (Marek Vasut)

- Pass the correct IRQ domain to generic_handle_domain_irq() to fix a
regression when converting to msi_create_parent_irq_domain()
(Claudiu Beznea)

- Drop the spinlock protecting the PMSR register - it's no longer
required since pci_lock already serializes accesses (Marek Vasut)

- Convert struct rcar_msi mask_lock to raw spinlock to avoid a lock
nesting error (Marek Vasut)

SOPHGO PCIe controller driver:

- Check for existence of struct cdns_pcie.ops before using it to
allow Cadence drivers that don't need to supply ops (Chen Wang)

- Add DT binding and driver for the SOPHGO SG2042 PCIe controller
(Chen Wang)

STMicroelectronics STM32MP25 PCIe controller driver:

- Update pinctrl documentation to cover initial states and their use
in runtime suspend/resume (Christian Bruel)

- Add pinctrl_pm_select_init_state() for use by the stm32 driver, which
needs it during resume (Christian Bruel)

- Add devicetree bindings and drivers for the STMicroelectronics
STM32MP25 in host and endpoint modes (Christian Bruel)

Synopsys DesignWare PCIe controller driver:

- Add support for x16 in devicetree 'num-lanes' property (Konrad
Dybcio)

- Verify that if DT specifies a single IRQ for all eDMA channels, it
is named 'dma' (Niklas Cassel)

TI J721E PCIe driver:

- Add MODULE_DEVICE_TABLE() so driver can be autoloaded (Siddharth
Vadapalli)

- Power the controller off before configuring the glue layer so the
controller latches the correct values on power-on (Siddharth
Vadapalli)

TI Keystone PCIe controller driver:

- Use devm_request_irq() so 'ks-pcie-error-irq' is freed when the
driver exits with an error (Siddharth Vadapalli)
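
The devm conversion in a sketch (handler name illustrative); the IRQ
is now released automatically on probe failure or driver unbind:

    static irqreturn_t ks_pcie_err_irq_handler(int irq, void *priv)
    {
            /* decode and report the error source here */
            return IRQ_HANDLED;
    }

    static int ks_pcie_request_error_irq(struct device *dev, int irq,
                                         void *ks_pcie)
    {
            return devm_request_irq(dev, irq, ks_pcie_err_irq_handler,
                                    IRQF_SHARED, "ks-pcie-error-irq",
                                    ks_pcie);
    }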

- Add Peripheral Virtualization Unit (PVU), which restricts DMA from
PCIe devices to specific regions of host memory, to the ti,am65
binding (Jan Kiszka)

Xilinx NWL PCIe controller driver:

- Clear bootloader E_ECAM_CONTROL before merging in the new driver
value to avoid writing invalid values (Jani Nurminen)"

* tag 'pci-v6.18-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (141 commits)
PCI/AER: Avoid NULL pointer dereference in aer_ratelimit()
MAINTAINERS: Add entry for ST STM32MP25 PCIe drivers
PCI: stm32-ep: Add PCIe Endpoint support for STM32MP25
dt-bindings: PCI: Add STM32MP25 PCIe Endpoint bindings
PCI: stm32: Add PCIe host support for STM32MP25
PCI: xilinx-nwl: Fix ECAM programming
PCI: j721e: Fix incorrect error message in probe()
PCI: keystone: Use devm_request_irq() to free "ks-pcie-error-irq" on exit
dt-bindings: PCI: qcom,pcie-x1e80100: Set clocks minItems for the fifth Glymur PCIe Controller
PCI: dwc: Support 16-lane operation
PCI: Add lockdep assertion in pci_stop_and_remove_bus_device()
PCI/IOV: Add PCI rescan-remove locking when enabling/disabling SR-IOV
PCI: rcar-host: Convert struct rcar_msi mask_lock into raw spinlock
PCI: tegra194: Rename 'root_bus' to 'root_port_bus' in tegra_pcie_downstream_dev_to_D0()
PCI: tegra: Convert struct tegra_msi mask_lock into raw spinlock
PCI: rcar-gen4: Fix inverted break condition in PHY initialization
PCI: rcar-gen4: Assure reset occurs before DBI access
PCI: rcar-gen4: Add missing 1ms delay after PWR reset assertion
PCI: Set up bridge resources earlier
PCI: rcar-host: Drop PMSR spinlock
...

+3166 -1298
+9
Documentation/ABI/testing/sysfs-bus-pci
··· 612 612 613 613 # ls doe_features 614 614 0001:01 0001:02 doe_discovery 615 + 616 + What: /sys/bus/pci/devices/.../serial_number 617 + Date: December 2025 618 + Contact: Matthew Wood <thepacketgeek@gmail.com> 619 + Description: 620 + This is visible only for PCI devices that support the serial 621 + number extended capability. The file is read only and due to 622 + the possible sensitivity of accessible serial numbers, admin 623 + only.
+7 -2
Documentation/PCI/endpoint/pci-vntb-howto.rst
··· 90 90 attributes that can be configured by the user:: 91 91 92 92 # ls functions/pci_epf_vntb/func1/pci_epf_vntb.0/ 93 - db_count mw1 mw2 mw3 mw4 num_mws 94 - spad_count 93 + ctrl_bar db_count mw1_bar mw2_bar mw3_bar mw4_bar spad_count 94 + db_bar mw1 mw2 mw3 mw4 num_mws vbus_number 95 + vntb_vid vntb_pid 95 96 96 97 A sample configuration for NTB function is given below:: 97 98 ··· 100 99 # echo 128 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/spad_count 101 100 # echo 1 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/num_mws 102 101 # echo 0x100000 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/mw1 102 + 103 + By default, each construct is assigned a BAR, as needed and in order. 104 + Should a specific BAR setup be required by the platform, BAR may be assigned 105 + to each construct using the related ``XYZ_bar`` entry. 103 106 104 107 A sample configuration for virtual NTB driver for virtual PCI bus:: 105 108
+34 -9
Documentation/PCI/pci-error-recovery.rst
··· 13 13 Many PCI bus controllers are able to detect a variety of hardware 14 14 PCI errors on the bus, such as parity errors on the data and address 15 15 buses, as well as SERR and PERR errors. Some of the more advanced 16 - chipsets are able to deal with these errors; these include PCI-E chipsets, 16 + chipsets are able to deal with these errors; these include PCIe chipsets, 17 17 and the PCI-host bridges found on IBM Power4, Power5 and Power6-based 18 18 pSeries boxes. A typical action taken is to disconnect the affected device, 19 19 halting all I/O to it. The goal of a disconnection is to avoid system ··· 108 108 if it implements any, it must implement error_detected(). If a callback 109 109 is not implemented, the corresponding feature is considered unsupported. 110 110 For example, if mmio_enabled() and resume() aren't there, then it 111 - is assumed that the driver is not doing any direct recovery and requires 112 - a slot reset. Typically a driver will want to know about 111 + is assumed that the driver does not need these callbacks 112 + for recovery. Typically a driver will want to know about 113 113 a slot_reset(). 114 114 115 115 The actual steps taken by a platform to recover from a PCI error ··· 122 122 is isolated, in that all I/O is blocked: all reads return 0xffffffff, 123 123 all writes are ignored. 124 124 125 + Similarly, on platforms supporting Downstream Port Containment 126 + (PCIe r7.0 sec 6.2.11), the link to the sub-hierarchy with the 127 + faulting device is disabled. Any device in the sub-hierarchy 128 + becomes inaccessible. 125 129 126 130 STEP 1: Notification 127 131 -------------------- ··· 145 141 All drivers participating in this system must implement this call. 146 142 The driver must return one of the following result codes: 147 143 144 + - PCI_ERS_RESULT_RECOVERED 145 + Driver returns this if it thinks the device is usable despite 146 + the error and does not need further intervention. 148 147 - PCI_ERS_RESULT_CAN_RECOVER 149 148 Driver returns this if it thinks it might be able to recover 150 149 the HW by just banging IOs or if it wants to be given ··· 206 199 all drivers on a segment agree that they can try to recover and if no automatic 207 200 link reset was performed by the HW. If the platform can't just re-enable IOs 208 201 without a slot reset or a link reset, it will not call this callback, and 209 - instead will have gone directly to STEP 3 (Link Reset) or STEP 4 (Slot Reset) 202 + instead will have gone directly to STEP 3 (Link Reset) or STEP 4 (Slot Reset). 203 + 204 + .. note:: 205 + 206 + On platforms supporting Advanced Error Reporting (PCIe r7.0 sec 6.2), 207 + the faulting device may already be accessible in STEP 1 (Notification). 208 + Drivers should nevertheless defer accesses to STEP 2 (MMIO Enabled) 209 + to be compatible with EEH on powerpc and with s390 (where devices are 210 + inaccessible until STEP 2). 211 + 212 + On platforms supporting Downstream Port Containment, the link to the 213 + sub-hierarchy with the faulting device is re-enabled in STEP 3 (Link 214 + Reset). Hence devices in the sub-hierarchy are inaccessible until 215 + STEP 4 (Slot Reset). 216 + 217 + For errors such as Surprise Down (PCIe r7.0 sec 6.2.7), the device 218 + may not even be accessible in STEP 4 (Slot Reset). Drivers can detect 219 + accessibility by checking whether reads from the device return all 1's 220 + (PCI_POSSIBLE_ERROR()). 210 221 211 222 .. 
note:: 212 223 ··· 259 234 260 235 The next step taken depends on the results returned by the drivers. 261 236 If all drivers returned PCI_ERS_RESULT_RECOVERED, then the platform 262 - proceeds to either STEP3 (Link Reset) or to STEP 5 (Resume Operations). 237 + proceeds to either STEP 3 (Link Reset) or to STEP 5 (Resume Operations). 263 238 264 239 If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform 265 240 proceeds to STEP 4 (Slot Reset) 266 241 267 242 STEP 3: Link Reset 268 243 ------------------ 269 - The platform resets the link. This is a PCI-Express specific step 244 + The platform resets the link. This is a PCIe specific step 270 245 and is done whenever a fatal error has been detected that can be 271 246 "solved" by resetting the link. 272 247 ··· 288 263 power-on followed by power-on BIOS/system firmware initialization. 289 264 Soft reset is also known as hot-reset. 290 265 291 - Powerpc fundamental reset is supported by PCI Express cards only 266 + Powerpc fundamental reset is supported by PCIe cards only 292 267 and results in device's state machines, hardware logic, port states and 293 268 configuration registers to initialize to their default conditions. 294 269 295 270 For most PCI devices, a soft reset will be sufficient for recovery. 296 271 Optional fundamental reset is provided to support a limited number 297 - of PCI Express devices for which a soft reset is not sufficient 272 + of PCIe devices for which a soft reset is not sufficient 298 273 for recovery. 299 274 300 275 If the platform supports PCI hotplug, then the reset might be ··· 338 313 - PCI_ERS_RESULT_DISCONNECT 339 314 Same as above. 340 315 341 - Drivers for PCI Express cards that require a fundamental reset must 316 + Drivers for PCIe cards that require a fundamental reset must 342 317 set the needs_freset bit in the pci_dev structure in their probe function. 343 318 For example, the QLogic qla2xxx driver sets the needs_freset bit for certain 344 319 PCI card types::
+39 -44
Documentation/PCI/pcieaer-howto.rst
··· 70 70 ---------------- 71 71 72 72 When a PCIe AER error is captured, an error message will be output to 73 - console. If it's a correctable error, it is output as an info message. 73 + console. If it's a correctable error, it is output as a warning message. 74 74 Otherwise, it is printed as an error. So users could choose different 75 75 log level to filter out correctable error messages. 76 76 77 77 Below shows an example:: 78 78 79 - 0000:50:00.0: PCIe Bus Error: severity=Uncorrected (Fatal), type=Transaction Layer, id=0500(Requester ID) 79 + 0000:50:00.0: PCIe Bus Error: severity=Uncorrectable (Fatal), type=Transaction Layer, (Requester ID) 80 80 0000:50:00.0: device [8086:0329] error status/mask=00100000/00000000 81 - 0000:50:00.0: [20] Unsupported Request (First) 82 - 0000:50:00.0: TLP Header: 04000001 00200a03 05010000 00050100 81 + 0000:50:00.0: [20] UnsupReq (First) 82 + 0000:50:00.0: TLP Header: 0x04000001 0x00200a03 0x05010000 0x00050100 83 83 84 84 In the example, 'Requester ID' means the ID of the device that sent 85 85 the error message to the Root Port. Please refer to PCIe specs for other ··· 138 138 an error. The Root Port, upon receiving an error reporting message, 139 139 internally processes and logs the error message in its AER 140 140 Capability structure. Error information being logged includes storing 141 - the error reporting agent's requestor ID into the Error Source 141 + the error reporting agent's Requester ID into the Error Source 142 142 Identification Registers and setting the error bits of the Root Error 143 143 Status Register accordingly. If AER error reporting is enabled in the Root 144 144 Error Command Register, the Root Port generates an interrupt when an ··· 152 152 Provide callbacks 153 153 ----------------- 154 154 155 - callback reset_link to reset PCIe link 156 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 157 - 158 - This callback is used to reset the PCIe physical link when a 159 - fatal error happens. The Root Port AER service driver provides a 160 - default reset_link function, but different Upstream Ports might 161 - have different specifications to reset the PCIe link, so 162 - Upstream Port drivers may provide their own reset_link functions. 163 - 164 - Section 3.2.2.2 provides more detailed info on when to call 165 - reset_link. 166 - 167 155 PCI error-recovery callbacks 168 156 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 169 157 ··· 162 174 Data struct pci_driver has a pointer, err_handler, to point to 163 175 pci_error_handlers who consists of a couple of callback function 164 176 pointers. The AER driver follows the rules defined in 165 - pci-error-recovery.rst except PCIe-specific parts (e.g. 166 - reset_link). Please refer to pci-error-recovery.rst for detailed 177 + pci-error-recovery.rst except PCIe-specific parts (see 178 + below). Please refer to pci-error-recovery.rst for detailed 167 179 definitions of the callbacks. 168 180 169 181 The sections below specify when to call the error callback functions. ··· 177 189 require any recovery actions. The AER driver clears the device's 178 190 correctable error status register accordingly and logs these errors. 179 191 180 - Non-correctable (non-fatal and fatal) errors 181 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 192 + Uncorrectable (non-fatal and fatal) errors 193 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 182 194 183 - If an error message indicates a non-fatal error, performing link reset 195 + The AER driver performs a Secondary Bus Reset to recover from 196 + uncorrectable errors. 
The reset is applied at the port above 197 + the originating device: If the originating device is an Endpoint, 198 + only the Endpoint is reset. If on the other hand the originating 199 + device has subordinate devices, those are all affected by the 200 + reset as well. 201 + 202 + If the originating device is a Root Complex Integrated Endpoint, 203 + there's no port above where a Secondary Bus Reset could be applied. 204 + In this case, the AER driver instead applies a Function Level Reset. 205 + 206 + If an error message indicates a non-fatal error, performing a reset 184 207 at upstream is not required. The AER driver calls error_detected(dev, 185 208 pci_channel_io_normal) to all drivers associated within a hierarchy in 186 209 question. For example:: ··· 203 204 204 205 A driver may return PCI_ERS_RESULT_CAN_RECOVER, 205 206 PCI_ERS_RESULT_DISCONNECT, or PCI_ERS_RESULT_NEED_RESET, depending on 206 - whether it can recover or the AER driver calls mmio_enabled as next. 207 + whether it can recover without a reset, considers the device unrecoverable 208 + or needs a reset for recovery. If all affected drivers agree that they can 209 + recover without a reset, it is skipped. Should one driver request a reset, 210 + it overrides all other drivers. 207 211 208 212 If an error message indicates a fatal error, kernel will broadcast 209 213 error_detected(dev, pci_channel_io_frozen) to all drivers within 210 - a hierarchy in question. Then, performing link reset at upstream is 211 - necessary. As different kinds of devices might use different approaches 212 - to reset link, AER port service driver is required to provide the 213 - function to reset link via callback parameter of pcie_do_recovery() 214 - function. If reset_link is not NULL, recovery function will use it 215 - to reset the link. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER 216 - and reset_link returns PCI_ERS_RESULT_RECOVERED, the error handling goes 217 - to mmio_enabled. 214 + a hierarchy in question. Then, performing a reset at upstream is 215 + necessary. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER 216 + to indicate that recovery without a reset is possible, the error 217 + handling goes to mmio_enabled, but afterwards a reset is still 218 + performed. 218 219 219 - Frequent Asked Questions 220 - ------------------------ 220 + In other words, for non-fatal errors, drivers may opt in to a reset. 221 + But for fatal errors, they cannot opt out of a reset, based on the 222 + assumption that the link is unreliable. 223 + 224 + Frequently Asked Questions 225 + -------------------------- 221 226 222 227 Q: 223 228 What happens if a PCIe device driver does not provide an 224 229 error recovery handler (pci_driver->err_handler is equal to NULL)? 225 230 226 231 A: 227 - The devices attached with the driver won't be recovered. If the 228 - error is fatal, kernel will print out warning messages. Please refer 229 - to section 3 for more information. 230 - 231 - Q: 232 - What happens if an upstream port service driver does not provide 233 - callback reset_link? 234 - 235 - A: 236 - Fatal error recovery will fail if the errors are reported by the 237 - upstream ports who are attached by the service driver. 232 + The devices attached with the driver won't be recovered. 233 + The kernel will print out informational messages to identify 234 + unrecoverable devices. 238 235 239 236 240 237 Software error injection
+23 -1
Documentation/devicetree/bindings/pci/amd,versal2-mdb-host.yaml
··· 71 71 - "#address-cells" 72 72 - "#interrupt-cells" 73 73 74 + patternProperties: 75 + '^pcie@[0-2],0$': 76 + type: object 77 + $ref: /schemas/pci/pci-pci-bridge.yaml# 78 + 79 + properties: 80 + reg: 81 + maxItems: 1 82 + 83 + unevaluatedProperties: false 84 + 74 85 required: 75 86 - reg 76 87 - reg-names ··· 98 87 - | 99 88 #include <dt-bindings/interrupt-controller/arm-gic.h> 100 89 #include <dt-bindings/interrupt-controller/irq.h> 90 + #include <dt-bindings/gpio/gpio.h> 101 91 102 92 soc { 103 93 #address-cells = <2>; ··· 124 112 #size-cells = <2>; 125 113 #interrupt-cells = <1>; 126 114 device_type = "pci"; 115 + 116 + pcie@0,0 { 117 + device_type = "pci"; 118 + reg = <0x0 0x0 0x0 0x0 0x0>; 119 + reset-gpios = <&tca6416_u37 7 GPIO_ACTIVE_LOW>; 120 + #address-cells = <3>; 121 + #size-cells = <2>; 122 + ranges; 123 + }; 124 + 127 125 pcie_intc_0: interrupt-controller { 128 126 #address-cells = <0>; 129 127 #interrupt-cells = <1>; 130 128 interrupt-controller; 131 - }; 129 + }; 132 130 }; 133 131 };
+35
Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml
··· 52 52 - mediatek,mt8188-pcie 53 53 - mediatek,mt8195-pcie 54 54 - const: mediatek,mt8192-pcie 55 + - items: 56 + - enum: 57 + - mediatek,mt6991-pcie 58 + - const: mediatek,mt8196-pcie 55 59 - const: mediatek,mt8192-pcie 60 + - const: mediatek,mt8196-pcie 56 61 - const: airoha,en7581-pcie 57 62 58 63 reg: ··· 214 209 reset-names: 215 210 minItems: 1 216 211 maxItems: 2 212 + 213 + mediatek,pbus-csr: false 214 + 215 + - if: 216 + properties: 217 + compatible: 218 + contains: 219 + enum: 220 + - mediatek,mt8196-pcie 221 + then: 222 + properties: 223 + clocks: 224 + minItems: 6 225 + 226 + clock-names: 227 + items: 228 + - const: pl_250m 229 + - const: tl_26m 230 + - const: bus 231 + - const: low_power 232 + - const: peri_26m 233 + - const: peri_mem 234 + 235 + resets: 236 + minItems: 2 237 + 238 + reset-names: 239 + items: 240 + - const: phy 241 + - const: mac 217 242 218 243 mediatek,pbus-csr: false 219 244
+37 -37
Documentation/devicetree/bindings/pci/qcom,pcie-sa8255p.yaml
··· 77 77 #size-cells = <2>; 78 78 79 79 pci@1c00000 { 80 - compatible = "qcom,pcie-sa8255p"; 81 - reg = <0x4 0x00000000 0 0x10000000>; 82 - device_type = "pci"; 83 - #address-cells = <3>; 84 - #size-cells = <2>; 85 - ranges = <0x02000000 0x0 0x40100000 0x0 0x40100000 0x0 0x1ff00000>, 86 - <0x43000000 0x4 0x10100000 0x4 0x10100000 0x0 0x40000000>; 87 - bus-range = <0x00 0xff>; 88 - dma-coherent; 89 - linux,pci-domain = <0>; 90 - power-domains = <&scmi5_pd 0>; 91 - iommu-map = <0x0 &pcie_smmu 0x0000 0x1>, 92 - <0x100 &pcie_smmu 0x0001 0x1>; 93 - interrupt-parent = <&intc>; 94 - interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>, 95 - <GIC_SPI 308 IRQ_TYPE_LEVEL_HIGH>, 96 - <GIC_SPI 309 IRQ_TYPE_LEVEL_HIGH>, 97 - <GIC_SPI 312 IRQ_TYPE_LEVEL_HIGH>, 98 - <GIC_SPI 313 IRQ_TYPE_LEVEL_HIGH>, 99 - <GIC_SPI 314 IRQ_TYPE_LEVEL_HIGH>, 100 - <GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH>, 101 - <GIC_SPI 375 IRQ_TYPE_LEVEL_HIGH>; 102 - interrupt-names = "msi0", "msi1", "msi2", "msi3", 103 - "msi4", "msi5", "msi6", "msi7"; 80 + compatible = "qcom,pcie-sa8255p"; 81 + reg = <0x4 0x00000000 0 0x10000000>; 82 + device_type = "pci"; 83 + #address-cells = <3>; 84 + #size-cells = <2>; 85 + ranges = <0x02000000 0x0 0x40100000 0x0 0x40100000 0x0 0x1ff00000>, 86 + <0x43000000 0x4 0x10100000 0x4 0x10100000 0x0 0x40000000>; 87 + bus-range = <0x00 0xff>; 88 + dma-coherent; 89 + linux,pci-domain = <0>; 90 + power-domains = <&scmi5_pd 0>; 91 + iommu-map = <0x0 &pcie_smmu 0x0000 0x1>, 92 + <0x100 &pcie_smmu 0x0001 0x1>; 93 + interrupt-parent = <&intc>; 94 + interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>, 95 + <GIC_SPI 308 IRQ_TYPE_LEVEL_HIGH>, 96 + <GIC_SPI 309 IRQ_TYPE_LEVEL_HIGH>, 97 + <GIC_SPI 312 IRQ_TYPE_LEVEL_HIGH>, 98 + <GIC_SPI 313 IRQ_TYPE_LEVEL_HIGH>, 99 + <GIC_SPI 314 IRQ_TYPE_LEVEL_HIGH>, 100 + <GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH>, 101 + <GIC_SPI 375 IRQ_TYPE_LEVEL_HIGH>; 102 + interrupt-names = "msi0", "msi1", "msi2", "msi3", 103 + "msi4", "msi5", "msi6", "msi7"; 104 104 105 - #interrupt-cells = <1>; 106 - interrupt-map-mask = <0 0 0 0x7>; 107 - interrupt-map = <0 0 0 1 &intc GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>, 108 - <0 0 0 2 &intc GIC_SPI 149 IRQ_TYPE_LEVEL_HIGH>, 109 - <0 0 0 3 &intc GIC_SPI 150 IRQ_TYPE_LEVEL_HIGH>, 110 - <0 0 0 4 &intc GIC_SPI 151 IRQ_TYPE_LEVEL_HIGH>; 105 + #interrupt-cells = <1>; 106 + interrupt-map-mask = <0 0 0 0x7>; 107 + interrupt-map = <0 0 0 1 &intc GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>, 108 + <0 0 0 2 &intc GIC_SPI 149 IRQ_TYPE_LEVEL_HIGH>, 109 + <0 0 0 3 &intc GIC_SPI 150 IRQ_TYPE_LEVEL_HIGH>, 110 + <0 0 0 4 &intc GIC_SPI 151 IRQ_TYPE_LEVEL_HIGH>; 111 111 112 - pcie@0 { 113 - device_type = "pci"; 114 - reg = <0x0 0x0 0x0 0x0 0x0>; 115 - bus-range = <0x01 0xff>; 112 + pcie@0 { 113 + device_type = "pci"; 114 + reg = <0x0 0x0 0x0 0x0 0x0>; 115 + bus-range = <0x01 0xff>; 116 116 117 - #address-cells = <3>; 118 - #size-cells = <2>; 119 - ranges; 117 + #address-cells = <3>; 118 + #size-cells = <2>; 119 + ranges; 120 120 }; 121 121 }; 122 122 };
+1
Documentation/devicetree/bindings/pci/qcom,pcie-sm8550.yaml
··· 22 22 - enum: 23 23 - qcom,sar2130p-pcie 24 24 - qcom,pcie-sm8650 25 + - qcom,pcie-sm8750 25 26 - const: qcom,pcie-sm8550 26 27 27 28 reg:
+2 -1
Documentation/devicetree/bindings/pci/qcom,pcie-x1e80100.yaml
··· 32 32 - const: mhi # MHI registers 33 33 34 34 clocks: 35 - minItems: 7 35 + minItems: 6 36 36 maxItems: 7 37 37 38 38 clock-names: 39 + minItems: 6 39 40 items: 40 41 - const: aux # Auxiliary clock 41 42 - const: cfg # Configuration clock
+64
Documentation/devicetree/bindings/pci/sophgo,sg2042-pcie-host.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/sophgo,sg2042-pcie-host.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Sophgo SG2042 PCIe Host (Cadence PCIe Wrapper) 8 + 9 + description: 10 + Sophgo SG2042 PCIe host controller is based on the Cadence PCIe core. 11 + 12 + maintainers: 13 + - Chen Wang <unicorn_wang@outlook.com> 14 + 15 + properties: 16 + compatible: 17 + const: sophgo,sg2042-pcie-host 18 + 19 + reg: 20 + maxItems: 2 21 + 22 + reg-names: 23 + items: 24 + - const: reg 25 + - const: cfg 26 + 27 + vendor-id: 28 + const: 0x1f1c 29 + 30 + device-id: 31 + const: 0x2042 32 + 33 + msi-parent: true 34 + 35 + allOf: 36 + - $ref: cdns-pcie-host.yaml# 37 + 38 + required: 39 + - compatible 40 + - reg 41 + - reg-names 42 + 43 + unevaluatedProperties: false 44 + 45 + examples: 46 + - | 47 + #include <dt-bindings/interrupt-controller/irq.h> 48 + 49 + pcie@62000000 { 50 + compatible = "sophgo,sg2042-pcie-host"; 51 + device_type = "pci"; 52 + reg = <0x62000000 0x00800000>, 53 + <0x48000000 0x00001000>; 54 + reg-names = "reg", "cfg"; 55 + #address-cells = <3>; 56 + #size-cells = <2>; 57 + ranges = <0x81000000 0 0x00000000 0xde000000 0 0x00010000>, 58 + <0x82000000 0 0xd0400000 0xd0400000 0 0x0d000000>; 59 + bus-range = <0x00 0xff>; 60 + vendor-id = <0x1f1c>; 61 + device-id = <0x2042>; 62 + cdns,no-bar-match-nbits = <48>; 63 + msi-parent = <&msi>; 64 + };
+33
Documentation/devicetree/bindings/pci/st,stm32-pcie-common.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/st,stm32-pcie-common.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: STM32MP25 PCIe RC/EP controller 8 + 9 + maintainers: 10 + - Christian Bruel <christian.bruel@foss.st.com> 11 + 12 + description: 13 + STM32MP25 PCIe RC/EP common properties 14 + 15 + properties: 16 + clocks: 17 + maxItems: 1 18 + description: PCIe system clock 19 + 20 + resets: 21 + maxItems: 1 22 + 23 + power-domains: 24 + maxItems: 1 25 + 26 + access-controllers: 27 + maxItems: 1 28 + 29 + required: 30 + - clocks 31 + - resets 32 + 33 + additionalProperties: true
+73
Documentation/devicetree/bindings/pci/st,stm32-pcie-ep.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/st,stm32-pcie-ep.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: STMicroelectronics STM32MP25 PCIe Endpoint 8 + 9 + maintainers: 10 + - Christian Bruel <christian.bruel@foss.st.com> 11 + 12 + description: 13 + PCIe endpoint controller based on the Synopsys DesignWare PCIe core. 14 + 15 + allOf: 16 + - $ref: /schemas/pci/snps,dw-pcie-ep.yaml# 17 + - $ref: /schemas/pci/st,stm32-pcie-common.yaml# 18 + 19 + properties: 20 + compatible: 21 + const: st,stm32mp25-pcie-ep 22 + 23 + reg: 24 + items: 25 + - description: Data Bus Interface (DBI) registers. 26 + - description: Data Bus Interface (DBI) shadow registers. 27 + - description: Internal Address Translation Unit (iATU) registers. 28 + - description: PCIe configuration registers. 29 + 30 + reg-names: 31 + items: 32 + - const: dbi 33 + - const: dbi2 34 + - const: atu 35 + - const: addr_space 36 + 37 + reset-gpios: 38 + description: GPIO controlled connection to PERST# signal 39 + maxItems: 1 40 + 41 + phys: 42 + maxItems: 1 43 + 44 + required: 45 + - phys 46 + - reset-gpios 47 + 48 + unevaluatedProperties: false 49 + 50 + examples: 51 + - | 52 + #include <dt-bindings/clock/st,stm32mp25-rcc.h> 53 + #include <dt-bindings/gpio/gpio.h> 54 + #include <dt-bindings/phy/phy.h> 55 + #include <dt-bindings/reset/st,stm32mp25-rcc.h> 56 + 57 + pcie-ep@48400000 { 58 + compatible = "st,stm32mp25-pcie-ep"; 59 + reg = <0x48400000 0x400000>, 60 + <0x48500000 0x100000>, 61 + <0x48700000 0x80000>, 62 + <0x10000000 0x10000000>; 63 + reg-names = "dbi", "dbi2", "atu", "addr_space"; 64 + clocks = <&rcc CK_BUS_PCIE>; 65 + phys = <&combophy PHY_TYPE_PCIE>; 66 + resets = <&rcc PCIE_R>; 67 + pinctrl-names = "default", "init"; 68 + pinctrl-0 = <&pcie_pins_a>; 69 + pinctrl-1 = <&pcie_init_pins_a>; 70 + reset-gpios = <&gpioj 8 GPIO_ACTIVE_LOW>; 71 + access-controllers = <&rifsc 68>; 72 + power-domains = <&CLUSTER_PD>; 73 + };
+112
Documentation/devicetree/bindings/pci/st,stm32-pcie-host.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/st,stm32-pcie-host.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: STMicroelectronics STM32MP25 PCIe Root Complex 8 + 9 + maintainers: 10 + - Christian Bruel <christian.bruel@foss.st.com> 11 + 12 + description: 13 + PCIe root complex controller based on the Synopsys DesignWare PCIe core. 14 + 15 + allOf: 16 + - $ref: /schemas/pci/snps,dw-pcie.yaml# 17 + - $ref: /schemas/pci/st,stm32-pcie-common.yaml# 18 + 19 + properties: 20 + compatible: 21 + const: st,stm32mp25-pcie-rc 22 + 23 + reg: 24 + items: 25 + - description: Data Bus Interface (DBI) registers. 26 + - description: PCIe configuration registers. 27 + 28 + reg-names: 29 + items: 30 + - const: dbi 31 + - const: config 32 + 33 + msi-parent: 34 + maxItems: 1 35 + 36 + patternProperties: 37 + '^pcie@[0-2],0$': 38 + type: object 39 + $ref: /schemas/pci/pci-pci-bridge.yaml# 40 + 41 + properties: 42 + reg: 43 + maxItems: 1 44 + 45 + phys: 46 + maxItems: 1 47 + 48 + reset-gpios: 49 + description: GPIO controlled connection to PERST# signal 50 + maxItems: 1 51 + 52 + wake-gpios: 53 + description: GPIO used as WAKE# input signal 54 + maxItems: 1 55 + 56 + required: 57 + - phys 58 + - ranges 59 + 60 + unevaluatedProperties: false 61 + 62 + required: 63 + - interrupt-map 64 + - interrupt-map-mask 65 + - ranges 66 + - dma-ranges 67 + 68 + unevaluatedProperties: false 69 + 70 + examples: 71 + - | 72 + #include <dt-bindings/clock/st,stm32mp25-rcc.h> 73 + #include <dt-bindings/gpio/gpio.h> 74 + #include <dt-bindings/interrupt-controller/arm-gic.h> 75 + #include <dt-bindings/phy/phy.h> 76 + #include <dt-bindings/reset/st,stm32mp25-rcc.h> 77 + 78 + pcie@48400000 { 79 + compatible = "st,stm32mp25-pcie-rc"; 80 + device_type = "pci"; 81 + reg = <0x48400000 0x400000>, 82 + <0x10000000 0x10000>; 83 + reg-names = "dbi", "config"; 84 + #interrupt-cells = <1>; 85 + interrupt-map-mask = <0 0 0 7>; 86 + interrupt-map = <0 0 0 1 &intc 0 0 GIC_SPI 264 IRQ_TYPE_LEVEL_HIGH>, 87 + <0 0 0 2 &intc 0 0 GIC_SPI 265 IRQ_TYPE_LEVEL_HIGH>, 88 + <0 0 0 3 &intc 0 0 GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>, 89 + <0 0 0 4 &intc 0 0 GIC_SPI 267 IRQ_TYPE_LEVEL_HIGH>; 90 + #address-cells = <3>; 91 + #size-cells = <2>; 92 + ranges = <0x01000000 0x0 0x00000000 0x10010000 0x0 0x10000>, 93 + <0x02000000 0x0 0x10020000 0x10020000 0x0 0x7fe0000>, 94 + <0x42000000 0x0 0x18000000 0x18000000 0x0 0x8000000>; 95 + dma-ranges = <0x42000000 0x0 0x80000000 0x80000000 0x0 0x80000000>; 96 + clocks = <&rcc CK_BUS_PCIE>; 97 + resets = <&rcc PCIE_R>; 98 + msi-parent = <&v2m0>; 99 + access-controllers = <&rifsc 68>; 100 + power-domains = <&CLUSTER_PD>; 101 + 102 + pcie@0,0 { 103 + device_type = "pci"; 104 + reg = <0x0 0x0 0x0 0x0 0x0>; 105 + phys = <&combophy PHY_TYPE_PCIE>; 106 + wake-gpios = <&gpioh 5 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>; 107 + reset-gpios = <&gpioj 8 GPIO_ACTIVE_LOW>; 108 + #address-cells = <3>; 109 + #size-cells = <2>; 110 + ranges; 111 + }; 112 + };
+25 -3
Documentation/devicetree/bindings/pci/ti,am65-pci-host.yaml
··· 20 20 - ti,keystone-pcie 21 21 22 22 reg: 23 - maxItems: 4 23 + minItems: 4 24 + maxItems: 6 24 25 25 26 reg-names: 27 + minItems: 4 26 28 items: 27 29 - const: app 28 30 - const: dbics 29 31 - const: config 30 32 - const: atu 33 + - const: vmap_lp 34 + - const: vmap_hp 31 35 32 36 interrupts: 33 37 maxItems: 1 ··· 73 69 items: 74 70 pattern: '^pcie-phy[0-1]$' 75 71 72 + memory-region: 73 + maxItems: 1 74 + description: | 75 + phandle to a restricted DMA pool to be used for all devices behind 76 + this controller. The regions should be defined according to 77 + reserved-memory/shared-dma-pool.yaml. 78 + Note that enforcement via the PVU will only be available to 79 + ti,am654-pcie-rc devices. 80 + 76 81 required: 77 82 - compatible 78 83 - reg ··· 102 89 - power-domains 103 90 - msi-map 104 91 - num-viewport 92 + else: 93 + properties: 94 + reg: 95 + maxItems: 4 96 + 97 + reg-names: 98 + maxItems: 4 105 99 106 100 unevaluatedProperties: false 107 101 ··· 124 104 reg = <0x5500000 0x1000>, 125 105 <0x5501000 0x1000>, 126 106 <0x10000000 0x2000>, 127 - <0x5506000 0x1000>; 128 - reg-names = "app", "dbics", "config", "atu"; 107 + <0x5506000 0x1000>, 108 + <0x2900000 0x1000>, 109 + <0x2908000 0x1000>; 110 + reg-names = "app", "dbics", "config", "atu", "vmap_lp", "vmap_hp"; 129 111 power-domains = <&k3_pds 120 TI_SCI_PD_EXCLUSIVE>; 130 112 #address-cells = <3>; 131 113 #size-cells = <2>;
+55 -2
Documentation/driver-api/pin-control.rst
··· 1162 1162 Pin control requests from drivers 1163 1163 ================================= 1164 1164 1165 - When a device driver is about to probe the device core will automatically 1166 - attempt to issue ``pinctrl_get_select_default()`` on these devices. 1165 + When a device driver is about to probe, the device core attaches the 1166 + standard states if they are defined in the device tree by calling 1167 + ``pinctrl_bind_pins()`` on these devices. 1168 + Possible standard state names are: "default", "init", "sleep" and "idle". 1169 + 1170 + - if ``default`` is defined in the device tree, it is selected before 1171 + device probe. 1172 + 1173 + - if ``init`` and ``default`` are defined in the device tree, the "init" 1174 + state is selected before the driver probe and the "default" state is 1175 + selected after the driver probe. 1176 + 1177 + - the ``sleep`` and ``idle`` states are for power management and can only 1178 + be selected with the PM API bellow. 1179 + 1180 + PM interfaces 1181 + ================= 1182 + PM runtime suspend/resume might need to execute the same init sequence as 1183 + during probe. Since the predefined states are already attached to the 1184 + device, the driver can activate these states explicitly with the 1185 + following helper functions: 1186 + 1187 + - ``pinctrl_pm_select_default_state()`` 1188 + - ``pinctrl_pm_select_init_state()`` 1189 + - ``pinctrl_pm_select_sleep_state()`` 1190 + - ``pinctrl_pm_select_idle_state()`` 1191 + 1192 + For example, if resuming the device depend on certain pinmux states 1193 + 1194 + .. code-block:: c 1195 + 1196 + foo_suspend() 1197 + { 1198 + /* suspend device */ 1199 + ... 1200 + 1201 + pinctrl_pm_select_sleep_state(dev); 1202 + } 1203 + 1204 + foo_resume() 1205 + { 1206 + pinctrl_pm_select_init_state(dev); 1207 + 1208 + /* resuming device */ 1209 + ... 1210 + 1211 + pinctrl_pm_select_default_state(dev); 1212 + } 1213 + 1167 1214 This way driver writers do not need to add any of the boilerplate code 1168 1215 of the type found below. However when doing fine-grained state selection 1169 1216 and not using the "default" state, you may have to do some device driver ··· 1231 1184 operation and going to sleep, moving from the ``PINCTRL_STATE_DEFAULT`` to 1232 1185 ``PINCTRL_STATE_SLEEP`` at runtime, re-biasing or even re-muxing pins to save 1233 1186 current in sleep mode. 1187 + 1188 + Another case is when the pinctrl needs to switch to a certain mode during 1189 + probe and then revert to the default state at the end of probe. For example 1190 + a PINMUX may need to be configured as a GPIO during probe. In this case, use 1191 + ``PINCTRL_STATE_INIT`` to switch state before probe, then move to 1192 + ``PINCTRL_STATE_DEFAULT`` at the end of probe for normal operation. 1234 1193 1235 1194 A driver may request a certain control state to be activated, usually just the 1236 1195 default state like this:
+7
MAINTAINERS
··· 19723 19723 S: Maintained 19724 19724 F: drivers/pci/controller/dwc/pci-exynos.c 19725 19725 19726 + PCI DRIVER FOR STM32MP25 19727 + M: Christian Bruel <christian.bruel@foss.st.com> 19728 + L: linux-pci@vger.kernel.org 19729 + S: Maintained 19730 + F: Documentation/devicetree/bindings/pci/st,stm32-pcie-*.yaml 19731 + F: drivers/pci/controller/dwc/*stm32* 19732 + 19726 19733 PCI DRIVER FOR SYNOPSYS DESIGNWARE 19727 19734 M: Jingoo Han <jingoohan1@gmail.com> 19728 19735 M: Manivannan Sadhasivam <mani@kernel.org>
+11 -28
arch/m68k/kernel/pcibios.c
··· 44 44 */ 45 45 int pcibios_enable_device(struct pci_dev *dev, int mask) 46 46 { 47 - struct resource *r; 48 47 u16 cmd, newcmd; 49 - int idx; 48 + int ret; 50 49 51 - pci_read_config_word(dev, PCI_COMMAND, &cmd); 52 - newcmd = cmd; 53 - 54 - for (idx = 0; idx < 6; idx++) { 55 - /* Only set up the requested stuff */ 56 - if (!(mask & (1 << idx))) 57 - continue; 58 - 59 - r = dev->resource + idx; 60 - if (!r->start && r->end) { 61 - pr_err("PCI: Device %s not available because of resource collisions\n", 62 - pci_name(dev)); 63 - return -EINVAL; 64 - } 65 - if (r->flags & IORESOURCE_IO) 66 - newcmd |= PCI_COMMAND_IO; 67 - if (r->flags & IORESOURCE_MEM) 68 - newcmd |= PCI_COMMAND_MEMORY; 69 - } 50 + ret = pci_enable_resources(dev, mask); 51 + if (ret < 0) 52 + return ret; 70 53 71 54 /* 72 55 * Bridges (eg, cardbus bridges) need to be fully enabled 73 56 */ 74 - if ((dev->class >> 16) == PCI_BASE_CLASS_BRIDGE) 57 + if ((dev->class >> 16) == PCI_BASE_CLASS_BRIDGE) { 58 + pci_read_config_word(dev, PCI_COMMAND, &cmd); 75 59 newcmd |= PCI_COMMAND_IO | PCI_COMMAND_MEMORY; 76 - 77 - 78 - if (newcmd != cmd) { 79 - pr_info("PCI: enabling device %s (0x%04x -> 0x%04x)\n", 80 - pci_name(dev), cmd, newcmd); 81 - pci_write_config_word(dev, PCI_COMMAND, newcmd); 60 + if (newcmd != cmd) { 61 + pr_info("PCI: enabling bridge %s (0x%04x -> 0x%04x)\n", 62 + pci_name(dev), cmd, newcmd); 63 + pci_write_config_word(dev, PCI_COMMAND, newcmd); 64 + } 82 65 } 83 66 return 0; 84 67 }
+2 -36
arch/mips/pci/pci-legacy.c
··· 249 249 250 250 subsys_initcall(pcibios_init); 251 251 252 - static int pcibios_enable_resources(struct pci_dev *dev, int mask) 253 - { 254 - u16 cmd, old_cmd; 255 - int idx; 256 - struct resource *r; 257 - 258 - pci_read_config_word(dev, PCI_COMMAND, &cmd); 259 - old_cmd = cmd; 260 - pci_dev_for_each_resource(dev, r, idx) { 261 - /* Only set up the requested stuff */ 262 - if (!(mask & (1<<idx))) 263 - continue; 264 - 265 - if (!(r->flags & (IORESOURCE_IO | IORESOURCE_MEM))) 266 - continue; 267 - if ((idx == PCI_ROM_RESOURCE) && 268 - (!(r->flags & IORESOURCE_ROM_ENABLE))) 269 - continue; 270 - if (!r->start && r->end) { 271 - pci_err(dev, 272 - "can't enable device: resource collisions\n"); 273 - return -EINVAL; 274 - } 275 - if (r->flags & IORESOURCE_IO) 276 - cmd |= PCI_COMMAND_IO; 277 - if (r->flags & IORESOURCE_MEM) 278 - cmd |= PCI_COMMAND_MEMORY; 279 - } 280 - if (cmd != old_cmd) { 281 - pci_info(dev, "enabling device (%04x -> %04x)\n", old_cmd, cmd); 282 - pci_write_config_word(dev, PCI_COMMAND, cmd); 283 - } 284 - return 0; 285 - } 286 - 287 252 int pcibios_enable_device(struct pci_dev *dev, int mask) 288 253 { 289 - int err = pcibios_enable_resources(dev, mask); 254 + int err; 290 255 256 + err = pci_enable_resources(dev, mask); 291 257 if (err < 0) 292 258 return err; 293 259
+1 -1
arch/powerpc/kernel/eeh_driver.c
··· 334 334 rc = driver->err_handler->error_detected(pdev, pci_channel_io_frozen); 335 335 336 336 edev->in_error = true; 337 - pci_uevent_ers(pdev, PCI_ERS_RESULT_NONE); 337 + pci_uevent_ers(pdev, rc); 338 338 return rc; 339 339 } 340 340
+3
arch/s390/pci/pci_event.c
··· 88 88 pci_ers_result_t ers_res = PCI_ERS_RESULT_DISCONNECT; 89 89 90 90 ers_res = driver->err_handler->error_detected(pdev, pdev->error_state); 91 + pci_uevent_ers(pdev, ers_res); 91 92 if (ers_result_indicates_abort(ers_res)) 92 93 pr_info("%s: Automatic recovery failed after initial reporting\n", pci_name(pdev)); 93 94 else if (ers_res == PCI_ERS_RESULT_NEED_RESET) ··· 245 244 ers_res = PCI_ERS_RESULT_RECOVERED; 246 245 247 246 if (ers_res != PCI_ERS_RESULT_RECOVERED) { 247 + pci_uevent_ers(pdev, PCI_ERS_RESULT_DISCONNECT); 248 248 pr_err("%s: Automatic recovery failed; operator intervention is required\n", 249 249 pci_name(pdev)); 250 250 status_str = "failed (driver can't recover)"; ··· 255 253 pr_info("%s: The device is ready to resume operations\n", pci_name(pdev)); 256 254 if (driver->err_handler->resume) 257 255 driver->err_handler->resume(pdev); 256 + pci_uevent_ers(pdev, PCI_ERS_RESULT_RECOVERED); 258 257 out_unlock: 259 258 pci_dev_unlock(pdev); 260 259 zpci_report_status(zdev, "recovery", status_str);
-27
arch/sparc/kernel/leon_pci.c
··· 60 60 pci_assign_unassigned_resources(); 61 61 pci_bus_add_devices(root_bus); 62 62 } 63 - 64 - int pcibios_enable_device(struct pci_dev *dev, int mask) 65 - { 66 - struct resource *res; 67 - u16 cmd, oldcmd; 68 - int i; 69 - 70 - pci_read_config_word(dev, PCI_COMMAND, &cmd); 71 - oldcmd = cmd; 72 - 73 - pci_dev_for_each_resource(dev, res, i) { 74 - /* Only set up the requested stuff */ 75 - if (!(mask & (1<<i))) 76 - continue; 77 - 78 - if (res->flags & IORESOURCE_IO) 79 - cmd |= PCI_COMMAND_IO; 80 - if (res->flags & IORESOURCE_MEM) 81 - cmd |= PCI_COMMAND_MEMORY; 82 - } 83 - 84 - if (cmd != oldcmd) { 85 - pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd); 86 - pci_write_config_word(dev, PCI_COMMAND, cmd); 87 - } 88 - return 0; 89 - }
-27
arch/sparc/kernel/pci.c
··· 722 722 return bus; 723 723 } 724 724 725 - int pcibios_enable_device(struct pci_dev *dev, int mask) 726 - { 727 - struct resource *res; 728 - u16 cmd, oldcmd; 729 - int i; 730 - 731 - pci_read_config_word(dev, PCI_COMMAND, &cmd); 732 - oldcmd = cmd; 733 - 734 - pci_dev_for_each_resource(dev, res, i) { 735 - /* Only set up the requested stuff */ 736 - if (!(mask & (1<<i))) 737 - continue; 738 - 739 - if (res->flags & IORESOURCE_IO) 740 - cmd |= PCI_COMMAND_IO; 741 - if (res->flags & IORESOURCE_MEM) 742 - cmd |= PCI_COMMAND_MEMORY; 743 - } 744 - 745 - if (cmd != oldcmd) { 746 - pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd); 747 - pci_write_config_word(dev, PCI_COMMAND, cmd); 748 - } 749 - return 0; 750 - } 751 - 752 725 /* Platform support for /proc/bus/pci/X/Y mmap()s. */ 753 726 int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma) 754 727 {
-27
arch/sparc/kernel/pcic.c
··· 642 642 } 643 643 } 644 644 645 - int pcibios_enable_device(struct pci_dev *dev, int mask) 646 - { 647 - struct resource *res; 648 - u16 cmd, oldcmd; 649 - int i; 650 - 651 - pci_read_config_word(dev, PCI_COMMAND, &cmd); 652 - oldcmd = cmd; 653 - 654 - pci_dev_for_each_resource(dev, res, i) { 655 - /* Only set up the requested stuff */ 656 - if (!(mask & (1<<i))) 657 - continue; 658 - 659 - if (res->flags & IORESOURCE_IO) 660 - cmd |= PCI_COMMAND_IO; 661 - if (res->flags & IORESOURCE_MEM) 662 - cmd |= PCI_COMMAND_MEMORY; 663 - } 664 - 665 - if (cmd != oldcmd) { 666 - pci_info(dev, "enabling device (%04x -> %04x)\n", oldcmd, cmd); 667 - pci_write_config_word(dev, PCI_COMMAND, cmd); 668 - } 669 - return 0; 670 - } 671 - 672 645 /* Makes compiler happy */ 673 646 static volatile int pcic_timer_dummy; 674 647
+40
arch/x86/pci/fixup.c
··· 295 295 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MCH_PC1, pcie_rootport_aspm_quirk); 296 296 297 297 /* 298 + * PCIe devices underneath Xeon 6 PCIe Root Port bifurcated to x2 have lower 299 + * performance with Extended Tags and MRRS > 128B. Work around the performance 300 + * problems by disabling Extended Tags and limiting MRRS to 128B. 301 + * 302 + * https://cdrdv2.intel.com/v1/dl/getContent/837176 303 + */ 304 + static int limit_mrrs_to_128(struct pci_host_bridge *b, struct pci_dev *pdev) 305 + { 306 + int readrq = pcie_get_readrq(pdev); 307 + 308 + if (readrq > 128) 309 + pcie_set_readrq(pdev, 128); 310 + 311 + return 0; 312 + } 313 + 314 + static void pci_xeon_x2_bifurc_quirk(struct pci_dev *pdev) 315 + { 316 + struct pci_host_bridge *bridge = pci_find_host_bridge(pdev->bus); 317 + u32 linkcap; 318 + 319 + pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &linkcap); 320 + if (FIELD_GET(PCI_EXP_LNKCAP_MLW, linkcap) != 0x2) 321 + return; 322 + 323 + bridge->no_ext_tags = 1; 324 + bridge->enable_device = limit_mrrs_to_128; 325 + pci_info(pdev, "Disabling Extended Tags and limiting MRRS to 128B (performance reasons due to x2 PCIe link)\n"); 326 + } 327 + 328 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db0, pci_xeon_x2_bifurc_quirk); 329 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db1, pci_xeon_x2_bifurc_quirk); 330 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db2, pci_xeon_x2_bifurc_quirk); 331 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db3, pci_xeon_x2_bifurc_quirk); 332 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db6, pci_xeon_x2_bifurc_quirk); 333 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db7, pci_xeon_x2_bifurc_quirk); 334 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db8, pci_xeon_x2_bifurc_quirk); 335 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x0db9, pci_xeon_x2_bifurc_quirk); 336 + 337 + /* 298 338 * Fixup to mark boot BIOS video selected by BIOS before it changes 299 339 * 300 340 * From information provided by "Jon Smirl" <jonsmirl@gmail.com>
+7 -9
drivers/misc/pci_endpoint_test.c
··· 436 436 { 437 437 struct pci_dev *pdev = test->pdev; 438 438 u32 val; 439 - int ret; 439 + int irq; 440 + 441 + irq = pci_irq_vector(pdev, msi_num - 1); 442 + if (irq < 0) 443 + return irq; 440 444 441 445 pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, 442 446 msix ? PCITEST_IRQ_TYPE_MSIX : ··· 454 450 if (!val) 455 451 return -ETIMEDOUT; 456 452 457 - ret = pci_irq_vector(pdev, msi_num - 1); 458 - if (ret < 0) 459 - return ret; 460 - 461 - if (ret != test->last_irq) 453 + if (irq != test->last_irq) 462 454 return -EIO; 463 455 464 456 return 0; ··· 937 937 switch (cmd) { 938 938 case PCITEST_BAR: 939 939 bar = arg; 940 - if (bar > BAR_5) 940 + if (bar <= NO_BAR || bar > BAR_5) 941 941 goto ret; 942 942 if (is_am654_pci_dev(pdev) && bar == BAR_0) 943 943 goto ret; ··· 1020 1020 if (!test) 1021 1021 return -ENOMEM; 1022 1022 1023 - test->test_reg_bar = 0; 1024 - test->alignment = 0; 1025 1023 test->pdev = pdev; 1026 1024 test->irq_type = PCITEST_IRQ_TYPE_UNDEFINED; 1027 1025
-1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
··· 4215 4215 struct qlcnic_adapter *adapter = pci_get_drvdata(pdev); 4216 4216 int err = 0; 4217 4217 4218 - pdev->error_state = pci_channel_io_normal; 4219 4218 err = pci_enable_device(pdev); 4220 4219 if (err) 4221 4220 goto disconnect;
-2
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
··· 3766 3766 struct qlcnic_adapter *adapter = pci_get_drvdata(pdev); 3767 3767 struct net_device *netdev = adapter->netdev; 3768 3768 3769 - pdev->error_state = pci_channel_io_normal; 3770 - 3771 3769 err = pci_enable_device(pdev); 3772 3770 if (err) 3773 3771 return err;
-3
drivers/net/ethernet/sfc/efx_common.c
··· 1258 1258 1259 1259 /* For simplicity and reliability, we always require a slot reset and try to 1260 1260 * reset the hardware when a pci error affecting the device is detected. 1261 - * We leave both the link_reset and mmio_enabled callback unimplemented: 1262 - * with our request for slot reset the mmio_enabled callback will never be 1263 - * called, and the link_reset callback is not used by AER or EEH mechanisms. 1264 1261 */ 1265 1262 const struct pci_error_handlers efx_err_handlers = { 1266 1263 .error_detected = efx_io_error_detected,
-3
drivers/net/ethernet/sfc/falcon/efx.c
··· 3127 3127 3128 3128 /* For simplicity and reliability, we always require a slot reset and try to 3129 3129 * reset the hardware when a pci error affecting the device is detected. 3130 - * We leave both the link_reset and mmio_enabled callback unimplemented: 3131 - * with our request for slot reset the mmio_enabled callback will never be 3132 - * called, and the link_reset callback is not used by AER or EEH mechanisms. 3133 3130 */ 3134 3131 static const struct pci_error_handlers ef4_err_handlers = { 3135 3132 .error_detected = ef4_io_error_detected,
-3
drivers/net/ethernet/sfc/siena/efx_common.c
··· 1285 1285 1286 1286 /* For simplicity and reliability, we always require a slot reset and try to 1287 1287 * reset the hardware when a pci error affecting the device is detected. 1288 - * We leave both the link_reset and mmio_enabled callback unimplemented: 1289 - * with our request for slot reset the mmio_enabled callback will never be 1290 - * called, and the link_reset callback is not used by AER or EEH mechanisms. 1291 1288 */ 1292 1289 const struct pci_error_handlers efx_siena_err_handlers = { 1293 1290 .error_detected = efx_io_error_detected,
+12 -5
drivers/pci/bus.c
··· 204 204 if (!r) 205 205 continue; 206 206 207 + if (r->flags & (IORESOURCE_UNSET|IORESOURCE_DISABLED)) 208 + continue; 209 + 207 210 /* type_mask must match */ 208 211 if ((res->flags ^ r->flags) & type_mask) 209 212 continue; ··· 364 361 * before PCI client drivers. 365 362 */ 366 363 pdev = of_find_device_by_node(dn); 367 - if (pdev && of_pci_supply_present(dn)) { 368 - if (!device_link_add(&dev->dev, &pdev->dev, 369 - DL_FLAG_AUTOREMOVE_CONSUMER)) 370 - pci_err(dev, "failed to add device link to power control device %s\n", 371 - pdev->name); 364 + if (pdev) { 365 + if (of_pci_supply_present(dn)) { 366 + if (!device_link_add(&dev->dev, &pdev->dev, 367 + DL_FLAG_AUTOREMOVE_CONSUMER)) { 368 + pci_err(dev, "failed to add device link to power control device %s\n", 369 + pdev->name); 370 + } 371 + } 372 + put_device(&pdev->dev); 372 373 } 373 374 374 375 if (!dn || of_device_is_available(dn))
+10
drivers/pci/controller/cadence/Kconfig
··· 42 42 endpoint mode. This PCIe controller may be embedded into many 43 43 different vendors SoCs. 44 44 45 + config PCIE_SG2042_HOST 46 + tristate "Sophgo SG2042 PCIe controller (host mode)" 47 + depends on OF && (ARCH_SOPHGO || COMPILE_TEST) 48 + select PCIE_CADENCE_HOST 49 + help 50 + Say Y here if you want to support the Sophgo SG2042 PCIe platform 51 + controller in host mode. Sophgo SG2042 PCIe controller uses Cadence 52 + PCIe core. 53 + 45 54 config PCI_J721E 46 55 tristate 47 56 select PCIE_CADENCE_HOST if PCI_J721E_HOST != n ··· 76 67 Say Y here if you want to support the TI J721E PCIe platform 77 68 controller in endpoint mode. TI J721E PCIe controller uses Cadence PCIe 78 69 core. 70 + 79 71 endmenu
+1
drivers/pci/controller/cadence/Makefile
··· 4 4 obj-$(CONFIG_PCIE_CADENCE_EP) += pcie-cadence-ep.o 5 5 obj-$(CONFIG_PCIE_CADENCE_PLAT) += pcie-cadence-plat.o 6 6 obj-$(CONFIG_PCI_J721E) += pci-j721e.o 7 + obj-$(CONFIG_PCIE_SG2042_HOST) += pcie-sg2042.o
+27 -1
drivers/pci/controller/cadence/pci-j721e.c
··· 284 284 if (!ret) 285 285 offset = args.args[0]; 286 286 287 + /* 288 + * The PCIe Controller's registers have different "reset-values" 289 + * depending on the "strap" settings programmed into the PCIEn_CTRL 290 + * register within the CTRL_MMR memory-mapped register space. 291 + * The registers latch onto a "reset-value" based on the "strap" 292 + * settings sampled after the PCIe Controller is powered on. 293 + * To ensure that the "reset-values" are sampled accurately, power 294 + * off the PCIe Controller before programming the "strap" settings 295 + * and power it on after that. The runtime PM APIs namely 296 + * pm_runtime_put_sync() and pm_runtime_get_sync() will decrement and 297 + * increment the usage counter respectively, causing GENPD to power off 298 + * and power on the PCIe Controller. 299 + */ 300 + ret = pm_runtime_put_sync(dev); 301 + if (ret < 0) { 302 + dev_err(dev, "Failed to power off PCIe Controller\n"); 303 + return ret; 304 + } 305 + 287 306 ret = j721e_pcie_set_mode(pcie, syscon, offset); 288 307 if (ret < 0) { 289 308 dev_err(dev, "Failed to set pci mode\n"); ··· 318 299 ret = j721e_pcie_set_lane_count(pcie, syscon, offset); 319 300 if (ret < 0) { 320 301 dev_err(dev, "Failed to set num-lanes\n"); 302 + return ret; 303 + } 304 + 305 + ret = pm_runtime_get_sync(dev); 306 + if (ret < 0) { 307 + dev_err(dev, "Failed to power on PCIe Controller\n"); 321 308 return ret; 322 309 } 323 310 ··· 465 440 }, 466 441 {}, 467 442 }; 443 + MODULE_DEVICE_TABLE(of, of_j721e_pcie_match); 468 444 469 445 static int j721e_pcie_probe(struct platform_device *pdev) 470 446 { ··· 575 549 576 550 ret = j721e_pcie_ctrl_init(pcie); 577 551 if (ret < 0) { 578 - dev_err_probe(dev, ret, "pm_runtime_get_sync failed\n"); 552 + dev_err_probe(dev, ret, "j721e_pcie_ctrl_init failed\n"); 579 553 goto err_get_sync; 580 554 } 581 555
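The long comment above gives the why; the shape of the fix is a runtime-PM power cycle bracketing the strap programming. Condensed, assuming the controller is runtime-active with a usage count of one on entry (the pm_runtime_put_noidle() on the error path is the usual idiom for a failed pm_runtime_get_sync(), not something this hunk needs to show):

    ret = pm_runtime_put_sync(dev);     /* count 1 -> 0: GENPD powers off */
    if (ret < 0)
        return ret;

    /* program mode, link speed and lane count via the syscon */

    ret = pm_runtime_get_sync(dev);     /* count 0 -> 1: powers back on,
                                         * straps are sampled afresh */
    if (ret < 0) {
        pm_runtime_put_noidle(dev);
        return ret;
    }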
+22 -18
drivers/pci/controller/cadence/pcie-cadence-ep.c
··· 21 21 22 22 static u8 cdns_pcie_get_fn_from_vfn(struct cdns_pcie *pcie, u8 fn, u8 vfn) 23 23 { 24 - u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET; 25 24 u32 first_vf_offset, stride; 25 + u16 cap; 26 26 27 27 if (vfn == 0) 28 28 return fn; 29 29 30 + cap = cdns_pcie_find_ext_capability(pcie, PCI_EXT_CAP_ID_SRIOV); 30 31 first_vf_offset = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_OFFSET); 31 32 stride = cdns_pcie_ep_fn_readw(pcie, fn, cap + PCI_SRIOV_VF_STRIDE); 32 33 fn = fn + first_vf_offset + ((vfn - 1) * stride); ··· 39 38 struct pci_epf_header *hdr) 40 39 { 41 40 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 42 - u32 cap = CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET; 43 41 struct cdns_pcie *pcie = &ep->pcie; 44 42 u32 reg; 43 + u16 cap; 45 44 45 + cap = cdns_pcie_find_ext_capability(pcie, PCI_EXT_CAP_ID_SRIOV); 46 46 if (vfn > 1) { 47 47 dev_err(&epc->dev, "Only Virtual Function #1 has deviceID\n"); 48 48 return -EINVAL; ··· 229 227 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 230 228 struct cdns_pcie *pcie = &ep->pcie; 231 229 u8 mmc = order_base_2(nr_irqs); 232 - u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET; 233 230 u16 flags; 231 + u8 cap; 234 232 233 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI); 235 234 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 236 235 237 236 /* ··· 252 249 { 253 250 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 254 251 struct cdns_pcie *pcie = &ep->pcie; 255 - u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET; 256 252 u16 flags, mme; 253 + u8 cap; 257 254 255 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX); 258 256 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 259 257 260 258 /* Validate that the MSI feature is actually enabled. */ ··· 276 272 { 277 273 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 278 274 struct cdns_pcie *pcie = &ep->pcie; 279 - u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET; 280 275 u32 val, reg; 276 + u8 cap; 281 277 278 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX); 282 279 func_no = cdns_pcie_get_fn_from_vfn(pcie, func_no, vfunc_no); 283 280 284 281 reg = cap + PCI_MSIX_FLAGS; ··· 297 292 { 298 293 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 299 294 struct cdns_pcie *pcie = &ep->pcie; 300 - u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET; 301 295 u32 val, reg; 296 + u8 cap; 302 297 298 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX); 303 299 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 304 300 305 301 reg = cap + PCI_MSIX_FLAGS; ··· 386 380 u8 interrupt_num) 387 381 { 388 382 struct cdns_pcie *pcie = &ep->pcie; 389 - u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET; 390 383 u16 flags, mme, data, data_mask; 391 - u8 msi_count; 392 384 u64 pci_addr, pci_addr_mask = 0xff; 385 + u8 msi_count, cap; 393 386 387 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI); 394 388 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 395 389 396 390 /* Check whether the MSI feature has been enabled by the PCI host. */ ··· 438 432 u32 *msi_addr_offset) 439 433 { 440 434 struct cdns_pcie_ep *ep = epc_get_drvdata(epc); 441 - u32 cap = CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET; 442 435 struct cdns_pcie *pcie = &ep->pcie; 443 436 u64 pci_addr, pci_addr_mask = 0xff; 444 437 u16 flags, mme, data, data_mask; 445 - u8 msi_count; 438 + u8 msi_count, cap; 446 439 int ret; 447 440 int i; 448 441 442 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI); 449 443 fn = cdns_pcie_get_fn_from_vfn(pcie, fn, vfn); 450 444 451 445 /* Check whether the MSI feature has been enabled by the PCI host. 
*/ ··· 488 482 static int cdns_pcie_ep_send_msix_irq(struct cdns_pcie_ep *ep, u8 fn, u8 vfn, 489 483 u16 interrupt_num) 490 484 { 491 - u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET; 492 485 u32 tbl_offset, msg_data, reg; 493 486 struct cdns_pcie *pcie = &ep->pcie; 494 487 struct pci_epf_msix_tbl *msix_tbl; 495 488 struct cdns_pcie_epf *epf; 496 489 u64 pci_addr_mask = 0xff; 497 490 u64 msg_addr; 491 + u8 bir, cap; 498 492 u16 flags; 499 - u8 bir; 500 493 494 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX); 501 495 epf = &ep->epf[fn]; 502 496 if (vfn > 0) 503 497 epf = &epf->epf[vfn - 1]; ··· 571 565 int max_epfs = sizeof(epc->function_num_map) * 8; 572 566 int ret, epf, last_fn; 573 567 u32 reg, value; 568 + u8 cap; 574 569 570 + cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_EXP); 575 571 /* 576 572 * BIT(0) is hardwired to 1, hence function 0 is always enabled 577 573 * and can't be disabled anyway. ··· 597 589 continue; 598 590 599 591 value = cdns_pcie_ep_fn_readl(pcie, epf, 600 - CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET + 601 - PCI_EXP_DEVCAP); 592 + cap + PCI_EXP_DEVCAP); 602 593 value &= ~PCI_EXP_DEVCAP_FLR; 603 594 cdns_pcie_ep_fn_writel(pcie, epf, 604 - CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET + 605 - PCI_EXP_DEVCAP, value); 595 + cap + PCI_EXP_DEVCAP, value); 606 596 } 607 597 } 608 598 ··· 614 608 } 615 609 616 610 static const struct pci_epc_features cdns_pcie_epc_vf_features = { 617 - .linkup_notifier = false, 618 611 .msi_capable = true, 619 612 .msix_capable = true, 620 613 .align = 65536, 621 614 }; 622 615 623 616 static const struct pci_epc_features cdns_pcie_epc_features = { 624 - .linkup_notifier = false, 625 617 .msi_capable = true, 626 618 .msix_capable = true, 627 619 .align = 256,
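The pattern running through these hunks: the fixed CDNS_PCIE_EP_FUNC_*_CAP_OFFSET constants baked in one particular capability-list layout, so the MSI, MSI-X, SR-IOV and PCIe capability offsets are now discovered at run time and the driver survives SoC integrations that lay the list out differently. The lookup is repeated on every call; a variant that caches the offsets once at init would look roughly like this (msi_cap/msix_cap are hypothetical fields, not in the driver):

    /* e.g. once in cdns_pcie_ep_setup(): */
    ep->msi_cap  = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSI);
    ep->msix_cap = cdns_pcie_find_capability(pcie, PCI_CAP_ID_MSIX);
    if (!ep->msi_cap && !ep->msix_cap)
        dev_warn(dev, "neither MSI nor MSI-X capability found\n");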
+1 -1
drivers/pci/controller/cadence/pcie-cadence-host.c
··· 531 531 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_PCI_ADDR1(0), addr1); 532 532 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(0), desc1); 533 533 534 - if (pcie->ops->cpu_addr_fixup) 534 + if (pcie->ops && pcie->ops->cpu_addr_fixup) 535 535 cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); 536 536 537 537 addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(12) |
+16 -2
drivers/pci/controller/cadence/pcie-cadence.c
··· 8 8 #include <linux/of.h> 9 9 10 10 #include "pcie-cadence.h" 11 + #include "../../pci.h" 12 + 13 + u8 cdns_pcie_find_capability(struct cdns_pcie *pcie, u8 cap) 14 + { 15 + return PCI_FIND_NEXT_CAP(cdns_pcie_read_cfg, PCI_CAPABILITY_LIST, 16 + cap, pcie); 17 + } 18 + EXPORT_SYMBOL_GPL(cdns_pcie_find_capability); 19 + 20 + u16 cdns_pcie_find_ext_capability(struct cdns_pcie *pcie, u8 cap) 21 + { 22 + return PCI_FIND_NEXT_EXT_CAP(cdns_pcie_read_cfg, 0, cap, pcie); 23 + } 24 + EXPORT_SYMBOL_GPL(cdns_pcie_find_ext_capability); 11 25 12 26 void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie) 13 27 { ··· 106 92 cdns_pcie_writel(pcie, CDNS_PCIE_AT_OB_REGION_DESC1(r), desc1); 107 93 108 94 /* Set the CPU address */ 109 - if (pcie->ops->cpu_addr_fixup) 95 + if (pcie->ops && pcie->ops->cpu_addr_fixup) 110 96 cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); 111 97 112 98 addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(nbits) | ··· 137 123 } 138 124 139 125 /* Set the CPU address */ 140 - if (pcie->ops->cpu_addr_fixup) 126 + if (pcie->ops && pcie->ops->cpu_addr_fixup) 141 127 cpu_addr = pcie->ops->cpu_addr_fixup(pcie, cpu_addr); 142 128 143 129 addr0 = CDNS_PCIE_AT_OB_REGION_CPU_ADDR0_NBITS(17) |
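cdns_pcie_find_capability() and cdns_pcie_find_ext_capability() are thin wrappers around the new PCI_FIND_NEXT_CAP()/PCI_FIND_NEXT_EXT_CAP() macros, which take a config-space accessor so the same walk serves controllers that are bridges to PCI devices without being PCI devices themselves. A self-contained sketch of what such a walk does, using illustrative names rather than the macros' internals (the logic mirrors the open-coded search this series deletes from the DWC driver):

    typedef int (*read_cfg_word_t)(void *priv, int where, u16 *val);

    static u8 find_cap(read_cfg_word_t read_word, void *priv, u8 cap)
    {
        int ttl = 48;                   /* bound the walk against bad lists */
        u16 reg;
        u8 pos;

        read_word(priv, PCI_CAPABILITY_LIST, &reg);
        pos = reg & 0xff;               /* pointer to the first capability */

        while (pos && ttl--) {
            read_word(priv, pos, &reg); /* low byte: cap ID, high: next ptr */
            if ((reg & 0xff) == cap)
                return pos;
            pos = (reg >> 8) & 0xff;
        }
        return 0;                       /* not found */
    }

The cdns_pcie_read_cfg_{byte,word,dword}() helpers added to pcie-cadence.h below exist solely to give the macros accessors with this uniform (context, offset, out-value) shape.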
+37 -8
drivers/pci/controller/cadence/pcie-cadence.h
··· 125 125 */ 126 126 #define CDNS_PCIE_EP_FUNC_BASE(fn) (((fn) << 12) & GENMASK(19, 12)) 127 127 128 - #define CDNS_PCIE_EP_FUNC_MSI_CAP_OFFSET 0x90 129 - #define CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET 0xb0 130 - #define CDNS_PCIE_EP_FUNC_DEV_CAP_OFFSET 0xc0 131 - #define CDNS_PCIE_EP_FUNC_SRIOV_CAP_OFFSET 0x200 132 - 133 128 /* 134 129 * Endpoint PF Registers 135 130 */ ··· 362 367 return readl(pcie->reg_base + reg); 363 368 } 364 369 370 + static inline u16 cdns_pcie_readw(struct cdns_pcie *pcie, u32 reg) 371 + { 372 + return readw(pcie->reg_base + reg); 373 + } 374 + 375 + static inline u8 cdns_pcie_readb(struct cdns_pcie *pcie, u32 reg) 376 + { 377 + return readb(pcie->reg_base + reg); 378 + } 379 + 380 + static inline int cdns_pcie_read_cfg_byte(struct cdns_pcie *pcie, int where, 381 + u8 *val) 382 + { 383 + *val = cdns_pcie_readb(pcie, where); 384 + return PCIBIOS_SUCCESSFUL; 385 + } 386 + 387 + static inline int cdns_pcie_read_cfg_word(struct cdns_pcie *pcie, int where, 388 + u16 *val) 389 + { 390 + *val = cdns_pcie_readw(pcie, where); 391 + return PCIBIOS_SUCCESSFUL; 392 + } 393 + 394 + static inline int cdns_pcie_read_cfg_dword(struct cdns_pcie *pcie, int where, 395 + u32 *val) 396 + { 397 + *val = cdns_pcie_readl(pcie, where); 398 + return PCIBIOS_SUCCESSFUL; 399 + } 400 + 365 401 static inline u32 cdns_pcie_read_sz(void __iomem *addr, int size) 366 402 { 367 403 void __iomem *aligned_addr = PTR_ALIGN_DOWN(addr, 0x4); ··· 494 468 495 469 static inline int cdns_pcie_start_link(struct cdns_pcie *pcie) 496 470 { 497 - if (pcie->ops->start_link) 471 + if (pcie->ops && pcie->ops->start_link) 498 472 return pcie->ops->start_link(pcie); 499 473 500 474 return 0; ··· 502 476 503 477 static inline void cdns_pcie_stop_link(struct cdns_pcie *pcie) 504 478 { 505 - if (pcie->ops->stop_link) 479 + if (pcie->ops && pcie->ops->stop_link) 506 480 pcie->ops->stop_link(pcie); 507 481 } 508 482 509 483 static inline bool cdns_pcie_link_up(struct cdns_pcie *pcie) 510 484 { 511 - if (pcie->ops->link_up) 485 + if (pcie->ops && pcie->ops->link_up) 512 486 return pcie->ops->link_up(pcie); 513 487 514 488 return true; ··· 561 535 { 562 536 } 563 537 #endif 538 + 539 + u8 cdns_pcie_find_capability(struct cdns_pcie *pcie, u8 cap); 540 + u16 cdns_pcie_find_ext_capability(struct cdns_pcie *pcie, u8 cap); 564 541 565 542 void cdns_pcie_detect_quiet_min_delay_set(struct cdns_pcie *pcie); 566 543
+134
drivers/pci/controller/cadence/pcie-sg2042.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * pcie-sg2042 - PCIe controller driver for Sophgo SG2042 SoC 4 + * 5 + * Copyright (C) 2025 Sophgo Technology Inc. 6 + * Copyright (C) 2025 Chen Wang <unicorn_wang@outlook.com> 7 + */ 8 + 9 + #include <linux/mod_devicetable.h> 10 + #include <linux/pci.h> 11 + #include <linux/platform_device.h> 12 + #include <linux/pm_runtime.h> 13 + 14 + #include "pcie-cadence.h" 15 + 16 + /* 17 + * SG2042 only supports 4-byte aligned access, so for the rootbus (i.e. to 18 + * read/write the Root Port itself, read32/write32 is required. For 19 + * non-rootbus (i.e. to read/write the PCIe peripheral registers, supports 20 + * 1/2/4 byte aligned access, so directly using read/write should be fine. 21 + */ 22 + 23 + static struct pci_ops sg2042_pcie_root_ops = { 24 + .map_bus = cdns_pci_map_bus, 25 + .read = pci_generic_config_read32, 26 + .write = pci_generic_config_write32, 27 + }; 28 + 29 + static struct pci_ops sg2042_pcie_child_ops = { 30 + .map_bus = cdns_pci_map_bus, 31 + .read = pci_generic_config_read, 32 + .write = pci_generic_config_write, 33 + }; 34 + 35 + static int sg2042_pcie_probe(struct platform_device *pdev) 36 + { 37 + struct device *dev = &pdev->dev; 38 + struct pci_host_bridge *bridge; 39 + struct cdns_pcie *pcie; 40 + struct cdns_pcie_rc *rc; 41 + int ret; 42 + 43 + bridge = devm_pci_alloc_host_bridge(dev, sizeof(*rc)); 44 + if (!bridge) 45 + return dev_err_probe(dev, -ENOMEM, "Failed to alloc host bridge!\n"); 46 + 47 + bridge->ops = &sg2042_pcie_root_ops; 48 + bridge->child_ops = &sg2042_pcie_child_ops; 49 + 50 + rc = pci_host_bridge_priv(bridge); 51 + pcie = &rc->pcie; 52 + pcie->dev = dev; 53 + 54 + platform_set_drvdata(pdev, pcie); 55 + 56 + pm_runtime_set_active(dev); 57 + pm_runtime_no_callbacks(dev); 58 + devm_pm_runtime_enable(dev); 59 + 60 + ret = cdns_pcie_init_phy(dev, pcie); 61 + if (ret) 62 + return dev_err_probe(dev, ret, "Failed to init phy!\n"); 63 + 64 + ret = cdns_pcie_host_setup(rc); 65 + if (ret) { 66 + dev_err_probe(dev, ret, "Failed to setup host!\n"); 67 + cdns_pcie_disable_phy(pcie); 68 + return ret; 69 + } 70 + 71 + return 0; 72 + } 73 + 74 + static void sg2042_pcie_remove(struct platform_device *pdev) 75 + { 76 + struct cdns_pcie *pcie = platform_get_drvdata(pdev); 77 + struct device *dev = &pdev->dev; 78 + struct cdns_pcie_rc *rc; 79 + 80 + rc = container_of(pcie, struct cdns_pcie_rc, pcie); 81 + cdns_pcie_host_disable(rc); 82 + 83 + cdns_pcie_disable_phy(pcie); 84 + 85 + pm_runtime_disable(dev); 86 + } 87 + 88 + static int sg2042_pcie_suspend_noirq(struct device *dev) 89 + { 90 + struct cdns_pcie *pcie = dev_get_drvdata(dev); 91 + 92 + cdns_pcie_disable_phy(pcie); 93 + 94 + return 0; 95 + } 96 + 97 + static int sg2042_pcie_resume_noirq(struct device *dev) 98 + { 99 + struct cdns_pcie *pcie = dev_get_drvdata(dev); 100 + int ret; 101 + 102 + ret = cdns_pcie_enable_phy(pcie); 103 + if (ret) { 104 + dev_err(dev, "failed to enable PHY\n"); 105 + return ret; 106 + } 107 + 108 + return 0; 109 + } 110 + 111 + static DEFINE_NOIRQ_DEV_PM_OPS(sg2042_pcie_pm_ops, 112 + sg2042_pcie_suspend_noirq, 113 + sg2042_pcie_resume_noirq); 114 + 115 + static const struct of_device_id sg2042_pcie_of_match[] = { 116 + { .compatible = "sophgo,sg2042-pcie-host" }, 117 + {}, 118 + }; 119 + MODULE_DEVICE_TABLE(of, sg2042_pcie_of_match); 120 + 121 + static struct platform_driver sg2042_pcie_driver = { 122 + .driver = { 123 + .name = "sg2042-pcie", 124 + .of_match_table = sg2042_pcie_of_match, 125 + .pm = 
pm_sleep_ptr(&sg2042_pcie_pm_ops), 126 + }, 127 + .probe = sg2042_pcie_probe, 128 + .remove = sg2042_pcie_remove, 129 + }; 130 + module_platform_driver(sg2042_pcie_driver); 131 + 132 + MODULE_LICENSE("GPL"); 133 + MODULE_DESCRIPTION("PCIe controller driver for SG2042 SoCs"); 134 + MODULE_AUTHOR("Chen Wang <unicorn_wang@outlook.com>");
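pci_generic_config_read32()/pci_generic_config_write32(), picked for the root bus above, implement the 4-byte-only constraint in software: every read fetches the aligned dword and the requested bytes are shifted out afterwards. The heart of the read path, condensed from the generic helper with error handling omitted:

    *val = readl(addr);                 /* addr is dword-aligned by map_bus */
    if (size <= 2)
        *val = (*val >> (8 * (where & 3))) & ((1 << (size * 8)) - 1);

The write side has to read-modify-write the containing dword, which can clobber write-1-to-clear status bits; that is why the 32-bit ops are confined to the Root Port's own registers while the child bus keeps the normal 1/2/4-byte accessors.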
+26
drivers/pci/controller/dwc/Kconfig
··· 20 20 bool 21 21 select PCIE_DW 22 22 select IRQ_MSI_LIB 23 + select PCI_HOST_COMMON 23 24 24 25 config PCIE_DW_EP 25 26 bool ··· 299 298 select CRC8 300 299 select PCIE_QCOM_COMMON 301 300 select PCI_HOST_COMMON 301 + select PCI_PWRCTRL_SLOT 302 302 help 303 303 Say Y here to enable PCIe controller support on Qualcomm SoCs. The 304 304 PCIe controller uses the DesignWare core plus Qualcomm-specific ··· 423 421 select PCIE_DW_HOST 424 422 help 425 423 Say Y here if you want PCIe support on SPEAr13XX SoCs. 424 + 425 + config PCIE_STM32_HOST 426 + tristate "STMicroelectronics STM32MP25 PCIe Controller (host mode)" 427 + depends on ARCH_STM32 || COMPILE_TEST 428 + depends on PCI_MSI 429 + select PCIE_DW_HOST 430 + help 431 + Enables Root Complex (RC) support for the DesignWare core based PCIe 432 + controller found in STM32MP25 SoC. 433 + 434 + This driver can also be built as a module. If so, the module 435 + will be called pcie-stm32. 436 + 437 + config PCIE_STM32_EP 438 + tristate "STMicroelectronics STM32MP25 PCIe Controller (endpoint mode)" 439 + depends on ARCH_STM32 || COMPILE_TEST 440 + depends on PCI_ENDPOINT 441 + select PCIE_DW_EP 442 + help 443 + Enables Endpoint (EP) support for the DesignWare core based PCIe 444 + controller found in STM32MP25 SoC. 445 + 446 + This driver can also be built as a module. If so, the module 447 + will be called pcie-stm32-ep. 426 448 427 449 config PCI_DRA7XX 428 450 tristate
+2
drivers/pci/controller/dwc/Makefile
··· 31 31 obj-$(CONFIG_PCIE_UNIPHIER_EP) += pcie-uniphier-ep.o 32 32 obj-$(CONFIG_PCIE_VISCONTI_HOST) += pcie-visconti.o 33 33 obj-$(CONFIG_PCIE_RCAR_GEN4) += pcie-rcar-gen4.o 34 + obj-$(CONFIG_PCIE_STM32_HOST) += pcie-stm32.o 35 + obj-$(CONFIG_PCIE_STM32_EP) += pcie-stm32-ep.o 34 36 35 37 # The following drivers are for devices that use the generic ACPI 36 38 # pci_root.c driver but don't support standard ECAM config access.
-1
drivers/pci/controller/dwc/pci-dra7xx.c
··· 426 426 static const struct pci_epc_features dra7xx_pcie_epc_features = { 427 427 .linkup_notifier = true, 428 428 .msi_capable = true, 429 - .msix_capable = false, 430 429 }; 431 430 432 431 static const struct pci_epc_features*
+31 -31
drivers/pci/controller/dwc/pci-exynos.c
··· 53 53 54 54 struct exynos_pcie { 55 55 struct dw_pcie pci; 56 - void __iomem *elbi_base; 57 56 struct clk_bulk_data *clks; 58 57 struct phy *phy; 59 58 struct regulator_bulk_data supplies[2]; ··· 70 71 71 72 static void exynos_pcie_sideband_dbi_w_mode(struct exynos_pcie *ep, bool on) 72 73 { 74 + struct dw_pcie *pci = &ep->pci; 73 75 u32 val; 74 76 75 - val = exynos_pcie_readl(ep->elbi_base, PCIE_ELBI_SLV_AWMISC); 77 + val = exynos_pcie_readl(pci->elbi_base, PCIE_ELBI_SLV_AWMISC); 76 78 if (on) 77 79 val |= PCIE_ELBI_SLV_DBI_ENABLE; 78 80 else 79 81 val &= ~PCIE_ELBI_SLV_DBI_ENABLE; 80 - exynos_pcie_writel(ep->elbi_base, val, PCIE_ELBI_SLV_AWMISC); 82 + exynos_pcie_writel(pci->elbi_base, val, PCIE_ELBI_SLV_AWMISC); 81 83 } 82 84 83 85 static void exynos_pcie_sideband_dbi_r_mode(struct exynos_pcie *ep, bool on) 84 86 { 87 + struct dw_pcie *pci = &ep->pci; 85 88 u32 val; 86 89 87 - val = exynos_pcie_readl(ep->elbi_base, PCIE_ELBI_SLV_ARMISC); 90 + val = exynos_pcie_readl(pci->elbi_base, PCIE_ELBI_SLV_ARMISC); 88 91 if (on) 89 92 val |= PCIE_ELBI_SLV_DBI_ENABLE; 90 93 else 91 94 val &= ~PCIE_ELBI_SLV_DBI_ENABLE; 92 - exynos_pcie_writel(ep->elbi_base, val, PCIE_ELBI_SLV_ARMISC); 95 + exynos_pcie_writel(pci->elbi_base, val, PCIE_ELBI_SLV_ARMISC); 93 96 } 94 97 95 98 static void exynos_pcie_assert_core_reset(struct exynos_pcie *ep) 96 99 { 100 + struct dw_pcie *pci = &ep->pci; 97 101 u32 val; 98 102 99 - val = exynos_pcie_readl(ep->elbi_base, PCIE_CORE_RESET); 103 + val = exynos_pcie_readl(pci->elbi_base, PCIE_CORE_RESET); 100 104 val &= ~PCIE_CORE_RESET_ENABLE; 101 - exynos_pcie_writel(ep->elbi_base, val, PCIE_CORE_RESET); 102 - exynos_pcie_writel(ep->elbi_base, 0, PCIE_STICKY_RESET); 103 - exynos_pcie_writel(ep->elbi_base, 0, PCIE_NONSTICKY_RESET); 105 + exynos_pcie_writel(pci->elbi_base, val, PCIE_CORE_RESET); 106 + exynos_pcie_writel(pci->elbi_base, 0, PCIE_STICKY_RESET); 107 + exynos_pcie_writel(pci->elbi_base, 0, PCIE_NONSTICKY_RESET); 104 108 } 105 109 106 110 static void exynos_pcie_deassert_core_reset(struct exynos_pcie *ep) 107 111 { 112 + struct dw_pcie *pci = &ep->pci; 108 113 u32 val; 109 114 110 - val = exynos_pcie_readl(ep->elbi_base, PCIE_CORE_RESET); 115 + val = exynos_pcie_readl(pci->elbi_base, PCIE_CORE_RESET); 111 116 val |= PCIE_CORE_RESET_ENABLE; 112 117 113 - exynos_pcie_writel(ep->elbi_base, val, PCIE_CORE_RESET); 114 - exynos_pcie_writel(ep->elbi_base, 1, PCIE_STICKY_RESET); 115 - exynos_pcie_writel(ep->elbi_base, 1, PCIE_NONSTICKY_RESET); 116 - exynos_pcie_writel(ep->elbi_base, 1, PCIE_APP_INIT_RESET); 117 - exynos_pcie_writel(ep->elbi_base, 0, PCIE_APP_INIT_RESET); 118 + exynos_pcie_writel(pci->elbi_base, val, PCIE_CORE_RESET); 119 + exynos_pcie_writel(pci->elbi_base, 1, PCIE_STICKY_RESET); 120 + exynos_pcie_writel(pci->elbi_base, 1, PCIE_NONSTICKY_RESET); 121 + exynos_pcie_writel(pci->elbi_base, 1, PCIE_APP_INIT_RESET); 122 + exynos_pcie_writel(pci->elbi_base, 0, PCIE_APP_INIT_RESET); 118 123 } 119 124 120 125 static int exynos_pcie_start_link(struct dw_pcie *pci) 121 126 { 122 - struct exynos_pcie *ep = to_exynos_pcie(pci); 123 127 u32 val; 124 128 125 - val = exynos_pcie_readl(ep->elbi_base, PCIE_SW_WAKE); 129 + val = exynos_pcie_readl(pci->elbi_base, PCIE_SW_WAKE); 126 130 val &= ~PCIE_BUS_EN; 127 - exynos_pcie_writel(ep->elbi_base, val, PCIE_SW_WAKE); 131 + exynos_pcie_writel(pci->elbi_base, val, PCIE_SW_WAKE); 128 132 129 133 /* assert LTSSM enable */ 130 - exynos_pcie_writel(ep->elbi_base, PCIE_ELBI_LTSSM_ENABLE, 134 + exynos_pcie_writel(pci->elbi_base, 
PCIE_ELBI_LTSSM_ENABLE, 131 135 PCIE_APP_LTSSM_ENABLE); 132 136 return 0; 133 137 } 134 138 135 139 static void exynos_pcie_clear_irq_pulse(struct exynos_pcie *ep) 136 140 { 137 - u32 val = exynos_pcie_readl(ep->elbi_base, PCIE_IRQ_PULSE); 141 + struct dw_pcie *pci = &ep->pci; 138 142 139 - exynos_pcie_writel(ep->elbi_base, val, PCIE_IRQ_PULSE); 143 + u32 val = exynos_pcie_readl(pci->elbi_base, PCIE_IRQ_PULSE); 144 + 145 + exynos_pcie_writel(pci->elbi_base, val, PCIE_IRQ_PULSE); 140 146 } 141 147 142 148 static irqreturn_t exynos_pcie_irq_handler(int irq, void *arg) ··· 154 150 155 151 static void exynos_pcie_enable_irq_pulse(struct exynos_pcie *ep) 156 152 { 153 + struct dw_pcie *pci = &ep->pci; 154 + 157 155 u32 val = IRQ_INTA_ASSERT | IRQ_INTB_ASSERT | 158 156 IRQ_INTC_ASSERT | IRQ_INTD_ASSERT; 159 157 160 - exynos_pcie_writel(ep->elbi_base, val, PCIE_IRQ_EN_PULSE); 161 - exynos_pcie_writel(ep->elbi_base, 0, PCIE_IRQ_EN_LEVEL); 162 - exynos_pcie_writel(ep->elbi_base, 0, PCIE_IRQ_EN_SPECIAL); 158 + exynos_pcie_writel(pci->elbi_base, val, PCIE_IRQ_EN_PULSE); 159 + exynos_pcie_writel(pci->elbi_base, 0, PCIE_IRQ_EN_LEVEL); 160 + exynos_pcie_writel(pci->elbi_base, 0, PCIE_IRQ_EN_SPECIAL); 163 161 } 164 162 165 163 static u32 exynos_pcie_read_dbi(struct dw_pcie *pci, void __iomem *base, ··· 217 211 218 212 static bool exynos_pcie_link_up(struct dw_pcie *pci) 219 213 { 220 - struct exynos_pcie *ep = to_exynos_pcie(pci); 221 - u32 val = exynos_pcie_readl(ep->elbi_base, PCIE_ELBI_RDLH_LINKUP); 214 + u32 val = exynos_pcie_readl(pci->elbi_base, PCIE_ELBI_RDLH_LINKUP); 222 215 223 216 return val & PCIE_ELBI_XMLH_LINKUP; 224 217 } ··· 299 294 ep->phy = devm_of_phy_get(dev, np, NULL); 300 295 if (IS_ERR(ep->phy)) 301 296 return PTR_ERR(ep->phy); 302 - 303 - /* External Local Bus interface (ELBI) registers */ 304 - ep->elbi_base = devm_platform_ioremap_resource_byname(pdev, "elbi"); 305 - if (IS_ERR(ep->elbi_base)) 306 - return PTR_ERR(ep->elbi_base); 307 297 308 298 ret = devm_clk_bulk_get_all_enabled(dev, &ep->clks); 309 299 if (ret < 0)
+4 -4
drivers/pci/controller/dwc/pci-imx6.c
··· 1387 1387 } 1388 1388 1389 1389 static const struct pci_epc_features imx8m_pcie_epc_features = { 1390 - .linkup_notifier = false, 1391 1390 .msi_capable = true, 1392 - .msix_capable = false, 1393 1391 .bar[BAR_1] = { .type = BAR_RESERVED, }, 1394 1392 .bar[BAR_3] = { .type = BAR_RESERVED, }, 1395 1393 .bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = SZ_256, }, ··· 1396 1398 }; 1397 1399 1398 1400 static const struct pci_epc_features imx8q_pcie_epc_features = { 1399 - .linkup_notifier = false, 1400 1401 .msi_capable = true, 1401 - .msix_capable = false, 1402 1402 .bar[BAR_1] = { .type = BAR_RESERVED, }, 1403 1403 .bar[BAR_3] = { .type = BAR_RESERVED, }, 1404 1404 .bar[BAR_5] = { .type = BAR_RESERVED, }, ··· 1740 1744 /* Limit link speed */ 1741 1745 pci->max_link_speed = 1; 1742 1746 of_property_read_u32(node, "fsl,max-link-speed", &pci->max_link_speed); 1747 + 1748 + ret = devm_regulator_get_enable_optional(&pdev->dev, "vpcie3v3aux"); 1749 + if (ret < 0 && ret != -ENODEV) 1750 + return dev_err_probe(dev, ret, "failed to enable Vaux supply\n"); 1743 1751 1744 1752 imx_pcie->vpcie = devm_regulator_get_optional(&pdev->dev, "vpcie"); 1745 1753 if (IS_ERR(imx_pcie->vpcie)) {
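On the regulator hunk: devm_regulator_get_enable_optional() returns -ENODEV when the supply simply is not described in DT, so that is the one error the probe tolerates; anything else, -EPROBE_DEFER included, still fails the probe. The idiom in isolation:

    ret = devm_regulator_get_enable_optional(dev, "vpcie3v3aux");
    if (ret < 0 && ret != -ENODEV)      /* an absent optional supply is fine */
        return dev_err_probe(dev, ret, "failed to enable Vaux supply\n");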
+4 -5
drivers/pci/controller/dwc/pci-keystone.c
··· 960 960 } 961 961 962 962 static const struct pci_epc_features ks_pcie_am654_epc_features = { 963 - .linkup_notifier = false, 964 963 .msi_capable = true, 965 964 .msix_capable = true, 966 965 .bar[BAR_0] = { .type = BAR_RESERVED, }, ··· 1200 1201 if (irq < 0) 1201 1202 return irq; 1202 1203 1203 - ret = request_irq(irq, ks_pcie_err_irq_handler, IRQF_SHARED, 1204 - "ks-pcie-error-irq", ks_pcie); 1204 + ret = devm_request_irq(dev, irq, ks_pcie_err_irq_handler, IRQF_SHARED, 1205 + "ks-pcie-error-irq", ks_pcie); 1205 1206 if (ret < 0) { 1206 1207 dev_err(dev, "failed to request error IRQ %d\n", 1207 1208 irq); ··· 1212 1213 if (ret) 1213 1214 num_lanes = 1; 1214 1215 1215 - phy = devm_kzalloc(dev, sizeof(*phy) * num_lanes, GFP_KERNEL); 1216 + phy = devm_kcalloc(dev, num_lanes, sizeof(*phy), GFP_KERNEL); 1216 1217 if (!phy) 1217 1218 return -ENOMEM; 1218 1219 1219 - link = devm_kzalloc(dev, sizeof(*link) * num_lanes, GFP_KERNEL); 1220 + link = devm_kcalloc(dev, num_lanes, sizeof(*link), GFP_KERNEL); 1220 1221 if (!link) 1221 1222 return -ENOMEM; 1222 1223
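Two small hardening changes here: devm_request_irq() ties the error IRQ to the device's lifetime so it is released automatically on unbind (the bare request_irq() had no matching free_irq()), and devm_kcalloc() replaces the open-coded sizeof(*p) * n multiplications with overflow-checked ones. The allocation idiom:

    phy = devm_kcalloc(dev, num_lanes, sizeof(*phy), GFP_KERNEL);
    if (!phy)
        return -ENOMEM;     /* n * size is checked for overflow internally */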
+1
drivers/pci/controller/dwc/pcie-al.c
··· 352 352 return -ENOENT; 353 353 } 354 354 al_pcie->ecam_size = resource_size(ecam_res); 355 + pci->pp.native_ecam = true; 355 356 356 357 controller_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 357 358 "controller");
+51 -1
drivers/pci/controller/dwc/pcie-amd-mdb.c
··· 18 18 #include <linux/resource.h> 19 19 #include <linux/types.h> 20 20 21 + #include "../../pci.h" 21 22 #include "pcie-designware.h" 22 23 23 24 #define AMD_MDB_TLP_IR_STATUS_MISC 0x4C0 ··· 57 56 * @slcr: MDB System Level Control and Status Register (SLCR) base 58 57 * @intx_domain: INTx IRQ domain pointer 59 58 * @mdb_domain: MDB IRQ domain pointer 59 + * @perst_gpio: GPIO descriptor for PERST# signal handling 60 60 * @intx_irq: INTx IRQ interrupt number 61 61 */ 62 62 struct amd_mdb_pcie { ··· 65 63 void __iomem *slcr; 66 64 struct irq_domain *intx_domain; 67 65 struct irq_domain *mdb_domain; 66 + struct gpio_desc *perst_gpio; 68 67 int intx_irq; 69 68 }; 70 69 ··· 287 284 struct device_node *pcie_intc_node; 288 285 int err; 289 286 290 - pcie_intc_node = of_get_next_child(node, NULL); 287 + pcie_intc_node = of_get_child_by_name(node, "interrupt-controller"); 291 288 if (!pcie_intc_node) { 292 289 dev_err(dev, "No PCIe Intc node found\n"); 293 290 return -ENODEV; ··· 405 402 return 0; 406 403 } 407 404 405 + static int amd_mdb_parse_pcie_port(struct amd_mdb_pcie *pcie) 406 + { 407 + struct device *dev = pcie->pci.dev; 408 + struct device_node *pcie_port_node __maybe_unused; 409 + 410 + /* 411 + * This platform currently supports only one Root Port, so the loop 412 + * will execute only once. 413 + * TODO: Enhance the driver to handle multiple Root Ports in the future. 414 + */ 415 + for_each_child_of_node_with_prefix(dev->of_node, pcie_port_node, "pcie") { 416 + pcie->perst_gpio = devm_fwnode_gpiod_get(dev, of_fwnode_handle(pcie_port_node), 417 + "reset", GPIOD_OUT_HIGH, NULL); 418 + if (IS_ERR(pcie->perst_gpio)) 419 + return dev_err_probe(dev, PTR_ERR(pcie->perst_gpio), 420 + "Failed to request reset GPIO\n"); 421 + return 0; 422 + } 423 + 424 + return -ENODEV; 425 + } 426 + 408 427 static int amd_mdb_add_pcie_port(struct amd_mdb_pcie *pcie, 409 428 struct platform_device *pdev) 410 429 { ··· 451 426 452 427 pp->ops = &amd_mdb_pcie_host_ops; 453 428 429 + if (pcie->perst_gpio) { 430 + mdelay(PCIE_T_PVPERL_MS); 431 + gpiod_set_value_cansleep(pcie->perst_gpio, 0); 432 + mdelay(PCIE_RESET_CONFIG_WAIT_MS); 433 + } 434 + 454 435 err = dw_pcie_host_init(pp); 455 436 if (err) { 456 437 dev_err(dev, "Failed to initialize host, err=%d\n", err); ··· 475 444 struct device *dev = &pdev->dev; 476 445 struct amd_mdb_pcie *pcie; 477 446 struct dw_pcie *pci; 447 + int ret; 478 448 479 449 pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 480 450 if (!pcie) ··· 485 453 pci->dev = dev; 486 454 487 455 platform_set_drvdata(pdev, pcie); 456 + 457 + ret = amd_mdb_parse_pcie_port(pcie); 458 + /* 459 + * If amd_mdb_parse_pcie_port returns -ENODEV, it indicates that the 460 + * PCIe Bridge node was not found in the device tree. This is not 461 + * considered a fatal error and will trigger a fallback where the 462 + * reset GPIO is acquired directly from the PCIe Host Bridge node. 463 + */ 464 + if (ret) { 465 + if (ret != -ENODEV) 466 + return ret; 467 + 468 + pcie->perst_gpio = devm_gpiod_get_optional(dev, "reset", 469 + GPIOD_OUT_HIGH); 470 + if (IS_ERR(pcie->perst_gpio)) 471 + return dev_err_probe(dev, PTR_ERR(pcie->perst_gpio), 472 + "Failed to request reset GPIO\n"); 473 + } 488 474 489 475 return amd_mdb_add_pcie_port(pcie, pdev); 490 476 }
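The PERST# handling follows the standard bring-up timing. The GPIO is requested with GPIOD_OUT_HIGH, i.e. logically asserted (the active-low polarity of the physical PERST# line is handled by the GPIO flags in DT); power then gets PCIE_T_PVPERL_MS (Tpvperl, 100 ms) to stabilize before reset is released, and the driver waits again before the first config access. Condensed:

    perst = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
    if (IS_ERR(perst))
        return PTR_ERR(perst);

    mdelay(PCIE_T_PVPERL_MS);            /* power valid to PERST# inactive */
    gpiod_set_value_cansleep(perst, 0);  /* deassert PERST# */
    mdelay(PCIE_RESET_CONFIG_WAIT_MS);   /* settle before config requests */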
-2
drivers/pci/controller/dwc/pcie-artpec6.c
··· 370 370 } 371 371 372 372 static const struct pci_epc_features artpec6_pcie_epc_features = { 373 - .linkup_notifier = false, 374 373 .msi_capable = true, 375 - .msix_capable = false, 376 374 }; 377 375 378 376 static const struct pci_epc_features *
+2 -29
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 69 69 } 70 70 EXPORT_SYMBOL_GPL(dw_pcie_ep_reset_bar); 71 71 72 - static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie_ep *ep, u8 func_no, 73 - u8 cap_ptr, u8 cap) 74 - { 75 - u8 cap_id, next_cap_ptr; 76 - u16 reg; 77 - 78 - if (!cap_ptr) 79 - return 0; 80 - 81 - reg = dw_pcie_ep_readw_dbi(ep, func_no, cap_ptr); 82 - cap_id = (reg & 0x00ff); 83 - 84 - if (cap_id > PCI_CAP_ID_MAX) 85 - return 0; 86 - 87 - if (cap_id == cap) 88 - return cap_ptr; 89 - 90 - next_cap_ptr = (reg & 0xff00) >> 8; 91 - return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap); 92 - } 93 - 94 72 static u8 dw_pcie_ep_find_capability(struct dw_pcie_ep *ep, u8 func_no, u8 cap) 95 73 { 96 - u8 next_cap_ptr; 97 - u16 reg; 98 - 99 - reg = dw_pcie_ep_readw_dbi(ep, func_no, PCI_CAPABILITY_LIST); 100 - next_cap_ptr = (reg & 0x00ff); 101 - 102 - return __dw_pcie_ep_find_next_cap(ep, func_no, next_cap_ptr, cap); 74 + return PCI_FIND_NEXT_CAP(dw_pcie_ep_read_cfg, PCI_CAPABILITY_LIST, 75 + cap, ep, func_no); 103 76 } 104 77 105 78 /**
+134 -14
drivers/pci/controller/dwc/pcie-designware-host.c
··· 8 8 * Author: Jingoo Han <jg1.han@samsung.com> 9 9 */ 10 10 11 + #include <linux/align.h> 11 12 #include <linux/iopoll.h> 12 13 #include <linux/irqchip/chained_irq.h> 13 14 #include <linux/irqchip/irq-msi-lib.h> ··· 32 31 #define DW_PCIE_MSI_FLAGS_SUPPORTED (MSI_FLAG_MULTI_PCI_MSI | \ 33 32 MSI_FLAG_PCI_MSIX | \ 34 33 MSI_GENERIC_FLAGS_MASK) 34 + 35 + #define IS_256MB_ALIGNED(x) IS_ALIGNED(x, SZ_256M) 35 36 36 37 static const struct msi_parent_ops dw_pcie_msi_parent_ops = { 37 38 .required_flags = DW_PCIE_MSI_FLAGS_REQUIRED, ··· 416 413 } 417 414 } 418 415 416 + static int dw_pcie_config_ecam_iatu(struct dw_pcie_rp *pp) 417 + { 418 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 419 + struct dw_pcie_ob_atu_cfg atu = {0}; 420 + resource_size_t bus_range_max; 421 + struct resource_entry *bus; 422 + int ret; 423 + 424 + bus = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS); 425 + 426 + /* 427 + * Root bus under the host bridge doesn't require any iATU configuration 428 + * as DBI region will be used to access root bus config space. 429 + * Immediate bus under Root Bus, needs type 0 iATU configuration and 430 + * remaining buses need type 1 iATU configuration. 431 + */ 432 + atu.index = 0; 433 + atu.type = PCIE_ATU_TYPE_CFG0; 434 + atu.parent_bus_addr = pp->cfg0_base + SZ_1M; 435 + /* 1MiB is to cover 1 (bus) * 32 (devices) * 8 (functions) */ 436 + atu.size = SZ_1M; 437 + atu.ctrl2 = PCIE_ATU_CFG_SHIFT_MODE_ENABLE; 438 + ret = dw_pcie_prog_outbound_atu(pci, &atu); 439 + if (ret) 440 + return ret; 441 + 442 + bus_range_max = resource_size(bus->res); 443 + 444 + if (bus_range_max < 2) 445 + return 0; 446 + 447 + /* Configure remaining buses in type 1 iATU configuration */ 448 + atu.index = 1; 449 + atu.type = PCIE_ATU_TYPE_CFG1; 450 + atu.parent_bus_addr = pp->cfg0_base + SZ_2M; 451 + atu.size = (SZ_1M * bus_range_max) - SZ_2M; 452 + atu.ctrl2 = PCIE_ATU_CFG_SHIFT_MODE_ENABLE; 453 + 454 + return dw_pcie_prog_outbound_atu(pci, &atu); 455 + } 456 + 457 + static int dw_pcie_create_ecam_window(struct dw_pcie_rp *pp, struct resource *res) 458 + { 459 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 460 + struct device *dev = pci->dev; 461 + struct resource_entry *bus; 462 + 463 + bus = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS); 464 + if (!bus) 465 + return -ENODEV; 466 + 467 + pp->cfg = pci_ecam_create(dev, res, bus->res, &pci_generic_ecam_ops); 468 + if (IS_ERR(pp->cfg)) 469 + return PTR_ERR(pp->cfg); 470 + 471 + pci->dbi_base = pp->cfg->win; 472 + pci->dbi_phys_addr = res->start; 473 + 474 + return 0; 475 + } 476 + 477 + static bool dw_pcie_ecam_enabled(struct dw_pcie_rp *pp, struct resource *config_res) 478 + { 479 + struct resource *bus_range; 480 + u64 nr_buses; 481 + 482 + /* Vendor glue drivers may implement their own ECAM mechanism */ 483 + if (pp->native_ecam) 484 + return false; 485 + 486 + /* 487 + * PCIe spec r6.0, sec 7.2.2 mandates the base address used for ECAM to 488 + * be aligned on a 2^(n+20) byte boundary, where n is the number of bits 489 + * used for representing 'bus' in BDF. Since the DWC cores always use 8 490 + * bits for representing 'bus', the base address has to be aligned to 491 + * 2^28 byte boundary, which is 256 MiB. 
492 + */ 493 + if (!IS_256MB_ALIGNED(config_res->start)) 494 + return false; 495 + 496 + bus_range = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS)->res; 497 + if (!bus_range) 498 + return false; 499 + 500 + nr_buses = resource_size(config_res) >> PCIE_ECAM_BUS_SHIFT; 501 + 502 + return nr_buses >= resource_size(bus_range); 503 + } 504 + 419 505 static int dw_pcie_host_get_resources(struct dw_pcie_rp *pp) 420 506 { 421 507 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); ··· 513 421 struct resource_entry *win; 514 422 struct resource *res; 515 423 int ret; 516 - 517 - ret = dw_pcie_get_resources(pci); 518 - if (ret) 519 - return ret; 520 424 521 425 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config"); 522 426 if (!res) { ··· 523 435 pp->cfg0_size = resource_size(res); 524 436 pp->cfg0_base = res->start; 525 437 526 - pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res); 527 - if (IS_ERR(pp->va_cfg0_base)) 528 - return PTR_ERR(pp->va_cfg0_base); 438 + pp->ecam_enabled = dw_pcie_ecam_enabled(pp, res); 439 + if (pp->ecam_enabled) { 440 + ret = dw_pcie_create_ecam_window(pp, res); 441 + if (ret) 442 + return ret; 443 + 444 + pp->bridge->ops = (struct pci_ops *)&pci_generic_ecam_ops.pci_ops; 445 + pp->bridge->sysdata = pp->cfg; 446 + pp->cfg->priv = pp; 447 + } else { 448 + pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res); 449 + if (IS_ERR(pp->va_cfg0_base)) 450 + return PTR_ERR(pp->va_cfg0_base); 451 + 452 + /* Set default bus ops */ 453 + pp->bridge->ops = &dw_pcie_ops; 454 + pp->bridge->child_ops = &dw_child_pcie_ops; 455 + pp->bridge->sysdata = pp; 456 + } 457 + 458 + ret = dw_pcie_get_resources(pci); 459 + if (ret) { 460 + if (pp->cfg) 461 + pci_ecam_free(pp->cfg); 462 + return ret; 463 + } 529 464 530 465 /* Get the I/O range from DT */ 531 466 win = resource_list_first_type(&pp->bridge->windows, IORESOURCE_IO); ··· 587 476 if (ret) 588 477 return ret; 589 478 590 - /* Set default bus ops */ 591 - bridge->ops = &dw_pcie_ops; 592 - bridge->child_ops = &dw_child_pcie_ops; 593 - 594 479 if (pp->ops->init) { 595 480 ret = pp->ops->init(pp); 596 481 if (ret) 597 - return ret; 482 + goto err_free_ecam; 598 483 } 599 484 600 485 if (pci_msi_enabled()) { ··· 632 525 if (ret) 633 526 goto err_free_msi; 634 527 528 + if (pp->ecam_enabled) { 529 + ret = dw_pcie_config_ecam_iatu(pp); 530 + if (ret) { 531 + dev_err(dev, "Failed to configure iATU in ECAM mode\n"); 532 + goto err_free_msi; 533 + } 534 + } 535 + 635 536 /* 636 537 * Allocate the resource for MSG TLP before programming the iATU 637 538 * outbound window in dw_pcie_setup_rc(). Since the allocation depends ··· 675 560 /* Ignore errors, the link may come up later */ 676 561 dw_pcie_wait_for_link(pci); 677 562 678 - bridge->sysdata = pp; 679 - 680 563 ret = pci_host_probe(bridge); 681 564 if (ret) 682 565 goto err_stop_link; ··· 700 587 if (pp->ops->deinit) 701 588 pp->ops->deinit(pp); 702 589 590 + err_free_ecam: 591 + if (pp->cfg) 592 + pci_ecam_free(pp->cfg); 593 + 703 594 return ret; 704 595 } 705 596 EXPORT_SYMBOL_GPL(dw_pcie_host_init); ··· 726 609 727 610 if (pp->ops->deinit) 728 611 pp->ops->deinit(pp); 612 + 613 + if (pp->cfg) 614 + pci_ecam_free(pp->cfg); 729 615 } 730 616 EXPORT_SYMBOL_GPL(dw_pcie_host_deinit); 731 617
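The eligibility check reduces to two pieces of arithmetic: with DWC always decoding 8 bus-number bits, ECAM demands 2^(8+20) = 256 MiB alignment of the config window, and the window must be large enough to give every bus in the bus range its 1 MiB slice (2^20 bytes = 32 devices x 8 functions x 4 KiB). As a standalone check:

    #define ECAM_BUS_SHIFT 20                       /* 1 MiB per bus */

    static bool ecam_window_ok(u64 base, u64 size, u64 nr_buses_needed)
    {
        if (base & (SZ_256M - 1))                   /* 2^(8+20) alignment */
            return false;

        return (size >> ECAM_BUS_SHIFT) >= nr_buses_needed;
    }

In the hunk above, nr_buses_needed is the resource_size() of the bridge's IORESOURCE_BUS window.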
-1
drivers/pci/controller/dwc/pcie-designware-plat.c
··· 61 61 } 62 62 63 63 static const struct pci_epc_features dw_plat_pcie_epc_features = { 64 - .linkup_notifier = false, 65 64 .msi_capable = true, 66 65 .msix_capable = true, 67 66 };
+18 -76
drivers/pci/controller/dwc/pcie-designware.c
··· 167 167 } 168 168 } 169 169 170 + /* ELBI is an optional resource */ 171 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi"); 172 + if (res) { 173 + pci->elbi_base = devm_ioremap_resource(pci->dev, res); 174 + if (IS_ERR(pci->elbi_base)) 175 + return PTR_ERR(pci->elbi_base); 176 + } 177 + 170 178 /* LLDD is supposed to manually switch the clocks and resets state */ 171 179 if (dw_pcie_cap_is(pci, REQ_RES)) { 172 180 ret = dw_pcie_get_clocks(pci); ··· 221 213 pci->type = ver; 222 214 } 223 215 224 - /* 225 - * These interfaces resemble the pci_find_*capability() interfaces, but these 226 - * are for configuring host controllers, which are bridges *to* PCI devices but 227 - * are not PCI devices themselves. 228 - */ 229 - static u8 __dw_pcie_find_next_cap(struct dw_pcie *pci, u8 cap_ptr, 230 - u8 cap) 231 - { 232 - u8 cap_id, next_cap_ptr; 233 - u16 reg; 234 - 235 - if (!cap_ptr) 236 - return 0; 237 - 238 - reg = dw_pcie_readw_dbi(pci, cap_ptr); 239 - cap_id = (reg & 0x00ff); 240 - 241 - if (cap_id > PCI_CAP_ID_MAX) 242 - return 0; 243 - 244 - if (cap_id == cap) 245 - return cap_ptr; 246 - 247 - next_cap_ptr = (reg & 0xff00) >> 8; 248 - return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap); 249 - } 250 - 251 216 u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap) 252 217 { 253 - u8 next_cap_ptr; 254 - u16 reg; 255 - 256 - reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST); 257 - next_cap_ptr = (reg & 0x00ff); 258 - 259 - return __dw_pcie_find_next_cap(pci, next_cap_ptr, cap); 218 + return PCI_FIND_NEXT_CAP(dw_pcie_read_cfg, PCI_CAPABILITY_LIST, cap, 219 + pci); 260 220 } 261 221 EXPORT_SYMBOL_GPL(dw_pcie_find_capability); 262 222 263 - static u16 dw_pcie_find_next_ext_capability(struct dw_pcie *pci, u16 start, 264 - u8 cap) 265 - { 266 - u32 header; 267 - int ttl; 268 - int pos = PCI_CFG_SPACE_SIZE; 269 - 270 - /* minimum 8 bytes per capability */ 271 - ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8; 272 - 273 - if (start) 274 - pos = start; 275 - 276 - header = dw_pcie_readl_dbi(pci, pos); 277 - /* 278 - * If we have no capabilities, this is indicated by cap ID, 279 - * cap version and next pointer all being 0. 
280 - */ 281 - if (header == 0) 282 - return 0; 283 - 284 - while (ttl-- > 0) { 285 - if (PCI_EXT_CAP_ID(header) == cap && pos != start) 286 - return pos; 287 - 288 - pos = PCI_EXT_CAP_NEXT(header); 289 - if (pos < PCI_CFG_SPACE_SIZE) 290 - break; 291 - 292 - header = dw_pcie_readl_dbi(pci, pos); 293 - } 294 - 295 - return 0; 296 - } 297 - 298 223 u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap) 299 224 { 300 - return dw_pcie_find_next_ext_capability(pci, 0, cap); 225 + return PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, pci); 301 226 } 302 227 EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability); 303 228 ··· 243 302 if (vendor_id != dw_pcie_readw_dbi(pci, PCI_VENDOR_ID)) 244 303 return 0; 245 304 246 - while ((vsec = dw_pcie_find_next_ext_capability(pci, vsec, 247 - PCI_EXT_CAP_ID_VNDR))) { 305 + while ((vsec = PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, vsec, 306 + PCI_EXT_CAP_ID_VNDR, pci))) { 248 307 header = dw_pcie_readl_dbi(pci, vsec + PCI_VNDR_HEADER); 249 308 if (PCI_VNDR_HEADER_ID(header) == vsec_id) 250 309 return vsec; ··· 508 567 val = dw_pcie_enable_ecrc(val); 509 568 dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_REGION_CTRL1, val); 510 569 511 - val = PCIE_ATU_ENABLE; 570 + val = PCIE_ATU_ENABLE | atu->ctrl2; 512 571 if (atu->type == PCIE_ATU_TYPE_MSG) { 513 572 /* The data-less messages only for now */ 514 573 val |= PCIE_ATU_INHIBIT_PAYLOAD | atu->code; ··· 782 841 case 8: 783 842 plc |= PORT_LINK_MODE_8_LANES; 784 843 break; 844 + case 16: 845 + plc |= PORT_LINK_MODE_16_LANES; 846 + break; 785 847 default: 786 848 dev_err(pci->dev, "num-lanes %u: invalid value\n", num_lanes); 787 849 return; ··· 989 1045 char name[15]; 990 1046 int ret; 991 1047 992 - if (pci->edma.nr_irqs == 1) 993 - return 0; 994 - else if (pci->edma.nr_irqs > 1) 1048 + if (pci->edma.nr_irqs > 1) 995 1049 return pci->edma.nr_irqs != ch_cnt ? -EINVAL : 0; 996 1050 997 1051 ret = platform_get_irq_byname_optional(pdev, "dma");
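The ELBI hunk completes the consolidation visible in the Exynos and Qualcomm patches: instead of every glue driver mapping its own "elbi" region, the core maps it once, and only if present, since not all DWC integrations have one. The optional-resource idiom, isolated:

    res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi");
    if (res) {                          /* absence is not an error */
        base = devm_ioremap_resource(dev, res);
        if (IS_ERR(base))
            return PTR_ERR(base);
    }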
+52 -3
drivers/pci/controller/dwc/pcie-designware.h
··· 20 20 #include <linux/irq.h> 21 21 #include <linux/msi.h> 22 22 #include <linux/pci.h> 23 + #include <linux/pci-ecam.h> 23 24 #include <linux/reset.h> 24 25 25 26 #include <linux/pci-epc.h> ··· 91 90 #define PORT_LINK_MODE_2_LANES PORT_LINK_MODE(0x3) 92 91 #define PORT_LINK_MODE_4_LANES PORT_LINK_MODE(0x7) 93 92 #define PORT_LINK_MODE_8_LANES PORT_LINK_MODE(0xf) 93 + #define PORT_LINK_MODE_16_LANES PORT_LINK_MODE(0x1f) 94 94 95 95 #define PCIE_PORT_LANE_SKEW 0x714 96 96 #define PORT_LANE_SKEW_INSERT_MASK GENMASK(23, 0) ··· 125 123 #define GEN3_RELATED_OFF_GEN3_EQ_DISABLE BIT(16) 126 124 #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT 24 127 125 #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK GENMASK(25, 24) 128 - #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_16_0GT 0x1 129 126 130 127 #define GEN3_EQ_CONTROL_OFF 0x8A8 131 128 #define GEN3_EQ_CONTROL_OFF_FB_MODE GENMASK(3, 0) ··· 135 134 #define GEN3_EQ_FB_MODE_DIR_CHANGE_OFF 0x8AC 136 135 #define GEN3_EQ_FMDC_T_MIN_PHASE23 GENMASK(4, 0) 137 136 #define GEN3_EQ_FMDC_N_EVALS GENMASK(9, 5) 138 - #define GEN3_EQ_FMDC_MAX_PRE_CUSROR_DELTA GENMASK(13, 10) 139 - #define GEN3_EQ_FMDC_MAX_POST_CUSROR_DELTA GENMASK(17, 14) 137 + #define GEN3_EQ_FMDC_MAX_PRE_CURSOR_DELTA GENMASK(13, 10) 138 + #define GEN3_EQ_FMDC_MAX_POST_CURSOR_DELTA GENMASK(17, 14) 140 139 141 140 #define PCIE_PORT_MULTI_LANE_CTRL 0x8C0 142 141 #define PORT_MLTI_UPCFG_SUPPORT BIT(7) ··· 170 169 #define PCIE_ATU_REGION_CTRL2 0x004 171 170 #define PCIE_ATU_ENABLE BIT(31) 172 171 #define PCIE_ATU_BAR_MODE_ENABLE BIT(30) 172 + #define PCIE_ATU_CFG_SHIFT_MODE_ENABLE BIT(28) 173 173 #define PCIE_ATU_INHIBIT_PAYLOAD BIT(22) 174 174 #define PCIE_ATU_FUNC_NUM_MATCH_EN BIT(19) 175 175 #define PCIE_ATU_LOWER_BASE 0x008 ··· 389 387 u8 func_no; 390 388 u8 code; 391 389 u8 routing; 390 + u32 ctrl2; 392 391 u64 parent_bus_addr; 393 392 u64 pci_addr; 394 393 u64 size; ··· 428 425 struct resource *msg_res; 429 426 bool use_linkup_irq; 430 427 struct pci_eq_presets presets; 428 + struct pci_config_window *cfg; 429 + bool ecam_enabled; 430 + bool native_ecam; 431 431 }; 432 432 433 433 struct dw_pcie_ep_ops { ··· 498 492 resource_size_t dbi_phys_addr; 499 493 void __iomem *dbi_base2; 500 494 void __iomem *atu_base; 495 + void __iomem *elbi_base; 501 496 resource_size_t atu_phys_addr; 502 497 size_t atu_size; 503 498 resource_size_t parent_bus_offset; ··· 616 609 dw_pcie_write_dbi2(pci, reg, 0x4, val); 617 610 } 618 611 612 + static inline int dw_pcie_read_cfg_byte(struct dw_pcie *pci, int where, 613 + u8 *val) 614 + { 615 + *val = dw_pcie_readb_dbi(pci, where); 616 + return PCIBIOS_SUCCESSFUL; 617 + } 618 + 619 + static inline int dw_pcie_read_cfg_word(struct dw_pcie *pci, int where, 620 + u16 *val) 621 + { 622 + *val = dw_pcie_readw_dbi(pci, where); 623 + return PCIBIOS_SUCCESSFUL; 624 + } 625 + 626 + static inline int dw_pcie_read_cfg_dword(struct dw_pcie *pci, int where, 627 + u32 *val) 628 + { 629 + *val = dw_pcie_readl_dbi(pci, where); 630 + return PCIBIOS_SUCCESSFUL; 631 + } 632 + 619 633 static inline unsigned int dw_pcie_ep_get_dbi_offset(struct dw_pcie_ep *ep, 620 634 u8 func_no) 621 635 { ··· 700 672 u32 reg) 701 673 { 702 674 return dw_pcie_ep_read_dbi(ep, func_no, reg, 0x1); 675 + } 676 + 677 + static inline int dw_pcie_ep_read_cfg_byte(struct dw_pcie_ep *ep, u8 func_no, 678 + int where, u8 *val) 679 + { 680 + *val = dw_pcie_ep_readb_dbi(ep, func_no, where); 681 + return PCIBIOS_SUCCESSFUL; 682 + } 683 + 684 + static inline int dw_pcie_ep_read_cfg_word(struct dw_pcie_ep *ep, u8 func_no, 685 + 
int where, u16 *val) 686 + { 687 + *val = dw_pcie_ep_readw_dbi(ep, func_no, where); 688 + return PCIBIOS_SUCCESSFUL; 689 + } 690 + 691 + static inline int dw_pcie_ep_read_cfg_dword(struct dw_pcie_ep *ep, u8 func_no, 692 + int where, u32 *val) 693 + { 694 + *val = dw_pcie_ep_readl_dbi(ep, func_no, where); 695 + return PCIBIOS_SUCCESSFUL; 703 696 } 704 697 705 698 static inline unsigned int dw_pcie_ep_get_dbi2_offset(struct dw_pcie_ep *ep,
-2
drivers/pci/controller/dwc/pcie-dw-rockchip.c
··· 331 331 .linkup_notifier = true, 332 332 .msi_capable = true, 333 333 .msix_capable = true, 334 - .intx_capable = false, 335 334 .align = SZ_64K, 336 335 .bar[BAR_0] = { .type = BAR_RESIZABLE, }, 337 336 .bar[BAR_1] = { .type = BAR_RESIZABLE, }, ··· 351 352 .linkup_notifier = true, 352 353 .msi_capable = true, 353 354 .msix_capable = true, 354 - .intx_capable = false, 355 355 .align = SZ_64K, 356 356 .bar[BAR_0] = { .type = BAR_RESIZABLE, }, 357 357 .bar[BAR_1] = { .type = BAR_RESIZABLE, },
-1
drivers/pci/controller/dwc/pcie-keembay.c
··· 309 309 } 310 310 311 311 static const struct pci_epc_features keembay_pcie_epc_features = { 312 - .linkup_notifier = false, 313 312 .msi_capable = true, 314 313 .msix_capable = true, 315 314 .bar[BAR_0] = { .only_64bit = true, },
+34 -24
drivers/pci/controller/dwc/pcie-qcom-common.c
··· 8 8 #include "pcie-designware.h" 9 9 #include "pcie-qcom-common.h" 10 10 11 - void qcom_pcie_common_set_16gt_equalization(struct dw_pcie *pci) 11 + void qcom_pcie_common_set_equalization(struct dw_pcie *pci) 12 12 { 13 + struct device *dev = pci->dev; 13 14 u32 reg; 15 + u16 speed; 14 16 15 17 /* 16 18 * GEN3_RELATED_OFF register is repurposed to apply equalization ··· 21 19 * determines the data rate for which these equalization settings are 22 20 * applied. 23 21 */ 24 - reg = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); 25 - reg &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL; 26 - reg &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK; 27 - reg |= FIELD_PREP(GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK, 28 - GEN3_RELATED_OFF_RATE_SHADOW_SEL_16_0GT); 29 - dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, reg); 30 22 31 - reg = dw_pcie_readl_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF); 32 - reg &= ~(GEN3_EQ_FMDC_T_MIN_PHASE23 | 33 - GEN3_EQ_FMDC_N_EVALS | 34 - GEN3_EQ_FMDC_MAX_PRE_CUSROR_DELTA | 35 - GEN3_EQ_FMDC_MAX_POST_CUSROR_DELTA); 36 - reg |= FIELD_PREP(GEN3_EQ_FMDC_T_MIN_PHASE23, 0x1) | 37 - FIELD_PREP(GEN3_EQ_FMDC_N_EVALS, 0xd) | 38 - FIELD_PREP(GEN3_EQ_FMDC_MAX_PRE_CUSROR_DELTA, 0x5) | 39 - FIELD_PREP(GEN3_EQ_FMDC_MAX_POST_CUSROR_DELTA, 0x5); 40 - dw_pcie_writel_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF, reg); 23 + for (speed = PCIE_SPEED_8_0GT; speed <= pcie_link_speed[pci->max_link_speed]; speed++) { 24 + if (speed > PCIE_SPEED_32_0GT) { 25 + dev_warn(dev, "Skipped equalization settings for unsupported data rate\n"); 26 + break; 27 + } 41 28 42 - reg = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF); 43 - reg &= ~(GEN3_EQ_CONTROL_OFF_FB_MODE | 44 - GEN3_EQ_CONTROL_OFF_PHASE23_EXIT_MODE | 45 - GEN3_EQ_CONTROL_OFF_FOM_INC_INITIAL_EVAL | 46 - GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC); 47 - dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, reg); 29 + reg = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); 30 + reg &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL; 31 + reg &= ~GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK; 32 + reg |= FIELD_PREP(GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK, 33 + speed - PCIE_SPEED_8_0GT); 34 + dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, reg); 35 + 36 + reg = dw_pcie_readl_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF); 37 + reg &= ~(GEN3_EQ_FMDC_T_MIN_PHASE23 | 38 + GEN3_EQ_FMDC_N_EVALS | 39 + GEN3_EQ_FMDC_MAX_PRE_CURSOR_DELTA | 40 + GEN3_EQ_FMDC_MAX_POST_CURSOR_DELTA); 41 + reg |= FIELD_PREP(GEN3_EQ_FMDC_T_MIN_PHASE23, 0x1) | 42 + FIELD_PREP(GEN3_EQ_FMDC_N_EVALS, 0xd) | 43 + FIELD_PREP(GEN3_EQ_FMDC_MAX_PRE_CURSOR_DELTA, 0x5) | 44 + FIELD_PREP(GEN3_EQ_FMDC_MAX_POST_CURSOR_DELTA, 0x5); 45 + dw_pcie_writel_dbi(pci, GEN3_EQ_FB_MODE_DIR_CHANGE_OFF, reg); 46 + 47 + reg = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF); 48 + reg &= ~(GEN3_EQ_CONTROL_OFF_FB_MODE | 49 + GEN3_EQ_CONTROL_OFF_PHASE23_EXIT_MODE | 50 + GEN3_EQ_CONTROL_OFF_FOM_INC_INITIAL_EVAL | 51 + GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC); 52 + dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, reg); 53 + } 48 54 } 49 - EXPORT_SYMBOL_GPL(qcom_pcie_common_set_16gt_equalization); 55 + EXPORT_SYMBOL_GPL(qcom_pcie_common_set_equalization); 50 56 51 57 void qcom_pcie_common_set_16gt_lane_margining(struct dw_pcie *pci) 52 58 {
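The rewritten helper applies the same equalization values once per supported data rate instead of only at 16 GT/s. Two encodings do the work: pcie_link_speed[] translates the controller's max-link-speed index into a pci_bus_speed, and the RATE_SHADOW_SEL field picks which rate's shadow bank the GEN3_* writes land in, with 8/16/32 GT/s mapping to 0/1/2 (consistent with the old GEN3_RELATED_OFF_RATE_SHADOW_SEL_16_0GT == 0x1 definition this hunk removes). The selection logic in outline:

    for (speed = PCIE_SPEED_8_0GT; speed <= max_speed; speed++) {
        if (speed > PCIE_SPEED_32_0GT)
            break;                           /* driver warns and stops here */

        sel = speed - PCIE_SPEED_8_0GT;      /* 8/16/32 GT/s -> 0/1/2 */
        /* select the bank via RATE_SHADOW_SEL, write the GEN3_EQ_* values */
    }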
+1 -1
drivers/pci/controller/dwc/pcie-qcom-common.h
··· 8 8 9 9 struct dw_pcie; 10 10 11 - void qcom_pcie_common_set_16gt_equalization(struct dw_pcie *pci); 11 + void qcom_pcie_common_set_equalization(struct dw_pcie *pci); 12 12 void qcom_pcie_common_set_16gt_lane_margining(struct dw_pcie *pci); 13 13 14 14 #endif
+6 -17
drivers/pci/controller/dwc/pcie-qcom-ep.c
··· 179 179 * struct qcom_pcie_ep - Qualcomm PCIe Endpoint Controller 180 180 * @pci: Designware PCIe controller struct 181 181 * @parf: Qualcomm PCIe specific PARF register base 182 - * @elbi: Designware PCIe specific ELBI register base 183 182 * @mmio: MMIO register base 184 183 * @perst_map: PERST regmap 185 184 * @mmio_res: MMIO region resource ··· 201 202 struct dw_pcie pci; 202 203 203 204 void __iomem *parf; 204 - void __iomem *elbi; 205 205 void __iomem *mmio; 206 206 struct regmap *perst_map; 207 207 struct resource *mmio_res; ··· 265 267 266 268 static bool qcom_pcie_dw_link_up(struct dw_pcie *pci) 267 269 { 268 - struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci); 269 270 u32 reg; 270 271 271 - reg = readl_relaxed(pcie_ep->elbi + ELBI_SYS_STTS); 272 + reg = readl_relaxed(pci->elbi_base + ELBI_SYS_STTS); 272 273 273 274 return reg & XMLH_LINK_UP; 274 275 } ··· 291 294 static void qcom_pcie_dw_write_dbi2(struct dw_pcie *pci, void __iomem *base, 292 295 u32 reg, size_t size, u32 val) 293 296 { 294 - struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci); 295 297 int ret; 296 298 297 - writel(1, pcie_ep->elbi + ELBI_CS2_ENABLE); 299 + writel(1, pci->elbi_base + ELBI_CS2_ENABLE); 298 300 299 301 ret = dw_pcie_write(pci->dbi_base2 + reg, size, val); 300 302 if (ret) 301 303 dev_err(pci->dev, "Failed to write DBI2 register (0x%x): %d\n", reg, ret); 302 304 303 - writel(0, pcie_ep->elbi + ELBI_CS2_ENABLE); 305 + writel(0, pci->elbi_base + ELBI_CS2_ENABLE); 304 306 } 305 307 306 308 static void qcom_pcie_ep_icc_update(struct qcom_pcie_ep *pcie_ep) ··· 507 511 goto err_disable_resources; 508 512 } 509 513 510 - if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) { 511 - qcom_pcie_common_set_16gt_equalization(pci); 514 + qcom_pcie_common_set_equalization(pci); 515 + 516 + if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) 512 517 qcom_pcie_common_set_16gt_lane_margining(pci); 513 - } 514 518 515 519 /* 516 520 * The physical address of the MMIO region which is exposed as the BAR ··· 578 582 if (IS_ERR(pci->dbi_base)) 579 583 return PTR_ERR(pci->dbi_base); 580 584 pci->dbi_base2 = pci->dbi_base; 581 - 582 - res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi"); 583 - pcie_ep->elbi = devm_pci_remap_cfg_resource(dev, res); 584 - if (IS_ERR(pcie_ep->elbi)) 585 - return PTR_ERR(pcie_ep->elbi); 586 585 587 586 pcie_ep->mmio_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 588 587 "mmio"); ··· 822 831 static const struct pci_epc_features qcom_pcie_epc_features = { 823 832 .linkup_notifier = true, 824 833 .msi_capable = true, 825 - .msix_capable = false, 826 834 .align = SZ_4K, 827 835 .bar[BAR_0] = { .only_64bit = true, }, 828 836 .bar[BAR_1] = { .type = BAR_RESERVED, }, ··· 864 874 pcie_ep->pci.dev = dev; 865 875 pcie_ep->pci.ops = &pci_ops; 866 876 pcie_ep->pci.ep.ops = &pci_ep_ops; 867 - pcie_ep->pci.edma.nr_irqs = 1; 868 877 869 878 pcie_ep->cfg = of_device_get_match_data(dev); 870 879 if (pcie_ep->cfg && pcie_ep->cfg->hdma_support) {
+116 -95
drivers/pci/controller/dwc/pcie-qcom.c
··· 55 55 #define PARF_AXI_MSTR_WR_ADDR_HALT_V2 0x1a8 56 56 #define PARF_Q2A_FLUSH 0x1ac 57 57 #define PARF_LTSSM 0x1b0 58 + #define PARF_SLV_DBI_ELBI 0x1b4 58 59 #define PARF_INT_ALL_STATUS 0x224 59 60 #define PARF_INT_ALL_CLEAR 0x228 60 61 #define PARF_INT_ALL_MASK 0x22c ··· 65 64 #define PARF_DBI_BASE_ADDR_V2_HI 0x354 66 65 #define PARF_SLV_ADDR_SPACE_SIZE_V2 0x358 67 66 #define PARF_SLV_ADDR_SPACE_SIZE_V2_HI 0x35c 67 + #define PARF_BLOCK_SLV_AXI_WR_BASE 0x360 68 + #define PARF_BLOCK_SLV_AXI_WR_BASE_HI 0x364 69 + #define PARF_BLOCK_SLV_AXI_WR_LIMIT 0x368 70 + #define PARF_BLOCK_SLV_AXI_WR_LIMIT_HI 0x36c 71 + #define PARF_BLOCK_SLV_AXI_RD_BASE 0x370 72 + #define PARF_BLOCK_SLV_AXI_RD_BASE_HI 0x374 73 + #define PARF_BLOCK_SLV_AXI_RD_LIMIT 0x378 74 + #define PARF_BLOCK_SLV_AXI_RD_LIMIT_HI 0x37c 75 + #define PARF_ECAM_BASE 0x380 76 + #define PARF_ECAM_BASE_HI 0x384 68 77 #define PARF_NO_SNOOP_OVERRIDE 0x3d4 69 78 #define PARF_ATU_BASE_ADDR 0x634 70 79 #define PARF_ATU_BASE_ADDR_HI 0x638 ··· 98 87 99 88 /* PARF_SYS_CTRL register fields */ 100 89 #define MAC_PHY_POWERDOWN_IN_P2_D_MUX_EN BIT(29) 90 + #define PCIE_ECAM_BLOCKER_EN BIT(26) 101 91 #define MST_WAKEUP_EN BIT(13) 102 92 #define SLV_WAKEUP_EN BIT(12) 103 93 #define MSTR_ACLK_CGC_DIS BIT(10) ··· 145 133 146 134 /* PARF_LTSSM register fields */ 147 135 #define LTSSM_EN BIT(8) 136 + 137 + /* PARF_SLV_DBI_ELBI */ 138 + #define SLV_DBI_ELBI_ADDR_BASE GENMASK(11, 0) 148 139 149 140 /* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */ 150 141 #define PARF_INT_ALL_LINK_UP BIT(13) ··· 262 247 int (*get_resources)(struct qcom_pcie *pcie); 263 248 int (*init)(struct qcom_pcie *pcie); 264 249 int (*post_init)(struct qcom_pcie *pcie); 265 - void (*host_post_init)(struct qcom_pcie *pcie); 266 250 void (*deinit)(struct qcom_pcie *pcie); 267 251 void (*ltssm_enable)(struct qcom_pcie *pcie); 268 252 int (*config_sid)(struct qcom_pcie *pcie); ··· 290 276 struct qcom_pcie { 291 277 struct dw_pcie *pci; 292 278 void __iomem *parf; /* DT parf */ 293 - void __iomem *elbi; /* DT elbi */ 294 279 void __iomem *mhi; 295 280 union qcom_pcie_resources res; 296 - struct phy *phy; 297 - struct gpio_desc *reset; 298 281 struct icc_path *icc_mem; 299 282 struct icc_path *icc_cpu; 300 283 const struct qcom_pcie_cfg *cfg; ··· 308 297 struct qcom_pcie_port *port; 309 298 int val = assert ? 1 : 0; 310 299 311 - if (list_empty(&pcie->ports)) 312 - gpiod_set_value_cansleep(pcie->reset, val); 313 - else 314 - list_for_each_entry(port, &pcie->ports, list) 315 - gpiod_set_value_cansleep(port->reset, val); 300 + list_for_each_entry(port, &pcie->ports, list) 301 + gpiod_set_value_cansleep(port->reset, val); 316 302 317 303 usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500); 318 304 } ··· 326 318 qcom_perst_assert(pcie, false); 327 319 } 328 320 321 + static void qcom_pci_config_ecam(struct dw_pcie_rp *pp) 322 + { 323 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 324 + struct qcom_pcie *pcie = to_qcom_pcie(pci); 325 + u64 addr, addr_end; 326 + u32 val; 327 + 328 + writel_relaxed(lower_32_bits(pci->dbi_phys_addr), pcie->parf + PARF_ECAM_BASE); 329 + writel_relaxed(upper_32_bits(pci->dbi_phys_addr), pcie->parf + PARF_ECAM_BASE_HI); 330 + 331 + /* 332 + * The only device on the root bus is a single Root Port. If we try to 333 + * access any devices other than Device/Function 00.0 on Bus 0, the TLP 334 + * will go outside of the controller to the PCI bus. 
But with CFG Shift 335 + * Feature (ECAM) enabled in iATU, there is no guarantee that the 336 + * response is going to be all F's. Hence, to make sure that the 337 + * requester gets all F's response for accesses other than the Root 338 + * Port, configure iATU to block the transactions starting from 339 + * function 1 of the root bus to the end of the root bus (i.e., from 340 + * dbi_base + 4KB to dbi_base + 1MB). 341 + */ 342 + addr = pci->dbi_phys_addr + SZ_4K; 343 + writel_relaxed(lower_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_WR_BASE); 344 + writel_relaxed(upper_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_WR_BASE_HI); 345 + 346 + writel_relaxed(lower_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_RD_BASE); 347 + writel_relaxed(upper_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_RD_BASE_HI); 348 + 349 + addr_end = pci->dbi_phys_addr + SZ_1M - 1; 350 + 351 + writel_relaxed(lower_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_WR_LIMIT); 352 + writel_relaxed(upper_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_WR_LIMIT_HI); 353 + 354 + writel_relaxed(lower_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_RD_LIMIT); 355 + writel_relaxed(upper_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_RD_LIMIT_HI); 356 + 357 + val = readl_relaxed(pcie->parf + PARF_SYS_CTRL); 358 + val |= PCIE_ECAM_BLOCKER_EN; 359 + writel_relaxed(val, pcie->parf + PARF_SYS_CTRL); 360 + } 361 + 329 362 static int qcom_pcie_start_link(struct dw_pcie *pci) 330 363 { 331 364 struct qcom_pcie *pcie = to_qcom_pcie(pci); 332 365 333 - if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) { 334 - qcom_pcie_common_set_16gt_equalization(pci); 366 + qcom_pcie_common_set_equalization(pci); 367 + 368 + if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) 335 369 qcom_pcie_common_set_16gt_lane_margining(pci); 336 - } 337 370 338 371 /* Enable Link Training state machine */ 339 372 if (pcie->cfg->ops->ltssm_enable) ··· 463 414 464 415 static void qcom_pcie_2_1_0_ltssm_enable(struct qcom_pcie *pcie) 465 416 { 417 + struct dw_pcie *pci = pcie->pci; 466 418 u32 val; 467 419 420 + if (!pci->elbi_base) { 421 + dev_err(pci->dev, "ELBI is not present\n"); 422 + return; 423 + } 468 424 /* enable link training */ 469 - val = readl(pcie->elbi + ELBI_SYS_CTRL); 425 + val = readl(pci->elbi_base + ELBI_SYS_CTRL); 470 426 val |= ELBI_SYS_CTRL_LT_ENABLE; 471 - writel(val, pcie->elbi + ELBI_SYS_CTRL); 427 + writel(val, pci->elbi_base + ELBI_SYS_CTRL); 472 428 } 473 429 474 430 static int qcom_pcie_get_resources_2_1_0(struct qcom_pcie *pcie) ··· 1094 1040 return 0; 1095 1041 } 1096 1042 1097 - static int qcom_pcie_enable_aspm(struct pci_dev *pdev, void *userdata) 1098 - { 1099 - /* 1100 - * Downstream devices need to be in D0 state before enabling PCI PM 1101 - * substates. 
1102 - */ 1103 - pci_set_power_state_locked(pdev, PCI_D0); 1104 - pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL); 1105 - 1106 - return 0; 1107 - } 1108 - 1109 - static void qcom_pcie_host_post_init_2_7_0(struct qcom_pcie *pcie) 1110 - { 1111 - struct dw_pcie_rp *pp = &pcie->pci->pp; 1112 - 1113 - pci_walk_bus(pp->bridge->bus, qcom_pcie_enable_aspm, NULL); 1114 - } 1115 - 1116 1043 static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie) 1117 1044 { 1118 1045 struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0; ··· 1288 1253 return val & PCI_EXP_LNKSTA_DLLLA; 1289 1254 } 1290 1255 1291 - static void qcom_pcie_phy_exit(struct qcom_pcie *pcie) 1292 - { 1293 - struct qcom_pcie_port *port; 1294 - 1295 - if (list_empty(&pcie->ports)) 1296 - phy_exit(pcie->phy); 1297 - else 1298 - list_for_each_entry(port, &pcie->ports, list) 1299 - phy_exit(port->phy); 1300 - } 1301 - 1302 1256 static void qcom_pcie_phy_power_off(struct qcom_pcie *pcie) 1303 1257 { 1304 1258 struct qcom_pcie_port *port; 1305 1259 1306 - if (list_empty(&pcie->ports)) { 1307 - phy_power_off(pcie->phy); 1308 - } else { 1309 - list_for_each_entry(port, &pcie->ports, list) 1310 - phy_power_off(port->phy); 1311 - } 1260 + list_for_each_entry(port, &pcie->ports, list) 1261 + phy_power_off(port->phy); 1312 1262 } 1313 1263 1314 1264 static int qcom_pcie_phy_power_on(struct qcom_pcie *pcie) 1315 1265 { 1316 1266 struct qcom_pcie_port *port; 1317 - int ret = 0; 1267 + int ret; 1318 1268 1319 - if (list_empty(&pcie->ports)) { 1320 - ret = phy_set_mode_ext(pcie->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC); 1269 + list_for_each_entry(port, &pcie->ports, list) { 1270 + ret = phy_set_mode_ext(port->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC); 1321 1271 if (ret) 1322 1272 return ret; 1323 1273 1324 - ret = phy_power_on(pcie->phy); 1325 - if (ret) 1274 + ret = phy_power_on(port->phy); 1275 + if (ret) { 1276 + qcom_pcie_phy_power_off(pcie); 1326 1277 return ret; 1327 - } else { 1328 - list_for_each_entry(port, &pcie->ports, list) { 1329 - ret = phy_set_mode_ext(port->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC); 1330 - if (ret) 1331 - return ret; 1332 - 1333 - ret = phy_power_on(port->phy); 1334 - if (ret) { 1335 - qcom_pcie_phy_power_off(pcie); 1336 - return ret; 1337 - } 1338 1278 } 1339 1279 } 1340 1280 1341 - return ret; 1281 + return 0; 1342 1282 } 1343 1283 1344 1284 static int qcom_pcie_host_init(struct dw_pcie_rp *pp) 1345 1285 { 1346 1286 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 1347 1287 struct qcom_pcie *pcie = to_qcom_pcie(pci); 1288 + u16 offset; 1348 1289 int ret; 1349 1290 1350 1291 qcom_ep_reset_assert(pcie); ··· 1328 1317 ret = pcie->cfg->ops->init(pcie); 1329 1318 if (ret) 1330 1319 return ret; 1320 + 1321 + if (pp->ecam_enabled) { 1322 + /* 1323 + * Override ELBI when ECAM is enabled, as when ECAM is enabled, 1324 + * ELBI moves under the 'config' space. 
1325 + */ 1326 + offset = FIELD_GET(SLV_DBI_ELBI_ADDR_BASE, readl(pcie->parf + PARF_SLV_DBI_ELBI)); 1327 + pci->elbi_base = pci->dbi_base + offset; 1328 + 1329 + qcom_pci_config_ecam(pp); 1330 + } 1331 1331 1332 1332 ret = qcom_pcie_phy_power_on(pcie); 1333 1333 if (ret) ··· 1380 1358 pcie->cfg->ops->deinit(pcie); 1381 1359 } 1382 1360 1383 - static void qcom_pcie_host_post_init(struct dw_pcie_rp *pp) 1384 - { 1385 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 1386 - struct qcom_pcie *pcie = to_qcom_pcie(pci); 1387 - 1388 - if (pcie->cfg->ops->host_post_init) 1389 - pcie->cfg->ops->host_post_init(pcie); 1390 - } 1391 - 1392 1361 static const struct dw_pcie_host_ops qcom_pcie_dw_ops = { 1393 1362 .init = qcom_pcie_host_init, 1394 1363 .deinit = qcom_pcie_host_deinit, 1395 - .post_init = qcom_pcie_host_post_init, 1396 1364 }; 1397 1365 1398 1366 /* Qcom IP rev.: 2.1.0 Synopsys IP rev.: 4.01a */ ··· 1444 1432 .get_resources = qcom_pcie_get_resources_2_7_0, 1445 1433 .init = qcom_pcie_init_2_7_0, 1446 1434 .post_init = qcom_pcie_post_init_2_7_0, 1447 - .host_post_init = qcom_pcie_host_post_init_2_7_0, 1448 1435 .deinit = qcom_pcie_deinit_2_7_0, 1449 1436 .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, 1450 1437 .config_sid = qcom_pcie_config_sid_1_9_0, ··· 1454 1443 .get_resources = qcom_pcie_get_resources_2_7_0, 1455 1444 .init = qcom_pcie_init_2_7_0, 1456 1445 .post_init = qcom_pcie_post_init_2_7_0, 1457 - .host_post_init = qcom_pcie_host_post_init_2_7_0, 1458 1446 .deinit = qcom_pcie_deinit_2_7_0, 1459 1447 .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, 1460 1448 }; ··· 1750 1740 int ret = -ENOENT; 1751 1741 1752 1742 for_each_available_child_of_node_scoped(dev->of_node, of_port) { 1743 + if (!of_node_is_type(of_port, "pci")) 1744 + continue; 1753 1745 ret = qcom_pcie_parse_port(pcie, of_port); 1754 1746 if (ret) 1755 1747 goto err_port_del; ··· 1760 1748 return ret; 1761 1749 1762 1750 err_port_del: 1763 - list_for_each_entry_safe(port, tmp, &pcie->ports, list) 1751 + list_for_each_entry_safe(port, tmp, &pcie->ports, list) { 1752 + phy_exit(port->phy); 1764 1753 list_del(&port->list); 1754 + } 1765 1755 1766 1756 return ret; 1767 1757 } ··· 1771 1757 static int qcom_pcie_parse_legacy_binding(struct qcom_pcie *pcie) 1772 1758 { 1773 1759 struct device *dev = pcie->pci->dev; 1760 + struct qcom_pcie_port *port; 1761 + struct gpio_desc *reset; 1762 + struct phy *phy; 1774 1763 int ret; 1775 1764 1776 - pcie->phy = devm_phy_optional_get(dev, "pciephy"); 1777 - if (IS_ERR(pcie->phy)) 1778 - return PTR_ERR(pcie->phy); 1765 + phy = devm_phy_optional_get(dev, "pciephy"); 1766 + if (IS_ERR(phy)) 1767 + return PTR_ERR(phy); 1779 1768 1780 - pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH); 1781 - if (IS_ERR(pcie->reset)) 1782 - return PTR_ERR(pcie->reset); 1769 + reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH); 1770 + if (IS_ERR(reset)) 1771 + return PTR_ERR(reset); 1783 1772 1784 - ret = phy_init(pcie->phy); 1773 + ret = phy_init(phy); 1785 1774 if (ret) 1786 1775 return ret; 1776 + 1777 + port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL); 1778 + if (!port) 1779 + return -ENOMEM; 1780 + 1781 + port->reset = reset; 1782 + port->phy = phy; 1783 + INIT_LIST_HEAD(&port->list); 1784 + list_add_tail(&port->list, &pcie->ports); 1787 1785 1788 1786 return 0; 1789 1787 } ··· 1884 1858 pcie->parf = devm_platform_ioremap_resource_byname(pdev, "parf"); 1885 1859 if (IS_ERR(pcie->parf)) { 1886 1860 ret = PTR_ERR(pcie->parf); 1887 - goto err_pm_runtime_put; 1888 - } 1889 - 1890 - 
pcie->elbi = devm_platform_ioremap_resource_byname(pdev, "elbi"); 1891 - if (IS_ERR(pcie->elbi)) { 1892 - ret = PTR_ERR(pcie->elbi); 1893 1861 goto err_pm_runtime_put; 1894 1862 } 1895 1863 ··· 2004 1984 err_host_deinit: 2005 1985 dw_pcie_host_deinit(pp); 2006 1986 err_phy_exit: 2007 - qcom_pcie_phy_exit(pcie); 2008 - list_for_each_entry_safe(port, tmp, &pcie->ports, list) 1987 + list_for_each_entry_safe(port, tmp, &pcie->ports, list) { 1988 + phy_exit(port->phy); 2009 1989 list_del(&port->list); 1990 + } 2010 1991 err_pm_runtime_put: 2011 1992 pm_runtime_put(dev); 2012 1993 pm_runtime_disable(dev);
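The blocked range programmed by qcom_pci_config_ecam() above falls straight out of the ECAM address layout. A minimal sketch of the math, with a hypothetical helper name (ecam_cfg_offset() is not part of the driver):

    /*
     * ECAM assigns each PCI function a 4 KB configuration window:
     *
     *     offset = (bus << 20) | (device << 15) | (function << 12)
     *
     * so bus 0/device 0/function 0 occupies [dbi, dbi + 4K), and every
     * other device/function on bus 0 falls in [dbi + 4K, dbi + 1M) --
     * exactly the span covered by the PARF_BLOCK_SLV_AXI_* windows.
     */
    static u64 ecam_cfg_offset(u8 bus, u8 dev, u8 fn)
    {
            return ((u64)bus << 20) | ((u64)dev << 15) | ((u64)fn << 12);
    }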
+25 -5
drivers/pci/controller/dwc/pcie-rcar-gen4.c
··· 182 182 return ret; 183 183 } 184 184 185 - if (!reset_control_status(dw->core_rsts[DW_PCIE_PWR_RST].rstc)) 185 + if (!reset_control_status(dw->core_rsts[DW_PCIE_PWR_RST].rstc)) { 186 186 reset_control_assert(dw->core_rsts[DW_PCIE_PWR_RST].rstc); 187 + /* 188 + * R-Car V4H Reference Manual R19UH0186EJ0130 Rev.1.30 Apr. 189 + * 21, 2025 page 585 Figure 9.3.2 Software Reset flow (B) 190 + * indicates that for peripherals in HSC domain, after 191 + * reset has been asserted by writing a matching reset bit 192 + * into register SRCR, it is mandatory to wait 1ms. 193 + */ 194 + fsleep(1000); 195 + } 187 196 188 197 val = readl(rcar->base + PCIEMSR0); 189 198 if (rcar->drvdata->mode == DW_PCIE_RC_TYPE) { ··· 212 203 ret = reset_control_deassert(dw->core_rsts[DW_PCIE_PWR_RST].rstc); 213 204 if (ret) 214 205 goto err_unprepare; 206 + 207 + /* 208 + * Assure the reset is latched and the core is ready for DBI access. 209 + * On R-Car V4H, the PCIe reset is asynchronous and does not take 210 + * effect immediately, but needs a short time to complete. In case 211 + * DBI access happens in that short time, that access generates an 212 + * SError. To make sure that condition can never happen, read back the 213 + * state of the reset, which should turn the asynchronous reset into 214 + * synchronous one, and wait a little over 1ms to add additional 215 + * safety margin. 216 + */ 217 + reset_control_status(dw->core_rsts[DW_PCIE_PWR_RST].rstc); 218 + fsleep(1000); 215 219 216 220 if (rcar->drvdata->additional_common_init) 217 221 rcar->drvdata->additional_common_init(rcar); ··· 420 398 } 421 399 422 400 static const struct pci_epc_features rcar_gen4_pcie_epc_features = { 423 - .linkup_notifier = false, 424 401 .msi_capable = true, 425 - .msix_capable = false, 426 402 .bar[BAR_1] = { .type = BAR_RESERVED, }, 427 403 .bar[BAR_3] = { .type = BAR_RESERVED, }, 428 404 .bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = 256 }, ··· 721 701 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(23, 22), BIT(22)); 722 702 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(18, 16), GENMASK(17, 16)); 723 703 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(7, 6), BIT(6)); 724 - rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(2, 0), GENMASK(11, 0)); 704 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(2, 0), GENMASK(1, 0)); 725 705 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x1d4, GENMASK(16, 15), GENMASK(16, 15)); 726 706 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x514, BIT(26), BIT(26)); 727 707 rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x0f8, BIT(16), 0); ··· 731 711 val &= ~APP_HOLD_PHY_RST; 732 712 writel(val, rcar->base + PCIERSTCTRL1); 733 713 734 - ret = readl_poll_timeout(rcar->phy_base + 0x0f8, val, !(val & BIT(18)), 100, 10000); 714 + ret = readl_poll_timeout(rcar->phy_base + 0x0f8, val, val & BIT(18), 100, 10000); 735 715 if (ret < 0) 736 716 return ret; 737 717
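The reset_control_status() read-back added above is a standard trick for ordering posted MMIO writes; a minimal sketch of the pattern, assuming a posted-write interconnect (the helper name is illustrative, not driver code):

    /*
     * A write that deasserts an asynchronous reset may still be in
     * flight when the CPU issues the next access. Reading a register on
     * the same path forces the posted write to complete -- turning the
     * asynchronous reset into a synchronous one -- and the fsleep()
     * adds the settle margin documented in the R-Car manual on top.
     */
    static void deassert_reset_synchronously(struct reset_control *rstc)
    {
            reset_control_deassert(rstc);  /* write may be posted */
            reset_control_status(rstc);    /* read-back flushes it */
            fsleep(1000);                  /* ~1 ms settle margin */
    }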
+364
drivers/pci/controller/dwc/pcie-stm32-ep.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * STMicroelectronics STM32MP25 PCIe endpoint driver. 4 + * 5 + * Copyright (C) 2025 STMicroelectronics 6 + * Author: Christian Bruel <christian.bruel@foss.st.com> 7 + */ 8 + 9 + #include <linux/clk.h> 10 + #include <linux/mfd/syscon.h> 11 + #include <linux/of_platform.h> 12 + #include <linux/of_gpio.h> 13 + #include <linux/phy/phy.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/pm_runtime.h> 16 + #include <linux/regmap.h> 17 + #include <linux/reset.h> 18 + #include "pcie-designware.h" 19 + #include "pcie-stm32.h" 20 + 21 + struct stm32_pcie { 22 + struct dw_pcie pci; 23 + struct regmap *regmap; 24 + struct reset_control *rst; 25 + struct phy *phy; 26 + struct clk *clk; 27 + struct gpio_desc *perst_gpio; 28 + unsigned int perst_irq; 29 + }; 30 + 31 + static void stm32_pcie_ep_init(struct dw_pcie_ep *ep) 32 + { 33 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 34 + enum pci_barno bar; 35 + 36 + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 37 + dw_pcie_ep_reset_bar(pci, bar); 38 + } 39 + 40 + static int stm32_pcie_enable_link(struct dw_pcie *pci) 41 + { 42 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 43 + 44 + regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, 45 + STM32MP25_PCIECR_LTSSM_EN, 46 + STM32MP25_PCIECR_LTSSM_EN); 47 + 48 + return dw_pcie_wait_for_link(pci); 49 + } 50 + 51 + static void stm32_pcie_disable_link(struct dw_pcie *pci) 52 + { 53 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 54 + 55 + regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, STM32MP25_PCIECR_LTSSM_EN, 0); 56 + } 57 + 58 + static int stm32_pcie_start_link(struct dw_pcie *pci) 59 + { 60 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 61 + int ret; 62 + 63 + dev_dbg(pci->dev, "Enable link\n"); 64 + 65 + ret = stm32_pcie_enable_link(pci); 66 + if (ret) { 67 + dev_err(pci->dev, "PCIe cannot establish link: %d\n", ret); 68 + return ret; 69 + } 70 + 71 + enable_irq(stm32_pcie->perst_irq); 72 + 73 + return 0; 74 + } 75 + 76 + static void stm32_pcie_stop_link(struct dw_pcie *pci) 77 + { 78 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 79 + 80 + dev_dbg(pci->dev, "Disable link\n"); 81 + 82 + disable_irq(stm32_pcie->perst_irq); 83 + 84 + stm32_pcie_disable_link(pci); 85 + } 86 + 87 + static int stm32_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no, 88 + unsigned int type, u16 interrupt_num) 89 + { 90 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 91 + 92 + switch (type) { 93 + case PCI_IRQ_INTX: 94 + return dw_pcie_ep_raise_intx_irq(ep, func_no); 95 + case PCI_IRQ_MSI: 96 + return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num); 97 + default: 98 + dev_err(pci->dev, "UNKNOWN IRQ type\n"); 99 + return -EINVAL; 100 + } 101 + } 102 + 103 + static const struct pci_epc_features stm32_pcie_epc_features = { 104 + .msi_capable = true, 105 + .align = SZ_64K, 106 + }; 107 + 108 + static const struct pci_epc_features* 109 + stm32_pcie_get_features(struct dw_pcie_ep *ep) 110 + { 111 + return &stm32_pcie_epc_features; 112 + } 113 + 114 + static const struct dw_pcie_ep_ops stm32_pcie_ep_ops = { 115 + .init = stm32_pcie_ep_init, 116 + .raise_irq = stm32_pcie_raise_irq, 117 + .get_features = stm32_pcie_get_features, 118 + }; 119 + 120 + static const struct dw_pcie_ops dw_pcie_ops = { 121 + .start_link = stm32_pcie_start_link, 122 + .stop_link = stm32_pcie_stop_link, 123 + }; 124 + 125 + static int stm32_pcie_enable_resources(struct stm32_pcie *stm32_pcie) 126 + { 127 + int ret; 128 + 129 + ret = 
phy_init(stm32_pcie->phy); 130 + if (ret) 131 + return ret; 132 + 133 + ret = clk_prepare_enable(stm32_pcie->clk); 134 + if (ret) 135 + phy_exit(stm32_pcie->phy); 136 + 137 + return ret; 138 + } 139 + 140 + static void stm32_pcie_disable_resources(struct stm32_pcie *stm32_pcie) 141 + { 142 + clk_disable_unprepare(stm32_pcie->clk); 143 + 144 + phy_exit(stm32_pcie->phy); 145 + } 146 + 147 + static void stm32_pcie_perst_assert(struct dw_pcie *pci) 148 + { 149 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 150 + struct dw_pcie_ep *ep = &stm32_pcie->pci.ep; 151 + struct device *dev = pci->dev; 152 + 153 + dev_dbg(dev, "PERST asserted by host\n"); 154 + 155 + pci_epc_deinit_notify(ep->epc); 156 + 157 + stm32_pcie_disable_resources(stm32_pcie); 158 + 159 + pm_runtime_put_sync(dev); 160 + } 161 + 162 + static void stm32_pcie_perst_deassert(struct dw_pcie *pci) 163 + { 164 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 165 + struct device *dev = pci->dev; 166 + struct dw_pcie_ep *ep = &pci->ep; 167 + int ret; 168 + 169 + dev_dbg(dev, "PERST de-asserted by host\n"); 170 + 171 + ret = pm_runtime_resume_and_get(dev); 172 + if (ret < 0) { 173 + dev_err(dev, "Failed to resume runtime PM: %d\n", ret); 174 + return; 175 + } 176 + 177 + ret = stm32_pcie_enable_resources(stm32_pcie); 178 + if (ret) { 179 + dev_err(dev, "Failed to enable resources: %d\n", ret); 180 + goto err_pm_put_sync; 181 + } 182 + 183 + /* 184 + * Reprogram the configuration space registers here because the DBI 185 + * registers were reset by the PHY RCC during phy_init(). 186 + */ 187 + ret = dw_pcie_ep_init_registers(ep); 188 + if (ret) { 189 + dev_err(dev, "Failed to complete initialization: %d\n", ret); 190 + goto err_disable_resources; 191 + } 192 + 193 + pci_epc_init_notify(ep->epc); 194 + 195 + return; 196 + 197 + err_disable_resources: 198 + stm32_pcie_disable_resources(stm32_pcie); 199 + 200 + err_pm_put_sync: 201 + pm_runtime_put_sync(dev); 202 + } 203 + 204 + static irqreturn_t stm32_pcie_ep_perst_irq_thread(int irq, void *data) 205 + { 206 + struct stm32_pcie *stm32_pcie = data; 207 + struct dw_pcie *pci = &stm32_pcie->pci; 208 + u32 perst; 209 + 210 + perst = gpiod_get_value(stm32_pcie->perst_gpio); 211 + if (perst) 212 + stm32_pcie_perst_assert(pci); 213 + else 214 + stm32_pcie_perst_deassert(pci); 215 + 216 + irq_set_irq_type(gpiod_to_irq(stm32_pcie->perst_gpio), 217 + (perst ? 
IRQF_TRIGGER_HIGH : IRQF_TRIGGER_LOW)); 218 + 219 + return IRQ_HANDLED; 220 + } 221 + 222 + static int stm32_add_pcie_ep(struct stm32_pcie *stm32_pcie, 223 + struct platform_device *pdev) 224 + { 225 + struct dw_pcie_ep *ep = &stm32_pcie->pci.ep; 226 + struct device *dev = &pdev->dev; 227 + int ret; 228 + 229 + ret = regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, 230 + STM32MP25_PCIECR_TYPE_MASK, 231 + STM32MP25_PCIECR_EP); 232 + if (ret) 233 + return ret; 234 + 235 + reset_control_assert(stm32_pcie->rst); 236 + reset_control_deassert(stm32_pcie->rst); 237 + 238 + ep->ops = &stm32_pcie_ep_ops; 239 + 240 + ret = dw_pcie_ep_init(ep); 241 + if (ret) { 242 + dev_err(dev, "Failed to initialize ep: %d\n", ret); 243 + return ret; 244 + } 245 + 246 + ret = stm32_pcie_enable_resources(stm32_pcie); 247 + if (ret) { 248 + dev_err(dev, "Failed to enable resources: %d\n", ret); 249 + dw_pcie_ep_deinit(ep); 250 + return ret; 251 + } 252 + 253 + return 0; 254 + } 255 + 256 + static int stm32_pcie_probe(struct platform_device *pdev) 257 + { 258 + struct stm32_pcie *stm32_pcie; 259 + struct device *dev = &pdev->dev; 260 + int ret; 261 + 262 + stm32_pcie = devm_kzalloc(dev, sizeof(*stm32_pcie), GFP_KERNEL); 263 + if (!stm32_pcie) 264 + return -ENOMEM; 265 + 266 + stm32_pcie->pci.dev = dev; 267 + stm32_pcie->pci.ops = &dw_pcie_ops; 268 + 269 + stm32_pcie->regmap = syscon_regmap_lookup_by_compatible("st,stm32mp25-syscfg"); 270 + if (IS_ERR(stm32_pcie->regmap)) 271 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->regmap), 272 + "No syscfg specified\n"); 273 + 274 + stm32_pcie->phy = devm_phy_get(dev, NULL); 275 + if (IS_ERR(stm32_pcie->phy)) 276 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->phy), 277 + "failed to get pcie-phy\n"); 278 + 279 + stm32_pcie->clk = devm_clk_get(dev, NULL); 280 + if (IS_ERR(stm32_pcie->clk)) 281 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->clk), 282 + "Failed to get PCIe clock source\n"); 283 + 284 + stm32_pcie->rst = devm_reset_control_get_exclusive(dev, NULL); 285 + if (IS_ERR(stm32_pcie->rst)) 286 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->rst), 287 + "Failed to get PCIe reset\n"); 288 + 289 + stm32_pcie->perst_gpio = devm_gpiod_get(dev, "reset", GPIOD_IN); 290 + if (IS_ERR(stm32_pcie->perst_gpio)) 291 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->perst_gpio), 292 + "Failed to get reset GPIO\n"); 293 + 294 + ret = phy_set_mode(stm32_pcie->phy, PHY_MODE_PCIE); 295 + if (ret) 296 + return ret; 297 + 298 + platform_set_drvdata(pdev, stm32_pcie); 299 + 300 + pm_runtime_get_noresume(dev); 301 + 302 + ret = devm_pm_runtime_enable(dev); 303 + if (ret < 0) { 304 + pm_runtime_put_noidle(&pdev->dev); 305 + return dev_err_probe(dev, ret, "Failed to enable runtime PM\n"); 306 + } 307 + 308 + stm32_pcie->perst_irq = gpiod_to_irq(stm32_pcie->perst_gpio); 309 + 310 + /* Will be enabled in start_link when device is initialized. 
*/ 311 + irq_set_status_flags(stm32_pcie->perst_irq, IRQ_NOAUTOEN); 312 + 313 + ret = devm_request_threaded_irq(dev, stm32_pcie->perst_irq, NULL, 314 + stm32_pcie_ep_perst_irq_thread, 315 + IRQF_TRIGGER_HIGH | IRQF_ONESHOT, 316 + "perst_irq", stm32_pcie); 317 + if (ret) { 318 + pm_runtime_put_noidle(&pdev->dev); 319 + return dev_err_probe(dev, ret, "Failed to request PERST IRQ\n"); 320 + } 321 + 322 + ret = stm32_add_pcie_ep(stm32_pcie, pdev); 323 + if (ret) 324 + pm_runtime_put_noidle(&pdev->dev); 325 + 326 + return ret; 327 + } 328 + 329 + static void stm32_pcie_remove(struct platform_device *pdev) 330 + { 331 + struct stm32_pcie *stm32_pcie = platform_get_drvdata(pdev); 332 + struct dw_pcie *pci = &stm32_pcie->pci; 333 + struct dw_pcie_ep *ep = &pci->ep; 334 + 335 + dw_pcie_stop_link(pci); 336 + 337 + pci_epc_deinit_notify(ep->epc); 338 + dw_pcie_ep_deinit(ep); 339 + 340 + stm32_pcie_disable_resources(stm32_pcie); 341 + 342 + pm_runtime_put_sync(&pdev->dev); 343 + } 344 + 345 + static const struct of_device_id stm32_pcie_ep_of_match[] = { 346 + { .compatible = "st,stm32mp25-pcie-ep" }, 347 + {}, 348 + }; 349 + 350 + static struct platform_driver stm32_pcie_ep_driver = { 351 + .probe = stm32_pcie_probe, 352 + .remove = stm32_pcie_remove, 353 + .driver = { 354 + .name = "stm32-ep-pcie", 355 + .of_match_table = stm32_pcie_ep_of_match, 356 + }, 357 + }; 358 + 359 + module_platform_driver(stm32_pcie_ep_driver); 360 + 361 + MODULE_AUTHOR("Christian Bruel <christian.bruel@foss.st.com>"); 362 + MODULE_DESCRIPTION("STM32MP25 PCIe Endpoint Controller driver"); 363 + MODULE_LICENSE("GPL"); 364 + MODULE_DEVICE_TABLE(of, stm32_pcie_ep_of_match);
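The probe path above deliberately parks the PERST interrupt until the link is started. A minimal sketch of the pattern (perst_thread_fn and priv are illustrative names):

    /*
     * IRQ_NOAUTOEN keeps the line disabled across request_irq(), so a
     * PERST transition cannot be delivered before the endpoint core is
     * initialized; stm32_pcie_start_link() pairs this with enable_irq()
     * and stm32_pcie_stop_link() with disable_irq().
     */
    irq_set_status_flags(perst_irq, IRQ_NOAUTOEN);
    ret = devm_request_threaded_irq(dev, perst_irq, NULL, perst_thread_fn,
                                    IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
                                    "perst_irq", priv);
    /* ... later, from stm32_pcie_start_link(), once ready: */
    enable_irq(perst_irq);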
+358
drivers/pci/controller/dwc/pcie-stm32.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * STMicroelectronics STM32MP25 PCIe root complex driver. 4 + * 5 + * Copyright (C) 2025 STMicroelectronics 6 + * Author: Christian Bruel <christian.bruel@foss.st.com> 7 + */ 8 + 9 + #include <linux/clk.h> 10 + #include <linux/mfd/syscon.h> 11 + #include <linux/of_platform.h> 12 + #include <linux/phy/phy.h> 13 + #include <linux/pinctrl/consumer.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/pm_runtime.h> 16 + #include <linux/pm_wakeirq.h> 17 + #include <linux/regmap.h> 18 + #include <linux/reset.h> 19 + #include "pcie-designware.h" 20 + #include "pcie-stm32.h" 21 + #include "../../pci.h" 22 + 23 + struct stm32_pcie { 24 + struct dw_pcie pci; 25 + struct regmap *regmap; 26 + struct reset_control *rst; 27 + struct phy *phy; 28 + struct clk *clk; 29 + struct gpio_desc *perst_gpio; 30 + struct gpio_desc *wake_gpio; 31 + }; 32 + 33 + static void stm32_pcie_deassert_perst(struct stm32_pcie *stm32_pcie) 34 + { 35 + if (stm32_pcie->perst_gpio) { 36 + msleep(PCIE_T_PVPERL_MS); 37 + gpiod_set_value(stm32_pcie->perst_gpio, 0); 38 + } 39 + 40 + msleep(PCIE_RESET_CONFIG_WAIT_MS); 41 + } 42 + 43 + static void stm32_pcie_assert_perst(struct stm32_pcie *stm32_pcie) 44 + { 45 + gpiod_set_value(stm32_pcie->perst_gpio, 1); 46 + } 47 + 48 + static int stm32_pcie_start_link(struct dw_pcie *pci) 49 + { 50 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 51 + 52 + return regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, 53 + STM32MP25_PCIECR_LTSSM_EN, 54 + STM32MP25_PCIECR_LTSSM_EN); 55 + } 56 + 57 + static void stm32_pcie_stop_link(struct dw_pcie *pci) 58 + { 59 + struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); 60 + 61 + regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, 62 + STM32MP25_PCIECR_LTSSM_EN, 0); 63 + } 64 + 65 + static int stm32_pcie_suspend_noirq(struct device *dev) 66 + { 67 + struct stm32_pcie *stm32_pcie = dev_get_drvdata(dev); 68 + int ret; 69 + 70 + ret = dw_pcie_suspend_noirq(&stm32_pcie->pci); 71 + if (ret) 72 + return ret; 73 + 74 + stm32_pcie_assert_perst(stm32_pcie); 75 + 76 + clk_disable_unprepare(stm32_pcie->clk); 77 + 78 + if (!device_wakeup_path(dev)) 79 + phy_exit(stm32_pcie->phy); 80 + 81 + return pinctrl_pm_select_sleep_state(dev); 82 + } 83 + 84 + static int stm32_pcie_resume_noirq(struct device *dev) 85 + { 86 + struct stm32_pcie *stm32_pcie = dev_get_drvdata(dev); 87 + int ret; 88 + 89 + /* 90 + * The core clock is gated with CLKREQ# from the COMBOPHY REFCLK, 91 + * thus if no device is present, must deassert it with a GPIO from 92 + * pinctrl pinmux before accessing the DBI registers. 
93 + */ 94 + ret = pinctrl_pm_select_init_state(dev); 95 + if (ret) { 96 + dev_err(dev, "Failed to activate pinctrl pm state: %d\n", ret); 97 + return ret; 98 + } 99 + 100 + if (!device_wakeup_path(dev)) { 101 + ret = phy_init(stm32_pcie->phy); 102 + if (ret) { 103 + pinctrl_pm_select_default_state(dev); 104 + return ret; 105 + } 106 + } 107 + 108 + ret = clk_prepare_enable(stm32_pcie->clk); 109 + if (ret) 110 + goto err_phy_exit; 111 + 112 + stm32_pcie_deassert_perst(stm32_pcie); 113 + 114 + ret = dw_pcie_resume_noirq(&stm32_pcie->pci); 115 + if (ret) 116 + goto err_disable_clk; 117 + 118 + pinctrl_pm_select_default_state(dev); 119 + 120 + return 0; 121 + 122 + err_disable_clk: 123 + stm32_pcie_assert_perst(stm32_pcie); 124 + clk_disable_unprepare(stm32_pcie->clk); 125 + 126 + err_phy_exit: 127 + phy_exit(stm32_pcie->phy); 128 + pinctrl_pm_select_default_state(dev); 129 + 130 + return ret; 131 + } 132 + 133 + static const struct dev_pm_ops stm32_pcie_pm_ops = { 134 + NOIRQ_SYSTEM_SLEEP_PM_OPS(stm32_pcie_suspend_noirq, 135 + stm32_pcie_resume_noirq) 136 + }; 137 + 138 + static const struct dw_pcie_host_ops stm32_pcie_host_ops = { 139 + }; 140 + 141 + static const struct dw_pcie_ops dw_pcie_ops = { 142 + .start_link = stm32_pcie_start_link, 143 + .stop_link = stm32_pcie_stop_link 144 + }; 145 + 146 + static int stm32_add_pcie_port(struct stm32_pcie *stm32_pcie) 147 + { 148 + struct device *dev = stm32_pcie->pci.dev; 149 + unsigned int wake_irq; 150 + int ret; 151 + 152 + ret = phy_set_mode(stm32_pcie->phy, PHY_MODE_PCIE); 153 + if (ret) 154 + return ret; 155 + 156 + ret = phy_init(stm32_pcie->phy); 157 + if (ret) 158 + return ret; 159 + 160 + ret = regmap_update_bits(stm32_pcie->regmap, SYSCFG_PCIECR, 161 + STM32MP25_PCIECR_TYPE_MASK, 162 + STM32MP25_PCIECR_RC); 163 + if (ret) 164 + goto err_phy_exit; 165 + 166 + stm32_pcie_deassert_perst(stm32_pcie); 167 + 168 + if (stm32_pcie->wake_gpio) { 169 + wake_irq = gpiod_to_irq(stm32_pcie->wake_gpio); 170 + ret = dev_pm_set_dedicated_wake_irq(dev, wake_irq); 171 + if (ret) { 172 + dev_err(dev, "Failed to enable wakeup irq %d\n", ret); 173 + goto err_assert_perst; 174 + } 175 + irq_set_irq_type(wake_irq, IRQ_TYPE_EDGE_FALLING); 176 + } 177 + 178 + return 0; 179 + 180 + err_assert_perst: 181 + stm32_pcie_assert_perst(stm32_pcie); 182 + 183 + err_phy_exit: 184 + phy_exit(stm32_pcie->phy); 185 + 186 + return ret; 187 + } 188 + 189 + static void stm32_remove_pcie_port(struct stm32_pcie *stm32_pcie) 190 + { 191 + dev_pm_clear_wake_irq(stm32_pcie->pci.dev); 192 + 193 + stm32_pcie_assert_perst(stm32_pcie); 194 + 195 + phy_exit(stm32_pcie->phy); 196 + } 197 + 198 + static int stm32_pcie_parse_port(struct stm32_pcie *stm32_pcie) 199 + { 200 + struct device *dev = stm32_pcie->pci.dev; 201 + struct device_node *root_port; 202 + 203 + root_port = of_get_next_available_child(dev->of_node, NULL); 204 + 205 + stm32_pcie->phy = devm_of_phy_get(dev, root_port, NULL); 206 + if (IS_ERR(stm32_pcie->phy)) { 207 + of_node_put(root_port); 208 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->phy), 209 + "Failed to get pcie-phy\n"); 210 + } 211 + 212 + stm32_pcie->perst_gpio = devm_fwnode_gpiod_get(dev, of_fwnode_handle(root_port), 213 + "reset", GPIOD_OUT_HIGH, NULL); 214 + if (IS_ERR(stm32_pcie->perst_gpio)) { 215 + if (PTR_ERR(stm32_pcie->perst_gpio) != -ENOENT) { 216 + of_node_put(root_port); 217 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->perst_gpio), 218 + "Failed to get reset GPIO\n"); 219 + } 220 + stm32_pcie->perst_gpio = NULL; 221 + } 222 + 223 + 
stm32_pcie->wake_gpio = devm_fwnode_gpiod_get(dev, of_fwnode_handle(root_port), 224 + "wake", GPIOD_IN, NULL); 225 + 226 + if (IS_ERR(stm32_pcie->wake_gpio)) { 227 + if (PTR_ERR(stm32_pcie->wake_gpio) != -ENOENT) { 228 + of_node_put(root_port); 229 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->wake_gpio), 230 + "Failed to get wake GPIO\n"); 231 + } 232 + stm32_pcie->wake_gpio = NULL; 233 + } 234 + 235 + of_node_put(root_port); 236 + 237 + return 0; 238 + } 239 + 240 + static int stm32_pcie_probe(struct platform_device *pdev) 241 + { 242 + struct stm32_pcie *stm32_pcie; 243 + struct device *dev = &pdev->dev; 244 + int ret; 245 + 246 + stm32_pcie = devm_kzalloc(dev, sizeof(*stm32_pcie), GFP_KERNEL); 247 + if (!stm32_pcie) 248 + return -ENOMEM; 249 + 250 + stm32_pcie->pci.dev = dev; 251 + stm32_pcie->pci.ops = &dw_pcie_ops; 252 + stm32_pcie->pci.pp.ops = &stm32_pcie_host_ops; 253 + 254 + stm32_pcie->regmap = syscon_regmap_lookup_by_compatible("st,stm32mp25-syscfg"); 255 + if (IS_ERR(stm32_pcie->regmap)) 256 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->regmap), 257 + "No syscfg specified\n"); 258 + 259 + stm32_pcie->clk = devm_clk_get(dev, NULL); 260 + if (IS_ERR(stm32_pcie->clk)) 261 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->clk), 262 + "Failed to get PCIe clock source\n"); 263 + 264 + stm32_pcie->rst = devm_reset_control_get_exclusive(dev, NULL); 265 + if (IS_ERR(stm32_pcie->rst)) 266 + return dev_err_probe(dev, PTR_ERR(stm32_pcie->rst), 267 + "Failed to get PCIe reset\n"); 268 + 269 + ret = stm32_pcie_parse_port(stm32_pcie); 270 + if (ret) 271 + return ret; 272 + 273 + platform_set_drvdata(pdev, stm32_pcie); 274 + 275 + ret = stm32_add_pcie_port(stm32_pcie); 276 + if (ret) 277 + return ret; 278 + 279 + reset_control_assert(stm32_pcie->rst); 280 + reset_control_deassert(stm32_pcie->rst); 281 + 282 + ret = clk_prepare_enable(stm32_pcie->clk); 283 + if (ret) { 284 + dev_err(dev, "Core clock enable failed %d\n", ret); 285 + goto err_remove_port; 286 + } 287 + 288 + ret = pm_runtime_set_active(dev); 289 + if (ret < 0) { 290 + dev_err_probe(dev, ret, "Failed to activate runtime PM\n"); 291 + goto err_disable_clk; 292 + } 293 + 294 + pm_runtime_no_callbacks(dev); 295 + 296 + ret = devm_pm_runtime_enable(dev); 297 + if (ret < 0) { 298 + dev_err_probe(dev, ret, "Failed to enable runtime PM\n"); 299 + goto err_disable_clk; 300 + } 301 + 302 + ret = dw_pcie_host_init(&stm32_pcie->pci.pp); 303 + if (ret) 304 + goto err_disable_clk; 305 + 306 + if (stm32_pcie->wake_gpio) 307 + device_init_wakeup(dev, true); 308 + 309 + return 0; 310 + 311 + err_disable_clk: 312 + clk_disable_unprepare(stm32_pcie->clk); 313 + 314 + err_remove_port: 315 + stm32_remove_pcie_port(stm32_pcie); 316 + 317 + return ret; 318 + } 319 + 320 + static void stm32_pcie_remove(struct platform_device *pdev) 321 + { 322 + struct stm32_pcie *stm32_pcie = platform_get_drvdata(pdev); 323 + struct dw_pcie_rp *pp = &stm32_pcie->pci.pp; 324 + 325 + if (stm32_pcie->wake_gpio) 326 + device_init_wakeup(&pdev->dev, false); 327 + 328 + dw_pcie_host_deinit(pp); 329 + 330 + clk_disable_unprepare(stm32_pcie->clk); 331 + 332 + stm32_remove_pcie_port(stm32_pcie); 333 + 334 + pm_runtime_put_noidle(&pdev->dev); 335 + } 336 + 337 + static const struct of_device_id stm32_pcie_of_match[] = { 338 + { .compatible = "st,stm32mp25-pcie-rc" }, 339 + {}, 340 + }; 341 + 342 + static struct platform_driver stm32_pcie_driver = { 343 + .probe = stm32_pcie_probe, 344 + .remove = stm32_pcie_remove, 345 + .driver = { 346 + .name = "stm32-pcie", 347 + 
.of_match_table = stm32_pcie_of_match, 348 + .pm = &stm32_pcie_pm_ops, 349 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 350 + }, 351 + }; 352 + 353 + module_platform_driver(stm32_pcie_driver); 354 + 355 + MODULE_AUTHOR("Christian Bruel <christian.bruel@foss.st.com>"); 356 + MODULE_DESCRIPTION("STM32MP25 PCIe Controller driver"); 357 + MODULE_LICENSE("GPL"); 358 + MODULE_DEVICE_TABLE(of, stm32_pcie_of_match);
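Both delays in stm32_pcie_deassert_perst() above encode reset timing required by the PCIe specs; a short sketch of the contract, using the constants from drivers/pci/pci.h:

    /*
     * PERST# must stay asserted for at least 100 ms after power rails
     * are stable (T_PVPERL), and the host must then give the device
     * time to become configuration-ready before the first config
     * request.
     */
    msleep(PCIE_T_PVPERL_MS);           /* power stable -> PERST# release */
    gpiod_set_value(perst_gpio, 0);     /* de-assert PERST# */
    msleep(PCIE_RESET_CONFIG_WAIT_MS);  /* settle before config accesses */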
+16
drivers/pci/controller/dwc/pcie-stm32.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * ST PCIe driver definitions for STM32-MP25 SoC 4 + * 5 + * Copyright (C) 2025 STMicroelectronics - All Rights Reserved 6 + * Author: Christian Bruel <christian.bruel@foss.st.com> 7 + */ 8 + 9 + #define to_stm32_pcie(x) dev_get_drvdata((x)->dev) 10 + 11 + #define STM32MP25_PCIECR_TYPE_MASK GENMASK(11, 8) 12 + #define STM32MP25_PCIECR_EP 0 13 + #define STM32MP25_PCIECR_LTSSM_EN BIT(2) 14 + #define STM32MP25_PCIECR_RC BIT(10) 15 + 16 + #define SYSCFG_PCIECR 0x6000
+37 -14
drivers/pci/controller/dwc/pcie-tegra194.c
··· 1214 1214 struct mrq_uphy_response resp; 1215 1215 struct tegra_bpmp_message msg; 1216 1216 struct mrq_uphy_request req; 1217 + int err; 1217 1218 1218 1219 /* 1219 1220 * Controller-5 doesn't need to have its state set by BPMP-FW in ··· 1237 1236 msg.rx.data = &resp; 1238 1237 msg.rx.size = sizeof(resp); 1239 1238 1240 - return tegra_bpmp_transfer(pcie->bpmp, &msg); 1239 + err = tegra_bpmp_transfer(pcie->bpmp, &msg); 1240 + if (err) 1241 + return err; 1242 + if (msg.rx.ret) 1243 + return -EINVAL; 1244 + 1245 + return 0; 1241 1246 } 1242 1247 1243 1248 static int tegra_pcie_bpmp_set_pll_state(struct tegra_pcie_dw *pcie, ··· 1252 1245 struct mrq_uphy_response resp; 1253 1246 struct tegra_bpmp_message msg; 1254 1247 struct mrq_uphy_request req; 1248 + int err; 1255 1249 1256 1250 memset(&req, 0, sizeof(req)); 1257 1251 memset(&resp, 0, sizeof(resp)); ··· 1272 1264 msg.rx.data = &resp; 1273 1265 msg.rx.size = sizeof(resp); 1274 1266 1275 - return tegra_bpmp_transfer(pcie->bpmp, &msg); 1267 + err = tegra_bpmp_transfer(pcie->bpmp, &msg); 1268 + if (err) 1269 + return err; 1270 + if (msg.rx.ret) 1271 + return -EINVAL; 1272 + 1273 + return 0; 1276 1274 } 1277 1275 1278 1276 static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie) 1279 1277 { 1280 1278 struct dw_pcie_rp *pp = &pcie->pci.pp; 1281 - struct pci_bus *child, *root_bus = NULL; 1279 + struct pci_bus *child, *root_port_bus = NULL; 1282 1280 struct pci_dev *pdev; 1283 1281 1284 1282 /* ··· 1297 1283 */ 1298 1284 1299 1285 list_for_each_entry(child, &pp->bridge->bus->children, node) { 1300 - /* Bring downstream devices to D0 if they are not already in */ 1301 1286 if (child->parent == pp->bridge->bus) { 1302 - root_bus = child; 1287 + root_port_bus = child; 1303 1288 break; 1304 1289 } 1305 1290 } 1306 1291 1307 - if (!root_bus) { 1308 - dev_err(pcie->dev, "Failed to find downstream devices\n"); 1292 + if (!root_port_bus) { 1293 + dev_err(pcie->dev, "Failed to find downstream bus of Root Port\n"); 1309 1294 return; 1310 1295 } 1311 1296 1312 - list_for_each_entry(pdev, &root_bus->devices, bus_list) { 1297 + /* Bring downstream devices to D0 if they are not already in */ 1298 + list_for_each_entry(pdev, &root_port_bus->devices, bus_list) { 1313 1299 if (PCI_SLOT(pdev->devfn) == 0) { 1314 1300 if (pci_set_power_state(pdev, PCI_D0)) 1315 1301 dev_err(pcie->dev, ··· 1736 1722 ret); 1737 1723 } 1738 1724 1739 - ret = tegra_pcie_bpmp_set_pll_state(pcie, false); 1725 + ret = tegra_pcie_bpmp_set_ctrl_state(pcie, false); 1740 1726 if (ret) 1741 - dev_err(pcie->dev, "Failed to turn off UPHY: %d\n", ret); 1727 + dev_err(pcie->dev, "Failed to disable controller: %d\n", ret); 1742 1728 1743 1729 pcie->ep_state = EP_STATE_DISABLED; 1744 1730 dev_dbg(pcie->dev, "Uninitialization of endpoint is completed\n"); ··· 1955 1941 return IRQ_HANDLED; 1956 1942 } 1957 1943 1944 + static void tegra_pcie_ep_init(struct dw_pcie_ep *ep) 1945 + { 1946 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 1947 + enum pci_barno bar; 1948 + 1949 + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 1950 + dw_pcie_ep_reset_bar(pci, bar); 1951 + }; 1952 + 1958 1953 static int tegra_pcie_ep_raise_intx_irq(struct tegra_pcie_dw *pcie, u16 irq) 1959 1954 { 1960 1955 /* Tegra194 supports only INTA */ ··· 1978 1955 1979 1956 static int tegra_pcie_ep_raise_msi_irq(struct tegra_pcie_dw *pcie, u16 irq) 1980 1957 { 1981 - if (unlikely(irq > 31)) 1958 + if (unlikely(irq > 32)) 1982 1959 return -EINVAL; 1983 1960 1984 - appl_writel(pcie, BIT(irq), APPL_MSI_CTRL_1); 1961 + 
appl_writel(pcie, BIT(irq - 1), APPL_MSI_CTRL_1); 1985 1962 1986 1963 return 0; 1987 1964 } ··· 2021 1998 2022 1999 static const struct pci_epc_features tegra_pcie_epc_features = { 2023 2000 .linkup_notifier = true, 2024 - .msi_capable = false, 2025 - .msix_capable = false, 2001 + .msi_capable = true, 2026 2002 .bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M, 2027 2003 .only_64bit = true, }, 2028 2004 .bar[BAR_1] = { .type = BAR_RESERVED, }, ··· 2039 2017 } 2040 2018 2041 2019 static const struct dw_pcie_ep_ops pcie_ep_ops = { 2020 + .init = tegra_pcie_ep_init, 2042 2021 .raise_irq = tegra_pcie_ep_raise_irq, 2043 2022 .get_features = tegra_pcie_ep_get_features, 2044 2023 };
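The raise_msi_irq() change above corrects an off-by-one in vector numbering; a sketch of the mapping it implements:

    /*
     * Endpoint MSI vector numbers are 1-based (1..32), while
     * APPL_MSI_CTRL_1 has one trigger bit per vector. Vector N maps to
     * bit N-1, so the old code both rejected the valid vector 32 and
     * pulsed the bit belonging to vector N+1.
     */
    if (unlikely(irq < 1 || irq > 32))
            return -EINVAL;
    appl_writel(pcie, BIT(irq - 1), APPL_MSI_CTRL_1);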
+2 -6
drivers/pci/controller/pci-hyperv.c
··· 1680 1680 /** 1681 1681 * hv_msi_free() - Free the MSI. 1682 1682 * @domain: The interrupt domain pointer 1683 - * @info: Extra MSI-related context 1684 1683 * @irq: Identifies the IRQ. 1685 1684 * 1686 1685 * The Hyper-V parent partition and hypervisor are tracking the ··· 1687 1688 * table up to date. This callback sends a message that frees 1688 1689 * the IRT entry and related tracking nonsense. 1689 1690 */ 1690 - static void hv_msi_free(struct irq_domain *domain, struct msi_domain_info *info, 1691 - unsigned int irq) 1691 + static void hv_msi_free(struct irq_domain *domain, unsigned int irq) 1692 1692 { 1693 1693 struct hv_pcibus_device *hbus; 1694 1694 struct hv_pci_dev *hpdev; ··· 2179 2181 2180 2182 static void hv_pcie_domain_free(struct irq_domain *d, unsigned int virq, unsigned int nr_irqs) 2181 2183 { 2182 - struct msi_domain_info *info = d->host_data; 2183 - 2184 2184 for (int i = 0; i < nr_irqs; i++) 2185 - hv_msi_free(d, info, virq + i); 2185 + hv_msi_free(d, virq + i); 2186 2186 2187 2187 irq_domain_free_irqs_top(d, virq, nr_irqs); 2188 2188 }
+14 -15
drivers/pci/controller/pci-tegra.c
··· 14 14 */ 15 15 16 16 #include <linux/clk.h> 17 + #include <linux/cleanup.h> 17 18 #include <linux/debugfs.h> 18 19 #include <linux/delay.h> 19 20 #include <linux/export.h> ··· 271 270 DECLARE_BITMAP(used, INT_PCI_MSI_NR); 272 271 struct irq_domain *domain; 273 272 struct mutex map_lock; 274 - spinlock_t mask_lock; 273 + raw_spinlock_t mask_lock; 275 274 void *virt; 276 275 dma_addr_t phys; 277 276 int irq; ··· 1345 1344 unsigned int i; 1346 1345 int err; 1347 1346 1348 - port->phys = devm_kcalloc(dev, sizeof(phy), port->lanes, GFP_KERNEL); 1347 + port->phys = devm_kcalloc(dev, port->lanes, sizeof(phy), GFP_KERNEL); 1349 1348 if (!port->phys) 1350 1349 return -ENOMEM; 1351 1350 ··· 1582 1581 struct tegra_msi *msi = irq_data_get_irq_chip_data(d); 1583 1582 struct tegra_pcie *pcie = msi_to_pcie(msi); 1584 1583 unsigned int index = d->hwirq / 32; 1585 - unsigned long flags; 1586 1584 u32 value; 1587 1585 1588 - spin_lock_irqsave(&msi->mask_lock, flags); 1589 - value = afi_readl(pcie, AFI_MSI_EN_VEC(index)); 1590 - value &= ~BIT(d->hwirq % 32); 1591 - afi_writel(pcie, value, AFI_MSI_EN_VEC(index)); 1592 - spin_unlock_irqrestore(&msi->mask_lock, flags); 1586 + scoped_guard(raw_spinlock_irqsave, &msi->mask_lock) { 1587 + value = afi_readl(pcie, AFI_MSI_EN_VEC(index)); 1588 + value &= ~BIT(d->hwirq % 32); 1589 + afi_writel(pcie, value, AFI_MSI_EN_VEC(index)); 1590 + } 1593 1591 } 1594 1592 1595 1593 static void tegra_msi_irq_unmask(struct irq_data *d) ··· 1596 1596 struct tegra_msi *msi = irq_data_get_irq_chip_data(d); 1597 1597 struct tegra_pcie *pcie = msi_to_pcie(msi); 1598 1598 unsigned int index = d->hwirq / 32; 1599 - unsigned long flags; 1600 1599 u32 value; 1601 1600 1602 - spin_lock_irqsave(&msi->mask_lock, flags); 1603 - value = afi_readl(pcie, AFI_MSI_EN_VEC(index)); 1604 - value |= BIT(d->hwirq % 32); 1605 - afi_writel(pcie, value, AFI_MSI_EN_VEC(index)); 1606 - spin_unlock_irqrestore(&msi->mask_lock, flags); 1601 + scoped_guard(raw_spinlock_irqsave, &msi->mask_lock) { 1602 + value = afi_readl(pcie, AFI_MSI_EN_VEC(index)); 1603 + value |= BIT(d->hwirq % 32); 1604 + afi_writel(pcie, value, AFI_MSI_EN_VEC(index)); 1605 + } 1607 1606 } 1608 1607 1609 1608 static void tegra_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) ··· 1710 1711 int err; 1711 1712 1712 1713 mutex_init(&msi->map_lock); 1713 - spin_lock_init(&msi->mask_lock); 1714 + raw_spin_lock_init(&msi->mask_lock); 1714 1715 1715 1716 if (IS_ENABLED(CONFIG_PCI_MSI)) { 1716 1717 err = tegra_allocate_domains(msi);
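The conversion above replaces open-coded spin_lock_irqsave()/spin_unlock_irqrestore() pairs with the guard infrastructure from <linux/cleanup.h> and makes the lock a raw spinlock. A minimal generic sketch (example_mask_bit() is an illustrative name, not driver code):

    /*
     * scoped_guard() takes the lock on entry to the braced block and
     * releases it automatically on every exit path. A raw spinlock is
     * used because the critical section is a short register update that
     * must stay non-sleeping even on PREEMPT_RT, where regular
     * spinlocks become sleeping locks.
     */
    static void example_mask_bit(raw_spinlock_t *lock, void __iomem *reg,
                                 unsigned int bit)
    {
            scoped_guard(raw_spinlock_irqsave, lock) {
                    u32 val = readl(reg);
                    val &= ~BIT(bit);
                    writel(val, reg);
            }
    }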
+1 -1
drivers/pci/controller/pci-xgene-msi.c
··· 311 311 msi_val = xgene_msi_int_read(xgene_msi, i); 312 312 if (msi_val) { 313 313 dev_err(&pdev->dev, "Failed to clear spurious IRQ\n"); 314 - return EINVAL; 314 + return -EINVAL; 315 315 } 316 316 317 317 irq = platform_get_irq(pdev, i);
+23
drivers/pci/controller/pcie-mediatek-gen3.c
··· 102 102 #define PCIE_MSI_SET_ADDR_HI_BASE 0xc80 103 103 #define PCIE_MSI_SET_ADDR_HI_OFFSET 0x04 104 104 105 + #define PCIE_RESOURCE_CTRL_REG 0xd2c 106 + #define PCIE_RSRC_SYS_CLK_RDY_TIME_MASK GENMASK(7, 0) 107 + 105 108 #define PCIE_ICMD_PM_REG 0x198 106 109 #define PCIE_TURN_OFF_LINK BIT(4) 107 110 ··· 152 149 * struct mtk_gen3_pcie_pdata - differentiate between host generations 153 150 * @power_up: pcie power_up callback 154 151 * @phy_resets: phy reset lines SoC data. 152 + * @sys_clk_rdy_time_us: System clock ready time override (microseconds) 155 153 * @flags: pcie device flags. 156 154 */ 157 155 struct mtk_gen3_pcie_pdata { ··· 161 157 const char *id[MAX_NUM_PHY_RESETS]; 162 158 int num_resets; 163 159 } phy_resets; 160 + u8 sys_clk_rdy_time_us; 164 161 u32 flags; 165 162 }; 166 163 ··· 438 433 val &= ~PCIE_CONF_LINK2_LCR2_LINK_SPEED; 439 434 val |= FIELD_PREP(PCIE_CONF_LINK2_LCR2_LINK_SPEED, pcie->max_link_speed); 440 435 writel_relaxed(val, pcie->base + PCIE_CONF_LINK2_CTL_STS); 436 + } 437 + 438 + /* If parameter is present, adjust SYS_CLK_RDY_TIME to avoid glitching */ 439 + if (pcie->soc->sys_clk_rdy_time_us) { 440 + val = readl_relaxed(pcie->base + PCIE_RESOURCE_CTRL_REG); 441 + FIELD_MODIFY(PCIE_RSRC_SYS_CLK_RDY_TIME_MASK, &val, 442 + pcie->soc->sys_clk_rdy_time_us); 443 + writel_relaxed(val, pcie->base + PCIE_RESOURCE_CTRL_REG); 441 444 } 442 445 443 446 /* Set class code */ ··· 1340 1327 }, 1341 1328 }; 1342 1329 1330 + static const struct mtk_gen3_pcie_pdata mtk_pcie_soc_mt8196 = { 1331 + .power_up = mtk_pcie_power_up, 1332 + .phy_resets = { 1333 + .id[0] = "phy", 1334 + .num_resets = 1, 1335 + }, 1336 + .sys_clk_rdy_time_us = 10, 1337 + }; 1338 + 1343 1339 static const struct mtk_gen3_pcie_pdata mtk_pcie_soc_en7581 = { 1344 1340 .power_up = mtk_pcie_en7581_power_up, 1345 1341 .phy_resets = { ··· 1363 1341 static const struct of_device_id mtk_pcie_of_match[] = { 1364 1342 { .compatible = "airoha,en7581-pcie", .data = &mtk_pcie_soc_en7581 }, 1365 1343 { .compatible = "mediatek,mt8192-pcie", .data = &mtk_pcie_soc_mt8192 }, 1344 + { .compatible = "mediatek,mt8196-pcie", .data = &mtk_pcie_soc_mt8196 }, 1366 1345 {}, 1367 1346 }; 1368 1347 MODULE_DEVICE_TABLE(of, mtk_pcie_of_match);
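FIELD_MODIFY(), used above to program the clock-ready time, is the in-place counterpart of FIELD_PREP() from <linux/bitfield.h>; the call is equivalent to an explicit clear-then-set on the masked field:

    /* FIELD_MODIFY(PCIE_RSRC_SYS_CLK_RDY_TIME_MASK, &val, t) expands to
     * the equivalent of: */
    val &= ~PCIE_RSRC_SYS_CLK_RDY_TIME_MASK;
    val |= FIELD_PREP(PCIE_RSRC_SYS_CLK_RDY_TIME_MASK, t);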
-2
drivers/pci/controller/pcie-rcar-ep.c
··· 436 436 } 437 437 438 438 static const struct pci_epc_features rcar_pcie_epc_features = { 439 - .linkup_notifier = false, 440 439 .msi_capable = true, 441 - .msix_capable = false, 442 440 /* use 64-bit BARs so mark BAR[1,3,5] as reserved */ 443 441 .bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = 128, 444 442 .only_64bit = true, },
+16 -26
drivers/pci/controller/pcie-rcar-host.c
··· 12 12 */ 13 13 14 14 #include <linux/bitops.h> 15 + #include <linux/cleanup.h> 15 16 #include <linux/clk.h> 16 17 #include <linux/clk-provider.h> 17 18 #include <linux/delay.h> ··· 39 38 DECLARE_BITMAP(used, INT_PCI_MSI_NR); 40 39 struct irq_domain *domain; 41 40 struct mutex map_lock; 42 - spinlock_t mask_lock; 41 + raw_spinlock_t mask_lock; 43 42 int irq1; 44 43 int irq2; 45 44 }; ··· 53 52 int (*phy_init_fn)(struct rcar_pcie_host *host); 54 53 }; 55 54 56 - static DEFINE_SPINLOCK(pmsr_lock); 57 - 58 55 static int rcar_pcie_wakeup(struct device *pcie_dev, void __iomem *pcie_base) 59 56 { 60 - unsigned long flags; 61 57 u32 pmsr, val; 62 58 int ret = 0; 63 59 64 - spin_lock_irqsave(&pmsr_lock, flags); 65 - 66 - if (!pcie_base || pm_runtime_suspended(pcie_dev)) { 67 - ret = -EINVAL; 68 - goto unlock_exit; 69 - } 60 + if (!pcie_base || pm_runtime_suspended(pcie_dev)) 61 + return -EINVAL; 70 62 71 63 pmsr = readl(pcie_base + PMSR); 72 64 ··· 81 87 writel(L1FAEG | PMEL1RX, pcie_base + PMSR); 82 88 } 83 89 84 - unlock_exit: 85 - spin_unlock_irqrestore(&pmsr_lock, flags); 86 90 return ret; 87 91 } 88 92 ··· 576 584 unsigned int index = find_first_bit(&reg, 32); 577 585 int ret; 578 586 579 - ret = generic_handle_domain_irq(msi->domain->parent, index); 587 + ret = generic_handle_domain_irq(msi->domain, index); 580 588 if (ret) { 581 589 /* Unknown MSI, just clear it */ 582 590 dev_dbg(dev, "unexpected MSI\n"); ··· 603 611 { 604 612 struct rcar_msi *msi = irq_data_get_irq_chip_data(d); 605 613 struct rcar_pcie *pcie = &msi_to_host(msi)->pcie; 606 - unsigned long flags; 607 614 u32 value; 608 615 609 - spin_lock_irqsave(&msi->mask_lock, flags); 610 - value = rcar_pci_read_reg(pcie, PCIEMSIIER); 611 - value &= ~BIT(d->hwirq); 612 - rcar_pci_write_reg(pcie, value, PCIEMSIIER); 613 - spin_unlock_irqrestore(&msi->mask_lock, flags); 616 + scoped_guard(raw_spinlock_irqsave, &msi->mask_lock) { 617 + value = rcar_pci_read_reg(pcie, PCIEMSIIER); 618 + value &= ~BIT(d->hwirq); 619 + rcar_pci_write_reg(pcie, value, PCIEMSIIER); 620 + } 614 621 } 615 622 616 623 static void rcar_msi_irq_unmask(struct irq_data *d) 617 624 { 618 625 struct rcar_msi *msi = irq_data_get_irq_chip_data(d); 619 626 struct rcar_pcie *pcie = &msi_to_host(msi)->pcie; 620 - unsigned long flags; 621 627 u32 value; 622 628 623 - spin_lock_irqsave(&msi->mask_lock, flags); 624 - value = rcar_pci_read_reg(pcie, PCIEMSIIER); 625 - value |= BIT(d->hwirq); 626 - rcar_pci_write_reg(pcie, value, PCIEMSIIER); 627 - spin_unlock_irqrestore(&msi->mask_lock, flags); 629 + scoped_guard(raw_spinlock_irqsave, &msi->mask_lock) { 630 + value = rcar_pci_read_reg(pcie, PCIEMSIIER); 631 + value |= BIT(d->hwirq); 632 + rcar_pci_write_reg(pcie, value, PCIEMSIIER); 633 + } 628 634 } 629 635 630 636 static void rcar_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) ··· 735 745 int err; 736 746 737 747 mutex_init(&msi->map_lock); 738 - spin_lock_init(&msi->mask_lock); 748 + raw_spin_lock_init(&msi->mask_lock); 739 749 740 750 err = of_address_to_resource(dev->of_node, 0, &res); 741 751 if (err)
-1
drivers/pci/controller/pcie-rockchip-ep.c
··· 694 694 static const struct pci_epc_features rockchip_pcie_epc_features = { 695 695 .linkup_notifier = true, 696 696 .msi_capable = true, 697 - .msix_capable = false, 698 697 .intx_capable = true, 699 698 .align = ROCKCHIP_PCIE_AT_SIZE_ALIGN, 700 699 };
+4 -3
drivers/pci/controller/pcie-xilinx-nwl.c
··· 718 718 nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, E_ECAM_CONTROL) | 719 719 E_ECAM_CR_ENABLE, E_ECAM_CONTROL); 720 720 721 - nwl_bridge_writel(pcie, nwl_bridge_readl(pcie, E_ECAM_CONTROL) | 722 - (NWL_ECAM_MAX_SIZE << E_ECAM_SIZE_SHIFT), 723 - E_ECAM_CONTROL); 721 + ecam_val = nwl_bridge_readl(pcie, E_ECAM_CONTROL); 722 + ecam_val &= ~E_ECAM_SIZE_LOC; 723 + ecam_val |= NWL_ECAM_MAX_SIZE << E_ECAM_SIZE_SHIFT; 724 + nwl_bridge_writel(pcie, ecam_val, E_ECAM_CONTROL); 724 725 725 726 nwl_bridge_writel(pcie, lower_32_bits(pcie->phys_ecam_base), 726 727 E_ECAM_BASE_LO);
+1 -2
drivers/pci/controller/plda/pcie-plda-host.c
··· 599 599 600 600 bridge = devm_pci_alloc_host_bridge(dev, 0); 601 601 if (!bridge) 602 - return dev_err_probe(dev, -ENOMEM, 603 - "failed to alloc bridge\n"); 602 + return -ENOMEM; 604 603 605 604 if (port->host_ops && port->host_ops->host_init) { 606 605 ret = port->host_ops->host_init(port);
+30 -8
drivers/pci/endpoint/functions/pci-epf-test.c
··· 301 301 if (!epf_test->dma_supported) 302 302 return; 303 303 304 - dma_release_channel(epf_test->dma_chan_tx); 305 - if (epf_test->dma_chan_tx == epf_test->dma_chan_rx) { 304 + if (epf_test->dma_chan_tx) { 305 + dma_release_channel(epf_test->dma_chan_tx); 306 + if (epf_test->dma_chan_tx == epf_test->dma_chan_rx) { 307 + epf_test->dma_chan_tx = NULL; 308 + epf_test->dma_chan_rx = NULL; 309 + return; 310 + } 306 311 epf_test->dma_chan_tx = NULL; 307 - epf_test->dma_chan_rx = NULL; 308 - return; 309 312 } 310 313 311 - dma_release_channel(epf_test->dma_chan_rx); 312 - epf_test->dma_chan_rx = NULL; 314 + if (epf_test->dma_chan_rx) { 315 + dma_release_channel(epf_test->dma_chan_rx); 316 + epf_test->dma_chan_rx = NULL; 317 + } 313 318 } 314 319 315 320 static void pci_epf_test_print_rate(struct pci_epf_test *epf_test, ··· 777 772 u32 status = le32_to_cpu(reg->status); 778 773 struct pci_epf *epf = epf_test->epf; 779 774 struct pci_epc *epc = epf->epc; 775 + int ret; 780 776 781 777 if (bar < BAR_0) 782 778 goto set_status_err; 783 779 784 780 pci_epf_test_doorbell_cleanup(epf_test); 785 - pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, &epf_test->db_bar); 781 + 782 + /* 783 + * The doorbell feature temporarily overrides the inbound translation 784 + * to point to the address stored in epf_test->db_bar.phys_addr, i.e., 785 + * it calls set_bar() twice without ever calling clear_bar(), as 786 + * calling clear_bar() would clear the BAR's PCI address assigned by 787 + * the host. Thus, when disabling the doorbell, restore the inbound 788 + * translation to point to the memory allocated for the BAR. 789 + */ 790 + ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, &epf->bar[bar]); 791 + if (ret) 792 + goto set_status_err; 786 793 787 794 status |= STATUS_DOORBELL_DISABLE_SUCCESS; 788 795 reg->status = cpu_to_le32(status); ··· 1067 1050 if (bar == test_reg_bar) 1068 1051 continue; 1069 1052 1070 - base = pci_epf_alloc_space(epf, bar_size[bar], bar, 1053 + if (epc_features->bar[bar].type == BAR_FIXED) 1054 + test_reg_size = epc_features->bar[bar].fixed_size; 1055 + else 1056 + test_reg_size = bar_size[bar]; 1057 + 1058 + base = pci_epf_alloc_space(epf, test_reg_size, bar, 1071 1059 epc_features, PRIMARY_INTERFACE); 1072 1060 if (!base) 1073 1061 dev_err(dev, "Failed to allocate space for BAR%d\n",
+1 -1
drivers/pci/endpoint/pci-ep-msi.c
··· 24 24 struct pci_epf *epf; 25 25 26 26 epc = pci_epc_get(dev_name(msi_desc_to_dev(desc))); 27 - if (!epc) 27 + if (IS_ERR(epc)) 28 28 return; 29 29 30 30 epf = list_first_entry_or_null(&epc->pci_epf, struct pci_epf, list);
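The one-line fix above matters because of pci_epc_get()'s return convention; a short sketch:

    /*
     * pci_epc_get() returns either the controller or an ERR_PTR()
     * value -- never NULL -- so the old NULL test could not catch a
     * failed lookup, and the error pointer would be dereferenced later.
     */
    epc = pci_epc_get(dev_name(msi_desc_to_dev(desc)));
    if (IS_ERR(epc))
            return;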
+4 -4
drivers/pci/hotplug/cpqphp_pci.c
··· 1302 1302 1303 1303 dbg("found io_node(base, length) = %x, %x\n", 1304 1304 io_node->base, io_node->length); 1305 - dbg("populated slot =%d \n", populated_slot); 1305 + dbg("populated slot = %d\n", populated_slot); 1306 1306 if (!populated_slot) { 1307 1307 io_node->next = ctrl->io_head; 1308 1308 ctrl->io_head = io_node; ··· 1325 1325 1326 1326 dbg("found mem_node(base, length) = %x, %x\n", 1327 1327 mem_node->base, mem_node->length); 1328 - dbg("populated slot =%d \n", populated_slot); 1328 + dbg("populated slot = %d\n", populated_slot); 1329 1329 if (!populated_slot) { 1330 1330 mem_node->next = ctrl->mem_head; 1331 1331 ctrl->mem_head = mem_node; ··· 1349 1349 p_mem_node->length = pre_mem_length << 16; 1350 1350 dbg("found p_mem_node(base, length) = %x, %x\n", 1351 1351 p_mem_node->base, p_mem_node->length); 1352 - dbg("populated slot =%d \n", populated_slot); 1352 + dbg("populated slot = %d\n", populated_slot); 1353 1353 1354 1354 if (!populated_slot) { 1355 1355 p_mem_node->next = ctrl->p_mem_head; ··· 1373 1373 bus_node->length = max_bus - secondary_bus + 1; 1374 1374 dbg("found bus_node(base, length) = %x, %x\n", 1375 1375 bus_node->base, bus_node->length); 1376 - dbg("populated slot =%d \n", populated_slot); 1376 + dbg("populated slot = %d\n", populated_slot); 1377 1377 if (!populated_slot) { 1378 1378 bus_node->next = ctrl->bus_head; 1379 1379 ctrl->bus_head = bus_node;
+3 -3
drivers/pci/hotplug/ibmphp_hpc.c
··· 124 124 unsigned long ultemp; 125 125 unsigned long data; // actual data HILO format 126 126 127 - debug_polling("%s - Entry WPGBbar[%p] index[%x] \n", __func__, WPGBbar, index); 127 + debug_polling("%s - Entry WPGBbar[%p] index[%x]\n", __func__, WPGBbar, index); 128 128 129 129 //-------------------------------------------------------------------- 130 130 // READ - step 1 ··· 147 147 ultemp = ultemp << 8; 148 148 data |= ultemp; 149 149 } else { 150 - err("this controller type is not supported \n"); 150 + err("this controller type is not supported\n"); 151 151 return HPC_ERROR; 152 152 } 153 153 ··· 258 258 ultemp = ultemp << 8; 259 259 data |= ultemp; 260 260 } else { 261 - err("this controller type is not supported \n"); 261 + err("this controller type is not supported\n"); 262 262 return HPC_ERROR; 263 263 } 264 264
+5
drivers/pci/iov.c
··· 629 629 if (dev->no_vf_scan) 630 630 return 0; 631 631 632 + pci_lock_rescan_remove(); 632 633 for (i = 0; i < num_vfs; i++) { 633 634 rc = pci_iov_add_virtfn(dev, i); 634 635 if (rc) 635 636 goto failed; 636 637 } 638 + pci_unlock_rescan_remove(); 637 639 return 0; 638 640 failed: 639 641 while (i--) 640 642 pci_iov_remove_virtfn(dev, i); 643 + pci_unlock_rescan_remove(); 641 644 642 645 return rc; 643 646 } ··· 765 762 struct pci_sriov *iov = dev->sriov; 766 763 int i; 767 764 765 + pci_lock_rescan_remove(); 768 766 for (i = 0; i < iov->num_VFs; i++) 769 767 pci_iov_remove_virtfn(dev, i); 768 + pci_unlock_rescan_remove(); 770 769 } 771 770 772 771 static void sriov_disable(struct pci_dev *dev)
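The lock/unlock calls added above serialize VF creation and teardown with the global rescan/remove path; a minimal sketch of the shape (error handling elided -- see the diff for the full failure path):

    /*
     * pci_iov_add_virtfn()/pci_iov_remove_virtfn() add and remove
     * pci_dev instances on the bus, which must not race with a
     * concurrent bus rescan or removal; the global rescan/remove lock
     * provides that exclusion.
     */
    pci_lock_rescan_remove();
    for (i = 0; i < num_vfs; i++)
            rc = pci_iov_add_virtfn(dev, i);    /* VF appears on the bus */
    pci_unlock_rescan_remove();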
+15 -7
drivers/pci/of_property.c
··· 279 279 mapp++; 280 280 *mapp = out_irq[i].np->phandle; 281 281 mapp++; 282 - if (addr_sz[i]) { 283 - ret = of_property_read_u32_array(out_irq[i].np, 284 - "reg", mapp, 285 - addr_sz[i]); 286 - if (ret) 287 - goto failed; 288 - } 282 + 283 + /* 284 + * A device address does not affect the device <-> 285 + * interrupt-controller HW connection for all 286 + * modern interrupt controllers; moreover, the 287 + * kernel (i.e., of_irq_parse_raw()) ignores the 288 + * values in the parent unit address cells while 289 + * parsing the interrupt-map property because they 290 + * are irrelevant for interrupt mapping in modern 291 + * systems. 292 + * 293 + * Leave the parent unit address initialized to 0 -- 294 + * just take into account the #address-cells size 295 + * to build the property properly. 296 + */ 289 297 mapp += addr_sz[i]; 290 298 memcpy(mapp, out_irq[i].args, 291 299 out_irq[i].args_count * sizeof(u32));
+2 -3
drivers/pci/p2pdma.c
··· 360 360 pages_free: 361 361 devm_memunmap_pages(&pdev->dev, pgmap); 362 362 pgmap_free: 363 - devm_kfree(&pdev->dev, pgmap); 363 + devm_kfree(&pdev->dev, p2p_pgmap); 364 364 return error; 365 365 } 366 366 EXPORT_SYMBOL_GPL(pci_p2pdma_add_resource); ··· 738 738 * pci_has_p2pmem - check if a given PCI device has published any p2pmem 739 739 * @pdev: PCI device to check 740 740 */ 741 - bool pci_has_p2pmem(struct pci_dev *pdev) 741 + static bool pci_has_p2pmem(struct pci_dev *pdev) 742 742 { 743 743 struct pci_p2pdma *p2pdma; 744 744 bool res; ··· 750 750 751 751 return res; 752 752 } 753 - EXPORT_SYMBOL_GPL(pci_has_p2pmem); 754 753 755 754 /** 756 755 * pci_p2pmem_find_many - find a peer-to-peer DMA memory device compatible with
+4 -2
drivers/pci/pci-acpi.c
··· 122 122 123 123 bool pci_acpi_preserve_config(struct pci_host_bridge *host_bridge) 124 124 { 125 + bool ret = false; 126 + 125 127 if (ACPI_HANDLE(&host_bridge->dev)) { 126 128 union acpi_object *obj; 127 129 ··· 137 135 1, DSM_PCI_PRESERVE_BOOT_CONFIG, 138 136 NULL, ACPI_TYPE_INTEGER); 139 137 if (obj && obj->integer.value == 0) 140 - return true; 138 + ret = true; 141 139 ACPI_FREE(obj); 142 140 } 143 141 144 - return false; 142 + return ret; 145 143 } 146 144 147 145 /* _HPX PCI Setting Record (Type 0); same as _HPP */
+2 -1
drivers/pci/pci-driver.c
··· 1582 1582 return 0; 1583 1583 } 1584 1584 1585 - #if defined(CONFIG_PCIEAER) || defined(CONFIG_EEH) 1585 + #if defined(CONFIG_PCIEAER) || defined(CONFIG_EEH) || defined(CONFIG_S390) 1586 1586 /** 1587 1587 * pci_uevent_ers - emit a uevent during recovery path of PCI device 1588 1588 * @pdev: PCI device undergoing error recovery ··· 1596 1596 switch (err_type) { 1597 1597 case PCI_ERS_RESULT_NONE: 1598 1598 case PCI_ERS_RESULT_CAN_RECOVER: 1599 + case PCI_ERS_RESULT_NEED_RESET: 1599 1600 envp[idx++] = "ERROR_EVENT=BEGIN_RECOVERY"; 1600 1601 envp[idx++] = "DEVICE_ONLINE=0"; 1601 1602 break;
+60 -8
drivers/pci/pci-sysfs.c
··· 30 30 #include <linux/msi.h> 31 31 #include <linux/of.h> 32 32 #include <linux/aperture.h> 33 + #include <linux/unaligned.h> 33 34 #include "pci.h" 34 35 35 36 #ifndef ARCH_PCI_DEV_GROUPS ··· 178 177 179 178 for (i = 0; i < max; i++) { 180 179 struct resource *res = &pci_dev->resource[i]; 180 + struct resource zerores = {}; 181 + 182 + /* For backwards compatibility */ 183 + if (i >= PCI_BRIDGE_RESOURCES && i <= PCI_BRIDGE_RESOURCE_END && 184 + res->flags & (IORESOURCE_UNSET | IORESOURCE_DISABLED)) 185 + res = &zerores; 186 + 181 187 pci_resource_to_user(pci_dev, i, res, &start, &end); 182 188 len += sysfs_emit_at(buf, len, "0x%016llx 0x%016llx 0x%016llx\n", 183 189 (unsigned long long)start, ··· 209 201 struct device_attribute *attr, char *buf) 210 202 { 211 203 struct pci_dev *pdev = to_pci_dev(dev); 204 + ssize_t ret; 212 205 213 - return sysfs_emit(buf, "%u\n", pcie_get_width_cap(pdev)); 206 + /* We read PCI_EXP_LNKCAP, so we need the device to be accessible. */ 207 + pci_config_pm_runtime_get(pdev); 208 + ret = sysfs_emit(buf, "%u\n", pcie_get_width_cap(pdev)); 209 + pci_config_pm_runtime_put(pdev); 210 + 211 + return ret; 214 212 } 215 213 static DEVICE_ATTR_RO(max_link_width); 216 214 ··· 228 214 int err; 229 215 enum pci_bus_speed speed; 230 216 217 + pci_config_pm_runtime_get(pci_dev); 231 218 err = pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &linkstat); 219 + pci_config_pm_runtime_put(pci_dev); 220 + 232 221 if (err) 233 222 return -EINVAL; 234 223 ··· 248 231 u16 linkstat; 249 232 int err; 250 233 234 + pci_config_pm_runtime_get(pci_dev); 251 235 err = pcie_capability_read_word(pci_dev, PCI_EXP_LNKSTA, &linkstat); 236 + pci_config_pm_runtime_put(pci_dev); 237 + 252 238 if (err) 253 239 return -EINVAL; 254 240 ··· 267 247 u8 sec_bus; 268 248 int err; 269 249 250 + pci_config_pm_runtime_get(pci_dev); 270 251 err = pci_read_config_byte(pci_dev, PCI_SECONDARY_BUS, &sec_bus); 252 + pci_config_pm_runtime_put(pci_dev); 253 + 271 254 if (err) 272 255 return -EINVAL; 273 256 ··· 286 263 u8 sub_bus; 287 264 int err; 288 265 266 + pci_config_pm_runtime_get(pci_dev); 289 267 err = pci_read_config_byte(pci_dev, PCI_SUBORDINATE_BUS, &sub_bus); 268 + pci_config_pm_runtime_put(pci_dev); 269 + 290 270 if (err) 291 271 return -EINVAL; 292 272 ··· 719 693 IORESOURCE_ROM_SHADOW)); 720 694 } 721 695 static DEVICE_ATTR_RO(boot_vga); 696 + 697 + static ssize_t serial_number_show(struct device *dev, 698 + struct device_attribute *attr, char *buf) 699 + { 700 + struct pci_dev *pci_dev = to_pci_dev(dev); 701 + u64 dsn; 702 + u8 bytes[8]; 703 + 704 + dsn = pci_get_dsn(pci_dev); 705 + if (!dsn) 706 + return -EIO; 707 + 708 + put_unaligned_be64(dsn, bytes); 709 + return sysfs_emit(buf, "%8phD\n", bytes); 710 + } 711 + static DEVICE_ATTR_ADMIN_RO(serial_number); 722 712 723 713 static ssize_t pci_read_config(struct file *filp, struct kobject *kobj, 724 714 const struct bin_attribute *bin_attr, char *buf, ··· 1597 1555 const char *buf, size_t count) 1598 1556 { 1599 1557 struct pci_dev *pdev = to_pci_dev(dev); 1600 - unsigned long size, flags; 1558 + struct pci_bus *bus = pdev->bus; 1559 + struct resource *b_win, *res; 1560 + unsigned long size; 1601 1561 int ret, i; 1602 1562 u16 cmd; 1603 1563 1604 1564 if (kstrtoul(buf, 0, &size) < 0) 1565 + return -EINVAL; 1566 + 1567 + b_win = pbus_select_window(bus, pci_resource_n(pdev, n)); 1568 + if (!b_win) 1605 1569 return -EINVAL; 1606 1570 1607 1571 device_lock(dev); ··· 1629 1581 pci_write_config_word(pdev, PCI_COMMAND, 1630 1582 cmd & 
~PCI_COMMAND_MEMORY); 1631 1583 1632 - flags = pci_resource_flags(pdev, n); 1633 - 1634 1584 pci_remove_resource_files(pdev); 1635 1585 1636 - for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) { 1637 - if (pci_resource_len(pdev, i) && 1638 - pci_resource_flags(pdev, i) == flags) 1586 + pci_dev_for_each_resource(pdev, res, i) { 1587 + if (i >= PCI_BRIDGE_RESOURCES) 1588 + break; 1589 + 1590 + if (b_win == pbus_select_window(bus, res)) 1639 1591 pci_release_resource(pdev, i); 1640 1592 } 1641 1593 1642 1594 ret = pci_resize_resource(pdev, n, size); 1643 1595 1644 - pci_assign_unassigned_bus_resources(pdev->bus); 1596 + pci_assign_unassigned_bus_resources(bus); 1645 1597 1646 1598 if (pci_create_resource_files(pdev)) 1647 1599 pci_warn(pdev, "Failed to recreate resource files after BAR resizing\n"); ··· 1746 1698 1747 1699 static struct attribute *pci_dev_dev_attrs[] = { 1748 1700 &dev_attr_boot_vga.attr, 1701 + &dev_attr_serial_number.attr, 1749 1702 NULL, 1750 1703 }; 1751 1704 ··· 1757 1708 struct pci_dev *pdev = to_pci_dev(dev); 1758 1709 1759 1710 if (a == &dev_attr_boot_vga.attr && pci_is_vga(pdev)) 1711 + return a->mode; 1712 + 1713 + if (a == &dev_attr_serial_number.attr && pci_get_dsn(pdev)) 1760 1714 return a->mode; 1761 1715 1762 1716 return 0;
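Besides the new serial_number attribute (admin-only, read from /sys/bus/pci/devices/<dev>/serial_number and formatted from the 64-bit Device Serial Number), the hunk above wraps every attribute that touches config space in runtime-PM guards so a runtime-suspended device is woken first. The pattern, as a sketch for a hypothetical attribute:

	static ssize_t example_show(struct device *dev,
				    struct device_attribute *attr, char *buf)
	{
		struct pci_dev *pdev = to_pci_dev(dev);
		u16 val;
		int err;

		/* Wake a runtime-suspended device before config access */
		pci_config_pm_runtime_get(pdev);
		err = pcie_capability_read_word(pdev, PCI_EXP_DEVCTL, &val);
		pci_config_pm_runtime_put(pdev);

		return err ? -EINVAL : sysfs_emit(buf, "%#x\n", val);
	}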
+15 -66
drivers/pci/pci.c
··· 423 423 return 1; 424 424 } 425 425 426 - static u8 __pci_find_next_cap_ttl(struct pci_bus *bus, unsigned int devfn, 427 - u8 pos, int cap, int *ttl) 428 - { 429 - u8 id; 430 - u16 ent; 431 - 432 - pci_bus_read_config_byte(bus, devfn, pos, &pos); 433 - 434 - while ((*ttl)--) { 435 - if (pos < 0x40) 436 - break; 437 - pos &= ~3; 438 - pci_bus_read_config_word(bus, devfn, pos, &ent); 439 - 440 - id = ent & 0xff; 441 - if (id == 0xff) 442 - break; 443 - if (id == cap) 444 - return pos; 445 - pos = (ent >> 8); 446 - } 447 - return 0; 448 - } 449 - 450 426 static u8 __pci_find_next_cap(struct pci_bus *bus, unsigned int devfn, 451 427 u8 pos, int cap) 452 428 { 453 - int ttl = PCI_FIND_CAP_TTL; 454 - 455 - return __pci_find_next_cap_ttl(bus, devfn, pos, cap, &ttl); 429 + return PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, cap, bus, devfn); 456 430 } 457 431 458 432 u8 pci_find_next_capability(struct pci_dev *dev, u8 pos, int cap) ··· 527 553 */ 528 554 u16 pci_find_next_ext_capability(struct pci_dev *dev, u16 start, int cap) 529 555 { 530 - u32 header; 531 - int ttl; 532 - u16 pos = PCI_CFG_SPACE_SIZE; 533 - 534 - /* minimum 8 bytes per capability */ 535 - ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8; 536 - 537 556 if (dev->cfg_size <= PCI_CFG_SPACE_SIZE) 538 557 return 0; 539 558 540 - if (start) 541 - pos = start; 542 - 543 - if (pci_read_config_dword(dev, pos, &header) != PCIBIOS_SUCCESSFUL) 544 - return 0; 545 - 546 - /* 547 - * If we have no capabilities, this is indicated by cap ID, 548 - * cap version and next pointer all being 0. 549 - */ 550 - if (header == 0) 551 - return 0; 552 - 553 - while (ttl-- > 0) { 554 - if (PCI_EXT_CAP_ID(header) == cap && pos != start) 555 - return pos; 556 - 557 - pos = PCI_EXT_CAP_NEXT(header); 558 - if (pos < PCI_CFG_SPACE_SIZE) 559 - break; 560 - 561 - if (pci_read_config_dword(dev, pos, &header) != PCIBIOS_SUCCESSFUL) 562 - break; 563 - } 564 - 565 - return 0; 559 + return PCI_FIND_NEXT_EXT_CAP(pci_bus_read_config, start, cap, 560 + dev->bus, dev->devfn); 566 561 } 567 562 EXPORT_SYMBOL_GPL(pci_find_next_ext_capability); 568 563 ··· 591 648 592 649 static u8 __pci_find_next_ht_cap(struct pci_dev *dev, u8 pos, int ht_cap) 593 650 { 594 - int rc, ttl = PCI_FIND_CAP_TTL; 651 + int rc; 595 652 u8 cap, mask; 596 653 597 654 if (ht_cap == HT_CAPTYPE_SLAVE || ht_cap == HT_CAPTYPE_HOST) ··· 599 656 else 600 657 mask = HT_5BIT_CAP_MASK; 601 658 602 - pos = __pci_find_next_cap_ttl(dev->bus, dev->devfn, pos, 603 - PCI_CAP_ID_HT, &ttl); 659 + pos = PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, 660 + PCI_CAP_ID_HT, dev->bus, dev->devfn); 604 661 while (pos) { 605 662 rc = pci_read_config_byte(dev, pos + 3, &cap); 606 663 if (rc != PCIBIOS_SUCCESSFUL) ··· 609 666 if ((cap & mask) == ht_cap) 610 667 return pos; 611 668 612 - pos = __pci_find_next_cap_ttl(dev->bus, dev->devfn, 613 - pos + PCI_CAP_LIST_NEXT, 614 - PCI_CAP_ID_HT, &ttl); 669 + pos = PCI_FIND_NEXT_CAP(pci_bus_read_config, 670 + pos + PCI_CAP_LIST_NEXT, 671 + PCI_CAP_ID_HT, dev->bus, 672 + dev->devfn); 615 673 } 616 674 617 675 return 0; ··· 1315 1371 else 1316 1372 dev->current_state = state; 1317 1373 1374 + return -EIO; 1375 + } 1376 + 1377 + if (pci_dev_is_disconnected(dev)) { 1378 + dev->current_state = PCI_D3cold; 1318 1379 return -EIO; 1319 1380 } 1320 1381
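The open-coded capability walks above collapse into the PCI_FIND_NEXT_CAP()/PCI_FIND_NEXT_EXT_CAP() macros added to drivers/pci/pci.h below; driver-visible behavior is unchanged. Typical lookups that end up in these macros, as a sketch (not part of the diff):

	u8 pm = pci_find_capability(pdev, PCI_CAP_ID_PM);		/* standard list */
	u16 aer = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ERR);	/* extended list */
	u32 status;

	if (aer)
		pci_read_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, &status);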
+95 -1
drivers/pci/pci.h
··· 2 2 #ifndef DRIVERS_PCI_H 3 3 #define DRIVERS_PCI_H 4 4 5 + #include <linux/align.h> 6 + #include <linux/bitfield.h> 5 7 #include <linux/pci.h> 6 8 7 9 struct pcie_tlp_log; 8 10 9 11 /* Number of possible devfns: 0.0 to 1f.7 inclusive */ 10 12 #define MAX_NR_DEVFNS 256 13 + #define PCI_MAX_NR_DEVS 32 11 14 12 15 #define MAX_NR_LANES 16 13 16 ··· 84 81 #define PCIE_MSG_CODE_DEASSERT_INTC 0x26 85 82 #define PCIE_MSG_CODE_DEASSERT_INTD 0x27 86 83 84 + #define PCI_BUS_BRIDGE_IO_WINDOW 0 85 + #define PCI_BUS_BRIDGE_MEM_WINDOW 1 86 + #define PCI_BUS_BRIDGE_PREF_MEM_WINDOW 2 87 + 87 88 extern const unsigned char pcie_link_speed[]; 88 89 extern bool pci_early_dump; 90 + 91 + extern struct mutex pci_rescan_remove_lock; 89 92 90 93 bool pcie_cap_has_lnkctl(const struct pci_dev *dev); 91 94 bool pcie_cap_has_lnkctl2(const struct pci_dev *dev); 92 95 bool pcie_cap_has_rtctl(const struct pci_dev *dev); 96 + 97 + /* Standard Capability finder */ 98 + /** 99 + * PCI_FIND_NEXT_CAP - Find a PCI standard capability 100 + * @read_cfg: Function pointer for reading PCI config space 101 + * @start: Starting position to begin search 102 + * @cap: Capability ID to find 103 + * @args: Arguments to pass to read_cfg function 104 + * 105 + * Search the capability list in PCI config space to find @cap. 106 + * Implements TTL (time-to-live) protection against infinite loops. 107 + * 108 + * Return: Position of the capability if found, 0 otherwise. 109 + */ 110 + #define PCI_FIND_NEXT_CAP(read_cfg, start, cap, args...) \ 111 + ({ \ 112 + int __ttl = PCI_FIND_CAP_TTL; \ 113 + u8 __id, __found_pos = 0; \ 114 + u8 __pos = (start); \ 115 + u16 __ent; \ 116 + \ 117 + read_cfg##_byte(args, __pos, &__pos); \ 118 + \ 119 + while (__ttl--) { \ 120 + if (__pos < PCI_STD_HEADER_SIZEOF) \ 121 + break; \ 122 + \ 123 + __pos = ALIGN_DOWN(__pos, 4); \ 124 + read_cfg##_word(args, __pos, &__ent); \ 125 + \ 126 + __id = FIELD_GET(PCI_CAP_ID_MASK, __ent); \ 127 + if (__id == 0xff) \ 128 + break; \ 129 + \ 130 + if (__id == (cap)) { \ 131 + __found_pos = __pos; \ 132 + break; \ 133 + } \ 134 + \ 135 + __pos = FIELD_GET(PCI_CAP_LIST_NEXT_MASK, __ent); \ 136 + } \ 137 + __found_pos; \ 138 + }) 139 + 140 + /* Extended Capability finder */ 141 + /** 142 + * PCI_FIND_NEXT_EXT_CAP - Find a PCI extended capability 143 + * @read_cfg: Function pointer for reading PCI config space 144 + * @start: Starting position to begin search (0 for initial search) 145 + * @cap: Extended capability ID to find 146 + * @args: Arguments to pass to read_cfg function 147 + * 148 + * Search the extended capability list in PCI config space to find @cap. 149 + * Implements TTL protection against infinite loops using a calculated 150 + * maximum search count. 151 + * 152 + * Return: Position of the capability if found, 0 otherwise. 153 + */ 154 + #define PCI_FIND_NEXT_EXT_CAP(read_cfg, start, cap, args...) 
\ 155 + ({ \ 156 + u16 __pos = (start) ?: PCI_CFG_SPACE_SIZE; \ 157 + u16 __found_pos = 0; \ 158 + int __ttl, __ret; \ 159 + u32 __header; \ 160 + \ 161 + __ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8; \ 162 + while (__ttl-- > 0 && __pos >= PCI_CFG_SPACE_SIZE) { \ 163 + __ret = read_cfg##_dword(args, __pos, &__header); \ 164 + if (__ret != PCIBIOS_SUCCESSFUL) \ 165 + break; \ 166 + \ 167 + if (__header == 0) \ 168 + break; \ 169 + \ 170 + if (PCI_EXT_CAP_ID(__header) == (cap) && __pos != start) {\ 171 + __found_pos = __pos; \ 172 + break; \ 173 + } \ 174 + \ 175 + __pos = PCI_EXT_CAP_NEXT(__header); \ 176 + } \ 177 + __found_pos; \ 178 + }) 93 179 94 180 /* Functions internal to the PCI core code */ 95 181 ··· 422 330 void pci_put_host_bridge_device(struct device *dev); 423 331 424 332 unsigned int pci_rescan_bus_bridge_resize(struct pci_dev *bridge); 425 - int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type); 333 + int pbus_reassign_bridge_resources(struct pci_bus *bus, struct resource *res); 426 334 int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align); 427 335 428 336 int pci_configure_extended_tags(struct pci_dev *dev, void *ign); ··· 473 381 return resno; 474 382 } 475 383 384 + struct resource *pbus_select_window(struct pci_bus *bus, 385 + const struct resource *res); 476 386 void pci_reassigndev_resource_alignment(struct pci_dev *dev); 477 387 void pci_disable_bridge_window(struct pci_dev *dev); 478 388 struct pci_bus *pci_bus_get(struct pci_bus *bus);
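Because @read_cfg is token-pasted with _byte/_word/_dword, any config accessor family with an (args..., offset, value-pointer) shape can instantiate these macros; that is how the dwc, dwc endpoint, and cadence capability searches are built on top of them. A sketch with hypothetical accessors (only _byte and _word are needed for the standard-capability variant):

	static int my_cfg_read_byte(struct my_ep *ep, int where, u8 *val);
	static int my_cfg_read_word(struct my_ep *ep, int where, u16 *val);

	static u8 my_ep_find_capability(struct my_ep *ep, u8 cap)
	{
		/* Start at the Capabilities Pointer of a type 0 header */
		return PCI_FIND_NEXT_CAP(my_cfg_read, PCI_CAPABILITY_LIST, cap, ep);
	}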
+40 -9
drivers/pci/pcie/aer.c
··· 43 43 #define AER_ERROR_SOURCES_MAX 128 44 44 45 45 #define AER_MAX_TYPEOF_COR_ERRS 16 /* as per PCI_ERR_COR_STATUS */ 46 - #define AER_MAX_TYPEOF_UNCOR_ERRS 27 /* as per PCI_ERR_UNCOR_STATUS*/ 46 + #define AER_MAX_TYPEOF_UNCOR_ERRS 32 /* as per PCI_ERR_UNCOR_STATUS*/ 47 47 48 48 struct aer_err_source { 49 49 u32 status; /* PCI_ERR_ROOT_STATUS */ ··· 96 96 }; 97 97 98 98 #define AER_LOG_TLP_MASKS (PCI_ERR_UNC_POISON_TLP| \ 99 + PCI_ERR_UNC_POISON_BLK | \ 99 100 PCI_ERR_UNC_ECRC| \ 100 101 PCI_ERR_UNC_UNSUP| \ 101 102 PCI_ERR_UNC_COMP_ABORT| \ 102 103 PCI_ERR_UNC_UNX_COMP| \ 103 - PCI_ERR_UNC_MALF_TLP) 104 + PCI_ERR_UNC_ACSV | \ 105 + PCI_ERR_UNC_MCBTLP | \ 106 + PCI_ERR_UNC_ATOMEG | \ 107 + PCI_ERR_UNC_DMWR_BLK | \ 108 + PCI_ERR_UNC_XLAT_BLK | \ 109 + PCI_ERR_UNC_TLPPRE | \ 110 + PCI_ERR_UNC_MALF_TLP | \ 111 + PCI_ERR_UNC_IDE_CHECK | \ 112 + PCI_ERR_UNC_MISR_IDE | \ 113 + PCI_ERR_UNC_PCRC_CHECK) 104 114 105 115 #define SYSTEM_ERROR_INTR_ON_MESG_MASK (PCI_EXP_RTCTL_SECEE| \ 106 116 PCI_EXP_RTCTL_SENFEE| \ ··· 393 383 return; 394 384 395 385 dev->aer_info = kzalloc(sizeof(*dev->aer_info), GFP_KERNEL); 386 + if (!dev->aer_info) { 387 + dev->aer_cap = 0; 388 + return; 389 + } 396 390 397 391 ratelimit_state_init(&dev->aer_info->correctable_ratelimit, 398 392 DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST); ··· 539 525 "AtomicOpBlocked", /* Bit Position 24 */ 540 526 "TLPBlockedErr", /* Bit Position 25 */ 541 527 "PoisonTLPBlocked", /* Bit Position 26 */ 542 - NULL, /* Bit Position 27 */ 543 - NULL, /* Bit Position 28 */ 544 - NULL, /* Bit Position 29 */ 545 - NULL, /* Bit Position 30 */ 546 - NULL, /* Bit Position 31 */ 528 + "DMWrReqBlocked", /* Bit Position 27 */ 529 + "IDECheck", /* Bit Position 28 */ 530 + "MisIDETLP", /* Bit Position 29 */ 531 + "PCRC_CHECK", /* Bit Position 30 */ 532 + "TLPXlatBlocked", /* Bit Position 31 */ 547 533 }; 548 534 549 535 static const char *aer_agent_string[] = { ··· 800 786 801 787 static int aer_ratelimit(struct pci_dev *dev, unsigned int severity) 802 788 { 789 + if (!dev->aer_info) 790 + return 1; 791 + 803 792 switch (severity) { 804 793 case AER_NONFATAL: 805 794 return __ratelimit(&dev->aer_info->nonfatal_ratelimit); ··· 811 794 default: 812 795 return 1; /* Don't ratelimit fatal errors */ 813 796 } 797 + } 798 + 799 + static bool tlp_header_logged(u32 status, u32 capctl) 800 + { 801 + /* Errors for which a header is always logged (PCIe r7.0 sec 6.2.7) */ 802 + if (status & AER_LOG_TLP_MASKS) 803 + return true; 804 + 805 + /* Completion Timeout header is only logged on capable devices */ 806 + if (status & PCI_ERR_UNC_COMP_TIME && 807 + capctl & PCI_ERR_CAP_COMP_TIME_LOG) 808 + return true; 809 + 810 + return false; 814 811 } 815 812 816 813 static void __aer_print_error(struct pci_dev *dev, struct aer_err_info *info) ··· 941 910 status = aer->uncor_status; 942 911 mask = aer->uncor_mask; 943 912 info.level = KERN_ERR; 944 - tlp_header_valid = status & AER_LOG_TLP_MASKS; 913 + tlp_header_valid = tlp_header_logged(status, aer->cap_control); 945 914 } 946 915 947 916 info.status = status; ··· 1432 1401 pci_read_config_dword(dev, aer + PCI_ERR_CAP, &aercc); 1433 1402 info->first_error = PCI_ERR_CAP_FEP(aercc); 1434 1403 1435 - if (info->status & AER_LOG_TLP_MASKS) { 1404 + if (tlp_header_logged(info->status, aercc)) { 1436 1405 info->tlp_header_valid = 1; 1437 1406 pcie_read_tlp_log(dev, aer + PCI_ERR_HEADER_LOG, 1438 1407 aer + PCI_ERR_PREFIX_LOG,
+43 -2
drivers/pci/pcie/aspm.c
··· 15 15 #include <linux/math.h> 16 16 #include <linux/module.h> 17 17 #include <linux/moduleparam.h> 18 + #include <linux/of.h> 18 19 #include <linux/pci.h> 19 20 #include <linux/pci_regs.h> 20 21 #include <linux/errno.h> ··· 236 235 u32 aspm_support:7; /* Supported ASPM state */ 237 236 u32 aspm_enabled:7; /* Enabled ASPM state */ 238 237 u32 aspm_capable:7; /* Capable ASPM state with latency */ 239 - u32 aspm_default:7; /* Default ASPM state by BIOS */ 238 + u32 aspm_default:7; /* Default ASPM state by BIOS or 239 + override */ 240 240 u32 aspm_disable:7; /* Disabled ASPM state */ 241 241 242 242 /* Clock PM state */ 243 243 u32 clkpm_capable:1; /* Clock PM capable? */ 244 244 u32 clkpm_enabled:1; /* Current Clock PM state */ 245 - u32 clkpm_default:1; /* Default Clock PM state by BIOS */ 245 + u32 clkpm_default:1; /* Default Clock PM state by BIOS or 246 + override */ 246 247 u32 clkpm_disable:1; /* Clock PM disabled */ 247 248 }; 248 249 ··· 376 373 pcie_set_clkpm_nocheck(link, enable); 377 374 } 378 375 376 + static void pcie_clkpm_override_default_link_state(struct pcie_link_state *link, 377 + int enabled) 378 + { 379 + struct pci_dev *pdev = link->downstream; 380 + 381 + /* For devicetree platforms, enable ClockPM by default */ 382 + if (of_have_populated_dt() && !enabled) { 383 + link->clkpm_default = 1; 384 + pci_info(pdev, "ASPM: DT platform, enabling ClockPM\n"); 385 + } 386 + } 387 + 379 388 static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist) 380 389 { 381 390 int capable = 1, enabled = 1; ··· 410 395 } 411 396 link->clkpm_enabled = enabled; 412 397 link->clkpm_default = enabled; 398 + pcie_clkpm_override_default_link_state(link, enabled); 413 399 link->clkpm_capable = capable; 414 400 link->clkpm_disable = blacklist ? 1 : 0; 415 401 } ··· 804 788 aspm_calc_l12_info(link, parent_l1ss_cap, child_l1ss_cap); 805 789 } 806 790 791 + #define FLAG(x, y, d) (((x) & (PCIE_LINK_STATE_##y)) ? d : "") 792 + 793 + static void pcie_aspm_override_default_link_state(struct pcie_link_state *link) 794 + { 795 + struct pci_dev *pdev = link->downstream; 796 + u32 override; 797 + 798 + /* For devicetree platforms, enable all ASPM states by default */ 799 + if (of_have_populated_dt()) { 800 + link->aspm_default = PCIE_LINK_STATE_ASPM_ALL; 801 + override = link->aspm_default & ~link->aspm_enabled; 802 + if (override) 803 + pci_info(pdev, "ASPM: DT platform, enabling%s%s%s%s%s%s%s\n", 804 + FLAG(override, L0S_UP, " L0s-up"), 805 + FLAG(override, L0S_DW, " L0s-dw"), 806 + FLAG(override, L1, " L1"), 807 + FLAG(override, L1_1, " ASPM-L1.1"), 808 + FLAG(override, L1_2, " ASPM-L1.2"), 809 + FLAG(override, L1_1_PCIPM, " PCI-PM-L1.1"), 810 + FLAG(override, L1_2_PCIPM, " PCI-PM-L1.2")); 811 + } 812 + } 813 + 807 814 static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist) 808 815 { 809 816 struct pci_dev *child = link->downstream, *parent = link->pdev; ··· 906 867 907 868 /* Save default state */ 908 869 link->aspm_default = link->aspm_enabled; 870 + 871 + pcie_aspm_override_default_link_state(link); 909 872 910 873 /* Setup initial capable state. Will be updated later */ 911 874 link->aspm_capable = link->aspm_support;
+31 -9
drivers/pci/pcie/err.c
··· 108 108 return report_error_detected(dev, pci_channel_io_normal, data); 109 109 } 110 110 111 + static int report_perm_failure_detected(struct pci_dev *dev, void *data) 112 + { 113 + struct pci_driver *pdrv; 114 + const struct pci_error_handlers *err_handler; 115 + 116 + device_lock(&dev->dev); 117 + pdrv = dev->driver; 118 + if (!pdrv || !pdrv->err_handler || !pdrv->err_handler->error_detected) 119 + goto out; 120 + 121 + err_handler = pdrv->err_handler; 122 + err_handler->error_detected(dev, pci_channel_io_perm_failure); 123 + out: 124 + pci_uevent_ers(dev, PCI_ERS_RESULT_DISCONNECT); 125 + device_unlock(&dev->dev); 126 + return 0; 127 + } 128 + 111 129 static int report_mmio_enabled(struct pci_dev *dev, void *data) 112 130 { 113 131 struct pci_driver *pdrv; ··· 153 135 154 136 device_lock(&dev->dev); 155 137 pdrv = dev->driver; 156 - if (!pdrv || !pdrv->err_handler || !pdrv->err_handler->slot_reset) 138 + if (!pci_dev_set_io_state(dev, pci_channel_io_normal) || 139 + !pdrv || !pdrv->err_handler || !pdrv->err_handler->slot_reset) 157 140 goto out; 158 141 159 142 err_handler = pdrv->err_handler; ··· 236 217 pci_walk_bridge(bridge, pci_pm_runtime_get_sync, NULL); 237 218 238 219 pci_dbg(bridge, "broadcast error_detected message\n"); 239 - if (state == pci_channel_io_frozen) { 220 + if (state == pci_channel_io_frozen) 240 221 pci_walk_bridge(bridge, report_frozen_detected, &status); 241 - if (reset_subordinates(bridge) != PCI_ERS_RESULT_RECOVERED) { 242 - pci_warn(bridge, "subordinate device reset failed\n"); 243 - goto failed; 244 - } 245 - } else { 222 + else 246 223 pci_walk_bridge(bridge, report_normal_detected, &status); 247 - } 248 224 249 225 if (status == PCI_ERS_RESULT_CAN_RECOVER) { 250 226 status = PCI_ERS_RESULT_RECOVERED; 251 227 pci_dbg(bridge, "broadcast mmio_enabled message\n"); 252 228 pci_walk_bridge(bridge, report_mmio_enabled, &status); 229 + } 230 + 231 + if (status == PCI_ERS_RESULT_NEED_RESET || 232 + state == pci_channel_io_frozen) { 233 + if (reset_subordinates(bridge) != PCI_ERS_RESULT_RECOVERED) { 234 + pci_warn(bridge, "subordinate device reset failed\n"); 235 + goto failed; 236 + } 253 237 } 254 238 255 239 if (status == PCI_ERS_RESULT_NEED_RESET) { ··· 291 269 failed: 292 270 pci_walk_bridge(bridge, pci_pm_runtime_put, NULL); 293 271 294 - pci_uevent_ers(bridge, PCI_ERS_RESULT_DISCONNECT); 272 + pci_walk_bridge(bridge, report_perm_failure_detected, NULL); 295 273 296 274 pci_info(bridge, "device recovery failed\n"); 297 275
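From a driver's point of view, the recovery changes above mean: returning PCI_ERS_RESULT_NEED_RESET from error_detected() now gets a subordinate bus reset even for Non-Fatal Errors, error_state is already back to pci_channel_io_normal when slot_reset() runs, and a failed recovery is reported to each affected device rather than only to the bridge. A sketch of a driver-side handler (names hypothetical):

	static pci_ers_result_t my_error_detected(struct pci_dev *pdev,
						  pci_channel_state_t state)
	{
		if (state == pci_channel_io_perm_failure)
			return PCI_ERS_RESULT_DISCONNECT;

		return PCI_ERS_RESULT_NEED_RESET;	/* reset even if Non-Fatal */
	}

	static pci_ers_result_t my_slot_reset(struct pci_dev *pdev)
	{
		/* pdev->error_state == pci_channel_io_normal here */
		return PCI_ERS_RESULT_RECOVERED;
	}

	static const struct pci_error_handlers my_err_handlers = {
		.error_detected	= my_error_detected,
		.slot_reset	= my_slot_reset,
	};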
+63 -25
drivers/pci/probe.c
··· 3 3 * PCI detection and setup code 4 4 */ 5 5 6 + #include <linux/array_size.h> 6 7 #include <linux/kernel.h> 7 8 #include <linux/delay.h> 8 9 #include <linux/init.h> ··· 420 419 limit |= ((unsigned long) io_limit_hi << 16); 421 420 } 422 421 422 + res->flags = (io_base_lo & PCI_IO_RANGE_TYPE_MASK) | IORESOURCE_IO; 423 + 423 424 if (base <= limit) { 424 - res->flags = (io_base_lo & PCI_IO_RANGE_TYPE_MASK) | IORESOURCE_IO; 425 425 region.start = base; 426 426 region.end = limit + io_granularity - 1; 427 427 pcibios_bus_to_resource(dev->bus, res, &region); 428 428 if (log) 429 429 pci_info(dev, " bridge window %pR\n", res); 430 + } else { 431 + resource_set_range(res, 0, 0); 432 + res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED; 430 433 } 431 434 } 432 435 ··· 445 440 pci_read_config_word(dev, PCI_MEMORY_LIMIT, &mem_limit_lo); 446 441 base = ((unsigned long) mem_base_lo & PCI_MEMORY_RANGE_MASK) << 16; 447 442 limit = ((unsigned long) mem_limit_lo & PCI_MEMORY_RANGE_MASK) << 16; 443 + 444 + res->flags = (mem_base_lo & PCI_MEMORY_RANGE_TYPE_MASK) | IORESOURCE_MEM; 445 + 448 446 if (base <= limit) { 449 - res->flags = (mem_base_lo & PCI_MEMORY_RANGE_TYPE_MASK) | IORESOURCE_MEM; 450 447 region.start = base; 451 448 region.end = limit + 0xfffff; 452 449 pcibios_bus_to_resource(dev->bus, res, &region); 453 450 if (log) 454 451 pci_info(dev, " bridge window %pR\n", res); 452 + } else { 453 + resource_set_range(res, 0, 0); 454 + res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED; 455 455 } 456 456 } 457 457 ··· 499 489 return; 500 490 } 501 491 492 + res->flags = (mem_base_lo & PCI_PREF_RANGE_TYPE_MASK) | IORESOURCE_MEM | 493 + IORESOURCE_PREFETCH; 494 + if (res->flags & PCI_PREF_RANGE_TYPE_64) 495 + res->flags |= IORESOURCE_MEM_64; 496 + 502 497 if (base <= limit) { 503 - res->flags = (mem_base_lo & PCI_PREF_RANGE_TYPE_MASK) | 504 - IORESOURCE_MEM | IORESOURCE_PREFETCH; 505 - if (res->flags & PCI_PREF_RANGE_TYPE_64) 506 - res->flags |= IORESOURCE_MEM_64; 507 498 region.start = base; 508 499 region.end = limit + 0xfffff; 509 500 pcibios_bus_to_resource(dev->bus, res, &region); 510 501 if (log) 511 502 pci_info(dev, " bridge window %pR\n", res); 503 + } else { 504 + resource_set_range(res, 0, 0); 505 + res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED; 512 506 } 513 507 } 514 508 ··· 538 524 } 539 525 if (io) { 540 526 bridge->io_window = 1; 541 - pci_read_bridge_io(bridge, &res, true); 527 + pci_read_bridge_io(bridge, 528 + pci_resource_n(bridge, PCI_BRIDGE_IO_WINDOW), 529 + true); 542 530 } 543 531 544 - pci_read_bridge_mmio(bridge, &res, true); 532 + pci_read_bridge_mmio(bridge, 533 + pci_resource_n(bridge, PCI_BRIDGE_MEM_WINDOW), 534 + true); 545 535 546 536 /* 547 537 * DECchip 21050 pass 2 errata: the bridge may miss an address ··· 583 565 bridge->pref_64_window = 1; 584 566 } 585 567 586 - pci_read_bridge_mmio_pref(bridge, &res, true); 568 + pci_read_bridge_mmio_pref(bridge, 569 + pci_resource_n(bridge, 570 + PCI_BRIDGE_PREF_MEM_WINDOW), 571 + true); 587 572 } 588 573 589 574 void pci_read_bridge_bases(struct pci_bus *child) ··· 606 585 for (i = 0; i < PCI_BRIDGE_RESOURCE_NUM; i++) 607 586 child->resource[i] = &dev->resource[PCI_BRIDGE_RESOURCES+i]; 608 587 609 - pci_read_bridge_io(child->self, child->resource[0], false); 610 - pci_read_bridge_mmio(child->self, child->resource[1], false); 611 - pci_read_bridge_mmio_pref(child->self, child->resource[2], false); 588 + pci_read_bridge_io(child->self, 589 + child->resource[PCI_BUS_BRIDGE_IO_WINDOW], false); 590 + 
pci_read_bridge_mmio(child->self, 591 + child->resource[PCI_BUS_BRIDGE_MEM_WINDOW], false); 592 + pci_read_bridge_mmio_pref(child->self, 593 + child->resource[PCI_BUS_BRIDGE_PREF_MEM_WINDOW], 594 + false); 612 595 613 596 if (!dev->transparent) 614 597 return; ··· 1937 1912 1938 1913 static void early_dump_pci_device(struct pci_dev *pdev) 1939 1914 { 1940 - u32 value[256 / 4]; 1915 + u32 value[PCI_CFG_SPACE_SIZE / sizeof(u32)]; 1941 1916 int i; 1942 1917 1943 1918 pci_info(pdev, "config space:\n"); 1944 1919 1945 - for (i = 0; i < 256; i += 4) 1946 - pci_read_config_dword(pdev, i, &value[i / 4]); 1920 + for (i = 0; i < ARRAY_SIZE(value); i++) 1921 + pci_read_config_dword(pdev, i * sizeof(u32), &value[i]); 1947 1922 1948 1923 print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET, 16, 1, 1949 - value, 256, false); 1924 + value, ARRAY_SIZE(value) * sizeof(u32), false); 1950 1925 } 1951 1926 1952 1927 static const char *pci_type_str(struct pci_dev *dev) ··· 2010 1985 dev->sysdata = dev->bus->sysdata; 2011 1986 dev->dev.parent = dev->bus->bridge; 2012 1987 dev->dev.bus = &pci_bus_type; 2013 - dev->hdr_type = hdr_type & 0x7f; 2014 - dev->multifunction = !!(hdr_type & 0x80); 1988 + dev->hdr_type = FIELD_GET(PCI_HEADER_TYPE_MASK, hdr_type); 1989 + dev->multifunction = FIELD_GET(PCI_HEADER_TYPE_MFD, hdr_type); 2015 1990 dev->error_state = pci_channel_io_normal; 2016 1991 set_pcie_port_type(dev); 2017 1992 ··· 2541 2516 struct device_node *np; 2542 2517 2543 2518 np = of_pci_find_child_device(dev_of_node(&bus->dev), devfn); 2544 - if (!np || of_find_device_by_node(np)) 2519 + if (!np) 2545 2520 return NULL; 2521 + 2522 + pdev = of_find_device_by_node(np); 2523 + if (pdev) { 2524 + put_device(&pdev->dev); 2525 + goto err_put_of_node; 2526 + } 2546 2527 2547 2528 /* 2548 2529 * First check whether the pwrctrl device really needs to be created or ··· 2557 2526 */ 2558 2527 if (!of_pci_supply_present(np)) { 2559 2528 pr_debug("PCI/pwrctrl: Skipping OF node: %s\n", np->name); 2560 - return NULL; 2529 + goto err_put_of_node; 2561 2530 } 2562 2531 2563 2532 /* Now create the pwrctrl device */ 2564 2533 pdev = of_platform_device_create(np, NULL, &host->dev); 2565 2534 if (!pdev) { 2566 2535 pr_err("PCI/pwrctrl: Failed to create pwrctrl device for node: %s\n", np->name); 2567 - return NULL; 2536 + goto err_put_of_node; 2568 2537 } 2569 2538 2539 + of_node_put(np); 2540 + 2570 2541 return pdev; 2542 + 2543 + err_put_of_node: 2544 + of_node_put(np); 2545 + 2546 + return NULL; 2571 2547 } 2572 2548 #else 2573 2549 static struct platform_device *pci_pwrctrl_create_device(struct pci_bus *bus, int devfn) ··· 3083 3045 { 3084 3046 unsigned int used_buses, normal_bridges = 0, hotplug_bridges = 0; 3085 3047 unsigned int start = bus->busn_res.start; 3086 - unsigned int devfn, cmax, max = start; 3048 + unsigned int devnr, cmax, max = start; 3087 3049 struct pci_dev *dev; 3088 3050 3089 3051 dev_dbg(&bus->dev, "scanning bus\n"); 3090 3052 3091 3053 /* Go find them, Rover! */ 3092 - for (devfn = 0; devfn < 256; devfn += 8) 3093 - pci_scan_slot(bus, devfn); 3054 + for (devnr = 0; devnr < PCI_MAX_NR_DEVS; devnr++) 3055 + pci_scan_slot(bus, PCI_DEVFN(devnr, 0)); 3094 3056 3095 3057 /* Reserve buses for SR-IOV capability */ 3096 3058 used_buses = pci_iov_bus_range(bus); ··· 3507 3469 * pci_rescan_bus(), pci_rescan_bus_bridge_resize() and PCI device removal 3508 3470 * routines should always be executed under this mutex. 
3509 3471 */ 3510 - static DEFINE_MUTEX(pci_rescan_remove_lock); 3472 + DEFINE_MUTEX(pci_rescan_remove_lock); 3511 3473 3512 3474 void pci_lock_rescan_remove(void) 3513 3475 {
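The scan loop above iterates device numbers instead of stepping raw devfn values by 8; both visit the same 32 slots because PCI_DEVFN() packs the device number into bits 7:3 and the function number into bits 2:0. For illustration:

	/* PCI_DEVFN(slot, fn) == ((slot & 0x1f) << 3) | (fn & 0x7) */
	static_assert(PCI_DEVFN(3, 0) == 0x18);
	static_assert(PCI_DEVFN(0x1f, 7) == 0xff);	/* last of the 256 devfns */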
+3 -9
drivers/pci/pwrctrl/slot.c
··· 49 49 ret = regulator_bulk_enable(slot->num_supplies, slot->supplies); 50 50 if (ret < 0) { 51 51 dev_err_probe(dev, ret, "Failed to enable slot regulators\n"); 52 - goto err_regulator_free; 52 + regulator_bulk_free(slot->num_supplies, slot->supplies); 53 + return ret; 53 54 } 54 55 55 56 ret = devm_add_action_or_reset(dev, devm_pci_pwrctrl_slot_power_off, 56 57 slot); 57 58 if (ret) 58 - goto err_regulator_disable; 59 + return ret; 59 60 60 61 clk = devm_clk_get_optional_enabled(dev, NULL); 61 62 if (IS_ERR(clk)) { ··· 71 70 return dev_err_probe(dev, ret, "Failed to register pwrctrl driver\n"); 72 71 73 72 return 0; 74 - 75 - err_regulator_disable: 76 - regulator_bulk_disable(slot->num_supplies, slot->supplies); 77 - err_regulator_free: 78 - regulator_bulk_free(slot->num_supplies, slot->supplies); 79 - 80 - return ret; 81 73 } 82 74 83 75 static const struct of_device_id pci_pwrctrl_slot_of_match[] = {
+1
drivers/pci/quirks.c
··· 2717 2717 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8131_BRIDGE, quirk_disable_msi); 2718 2718 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, 0xa238, quirk_disable_msi); 2719 2719 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x5a3f, quirk_disable_msi); 2720 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_RDC, 0x1031, quirk_disable_msi); 2720 2721 2721 2722 /* 2722 2723 * The APC bridge device in AMD 780 family northbridges has some random
+3
drivers/pci/remove.c
··· 31 31 return; 32 32 33 33 of_device_unregister(pdev); 34 + put_device(&pdev->dev); 35 + 34 36 of_node_clear_flag(np, OF_POPULATED); 35 37 } 36 38 ··· 140 138 */ 141 139 void pci_stop_and_remove_bus_device(struct pci_dev *dev) 142 140 { 141 + lockdep_assert_held(&pci_rescan_remove_lock); 143 142 pci_stop_bus_device(dev); 144 143 pci_remove_bus_device(dev); 145 144 }
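With the lockdep assertion above, an unlocked caller of pci_stop_and_remove_bus_device() is flagged immediately. Callers that do not already hold the rescan/remove lock can use the existing self-locking variant instead:

	/* Equivalent to taking pci_lock_rescan_remove() around the call */
	pci_stop_and_remove_bus_device_locked(pdev);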
+441 -414
drivers/pci/setup-bus.c
··· 28 28 #include <linux/acpi.h> 29 29 #include "pci.h" 30 30 31 + #define PCI_RES_TYPE_MASK \ 32 + (IORESOURCE_IO | IORESOURCE_MEM | IORESOURCE_PREFETCH |\ 33 + IORESOURCE_MEM_64) 34 + 31 35 unsigned int pci_flags; 32 36 EXPORT_SYMBOL_GPL(pci_flags); 33 37 ··· 140 136 res->flags = dev_res->flags; 141 137 } 142 138 139 + /* 140 + * Helper function for sizing routines. Assigned resources have non-NULL 141 + * parent resource. 142 + * 143 + * Return first unassigned resource of the correct type. If there is none, 144 + * return first assigned resource of the correct type. If none of the 145 + * above, return NULL. 146 + * 147 + * Returning an assigned resource of the correct type allows the caller to 148 + * distinguish between already assigned and no resource of the correct type. 149 + */ 150 + static struct resource *find_bus_resource_of_type(struct pci_bus *bus, 151 + unsigned long type_mask, 152 + unsigned long type) 153 + { 154 + struct resource *r, *r_assigned = NULL; 155 + 156 + pci_bus_for_each_resource(bus, r) { 157 + if (!r || r == &ioport_resource || r == &iomem_resource) 158 + continue; 159 + 160 + if ((r->flags & type_mask) != type) 161 + continue; 162 + 163 + if (!r->parent) 164 + return r; 165 + if (!r_assigned) 166 + r_assigned = r; 167 + } 168 + return r_assigned; 169 + } 170 + 171 + /** 172 + * pbus_select_window_for_type - Select bridge window for a resource type 173 + * @bus: PCI bus 174 + * @type: Resource type (resource flags can be passed as is) 175 + * 176 + * Select the bridge window based on a resource @type. 177 + * 178 + * For memory resources, the selection is done as follows: 179 + * 180 + * Any non-prefetchable resource is put into the non-prefetchable window. 181 + * 182 + * If there is no prefetchable MMIO window, put all memory resources into the 183 + * non-prefetchable window. 184 + * 185 + * If there's a 64-bit prefetchable MMIO window, put all 64-bit prefetchable 186 + * resources into it and place 32-bit prefetchable memory into the 187 + * non-prefetchable window. 188 + * 189 + * Otherwise, put all prefetchable resources into the prefetchable window. 190 + * 191 + * Return: the bridge window resource or NULL if no bridge window is found. 
192 + */ 193 + static struct resource *pbus_select_window_for_type(struct pci_bus *bus, 194 + unsigned long type) 195 + { 196 + int iores_type = type & IORESOURCE_TYPE_BITS; /* w/o 64bit & pref */ 197 + struct resource *mmio, *mmio_pref, *win; 198 + 199 + type &= PCI_RES_TYPE_MASK; /* with 64bit & pref */ 200 + 201 + if ((iores_type != IORESOURCE_IO) && (iores_type != IORESOURCE_MEM)) 202 + return NULL; 203 + 204 + if (pci_is_root_bus(bus)) { 205 + win = find_bus_resource_of_type(bus, type, type); 206 + if (win) 207 + return win; 208 + 209 + type &= ~IORESOURCE_MEM_64; 210 + win = find_bus_resource_of_type(bus, type, type); 211 + if (win) 212 + return win; 213 + 214 + type &= ~IORESOURCE_PREFETCH; 215 + return find_bus_resource_of_type(bus, type, type); 216 + } 217 + 218 + switch (iores_type) { 219 + case IORESOURCE_IO: 220 + return pci_bus_resource_n(bus, PCI_BUS_BRIDGE_IO_WINDOW); 221 + 222 + case IORESOURCE_MEM: 223 + mmio = pci_bus_resource_n(bus, PCI_BUS_BRIDGE_MEM_WINDOW); 224 + mmio_pref = pci_bus_resource_n(bus, PCI_BUS_BRIDGE_PREF_MEM_WINDOW); 225 + 226 + if (!(type & IORESOURCE_PREFETCH) || 227 + !(mmio_pref->flags & IORESOURCE_MEM)) 228 + return mmio; 229 + 230 + if ((type & IORESOURCE_MEM_64) || 231 + !(mmio_pref->flags & IORESOURCE_MEM_64)) 232 + return mmio_pref; 233 + 234 + return mmio; 235 + default: 236 + return NULL; 237 + } 238 + } 239 + 240 + /** 241 + * pbus_select_window - Select bridge window for a resource 242 + * @bus: PCI bus 243 + * @res: Resource 244 + * 245 + * Select the bridge window for @res. If the resource is already assigned, 246 + * return the current bridge window. 247 + * 248 + * For memory resources, the selection is done as follows: 249 + * 250 + * Any non-prefetchable resource is put into the non-prefetchable window. 251 + * 252 + * If there is no prefetchable MMIO window, put all memory resources into the 253 + * non-prefetchable window. 254 + * 255 + * If there's a 64-bit prefetchable MMIO window, put all 64-bit prefetchable 256 + * resources into it and place 32-bit prefetchable memory into the 257 + * non-prefetchable window. 258 + * 259 + * Otherwise, put all prefetchable resources into the prefetchable window. 260 + * 261 + * Return: the bridge window resource or NULL if no bridge window is found. 
262 + */ 263 + struct resource *pbus_select_window(struct pci_bus *bus, 264 + const struct resource *res) 265 + { 266 + if (res->parent) 267 + return res->parent; 268 + 269 + return pbus_select_window_for_type(bus, res->flags); 270 + } 271 + 143 272 static bool pdev_resources_assignable(struct pci_dev *dev) 144 273 { 145 274 u16 class = dev->class >> 8, command; ··· 291 154 return true; 292 155 } 293 156 157 + static bool pdev_resource_assignable(struct pci_dev *dev, struct resource *res) 158 + { 159 + int idx = pci_resource_num(dev, res); 160 + 161 + if (!res->flags) 162 + return false; 163 + 164 + if (idx >= PCI_BRIDGE_RESOURCES && idx <= PCI_BRIDGE_RESOURCE_END && 165 + res->flags & IORESOURCE_DISABLED) 166 + return false; 167 + 168 + return true; 169 + } 170 + 171 + static bool pdev_resource_should_fit(struct pci_dev *dev, struct resource *res) 172 + { 173 + if (res->parent) 174 + return false; 175 + 176 + if (res->flags & IORESOURCE_PCI_FIXED) 177 + return false; 178 + 179 + return pdev_resource_assignable(dev, res); 180 + } 181 + 294 182 /* Sort resources by alignment */ 295 183 static void pdev_sort_resources(struct pci_dev *dev, struct list_head *head) 296 184 { ··· 331 169 resource_size_t r_align; 332 170 struct list_head *n; 333 171 334 - if (r->flags & IORESOURCE_PCI_FIXED) 335 - continue; 336 - 337 - if (!(r->flags) || r->parent) 172 + if (!pdev_resource_should_fit(dev, r)) 338 173 continue; 339 174 340 175 r_align = pci_resource_alignment(dev, r); ··· 380 221 return false; 381 222 } 382 223 383 - static inline void reset_resource(struct resource *res) 224 + static inline void reset_resource(struct pci_dev *dev, struct resource *res) 384 225 { 226 + int idx = pci_resource_num(dev, res); 227 + 228 + if (idx >= PCI_BRIDGE_RESOURCES && idx <= PCI_BRIDGE_RESOURCE_END) { 229 + res->flags |= IORESOURCE_UNSET; 230 + return; 231 + } 232 + 385 233 res->start = 0; 386 234 res->end = 0; 387 235 res->flags = 0; ··· 550 384 } 551 385 552 386 /* Return: @true if assignment of a required resource failed. */ 553 - static bool pci_required_resource_failed(struct list_head *fail_head) 387 + static bool pci_required_resource_failed(struct list_head *fail_head, 388 + unsigned long type) 554 389 { 555 390 struct pci_dev_resource *fail_res; 556 391 392 + type &= PCI_RES_TYPE_MASK; 393 + 557 394 list_for_each_entry(fail_res, fail_head, list) { 558 395 int idx = pci_resource_num(fail_res->dev, fail_res->res); 396 + 397 + if (type && (fail_res->flags & PCI_RES_TYPE_MASK) != type) 398 + continue; 559 399 560 400 if (!pci_resource_is_optional(fail_res->dev, idx)) 561 401 return true; ··· 603 431 struct pci_dev_resource *dev_res, *tmp_res, *dev_res2; 604 432 struct resource *res; 605 433 struct pci_dev *dev; 606 - const char *res_name; 607 - int idx; 608 434 unsigned long fail_type; 609 435 resource_size_t add_align, align; 610 436 ··· 674 504 } 675 505 676 506 /* Without realloc_head and only optional fails, nothing more to do. 
*/ 677 - if (!pci_required_resource_failed(&local_fail_head) && 507 + if (!pci_required_resource_failed(&local_fail_head, 0) && 678 508 list_empty(realloc_head)) { 679 509 list_for_each_entry(save_res, &save_head, list) { 680 510 struct resource *res = save_res->res; ··· 710 540 res = dev_res->res; 711 541 dev = dev_res->dev; 712 542 713 - if (!res->parent) 714 - continue; 715 - 716 - idx = pci_resource_num(dev, res); 717 - res_name = pci_resource_name(dev, idx); 718 - pci_dbg(dev, "%s %pR: releasing\n", res_name, res); 719 - 720 - release_resource(res); 543 + pci_release_resource(dev, pci_resource_num(dev, res)); 721 544 restore_dev_resource(dev_res); 722 545 } 723 546 /* Restore start/end/flags from saved list */ ··· 740 577 0 /* don't care */); 741 578 } 742 579 743 - reset_resource(res); 580 + reset_resource(dev, res); 744 581 } 745 582 746 583 free_list(head); ··· 781 618 782 619 res = bus->resource[0]; 783 620 pcibios_resource_to_bus(bridge->bus, &region, res); 784 - if (res->flags & IORESOURCE_IO) { 621 + if (res->parent && res->flags & IORESOURCE_IO) { 785 622 /* 786 623 * The IO resource is allocated a range twice as large as it 787 624 * would normally need. This allows us to set both IO regs. ··· 795 632 796 633 res = bus->resource[1]; 797 634 pcibios_resource_to_bus(bridge->bus, &region, res); 798 - if (res->flags & IORESOURCE_IO) { 635 + if (res->parent && res->flags & IORESOURCE_IO) { 799 636 pci_info(bridge, " bridge window %pR\n", res); 800 637 pci_write_config_dword(bridge, PCI_CB_IO_BASE_1, 801 638 region.start); ··· 805 642 806 643 res = bus->resource[2]; 807 644 pcibios_resource_to_bus(bridge->bus, &region, res); 808 - if (res->flags & IORESOURCE_MEM) { 645 + if (res->parent && res->flags & IORESOURCE_MEM) { 809 646 pci_info(bridge, " bridge window %pR\n", res); 810 647 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_0, 811 648 region.start); ··· 815 652 816 653 res = bus->resource[3]; 817 654 pcibios_resource_to_bus(bridge->bus, &region, res); 818 - if (res->flags & IORESOURCE_MEM) { 655 + if (res->parent && res->flags & IORESOURCE_MEM) { 819 656 pci_info(bridge, " bridge window %pR\n", res); 820 657 pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_1, 821 658 region.start); ··· 856 693 res = &bridge->resource[PCI_BRIDGE_IO_WINDOW]; 857 694 res_name = pci_resource_name(bridge, PCI_BRIDGE_IO_WINDOW); 858 695 pcibios_resource_to_bus(bridge->bus, &region, res); 859 - if (res->flags & IORESOURCE_IO) { 696 + if (res->parent && res->flags & IORESOURCE_IO) { 860 697 pci_read_config_word(bridge, PCI_IO_BASE, &l); 861 698 io_base_lo = (region.start >> 8) & io_mask; 862 699 io_limit_lo = (region.end >> 8) & io_mask; ··· 888 725 res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW]; 889 726 res_name = pci_resource_name(bridge, PCI_BRIDGE_MEM_WINDOW); 890 727 pcibios_resource_to_bus(bridge->bus, &region, res); 891 - if (res->flags & IORESOURCE_MEM) { 728 + if (res->parent && res->flags & IORESOURCE_MEM) { 892 729 l = (region.start >> 16) & 0xfff0; 893 730 l |= region.end & 0xfff00000; 894 731 pci_info(bridge, " %s %pR\n", res_name, res); ··· 917 754 res = &bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW]; 918 755 res_name = pci_resource_name(bridge, PCI_BRIDGE_PREF_MEM_WINDOW); 919 756 pcibios_resource_to_bus(bridge->bus, &region, res); 920 - if (res->flags & IORESOURCE_PREFETCH) { 757 + if (res->parent && res->flags & IORESOURCE_PREFETCH) { 921 758 l = (region.start >> 16) & 0xfff0; 922 759 l |= region.end & 0xfff00000; 923 760 if (res->flags & IORESOURCE_MEM_64) { ··· 953 790 
pci_write_config_word(bridge, PCI_BRIDGE_CONTROL, bus->bridge_ctl); 954 791 } 955 792 793 + static void pci_setup_one_bridge_window(struct pci_dev *bridge, int resno) 794 + { 795 + switch (resno) { 796 + case PCI_BRIDGE_IO_WINDOW: 797 + pci_setup_bridge_io(bridge); 798 + break; 799 + case PCI_BRIDGE_MEM_WINDOW: 800 + pci_setup_bridge_mmio(bridge); 801 + break; 802 + case PCI_BRIDGE_PREF_MEM_WINDOW: 803 + pci_setup_bridge_mmio_pref(bridge); 804 + break; 805 + default: 806 + return; 807 + } 808 + } 809 + 956 810 void __weak pcibios_setup_bridge(struct pci_bus *bus, unsigned long type) 957 811 { 958 812 } ··· 986 806 987 807 int pci_claim_bridge_resource(struct pci_dev *bridge, int i) 988 808 { 809 + int ret = -EINVAL; 810 + 989 811 if (i < PCI_BRIDGE_RESOURCES || i > PCI_BRIDGE_RESOURCE_END) 990 812 return 0; 991 813 ··· 997 815 if ((bridge->class >> 8) != PCI_CLASS_BRIDGE_PCI) 998 816 return 0; 999 817 1000 - if (!pci_bus_clip_resource(bridge, i)) 1001 - return -EINVAL; /* Clipping didn't change anything */ 1002 - 1003 - switch (i) { 1004 - case PCI_BRIDGE_IO_WINDOW: 1005 - pci_setup_bridge_io(bridge); 1006 - break; 1007 - case PCI_BRIDGE_MEM_WINDOW: 1008 - pci_setup_bridge_mmio(bridge); 1009 - break; 1010 - case PCI_BRIDGE_PREF_MEM_WINDOW: 1011 - pci_setup_bridge_mmio_pref(bridge); 1012 - break; 1013 - default: 818 + if (i > PCI_BRIDGE_PREF_MEM_WINDOW) 1014 819 return -EINVAL; 1015 - } 1016 820 1017 - if (pci_claim_resource(bridge, i) == 0) 1018 - return 0; /* Claimed a smaller window */ 821 + /* Try to clip the resource and claim the smaller window */ 822 + if (pci_bus_clip_resource(bridge, i)) 823 + ret = pci_claim_resource(bridge, i); 1019 824 1020 - return -EINVAL; 825 + pci_setup_one_bridge_window(bridge, i); 826 + 827 + return ret; 1021 828 } 1022 829 1023 830 /* ··· 1035 864 PCI_PREF_RANGE_TYPE_64; 1036 865 } 1037 866 } 1038 - } 1039 - 1040 - /* 1041 - * Helper function for sizing routines. Assigned resources have non-NULL 1042 - * parent resource. 1043 - * 1044 - * Return first unassigned resource of the correct type. If there is none, 1045 - * return first assigned resource of the correct type. If none of the 1046 - * above, return NULL. 1047 - * 1048 - * Returning an assigned resource of the correct type allows the caller to 1049 - * distinguish between already assigned and no resource of the correct type. 
1050 - */ 1051 - static struct resource *find_bus_resource_of_type(struct pci_bus *bus, 1052 - unsigned long type_mask, 1053 - unsigned long type) 1054 - { 1055 - struct resource *r, *r_assigned = NULL; 1056 - 1057 - pci_bus_for_each_resource(bus, r) { 1058 - if (r == &ioport_resource || r == &iomem_resource) 1059 - continue; 1060 - if (r && (r->flags & type_mask) == type && !r->parent) 1061 - return r; 1062 - if (r && (r->flags & type_mask) == type && !r_assigned) 1063 - r_assigned = r; 1064 - } 1065 - return r_assigned; 1066 867 } 1067 868 1068 869 static resource_size_t calculate_iosize(resource_size_t size, ··· 1127 984 struct list_head *realloc_head) 1128 985 { 1129 986 struct pci_dev *dev; 1130 - struct resource *b_res = find_bus_resource_of_type(bus, IORESOURCE_IO, 1131 - IORESOURCE_IO); 987 + struct resource *b_res = pbus_select_window_for_type(bus, IORESOURCE_IO); 1132 988 resource_size_t size = 0, size0 = 0, size1 = 0; 1133 989 resource_size_t children_add_size = 0; 1134 990 resource_size_t min_align, align; ··· 1148 1006 1149 1007 if (r->parent || !(r->flags & IORESOURCE_IO)) 1150 1008 continue; 1151 - r_size = resource_size(r); 1152 1009 1010 + if (!pdev_resource_assignable(dev, r)) 1011 + continue; 1012 + 1013 + r_size = resource_size(r); 1153 1014 if (r_size < SZ_1K) 1154 1015 /* Might be re-aligned for ISA */ 1155 1016 size += r_size; ··· 1171 1026 size0 = calculate_iosize(size, min_size, size1, 0, 0, 1172 1027 resource_size(b_res), min_align); 1173 1028 1029 + if (size0) 1030 + b_res->flags &= ~IORESOURCE_DISABLED; 1031 + 1174 1032 size1 = size0; 1175 1033 if (realloc_head && (add_size > 0 || children_add_size > 0)) { 1176 1034 size1 = calculate_iosize(size, min_size, size1, add_size, ··· 1185 1037 if (bus->self && (b_res->start || b_res->end)) 1186 1038 pci_info(bus->self, "disabling bridge window %pR to %pR (unused)\n", 1187 1039 b_res, &bus->busn_res); 1188 - b_res->flags = 0; 1040 + b_res->flags |= IORESOURCE_DISABLED; 1189 1041 return; 1190 1042 } 1191 1043 1192 1044 resource_set_range(b_res, min_align, size0); 1193 1045 b_res->flags |= IORESOURCE_STARTALIGN; 1194 1046 if (bus->self && size1 > size0 && realloc_head) { 1047 + b_res->flags &= ~IORESOURCE_DISABLED; 1195 1048 add_to_list(realloc_head, bus->self, b_res, size1-size0, 1196 1049 min_align); 1197 1050 pci_info(bus->self, "bridge window %pR to %pR add_size %llx\n", ··· 1226 1077 /** 1227 1078 * pbus_upstream_space_available - Check no upstream resource limits allocation 1228 1079 * @bus: The bus 1229 - * @mask: Mask the resource flag, then compare it with type 1230 - * @type: The type of resource from bridge 1080 + * @res: The resource to help select the correct bridge window 1231 1081 * @size: The size required from the bridge window 1232 1082 * @align: Required alignment for the resource 1233 1083 * 1234 - * Checks that @size can fit inside the upstream bridge resources that are 1235 - * already assigned. 1084 + * Check that @size can fit inside the upstream bridge resources that are 1085 + * already assigned. Select the upstream bridge window based on the type of 1086 + * @res. 1236 1087 * 1237 1088 * Return: %true if enough space is available on all assigned upstream 1238 1089 * resources. 
1239 1090 */ 1240 - static bool pbus_upstream_space_available(struct pci_bus *bus, unsigned long mask, 1241 - unsigned long type, resource_size_t size, 1091 + static bool pbus_upstream_space_available(struct pci_bus *bus, 1092 + struct resource *res, 1093 + resource_size_t size, 1242 1094 resource_size_t align) 1243 1095 { 1244 1096 struct resource_constraint constraint = { ··· 1247 1097 .align = align, 1248 1098 }; 1249 1099 struct pci_bus *downstream = bus; 1250 - struct resource *r; 1251 1100 1252 1101 while ((bus = bus->parent)) { 1253 1102 if (pci_is_root_bus(bus)) 1254 1103 break; 1255 1104 1256 - pci_bus_for_each_resource(bus, r) { 1257 - if (!r || !r->parent || (r->flags & mask) != type) 1258 - continue; 1259 - 1260 - if (resource_size(r) >= size) { 1261 - struct resource gap = {}; 1262 - 1263 - if (find_resource_space(r, &gap, size, &constraint) == 0) { 1264 - gap.flags = type; 1265 - pci_dbg(bus->self, 1266 - "Assigned bridge window %pR to %pR free space at %pR\n", 1267 - r, &bus->busn_res, &gap); 1268 - return true; 1269 - } 1270 - } 1271 - 1272 - if (bus->self) { 1273 - pci_info(bus->self, 1274 - "Assigned bridge window %pR to %pR cannot fit 0x%llx required for %s bridging to %pR\n", 1275 - r, &bus->busn_res, 1276 - (unsigned long long)size, 1277 - pci_name(downstream->self), 1278 - &downstream->busn_res); 1279 - } 1280 - 1105 + res = pbus_select_window(bus, res); 1106 + if (!res) 1281 1107 return false; 1108 + if (!res->parent) 1109 + continue; 1110 + 1111 + if (resource_size(res) >= size) { 1112 + struct resource gap = {}; 1113 + 1114 + if (find_resource_space(res, &gap, size, &constraint) == 0) { 1115 + gap.flags = res->flags; 1116 + pci_dbg(bus->self, 1117 + "Assigned bridge window %pR to %pR free space at %pR\n", 1118 + res, &bus->busn_res, &gap); 1119 + return true; 1120 + } 1282 1121 } 1122 + 1123 + if (bus->self) { 1124 + pci_info(bus->self, 1125 + "Assigned bridge window %pR to %pR cannot fit 0x%llx required for %s bridging to %pR\n", 1126 + res, &bus->busn_res, 1127 + (unsigned long long)size, 1128 + pci_name(downstream->self), 1129 + &downstream->busn_res); 1130 + } 1131 + 1132 + return false; 1283 1133 } 1284 1134 1285 1135 return true; ··· 1289 1139 * pbus_size_mem() - Size the memory window of a given bus 1290 1140 * 1291 1141 * @bus: The bus 1292 - * @mask: Mask the resource flag, then compare it with type 1293 - * @type: The type of free resource from bridge 1294 - * @type2: Second match type 1295 - * @type3: Third match type 1142 + * @type: The type of bridge resource 1296 1143 * @min_size: The minimum memory window that must be allocated 1297 1144 * @add_size: Additional optional memory window 1298 1145 * @realloc_head: Track the additional memory window on this list 1299 1146 * 1300 - * Calculate the size of the bus and minimal alignment which guarantees 1301 - * that all child resources fit in this size. 1147 + * Calculate the size of the bus resource for @type and minimal alignment 1148 + * which guarantees that all child resources fit in this size. 1302 1149 * 1303 - * Return -ENOSPC if there's no available bus resource of the desired 1304 - * type. Otherwise, set the bus resource start/end to indicate the 1305 - * required size, add things to realloc_head (if supplied), and return 0. 1150 + * Set the bus resource start/end to indicate the required size if there an 1151 + * available unassigned bus resource of the desired @type. 1152 + * 1153 + * Add optional resource requests to the @realloc_head list if it is 1154 + * supplied. 
  */
-static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
-                         unsigned long type, unsigned long type2,
-                         unsigned long type3, resource_size_t min_size,
+static void pbus_size_mem(struct pci_bus *bus, unsigned long type,
+                          resource_size_t min_size,
                           resource_size_t add_size,
                           struct list_head *realloc_head)
 {
···
         resource_size_t min_align, win_align, align, size, size0, size1 = 0;
         resource_size_t aligns[28]; /* Alignments from 1MB to 128TB */
         int order, max_order;
-        struct resource *b_res = find_bus_resource_of_type(bus,
-                                        mask | IORESOURCE_PREFETCH, type);
+        struct resource *b_res = pbus_select_window_for_type(bus, type);
         resource_size_t children_add_size = 0;
         resource_size_t children_add_align = 0;
         resource_size_t add_align = 0;
+        resource_size_t relaxed_align;
+        resource_size_t old_size;
 
         if (!b_res)
-                return -ENOSPC;
+                return;
 
         /* If resource is already assigned, nothing more to do */
         if (b_res->parent)
-                return 0;
+                return;
 
         memset(aligns, 0, sizeof(aligns));
         max_order = 0;
···
                 const char *r_name = pci_resource_name(dev, i);
                 resource_size_t r_size;
 
-                if (r->parent || (r->flags & IORESOURCE_PCI_FIXED) ||
-                    ((r->flags & mask) != type &&
-                     (r->flags & mask) != type2 &&
-                     (r->flags & mask) != type3))
+                if (!pdev_resources_assignable(dev) ||
+                    !pdev_resource_should_fit(dev, r))
                         continue;
+                if (b_res != pbus_select_window(bus, r))
+                        continue;
+
                 r_size = resource_size(r);
 
                 /* Put SRIOV requested res to the optional list */
···
                 }
         }
 
+        old_size = resource_size(b_res);
         win_align = window_alignment(bus, b_res->flags);
         min_align = calculate_mem_align(aligns, max_order);
         min_align = max(min_align, win_align);
-        size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), min_align);
+        size0 = calculate_memsize(size, min_size, 0, 0, old_size, min_align);
+
+        if (size0) {
+                resource_set_range(b_res, min_align, size0);
+                b_res->flags &= ~IORESOURCE_DISABLED;
+        }
 
         if (bus->self && size0 &&
-            !pbus_upstream_space_available(bus, mask | IORESOURCE_PREFETCH, type,
-                                           size0, min_align)) {
-                min_align = 1ULL << (max_order + __ffs(SZ_1M));
-                min_align = max(min_align, win_align);
-                size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), win_align);
+            !pbus_upstream_space_available(bus, b_res, size0, min_align)) {
+                relaxed_align = 1ULL << (max_order + __ffs(SZ_1M));
+                relaxed_align = max(relaxed_align, win_align);
+                min_align = min(min_align, relaxed_align);
+                size0 = calculate_memsize(size, min_size, 0, 0, old_size, win_align);
+                resource_set_range(b_res, min_align, size0);
                 pci_info(bus->self, "bridge window %pR to %pR requires relaxed alignment rules\n",
                          b_res, &bus->busn_res);
         }
···
         if (realloc_head && (add_size > 0 || children_add_size > 0)) {
                 add_align = max(min_align, add_align);
                 size1 = calculate_memsize(size, min_size, add_size, children_add_size,
-                                          resource_size(b_res), add_align);
+                                          old_size, add_align);
 
                 if (bus->self && size1 &&
-                    !pbus_upstream_space_available(bus, mask | IORESOURCE_PREFETCH, type,
-                                                   size1, add_align)) {
-                        min_align = 1ULL << (max_order + __ffs(SZ_1M));
-                        min_align = max(min_align, win_align);
+                    !pbus_upstream_space_available(bus, b_res, size1, add_align)) {
+                        relaxed_align = 1ULL << (max_order + __ffs(SZ_1M));
+                        relaxed_align = max(relaxed_align, win_align);
+                        min_align = min(min_align, relaxed_align);
                         size1 = calculate_memsize(size, min_size, add_size, children_add_size,
-                                                  resource_size(b_res), win_align);
+                                                  old_size, win_align);
                         pci_info(bus->self,
                                  "bridge window %pR to %pR requires relaxed alignment rules\n",
                                  b_res, &bus->busn_res);
···
                 if (bus->self && (b_res->start || b_res->end))
                         pci_info(bus->self, "disabling bridge window %pR to %pR (unused)\n",
                                  b_res, &bus->busn_res);
-                b_res->flags = 0;
-                return 0;
+                b_res->flags |= IORESOURCE_DISABLED;
+                return;
         }
 
         resource_set_range(b_res, min_align, size0);
         b_res->flags |= IORESOURCE_STARTALIGN;
         if (bus->self && size1 > size0 && realloc_head) {
+                b_res->flags &= ~IORESOURCE_DISABLED;
                 add_to_list(realloc_head, bus->self, b_res, size1-size0, add_align);
                 pci_info(bus->self, "bridge window %pR to %pR add_size %llx add_align %llx\n",
                          b_res, &bus->busn_res,
                          (unsigned long long) (size1 - size0),
                          (unsigned long long) add_align);
         }
-        return 0;
 }
 
 unsigned long pci_cardbus_resource_alignment(struct resource *res)
···
 void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head)
 {
         struct pci_dev *dev;
-        unsigned long mask, prefmask, type2 = 0, type3 = 0;
         resource_size_t additional_io_size = 0, additional_mmio_size = 0,
                         additional_mmio_pref_size = 0;
         struct resource *pref;
         struct pci_host_bridge *host;
-        int hdr_type, ret;
+        int hdr_type;
 
         list_for_each_entry(dev, &bus->devices, bus_list) {
                 struct pci_bus *b = dev->subordinate;
···
                 pbus_size_io(bus, realloc_head ? 0 : additional_io_size,
                              additional_io_size, realloc_head);
 
-                /*
-                 * If there's a 64-bit prefetchable MMIO window, compute
-                 * the size required to put all 64-bit prefetchable
-                 * resources in it.
-                 */
-                mask = IORESOURCE_MEM;
-                prefmask = IORESOURCE_MEM | IORESOURCE_PREFETCH;
-                if (pref && (pref->flags & IORESOURCE_MEM_64)) {
-                        prefmask |= IORESOURCE_MEM_64;
-                        ret = pbus_size_mem(bus, prefmask, prefmask,
-                                prefmask, prefmask,
-                                realloc_head ? 0 : additional_mmio_pref_size,
-                                additional_mmio_pref_size, realloc_head);
-
-                        /*
-                         * If successful, all non-prefetchable resources
-                         * and any 32-bit prefetchable resources will go in
-                         * the non-prefetchable window.
-                         */
-                        if (ret == 0) {
-                                mask = prefmask;
-                                type2 = prefmask & ~IORESOURCE_MEM_64;
-                                type3 = prefmask & ~IORESOURCE_PREFETCH;
-                        }
+                if (pref) {
+                        pbus_size_mem(bus,
+                                      IORESOURCE_MEM | IORESOURCE_PREFETCH |
+                                      (pref->flags & IORESOURCE_MEM_64),
+                                      realloc_head ? 0 : additional_mmio_pref_size,
+                                      additional_mmio_pref_size, realloc_head);
                 }
 
-                /*
-                 * If there is no 64-bit prefetchable window, compute the
-                 * size required to put all prefetchable resources in the
-                 * 32-bit prefetchable window (if there is one).
-                 */
-                if (!type2) {
-                        prefmask &= ~IORESOURCE_MEM_64;
-                        ret = pbus_size_mem(bus, prefmask, prefmask,
-                                prefmask, prefmask,
-                                realloc_head ? 0 : additional_mmio_pref_size,
-                                additional_mmio_pref_size, realloc_head);
-
-                        /*
-                         * If successful, only non-prefetchable resources
-                         * will go in the non-prefetchable window.
-                         */
-                        if (ret == 0)
-                                mask = prefmask;
-                        else
-                                additional_mmio_size += additional_mmio_pref_size;
-
-                        type2 = type3 = IORESOURCE_MEM;
-                }
-
-                /*
-                 * Compute the size required to put everything else in the
-                 * non-prefetchable window. This includes:
-                 *
-                 * - all non-prefetchable resources
-                 * - 32-bit prefetchable resources if there's a 64-bit
-                 *   prefetchable window or no prefetchable window at all
-                 * - 64-bit prefetchable resources if there's no prefetchable
-                 *   window at all
-                 *
-                 * Note that the strategy in __pci_assign_resource() must match
-                 * that used here. Specifically, we cannot put a 32-bit
-                 * prefetchable resource in a 64-bit prefetchable window.
-                 */
-                pbus_size_mem(bus, mask, IORESOURCE_MEM, type2, type3,
+                pbus_size_mem(bus, IORESOURCE_MEM,
                               realloc_head ? 0 : additional_mmio_size,
                               additional_mmio_size, realloc_head);
                 break;
···
         }
 }
 
-#define PCI_RES_TYPE_MASK \
-        (IORESOURCE_IO | IORESOURCE_MEM | IORESOURCE_PREFETCH |\
-         IORESOURCE_MEM_64)
-
 static void pci_bridge_release_resources(struct pci_bus *bus,
-                                         unsigned long type)
+                                         struct resource *b_win)
 {
         struct pci_dev *dev = bus->self;
-        struct resource *r;
-        unsigned int old_flags;
-        struct resource *b_res;
-        int idx = 1;
+        int idx, ret;
 
-        b_res = &dev->resource[PCI_BRIDGE_RESOURCES];
-
-        /*
-         * 1. If IO port assignment fails, release bridge IO port.
-         * 2. If non pref MMIO assignment fails, release bridge nonpref MMIO.
-         * 3. If 64bit pref MMIO assignment fails, and bridge pref is 64bit,
-         *    release bridge pref MMIO.
-         * 4. If pref MMIO assignment fails, and bridge pref is 32bit,
-         *    release bridge pref MMIO.
-         * 5. If pref MMIO assignment fails, and bridge pref is not
-         *    assigned, release bridge nonpref MMIO.
-         */
-        if (type & IORESOURCE_IO)
-                idx = 0;
-        else if (!(type & IORESOURCE_PREFETCH))
-                idx = 1;
-        else if ((type & IORESOURCE_MEM_64) &&
-                 (b_res[2].flags & IORESOURCE_MEM_64))
-                idx = 2;
-        else if (!(b_res[2].flags & IORESOURCE_MEM_64) &&
-                 (b_res[2].flags & IORESOURCE_PREFETCH))
-                idx = 2;
-        else
-                idx = 1;
-
-        r = &b_res[idx];
-
-        if (!r->parent)
+        if (!b_win->parent)
                 return;
 
-        /* If there are children, release them all */
-        release_child_resources(r);
-        if (!release_resource(r)) {
-                type = old_flags = r->flags & PCI_RES_TYPE_MASK;
-                pci_info(dev, "resource %d %pR released\n",
-                         PCI_BRIDGE_RESOURCES + idx, r);
-                /* Keep the old size */
-                resource_set_range(r, 0, resource_size(r));
-                r->flags = 0;
+        idx = pci_resource_num(dev, b_win);
 
-                /* Avoiding touch the one without PREF */
-                if (type & IORESOURCE_PREFETCH)
-                        type = IORESOURCE_PREFETCH;
-                __pci_setup_bridge(bus, type);
-                /* For next child res under same bridge */
-                r->flags = old_flags;
-        }
+        /* If there are children, release them all */
+        release_child_resources(b_win);
+
+        ret = pci_release_resource(dev, idx);
+        if (ret)
+                return;
+
+        pci_setup_one_bridge_window(dev, idx);
 }
 
 enum release_type {
···
  * a larger window later.
  */
 static void pci_bus_release_bridge_resources(struct pci_bus *bus,
-                                             unsigned long type,
+                                             struct resource *b_win,
                                              enum release_type rel_type)
 {
         struct pci_dev *dev;
···
 
         list_for_each_entry(dev, &bus->devices, bus_list) {
                 struct pci_bus *b = dev->subordinate;
+                struct resource *res;
+
                 if (!b)
                         continue;
 
···
                 if ((dev->class >> 8) != PCI_CLASS_BRIDGE_PCI)
                         continue;
 
-                if (rel_type == whole_subtree)
-                        pci_bus_release_bridge_resources(b, type,
-                                                         whole_subtree);
+                if (rel_type != whole_subtree)
+                        continue;
+
+                pci_bus_for_each_resource(b, res) {
+                        if (res->parent != b_win)
+                                continue;
+
+                        pci_bus_release_bridge_resources(b, res, rel_type);
+                }
         }
 
         if (pci_is_root_bus(bus))
···
                 return;
 
         if ((rel_type == whole_subtree) || is_leaf_bridge)
-                pci_bridge_release_resources(bus, type);
+                pci_bridge_release_resources(bus, b_win);
 }
 
 static void pci_bus_dump_res(struct pci_bus *bus)
···
                 avail->start = min(avail->start + tmp, avail->end + 1);
 }
 
-static void remove_dev_resources(struct pci_dev *dev, struct resource *io,
-                                 struct resource *mmio,
-                                 struct resource *mmio_pref)
+static void remove_dev_resources(struct pci_dev *dev,
+                                 struct resource available[PCI_P2P_BRIDGE_RESOURCE_NUM])
 {
-        struct resource *res;
+        struct resource *res, *b_win;
+        int idx;
 
         pci_dev_for_each_resource(dev, res) {
-                if (resource_type(res) == IORESOURCE_IO) {
-                        remove_dev_resource(io, dev, res);
-                } else if (resource_type(res) == IORESOURCE_MEM) {
+                b_win = pbus_select_window(dev->bus, res);
+                if (!b_win)
+                        continue;
 
-                        /*
-                         * Make sure prefetchable memory is reduced from
-                         * the correct resource. Specifically we put 32-bit
-                         * prefetchable memory in non-prefetchable window
-                         * if there is a 64-bit prefetchable window.
-                         *
-                         * See comments in __pci_bus_size_bridges() for
-                         * more information.
-                         */
-                        if ((res->flags & IORESOURCE_PREFETCH) &&
-                            ((res->flags & IORESOURCE_MEM_64) ==
-                             (mmio_pref->flags & IORESOURCE_MEM_64)))
-                                remove_dev_resource(mmio_pref, dev, res);
-                        else
-                                remove_dev_resource(mmio, dev, res);
-                }
+                idx = pci_resource_num(dev->bus->self, b_win);
+                idx -= PCI_BRIDGE_RESOURCES;
+
+                remove_dev_resource(&available[idx], dev, res);
         }
 }
···
  * shared with the bridges.
  */
 static void pci_bus_distribute_available_resources(struct pci_bus *bus,
-                                            struct list_head *add_list,
-                                            struct resource io,
-                                            struct resource mmio,
-                                            struct resource mmio_pref)
+                        struct list_head *add_list,
+                        struct resource available_in[PCI_P2P_BRIDGE_RESOURCE_NUM])
 {
+        struct resource available[PCI_P2P_BRIDGE_RESOURCE_NUM];
         unsigned int normal_bridges = 0, hotplug_bridges = 0;
-        struct resource *io_res, *mmio_res, *mmio_pref_res;
         struct pci_dev *dev, *bridge = bus->self;
-        resource_size_t io_per_b, mmio_per_b, mmio_pref_per_b, align;
+        resource_size_t per_bridge[PCI_P2P_BRIDGE_RESOURCE_NUM];
+        resource_size_t align;
+        int i;
 
-        io_res = &bridge->resource[PCI_BRIDGE_IO_WINDOW];
-        mmio_res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW];
-        mmio_pref_res = &bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
+        for (i = 0; i < PCI_P2P_BRIDGE_RESOURCE_NUM; i++) {
+                struct resource *res = pci_bus_resource_n(bus, i);
 
-        /*
-         * The alignment of this bridge is yet to be considered, hence it must
-         * be done now before extending its bridge window.
-         */
-        align = pci_resource_alignment(bridge, io_res);
-        if (!io_res->parent && align)
-                io.start = min(ALIGN(io.start, align), io.end + 1);
+                available[i] = available_in[i];
 
-        align = pci_resource_alignment(bridge, mmio_res);
-        if (!mmio_res->parent && align)
-                mmio.start = min(ALIGN(mmio.start, align), mmio.end + 1);
+                /*
+                 * The alignment of this bridge is yet to be considered,
+                 * hence it must be done now before extending its bridge
+                 * window.
+                 */
+                align = pci_resource_alignment(bridge, res);
+                if (!res->parent && align)
+                        available[i].start = min(ALIGN(available[i].start, align),
+                                                 available[i].end + 1);
 
-        align = pci_resource_alignment(bridge, mmio_pref_res);
-        if (!mmio_pref_res->parent && align)
-                mmio_pref.start = min(ALIGN(mmio_pref.start, align),
-                                      mmio_pref.end + 1);
-
-        /*
-         * Now that we have adjusted for alignment, update the bridge window
-         * resources to fill as much remaining resource space as possible.
-         */
-        adjust_bridge_window(bridge, io_res, add_list, resource_size(&io));
-        adjust_bridge_window(bridge, mmio_res, add_list, resource_size(&mmio));
-        adjust_bridge_window(bridge, mmio_pref_res, add_list,
-                             resource_size(&mmio_pref));
+                /*
+                 * Now that we have adjusted for alignment, update the
+                 * bridge window resources to fill as much remaining
+                 * resource space as possible.
+                 */
+                adjust_bridge_window(bridge, res, add_list,
+                                     resource_size(&available[i]));
+        }
 
         /*
          * Calculate how many hotplug bridges and normal bridges there
···
          */
         list_for_each_entry(dev, &bus->devices, bus_list) {
                 if (!dev->is_virtfn)
-                        remove_dev_resources(dev, &io, &mmio, &mmio_pref);
+                        remove_dev_resources(dev, available);
         }
 
         /*
···
          * split between non-hotplug bridges. This is to allow possible
          * hotplug bridges below them to get the extra space as well.
          */
-        if (hotplug_bridges) {
-                io_per_b = div64_ul(resource_size(&io), hotplug_bridges);
-                mmio_per_b = div64_ul(resource_size(&mmio), hotplug_bridges);
-                mmio_pref_per_b = div64_ul(resource_size(&mmio_pref),
-                                           hotplug_bridges);
-        } else {
-                io_per_b = div64_ul(resource_size(&io), normal_bridges);
-                mmio_per_b = div64_ul(resource_size(&mmio), normal_bridges);
-                mmio_pref_per_b = div64_ul(resource_size(&mmio_pref),
-                                           normal_bridges);
+        for (i = 0; i < PCI_P2P_BRIDGE_RESOURCE_NUM; i++) {
+                per_bridge[i] = div64_ul(resource_size(&available[i]),
+                                         hotplug_bridges ?: normal_bridges);
         }
 
         for_each_pci_bridge(dev, bus) {
···
                 if (hotplug_bridges && !dev->is_hotplug_bridge)
                         continue;
 
-                res = &dev->resource[PCI_BRIDGE_IO_WINDOW];
+                for (i = 0; i < PCI_P2P_BRIDGE_RESOURCE_NUM; i++) {
+                        res = pci_bus_resource_n(bus, i);
 
-                /*
-                 * Make sure the split resource space is properly aligned
-                 * for bridge windows (align it down to avoid going above
-                 * what is available).
-                 */
-                align = pci_resource_alignment(dev, res);
-                resource_set_size(&io, ALIGN_DOWN_IF_NONZERO(io_per_b, align));
+                        /*
+                         * Make sure the split resource space is properly
+                         * aligned for bridge windows (align it down to
+                         * avoid going above what is available).
+                         */
+                        align = pci_resource_alignment(dev, res);
+                        resource_set_size(&available[i],
+                                          ALIGN_DOWN_IF_NONZERO(per_bridge[i],
+                                                                align));
 
-                /*
-                 * The x_per_b holds the extra resource space that can be
-                 * added for each bridge but there is the minimal already
-                 * reserved as well so adjust x.start down accordingly to
-                 * cover the whole space.
-                 */
-                io.start -= resource_size(res);
+                        /*
+                         * The per_bridge holds the extra resource space
+                         * that can be added for each bridge but there is
+                         * the minimal already reserved as well so adjust
+                         * x.start down accordingly to cover the whole
+                         * space.
+                         */
+                        available[i].start -= resource_size(res);
+                }
 
-                res = &dev->resource[PCI_BRIDGE_MEM_WINDOW];
-                align = pci_resource_alignment(dev, res);
-                resource_set_size(&mmio,
-                                  ALIGN_DOWN_IF_NONZERO(mmio_per_b, align));
-                mmio.start -= resource_size(res);
+                pci_bus_distribute_available_resources(b, add_list, available);
 
-                res = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
-                align = pci_resource_alignment(dev, res);
-                resource_set_size(&mmio_pref,
-                                  ALIGN_DOWN_IF_NONZERO(mmio_pref_per_b, align));
-                mmio_pref.start -= resource_size(res);
-
-                pci_bus_distribute_available_resources(b, add_list, io, mmio,
-                                                       mmio_pref);
-
-                io.start += io.end + 1;
-                mmio.start += mmio.end + 1;
-                mmio_pref.start += mmio_pref.end + 1;
+                for (i = 0; i < PCI_P2P_BRIDGE_RESOURCE_NUM; i++)
+                        available[i].start += available[i].end + 1;
         }
 }
 
 static void pci_bridge_distribute_available_resources(struct pci_dev *bridge,
                                                       struct list_head *add_list)
 {
-        struct resource available_io, available_mmio, available_mmio_pref;
+        struct resource *res, available[PCI_P2P_BRIDGE_RESOURCE_NUM];
+        unsigned int i;
 
         if (!bridge->is_hotplug_bridge)
                 return;
···
         pci_dbg(bridge, "distributing available resources\n");
 
         /* Take the initial extra resources from the hotplug port */
-        available_io = bridge->resource[PCI_BRIDGE_IO_WINDOW];
-        available_mmio = bridge->resource[PCI_BRIDGE_MEM_WINDOW];
-        available_mmio_pref = bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
+        for (i = 0; i < PCI_P2P_BRIDGE_RESOURCE_NUM; i++) {
+                res = pci_resource_n(bridge, PCI_BRIDGE_RESOURCES + i);
+                available[i] = *res;
+        }
 
         pci_bus_distribute_available_resources(bridge->subordinate,
-                                               add_list, available_io,
-                                               available_mmio,
-                                               available_mmio_pref);
+                                               add_list, available);
 }
 
 static bool pci_bridge_resources_not_assigned(struct pci_dev *dev)
···
          * enough to contain child device resources.
          */
         list_for_each_entry(fail_res, fail_head, list) {
-                pci_bus_release_bridge_resources(fail_res->dev->bus,
-                                                 fail_res->flags & PCI_RES_TYPE_MASK,
-                                                 rel_type);
+                struct pci_bus *bus = fail_res->dev->bus;
+                struct resource *b_win;
+
+                b_win = pbus_select_window_for_type(bus, fail_res->flags);
+                if (!b_win)
+                        continue;
+                pci_bus_release_bridge_resources(bus, b_win, rel_type);
         }
 
         /* Restore size and flags */
-        list_for_each_entry(fail_res, fail_head, list) {
-                struct resource *res = fail_res->res;
-                struct pci_dev *dev = fail_res->dev;
-                int idx = pci_resource_num(dev, res);
-
+        list_for_each_entry(fail_res, fail_head, list)
                 restore_dev_resource(fail_res);
-
-                if (!pci_is_bridge(dev))
-                        continue;
-
-                if (idx >= PCI_BRIDGE_RESOURCES &&
-                    idx <= PCI_BRIDGE_RESOURCE_END)
-                        res->flags = 0;
-        }
 
         free_list(fail_head);
 }
···
 }
 EXPORT_SYMBOL_GPL(pci_assign_unassigned_bridge_resources);
 
-int pci_reassign_bridge_resources(struct pci_dev *bridge, unsigned long type)
+/*
+ * Walk to the root bus, find the bridge window relevant for @res and
+ * release it when possible. If the bridge window contains assigned
+ * resources, it cannot be released.
+ */
+int pbus_reassign_bridge_resources(struct pci_bus *bus, struct resource *res)
 {
+        unsigned long type = res->flags;
         struct pci_dev_resource *dev_res;
-        struct pci_dev *next;
+        struct pci_dev *bridge;
         LIST_HEAD(saved);
         LIST_HEAD(added);
         LIST_HEAD(failed);
···
 
         down_read(&pci_bus_sem);
 
-        /* Walk to the root hub, releasing bridge BARs when possible */
-        next = bridge;
-        do {
-                bridge = next;
-                for (i = PCI_BRIDGE_RESOURCES; i < PCI_BRIDGE_RESOURCE_END;
-                     i++) {
-                        struct resource *res = &bridge->resource[i];
-                        const char *res_name = pci_resource_name(bridge, i);
+        while (!pci_is_root_bus(bus)) {
+                bridge = bus->self;
+                res = pbus_select_window(bus, res);
+                if (!res)
+                        break;
 
-                        if ((res->flags ^ type) & PCI_RES_TYPE_MASK)
-                                continue;
+                i = pci_resource_num(bridge, res);
 
-                        /* Ignore BARs which are still in use */
-                        if (res->child)
-                                continue;
-
+                /* Ignore BARs which are still in use */
+                if (!res->child) {
                         ret = add_to_list(&saved, bridge, res, 0, 0);
                         if (ret)
                                 goto cleanup;
 
-                        pci_info(bridge, "%s %pR: releasing\n", res_name, res);
+                        pci_release_resource(bridge, i);
+                } else {
+                        const char *res_name = pci_resource_name(bridge, i);
 
-                        if (res->parent)
-                                release_resource(res);
-                        res->start = 0;
-                        res->end = 0;
-                        break;
+                        pci_warn(bridge,
+                                 "%s %pR: was not released (still contains assigned resources)\n",
+                                 res_name, res);
                 }
-                if (i == PCI_BRIDGE_RESOURCE_END)
-                        break;
 
-                next = bridge->bus ? bridge->bus->self : NULL;
-        } while (next);
+                bus = bus->parent;
+        }
 
         if (list_empty(&saved)) {
                 up_read(&pci_bus_sem);
···
         free_list(&added);
 
         if (!list_empty(&failed)) {
-                ret = -ENOSPC;
-                goto cleanup;
+                if (pci_required_resource_failed(&failed, type)) {
+                        ret = -ENOSPC;
+                        goto cleanup;
+                }
+                /* Only resources with unrelated types failed (again) */
+                free_list(&failed);
         }
 
         list_for_each_entry(dev_res, &saved, list) {
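With pbus_size_mem() collapsed to a single window type, a window that ends up
unused is now only marked IORESOURCE_DISABLED; its type flags survive, so the
same window can be selected again on a later reassignment pass. A minimal
sketch of what that state looks like to a consumer (helper name hypothetical,
not part of this merge):

#include <linux/ioport.h>
#include <linux/pci.h>

/*
 * Hypothetical helper: a bridge window that failed sizing keeps its type
 * flags but carries IORESOURCE_DISABLED, which distinguishes it from a
 * window that does not exist at all (flags == 0).
 */
static bool demo_bridge_window_usable(struct pci_dev *bridge, int resno)
{
        struct resource *res = pci_resource_n(bridge, resno);

        return res->flags && !(res->flags & IORESOURCE_DISABLED);
}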
+29 -17
drivers/pci/setup-res.c
···
 
         res->flags &= ~IORESOURCE_UNSET;
         res->flags &= ~IORESOURCE_STARTALIGN;
+        if (resno >= PCI_BRIDGE_RESOURCES && resno <= PCI_BRIDGE_RESOURCE_END)
+                res->flags &= ~IORESOURCE_DISABLED;
+
         pci_info(dev, "%s %pR: assigned\n", res_name, res);
         if (resno < PCI_BRIDGE_RESOURCES)
                 pci_update_resource(dev, resno);
···
         return 0;
 }
 
-void pci_release_resource(struct pci_dev *dev, int resno)
+int pci_release_resource(struct pci_dev *dev, int resno)
 {
         struct resource *res = pci_resource_n(dev, resno);
         const char *res_name = pci_resource_name(dev, resno);
+        int ret;
 
         if (!res->parent)
-                return;
+                return 0;
 
         pci_info(dev, "%s %pR: releasing\n", res_name, res);
 
-        release_resource(res);
+        ret = release_resource(res);
+        if (ret)
+                return ret;
         res->end = resource_size(res) - 1;
         res->start = 0;
         res->flags |= IORESOURCE_UNSET;
+
+        return 0;
 }
 EXPORT_SYMBOL(pci_release_resource);
···
 
         /* Check if the new config works by trying to assign everything. */
         if (dev->bus->self) {
-                ret = pci_reassign_bridge_resources(dev->bus->self, res->flags);
+                ret = pbus_reassign_bridge_resources(dev->bus, res);
                 if (ret)
                         goto error_resize;
         }
···
                 if (pci_resource_is_optional(dev, i))
                         continue;
 
-                if (r->flags & IORESOURCE_UNSET) {
-                        pci_err(dev, "%s %pR: not assigned; can't enable device\n",
-                                r_name, r);
-                        return -EINVAL;
+                if (i < PCI_BRIDGE_RESOURCES) {
+                        if (r->flags & IORESOURCE_UNSET) {
+                                pci_err(dev, "%s %pR: not assigned; can't enable device\n",
+                                        r_name, r);
+                                return -EINVAL;
+                        }
+
+                        if (!r->parent) {
+                                pci_err(dev, "%s %pR: not claimed; can't enable device\n",
+                                        r_name, r);
+                                return -EINVAL;
+                        }
                 }
 
-                if (!r->parent) {
-                        pci_err(dev, "%s %pR: not claimed; can't enable device\n",
-                                r_name, r);
-                        return -EINVAL;
+                if (r->parent) {
+                        if (r->flags & IORESOURCE_IO)
+                                cmd |= PCI_COMMAND_IO;
+                        if (r->flags & IORESOURCE_MEM)
+                                cmd |= PCI_COMMAND_MEMORY;
                 }
-
-                if (r->flags & IORESOURCE_IO)
-                        cmd |= PCI_COMMAND_IO;
-                if (r->flags & IORESOURCE_MEM)
-                        cmd |= PCI_COMMAND_MEMORY;
         }
 
         if (cmd != old_cmd) {
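Two behavioral changes above matter to Resizable BAR users: pci_release_resource()
now reports failure, and pci_resize_resource() validates the new layout through
pbus_reassign_bridge_resources(). A hedged sketch of the usual release-then-resize
flow (driver helper hypothetical; memory decoding is assumed to be disabled by
the caller):

#include <linux/pci.h>

/* Grow BAR 0 to the given Resizable BAR size encoding (0 = 1 MB, 1 = 2 MB, ...) */
static int demo_resize_bar0(struct pci_dev *pdev, int rebar_size)
{
        int ret;

        ret = pci_release_resource(pdev, 0);    /* now returns an error code */
        if (ret)
                return ret;

        /* May fail with -ENOSPC if no upstream bridge window can be regrown */
        return pci_resize_resource(pdev, 0, rebar_size);
}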
+11 -12
drivers/pci/switch/switchtec.c
···
 
         dev_dbg(&stdev->dev, "%s\n", __func__);
 
-        mutex_lock(&stdev->mrpc_mutex);
+        guard(mutex)(&stdev->mrpc_mutex);
         cancel_delayed_work(&stdev->mrpc_timeout);
         mrpc_complete_cmd(stdev);
-        mutex_unlock(&stdev->mrpc_mutex);
 }
 
 static void mrpc_error_complete_cmd(struct switchtec_dev *stdev)
···
         cancel_delayed_work_sync(&stdev->mrpc_timeout);
 
         /* Mark the hardware as unavailable and complete all completions */
-        mutex_lock(&stdev->mrpc_mutex);
-        stdev->alive = false;
+        scoped_guard (mutex, &stdev->mrpc_mutex) {
+                stdev->alive = false;
 
-        /* Wake up and kill any users waiting on an MRPC request */
-        list_for_each_entry_safe(stuser, tmpuser, &stdev->mrpc_queue, list) {
-                stuser->cmd_done = true;
-                wake_up_interruptible(&stuser->cmd_comp);
-                list_del_init(&stuser->list);
-                stuser_put(stuser);
+                /* Wake up and kill any users waiting on an MRPC request */
+                list_for_each_entry_safe(stuser, tmpuser, &stdev->mrpc_queue, list) {
+                        stuser->cmd_done = true;
+                        wake_up_interruptible(&stuser->cmd_comp);
+                        list_del_init(&stuser->list);
+                        stuser_put(stuser);
+                }
         }
-
-        mutex_unlock(&stdev->mrpc_mutex);
 
         /* Wake up any users waiting on event_wq */
         wake_up_interruptible(&stdev->event_wq);
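The conversion above uses the scope-based lock guards from <linux/cleanup.h>;
the mutex guard classes are declared alongside the mutex API itself. A minimal
sketch of the two forms (demo symbols hypothetical, not part of this merge):

#include <linux/cleanup.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(demo_lock);
static int demo_count;

/* guard() holds the mutex until the enclosing scope ends */
static int demo_inc(void)
{
        guard(mutex)(&demo_lock);
        return ++demo_count;    /* unlocked automatically on return */
}

/* scoped_guard() limits the critical section to the braced block */
static void demo_reset(void)
{
        scoped_guard(mutex, &demo_lock) {
                demo_count = 0;
        }                       /* unlocked here */
        /* lock-free work may follow */
}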
+13
drivers/pinctrl/core.c
···
 EXPORT_SYMBOL_GPL(pinctrl_pm_select_default_state);
 
 /**
+ * pinctrl_pm_select_init_state() - select init pinctrl state for PM
+ * @dev: device to select init state for
+ */
+int pinctrl_pm_select_init_state(struct device *dev)
+{
+        if (!dev->pins)
+                return 0;
+
+        return pinctrl_select_bound_state(dev, dev->pins->init_state);
+}
+EXPORT_SYMBOL_GPL(pinctrl_pm_select_init_state);
+
+/**
  * pinctrl_pm_select_sleep_state() - select sleep pinctrl state for PM
  * @dev: device to select sleep state for
  */
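With pinctrl_pm_select_init_state() exported, a driver can move its pins
through the "init" state before settling on "default" during resume, mirroring
the probe-time handling. A hedged sketch of resume-side usage (callback name
hypothetical, not part of this merge):

#include <linux/device.h>
#include <linux/pinctrl/consumer.h>

static int demo_resume(struct device *dev)
{
        int ret;

        /* No-op when the device has no bound pinctrl states */
        ret = pinctrl_pm_select_init_state(dev);
        if (ret)
                return ret;

        /* ... reprogram the controller while the pins sit in "init" ... */

        return pinctrl_pm_select_default_state(dev);
}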
+1 -1
drivers/scsi/lpfc/lpfc_init.c
···
  * as desired.
  *
  * Return codes
  * 	PCI_ERS_RESULT_CAN_RECOVER - can be recovered without reset
  * 	PCI_ERS_RESULT_NEED_RESET - need to reset before recovery
  * 	PCI_ERS_RESULT_DISCONNECT - device could not be recovered
  **/
···
(the first Return codes line previously read "can be recovered with reset_link")
-5
drivers/scsi/qla2xxx/qla_os.c
···
             "Slot Reset.\n");
 
         ha->pci_error_state = QLA_PCI_SLOT_RESET;
-        /* Workaround: qla2xxx driver which access hardware earlier
-         * needs error state to be pci_channel_io_online.
-         * Otherwise mailbox command timesout.
-         */
-        pdev->error_state = pci_channel_io_normal;
 
         pci_restore_state(pdev);
 
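This workaround becomes redundant once the PCI core restores pdev->error_state
to pci_channel_io_normal before invoking ->slot_reset(), so reset handlers can
issue config and mailbox traffic immediately. A hedged sketch of a handler
written against that behavior (demo names hypothetical, not part of this merge):

#include <linux/pci.h>

static int demo_hw_reinit(struct pci_dev *pdev)
{
        return 0;       /* stand-in for device-specific reinitialization */
}

static pci_ers_result_t demo_slot_reset(struct pci_dev *pdev)
{
        /* The channel is already pci_channel_io_normal at this point */
        pci_restore_state(pdev);

        if (demo_hw_reinit(pdev))
                return PCI_ERS_RESULT_DISCONNECT;

        return PCI_ERS_RESULT_RECOVERED;
}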
-5
include/linux/pci-p2pdma.h
···
                 u64 offset);
 int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients,
                              int num_clients, bool verbose);
-bool pci_has_p2pmem(struct pci_dev *pdev);
 struct pci_dev *pci_p2pmem_find_many(struct device **clients, int num_clients);
 void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size);
 void pci_free_p2pmem(struct pci_dev *pdev, void *addr, size_t size);
···
                 struct device **clients, int num_clients, bool verbose)
 {
         return -1;
-}
-static inline bool pci_has_p2pmem(struct pci_dev *pdev)
-{
-        return false;
 }
 static inline struct pci_dev *pci_p2pmem_find_many(struct device **clients,
                                                    int num_clients)
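With pci_has_p2pmem() removed, code that only probed for peer-to-peer memory
goes straight to provider lookup instead. A hedged sketch using the remaining
interfaces (function name hypothetical, not part of this merge):

#include <linux/pci-p2pdma.h>

/* Find a p2pmem provider usable by @client and allocate from it */
static void *demo_alloc_p2pmem(struct device *client, size_t size,
                               struct pci_dev **provider)
{
        *provider = pci_p2pmem_find_many(&client, 1);
        if (!*provider)
                return NULL;    /* no compatible provider */

        return pci_alloc_p2pmem(*provider, size);
}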
+4 -3
include/linux/pci.h
···
 #define PCI_CB_BRIDGE_MEM_1_WINDOW	(PCI_BRIDGE_RESOURCES + 3)
 
 	/* Total number of bridge resources for P2P and CardBus */
-#define PCI_BRIDGE_RESOURCE_NUM 4
+#define PCI_P2P_BRIDGE_RESOURCE_NUM	3
+#define PCI_BRIDGE_RESOURCE_NUM		4
 
 	/* Resources assigned to buses behind the bridge */
 	PCI_BRIDGE_RESOURCES,
···
 void pcibios_reset_secondary_bus(struct pci_dev *dev);
 void pci_update_resource(struct pci_dev *dev, int resno);
 int __must_check pci_assign_resource(struct pci_dev *dev, int i);
-void pci_release_resource(struct pci_dev *dev, int resno);
+int pci_release_resource(struct pci_dev *dev, int resno);
 static inline int pci_rebar_bytes_to_size(u64 bytes)
 {
 	bytes = roundup_pow_of_two(bytes);
···
 	return false;
 }
 
-#if defined(CONFIG_PCIEPORTBUS) || defined(CONFIG_EEH)
+#if defined(CONFIG_PCIEPORTBUS) || defined(CONFIG_EEH) || defined(CONFIG_S390)
 void pci_uevent_ers(struct pci_dev *pdev, enum pci_ers_result err_type);
 #endif
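PCI_P2P_BRIDGE_RESOURCE_NUM lets callers address only the three PCI-to-PCI
bridge windows (I/O, MMIO, prefetchable MMIO) without sweeping in the CardBus
slots. A small sketch (helper name hypothetical, not part of this merge):

#include <linux/pci.h>

static void demo_log_bridge_windows(struct pci_dev *bridge)
{
        int i;

        for (i = 0; i < PCI_P2P_BRIDGE_RESOURCE_NUM; i++) {
                struct resource *res =
                        pci_resource_n(bridge, PCI_BRIDGE_RESOURCES + i);

                pci_info(bridge, "bridge window %d: %pR\n", i, res);
        }
}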
+10
include/linux/pinctrl/consumer.h
···
 
 #ifdef CONFIG_PM
 int pinctrl_pm_select_default_state(struct device *dev);
+int pinctrl_pm_select_init_state(struct device *dev);
 int pinctrl_pm_select_sleep_state(struct device *dev);
 int pinctrl_pm_select_idle_state(struct device *dev);
 #else
 static inline int pinctrl_pm_select_default_state(struct device *dev)
+{
+	return 0;
+}
+static inline int pinctrl_pm_select_init_state(struct device *dev)
 {
 	return 0;
 }
···
 }
 
 static inline int pinctrl_pm_select_default_state(struct device *dev)
+{
+	return 0;
+}
+
+static inline int pinctrl_pm_select_init_state(struct device *dev)
 {
 	return 0;
 }
+10
include/uapi/linux/pci_regs.h
···
 
 /* Capability lists */
 
+#define PCI_CAP_ID_MASK		0x00ff	/* Capability ID mask */
+#define PCI_CAP_LIST_NEXT_MASK	0xff00	/* Next Capability Pointer mask */
+
 #define PCI_CAP_LIST_ID		0	/* Capability ID */
 #define PCI_CAP_ID_PM		0x01	/* Power Management */
 #define PCI_CAP_ID_AGP		0x02	/* Accelerated Graphics Port */
···
 #define PCI_ERR_UNC_MCBTLP	0x00800000	/* MC blocked TLP */
 #define PCI_ERR_UNC_ATOMEG	0x01000000	/* Atomic egress blocked */
 #define PCI_ERR_UNC_TLPPRE	0x02000000	/* TLP prefix blocked */
+#define PCI_ERR_UNC_POISON_BLK	0x04000000	/* Poisoned TLP Egress Blocked */
+#define PCI_ERR_UNC_DMWR_BLK	0x08000000	/* DMWr Request Egress Blocked */
+#define PCI_ERR_UNC_IDE_CHECK	0x10000000	/* IDE Check Failed */
+#define PCI_ERR_UNC_MISR_IDE	0x20000000	/* Misrouted IDE TLP */
+#define PCI_ERR_UNC_PCRC_CHECK	0x40000000	/* PCRC Check Failed */
+#define PCI_ERR_UNC_XLAT_BLK	0x80000000	/* TLP Translation Egress Blocked */
 #define PCI_ERR_UNCOR_MASK	0x08	/* Uncorrectable Error Mask */
 	/* Same bits as above */
 #define PCI_ERR_UNCOR_SEVER	0x0c	/* Uncorrectable Error Severity */
···
 #define PCI_ERR_CAP_ECRC_CHKC		0x00000080	/* ECRC Check Capable */
 #define PCI_ERR_CAP_ECRC_CHKE		0x00000100	/* ECRC Check Enable */
 #define PCI_ERR_CAP_PREFIX_LOG_PRESENT	0x00000800	/* TLP Prefix Log Present */
+#define PCI_ERR_CAP_COMP_TIME_LOG	0x00001000	/* Completion Timeout Prefix/Header Log Capable */
 #define PCI_ERR_CAP_TLP_LOG_FLIT	0x00040000	/* TLP was logged in Flit Mode */
 #define PCI_ERR_CAP_TLP_LOG_SIZE	0x00f80000	/* Logged TLP Size (only in Flit mode) */
 #define PCI_ERR_HEADER_LOG	0x1c	/* Header Log Register (16 bytes) */
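The new PCI_ERR_UNC_* bits track additions the current PCIe base spec made to
the AER Uncorrectable Error Status register. A hedged sketch that reads the
register and tests one of them (helper name hypothetical, not part of this
merge):

#include <linux/pci.h>

static bool demo_poison_egress_blocked(struct pci_dev *dev)
{
        int aer = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ERR);
        u32 status;

        if (!aer)
                return false;   /* no AER Capability */

        pci_read_config_dword(dev, aer + PCI_ERR_UNCOR_STATUS, &status);
        return status & PCI_ERR_UNC_POISON_BLK;
}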
+4
tools/testing/selftests/pci_endpoint/pci_endpoint_test.c
···
 
 	for (i = 1; i <= 32; i++) {
 		pci_ep_ioctl(PCITEST_MSI, i);
+		if (ret == -EINVAL)
+			SKIP(return, "MSI%d is disabled", i);
 		EXPECT_FALSE(ret) TH_LOG("Test failed for MSI%d", i);
 	}
 }
···
 
 	for (i = 1; i <= 2048; i++) {
 		pci_ep_ioctl(PCITEST_MSIX, i);
+		if (ret == -EINVAL)
+			SKIP(return, "MSI-X%d is disabled", i);
 		EXPECT_FALSE(ret) TH_LOG("Test failed for MSI-X%d", i);
 	}
 }