
Merge tag 'pci-v6.5-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:

- Export pcie_retrain_link() for use outside ASPM

- Add Data Link Layer Link Active Reporting as another way for
pcie_retrain_link() to determine the link is up

- Work around link training failures (especially on the ASMedia
ASM2824 switch) by training first at 2.5GT/s and then attempting
higher rates

Resource management:

- When we coalesce host bridge windows, remove invalidated resources
from the resource tree so future allocations work correctly

Hotplug:

- Cancel bringup sequence if card is not present, to keep from
blinking Power Indicator indefinitely

- Reassign bridge resources if necessary for ACPI hotplug

Driver binding:

- Convert platform_device .remove() callbacks to return void instead
of a mostly useless int

Power management:

- Reduce wait time for secondary bus to be ready to speed up resume

- Avoid putting EloPOS E2/S2/H2 (as well as Elo i2) PCIe Ports in
D3cold

- Call _REG when transitioning D-states so AML that uses the PCI
config space OpRegion works, which fixes some ASMedia GPIO
controllers after resume

Virtualization:

- Delay extra 250ms after FLR of Solidigm P44 Pro NVMe to avoid KVM
hang when guest is rebooted

- Add function 1 DMA alias quirk for Marvell 88SE9235

Error handling:

- Unexport pci_save_aer_state() since it's only used in drivers/pci/

- Drop recommendation for drivers to configure AER Capability, since
the PCI core does this for all devices

ASPM:

- Disable ASPM on MFD function removal to avoid use-after-free

- Tighten up pci_enable_link_state() and pci_disable_link_state()
interfaces so they don't enable/disable states the driver didn't
specify

- Avoid link retraining race that can happen if ASPM sets link
control parameters while the link is in the midst of training for
some other reason

Endpoint framework:

- Change "PCI Endpoint Virtual NTB driver" Kconfig prompt to be
different from "PCI Endpoint NTB driver"

- Automatically create a function specific attributes group for
endpoint drivers to avoid reference counting issues

- Fix many EPC test issues

- Return pci_epf_type_add_cfs() error if EPF has no driver

- Add kernel-doc for pci_epc_raise_irq() and pci_epc_map_msi_irq()
MSI vector parameters

- Pass EPF device ID to driver probe functions

- Return -EALREADY if EPC has already been started/stopped

- Add linkdown notifier support and use it in qcom-ep

- Add Bus Master Enable event support and use it in qcom-ep

- Add Qualcomm Modem Host Interface (MHI) endpoint driver

- Add Layerscape PME interrupt handling to manage link-up
notification

Cadence PCIe controller driver:

- Wait for link retrain to complete when working around the J721E
i2085 erratum with Gen2 mode

Faraday FTPC100 PCI controller driver:

- Release clock resources on error paths

Freescale i.MX6 PCIe controller driver:

- Save and restore Root Port MSI control to work around hardware defect

Intel VMD host bridge driver:

- Reset VMD config register between soft reboots

- Capture pci_reset_bus() return value instead of printing junk when
it fails

Qualcomm PCIe controller driver:

- Add SDX65 endpoint compatible string to DT binding

- Disable register write access after init for IP v2.3.3, v2.9.0

- Use DWC helpers for enabling/disabling writes to DBI registers

- Hide slot hotplug capability for IP v1.0.0, v1.9.0, v2.1.0, v2.3.2,
v2.3.3, v2.7.0, v2.9.0

- Reuse v2.3.2 post-init sequence for v2.4.0

Renesas R-Car PCIe controller driver:

- Remove unused static pcie_base and pcie_dev

Rockchip PCIe controller driver:

- Remove writes to unused registers

- Write endpoint Device ID using correct register

- Assert PCI Configuration Enable bit after probe so endpoint
responds instead of generating Request Retry Status messages

- Poll waiting for PHY PLLs to lock

- Update RK3399 example DT binding to be valid

- Use RK3399 PCIE_CLIENT_LEGACY_INT_CTRL to generate INTx instead of
manually generating PCIe message

- Use multiple windows to avoid address translation conflicts

- Use u32 (not u16) when accessing 32-bit registers

- Hide MSI-X Capability, since RK3399 can't generate MSI-X

- Set endpoint controller required alignment to 256

Synopsys DesignWare PCIe controller driver:

- Wait for link to come up only if we've initiated link training

Miscellaneous:

- Add pci_clear_master() stub for non-CONFIG_PCI"

* tag 'pci-v6.5-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (116 commits)
Documentation: PCI: correct spelling
PCI: vmd: Fix uninitialized variable usage in vmd_enable_domain()
PCI: xgene-msi: Convert to platform remove callback returning void
PCI: tegra: Convert to platform remove callback returning void
PCI: rockchip-host: Convert to platform remove callback returning void
PCI: mvebu: Convert to platform remove callback returning void
PCI: mt7621: Convert to platform remove callback returning void
PCI: mediatek-gen3: Convert to platform remove callback returning void
PCI: mediatek: Convert to platform remove callback returning void
PCI: iproc: Convert to platform remove callback returning void
PCI: hisi-error: Convert to platform remove callback returning void
PCI: dwc: Convert to platform remove callback returning void
PCI: j721e: Convert to platform remove callback returning void
PCI: brcmstb: Convert to platform remove callback returning void
PCI: altera-msi: Convert to platform remove callback returning void
PCI: altera: Convert to platform remove callback returning void
PCI: aardvark: Convert to platform remove callback returning void
PCI: rcar: Use correct product family name for Renesas R-Car
PCI: layerscape: Add the endpoint linkup notifier support
PCI: endpoint: pci-epf-vntb: Fix typo in comments
...

+1631 -848
+4 -7
Documentation/PCI/endpoint/pci-ntb-howto.rst
···
 # echo 0x104c > functions/pci_epf_ntb/func1/vendorid
 # echo 0xb00d > functions/pci_epf_ntb/func1/deviceid
 
-In order to configure NTB specific attributes, a new sub-directory to func1
-should be created::
-
-    # mkdir functions/pci_epf_ntb/func1/pci_epf_ntb.0/
-
-The NTB function driver will populate this directory with various attributes
-that can be configured by the user::
+The PCI endpoint framework also automatically creates a sub-directory in the
+function attribute directory. This sub-directory has the same name as the name
+of the function device and is populated with the following NTB specific
+attributes that can be configured by the user::
 
 # ls functions/pci_epf_ntb/func1/pci_epf_ntb.0/
 db_count  mw1  mw2  mw3  mw4  num_mws
+5 -8
Documentation/PCI/endpoint/pci-vntb-howto.rst
···
 # echo 0x1957 > functions/pci_epf_vntb/func1/vendorid
 # echo 0x0809 > functions/pci_epf_vntb/func1/deviceid
 
-In order to configure NTB specific attributes, a new sub-directory to func1
-should be created::
-
-    # mkdir functions/pci_epf_vntb/func1/pci_epf_vntb.0/
-
-The NTB function driver will populate this directory with various attributes
-that can be configured by the user::
+The PCI endpoint framework also automatically creates a sub-directory in the
+function attribute directory. This sub-directory has the same name as the name
+of the function device and is populated with the following NTB specific
+attributes that can be configured by the user::
 
 # ls functions/pci_epf_vntb/func1/pci_epf_vntb.0/
 db_count  mw1  mw2  mw3  mw4  num_mws
···
 # echo 1 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/num_mws
 # echo 0x100000 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/mw1
 
-A sample configuration for virtual NTB driver for virutal PCI bus::
+A sample configuration for virtual NTB driver for virtual PCI bus::
 
 # echo 0x1957 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/vntb_vid
 # echo 0x080A > functions/pci_epf_vntb/func1/pci_epf_vntb.0/vntb_pid
+1 -1
Documentation/PCI/msi-howto.rst
···
 List of device drivers MSI(-X) APIs
 ===================================
 
-The PCI/MSI subystem has a dedicated C file for its exported device driver
+The PCI/MSI subsystem has a dedicated C file for its exported device driver
 APIs — `drivers/pci/msi/api.c`. The following functions are exported:
 
 .. kernel-doc:: drivers/pci/msi/api.c
+1 -1
Documentation/PCI/pci-error-recovery.rst
···
 caused by over-heating, some by a poorly seated card. Many
 PCI error events are caused by software bugs, e.g. DMA's to
 wild addresses or bogus split transactions due to programming
-errors. See the discussion in powerpc/eeh-pci-error-recovery.txt
+errors. See the discussion in Documentation/powerpc/eeh-pci-error-recovery.rst
 for additional detail on real-life experience of the causes of
 software errors.
+65 -118
Documentation/PCI/pcieaer-howto.rst
···
 About this guide
 ----------------
 
-This guide describes the basics of the PCI Express Advanced Error
+This guide describes the basics of the PCI Express (PCIe) Advanced Error
 Reporting (AER) driver and provides information on how to use it, as
-well as how to enable the drivers of endpoint devices to conform with
-PCI Express AER driver.
+well as how to enable the drivers of Endpoint devices to conform with
+the PCIe AER driver.
 
 
-What is the PCI Express AER Driver?
------------------------------------
+What is the PCIe AER Driver?
+----------------------------
 
-PCI Express error signaling can occur on the PCI Express link itself
-or on behalf of transactions initiated on the link. PCI Express
+PCIe error signaling can occur on the PCIe link itself
+or on behalf of transactions initiated on the link. PCIe
 defines two error reporting paradigms: the baseline capability and
 the Advanced Error Reporting capability. The baseline capability is
-required of all PCI Express components providing a minimum defined
+required of all PCIe components providing a minimum defined
 set of error reporting requirements. Advanced Error Reporting
-capability is implemented with a PCI Express advanced error reporting
+capability is implemented with a PCIe Advanced Error Reporting
 extended capability structure providing more robust error reporting.
 
-The PCI Express AER driver provides the infrastructure to support PCI
-Express Advanced Error Reporting capability. The PCI Express AER
-driver provides three basic functions:
+The PCIe AER driver provides the infrastructure to support PCIe Advanced
+Error Reporting capability. The PCIe AER driver provides three basic
+functions:
 
 - Gathers the comprehensive error information if errors occurred.
 - Reports error to the users.
 - Performs error recovery actions.
 
-AER driver only attaches root ports which support PCI-Express AER
-capability.
+The AER driver only attaches to Root Ports and RCECs that support the PCIe
+AER capability.
 
 
 User Guide
 ==========
 
-Include the PCI Express AER Root Driver into the Linux Kernel
--------------------------------------------------------------
+Include the PCIe AER Root Driver into the Linux Kernel
+------------------------------------------------------
 
-The PCI Express AER Root driver is a Root Port service driver attached
-to the PCI Express Port Bus driver. If a user wants to use it, the driver
-has to be compiled. Option CONFIG_PCIEAER supports this capability. It
-depends on CONFIG_PCIEPORTBUS, so pls. set CONFIG_PCIEPORTBUS=y and
-CONFIG_PCIEAER = y.
+The PCIe AER driver is a Root Port service driver attached
+via the PCIe Port Bus driver. If a user wants to use it, the driver
+must be compiled. It is enabled with CONFIG_PCIEAER, which
+depends on CONFIG_PCIEPORTBUS.
 
-Load PCI Express AER Root Driver
---------------------------------
+Load PCIe AER Root Driver
+-------------------------
 
 Some systems have AER support in firmware. Enabling Linux AER support at
-the same time the firmware handles AER may result in unpredictable
+the same time the firmware handles AER would result in unpredictable
 behavior. Therefore, Linux does not handle AER events unless the firmware
-grants AER control to the OS via the ACPI _OSC method. See the PCI FW 3.0
+grants AER control to the OS via the ACPI _OSC method. See the PCI Firmware
 Specification for details regarding _OSC usage.
 
 AER error output
 ----------------
 
 When a PCIe AER error is captured, an error message will be output to
-console. If it's a correctable error, it is output as a warning.
+console. If it's a correctable error, it is output as an info message.
 Otherwise, it is printed as an error. So users could choose different
 log level to filter out correctable error messages.
···
 0000:50:00.0: [20] Unsupported Request (First)
 0000:50:00.0: TLP Header: 04000001 00200a03 05010000 00050100
 
-In the example, 'Requester ID' means the ID of the device who sends
-the error message to root port. Pls. refer to pci express specs for
-other fields.
+In the example, 'Requester ID' means the ID of the device that sent
+the error message to the Root Port. Please refer to PCIe specs for other
+fields.
 
 AER Statistics / Counters
 -------------------------
···
 Developer Guide
 ===============
 
-To enable AER aware support requires a software driver to configure
-the AER capability structure within its device and to provide callbacks.
+To enable error recovery, a software driver must provide callbacks.
 
-To support AER better, developers need understand how AER does work
-firstly.
+To support AER better, developers need to understand how AER works.
 
-PCI Express errors are classified into two types: correctable errors
-and uncorrectable errors. This classification is based on the impacts
+PCIe errors are classified into two types: correctable errors
+and uncorrectable errors. This classification is based on the impact
 of those errors, which may result in degraded performance or function
 failure.
 
 Correctable errors pose no impacts on the functionality of the
-interface. The PCI Express protocol can recover without any software
+interface. The PCIe protocol can recover without any software
 intervention or any loss of data. These errors are detected and
-corrected by hardware. Unlike correctable errors, uncorrectable
+corrected by hardware.
+
+Unlike correctable errors, uncorrectable
 errors impact functionality of the interface. Uncorrectable errors
-can cause a particular transaction or a particular PCI Express link
+can cause a particular transaction or a particular PCIe link
 to be unreliable. Depending on those error conditions, uncorrectable
 errors are further classified into non-fatal errors and fatal errors.
 Non-fatal errors cause the particular transaction to be unreliable,
-but the PCI Express link itself is fully functional. Fatal errors, on
+but the PCIe link itself is fully functional. Fatal errors, on
 the other hand, cause the link to be unreliable.
 
-When AER is enabled, a PCI Express device will automatically send an
-error message to the PCIe root port above it when the device captures
+When PCIe error reporting is enabled, a device will automatically send an
+error message to the Root Port above it when it captures
 an error. The Root Port, upon receiving an error reporting message,
-internally processes and logs the error message in its PCI Express
-capability structure. Error information being logged includes storing
+internally processes and logs the error message in its AER
+Capability structure. Error information being logged includes storing
 the error reporting agent's requestor ID into the Error Source
 Identification Registers and setting the error bits of the Root Error
-Status Register accordingly. If AER error reporting is enabled in Root
-Error Command Register, the Root Port generates an interrupt if an
+Status Register accordingly. If AER error reporting is enabled in the Root
+Error Command Register, the Root Port generates an interrupt when an
 error is detected.
 
-Note that the errors as described above are related to the PCI Express
+Note that the errors as described above are related to the PCIe
 hierarchy and links. These errors do not include any device specific
 errors because device specific errors will still get sent directly to
 the device driver.
 
-Configure the AER capability structure
---------------------------------------
-
-AER aware drivers of PCI Express component need change the device
-control registers to enable AER. They also could change AER registers,
-including mask and severity registers. Helper function
-pci_enable_pcie_error_reporting could be used to enable AER. See
-section 3.3.
-
 Provide callbacks
 -----------------
 
-callback reset_link to reset pci express link
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+callback reset_link to reset PCIe link
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-This callback is used to reset the pci express physical link when a
-fatal error happens. The root port aer service driver provides a
-default reset_link function, but different upstream ports might
-have different specifications to reset pci express link, so all
-upstream ports should provide their own reset_link functions.
+This callback is used to reset the PCIe physical link when a
+fatal error happens. The Root Port AER service driver provides a
+default reset_link function, but different Upstream Ports might
+have different specifications to reset the PCIe link, so
+Upstream Port drivers may provide their own reset_link functions.
 
 Section 3.2.2.2 provides more detailed info on when to call
 reset_link.
···
 PCI error-recovery callbacks
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-The PCI Express AER Root driver uses error callbacks to coordinate
+The PCIe AER Root driver uses error callbacks to coordinate
 with downstream device drivers associated with a hierarchy in question
 when performing error recovery actions.
 
 Data struct pci_driver has a pointer, err_handler, to point to
 pci_error_handlers who consists of a couple of callback function
-pointers. AER driver follows the rules defined in
-pci-error-recovery.txt except pci express specific parts (e.g.
-reset_link). Pls. refer to pci-error-recovery.txt for detailed
+pointers. The AER driver follows the rules defined in
+pci-error-recovery.rst except PCIe-specific parts (e.g.
+reset_link). Please refer to pci-error-recovery.rst for detailed
 definitions of the callbacks.
 
-Below sections specify when to call the error callback functions.
+The sections below specify when to call the error callback functions.
 
 Correctable errors
 ~~~~~~~~~~~~~~~~~~
 
 Correctable errors pose no impacts on the functionality of
-the interface. The PCI Express protocol can recover without any
+the interface. The PCIe protocol can recover without any
 software intervention or any loss of data. These errors do not
 require any recovery actions. The AER driver clears the device's
 correctable error status register accordingly and logs these errors.
···
 If an error message indicates a non-fatal error, performing link reset
 at upstream is not required. The AER driver calls error_detected(dev,
 pci_channel_io_normal) to all drivers associated within a hierarchy in
-question. for example::
+question. For example::
 
-EndPoint<==>DownstreamPort B<==>UpstreamPort A<==>RootPort
+Endpoint <==> Downstream Port B <==> Upstream Port A <==> Root Port
 
-If Upstream port A captures an AER error, the hierarchy consists of
-Downstream port B and EndPoint.
+If Upstream Port A captures an AER error, the hierarchy consists of
+Downstream Port B and Endpoint.
 
 A driver may return PCI_ERS_RESULT_CAN_RECOVER,
 PCI_ERS_RESULT_DISCONNECT, or PCI_ERS_RESULT_NEED_RESET, depending on
···
 and reset_link returns PCI_ERS_RESULT_RECOVERED, the error handling goes
 to mmio_enabled.
 
-helper functions
-----------------
-::
-
-	int pci_enable_pcie_error_reporting(struct pci_dev *dev);
-
-pci_enable_pcie_error_reporting enables the device to send error
-messages to root port when an error is detected. Note that devices
-don't enable the error reporting by default, so device drivers need
-call this function to enable it.
-
-::
-
-	int pci_disable_pcie_error_reporting(struct pci_dev *dev);
-
-pci_disable_pcie_error_reporting disables the device to send error
-messages to root port when an error is detected.
-
-::
-
-	int pci_aer_clear_nonfatal_status(struct pci_dev *dev);`
-
-pci_aer_clear_nonfatal_status clears non-fatal errors in the uncorrectable
-error status register.
-
 Frequent Asked Questions
 ------------------------
 
 Q:
-What happens if a PCI Express device driver does not provide an
+What happens if a PCIe device driver does not provide an
 error recovery handler (pci_driver->err_handler is equal to NULL)?
 
 A:
···
 A:
 Fatal error recovery will fail if the errors are reported by the
 upstream ports who are attached by the service driver.
-
-Q:
-How does this infrastructure deal with driver that is not PCI
-Express aware?
-
-A:
-This infrastructure calls the error callback functions of the
-driver when an error happens. But if the driver is not aware of
-PCI Express, the device might not report its own errors to root
-port.
-
-Q:
-What modifications will that driver need to make it compatible
-with the PCI Express AER Root driver?
-
-A:
-It could call the helper functions to enable AER in devices and
-cleanup uncorrectable status register. Pls. refer to section 3.3.
 
 
 Software error injection
···
 
 https://git.kernel.org/cgit/linux/kernel/git/gong.chen/aer-inject.git/
 
-More information about aer-inject can be found in the document comes
-with its source code.
+More information about aer-inject can be found in the document in
+its source code.
+2
Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml
···
 compatible:
   enum:
     - qcom,sdx55-pcie-ep
+    - qcom,sdx65-pcie-ep
     - qcom,sm8450-pcie-ep
 
 reg:
···
 contains:
   enum:
     - qcom,sdx55-pcie-ep
+    - qcom,sdx65-pcie-ep
 then:
   properties:
     clocks:
+3 -1
Documentation/devicetree/bindings/pci/rockchip,rk3399-pcie-ep.yaml
···
 
 pcie-ep@f8000000 {
     compatible = "rockchip,rk3399-pcie-ep";
-    reg = <0x0 0xfd000000 0x0 0x1000000>, <0x0 0x80000000 0x0 0x20000>;
+    reg = <0x0 0xfd000000 0x0 0x1000000>, <0x0 0xfa000000 0x0 0x2000000>;
     reg-names = "apb-base", "mem-base";
     clocks = <&cru ACLK_PCIE>, <&cru ACLK_PERF_PCIE>,
              <&cru PCLK_PCIE>, <&cru SCLK_PCIE_PM>;
···
     phys = <&pcie_phy 0>, <&pcie_phy 1>, <&pcie_phy 2>, <&pcie_phy 3>;
     phy-names = "pcie-phy-0", "pcie-phy-1", "pcie-phy-2", "pcie-phy-3";
     rockchip,max-outbound-regions = <16>;
+    pinctrl-names = "default";
+    pinctrl-0 = <&pcie_clkreqnb_cpm>;
 };
 };
 ...
+1
MAINTAINERS
···
 F: Documentation/ABI/stable/sysfs-bus-mhi
 F: Documentation/mhi/
 F: drivers/bus/mhi/
+F: drivers/pci/endpoint/functions/pci-epf-mhi.c
 F: include/linux/mhi.h
 
 MICROBLAZE ARCHITECTURE
+2 -3
arch/powerpc/kernel/eeh_pe.c
···
 	eeh_ops->write_config(edev, cap + PCI_EXP_LNKCTL, 2, val);
 
 	/* Check link */
-	eeh_ops->read_config(edev, cap + PCI_EXP_LNKCAP, 4, &val);
-	if (!(val & PCI_EXP_LNKCAP_DLLLARC)) {
-		eeh_edev_dbg(edev, "No link reporting capability (0x%08x) \n", val);
+	if (!edev->pdev->link_active_reporting) {
+		eeh_edev_dbg(edev, "No link reporting capability\n");
 		msleep(1000);
 		return;
 	}
+11 -14
drivers/misc/pci_endpoint_test.c
···
 	if (reg & STATUS_IRQ_RAISED) {
 		test->last_irq = irq;
 		complete(&test->irq_raised);
-		reg &= ~STATUS_IRQ_RAISED;
 	}
-	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_STATUS,
-				 reg);
 
 	return IRQ_HANDLED;
 }
···
 	struct pci_dev *pdev = test->pdev;
 
 	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE,
-				 msix == false ? IRQ_TYPE_MSI :
-				 IRQ_TYPE_MSIX);
+				 msix ? IRQ_TYPE_MSIX : IRQ_TYPE_MSI);
 	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, msi_num);
 	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND,
-				 msix == false ? COMMAND_RAISE_MSI_IRQ :
-				 COMMAND_RAISE_MSIX_IRQ);
+				 msix ? COMMAND_RAISE_MSIX_IRQ :
+				 COMMAND_RAISE_MSI_IRQ);
 	val = wait_for_completion_timeout(&test->irq_raised,
 					  msecs_to_jiffies(1000));
 	if (!val)
 		return false;
 
-	if (pci_irq_vector(pdev, msi_num - 1) == test->last_irq)
-		return true;
-
-	return false;
+	return pci_irq_vector(pdev, msi_num - 1) == test->last_irq;
 }
 
 static int pci_endpoint_test_validate_xfer_params(struct device *dev,
···
 	struct pci_dev *pdev = test->pdev;
 
 	mutex_lock(&test->mutex);
+
+	reinit_completion(&test->irq_raised);
+	test->last_irq = -ENODATA;
+
 	switch (cmd) {
 	case PCITEST_BAR:
 		bar = arg;
···
 	if (id < 0)
 		return;
 
+	pci_endpoint_test_release_irq(test);
+	pci_endpoint_test_free_irq_vectors(test);
+
 	misc_deregister(&test->miscdev);
 	kfree(misc_device->name);
 	kfree(test->name);
···
 		if (test->bar[bar])
 			pci_iounmap(pdev, test->bar[bar]);
 	}
-
-	pci_endpoint_test_release_irq(test);
-	pci_endpoint_test_free_irq_vectors(test);
 
 	pci_release_regions(pdev);
 	pci_disable_device(pdev);
+2 -6
drivers/net/ethernet/mellanox/mlx5/core/fw_reset.c
···
 	struct pci_dev *sdev;
 	u16 reg16, dev_id;
 	int cap, err;
-	u32 reg32;
 
 	err = pci_read_config_word(dev->pdev, PCI_DEVICE_ID, &dev_id);
 	if (err)
···
 		return err;
 
 	/* Check link */
-	err = pci_read_config_dword(bridge, cap + PCI_EXP_LNKCAP, &reg32);
-	if (err)
-		return err;
-	if (!(reg32 & PCI_EXP_LNKCAP_DLLLARC)) {
-		mlx5_core_warn(dev, "No PCI link reporting capability (0x%08x)\n", reg32);
+	if (!bridge->link_active_reporting) {
+		mlx5_core_warn(dev, "No PCI link reporting capability\n");
 		msleep(1000);
 		goto restore;
 	}
+2 -4
drivers/pci/controller/cadence/pci-j721e.c
···
 	return ret;
 }
 
-static int j721e_pcie_remove(struct platform_device *pdev)
+static void j721e_pcie_remove(struct platform_device *pdev)
 {
 	struct j721e_pcie *pcie = platform_get_drvdata(pdev);
 	struct cdns_pcie *cdns_pcie = pcie->cdns_pcie;
···
 	cdns_pcie_disable_phy(cdns_pcie);
 	pm_runtime_put(dev);
 	pm_runtime_disable(dev);
-
-	return 0;
 }
 
 static struct platform_driver j721e_pcie_driver = {
 	.probe = j721e_pcie_probe,
-	.remove = j721e_pcie_remove,
+	.remove_new = j721e_pcie_remove,
 	.driver = {
 		.name = "j721e-pcie",
 		.of_match_table = of_j721e_pcie_match,
+27
drivers/pci/controller/cadence/pcie-cadence-host.c
···
 
 #include "pcie-cadence.h"
 
+#define LINK_RETRAIN_TIMEOUT HZ
+
 static u64 bar_max_size[] = {
 	[RP_BAR0] = _ULL(128 * SZ_2G),
 	[RP_BAR1] = SZ_2G,
···
 	.write = pci_generic_config_write,
 };
 
+static int cdns_pcie_host_training_complete(struct cdns_pcie *pcie)
+{
+	u32 pcie_cap_off = CDNS_PCIE_RP_CAP_OFFSET;
+	unsigned long end_jiffies;
+	u16 lnk_stat;
+
+	/* Wait for link training to complete. Exit after timeout. */
+	end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
+	do {
+		lnk_stat = cdns_pcie_rp_readw(pcie, pcie_cap_off + PCI_EXP_LNKSTA);
+		if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
+			break;
+		usleep_range(0, 1000);
+	} while (time_before(jiffies, end_jiffies));
+
+	if (!(lnk_stat & PCI_EXP_LNKSTA_LT))
+		return 0;
+
+	return -ETIMEDOUT;
+}
+
 static int cdns_pcie_host_wait_for_link(struct cdns_pcie *pcie)
 {
 	struct device *dev = pcie->dev;
···
 		lnk_ctl |= PCI_EXP_LNKCTL_RL;
 		cdns_pcie_rp_writew(pcie, pcie_cap_off + PCI_EXP_LNKCTL,
 				    lnk_ctl);
+
+		ret = cdns_pcie_host_training_complete(pcie);
+		if (ret)
+			return ret;
 
 		ret = cdns_pcie_host_wait_for_link(pcie);
 	}
+23
drivers/pci/controller/dwc/pci-imx6.c
···
 	struct clk *pcie;
 	struct clk *pcie_aux;
 	struct regmap *iomuxc_gpr;
+	u16 msi_ctrl;
 	u32 controller_id;
 	struct reset_control *pciephy_reset;
 	struct reset_control *apps_reset;
···
 	usleep_range(1000, 10000);
 }
 
+static void imx6_pcie_msi_save_restore(struct imx6_pcie *imx6_pcie, bool save)
+{
+	u8 offset;
+	u16 val;
+	struct dw_pcie *pci = imx6_pcie->pci;
+
+	if (pci_msi_enabled()) {
+		offset = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI);
+		if (save) {
+			val = dw_pcie_readw_dbi(pci, offset + PCI_MSI_FLAGS);
+			imx6_pcie->msi_ctrl = val;
+		} else {
+			dw_pcie_dbi_ro_wr_en(pci);
+			val = imx6_pcie->msi_ctrl;
+			dw_pcie_writew_dbi(pci, offset + PCI_MSI_FLAGS, val);
+			dw_pcie_dbi_ro_wr_dis(pci);
+		}
+	}
+}
+
 static int imx6_pcie_suspend_noirq(struct device *dev)
 {
 	struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
···
 	if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_SUPPORTS_SUSPEND))
 		return 0;
 
+	imx6_pcie_msi_save_restore(imx6_pcie, true);
 	imx6_pcie_pm_turnoff(imx6_pcie);
 	imx6_pcie_stop_link(imx6_pcie->pci);
 	imx6_pcie_host_exit(pp);
···
 	ret = imx6_pcie_host_init(pp);
 	if (ret)
 		return ret;
+	imx6_pcie_msi_save_restore(imx6_pcie, false);
 	dw_pcie_setup_rc(pp);
 
 	if (imx6_pcie->link_is_up)
+99 -1
drivers/pci/controller/dwc/pci-layerscape-ep.c
···
 
 #include "pcie-designware.h"
 
+#define PEX_PF0_CONFIG			0xC0014
+#define PEX_PF0_CFG_READY		BIT(0)
+
+/* PEX PFa PCIE PME and message interrupt registers*/
+#define PEX_PF0_PME_MES_DR		0xC0020
+#define PEX_PF0_PME_MES_DR_LUD		BIT(7)
+#define PEX_PF0_PME_MES_DR_LDD		BIT(9)
+#define PEX_PF0_PME_MES_DR_HRD		BIT(10)
+
+#define PEX_PF0_PME_MES_IER		0xC0028
+#define PEX_PF0_PME_MES_IER_LUDIE	BIT(7)
+#define PEX_PF0_PME_MES_IER_LDDIE	BIT(9)
+#define PEX_PF0_PME_MES_IER_HRDIE	BIT(10)
+
 #define to_ls_pcie_ep(x) dev_get_drvdata((x)->dev)
 
 struct ls_pcie_ep_drvdata {
···
 	struct dw_pcie *pci;
 	struct pci_epc_features *ls_epc;
 	const struct ls_pcie_ep_drvdata *drvdata;
+	int irq;
+	bool big_endian;
 };
+
+static u32 ls_lut_readl(struct ls_pcie_ep *pcie, u32 offset)
+{
+	struct dw_pcie *pci = pcie->pci;
+
+	if (pcie->big_endian)
+		return ioread32be(pci->dbi_base + offset);
+	else
+		return ioread32(pci->dbi_base + offset);
+}
+
+static void ls_lut_writel(struct ls_pcie_ep *pcie, u32 offset, u32 value)
+{
+	struct dw_pcie *pci = pcie->pci;
+
+	if (pcie->big_endian)
+		iowrite32be(value, pci->dbi_base + offset);
+	else
+		iowrite32(value, pci->dbi_base + offset);
+}
+
+static irqreturn_t ls_pcie_ep_event_handler(int irq, void *dev_id)
+{
+	struct ls_pcie_ep *pcie = dev_id;
+	struct dw_pcie *pci = pcie->pci;
+	u32 val, cfg;
+
+	val = ls_lut_readl(pcie, PEX_PF0_PME_MES_DR);
+	ls_lut_writel(pcie, PEX_PF0_PME_MES_DR, val);
+
+	if (!val)
+		return IRQ_NONE;
+
+	if (val & PEX_PF0_PME_MES_DR_LUD) {
+		cfg = ls_lut_readl(pcie, PEX_PF0_CONFIG);
+		cfg |= PEX_PF0_CFG_READY;
+		ls_lut_writel(pcie, PEX_PF0_CONFIG, cfg);
+		dw_pcie_ep_linkup(&pci->ep);
+
+		dev_dbg(pci->dev, "Link up\n");
+	} else if (val & PEX_PF0_PME_MES_DR_LDD) {
+		dev_dbg(pci->dev, "Link down\n");
+	} else if (val & PEX_PF0_PME_MES_DR_HRD) {
+		dev_dbg(pci->dev, "Hot reset\n");
+	}
+
+	return IRQ_HANDLED;
+}
+
+static int ls_pcie_ep_interrupt_init(struct ls_pcie_ep *pcie,
+				     struct platform_device *pdev)
+{
+	u32 val;
+	int ret;
+
+	pcie->irq = platform_get_irq_byname(pdev, "pme");
+	if (pcie->irq < 0)
+		return pcie->irq;
+
+	ret = devm_request_irq(&pdev->dev, pcie->irq, ls_pcie_ep_event_handler,
+			       IRQF_SHARED, pdev->name, pcie);
+	if (ret) {
+		dev_err(&pdev->dev, "Can't register PCIe IRQ\n");
+		return ret;
+	}
+
+	/* Enable interrupts */
+	val = ls_lut_readl(pcie, PEX_PF0_PME_MES_IER);
+	val |= PEX_PF0_PME_MES_IER_LDDIE | PEX_PF0_PME_MES_IER_HRDIE |
+	       PEX_PF0_PME_MES_IER_LUDIE;
+	ls_lut_writel(pcie, PEX_PF0_PME_MES_IER, val);
+
+	return 0;
+}
 
 static const struct pci_epc_features*
 ls_pcie_ep_get_features(struct dw_pcie_ep *ep)
···
 	struct ls_pcie_ep *pcie;
 	struct pci_epc_features *ls_epc;
 	struct resource *dbi_base;
+	int ret;
 
 	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
 	if (!pcie)
···
 	pci->ops = pcie->drvdata->dw_pcie_ops;
 
 	ls_epc->bar_fixed_64bit = (1 << BAR_2) | (1 << BAR_4);
+	ls_epc->linkup_notifier = true;
 
 	pcie->pci = pci;
 	pcie->ls_epc = ls_epc;
···
 
 	pci->ep.ops = &ls_pcie_ep_ops;
 
+	pcie->big_endian = of_property_read_bool(dev->of_node, "big-endian");
+
 	platform_set_drvdata(pdev, pcie);
 
-	return dw_pcie_ep_init(&pci->ep);
+	ret = dw_pcie_ep_init(&pci->ep);
+	if (ret)
+		return ret;
+
+	return ls_pcie_ep_interrupt_init(pcie, pdev);
 }
 
 static struct platform_driver ls_pcie_ep_driver = {
+2 -4
drivers/pci/controller/dwc/pcie-bt1.c
··· 617 617 return bt1_pcie_add_port(btpci); 618 618 } 619 619 620 - static int bt1_pcie_remove(struct platform_device *pdev) 620 + static void bt1_pcie_remove(struct platform_device *pdev) 621 621 { 622 622 struct bt1_pcie *btpci = platform_get_drvdata(pdev); 623 623 624 624 bt1_pcie_del_port(btpci); 625 - 626 - return 0; 627 625 } 628 626 629 627 static const struct of_device_id bt1_pcie_of_match[] = { ··· 632 634 633 635 static struct platform_driver bt1_pcie_driver = { 634 636 .probe = bt1_pcie_probe, 635 - .remove = bt1_pcie_remove, 637 + .remove_new = bt1_pcie_remove, 636 638 .driver = { 637 639 .name = "bt1-pcie", 638 640 .of_match_table = bt1_pcie_of_match,
+9 -4
drivers/pci/controller/dwc/pcie-designware-host.c
··· 485 485 if (ret) 486 486 goto err_remove_edma; 487 487 488 - if (!dw_pcie_link_up(pci)) { 488 + if (dw_pcie_link_up(pci)) { 489 + dw_pcie_print_link_status(pci); 490 + } else { 489 491 ret = dw_pcie_start_link(pci); 490 492 if (ret) 491 493 goto err_remove_edma; 492 - } 493 494 494 - /* Ignore errors, the link may come up later */ 495 - dw_pcie_wait_for_link(pci); 495 + if (pci->ops && pci->ops->start_link) { 496 + ret = dw_pcie_wait_for_link(pci); 497 + if (ret) 498 + goto err_stop_link; 499 + } 500 + } 496 501 497 502 bridge->sysdata = pp; 498 503
+13 -7
drivers/pci/controller/dwc/pcie-designware.c
··· 644 644 dw_pcie_writel_atu(pci, dir, index, PCIE_ATU_REGION_CTRL2, 0); 645 645 } 646 646 647 - int dw_pcie_wait_for_link(struct dw_pcie *pci) 647 + void dw_pcie_print_link_status(struct dw_pcie *pci) 648 648 { 649 649 u32 offset, val; 650 + 651 + offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 652 + val = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA); 653 + 654 + dev_info(pci->dev, "PCIe Gen.%u x%u link up\n", 655 + FIELD_GET(PCI_EXP_LNKSTA_CLS, val), 656 + FIELD_GET(PCI_EXP_LNKSTA_NLW, val)); 657 + } 658 + 659 + int dw_pcie_wait_for_link(struct dw_pcie *pci) 660 + { 650 661 int retries; 651 662 652 663 /* Check if the link is up or not */ ··· 673 662 return -ETIMEDOUT; 674 663 } 675 664 676 - offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 677 - val = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA); 678 - 679 - dev_info(pci->dev, "PCIe Gen.%u x%u link up\n", 680 - FIELD_GET(PCI_EXP_LNKSTA_CLS, val), 681 - FIELD_GET(PCI_EXP_LNKSTA_NLW, val)); 665 + dw_pcie_print_link_status(pci); 682 666 683 667 return 0; 684 668 }
+1
drivers/pci/controller/dwc/pcie-designware.h
··· 429 429 void dw_pcie_iatu_detect(struct dw_pcie *pci); 430 430 int dw_pcie_edma_detect(struct dw_pcie *pci); 431 431 void dw_pcie_edma_remove(struct dw_pcie *pci); 432 + void dw_pcie_print_link_status(struct dw_pcie *pci); 432 433 433 434 static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val) 434 435 {
+2 -4
drivers/pci/controller/dwc/pcie-histb.c
··· 421 421 return 0; 422 422 } 423 423 424 - static int histb_pcie_remove(struct platform_device *pdev) 424 + static void histb_pcie_remove(struct platform_device *pdev) 425 425 { 426 426 struct histb_pcie *hipcie = platform_get_drvdata(pdev); 427 427 ··· 429 429 430 430 if (hipcie->phy) 431 431 phy_exit(hipcie->phy); 432 - 433 - return 0; 434 432 } 435 433 436 434 static const struct of_device_id histb_pcie_of_match[] = { ··· 439 441 440 442 static struct platform_driver histb_pcie_platform_driver = { 441 443 .probe = histb_pcie_probe, 442 - .remove = histb_pcie_remove, 444 + .remove_new = histb_pcie_remove, 443 445 .driver = { 444 446 .name = "histb-pcie", 445 447 .of_match_table = histb_pcie_of_match,
+2 -4
drivers/pci/controller/dwc/pcie-intel-gw.c
··· 340 340 phy_exit(pcie->phy); 341 341 } 342 342 343 - static int intel_pcie_remove(struct platform_device *pdev) 343 + static void intel_pcie_remove(struct platform_device *pdev) 344 344 { 345 345 struct intel_pcie *pcie = platform_get_drvdata(pdev); 346 346 struct dw_pcie_rp *pp = &pcie->pci.pp; 347 347 348 348 dw_pcie_host_deinit(pp); 349 349 __intel_pcie_remove(pcie); 350 - 351 - return 0; 352 350 } 353 351 354 352 static int intel_pcie_suspend_noirq(struct device *dev) ··· 441 443 442 444 static struct platform_driver intel_pcie_driver = { 443 445 .probe = intel_pcie_probe, 444 - .remove = intel_pcie_remove, 446 + .remove_new = intel_pcie_remove, 445 447 .driver = { 446 448 .name = "intel-gw-pcie", 447 449 .of_match_table = of_intel_pcie_match,
+5 -5
drivers/pci/controller/dwc/pcie-qcom-ep.c
··· 569 569 if (FIELD_GET(PARF_INT_ALL_LINK_DOWN, status)) { 570 570 dev_dbg(dev, "Received Linkdown event\n"); 571 571 pcie_ep->link_status = QCOM_PCIE_EP_LINK_DOWN; 572 + pci_epc_linkdown(pci->ep.epc); 572 573 } else if (FIELD_GET(PARF_INT_ALL_BME, status)) { 573 574 dev_dbg(dev, "Received BME event. Link is enabled!\n"); 574 575 pcie_ep->link_status = QCOM_PCIE_EP_LINK_ENABLED; 576 + pci_epc_bme_notify(pci->ep.epc); 575 577 } else if (FIELD_GET(PARF_INT_ALL_PM_TURNOFF, status)) { 576 578 dev_dbg(dev, "Received PM Turn-off event! Entering L23\n"); 577 579 val = readl_relaxed(pcie_ep->parf + PARF_PM_CTRL); ··· 786 784 return ret; 787 785 } 788 786 789 - static int qcom_pcie_ep_remove(struct platform_device *pdev) 787 + static void qcom_pcie_ep_remove(struct platform_device *pdev) 790 788 { 791 789 struct qcom_pcie_ep *pcie_ep = platform_get_drvdata(pdev); 792 790 ··· 796 794 debugfs_remove_recursive(pcie_ep->debugfs); 797 795 798 796 if (pcie_ep->link_status == QCOM_PCIE_EP_LINK_DISABLED) 799 - return 0; 797 + return; 800 798 801 799 qcom_pcie_disable_resources(pcie_ep); 802 - 803 - return 0; 804 800 } 805 801 806 802 static const struct of_device_id qcom_pcie_ep_match[] = { ··· 810 810 811 811 static struct platform_driver qcom_pcie_ep_driver = { 812 812 .probe = qcom_pcie_ep_probe, 813 - .remove = qcom_pcie_ep_remove, 813 + .remove_new = qcom_pcie_ep_remove, 814 814 .driver = { 815 815 .name = "qcom-pcie-ep", 816 816 .of_match_table = qcom_pcie_ep_match,
+38 -35
drivers/pci/controller/dwc/pcie-qcom.c
··· 61 61 /* DBI registers */ 62 62 #define AXI_MSTR_RESP_COMP_CTRL0 0x818 63 63 #define AXI_MSTR_RESP_COMP_CTRL1 0x81c 64 - #define MISC_CONTROL_1_REG 0x8bc 65 64 66 65 /* MHI registers */ 67 66 #define PARF_DEBUG_CNT_PM_LINKST_IN_L2 0xc04 ··· 131 132 /* AXI_MSTR_RESP_COMP_CTRL1 register fields */ 132 133 #define CFG_BRIDGE_SB_INIT BIT(0) 133 134 134 - /* MISC_CONTROL_1_REG register fields */ 135 - #define DBI_RO_WR_EN 1 136 - 137 135 /* PCI_EXP_SLTCAP register fields */ 138 136 #define PCIE_CAP_SLOT_POWER_LIMIT_VAL FIELD_PREP(PCI_EXP_SLTCAP_SPLV, 250) 139 137 #define PCIE_CAP_SLOT_POWER_LIMIT_SCALE FIELD_PREP(PCI_EXP_SLTCAP_SPLS, 1) ··· 140 144 PCI_EXP_SLTCAP_AIP | \ 141 145 PCI_EXP_SLTCAP_PIP | \ 142 146 PCI_EXP_SLTCAP_HPS | \ 143 - PCI_EXP_SLTCAP_HPC | \ 144 147 PCI_EXP_SLTCAP_EIP | \ 145 148 PCIE_CAP_SLOT_POWER_LIMIT_VAL | \ 146 149 PCIE_CAP_SLOT_POWER_LIMIT_SCALE) ··· 267 272 pcie->cfg->ops->ltssm_enable(pcie); 268 273 269 274 return 0; 275 + } 276 + 277 + static void qcom_pcie_clear_hpc(struct dw_pcie *pci) 278 + { 279 + u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 280 + u32 val; 281 + 282 + dw_pcie_dbi_ro_wr_en(pci); 283 + 284 + val = readl(pci->dbi_base + offset + PCI_EXP_SLTCAP); 285 + val &= ~PCI_EXP_SLTCAP_HPC; 286 + writel(val, pci->dbi_base + offset + PCI_EXP_SLTCAP); 287 + 288 + dw_pcie_dbi_ro_wr_dis(pci); 270 289 } 271 290 272 291 static void qcom_pcie_2_1_0_ltssm_enable(struct qcom_pcie *pcie) ··· 438 429 writel(CFG_BRIDGE_SB_INIT, 439 430 pci->dbi_base + AXI_MSTR_RESP_COMP_CTRL1); 440 431 432 + qcom_pcie_clear_hpc(pcie->pci); 433 + 441 434 return 0; 442 435 } 443 436 ··· 522 511 val |= EN; 523 512 writel(val, pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT); 524 513 } 514 + 515 + qcom_pcie_clear_hpc(pcie->pci); 525 516 526 517 return 0; 527 518 } ··· 620 607 val |= EN; 621 608 writel(val, pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT_V2); 622 609 610 + qcom_pcie_clear_hpc(pcie->pci); 611 + 623 612 return 0; 624 613 } 625 614 ··· 703 688 
reset_control_bulk_assert(res->num_resets, res->resets); 704 689 return ret; 705 690 } 706 - 707 - return 0; 708 - } 709 - 710 - static int qcom_pcie_post_init_2_4_0(struct qcom_pcie *pcie) 711 - { 712 - u32 val; 713 - 714 - /* enable PCIe clocks and resets */ 715 - val = readl(pcie->parf + PARF_PHY_CTRL); 716 - val &= ~PHY_TEST_PWR_DOWN; 717 - writel(val, pcie->parf + PARF_PHY_CTRL); 718 - 719 - /* change DBI base address */ 720 - writel(0, pcie->parf + PARF_DBI_BASE_ADDR); 721 - 722 - /* MAC PHY_POWERDOWN MUX DISABLE */ 723 - val = readl(pcie->parf + PARF_SYS_CTRL); 724 - val &= ~MAC_PHY_POWERDOWN_IN_P2_D_MUX_EN; 725 - writel(val, pcie->parf + PARF_SYS_CTRL); 726 - 727 - val = readl(pcie->parf + PARF_MHI_CLOCK_RESET_CTRL); 728 - val |= BYPASS; 729 - writel(val, pcie->parf + PARF_MHI_CLOCK_RESET_CTRL); 730 - 731 - val = readl(pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT_V2); 732 - val |= EN; 733 - writel(val, pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT_V2); 734 691 735 692 return 0; 736 693 } ··· 813 826 writel(0, pcie->parf + PARF_Q2A_FLUSH); 814 827 815 828 writel(PCI_COMMAND_MASTER, pci->dbi_base + PCI_COMMAND); 816 - writel(DBI_RO_WR_EN, pci->dbi_base + MISC_CONTROL_1_REG); 829 + 830 + dw_pcie_dbi_ro_wr_en(pci); 831 + 817 832 writel(PCIE_CAP_SLOT_VAL, pci->dbi_base + offset + PCI_EXP_SLTCAP); 818 833 819 834 val = readl(pci->dbi_base + offset + PCI_EXP_LNKCAP); ··· 824 835 825 836 writel(PCI_EXP_DEVCTL2_COMP_TMOUT_DIS, pci->dbi_base + offset + 826 837 PCI_EXP_DEVCTL2); 838 + 839 + dw_pcie_dbi_ro_wr_dis(pci); 827 840 828 841 return 0; 829 842 } ··· 955 964 regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); 956 965 957 966 return ret; 967 + } 968 + 969 + static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie) 970 + { 971 + qcom_pcie_clear_hpc(pcie->pci); 972 + 973 + return 0; 958 974 } 959 975 960 976 static void qcom_pcie_deinit_2_7_0(struct qcom_pcie *pcie) ··· 1134 1136 writel(0, pcie->parf + PARF_Q2A_FLUSH); 1135 1137 1136 1138 
dw_pcie_dbi_ro_wr_en(pci); 1139 + 1137 1140 writel(PCIE_CAP_SLOT_VAL, pci->dbi_base + offset + PCI_EXP_SLTCAP); 1138 1141 1139 1142 val = readl(pci->dbi_base + offset + PCI_EXP_LNKCAP); ··· 1143 1144 1144 1145 writel(PCI_EXP_DEVCTL2_COMP_TMOUT_DIS, pci->dbi_base + offset + 1145 1146 PCI_EXP_DEVCTL2); 1147 + 1148 + dw_pcie_dbi_ro_wr_dis(pci); 1146 1149 1147 1150 for (i = 0; i < 256; i++) 1148 1151 writel(0, pcie->parf + PARF_BDF_TO_SID_TABLE_N + (4 * i)); ··· 1252 1251 static const struct qcom_pcie_ops ops_2_4_0 = { 1253 1252 .get_resources = qcom_pcie_get_resources_2_4_0, 1254 1253 .init = qcom_pcie_init_2_4_0, 1255 - .post_init = qcom_pcie_post_init_2_4_0, 1254 + .post_init = qcom_pcie_post_init_2_3_2, 1256 1255 .deinit = qcom_pcie_deinit_2_4_0, 1257 1256 .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, 1258 1257 }; ··· 1270 1269 static const struct qcom_pcie_ops ops_2_7_0 = { 1271 1270 .get_resources = qcom_pcie_get_resources_2_7_0, 1272 1271 .init = qcom_pcie_init_2_7_0, 1272 + .post_init = qcom_pcie_post_init_2_7_0, 1273 1273 .deinit = qcom_pcie_deinit_2_7_0, 1274 1274 .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, 1275 1275 }; ··· 1279 1277 static const struct qcom_pcie_ops ops_1_9_0 = { 1280 1278 .get_resources = qcom_pcie_get_resources_2_7_0, 1281 1279 .init = qcom_pcie_init_2_7_0, 1280 + .post_init = qcom_pcie_post_init_2_7_0, 1282 1281 .deinit = qcom_pcie_deinit_2_7_0, 1283 1282 .ltssm_enable = qcom_pcie_2_3_2_ltssm_enable, 1284 1283 .config_sid = qcom_pcie_config_sid_1_9_0,
+3 -5
drivers/pci/controller/dwc/pcie-tegra194.c
··· 2296 2296 return ret; 2297 2297 } 2298 2298 2299 - static int tegra_pcie_dw_remove(struct platform_device *pdev) 2299 + static void tegra_pcie_dw_remove(struct platform_device *pdev) 2300 2300 { 2301 2301 struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev); 2302 2302 2303 2303 if (pcie->of_data->mode == DW_PCIE_RC_TYPE) { 2304 2304 if (!pcie->link_state) 2305 - return 0; 2305 + return; 2306 2306 2307 2307 debugfs_remove_recursive(pcie->debugfs); 2308 2308 tegra_pcie_deinit_controller(pcie); ··· 2316 2316 tegra_bpmp_put(pcie->bpmp); 2317 2317 if (pcie->pex_refclk_sel_gpiod) 2318 2318 gpiod_set_value(pcie->pex_refclk_sel_gpiod, 0); 2319 - 2320 - return 0; 2321 2319 } 2322 2320 2323 2321 static int tegra_pcie_dw_suspend_late(struct device *dev) ··· 2509 2511 2510 2512 static struct platform_driver tegra_pcie_dw_driver = { 2511 2513 .probe = tegra_pcie_dw_probe, 2512 - .remove = tegra_pcie_dw_remove, 2514 + .remove_new = tegra_pcie_dw_remove, 2513 2515 .shutdown = tegra_pcie_dw_shutdown, 2514 2516 .driver = { 2515 2517 .name = "tegra194-pcie",
+2 -4
drivers/pci/controller/pci-aardvark.c
··· 1927 1927 return 0; 1928 1928 } 1929 1929 1930 - static int advk_pcie_remove(struct platform_device *pdev) 1930 + static void advk_pcie_remove(struct platform_device *pdev) 1931 1931 { 1932 1932 struct advk_pcie *pcie = platform_get_drvdata(pdev); 1933 1933 struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); ··· 1989 1989 1990 1990 /* Disable phy */ 1991 1991 advk_pcie_disable_phy(pcie); 1992 - 1993 - return 0; 1994 1992 } 1995 1993 1996 1994 static const struct of_device_id advk_pcie_of_match_table[] = { ··· 2003 2005 .of_match_table = advk_pcie_of_match_table, 2004 2006 }, 2005 2007 .probe = advk_pcie_probe, 2006 - .remove = advk_pcie_remove, 2008 + .remove_new = advk_pcie_remove, 2007 2009 }; 2008 2010 module_platform_driver(advk_pcie_driver); 2009 2011
+2 -12
drivers/pci/controller/pci-ftpci100.c
··· 429 429 p->dev = dev; 430 430 431 431 /* Retrieve and enable optional clocks */ 432 - clk = devm_clk_get(dev, "PCLK"); 432 + clk = devm_clk_get_enabled(dev, "PCLK"); 433 433 if (IS_ERR(clk)) 434 434 return PTR_ERR(clk); 435 - ret = clk_prepare_enable(clk); 436 - if (ret) { 437 - dev_err(dev, "could not prepare PCLK\n"); 438 - return ret; 439 - } 440 - p->bus_clk = devm_clk_get(dev, "PCICLK"); 435 + p->bus_clk = devm_clk_get_enabled(dev, "PCICLK"); 441 436 if (IS_ERR(p->bus_clk)) 442 437 return PTR_ERR(p->bus_clk); 443 - ret = clk_prepare_enable(p->bus_clk); 444 - if (ret) { 445 - dev_err(dev, "could not prepare PCICLK\n"); 446 - return ret; 447 - } 448 438 449 439 p->base = devm_platform_ioremap_resource(pdev, 0); 450 440 if (IS_ERR(p->base))
+2 -4
drivers/pci/controller/pci-mvebu.c
··· 1649 1649 return pci_host_probe(bridge); 1650 1650 } 1651 1651 1652 - static int mvebu_pcie_remove(struct platform_device *pdev) 1652 + static void mvebu_pcie_remove(struct platform_device *pdev) 1653 1653 { 1654 1654 struct mvebu_pcie *pcie = platform_get_drvdata(pdev); 1655 1655 struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); ··· 1707 1707 /* Power down card and disable clocks. Must be the last step. */ 1708 1708 mvebu_pcie_powerdown(port); 1709 1709 } 1710 - 1711 - return 0; 1712 1710 } 1713 1711 1714 1712 static const struct of_device_id mvebu_pcie_of_match_table[] = { ··· 1728 1730 .pm = &mvebu_pcie_pm_ops, 1729 1731 }, 1730 1732 .probe = mvebu_pcie_probe, 1731 - .remove = mvebu_pcie_remove, 1733 + .remove_new = mvebu_pcie_remove, 1732 1734 }; 1733 1735 module_platform_driver(mvebu_pcie_driver); 1734 1736
+2 -4
drivers/pci/controller/pci-tegra.c
··· 2680 2680 return err; 2681 2681 } 2682 2682 2683 - static int tegra_pcie_remove(struct platform_device *pdev) 2683 + static void tegra_pcie_remove(struct platform_device *pdev) 2684 2684 { 2685 2685 struct tegra_pcie *pcie = platform_get_drvdata(pdev); 2686 2686 struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); ··· 2701 2701 2702 2702 list_for_each_entry_safe(port, tmp, &pcie->ports, list) 2703 2703 tegra_pcie_port_free(port); 2704 - 2705 - return 0; 2706 2704 } 2707 2705 2708 2706 static int tegra_pcie_pm_suspend(struct device *dev) ··· 2806 2808 .pm = &tegra_pcie_pm_ops, 2807 2809 }, 2808 2810 .probe = tegra_pcie_probe, 2809 - .remove = tegra_pcie_remove, 2811 + .remove_new = tegra_pcie_remove, 2810 2812 }; 2811 2813 module_platform_driver(tegra_pcie_driver);
+2 -4
drivers/pci/controller/pci-xgene-msi.c
··· 348 348 349 349 static enum cpuhp_state pci_xgene_online; 350 350 351 - static int xgene_msi_remove(struct platform_device *pdev) 351 + static void xgene_msi_remove(struct platform_device *pdev) 352 352 { 353 353 struct xgene_msi *msi = platform_get_drvdata(pdev); 354 354 ··· 362 362 msi->bitmap = NULL; 363 363 364 364 xgene_free_domains(msi); 365 - 366 - return 0; 367 365 } 368 366 369 367 static int xgene_msi_hwirq_alloc(unsigned int cpu) ··· 519 521 .of_match_table = xgene_msi_match_table, 520 522 }, 521 523 .probe = xgene_msi_probe, 522 - .remove = xgene_msi_remove, 524 + .remove_new = xgene_msi_remove, 523 525 }; 524 526 525 527 static int __init xgene_pcie_msi_init(void)
+2 -3
drivers/pci/controller/pcie-altera-msi.c
··· 197 197 irq_domain_remove(msi->inner_domain); 198 198 } 199 199 200 - static int altera_msi_remove(struct platform_device *pdev) 200 + static void altera_msi_remove(struct platform_device *pdev) 201 201 { 202 202 struct altera_msi *msi = platform_get_drvdata(pdev); 203 203 ··· 207 207 altera_free_domains(msi); 208 208 209 209 platform_set_drvdata(pdev, NULL); 210 - return 0; 211 210 } 212 211 213 212 static int altera_msi_probe(struct platform_device *pdev) ··· 274 275 .of_match_table = altera_msi_of_match, 275 276 }, 276 277 .probe = altera_msi_probe, 277 - .remove = altera_msi_remove, 278 + .remove_new = altera_msi_remove, 278 279 }; 279 280 280 281 static int __init altera_msi_init(void)
+2 -4
drivers/pci/controller/pcie-altera.c
··· 806 806 return pci_host_probe(bridge); 807 807 } 808 808 809 - static int altera_pcie_remove(struct platform_device *pdev) 809 + static void altera_pcie_remove(struct platform_device *pdev) 810 810 { 811 811 struct altera_pcie *pcie = platform_get_drvdata(pdev); 812 812 struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); ··· 814 814 pci_stop_root_bus(bridge->bus); 815 815 pci_remove_root_bus(bridge->bus); 816 816 altera_pcie_irq_teardown(pcie); 817 - 818 - return 0; 819 817 } 820 818 821 819 static struct platform_driver altera_pcie_driver = { 822 820 .probe = altera_pcie_probe, 823 - .remove = altera_pcie_remove, 821 + .remove_new = altera_pcie_remove, 824 822 .driver = { 825 823 .name = "altera-pcie", 826 824 .of_match_table = altera_pcie_of_match,
+2 -4
drivers/pci/controller/pcie-brcmstb.c
··· 1396 1396 clk_disable_unprepare(pcie->clk); 1397 1397 } 1398 1398 1399 - static int brcm_pcie_remove(struct platform_device *pdev) 1399 + static void brcm_pcie_remove(struct platform_device *pdev) 1400 1400 { 1401 1401 struct brcm_pcie *pcie = platform_get_drvdata(pdev); 1402 1402 struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); ··· 1404 1404 pci_stop_root_bus(bridge->bus); 1405 1405 pci_remove_root_bus(bridge->bus); 1406 1406 __brcm_pcie_remove(pcie); 1407 - 1408 - return 0; 1409 1407 } 1410 1408 1411 1409 static const int pcie_offsets[] = { ··· 1610 1612 1611 1613 static struct platform_driver brcm_pcie_driver = { 1612 1614 .probe = brcm_pcie_probe, 1613 - .remove = brcm_pcie_remove, 1615 + .remove_new = brcm_pcie_remove, 1614 1616 .driver = { 1615 1617 .name = "brcm-pcie", 1616 1618 .of_match_table = brcm_pcie_match,
+2 -4
drivers/pci/controller/pcie-hisi-error.c
··· 299 299 return 0; 300 300 } 301 301 302 - static int hisi_pcie_error_handler_remove(struct platform_device *pdev) 302 + static void hisi_pcie_error_handler_remove(struct platform_device *pdev) 303 303 { 304 304 struct hisi_pcie_error_private *priv = platform_get_drvdata(pdev); 305 305 306 306 ghes_unregister_vendor_record_notifier(&priv->nb); 307 - 308 - return 0; 309 307 } 310 308 311 309 static const struct acpi_device_id hisi_pcie_acpi_match[] = { ··· 317 319 .acpi_match_table = hisi_pcie_acpi_match, 318 320 }, 319 321 .probe = hisi_pcie_error_handler_probe, 320 - .remove = hisi_pcie_error_handler_remove, 322 + .remove_new = hisi_pcie_error_handler_remove, 321 323 }; 322 324 module_platform_driver(hisi_pcie_error_handler_driver); 323 325
+3 -3
drivers/pci/controller/pcie-iproc-platform.c
··· 114 114 return 0; 115 115 } 116 116 117 - static int iproc_pltfm_pcie_remove(struct platform_device *pdev) 117 + static void iproc_pltfm_pcie_remove(struct platform_device *pdev) 118 118 { 119 119 struct iproc_pcie *pcie = platform_get_drvdata(pdev); 120 120 121 - return iproc_pcie_remove(pcie); 121 + iproc_pcie_remove(pcie); 122 122 } 123 123 124 124 static void iproc_pltfm_pcie_shutdown(struct platform_device *pdev) ··· 134 134 .of_match_table = of_match_ptr(iproc_pcie_of_match_table), 135 135 }, 136 136 .probe = iproc_pltfm_pcie_probe, 137 - .remove = iproc_pltfm_pcie_remove, 137 + .remove_new = iproc_pltfm_pcie_remove, 138 138 .shutdown = iproc_pltfm_pcie_shutdown, 139 139 }; 140 140 module_platform_driver(iproc_pltfm_pcie_driver);
+1 -3
drivers/pci/controller/pcie-iproc.c
··· 1537 1537 } 1538 1538 EXPORT_SYMBOL(iproc_pcie_setup); 1539 1539 1540 - int iproc_pcie_remove(struct iproc_pcie *pcie) 1540 + void iproc_pcie_remove(struct iproc_pcie *pcie) 1541 1541 { 1542 1542 struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); 1543 1543 ··· 1548 1548 1549 1549 phy_power_off(pcie->phy); 1550 1550 phy_exit(pcie->phy); 1551 - 1552 - return 0; 1553 1551 } 1554 1552 EXPORT_SYMBOL(iproc_pcie_remove); 1555 1553
+1 -1
drivers/pci/controller/pcie-iproc.h
··· 111 111 }; 112 112 113 113 int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res); 114 - int iproc_pcie_remove(struct iproc_pcie *pcie); 114 + void iproc_pcie_remove(struct iproc_pcie *pcie); 115 115 int iproc_pcie_shutdown(struct iproc_pcie *pcie); 116 116 117 117 #ifdef CONFIG_PCIE_IPROC_MSI
+2 -4
drivers/pci/controller/pcie-mediatek-gen3.c
··· 943 943 return 0; 944 944 } 945 945 946 - static int mtk_pcie_remove(struct platform_device *pdev) 946 + static void mtk_pcie_remove(struct platform_device *pdev) 947 947 { 948 948 struct mtk_gen3_pcie *pcie = platform_get_drvdata(pdev); 949 949 struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); ··· 955 955 956 956 mtk_pcie_irq_teardown(pcie); 957 957 mtk_pcie_power_down(pcie); 958 - 959 - return 0; 960 958 } 961 959 962 960 static void mtk_pcie_irq_save(struct mtk_gen3_pcie *pcie) ··· 1067 1069 1068 1070 static struct platform_driver mtk_pcie_driver = { 1069 1071 .probe = mtk_pcie_probe, 1070 - .remove = mtk_pcie_remove, 1072 + .remove_new = mtk_pcie_remove, 1071 1073 .driver = { 1072 1074 .name = "mtk-pcie-gen3", 1073 1075 .of_match_table = mtk_pcie_of_match,
+2 -4
drivers/pci/controller/pcie-mediatek.c
··· 1134 1134 pci_free_resource_list(windows); 1135 1135 } 1136 1136 1137 - static int mtk_pcie_remove(struct platform_device *pdev) 1137 + static void mtk_pcie_remove(struct platform_device *pdev) 1138 1138 { 1139 1139 struct mtk_pcie *pcie = platform_get_drvdata(pdev); 1140 1140 struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); ··· 1146 1146 mtk_pcie_irq_teardown(pcie); 1147 1147 1148 1148 mtk_pcie_put_resources(pcie); 1149 - 1150 - return 0; 1151 1149 } 1152 1150 1153 1151 static int mtk_pcie_suspend_noirq(struct device *dev) ··· 1237 1239 1238 1240 static struct platform_driver mtk_pcie_driver = { 1239 1241 .probe = mtk_pcie_probe, 1240 - .remove = mtk_pcie_remove, 1242 + .remove_new = mtk_pcie_remove, 1241 1243 .driver = { 1242 1244 .name = "mtk-pcie", 1243 1245 .of_match_table = mtk_pcie_ids,
+2 -4
drivers/pci/controller/pcie-mt7621.c
··· 524 524 return err; 525 525 } 526 526 527 - static int mt7621_pcie_remove(struct platform_device *pdev) 527 + static void mt7621_pcie_remove(struct platform_device *pdev) 528 528 { 529 529 struct mt7621_pcie *pcie = platform_get_drvdata(pdev); 530 530 struct mt7621_pcie_port *port; 531 531 532 532 list_for_each_entry(port, &pcie->ports, list) 533 533 reset_control_put(port->pcie_rst); 534 - 535 - return 0; 536 534 } 537 535 538 536 static const struct of_device_id mt7621_pcie_ids[] = { ··· 541 543 542 544 static struct platform_driver mt7621_pcie_driver = { 543 545 .probe = mt7621_pcie_probe, 544 - .remove = mt7621_pcie_remove, 546 + .remove_new = mt7621_pcie_remove, 545 547 .driver = { 546 548 .name = "mt7621-pci", 547 549 .of_match_table = mt7621_pcie_ids,
+2 -23
drivers/pci/controller/pcie-rcar-host.c
··· 41 41 int irq2; 42 42 }; 43 43 44 - #ifdef CONFIG_ARM 45 - /* 46 - * Here we keep a static copy of the remapped PCIe controller address. 47 - * This is only used on aarch32 systems, all of which have one single 48 - * PCIe controller, to provide quick access to the PCIe controller in 49 - * the L1 link state fixup function, called from the ARM fault handler. 50 - */ 51 - static void __iomem *pcie_base; 52 - /* 53 - * Static copy of PCIe device pointer, so we can check whether the 54 - * device is runtime suspended or not. 55 - */ 56 - static struct device *pcie_dev; 57 - #endif 58 - 59 44 /* Structure representing the PCIe interface */ 60 45 struct rcar_pcie_host { 61 46 struct rcar_pcie pcie; ··· 669 684 } 670 685 671 686 static struct irq_chip rcar_msi_bottom_chip = { 672 - .name = "Rcar MSI", 687 + .name = "R-Car MSI", 673 688 .irq_ack = rcar_msi_irq_ack, 674 689 .irq_mask = rcar_msi_irq_mask, 675 690 .irq_unmask = rcar_msi_irq_unmask, ··· 798 813 799 814 /* 800 815 * Setup MSI data target using RC base address address, which 801 - * is guaranteed to be in the low 32bit range on any RCar HW. 816 + * is guaranteed to be in the low 32bit range on any R-Car HW. 802 817 */ 803 818 rcar_pci_write_reg(pcie, lower_32_bits(res.start) | MSIFE, PCIEMSIALR); 804 819 rcar_pci_write_reg(pcie, upper_32_bits(res.start), PCIEMSIAUR); ··· 863 878 goto err_irq2; 864 879 } 865 880 host->msi.irq2 = i; 866 - 867 - #ifdef CONFIG_ARM 868 - /* Cache static copy for L1 link state fixup hook on aarch32 */ 869 - pcie_base = pcie->base; 870 - pcie_dev = pcie->dev; 871 - #endif 872 881 873 882 return 0; 874 883
+100 -121
drivers/pci/controller/pcie-rockchip-ep.c
··· 61 61 ROCKCHIP_PCIE_AT_OB_REGION_DESC0(region)); 62 62 rockchip_pcie_write(rockchip, 0, 63 63 ROCKCHIP_PCIE_AT_OB_REGION_DESC1(region)); 64 - rockchip_pcie_write(rockchip, 0, 65 - ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(region)); 66 - rockchip_pcie_write(rockchip, 0, 67 - ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(region)); 68 64 } 69 65 70 66 static void rockchip_pcie_prog_ep_ob_atu(struct rockchip_pcie *rockchip, u8 fn, 71 - u32 r, u32 type, u64 cpu_addr, 72 - u64 pci_addr, size_t size) 67 + u32 r, u64 cpu_addr, u64 pci_addr, 68 + size_t size) 73 69 { 74 - u64 sz = 1ULL << fls64(size - 1); 75 - int num_pass_bits = ilog2(sz); 76 - u32 addr0, addr1, desc0, desc1; 77 - bool is_nor_msg = (type == AXI_WRAPPER_NOR_MSG); 70 + int num_pass_bits = fls64(size - 1); 71 + u32 addr0, addr1, desc0; 78 72 79 - /* The minimal region size is 1MB */ 80 73 if (num_pass_bits < 8) 81 74 num_pass_bits = 8; 82 75 83 - cpu_addr -= rockchip->mem_res->start; 84 - addr0 = ((is_nor_msg ? 0x10 : (num_pass_bits - 1)) & 85 - PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) | 86 - (lower_32_bits(cpu_addr) & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR); 87 - addr1 = upper_32_bits(is_nor_msg ? 
			     cpu_addr : pci_addr);
-	desc0 = ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN(fn) | type;
-	desc1 = 0;
+	addr0 = ((num_pass_bits - 1) & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) |
+		(lower_32_bits(pci_addr) & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR);
+	addr1 = upper_32_bits(pci_addr);
+	desc0 = ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN(fn) | AXI_WRAPPER_MEM_WRITE;

-	if (is_nor_msg) {
-		rockchip_pcie_write(rockchip, 0,
-				    ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r));
-		rockchip_pcie_write(rockchip, 0,
-				    ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r));
-		rockchip_pcie_write(rockchip, desc0,
-				    ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r));
-		rockchip_pcie_write(rockchip, desc1,
-				    ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r));
-	} else {
-		/* PCI bus address region */
-		rockchip_pcie_write(rockchip, addr0,
-				    ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r));
-		rockchip_pcie_write(rockchip, addr1,
-				    ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r));
-		rockchip_pcie_write(rockchip, desc0,
-				    ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r));
-		rockchip_pcie_write(rockchip, desc1,
-				    ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r));
-
-		addr0 =
-		    ((num_pass_bits - 1) & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) |
-		    (lower_32_bits(cpu_addr) &
-		     PCIE_CORE_OB_REGION_ADDR0_LO_ADDR);
-		addr1 = upper_32_bits(cpu_addr);
-	}
-
-	/* CPU bus address region */
+	/* PCI bus address region */
 	rockchip_pcie_write(rockchip, addr0,
-			    ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(r));
+			    ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r));
 	rockchip_pcie_write(rockchip, addr1,
-			    ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(r));
+			    ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r));
+	rockchip_pcie_write(rockchip, desc0,
+			    ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r));
+	rockchip_pcie_write(rockchip, 0,
+			    ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r));
 }

 static int rockchip_pcie_ep_write_header(struct pci_epc *epc, u8 fn, u8 vfn,
					  struct pci_epf_header *hdr)
 {
+	u32 reg;
 	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
 	struct rockchip_pcie *rockchip = &ep->rockchip;
···
				    PCIE_CORE_CONFIG_VENDOR);
 	}

-	rockchip_pcie_write(rockchip, hdr->deviceid << 16,
-			    ROCKCHIP_PCIE_EP_FUNC_BASE(fn) + PCI_VENDOR_ID);
+	reg = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_DID_VID);
+	reg = (reg & 0xFFFF) | (hdr->deviceid << 16);
+	rockchip_pcie_write(rockchip, reg, PCIE_EP_CONFIG_DID_VID);

 	rockchip_pcie_write(rockchip,
 			    hdr->revid |
···
			    ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar));
 }

+static inline u32 rockchip_ob_region(phys_addr_t addr)
+{
+	return (addr >> ilog2(SZ_1M)) & 0x1f;
+}
+
 static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
				     phys_addr_t addr, u64 pci_addr,
				     size_t size)
 {
 	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
 	struct rockchip_pcie *pcie = &ep->rockchip;
-	u32 r;
+	u32 r = rockchip_ob_region(addr);

-	r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG);
-	/*
-	 * Region 0 is reserved for configuration space and shouldn't
-	 * be used elsewhere per TRM, so leave it out.
-	 */
-	if (r >= ep->max_regions - 1) {
-		dev_err(&epc->dev, "no free outbound region\n");
-		return -EINVAL;
-	}
-
-	rockchip_pcie_prog_ep_ob_atu(pcie, fn, r, AXI_WRAPPER_MEM_WRITE, addr,
-				     pci_addr, size);
+	rockchip_pcie_prog_ep_ob_atu(pcie, fn, r, addr, pci_addr, size);

 	set_bit(r, &ep->ob_region_map);
 	ep->ob_addr[r] = addr;
···
 	struct rockchip_pcie *rockchip = &ep->rockchip;
 	u32 r;

-	for (r = 0; r < ep->max_regions - 1; r++)
+	for (r = 0; r < ep->max_regions; r++)
		if (ep->ob_addr[r] == addr)
			break;

-	/*
-	 * Region 0 is reserved for configuration space and shouldn't
-	 * be used elsewhere per TRM, so leave it out.
-	 */
-	if (r == ep->max_regions - 1)
+	if (r == ep->max_regions)
		return;

 	rockchip_pcie_clear_ep_ob_atu(rockchip, r);
···
 {
 	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
 	struct rockchip_pcie *rockchip = &ep->rockchip;
-	u16 flags;
+	u32 flags;

 	flags = rockchip_pcie_read(rockchip,
				   ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
				   ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
 	flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK;
 	flags |=
-	   ((multi_msg_cap << 1) << ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET) |
-	   PCI_MSI_FLAGS_64BIT;
+	   (multi_msg_cap << ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET) |
+	   (PCI_MSI_FLAGS_64BIT << ROCKCHIP_PCIE_EP_MSI_FLAGS_OFFSET);
 	flags &= ~ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP;
 	rockchip_pcie_write(rockchip, flags,
			    ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
···
 {
 	struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
 	struct rockchip_pcie *rockchip = &ep->rockchip;
-	u16 flags;
+	u32 flags;

 	flags = rockchip_pcie_read(rockchip,
				   ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
···
 }

 static void rockchip_pcie_ep_assert_intx(struct rockchip_pcie_ep *ep, u8 fn,
-					 u8 intx, bool is_asserted)
+					 u8 intx, bool do_assert)
 {
 	struct rockchip_pcie *rockchip = &ep->rockchip;
-	u32 r = ep->max_regions - 1;
-	u32 offset;
-	u32 status;
-	u8 msg_code;
-
-	if (unlikely(ep->irq_pci_addr != ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR ||
-		     ep->irq_pci_fn != fn)) {
-		rockchip_pcie_prog_ep_ob_atu(rockchip, fn, r,
-					     AXI_WRAPPER_NOR_MSG,
-					     ep->irq_phys_addr, 0, 0);
-		ep->irq_pci_addr = ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR;
-		ep->irq_pci_fn = fn;
-	}

 	intx &= 3;
-	if (is_asserted) {
+
+	if (do_assert) {
		ep->irq_pending |= BIT(intx);
-		msg_code = ROCKCHIP_PCIE_MSG_CODE_ASSERT_INTA + intx;
+		rockchip_pcie_write(rockchip,
+				    PCIE_CLIENT_INT_IN_ASSERT |
+				    PCIE_CLIENT_INT_PEND_ST_PEND,
+				    PCIE_CLIENT_LEGACY_INT_CTRL);
 	} else {
		ep->irq_pending &= ~BIT(intx);
-		msg_code = ROCKCHIP_PCIE_MSG_CODE_DEASSERT_INTA + intx;
+		rockchip_pcie_write(rockchip,
+				    PCIE_CLIENT_INT_IN_DEASSERT |
+				    PCIE_CLIENT_INT_PEND_ST_NORMAL,
+				    PCIE_CLIENT_LEGACY_INT_CTRL);
 	}
-
-	status = rockchip_pcie_read(rockchip,
-				    ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
-				    ROCKCHIP_PCIE_EP_CMD_STATUS);
-	status &= ROCKCHIP_PCIE_EP_CMD_STATUS_IS;
-
-	if ((status != 0) ^ (ep->irq_pending != 0)) {
-		status ^= ROCKCHIP_PCIE_EP_CMD_STATUS_IS;
-		rockchip_pcie_write(rockchip, status,
-				    ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
-				    ROCKCHIP_PCIE_EP_CMD_STATUS);
-	}
-
-	offset =
-	   ROCKCHIP_PCIE_MSG_ROUTING(ROCKCHIP_PCIE_MSG_ROUTING_LOCAL_INTX) |
-	   ROCKCHIP_PCIE_MSG_CODE(msg_code) | ROCKCHIP_PCIE_MSG_NO_DATA;
-	writel(0, ep->irq_cpu_addr + offset);
 }

 static int rockchip_pcie_ep_send_legacy_irq(struct rockchip_pcie_ep *ep, u8 fn,
···
					 u8 interrupt_num)
 {
 	struct rockchip_pcie *rockchip = &ep->rockchip;
-	u16 flags, mme, data, data_mask;
+	u32 flags, mme, data, data_mask;
 	u8 msi_count;
-	u64 pci_addr, pci_addr_mask = 0xff;
+	u64 pci_addr;
+	u32 r;

 	/* Check MSI enable bit */
 	flags = rockchip_pcie_read(&ep->rockchip,
···
				   ROCKCHIP_PCIE_EP_FUNC_BASE(fn) +
				   ROCKCHIP_PCIE_EP_MSI_CTRL_REG +
				   PCI_MSI_ADDRESS_LO);
-	pci_addr &= GENMASK_ULL(63, 2);

 	/* Set the outbound region if needed. */
-	if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) ||
+	if (unlikely(ep->irq_pci_addr != (pci_addr & PCIE_ADDR_MASK) ||
		     ep->irq_pci_fn != fn)) {
-		rockchip_pcie_prog_ep_ob_atu(rockchip, fn, ep->max_regions - 1,
-					     AXI_WRAPPER_MEM_WRITE,
+		r = rockchip_ob_region(ep->irq_phys_addr);
+		rockchip_pcie_prog_ep_ob_atu(rockchip, fn, r,
					     ep->irq_phys_addr,
-					     pci_addr & ~pci_addr_mask,
-					     pci_addr_mask + 1);
-		ep->irq_pci_addr = (pci_addr & ~pci_addr_mask);
+					     pci_addr & PCIE_ADDR_MASK,
+					     ~PCIE_ADDR_MASK + 1);
+		ep->irq_pci_addr = (pci_addr & PCIE_ADDR_MASK);
		ep->irq_pci_fn = fn;
 	}

-	writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask));
+	writew(data, ep->irq_cpu_addr + (pci_addr & ~PCIE_ADDR_MASK));
 	return 0;
 }
···
 	.linkup_notifier = false,
 	.msi_capable = true,
 	.msix_capable = false,
+	.align = 256,
 };

 static const struct pci_epc_features*
···
 	if (err < 0 || ep->max_regions > MAX_REGION_LIMIT)
		ep->max_regions = MAX_REGION_LIMIT;

+	ep->ob_region_map = 0;
+
 	err = of_property_read_u8(dev->of_node, "max-functions",
				  &ep->epc->max_functions);
 	if (err < 0)
···
 	struct rockchip_pcie *rockchip;
 	struct pci_epc *epc;
 	size_t max_regions;
-	int err;
+	struct pci_epc_mem_window *windows = NULL;
+	int err, i;
+	u32 cfg_msi, cfg_msix_cp;

 	ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
 	if (!ep)
···
 	/* Only enable function 0 by default */
 	rockchip_pcie_write(rockchip, BIT(0), PCIE_CORE_PHY_FUNC_CFG);

-	err = pci_epc_mem_init(epc, rockchip->mem_res->start,
-			       resource_size(rockchip->mem_res), PAGE_SIZE);
+	windows = devm_kcalloc(dev, ep->max_regions,
+			       sizeof(struct pci_epc_mem_window), GFP_KERNEL);
+	if (!windows) {
+		err = -ENOMEM;
+		goto err_uninit_port;
+	}
+	for (i = 0; i < ep->max_regions; i++) {
+		windows[i].phys_base = rockchip->mem_res->start + (SZ_1M * i);
+		windows[i].size = SZ_1M;
+		windows[i].page_size = SZ_1M;
+	}
+	err = pci_epc_multi_mem_init(epc, windows, ep->max_regions);
+	devm_kfree(dev, windows);
+
 	if (err < 0) {
		dev_err(dev, "failed to initialize the memory space\n");
		goto err_uninit_port;
 	}

 	ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr,
-						  SZ_128K);
+						  SZ_1M);
 	if (!ep->irq_cpu_addr) {
		dev_err(dev, "failed to reserve memory space for MSI\n");
		err = -ENOMEM;
···
 	}

 	ep->irq_pci_addr = ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR;
+
+	/*
+	 * MSI-X is not supported but the controller still advertises the MSI-X
+	 * capability by default, which can lead to the Root Complex side
+	 * allocating MSI-X vectors which cannot be used. Avoid this by skipping
+	 * the MSI-X capability entry in the PCIe capabilities linked-list: get
+	 * the next pointer from the MSI-X entry and set that in the MSI
+	 * capability entry (which is the previous entry). This way the MSI-X
+	 * entry is skipped (left out of the linked-list) and not advertised.
+	 */
+	cfg_msi = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_BASE +
+				     ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
+
+	cfg_msi &= ~ROCKCHIP_PCIE_EP_MSI_CP1_MASK;
+
+	cfg_msix_cp = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_BASE +
+					 ROCKCHIP_PCIE_EP_MSIX_CAP_REG) &
+		      ROCKCHIP_PCIE_EP_MSIX_CAP_CP_MASK;
+
+	cfg_msi |= cfg_msix_cp;
+
+	rockchip_pcie_write(rockchip, cfg_msi,
+			    PCIE_EP_CONFIG_BASE + ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
+
+	rockchip_pcie_write(rockchip, PCIE_CLIENT_CONF_ENABLE,
+			    PCIE_CLIENT_CONFIG);

 	return 0;
 err_epc_mem_exit:
drivers/pci/controller/pcie-rockchip-host.c (+2 -4)
···
 	return err;
 }

-static int rockchip_pcie_remove(struct platform_device *pdev)
+static void rockchip_pcie_remove(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 	struct rockchip_pcie *rockchip = dev_get_drvdata(dev);
···
 	regulator_disable(rockchip->vpcie3v3);
 	regulator_disable(rockchip->vpcie1v8);
 	regulator_disable(rockchip->vpcie0v9);
-
-	return 0;
 }

 static const struct dev_pm_ops rockchip_pcie_pm_ops = {
···
		.pm = &rockchip_pcie_pm_ops,
 	},
 	.probe = rockchip_pcie_probe,
-	.remove = rockchip_pcie_remove,
+	.remove_new = rockchip_pcie_remove,
 };
 module_platform_driver(rockchip_pcie_driver);
drivers/pci/controller/pcie-rockchip.c (+17)
···
 #include <linux/clk.h>
 #include <linux/delay.h>
 #include <linux/gpio/consumer.h>
+#include <linux/iopoll.h>
 #include <linux/of_pci.h>
 #include <linux/phy/phy.h>
 #include <linux/platform_device.h>
···
 }
 EXPORT_SYMBOL_GPL(rockchip_pcie_parse_dt);

+#define rockchip_pcie_read_addr(addr) rockchip_pcie_read(rockchip, addr)
+/* 100 ms max wait time for PHY PLLs to lock */
+#define RK_PHY_PLL_LOCK_TIMEOUT_US 100000
+/* Sleep should be less than 20ms */
+#define RK_PHY_PLL_LOCK_SLEEP_US 1000
+
 int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
 {
 	struct device *dev = rockchip->dev;
···
			dev_err(dev, "power on phy%d err %d\n", i, err);
			goto err_power_off_phy;
		}
+	}
+
+	err = readx_poll_timeout(rockchip_pcie_read_addr,
+				 PCIE_CLIENT_SIDE_BAND_STATUS,
+				 regs, !(regs & PCIE_CLIENT_PHY_ST),
+				 RK_PHY_PLL_LOCK_SLEEP_US,
+				 RK_PHY_PLL_LOCK_TIMEOUT_US);
+	if (err) {
+		dev_err(dev, "PHY PLLs could not lock, %d\n", err);
+		goto err_power_off_phy;
 	}

 	/*
drivers/pci/controller/pcie-rockchip.h (+34 -15)
···
 #define PCIE_CLIENT_MODE_EP		HIWORD_UPDATE(0x0040, 0)
 #define PCIE_CLIENT_GEN_SEL_1		HIWORD_UPDATE(0x0080, 0)
 #define PCIE_CLIENT_GEN_SEL_2		HIWORD_UPDATE_BIT(0x0080)
+#define PCIE_CLIENT_LEGACY_INT_CTRL	(PCIE_CLIENT_BASE + 0x0c)
+#define PCIE_CLIENT_INT_IN_ASSERT	HIWORD_UPDATE_BIT(0x0002)
+#define PCIE_CLIENT_INT_IN_DEASSERT	HIWORD_UPDATE(0x0002, 0)
+#define PCIE_CLIENT_INT_PEND_ST_PEND	HIWORD_UPDATE_BIT(0x0001)
+#define PCIE_CLIENT_INT_PEND_ST_NORMAL	HIWORD_UPDATE(0x0001, 0)
+#define PCIE_CLIENT_SIDE_BAND_STATUS	(PCIE_CLIENT_BASE + 0x20)
+#define PCIE_CLIENT_PHY_ST		BIT(12)
 #define PCIE_CLIENT_DEBUG_OUT_0		(PCIE_CLIENT_BASE + 0x3c)
 #define PCIE_CLIENT_DEBUG_LTSSM_MASK	GENMASK(5, 0)
 #define PCIE_CLIENT_DEBUG_LTSSM_L1	0x18
···
 #define PCIE_RC_RP_ATS_BASE		0x400000
 #define PCIE_RC_CONFIG_NORMAL_BASE	0x800000
+#define PCIE_EP_PF_CONFIG_REGS_BASE	0x800000
 #define PCIE_RC_CONFIG_BASE		0xa00000
+#define PCIE_EP_CONFIG_BASE		0xa00000
+#define PCIE_EP_CONFIG_DID_VID		(PCIE_EP_CONFIG_BASE + 0x00)
 #define PCIE_RC_CONFIG_RID_CCR		(PCIE_RC_CONFIG_BASE + 0x08)
 #define PCIE_RC_CONFIG_DCR		(PCIE_RC_CONFIG_BASE + 0xc4)
 #define PCIE_RC_CONFIG_DCR_CSPL_SHIFT	18
···
 #define PCIE_RC_CONFIG_THP_CAP		(PCIE_RC_CONFIG_BASE + 0x274)
 #define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK	GENMASK(31, 20)

+#define PCIE_ADDR_MASK			0xffffff00
 #define PCIE_CORE_AXI_CONF_BASE		0xc00000
 #define PCIE_CORE_OB_REGION_ADDR0	(PCIE_CORE_AXI_CONF_BASE + 0x0)
 #define PCIE_CORE_OB_REGION_ADDR0_NUM_BITS	0x3f
-#define PCIE_CORE_OB_REGION_ADDR0_LO_ADDR	0xffffff00
+#define PCIE_CORE_OB_REGION_ADDR0_LO_ADDR	PCIE_ADDR_MASK
 #define PCIE_CORE_OB_REGION_ADDR1	(PCIE_CORE_AXI_CONF_BASE + 0x4)
 #define PCIE_CORE_OB_REGION_DESC0	(PCIE_CORE_AXI_CONF_BASE + 0x8)
 #define PCIE_CORE_OB_REGION_DESC1	(PCIE_CORE_AXI_CONF_BASE + 0xc)
···
 #define PCIE_CORE_AXI_INBOUND_BASE	0xc00800
 #define PCIE_RP_IB_ADDR0		(PCIE_CORE_AXI_INBOUND_BASE + 0x0)
 #define PCIE_CORE_IB_REGION_ADDR0_NUM_BITS	0x3f
-#define PCIE_CORE_IB_REGION_ADDR0_LO_ADDR	0xffffff00
+#define PCIE_CORE_IB_REGION_ADDR0_LO_ADDR	PCIE_ADDR_MASK
 #define PCIE_RP_IB_ADDR1		(PCIE_CORE_AXI_INBOUND_BASE + 0x4)

 /* Size of one AXI Region (not Region 0) */
···
 #define ROCKCHIP_PCIE_EP_CMD_STATUS			0x4
 #define ROCKCHIP_PCIE_EP_CMD_STATUS_IS			BIT(19)
 #define ROCKCHIP_PCIE_EP_MSI_CTRL_REG			0x90
+#define ROCKCHIP_PCIE_EP_MSI_CP1_OFFSET			8
+#define ROCKCHIP_PCIE_EP_MSI_CP1_MASK			GENMASK(15, 8)
+#define ROCKCHIP_PCIE_EP_MSI_FLAGS_OFFSET		16
 #define ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_OFFSET		17
 #define ROCKCHIP_PCIE_EP_MSI_CTRL_MMC_MASK		GENMASK(19, 17)
 #define ROCKCHIP_PCIE_EP_MSI_CTRL_MME_OFFSET		20
 #define ROCKCHIP_PCIE_EP_MSI_CTRL_MME_MASK		GENMASK(22, 20)
 #define ROCKCHIP_PCIE_EP_MSI_CTRL_ME			BIT(16)
 #define ROCKCHIP_PCIE_EP_MSI_CTRL_MASK_MSI_CAP		BIT(24)
+#define ROCKCHIP_PCIE_EP_MSIX_CAP_REG			0xb0
+#define ROCKCHIP_PCIE_EP_MSIX_CAP_CP_OFFSET		8
+#define ROCKCHIP_PCIE_EP_MSIX_CAP_CP_MASK		GENMASK(15, 8)
 #define ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR			0x1
 #define ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR		0x3
-#define ROCKCHIP_PCIE_EP_FUNC_BASE(fn)	(((fn) << 12) & GENMASK(19, 12))
+#define ROCKCHIP_PCIE_EP_FUNC_BASE(fn) \
+	(PCIE_EP_PF_CONFIG_REGS_BASE + (((fn) << 12) & GENMASK(19, 12)))
+#define ROCKCHIP_PCIE_EP_VIRT_FUNC_BASE(fn) \
+	(PCIE_EP_PF_CONFIG_REGS_BASE + 0x10000 + (((fn) << 12) & GENMASK(19, 12)))
 #define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
-	(PCIE_RC_RP_ATS_BASE + 0x0840 + (fn) * 0x0040 + (bar) * 0x0008)
+	(PCIE_CORE_AXI_CONF_BASE + 0x0828 + (fn) * 0x0040 + (bar) * 0x0008)
 #define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
-	(PCIE_RC_RP_ATS_BASE + 0x0844 + (fn) * 0x0040 + (bar) * 0x0008)
-#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r) \
-	(PCIE_RC_RP_ATS_BASE + 0x0000 + ((r) & 0x1f) * 0x0020)
+	(PCIE_CORE_AXI_CONF_BASE + 0x082c + (fn) * 0x0040 + (bar) * 0x0008)
 #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK	GENMASK(19, 12)
 #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN(devfn) \
	(((devfn) << 12) & \
	 ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_DEVFN_MASK)
···
 #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK	GENMASK(27, 20)
 #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS(bus) \
	(((bus) << 20) & ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0_BUS_MASK)
+#define PCIE_RC_EP_ATR_OB_REGIONS_1_32	(PCIE_CORE_AXI_CONF_BASE + 0x0020)
+#define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR0(r) \
+	(PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x0000 + ((r) & 0x1f) * 0x0020)
 #define ROCKCHIP_PCIE_AT_OB_REGION_PCI_ADDR1(r) \
-	(PCIE_RC_RP_ATS_BASE + 0x0004 + ((r) & 0x1f) * 0x0020)
+	(PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x0004 + ((r) & 0x1f) * 0x0020)
 #define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_HARDCODED_RID	BIT(23)
 #define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK	GENMASK(31, 24)
 #define ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN(devfn) \
	(((devfn) << 24) & ROCKCHIP_PCIE_AT_OB_REGION_DESC0_DEVFN_MASK)
 #define ROCKCHIP_PCIE_AT_OB_REGION_DESC0(r) \
-	(PCIE_RC_RP_ATS_BASE + 0x0008 + ((r) & 0x1f) * 0x0020)
-#define ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r) \
-	(PCIE_RC_RP_ATS_BASE + 0x000c + ((r) & 0x1f) * 0x0020)
-#define ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR0(r) \
-	(PCIE_RC_RP_ATS_BASE + 0x0018 + ((r) & 0x1f) * 0x0020)
-#define ROCKCHIP_PCIE_AT_OB_REGION_CPU_ADDR1(r) \
-	(PCIE_RC_RP_ATS_BASE + 0x001c + ((r) & 0x1f) * 0x0020)
+	(PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x0008 + ((r) & 0x1f) * 0x0020)
+#define ROCKCHIP_PCIE_AT_OB_REGION_DESC1(r) \
+	(PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x000c + ((r) & 0x1f) * 0x0020)
+#define ROCKCHIP_PCIE_AT_OB_REGION_DESC2(r) \
+	(PCIE_RC_EP_ATR_OB_REGIONS_1_32 + 0x0010 + ((r) & 0x1f) * 0x0020)

 #define ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG0(fn) \
	(PCIE_CORE_CTRL_MGMT_BASE + 0x0240 + (fn) * 0x0008)
drivers/pci/controller/vmd.c (+10 -1)
···
		if (!list_empty(&child->devices)) {
			dev = list_first_entry(&child->devices,
					       struct pci_dev, bus_list);
-			if (pci_reset_bus(dev))
+			ret = pci_reset_bus(dev);
+			if (ret)
				pci_warn(dev, "can't reset device: %d\n", ret);

			break;
···
 	ida_simple_remove(&vmd_instance_ida, vmd->instance);
 }

+static void vmd_shutdown(struct pci_dev *dev)
+{
+	struct vmd_dev *vmd = pci_get_drvdata(dev);
+
+	vmd_remove_irq_domain(vmd);
+}
+
 #ifdef CONFIG_PM_SLEEP
 static int vmd_suspend(struct device *dev)
 {
···
 	.id_table	= vmd_ids,
 	.probe		= vmd_probe,
 	.remove		= vmd_remove,
+	.shutdown	= vmd_shutdown,
 	.driver		= {
		.pm	= &vmd_dev_pm_ops,
 	},
drivers/pci/endpoint/functions/Kconfig (+11 -1)
···
	  If in doubt, say "N" to disable Endpoint NTB driver.

 config PCI_EPF_VNTB
-	tristate "PCI Endpoint NTB driver"
+	tristate "PCI Endpoint Virtual NTB driver"
	depends on PCI_ENDPOINT
	depends on NTB
	select CONFIGFS_FS
···
	  between PCI Root Port and PCIe Endpoint.

	  If in doubt, say "N" to disable Endpoint NTB driver.
+
+config PCI_EPF_MHI
+	tristate "PCI Endpoint driver for MHI bus"
+	depends on PCI_ENDPOINT && MHI_BUS_EP
+	help
+	  Enable this configuration option to enable the PCI Endpoint
+	  driver for Modem Host Interface (MHI) bus in Qualcomm Endpoint
+	  devices such as SDX55.
+
+	  If in doubt, say "N" to disable Endpoint driver for MHI bus.
drivers/pci/endpoint/functions/Makefile (+1)
···
 obj-$(CONFIG_PCI_EPF_TEST)		+= pci-epf-test.o
 obj-$(CONFIG_PCI_EPF_NTB)		+= pci-epf-ntb.o
 obj-$(CONFIG_PCI_EPF_VNTB)		+= pci-epf-vntb.o
+obj-$(CONFIG_PCI_EPF_MHI)		+= pci-epf-mhi.o
drivers/pci/endpoint/functions/pci-epf-mhi.c (+458, new file)
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PCI EPF driver for MHI Endpoint devices
+ *
+ * Copyright (C) 2023 Linaro Ltd.
+ * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ */
+
+#include <linux/mhi_ep.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/pci-epc.h>
+#include <linux/pci-epf.h>
+
+#define MHI_VERSION_1_0 0x01000000
+
+#define to_epf_mhi(cntrl) container_of(cntrl, struct pci_epf_mhi, cntrl)
+
+struct pci_epf_mhi_ep_info {
+	const struct mhi_ep_cntrl_config *config;
+	struct pci_epf_header *epf_header;
+	enum pci_barno bar_num;
+	u32 epf_flags;
+	u32 msi_count;
+	u32 mru;
+};
+
+#define MHI_EP_CHANNEL_CONFIG(ch_num, ch_name, direction)	\
+	{							\
+		.num = ch_num,					\
+		.name = ch_name,				\
+		.dir = direction,				\
+	}
+
+#define MHI_EP_CHANNEL_CONFIG_UL(ch_num, ch_name)		\
+	MHI_EP_CHANNEL_CONFIG(ch_num, ch_name, DMA_TO_DEVICE)
+
+#define MHI_EP_CHANNEL_CONFIG_DL(ch_num, ch_name)		\
+	MHI_EP_CHANNEL_CONFIG(ch_num, ch_name, DMA_FROM_DEVICE)
+
+static const struct mhi_ep_channel_config mhi_v1_channels[] = {
+	MHI_EP_CHANNEL_CONFIG_UL(0, "LOOPBACK"),
+	MHI_EP_CHANNEL_CONFIG_DL(1, "LOOPBACK"),
+	MHI_EP_CHANNEL_CONFIG_UL(2, "SAHARA"),
+	MHI_EP_CHANNEL_CONFIG_DL(3, "SAHARA"),
+	MHI_EP_CHANNEL_CONFIG_UL(4, "DIAG"),
+	MHI_EP_CHANNEL_CONFIG_DL(5, "DIAG"),
+	MHI_EP_CHANNEL_CONFIG_UL(6, "SSR"),
+	MHI_EP_CHANNEL_CONFIG_DL(7, "SSR"),
+	MHI_EP_CHANNEL_CONFIG_UL(8, "QDSS"),
+	MHI_EP_CHANNEL_CONFIG_DL(9, "QDSS"),
+	MHI_EP_CHANNEL_CONFIG_UL(10, "EFS"),
+	MHI_EP_CHANNEL_CONFIG_DL(11, "EFS"),
+	MHI_EP_CHANNEL_CONFIG_UL(12, "MBIM"),
+	MHI_EP_CHANNEL_CONFIG_DL(13, "MBIM"),
+	MHI_EP_CHANNEL_CONFIG_UL(14, "QMI"),
+	MHI_EP_CHANNEL_CONFIG_DL(15, "QMI"),
+	MHI_EP_CHANNEL_CONFIG_UL(16, "QMI"),
+	MHI_EP_CHANNEL_CONFIG_DL(17, "QMI"),
+	MHI_EP_CHANNEL_CONFIG_UL(18, "IP-CTRL-1"),
+	MHI_EP_CHANNEL_CONFIG_DL(19, "IP-CTRL-1"),
+	MHI_EP_CHANNEL_CONFIG_UL(20, "IPCR"),
+	MHI_EP_CHANNEL_CONFIG_DL(21, "IPCR"),
+	MHI_EP_CHANNEL_CONFIG_UL(32, "DUN"),
+	MHI_EP_CHANNEL_CONFIG_DL(33, "DUN"),
+	MHI_EP_CHANNEL_CONFIG_UL(46, "IP_SW0"),
+	MHI_EP_CHANNEL_CONFIG_DL(47, "IP_SW0"),
+};
+
+static const struct mhi_ep_cntrl_config mhi_v1_config = {
+	.max_channels = 128,
+	.num_channels = ARRAY_SIZE(mhi_v1_channels),
+	.ch_cfg = mhi_v1_channels,
+	.mhi_version = MHI_VERSION_1_0,
+};
+
+static struct pci_epf_header sdx55_header = {
+	.vendorid = PCI_VENDOR_ID_QCOM,
+	.deviceid = 0x0306,
+	.baseclass_code = PCI_BASE_CLASS_COMMUNICATION,
+	.subclass_code = PCI_CLASS_COMMUNICATION_MODEM & 0xff,
+	.interrupt_pin = PCI_INTERRUPT_INTA,
+};
+
+static const struct pci_epf_mhi_ep_info sdx55_info = {
+	.config = &mhi_v1_config,
+	.epf_header = &sdx55_header,
+	.bar_num = BAR_0,
+	.epf_flags = PCI_BASE_ADDRESS_MEM_TYPE_32,
+	.msi_count = 32,
+	.mru = 0x8000,
+};
+
+struct pci_epf_mhi {
+	const struct pci_epf_mhi_ep_info *info;
+	struct mhi_ep_cntrl mhi_cntrl;
+	struct pci_epf *epf;
+	struct mutex lock;
+	void __iomem *mmio;
+	resource_size_t mmio_phys;
+	u32 mmio_size;
+	int irq;
+};
+
+static int __pci_epf_mhi_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr,
+				   phys_addr_t *paddr, void __iomem **vaddr,
+				   size_t offset, size_t size)
+{
+	struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+	struct pci_epf *epf = epf_mhi->epf;
+	struct pci_epc *epc = epf->epc;
+	int ret;
+
+	*vaddr = pci_epc_mem_alloc_addr(epc, paddr, size + offset);
+	if (!*vaddr)
+		return -ENOMEM;
+
+	ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, *paddr,
+			       pci_addr - offset, size + offset);
+	if (ret) {
+		pci_epc_mem_free_addr(epc, *paddr, *vaddr, size + offset);
+		return ret;
+	}
+
+	*paddr = *paddr + offset;
+	*vaddr = *vaddr + offset;
+
+	return 0;
+}
+
+static int pci_epf_mhi_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr,
+				 phys_addr_t *paddr, void __iomem **vaddr,
+				 size_t size)
+{
+	struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+	struct pci_epc *epc = epf_mhi->epf->epc;
+	size_t offset = pci_addr & (epc->mem->window.page_size - 1);
+
+	return __pci_epf_mhi_alloc_map(mhi_cntrl, pci_addr, paddr, vaddr,
+				      offset, size);
+}
+
+static void __pci_epf_mhi_unmap_free(struct mhi_ep_cntrl *mhi_cntrl,
+				     u64 pci_addr, phys_addr_t paddr,
+				     void __iomem *vaddr, size_t offset,
+				     size_t size)
+{
+	struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+	struct pci_epf *epf = epf_mhi->epf;
+	struct pci_epc *epc = epf->epc;
+
+	pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, paddr - offset);
+	pci_epc_mem_free_addr(epc, paddr - offset, vaddr - offset,
+			      size + offset);
+}
+
+static void pci_epf_mhi_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr,
+				   phys_addr_t paddr, void __iomem *vaddr,
+				   size_t size)
+{
+	struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+	struct pci_epf *epf = epf_mhi->epf;
+	struct pci_epc *epc = epf->epc;
+	size_t offset = pci_addr & (epc->mem->window.page_size - 1);
+
+	__pci_epf_mhi_unmap_free(mhi_cntrl, pci_addr, paddr, vaddr, offset,
+				 size);
+}
+
+static void pci_epf_mhi_raise_irq(struct mhi_ep_cntrl *mhi_cntrl, u32 vector)
+{
+	struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+	struct pci_epf *epf = epf_mhi->epf;
+	struct pci_epc *epc = epf->epc;
+
+	/*
+	 * MHI supplies 0 based MSI vectors but the API expects the vector
+	 * number to start from 1, so we need to increment the vector by 1.
+	 */
+	pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no, PCI_EPC_IRQ_MSI,
+			  vector + 1);
+}
+
+static int pci_epf_mhi_read_from_host(struct mhi_ep_cntrl *mhi_cntrl, u64 from,
+				      void *to, size_t size)
+{
+	struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+	size_t offset = from % SZ_4K;
+	void __iomem *tre_buf;
+	phys_addr_t tre_phys;
+	int ret;
+
+	mutex_lock(&epf_mhi->lock);
+
+	ret = __pci_epf_mhi_alloc_map(mhi_cntrl, from, &tre_phys, &tre_buf,
+				      offset, size);
+	if (ret) {
+		mutex_unlock(&epf_mhi->lock);
+		return ret;
+	}
+
+	memcpy_fromio(to, tre_buf, size);
+
+	__pci_epf_mhi_unmap_free(mhi_cntrl, from, tre_phys, tre_buf, offset,
+				 size);
+
+	mutex_unlock(&epf_mhi->lock);
+
+	return 0;
+}
+
+static int pci_epf_mhi_write_to_host(struct mhi_ep_cntrl *mhi_cntrl,
+				     void *from, u64 to, size_t size)
+{
+	struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+	size_t offset = to % SZ_4K;
+	void __iomem *tre_buf;
+	phys_addr_t tre_phys;
+	int ret;
+
+	mutex_lock(&epf_mhi->lock);
+
+	ret = __pci_epf_mhi_alloc_map(mhi_cntrl, to, &tre_phys, &tre_buf,
+				      offset, size);
+	if (ret) {
+		mutex_unlock(&epf_mhi->lock);
+		return ret;
+	}
+
+	memcpy_toio(tre_buf, from, size);
+
+	__pci_epf_mhi_unmap_free(mhi_cntrl, to, tre_phys, tre_buf, offset,
+				 size);
+
+	mutex_unlock(&epf_mhi->lock);
+
+	return 0;
+}
+
+static int pci_epf_mhi_core_init(struct pci_epf *epf)
+{
+	struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
+	const struct pci_epf_mhi_ep_info *info = epf_mhi->info;
+	struct pci_epf_bar *epf_bar = &epf->bar[info->bar_num];
+	struct pci_epc *epc = epf->epc;
+	struct device *dev = &epf->dev;
+	int ret;
+
+	epf_bar->phys_addr = epf_mhi->mmio_phys;
+	epf_bar->size = epf_mhi->mmio_size;
+	epf_bar->barno = info->bar_num;
+	epf_bar->flags = info->epf_flags;
+	ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, epf_bar);
+	if (ret) {
+		dev_err(dev, "Failed to set BAR: %d\n", ret);
+		return ret;
+	}
+
+	ret = pci_epc_set_msi(epc, epf->func_no, epf->vfunc_no,
+			      order_base_2(info->msi_count));
+	if (ret) {
+		dev_err(dev, "Failed to set MSI configuration: %d\n", ret);
+		return ret;
+	}
+
+	ret = pci_epc_write_header(epc, epf->func_no, epf->vfunc_no,
+				   epf->header);
+	if (ret) {
+		dev_err(dev, "Failed to set Configuration header: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int pci_epf_mhi_link_up(struct pci_epf *epf)
+{
+	struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
+	const struct pci_epf_mhi_ep_info *info = epf_mhi->info;
+	struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl;
+	struct pci_epc *epc = epf->epc;
+	struct device *dev = &epf->dev;
+	int ret;
+
+	mhi_cntrl->mmio = epf_mhi->mmio;
+	mhi_cntrl->irq = epf_mhi->irq;
+	mhi_cntrl->mru = info->mru;
+
+	/* Assign the struct dev of PCI EP as MHI controller device */
+	mhi_cntrl->cntrl_dev = epc->dev.parent;
+	mhi_cntrl->raise_irq = pci_epf_mhi_raise_irq;
+	mhi_cntrl->alloc_map = pci_epf_mhi_alloc_map;
+	mhi_cntrl->unmap_free = pci_epf_mhi_unmap_free;
+	mhi_cntrl->read_from_host = pci_epf_mhi_read_from_host;
+	mhi_cntrl->write_to_host = pci_epf_mhi_write_to_host;
+
+	/* Register the MHI EP controller */
+	ret = mhi_ep_register_controller(mhi_cntrl, info->config);
+	if (ret) {
+		dev_err(dev, "Failed to register MHI EP controller: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int pci_epf_mhi_link_down(struct pci_epf *epf)
+{
+	struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
+	struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl;
+
+	if (mhi_cntrl->mhi_dev) {
+		mhi_ep_power_down(mhi_cntrl);
+		mhi_ep_unregister_controller(mhi_cntrl);
+	}
+
+	return 0;
+}
+
+static int pci_epf_mhi_bme(struct pci_epf *epf)
+{
+	struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
+	struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl;
+	struct device *dev = &epf->dev;
+	int ret;
+
+	/*
+	 * Power up the MHI EP stack if link is up and stack is in power down
+	 * state.
+	 */
+	if (!mhi_cntrl->enabled && mhi_cntrl->mhi_dev) {
+		ret = mhi_ep_power_up(mhi_cntrl);
+		if (ret) {
+			dev_err(dev, "Failed to power up MHI EP: %d\n", ret);
+			mhi_ep_unregister_controller(mhi_cntrl);
+		}
+	}
+
+	return 0;
+}
+
+static int pci_epf_mhi_bind(struct pci_epf *epf)
+{
+	struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
+	struct pci_epc *epc = epf->epc;
+	struct platform_device *pdev = to_platform_device(epc->dev.parent);
+	struct resource *res;
+	int ret;
+
+	/* Get MMIO base address from Endpoint controller */
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mmio");
+	epf_mhi->mmio_phys = res->start;
+	epf_mhi->mmio_size = resource_size(res);
+
+	epf_mhi->mmio = ioremap(epf_mhi->mmio_phys, epf_mhi->mmio_size);
+	if (!epf_mhi->mmio)
+		return -ENOMEM;
+
+	ret = platform_get_irq_byname(pdev, "doorbell");
+	if (ret < 0) {
+		iounmap(epf_mhi->mmio);
+		return ret;
+	}
+
+	epf_mhi->irq = ret;
+
+	return 0;
+}
+
+static void pci_epf_mhi_unbind(struct pci_epf *epf)
+{
+	struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
+	const struct pci_epf_mhi_ep_info *info = epf_mhi->info;
+	struct pci_epf_bar *epf_bar = &epf->bar[info->bar_num];
+	struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl;
+	struct pci_epc *epc = epf->epc;
+
+	/*
+	 * Forcefully power down the MHI EP stack. Only way to bring the MHI EP
+	 * stack back to working state after successive bind is by getting BME
+	 * from host.
+	 */
+	if (mhi_cntrl->mhi_dev) {
+		mhi_ep_power_down(mhi_cntrl);
+		mhi_ep_unregister_controller(mhi_cntrl);
+	}
+
+	iounmap(epf_mhi->mmio);
+	pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, epf_bar);
+}
+
+static struct pci_epc_event_ops pci_epf_mhi_event_ops = {
+	.core_init = pci_epf_mhi_core_init,
+	.link_up = pci_epf_mhi_link_up,
+	.link_down = pci_epf_mhi_link_down,
+	.bme = pci_epf_mhi_bme,
+};
+
+static int pci_epf_mhi_probe(struct pci_epf *epf,
+			     const struct pci_epf_device_id *id)
+{
+	struct pci_epf_mhi_ep_info *info =
+			(struct pci_epf_mhi_ep_info *)id->driver_data;
+	struct pci_epf_mhi *epf_mhi;
+	struct device *dev = &epf->dev;
+
+	epf_mhi = devm_kzalloc(dev, sizeof(*epf_mhi), GFP_KERNEL);
+	if (!epf_mhi)
+		return -ENOMEM;
+
+	epf->header = info->epf_header;
+	epf_mhi->info = info;
+	epf_mhi->epf = epf;
+
+	epf->event_ops = &pci_epf_mhi_event_ops;
+
+	mutex_init(&epf_mhi->lock);
+
+	epf_set_drvdata(epf, epf_mhi);
+
+	return 0;
+}
+
+static const struct pci_epf_device_id pci_epf_mhi_ids[] = {
+	{
+		.name = "sdx55", .driver_data = (kernel_ulong_t)&sdx55_info,
+	},
+	{},
+};
+
+static struct pci_epf_ops pci_epf_mhi_ops = {
+	.unbind = pci_epf_mhi_unbind,
+	.bind = pci_epf_mhi_bind,
+};
+
+static struct pci_epf_driver pci_epf_mhi_driver = {
+	.driver.name = "pci_epf_mhi",
+	.probe = pci_epf_mhi_probe,
+	.id_table = pci_epf_mhi_ids,
+	.ops = &pci_epf_mhi_ops,
+	.owner = THIS_MODULE,
+};
+
+static int __init pci_epf_mhi_init(void)
+{
+	return pci_epf_register_driver(&pci_epf_mhi_driver);
+}
+module_init(pci_epf_mhi_init);
+
+static void __exit pci_epf_mhi_exit(void)
+{
+	pci_epf_unregister_driver(&pci_epf_mhi_driver);
+}
+module_exit(pci_epf_mhi_exit);
+
+MODULE_DESCRIPTION("PCI EPF driver for MHI Endpoint devices")
+MODULE_AUTHOR("Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>");
+MODULE_LICENSE("GPL");
drivers/pci/endpoint/functions/pci-epf-ntb.c  (+3 -1)

···
 /**
  * epf_ntb_probe() - Probe NTB function driver
  * @epf: NTB endpoint function device
+ * @id: NTB endpoint function device ID
  *
  * Probe NTB function driver when endpoint function bus detects a NTB
  * endpoint function.
  */
-static int epf_ntb_probe(struct pci_epf *epf)
+static int epf_ntb_probe(struct pci_epf *epf,
+			 const struct pci_epf_device_id *id)
 {
 	struct epf_ntb *ntb;
 	struct device *dev;
drivers/pci/endpoint/functions/pci-epf-test.c  (+122 -149)

···
 	struct delayed_work cmd_handler;
 	struct dma_chan *dma_chan_tx;
 	struct dma_chan *dma_chan_rx;
+	struct dma_chan *transfer_chan;
+	dma_cookie_t transfer_cookie;
+	enum dma_status transfer_status;
 	struct completion transfer_complete;
 	bool dma_supported;
 	bool dma_private;
···
 static void pci_epf_test_dma_callback(void *param)
 {
 	struct pci_epf_test *epf_test = param;
+	struct dma_tx_state state;
 
-	complete(&epf_test->transfer_complete);
+	epf_test->transfer_status =
+		dmaengine_tx_status(epf_test->transfer_chan,
+				    epf_test->transfer_cookie, &state);
+	if (epf_test->transfer_status == DMA_COMPLETE ||
+	    epf_test->transfer_status == DMA_ERROR)
+		complete(&epf_test->transfer_complete);
 }
 
 /**
···
 				      size_t len, dma_addr_t dma_remote,
 				      enum dma_transfer_direction dir)
 {
-	struct dma_chan *chan = (dir == DMA_DEV_TO_MEM) ?
+	struct dma_chan *chan = (dir == DMA_MEM_TO_DEV) ?
 				 epf_test->dma_chan_tx : epf_test->dma_chan_rx;
 	dma_addr_t dma_local = (dir == DMA_MEM_TO_DEV) ? dma_src : dma_dst;
 	enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
···
 	struct dma_async_tx_descriptor *tx;
 	struct dma_slave_config sconf = {};
 	struct device *dev = &epf->dev;
-	dma_cookie_t cookie;
 	int ret;
 
 	if (IS_ERR_OR_NULL(chan)) {
···
 		return -EIO;
 	}
 
+	reinit_completion(&epf_test->transfer_complete);
+	epf_test->transfer_chan = chan;
 	tx->callback = pci_epf_test_dma_callback;
 	tx->callback_param = epf_test;
-	cookie = tx->tx_submit(tx);
-	reinit_completion(&epf_test->transfer_complete);
+	epf_test->transfer_cookie = dmaengine_submit(tx);
 
-	ret = dma_submit_error(cookie);
+	ret = dma_submit_error(epf_test->transfer_cookie);
 	if (ret) {
-		dev_err(dev, "Failed to do DMA tx_submit %d\n", cookie);
-		return -EIO;
+		dev_err(dev, "Failed to do DMA tx_submit %d\n", ret);
+		goto terminate;
 	}
 
 	dma_async_issue_pending(chan);
 	ret = wait_for_completion_interruptible(&epf_test->transfer_complete);
 	if (ret < 0) {
-		dmaengine_terminate_sync(chan);
-		dev_err(dev, "DMA wait_for_completion_timeout\n");
-		return -ETIMEDOUT;
+		dev_err(dev, "DMA wait_for_completion interrupted\n");
+		goto terminate;
 	}
 
-	return 0;
+	if (epf_test->transfer_status == DMA_ERROR) {
+		dev_err(dev, "DMA transfer failed\n");
+		ret = -EIO;
+	}
+
+terminate:
+	dmaengine_terminate_sync(chan);
+
+	return ret;
 }
 
 struct epf_dma_filter {
···
 	return;
 }
 
-static void pci_epf_test_print_rate(const char *ops, u64 size,
+static void pci_epf_test_print_rate(struct pci_epf_test *epf_test,
+				    const char *op, u64 size,
 				    struct timespec64 *start,
 				    struct timespec64 *end, bool dma)
 {
-	struct timespec64 ts;
-	u64 rate, ns;
-
-	ts = timespec64_sub(*end, *start);
-
-	/* convert both size (stored in 'rate') and time in terms of 'ns' */
-	ns = timespec64_to_ns(&ts);
-	rate = size * NSEC_PER_SEC;
-
-	/* Divide both size (stored in 'rate') and ns by a common factor */
-	while (ns > UINT_MAX) {
-		rate >>= 1;
-		ns >>= 1;
-	}
-
-	if (!ns)
-		return;
+	struct timespec64 ts = timespec64_sub(*end, *start);
+	u64 rate = 0, ns;
 
 	/* calculate the rate */
-	do_div(rate, (uint32_t)ns);
+	ns = timespec64_to_ns(&ts);
+	if (ns)
+		rate = div64_u64(size * NSEC_PER_SEC, ns * 1000);
 
-	pr_info("\n%s => Size: %llu bytes\t DMA: %s\t Time: %llu.%09u seconds\t"
-		"Rate: %llu KB/s\n", ops, size, dma ? "YES" : "NO",
-		(u64)ts.tv_sec, (u32)ts.tv_nsec, rate / 1024);
+	dev_info(&epf_test->epf->dev,
+		 "%s => Size: %llu B, DMA: %s, Time: %llu.%09u s, Rate: %llu KB/s\n",
+		 op, size, dma ? "YES" : "NO",
+		 (u64)ts.tv_sec, (u32)ts.tv_nsec, rate);
 }
 
-static int pci_epf_test_copy(struct pci_epf_test *epf_test)
+static void pci_epf_test_copy(struct pci_epf_test *epf_test,
+			      struct pci_epf_test_reg *reg)
 {
 	int ret;
-	bool use_dma;
 	void __iomem *src_addr;
 	void __iomem *dst_addr;
 	phys_addr_t src_phys_addr;
···
 	struct pci_epf *epf = epf_test->epf;
 	struct device *dev = &epf->dev;
 	struct pci_epc *epc = epf->epc;
-	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
-	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
 
 	src_addr = pci_epc_mem_alloc_addr(epc, &src_phys_addr, reg->size);
 	if (!src_addr) {
···
 	}
 
 	ktime_get_ts64(&start);
-	use_dma = !!(reg->flags & FLAG_USE_DMA);
-	if (use_dma) {
-		if (!epf_test->dma_supported) {
-			dev_err(dev, "Cannot transfer data using DMA\n");
-			ret = -EINVAL;
-			goto err_map_addr;
-		}
-
+	if (reg->flags & FLAG_USE_DMA) {
 		if (epf_test->dma_private) {
 			dev_err(dev, "Cannot transfer data using DMA\n");
 			ret = -EINVAL;
···
 		kfree(buf);
 	}
 	ktime_get_ts64(&end);
-	pci_epf_test_print_rate("COPY", reg->size, &start, &end, use_dma);
+	pci_epf_test_print_rate(epf_test, "COPY", reg->size, &start, &end,
+				reg->flags & FLAG_USE_DMA);
 
 err_map_addr:
 	pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, dst_phys_addr);
···
 	pci_epc_mem_free_addr(epc, src_phys_addr, src_addr, reg->size);
 
 err:
-	return ret;
+	if (!ret)
+		reg->status |= STATUS_COPY_SUCCESS;
+	else
+		reg->status |= STATUS_COPY_FAIL;
 }
 
-static int pci_epf_test_read(struct pci_epf_test *epf_test)
+static void pci_epf_test_read(struct pci_epf_test *epf_test,
+			      struct pci_epf_test_reg *reg)
 {
 	int ret;
 	void __iomem *src_addr;
 	void *buf;
 	u32 crc32;
-	bool use_dma;
 	phys_addr_t phys_addr;
 	phys_addr_t dst_phys_addr;
 	struct timespec64 start, end;
···
 	struct device *dev = &epf->dev;
 	struct pci_epc *epc = epf->epc;
 	struct device *dma_dev = epf->epc->dev.parent;
-	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
-	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
 
 	src_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
 	if (!src_addr) {
···
 		goto err_map_addr;
 	}
 
-	use_dma = !!(reg->flags & FLAG_USE_DMA);
-	if (use_dma) {
-		if (!epf_test->dma_supported) {
-			dev_err(dev, "Cannot transfer data using DMA\n");
-			ret = -EINVAL;
-			goto err_dma_map;
-		}
-
+	if (reg->flags & FLAG_USE_DMA) {
 		dst_phys_addr = dma_map_single(dma_dev, buf, reg->size,
 					       DMA_FROM_DEVICE);
 		if (dma_mapping_error(dma_dev, dst_phys_addr)) {
···
 		ktime_get_ts64(&end);
 	}
 
-	pci_epf_test_print_rate("READ", reg->size, &start, &end, use_dma);
+	pci_epf_test_print_rate(epf_test, "READ", reg->size, &start, &end,
+				reg->flags & FLAG_USE_DMA);
 
 	crc32 = crc32_le(~0, buf, reg->size);
 	if (crc32 != reg->checksum)
···
 	pci_epc_mem_free_addr(epc, phys_addr, src_addr, reg->size);
 
 err:
-	return ret;
+	if (!ret)
+		reg->status |= STATUS_READ_SUCCESS;
+	else
+		reg->status |= STATUS_READ_FAIL;
 }
 
-static int pci_epf_test_write(struct pci_epf_test *epf_test)
+static void pci_epf_test_write(struct pci_epf_test *epf_test,
+			       struct pci_epf_test_reg *reg)
 {
 	int ret;
 	void __iomem *dst_addr;
 	void *buf;
-	bool use_dma;
 	phys_addr_t phys_addr;
 	phys_addr_t src_phys_addr;
 	struct timespec64 start, end;
···
 	struct device *dev = &epf->dev;
 	struct pci_epc *epc = epf->epc;
 	struct device *dma_dev = epf->epc->dev.parent;
-	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
-	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
 
 	dst_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
 	if (!dst_addr) {
···
 	get_random_bytes(buf, reg->size);
 	reg->checksum = crc32_le(~0, buf, reg->size);
 
-	use_dma = !!(reg->flags & FLAG_USE_DMA);
-	if (use_dma) {
-		if (!epf_test->dma_supported) {
-			dev_err(dev, "Cannot transfer data using DMA\n");
-			ret = -EINVAL;
-			goto err_dma_map;
-		}
-
+	if (reg->flags & FLAG_USE_DMA) {
 		src_phys_addr = dma_map_single(dma_dev, buf, reg->size,
 					       DMA_TO_DEVICE);
 		if (dma_mapping_error(dma_dev, src_phys_addr)) {
···
 		ktime_get_ts64(&end);
 	}
 
-	pci_epf_test_print_rate("WRITE", reg->size, &start, &end, use_dma);
+	pci_epf_test_print_rate(epf_test, "WRITE", reg->size, &start, &end,
+				reg->flags & FLAG_USE_DMA);
 
 	/*
 	 * wait 1ms inorder for the write to complete. Without this delay L3
···
 	pci_epc_mem_free_addr(epc, phys_addr, dst_addr, reg->size);
 
 err:
-	return ret;
+	if (!ret)
+		reg->status |= STATUS_WRITE_SUCCESS;
+	else
+		reg->status |= STATUS_WRITE_FAIL;
 }
 
-static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test, u8 irq_type,
-				   u16 irq)
+static void pci_epf_test_raise_irq(struct pci_epf_test *epf_test,
+				   struct pci_epf_test_reg *reg)
 {
 	struct pci_epf *epf = epf_test->epf;
 	struct device *dev = &epf->dev;
 	struct pci_epc *epc = epf->epc;
-	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
-	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
+	u32 status = reg->status | STATUS_IRQ_RAISED;
+	int count;
 
-	reg->status |= STATUS_IRQ_RAISED;
+	/*
+	 * Set the status before raising the IRQ to ensure that the host sees
+	 * the updated value when it gets the IRQ.
+	 */
+	WRITE_ONCE(reg->status, status);
 
-	switch (irq_type) {
+	switch (reg->irq_type) {
 	case IRQ_TYPE_LEGACY:
 		pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
 				  PCI_EPC_IRQ_LEGACY, 0);
 		break;
 	case IRQ_TYPE_MSI:
+		count = pci_epc_get_msi(epc, epf->func_no, epf->vfunc_no);
+		if (reg->irq_number > count || count <= 0) {
+			dev_err(dev, "Invalid MSI IRQ number %d / %d\n",
+				reg->irq_number, count);
+			return;
+		}
 		pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
-				  PCI_EPC_IRQ_MSI, irq);
+				  PCI_EPC_IRQ_MSI, reg->irq_number);
 		break;
 	case IRQ_TYPE_MSIX:
+		count = pci_epc_get_msix(epc, epf->func_no, epf->vfunc_no);
+		if (reg->irq_number > count || count <= 0) {
+			dev_err(dev, "Invalid MSIX IRQ number %d / %d\n",
+				reg->irq_number, count);
+			return;
+		}
 		pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
-				  PCI_EPC_IRQ_MSIX, irq);
+				  PCI_EPC_IRQ_MSIX, reg->irq_number);
 		break;
 	default:
 		dev_err(dev, "Failed to raise IRQ, unknown type\n");
···
 
 static void pci_epf_test_cmd_handler(struct work_struct *work)
 {
-	int ret;
-	int count;
 	u32 command;
 	struct pci_epf_test *epf_test = container_of(work, struct pci_epf_test,
 						     cmd_handler.work);
 	struct pci_epf *epf = epf_test->epf;
 	struct device *dev = &epf->dev;
-	struct pci_epc *epc = epf->epc;
 	enum pci_barno test_reg_bar = epf_test->test_reg_bar;
 	struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar];
 
-	command = reg->command;
+	command = READ_ONCE(reg->command);
 	if (!command)
 		goto reset_handler;
 
-	reg->command = 0;
-	reg->status = 0;
+	WRITE_ONCE(reg->command, 0);
+	WRITE_ONCE(reg->status, 0);
+
+	if ((READ_ONCE(reg->flags) & FLAG_USE_DMA) &&
+	    !epf_test->dma_supported) {
+		dev_err(dev, "Cannot transfer data using DMA\n");
+		goto reset_handler;
+	}
 
 	if (reg->irq_type > IRQ_TYPE_MSIX) {
 		dev_err(dev, "Failed to detect IRQ type\n");
 		goto reset_handler;
 	}
 
-	if (command & COMMAND_RAISE_LEGACY_IRQ) {
-		reg->status = STATUS_IRQ_RAISED;
-		pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
-				  PCI_EPC_IRQ_LEGACY, 0);
-		goto reset_handler;
-	}
-
-	if (command & COMMAND_WRITE) {
-		ret = pci_epf_test_write(epf_test);
-		if (ret)
-			reg->status |= STATUS_WRITE_FAIL;
-		else
-			reg->status |= STATUS_WRITE_SUCCESS;
-		pci_epf_test_raise_irq(epf_test, reg->irq_type,
-				       reg->irq_number);
-		goto reset_handler;
-	}
-
-	if (command & COMMAND_READ) {
-		ret = pci_epf_test_read(epf_test);
-		if (!ret)
-			reg->status |= STATUS_READ_SUCCESS;
-		else
-			reg->status |= STATUS_READ_FAIL;
-		pci_epf_test_raise_irq(epf_test, reg->irq_type,
-				       reg->irq_number);
-		goto reset_handler;
-	}
-
-	if (command & COMMAND_COPY) {
-		ret = pci_epf_test_copy(epf_test);
-		if (!ret)
-			reg->status |= STATUS_COPY_SUCCESS;
-		else
-			reg->status |= STATUS_COPY_FAIL;
-		pci_epf_test_raise_irq(epf_test, reg->irq_type,
-				       reg->irq_number);
-		goto reset_handler;
-	}
-
-	if (command & COMMAND_RAISE_MSI_IRQ) {
-		count = pci_epc_get_msi(epc, epf->func_no, epf->vfunc_no);
-		if (reg->irq_number > count || count <= 0)
-			goto reset_handler;
-		reg->status = STATUS_IRQ_RAISED;
-		pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
-				  PCI_EPC_IRQ_MSI, reg->irq_number);
-		goto reset_handler;
-	}
-
-	if (command & COMMAND_RAISE_MSIX_IRQ) {
-		count = pci_epc_get_msix(epc, epf->func_no, epf->vfunc_no);
-		if (reg->irq_number > count || count <= 0)
-			goto reset_handler;
-		reg->status = STATUS_IRQ_RAISED;
-		pci_epc_raise_irq(epc, epf->func_no, epf->vfunc_no,
-				  PCI_EPC_IRQ_MSIX, reg->irq_number);
-		goto reset_handler;
+	switch (command) {
+	case COMMAND_RAISE_LEGACY_IRQ:
+	case COMMAND_RAISE_MSI_IRQ:
+	case COMMAND_RAISE_MSIX_IRQ:
+		pci_epf_test_raise_irq(epf_test, reg);
+		break;
+	case COMMAND_WRITE:
+		pci_epf_test_write(epf_test, reg);
+		pci_epf_test_raise_irq(epf_test, reg);
+		break;
+	case COMMAND_READ:
+		pci_epf_test_read(epf_test, reg);
+		pci_epf_test_raise_irq(epf_test, reg);
+		break;
+	case COMMAND_COPY:
+		pci_epf_test_copy(epf_test, reg);
+		pci_epf_test_raise_irq(epf_test, reg);
+		break;
+	default:
+		dev_err(dev, "Invalid command 0x%x\n", command);
+		break;
 	}
 
 reset_handler:
···
 	{},
 };
 
-static int pci_epf_test_probe(struct pci_epf *epf)
+static int pci_epf_test_probe(struct pci_epf *epf,
+			      const struct pci_epf_device_id *id)
 {
 	struct pci_epf_test *epf_test;
 	struct device *dev = &epf->dev;
drivers/pci/endpoint/functions/pci-epf-vntb.c  (+8 -6)

···
 *  |                                                   |
 *  |                                                   |
 *  |                                                   |
-*  +-----------------------+--------------------------+ Base+span_offset
+*  +-----------------------+--------------------------+ Base+spad_offset
 *  |                       |                          |
-*  |  Peer Span Space      |       Span Space         |
+*  |  Peer Spad Space      |       Spad Space         |
 *  |                       |                          |
 *  |                       |                          |
-*  +-----------------------+--------------------------+ Base+span_offset
-*  |                       |                          |      +span_count * 4
+*  +-----------------------+--------------------------+ Base+spad_offset
+*  |                       |                          |      +spad_count * 4
 *  |                       |                          |
-*  |     Span Space        |    Peer Span Space       |
+*  |     Spad Space        |    Peer Spad Space       |
 *  |                       |                          |
 *  +-----------------------+--------------------------+
 *        Virtual PCI             PCIe Endpoint
···
 /**
  * epf_ntb_probe() - Probe NTB function driver
  * @epf: NTB endpoint function device
+ * @id: NTB endpoint function device ID
  *
  * Probe NTB function driver when endpoint function bus detects a NTB
  * endpoint function.
  *
  * Returns: Zero for success, or an error code in case of failure
  */
-static int epf_ntb_probe(struct pci_epf *epf)
+static int epf_ntb_probe(struct pci_epf *epf,
+			 const struct pci_epf_device_id *id)
 {
 	struct epf_ntb *ntb;
 	struct device *dev;
drivers/pci/endpoint/pci-ep-cfs.c  (+59 -22)

···
 	struct config_group group;
 	struct config_group primary_epc_group;
 	struct config_group secondary_epc_group;
+	struct config_group *type_group;
 	struct delayed_work cfs_work;
 	struct pci_epf *epf;
 	int index;
···
 
 	if (kstrtobool(page, &start) < 0)
 		return -EINVAL;
+
+	if (start == epc_group->start)
+		return -EALREADY;
 
 	if (!start) {
 		pci_epc_stop(epc);
···
 	.release	= pci_epf_release,
 };
 
-static struct config_group *pci_epf_type_make(struct config_group *group,
-					      const char *name)
-{
-	struct pci_epf_group *epf_group = to_pci_epf_group(&group->cg_item);
-	struct config_group *epf_type_group;
-
-	epf_type_group = pci_epf_type_add_cfs(epf_group->epf, group);
-	return epf_type_group;
-}
-
-static void pci_epf_type_drop(struct config_group *group,
-			      struct config_item *item)
-{
-	config_item_put(item);
-}
-
-static struct configfs_group_operations pci_epf_type_group_ops = {
-	.make_group	= &pci_epf_type_make,
-	.drop_item	= &pci_epf_type_drop,
-};
-
 static const struct config_item_type pci_epf_type = {
-	.ct_group_ops	= &pci_epf_type_group_ops,
 	.ct_item_ops	= &pci_epf_ops,
 	.ct_attrs	= pci_epf_attrs,
 	.ct_owner	= THIS_MODULE,
 };
+
+/**
+ * pci_epf_type_add_cfs() - Help function drivers to expose function specific
+ *                          attributes in configfs
+ * @epf: the EPF device that has to be configured using configfs
+ * @group: the parent configfs group (corresponding to entries in
+ *         pci_epf_device_id)
+ *
+ * Invoke to expose function specific attributes in configfs.
+ *
+ * Return: A pointer to a config_group structure or NULL if the function driver
+ * does not have anything to expose (attributes configured by user) or if
+ * the function driver does not implement the add_cfs() method.
+ *
+ * Returns an error pointer if this function is called for an unbound EPF device
+ * or if the EPF driver add_cfs() method fails.
+ */
+static struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
+						 struct config_group *group)
+{
+	struct config_group *epf_type_group;
+
+	if (!epf->driver) {
+		dev_err(&epf->dev, "epf device not bound to driver\n");
+		return ERR_PTR(-ENODEV);
+	}
+
+	if (!epf->driver->ops->add_cfs)
+		return NULL;
+
+	mutex_lock(&epf->lock);
+	epf_type_group = epf->driver->ops->add_cfs(epf, group);
+	mutex_unlock(&epf->lock);
+
+	return epf_type_group;
+}
+
+static void pci_ep_cfs_add_type_group(struct pci_epf_group *epf_group)
+{
+	struct config_group *group;
+
+	group = pci_epf_type_add_cfs(epf_group->epf, &epf_group->group);
+	if (!group)
+		return;
+
+	if (IS_ERR(group)) {
+		dev_err(&epf_group->epf->dev,
+			"failed to create epf type specific attributes\n");
+		return;
+	}
+
+	configfs_register_group(&epf_group->group, group);
+}
 
 static void pci_epf_cfs_work(struct work_struct *work)
 {
···
 		pr_err("failed to create 'secondary' EPC interface\n");
 		return;
 	}
+
+	pci_ep_cfs_add_type_group(epf_group);
 }
 
 static struct config_group *pci_epf_make(struct config_group *group,
drivers/pci/endpoint/pci-epc-core.c  (+54 -2)

···
  * @func_no: the physical endpoint function number in the EPC device
  * @vfunc_no: the virtual endpoint function number in the physical function
  * @type: specify the type of interrupt; legacy, MSI or MSI-X
- * @interrupt_num: the MSI or MSI-X interrupt number
+ * @interrupt_num: the MSI or MSI-X interrupt number with range (1-N)
  *
  * Invoke to raise an legacy, MSI or MSI-X interrupt
  */
···
  * @func_no: the physical endpoint function number in the EPC device
  * @vfunc_no: the virtual endpoint function number in the physical function
  * @phys_addr: the physical address of the outbound region
- * @interrupt_num: the MSI interrupt number
+ * @interrupt_num: the MSI interrupt number with range (1-N)
  * @entry_size: Size of Outbound address region for each interrupt
  * @msi_data: the data that should be written in order to raise MSI interrupt
  *            with interrupt number as 'interrupt num'
···
 EXPORT_SYMBOL_GPL(pci_epc_linkup);
 
 /**
+ * pci_epc_linkdown() - Notify the EPF device that EPC device has dropped the
+ *			connection with the Root Complex.
+ * @epc: the EPC device which has dropped the link with the host
+ *
+ * Invoke to Notify the EPF device that the EPC device has dropped the
+ * connection with the Root Complex.
+ */
+void pci_epc_linkdown(struct pci_epc *epc)
+{
+	struct pci_epf *epf;
+
+	if (!epc || IS_ERR(epc))
+		return;
+
+	mutex_lock(&epc->list_lock);
+	list_for_each_entry(epf, &epc->pci_epf, list) {
+		mutex_lock(&epf->lock);
+		if (epf->event_ops && epf->event_ops->link_down)
+			epf->event_ops->link_down(epf);
+		mutex_unlock(&epf->lock);
+	}
+	mutex_unlock(&epc->list_lock);
+}
+EXPORT_SYMBOL_GPL(pci_epc_linkdown);
+
+/**
  * pci_epc_init_notify() - Notify the EPF device that EPC device's core
  *			   initialization is completed.
  * @epc: the EPC device whose core initialization is completed
···
 	mutex_unlock(&epc->list_lock);
 }
 EXPORT_SYMBOL_GPL(pci_epc_init_notify);
+
+/**
+ * pci_epc_bme_notify() - Notify the EPF device that the EPC device has received
+ *			  the BME event from the Root complex
+ * @epc: the EPC device that received the BME event
+ *
+ * Invoke to Notify the EPF device that the EPC device has received the Bus
+ * Master Enable (BME) event from the Root complex
+ */
+void pci_epc_bme_notify(struct pci_epc *epc)
+{
+	struct pci_epf *epf;
+
+	if (!epc || IS_ERR(epc))
+		return;
+
+	mutex_lock(&epc->list_lock);
+	list_for_each_entry(epf, &epc->pci_epf, list) {
+		mutex_lock(&epf->lock);
+		if (epf->event_ops && epf->event_ops->bme)
+			epf->event_ops->bme(epf);
+		mutex_unlock(&epf->lock);
+	}
+	mutex_unlock(&epc->list_lock);
+}
+EXPORT_SYMBOL_GPL(pci_epc_bme_notify);
 
 /**
  * pci_epc_destroy() - destroy the EPC device
drivers/pci/endpoint/pci-epf-core.c  (+5 -37)

···
 static const struct device_type pci_epf_type;
 
 /**
- * pci_epf_type_add_cfs() - Help function drivers to expose function specific
- *                          attributes in configfs
- * @epf: the EPF device that has to be configured using configfs
- * @group: the parent configfs group (corresponding to entries in
- *         pci_epf_device_id)
- *
- * Invoke to expose function specific attributes in configfs. If the function
- * driver does not have anything to expose (attributes configured by user),
- * return NULL.
- */
-struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf,
-					  struct config_group *group)
-{
-	struct config_group *epf_type_group;
-
-	if (!epf->driver) {
-		dev_err(&epf->dev, "epf device not bound to driver\n");
-		return NULL;
-	}
-
-	if (!epf->driver->ops->add_cfs)
-		return NULL;
-
-	mutex_lock(&epf->lock);
-	epf_type_group = epf->driver->ops->add_cfs(epf, group);
-	mutex_unlock(&epf->lock);
-
-	return epf_type_group;
-}
-EXPORT_SYMBOL_GPL(pci_epf_type_add_cfs);
-
-/**
  * pci_epf_unbind() - Notify the function driver that the binding between the
  *		      EPF device and EPC device has been lost
  * @epf: the EPF device which has lost the binding with the EPC device
···
 	.release	= pci_epf_dev_release,
 };
 
-static int
+static const struct pci_epf_device_id *
 pci_epf_match_id(const struct pci_epf_device_id *id, const struct pci_epf *epf)
 {
 	while (id->name[0]) {
 		if (strcmp(epf->name, id->name) == 0)
-			return true;
+			return id;
 		id++;
 	}
 
-	return false;
+	return NULL;
 }
 
 static int pci_epf_device_match(struct device *dev, struct device_driver *drv)
···
 	struct pci_epf_driver *driver = to_pci_epf_driver(drv);
 
 	if (driver->id_table)
-		return pci_epf_match_id(driver->id_table, epf);
+		return !!pci_epf_match_id(driver->id_table, epf);
 
 	return !strcmp(epf->name, drv->name);
 }
···
 
 	epf->driver = driver;
 
-	return driver->probe(epf);
+	return driver->probe(epf, pci_epf_match_id(driver->id_table, epf));
 }
 
 static void pci_epf_device_remove(struct device *dev)
drivers/pci/hotplug/acpiphp_glue.c  (+1 -4)

···
 			acpiphp_native_scan_bridge(dev);
 		}
 	} else {
-		LIST_HEAD(add_list);
 		int max, pass;
 
 		acpiphp_rescan_slot(slot);
···
 				if (pass && dev->subordinate) {
 					check_hotplug_bridge(slot, dev);
 					pcibios_resource_survey_bus(dev->subordinate);
-					__pci_bus_size_bridges(dev->subordinate,
-							       &add_list);
 				}
 			}
 		}
-		__pci_bus_assign_resources(bus, &add_list, NULL);
+		pci_assign_unassigned_bridge_resources(bus->self);
 	}
 
 	acpiphp_sanitize_bus(bus);
drivers/pci/hotplug/pciehp_ctrl.c  (+15 -6)

···
 	case ON_STATE:
 		if (ctrl->state == ON_STATE) {
 			ctrl->state = BLINKINGOFF_STATE;
-			ctrl_info(ctrl, "Slot(%s): Powering off due to button press\n",
+			ctrl_info(ctrl, "Slot(%s): Button press: will power off in 5 sec\n",
 				  slot_name(ctrl));
 		} else {
 			ctrl->state = BLINKINGON_STATE;
-			ctrl_info(ctrl, "Slot(%s) Powering on due to button press\n",
+			ctrl_info(ctrl, "Slot(%s): Button press: will power on in 5 sec\n",
 				  slot_name(ctrl));
 		}
 		/* blink power indicator and turn off attention */
···
 		 * press the attention again before the 5 sec. limit
 		 * expires to cancel hot-add or hot-remove
 		 */
-		ctrl_info(ctrl, "Slot(%s): Button cancel\n", slot_name(ctrl));
 		cancel_delayed_work(&ctrl->button_work);
 		if (ctrl->state == BLINKINGOFF_STATE) {
 			ctrl->state = ON_STATE;
 			pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON,
 					      PCI_EXP_SLTCTL_ATTN_IND_OFF);
+			ctrl_info(ctrl, "Slot(%s): Button press: canceling request to power off\n",
+				  slot_name(ctrl));
 		} else {
 			ctrl->state = OFF_STATE;
 			pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
 					      PCI_EXP_SLTCTL_ATTN_IND_OFF);
+			ctrl_info(ctrl, "Slot(%s): Button press: canceling request to power on\n",
+				  slot_name(ctrl));
 		}
-		ctrl_info(ctrl, "Slot(%s): Action canceled due to button press\n",
-			  slot_name(ctrl));
 		break;
 	default:
-		ctrl_err(ctrl, "Slot(%s): Ignoring invalid state %#x\n",
+		ctrl_err(ctrl, "Slot(%s): Button press: ignoring invalid state %#x\n",
 			 slot_name(ctrl), ctrl->state);
 		break;
 	}
···
 	present = pciehp_card_present(ctrl);
 	link_active = pciehp_check_link_active(ctrl);
 	if (present <= 0 && link_active <= 0) {
+		if (ctrl->state == BLINKINGON_STATE) {
+			ctrl->state = OFF_STATE;
+			cancel_delayed_work(&ctrl->button_work);
+			pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
+					      INDICATOR_NOOP);
+			ctrl_info(ctrl, "Slot(%s): Card not present\n",
+				  slot_name(ctrl));
+		}
 		mutex_unlock(&ctrl->state_lock);
 		return;
 	}
drivers/pci/hotplug/pciehp_hpc.c  (+3 -9)

···
 	}
 
 	/* Check Attention Button Pressed */
-	if (events & PCI_EXP_SLTSTA_ABP) {
-		ctrl_info(ctrl, "Slot(%s): Attention button pressed\n",
-			  slot_name(ctrl));
+	if (events & PCI_EXP_SLTSTA_ABP)
 		pciehp_handle_button_press(ctrl);
-	}
 
 	/* Check Power Fault Detected */
 	if (events & PCI_EXP_SLTSTA_PFD) {
···
 struct controller *pcie_init(struct pcie_device *dev)
 {
 	struct controller *ctrl;
-	u32 slot_cap, slot_cap2, link_cap;
+	u32 slot_cap, slot_cap2;
 	u8 poweron;
 	struct pci_dev *pdev = dev->port;
 	struct pci_bus *subordinate = pdev->subordinate;
···
 	if (dmi_first_match(inband_presence_disabled_dmi_table))
 		ctrl->inband_presence_disabled = 1;
 
-	/* Check if Data Link Layer Link Active Reporting is implemented */
-	pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &link_cap);
-
 	/* Clear all remaining event bits in Slot Status register. */
 	pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
 				   PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD |
···
 		FLAG(slot_cap, PCI_EXP_SLTCAP_EIP),
 		FLAG(slot_cap, PCI_EXP_SLTCAP_NCCS),
 		FLAG(slot_cap2, PCI_EXP_SLTCAP2_IBPD),
-		FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC),
+		FLAG(pdev->link_active_reporting, true),
 		pdev->broken_cmd_compl ? " (with Cmd Compl erratum)" : "");
 
 	/*
drivers/pci/of.c  (+4 -10)

···
 		return -ENODEV;
 	}
 
-	dev->dev.of_node = node;
-	dev->dev.fwnode = &node->fwnode;
+	device_set_node(&dev->dev, of_fwnode_handle(node));
 	return 0;
 }
 
 void pci_release_of_node(struct pci_dev *dev)
 {
 	of_node_put(dev->dev.of_node);
-	dev->dev.of_node = NULL;
-	dev->dev.fwnode = NULL;
+	device_set_node(&dev->dev, NULL);
 }
 
 void pci_set_bus_of_node(struct pci_bus *bus)
···
 			bus->self->external_facing = true;
 	}
 
-	bus->dev.of_node = node;
-
-	if (bus->dev.of_node)
-		bus->dev.fwnode = &bus->dev.of_node->fwnode;
+	device_set_node(&bus->dev, of_fwnode_handle(node));
 }
 
 void pci_release_bus_of_node(struct pci_bus *bus)
 {
 	of_node_put(bus->dev.of_node);
-	bus->dev.of_node = NULL;
-	bus->dev.fwnode = NULL;
+	device_set_node(&bus->dev, NULL);
 }
 
 struct device_node * __weak pcibios_get_phb_of_node(struct pci_bus *bus)
+40 -13
drivers/pci/pci-acpi.c
··· 1043 1043 return false; 1044 1044 } 1045 1045 1046 + static void acpi_pci_config_space_access(struct pci_dev *dev, bool enable) 1047 + { 1048 + int val = enable ? ACPI_REG_CONNECT : ACPI_REG_DISCONNECT; 1049 + int ret = acpi_evaluate_reg(ACPI_HANDLE(&dev->dev), 1050 + ACPI_ADR_SPACE_PCI_CONFIG, val); 1051 + if (ret) 1052 + pci_dbg(dev, "ACPI _REG %s evaluation failed (%d)\n", 1053 + enable ? "connect" : "disconnect", ret); 1054 + } 1055 + 1046 1056 int acpi_pci_set_power_state(struct pci_dev *dev, pci_power_t state) 1047 1057 { 1048 1058 struct acpi_device *adev = ACPI_COMPANION(&dev->dev); ··· 1063 1053 [PCI_D3hot] = ACPI_STATE_D3_HOT, 1064 1054 [PCI_D3cold] = ACPI_STATE_D3_COLD, 1065 1055 }; 1066 - int error = -EINVAL; 1056 + int error; 1067 1057 1068 1058 /* If the ACPI device has _EJ0, ignore the device */ 1069 1059 if (!adev || acpi_has_method(adev->handle, "_EJ0")) 1070 1060 return -ENODEV; 1071 1061 1072 1062 switch (state) { 1073 - case PCI_D3cold: 1074 - if (dev_pm_qos_flags(&dev->dev, PM_QOS_FLAG_NO_POWER_OFF) == 1075 - PM_QOS_FLAGS_ALL) { 1076 - error = -EBUSY; 1077 - break; 1078 - } 1079 - fallthrough; 1080 1063 case PCI_D0: 1081 1064 case PCI_D1: 1082 1065 case PCI_D2: 1083 1066 case PCI_D3hot: 1084 - error = acpi_device_set_power(adev, state_conv[state]); 1067 + case PCI_D3cold: 1068 + break; 1069 + default: 1070 + return -EINVAL; 1085 1071 } 1086 1072 1087 - if (!error) 1088 - pci_dbg(dev, "power state changed by ACPI to %s\n", 1089 - acpi_power_state_string(adev->power.state)); 1073 + if (state == PCI_D3cold) { 1074 + if (dev_pm_qos_flags(&dev->dev, PM_QOS_FLAG_NO_POWER_OFF) == 1075 + PM_QOS_FLAGS_ALL) 1076 + return -EBUSY; 1090 1077 1091 - return error; 1078 + /* Notify AML lack of PCI config space availability */ 1079 + acpi_pci_config_space_access(dev, false); 1080 + } 1081 + 1082 + error = acpi_device_set_power(adev, state_conv[state]); 1083 + if (error) 1084 + return error; 1085 + 1086 + pci_dbg(dev, "power state changed by ACPI to %s\n", 
1087 + acpi_power_state_string(adev->power.state)); 1088 + 1089 + /* 1090 + * Notify AML of PCI config space availability. Config space is 1091 + * accessible in all states except D3cold; the only transitions 1092 + * that change availability are transitions to D3cold and from 1093 + * D3cold to D0. 1094 + */ 1095 + if (state == PCI_D0) 1096 + acpi_pci_config_space_access(dev, true); 1097 + 1098 + return 0; 1092 1099 } 1093 1100 1094 1101 pci_power_t acpi_pci_get_power_state(struct pci_dev *dev)
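The reworked `acpi_pci_set_power_state()` orders the `_REG` notifications around the transition: AML is told config space is going away before the device enters D3cold, and told it is back only after a transition to D0; for every other target state availability does not change, so no notification is sent. A minimal standalone model of just that ordering decision (the enum and the 2-bit trace encoding are invented here for testing, not kernel API):

```c
#include <assert.h>

enum toy_dstate { TOY_D0, TOY_D1, TOY_D2, TOY_D3HOT, TOY_D3COLD };

/*
 * Config space is accessible in every state except D3cold, so the only
 * transitions that change availability are into D3cold and from D3cold
 * back to D0.  Returns an invented 2-bit trace for testing:
 * bit 0 = "_REG disconnect" sent before the transition,
 * bit 1 = "_REG connect" sent after it.
 */
static unsigned int toy_reg_trace(enum toy_dstate target)
{
	unsigned int trace = 0;

	if (target == TOY_D3COLD)
		trace |= 1;	/* tell AML config space is going away */

	/* ...the actual power transition would happen here... */

	if (target == TOY_D0)
		trace |= 2;	/* tell AML config space is back */

	return trace;
}
```

Sending the disconnect before the transition (not after) matters: once the device is in D3cold, AML can no longer be allowed to touch the config-space OpRegion at all.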
+155 -39
drivers/pci/pci.c
··· 65 65 #define PME_TIMEOUT 1000 /* How long between PME checks */ 66 66 67 67 /* 68 + * Following exit from Conventional Reset, devices must be ready within 1 sec 69 + * (PCIe r6.0 sec 6.6.1). A D3cold to D0 transition implies a Conventional 70 + * Reset (PCIe r6.0 sec 5.8). 71 + */ 72 + #define PCI_RESET_WAIT 1000 /* msec */ 73 + 74 + /* 68 75 * Devices may extend the 1 sec period through Request Retry Status 69 76 * completions (PCIe r6.0 sec 2.3.1). The spec does not provide an upper 70 77 * limit, but 60 sec ought to be enough for any device to become ··· 1163 1156 static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout) 1164 1157 { 1165 1158 int delay = 1; 1166 - u32 id; 1159 + bool retrain = false; 1160 + struct pci_dev *bridge; 1161 + 1162 + if (pci_is_pcie(dev)) { 1163 + bridge = pci_upstream_bridge(dev); 1164 + if (bridge) 1165 + retrain = true; 1166 + } 1167 1167 1168 1168 /* 1169 1169 * After reset, the device should not silently discard config ··· 1184 1170 * Command register instead of Vendor ID so we don't have to 1185 1171 * contend with the CRS SV value. 
1186 1172 */ 1187 - pci_read_config_dword(dev, PCI_COMMAND, &id); 1188 - while (PCI_POSSIBLE_ERROR(id)) { 1173 + for (;;) { 1174 + u32 id; 1175 + 1176 + pci_read_config_dword(dev, PCI_COMMAND, &id); 1177 + if (!PCI_POSSIBLE_ERROR(id)) 1178 + break; 1179 + 1189 1180 if (delay > timeout) { 1190 1181 pci_warn(dev, "not ready %dms after %s; giving up\n", 1191 1182 delay - 1, reset_type); 1192 1183 return -ENOTTY; 1193 1184 } 1194 1185 1195 - if (delay > PCI_RESET_WAIT) 1186 + if (delay > PCI_RESET_WAIT) { 1187 + if (retrain) { 1188 + retrain = false; 1189 + if (pcie_failed_link_retrain(bridge)) { 1190 + delay = 1; 1191 + continue; 1192 + } 1193 + } 1196 1194 pci_info(dev, "not ready %dms after %s; waiting\n", 1197 1195 delay - 1, reset_type); 1196 + } 1198 1197 1199 1198 msleep(delay); 1200 1199 delay *= 2; 1201 - pci_read_config_dword(dev, PCI_COMMAND, &id); 1202 1200 } 1203 1201 1204 1202 if (delay > PCI_RESET_WAIT) ··· 2975 2949 { 2976 2950 /* 2977 2951 * Downstream device is not accessible after putting a root port 2978 - * into D3cold and back into D0 on Elo i2. 2952 + * into D3cold and back into D0 on Elo Continental Z2 board 2979 2953 */ 2980 - .ident = "Elo i2", 2954 + .ident = "Elo Continental Z2", 2981 2955 .matches = { 2982 - DMI_MATCH(DMI_SYS_VENDOR, "Elo Touch Solutions"), 2983 - DMI_MATCH(DMI_PRODUCT_NAME, "Elo i2"), 2984 - DMI_MATCH(DMI_PRODUCT_VERSION, "RevB"), 2956 + DMI_MATCH(DMI_BOARD_VENDOR, "Elo Touch Solutions"), 2957 + DMI_MATCH(DMI_BOARD_NAME, "Geminilake"), 2958 + DMI_MATCH(DMI_BOARD_VERSION, "Continental Z2"), 2985 2959 }, 2986 2960 }, 2987 2961 #endif ··· 4883 4857 } 4884 4858 4885 4859 /** 4860 + * pcie_wait_for_link_status - Wait for link status change 4861 + * @pdev: Device whose link to wait for. 4862 + * @use_lt: Use the LT bit if TRUE, or the DLLLA bit if FALSE. 4863 + * @active: Waiting for active or inactive? 
4864 + * 4865 + * Return 0 if successful, or -ETIMEDOUT if status has not changed within 4866 + * PCIE_LINK_RETRAIN_TIMEOUT_MS milliseconds. 4867 + */ 4868 + static int pcie_wait_for_link_status(struct pci_dev *pdev, 4869 + bool use_lt, bool active) 4870 + { 4871 + u16 lnksta_mask, lnksta_match; 4872 + unsigned long end_jiffies; 4873 + u16 lnksta; 4874 + 4875 + lnksta_mask = use_lt ? PCI_EXP_LNKSTA_LT : PCI_EXP_LNKSTA_DLLLA; 4876 + lnksta_match = active ? lnksta_mask : 0; 4877 + 4878 + end_jiffies = jiffies + msecs_to_jiffies(PCIE_LINK_RETRAIN_TIMEOUT_MS); 4879 + do { 4880 + pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta); 4881 + if ((lnksta & lnksta_mask) == lnksta_match) 4882 + return 0; 4883 + msleep(1); 4884 + } while (time_before(jiffies, end_jiffies)); 4885 + 4886 + return -ETIMEDOUT; 4887 + } 4888 + 4889 + /** 4890 + * pcie_retrain_link - Request a link retrain and wait for it to complete 4891 + * @pdev: Device whose link to retrain. 4892 + * @use_lt: Use the LT bit if TRUE, or the DLLLA bit if FALSE, for status. 4893 + * 4894 + * Retrain completion status is retrieved from the Link Status Register 4895 + * according to @use_lt. It is not verified whether the use of the DLLLA 4896 + * bit is valid. 4897 + * 4898 + * Return 0 if successful, or -ETIMEDOUT if training has not completed 4899 + * within PCIE_LINK_RETRAIN_TIMEOUT_MS milliseconds. 4900 + */ 4901 + int pcie_retrain_link(struct pci_dev *pdev, bool use_lt) 4902 + { 4903 + int rc; 4904 + u16 lnkctl; 4905 + 4906 + /* 4907 + * Ensure the updated LNKCTL parameters are used during link 4908 + * training by checking that there is no ongoing link training to 4909 + * avoid LTSSM race as recommended in Implementation Note at the 4910 + * end of PCIe r6.0.1 sec 7.5.3.7. 
4911 + */ 4912 + rc = pcie_wait_for_link_status(pdev, use_lt, !use_lt); 4913 + if (rc) 4914 + return rc; 4915 + 4916 + pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &lnkctl); 4917 + lnkctl |= PCI_EXP_LNKCTL_RL; 4918 + pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnkctl); 4919 + if (pdev->clear_retrain_link) { 4920 + /* 4921 + * Due to an erratum in some devices the Retrain Link bit 4922 + * needs to be cleared again manually to allow the link 4923 + * training to succeed. 4924 + */ 4925 + lnkctl &= ~PCI_EXP_LNKCTL_RL; 4926 + pcie_capability_write_word(pdev, PCI_EXP_LNKCTL, lnkctl); 4927 + } 4928 + 4929 + return pcie_wait_for_link_status(pdev, use_lt, !use_lt); 4930 + } 4931 + 4932 + /** 4886 4933 * pcie_wait_for_link_delay - Wait until link is active or inactive 4887 4934 * @pdev: Bridge device 4888 4935 * @active: waiting for active or inactive? ··· 4966 4867 static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active, 4967 4868 int delay) 4968 4869 { 4969 - int timeout = 1000; 4970 - bool ret; 4971 - u16 lnk_status; 4870 + int rc; 4972 4871 4973 4872 /* 4974 4873 * Some controllers might not implement link active reporting. In this 4975 4874 * case, we wait for 1000 ms + any delay requested by the caller. 
4976 4875 */ 4977 4876 if (!pdev->link_active_reporting) { 4978 - msleep(timeout + delay); 4877 + msleep(PCIE_LINK_RETRAIN_TIMEOUT_MS + delay); 4979 4878 return true; 4980 4879 } 4981 4880 ··· 4988 4891 */ 4989 4892 if (active) 4990 4893 msleep(20); 4991 - for (;;) { 4992 - pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status); 4993 - ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA); 4994 - if (ret == active) 4995 - break; 4996 - if (timeout <= 0) 4997 - break; 4998 - msleep(10); 4999 - timeout -= 10; 5000 - } 5001 - if (active && ret) 5002 - msleep(delay); 4894 + rc = pcie_wait_for_link_status(pdev, false, active); 4895 + if (active) { 4896 + if (rc) 4897 + rc = pcie_failed_link_retrain(pdev); 4898 + if (rc) 4899 + return false; 5003 4900 5004 - return ret == active; 4901 + msleep(delay); 4902 + return true; 4903 + } 4904 + 4905 + if (rc) 4906 + return false; 4907 + 4908 + return true; 5005 4909 } 5006 4910 5007 4911 /** ··· 5109 5011 * 5110 5012 * However, 100 ms is the minimum and the PCIe spec says the 5111 5013 * software must allow at least 1s before it can determine that the 5112 - * device that did not respond is a broken device. There is 5113 - * evidence that 100 ms is not always enough, for example certain 5114 - * Titan Ridge xHCI controller does not always respond to 5115 - * configuration requests if we only wait for 100 ms (see 5116 - * https://bugzilla.kernel.org/show_bug.cgi?id=203885). 5014 + * device that did not respond is a broken device. Also device can 5015 + * take longer than that to respond if it indicates so through Request 5016 + * Retry Status completions. 5117 5017 * 5118 5018 * Therefore we wait for 100 ms and check for the device presence 5119 5019 * until the timeout expires. 
··· 5120 5024 return 0; 5121 5025 5122 5026 if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) { 5027 + u16 status; 5028 + 5123 5029 pci_dbg(dev, "waiting %d ms for downstream link\n", delay); 5124 5030 msleep(delay); 5125 - } else { 5126 - pci_dbg(dev, "waiting %d ms for downstream link, after activation\n", 5127 - delay); 5128 - if (!pcie_wait_for_link_delay(dev, true, delay)) { 5129 - /* Did not train, no need to wait any further */ 5130 - pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n"); 5031 + 5032 + if (!pci_dev_wait(child, reset_type, PCI_RESET_WAIT - delay)) 5033 + return 0; 5034 + 5035 + /* 5036 + * If the port supports active link reporting we now check 5037 + * whether the link is active and if not bail out early with 5038 + * the assumption that the device is not present anymore. 5039 + */ 5040 + if (!dev->link_active_reporting) 5131 5041 return -ENOTTY; 5132 - } 5042 + 5043 + pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &status); 5044 + if (!(status & PCI_EXP_LNKSTA_DLLLA)) 5045 + return -ENOTTY; 5046 + 5047 + return pci_dev_wait(child, reset_type, 5048 + PCIE_RESET_READY_POLL_MS - PCI_RESET_WAIT); 5049 + } 5050 + 5051 + pci_dbg(dev, "waiting %d ms for downstream link, after activation\n", 5052 + delay); 5053 + if (!pcie_wait_for_link_delay(dev, true, delay)) { 5054 + /* Did not train, no need to wait any further */ 5055 + pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n"); 5056 + return -ENOTTY; 5133 5057 } 5134 5058 5135 5059 return pci_dev_wait(child, reset_type,
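`pci_dev_wait()` above polls the Command register with an exponentially growing sleep (1, 2, 4, ... ms) and gives up once the next delay exceeds the timeout; the total time slept is always one less than the next delay, which is why the "not ready %dms" messages print `delay - 1`. A standalone model of that schedule for a device that never becomes ready (a sketch, not the kernel function):

```c
#include <assert.h>

/*
 * Model of the pci_dev_wait() polling schedule for a device that never
 * becomes ready: sleep 1, 2, 4, ... ms and give up once the next delay
 * would exceed the timeout.  Returns the total time slept, which is
 * always (next delay - 1), i.e. what "not ready %dms" reports.
 */
static int toy_backoff_total(int timeout_ms)
{
	int delay = 1, total = 0;

	while (delay <= timeout_ms) {
		total += delay;	/* msleep(delay) in the real code */
		delay *= 2;
	}
	return total;
}
```

With the 1000 ms `PCI_RESET_WAIT` from the hunk above, the loop sleeps 1 + 2 + ... + 512 = 1023 ms before either escalating to the slow-path message (or, with the new code, attempting one `pcie_failed_link_retrain()` and restarting the schedule).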
+12 -7
drivers/pci/pci.h
··· 11 11 12 12 #define PCI_VSEC_ID_INTEL_TBT 0x1234 /* Thunderbolt */ 13 13 14 + #define PCIE_LINK_RETRAIN_TIMEOUT_MS 1000 15 + 14 16 extern const unsigned char pcie_link_speed[]; 15 17 extern bool pci_early_dump; 16 18 ··· 65 63 #define PCI_PM_D2_DELAY 200 /* usec; see PCIe r4.0, sec 5.9.1 */ 66 64 #define PCI_PM_D3HOT_WAIT 10 /* msec */ 67 65 #define PCI_PM_D3COLD_WAIT 100 /* msec */ 68 - 69 - /* 70 - * Following exit from Conventional Reset, devices must be ready within 1 sec 71 - * (PCIe r6.0 sec 6.6.1). A D3cold to D0 transition implies a Conventional 72 - * Reset (PCIe r6.0 sec 5.8). 73 - */ 74 - #define PCI_RESET_WAIT 1000 /* msec */ 75 66 76 67 void pci_update_current_state(struct pci_dev *dev, pci_power_t state); 77 68 void pci_refresh_power_state(struct pci_dev *dev); ··· 536 541 int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags); 537 542 int pci_dev_specific_enable_acs(struct pci_dev *dev); 538 543 int pci_dev_specific_disable_acs_redir(struct pci_dev *dev); 544 + bool pcie_failed_link_retrain(struct pci_dev *dev); 539 545 #else 540 546 static inline int pci_dev_specific_acs_enabled(struct pci_dev *dev, 541 547 u16 acs_flags) ··· 551 555 { 552 556 return -ENOTTY; 553 557 } 558 + static inline bool pcie_failed_link_retrain(struct pci_dev *dev) 559 + { 560 + return false; 561 + } 554 562 #endif 555 563 556 564 /* PCI error reporting and recovery */ ··· 563 563 pci_ers_result_t (*reset_subordinates)(struct pci_dev *pdev)); 564 564 565 565 bool pcie_wait_for_link(struct pci_dev *pdev, bool active); 566 + int pcie_retrain_link(struct pci_dev *pdev, bool use_lt); 566 567 #ifdef CONFIG_PCIEASPM 567 568 void pcie_aspm_init_link_state(struct pci_dev *pdev); 568 569 void pcie_aspm_exit_link_state(struct pci_dev *pdev); ··· 687 686 void pci_aer_clear_fatal_status(struct pci_dev *dev); 688 687 int pci_aer_clear_status(struct pci_dev *dev); 689 688 int pci_aer_raw_clear_status(struct pci_dev *dev); 689 + void pci_save_aer_state(struct pci_dev 
*dev); 690 + void pci_restore_aer_state(struct pci_dev *dev); 690 691 #else 691 692 static inline void pci_no_aer(void) { } 692 693 static inline void pci_aer_init(struct pci_dev *d) { } ··· 696 693 static inline void pci_aer_clear_fatal_status(struct pci_dev *dev) { } 697 694 static inline int pci_aer_clear_status(struct pci_dev *dev) { return -EINVAL; } 698 695 static inline int pci_aer_raw_clear_status(struct pci_dev *dev) { return -EINVAL; } 696 + static inline void pci_save_aer_state(struct pci_dev *dev) { } 697 + static inline void pci_restore_aer_state(struct pci_dev *dev) { } 699 698 #endif 700 699 701 700 #ifdef CONFIG_ACPI
+34 -67
drivers/pci/pcie/aspm.c
··· 90 90 [POLICY_POWER_SUPERSAVE] = "powersupersave" 91 91 }; 92 92 93 - #define LINK_RETRAIN_TIMEOUT HZ 94 - 95 93 /* 96 94 * The L1 PM substate capability is only implemented in function 0 in a 97 95 * multi function device. ··· 191 193 link->clkpm_disable = blacklist ? 1 : 0; 192 194 } 193 195 194 - static bool pcie_retrain_link(struct pcie_link_state *link) 195 - { 196 - struct pci_dev *parent = link->pdev; 197 - unsigned long end_jiffies; 198 - u16 reg16; 199 - 200 - pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16); 201 - reg16 |= PCI_EXP_LNKCTL_RL; 202 - pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16); 203 - if (parent->clear_retrain_link) { 204 - /* 205 - * Due to an erratum in some devices the Retrain Link bit 206 - * needs to be cleared again manually to allow the link 207 - * training to succeed. 208 - */ 209 - reg16 &= ~PCI_EXP_LNKCTL_RL; 210 - pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16); 211 - } 212 - 213 - /* Wait for link training end. Break out after waiting for timeout */ 214 - end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT; 215 - do { 216 - pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &reg16); 217 - if (!(reg16 & PCI_EXP_LNKSTA_LT)) 218 - break; 219 - msleep(1); 220 - } while (time_before(jiffies, end_jiffies)); 221 - return !(reg16 & PCI_EXP_LNKSTA_LT); 222 - } 223 - 224 196 /* 225 197 * pcie_aspm_configure_common_clock: check if the 2 ends of a link 226 198 * could use common clock. If they are, configure them to use the ··· 257 289 reg16 &= ~PCI_EXP_LNKCTL_CCC; 258 290 pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16); 259 291 260 - if (pcie_retrain_link(link)) 261 - return; 292 + if (pcie_retrain_link(link->pdev, true)) { 262 293 263 - /* Training failed. Restore common clock configurations */ 264 - pci_err(parent, "ASPM: Could not configure common clock\n"); 265 - list_for_each_entry(child, &linkbus->devices, bus_list) 266 - pcie_capability_write_word(child, PCI_EXP_LNKCTL, 294 + /* Training failed. 
Restore common clock configurations */ 295 + pci_err(parent, "ASPM: Could not configure common clock\n"); 296 + list_for_each_entry(child, &linkbus->devices, bus_list) 297 + pcie_capability_write_word(child, PCI_EXP_LNKCTL, 267 298 child_reg[PCI_FUNC(child->devfn)]); 268 - pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg); 299 + pcie_capability_write_word(parent, PCI_EXP_LNKCTL, parent_reg); 300 + } 269 301 } 270 302 271 303 /* Convert L0s latency encoding to ns */ ··· 305 337 } 306 338 307 339 /* Convert L1SS T_pwr encoding to usec */ 308 - static u32 calc_l1ss_pwron(struct pci_dev *pdev, u32 scale, u32 val) 340 + static u32 calc_l12_pwron(struct pci_dev *pdev, u32 scale, u32 val) 309 341 { 310 342 switch (scale) { 311 343 case 0: ··· 439 471 } 440 472 441 473 /* Calculate L1.2 PM substate timing parameters */ 442 - static void aspm_calc_l1ss_info(struct pcie_link_state *link, 474 + static void aspm_calc_l12_info(struct pcie_link_state *link, 443 475 u32 parent_l1ss_cap, u32 child_l1ss_cap) 444 476 { 445 477 struct pci_dev *child = link->downstream, *parent = link->pdev; ··· 448 480 u32 ctl1 = 0, ctl2 = 0; 449 481 u32 pctl1, pctl2, cctl1, cctl2; 450 482 u32 pl1_2_enables, cl1_2_enables; 451 - 452 - if (!(link->aspm_support & ASPM_STATE_L1_2_MASK)) 453 - return; 454 483 455 484 /* Choose the greater of the two Port Common_Mode_Restore_Times */ 456 485 val1 = (parent_l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8; ··· 460 495 val2 = (child_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19; 461 496 scale2 = (child_l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16; 462 497 463 - if (calc_l1ss_pwron(parent, scale1, val1) > 464 - calc_l1ss_pwron(child, scale2, val2)) { 498 + if (calc_l12_pwron(parent, scale1, val1) > 499 + calc_l12_pwron(child, scale2, val2)) { 465 500 ctl2 |= scale1 | (val1 << 3); 466 - t_power_on = calc_l1ss_pwron(parent, scale1, val1); 501 + t_power_on = calc_l12_pwron(parent, scale1, val1); 467 502 } else { 468 503 ctl2 |= scale2 | (val2 << 3); 
469 - t_power_on = calc_l1ss_pwron(child, scale2, val2); 504 + t_power_on = calc_l12_pwron(child, scale2, val2); 470 505 } 471 506 472 507 /* ··· 581 616 if (parent_l1ss_ctl1 & child_l1ss_ctl1 & PCI_L1SS_CTL1_PCIPM_L1_2) 582 617 link->aspm_enabled |= ASPM_STATE_L1_2_PCIPM; 583 618 584 - if (link->aspm_support & ASPM_STATE_L1SS) 585 - aspm_calc_l1ss_info(link, parent_l1ss_cap, child_l1ss_cap); 619 + if (link->aspm_support & ASPM_STATE_L1_2_MASK) 620 + aspm_calc_l12_info(link, parent_l1ss_cap, child_l1ss_cap); 586 621 } 587 622 588 623 static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist) ··· 975 1010 976 1011 down_read(&pci_bus_sem); 977 1012 mutex_lock(&aspm_lock); 978 - /* 979 - * All PCIe functions are in one slot, remove one function will remove 980 - * the whole slot, so just wait until we are the last function left. 981 - */ 982 - if (!list_empty(&parent->subordinate->devices)) 983 - goto out; 984 1013 985 1014 link = parent->link_state; 986 1015 root = link->root; 987 1016 parent_link = link->parent; 988 1017 989 - /* All functions are removed, so just disable ASPM for the link */ 1018 + /* 1019 + * link->downstream is a pointer to the pci_dev of function 0. If 1020 + * we remove that function, the pci_dev is about to be deallocated, 1021 + * so we can't use link->downstream again. Free the link state to 1022 + * avoid this. 1023 + * 1024 + * If we're removing a non-0 function, it's possible we could 1025 + * retain the link state, but PCIe r6.0, sec 7.5.3.7, recommends 1026 + * programming the same ASPM Control value for all functions of 1027 + * multi-function devices, so disable ASPM for all of them. 
1028 + */ 990 1029 pcie_config_aspm_link(link, 0); 991 1030 list_del(&link->sibling); 992 - /* Clock PM is for endpoint device */ 993 1031 free_link_state(link); 994 1032 995 1033 /* Recheck latencies and configure upstream links */ ··· 1000 1032 pcie_update_aspm_capable(root); 1001 1033 pcie_config_aspm_path(parent_link); 1002 1034 } 1003 - out: 1035 + 1004 1036 mutex_unlock(&aspm_lock); 1005 1037 up_read(&pci_bus_sem); 1006 1038 } ··· 1063 1095 if (state & PCIE_LINK_STATE_L0S) 1064 1096 link->aspm_disable |= ASPM_STATE_L0S; 1065 1097 if (state & PCIE_LINK_STATE_L1) 1066 - /* L1 PM substates require L1 */ 1067 - link->aspm_disable |= ASPM_STATE_L1 | ASPM_STATE_L1SS; 1098 + link->aspm_disable |= ASPM_STATE_L1; 1068 1099 if (state & PCIE_LINK_STATE_L1_1) 1069 1100 link->aspm_disable |= ASPM_STATE_L1_1; 1070 1101 if (state & PCIE_LINK_STATE_L1_2) ··· 1138 1171 if (state & PCIE_LINK_STATE_L0S) 1139 1172 link->aspm_default |= ASPM_STATE_L0S; 1140 1173 if (state & PCIE_LINK_STATE_L1) 1141 - /* L1 PM substates require L1 */ 1142 - link->aspm_default |= ASPM_STATE_L1 | ASPM_STATE_L1SS; 1174 + link->aspm_default |= ASPM_STATE_L1; 1175 + /* L1 PM substates require L1 */ 1143 1176 if (state & PCIE_LINK_STATE_L1_1) 1144 - link->aspm_default |= ASPM_STATE_L1_1; 1177 + link->aspm_default |= ASPM_STATE_L1_1 | ASPM_STATE_L1; 1145 1178 if (state & PCIE_LINK_STATE_L1_2) 1146 - link->aspm_default |= ASPM_STATE_L1_2; 1179 + link->aspm_default |= ASPM_STATE_L1_2 | ASPM_STATE_L1; 1147 1180 if (state & PCIE_LINK_STATE_L1_1_PCIPM) 1148 - link->aspm_default |= ASPM_STATE_L1_1_PCIPM; 1181 + link->aspm_default |= ASPM_STATE_L1_1_PCIPM | ASPM_STATE_L1; 1149 1182 if (state & PCIE_LINK_STATE_L1_2_PCIPM) 1150 - link->aspm_default |= ASPM_STATE_L1_2_PCIPM; 1183 + link->aspm_default |= ASPM_STATE_L1_2_PCIPM | ASPM_STATE_L1; 1151 1184 pcie_config_aspm_link(link, policy_to_aspm_state(link)); 1152 1185 1153 1186 link->clkpm_default = (state & PCIE_LINK_STATE_CLKPM) ? 1 : 0;
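The renamed `calc_l12_pwron()` decodes the L1.2 T_PwrOn capability fields: a 5-bit value scaled by 2 µs, 10 µs, or 100 µs according to a 2-bit scale, with the remaining scale encoding reserved. A standalone sketch of that decode (not the kernel helper itself, which also logs an error for the reserved case):

```c
#include <assert.h>
#include <stdint.h>

/*
 * L1.2 T_PwrOn is a 5-bit value plus a 2-bit scale selecting 2 us,
 * 10 us or 100 us units; the fourth encoding is reserved and treated
 * as 0 here, as the kernel helper does.
 */
static uint32_t toy_l12_pwron_us(uint32_t scale, uint32_t val)
{
	switch (scale) {
	case 0:
		return val * 2;
	case 1:
		return val * 10;
	case 2:
		return val * 100;
	default:
		return 0;	/* reserved scale encoding */
	}
}
```

`aspm_calc_l12_info()` decodes both ends of the link this way and programs the larger of the two values, since the link can only enter L1.2 as fast as its slower port allows.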
+10 -2
drivers/pci/probe.c
··· 820 820 821 821 pcie_capability_read_dword(bridge, PCI_EXP_LNKCAP, &linkcap); 822 822 bus->max_bus_speed = pcie_link_speed[linkcap & PCI_EXP_LNKCAP_SLS]; 823 - bridge->link_active_reporting = !!(linkcap & PCI_EXP_LNKCAP_DLLLARC); 824 823 825 824 pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &linksta); 826 825 pcie_update_link_speed(bus, linksta); ··· 996 997 resource_list_for_each_entry_safe(window, n, &resources) { 997 998 offset = window->offset; 998 999 res = window->res; 999 - if (!res->flags && !res->start && !res->end) 1000 + if (!res->flags && !res->start && !res->end) { 1001 + release_resource(res); 1000 1002 continue; 1003 + } 1001 1004 1002 1005 list_move_tail(&window->node, &bridge->windows); 1003 1006 ··· 1528 1527 { 1529 1528 int pos; 1530 1529 u16 reg16; 1530 + u32 reg32; 1531 1531 int type; 1532 1532 struct pci_dev *parent; 1533 1533 ··· 1541 1539 pdev->pcie_flags_reg = reg16; 1542 1540 pci_read_config_dword(pdev, pos + PCI_EXP_DEVCAP, &pdev->devcap); 1543 1541 pdev->pcie_mpss = FIELD_GET(PCI_EXP_DEVCAP_PAYLOAD, pdev->devcap); 1542 + 1543 + pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &reg32); 1544 + if (reg32 & PCI_EXP_LNKCAP_DLLLARC) 1545 + pdev->link_active_reporting = 1; 1544 1546 1545 1547 parent = pci_upstream_bridge(pdev); 1546 1548 if (!parent) ··· 2551 2545 2552 2546 dma_set_max_seg_size(&dev->dev, 65536); 2553 2547 dma_set_seg_boundary(&dev->dev, 0xffffffff); 2548 + 2549 + pcie_failed_link_retrain(dev); 2554 2550 2555 2551 /* Fix up broken headers */ 2556 2552 pci_fixup_device(pci_fixup_header, dev);
+104 -7
drivers/pci/quirks.c
··· 33 33 #include <linux/switchtec.h> 34 34 #include "pci.h" 35 35 36 + /* 37 + * Retrain the link of a downstream PCIe port by hand if necessary. 38 + * 39 + * This is needed at least where a downstream port of the ASMedia ASM2824 40 + * Gen 3 switch is wired to the upstream port of the Pericom PI7C9X2G304 41 + * Gen 2 switch, and observed with the Delock Riser Card PCI Express x1 > 42 + * 2 x PCIe x1 device, P/N 41433, plugged into the SiFive HiFive Unmatched 43 + * board. 44 + * 45 + * In such a configuration the switches are supposed to negotiate the link 46 + * speed of preferably 5.0GT/s, falling back to 2.5GT/s. However the link 47 + * continues switching between the two speeds indefinitely and the data 48 + * link layer never reaches the active state, with link training reported 49 + * repeatedly active ~84% of the time. Forcing the target link speed to 50 + * 2.5GT/s with the upstream ASM2824 device makes the two switches talk to 51 + * each other correctly however. And more interestingly retraining with a 52 + * higher target link speed afterwards lets the two successfully negotiate 53 + * 5.0GT/s. 54 + * 55 + * With the ASM2824 we can rely on the otherwise optional Data Link Layer 56 + * Link Active status bit and in the failed link training scenario it will 57 + * be off along with the Link Bandwidth Management Status indicating that 58 + * hardware has changed the link speed or width in an attempt to correct 59 + * unreliable link operation. For a port that has been left unconnected 60 + * both bits will be clear. So use this information to detect the problem 61 + * rather than polling the Link Training bit and watching out for flips or 62 + * at least the active status. 
63 + * 64 + * Since the exact nature of the problem isn't known and in principle this 65 + * could trigger where an ASM2824 device is downstream rather upstream, 66 + * apply this erratum workaround to any downstream ports as long as they 67 + * support Link Active reporting and have the Link Control 2 register. 68 + * Restrict the speed to 2.5GT/s then with the Target Link Speed field, 69 + * request a retrain and wait 200ms for the data link to go up. 70 + * 71 + * If this turns out successful and we know by the Vendor:Device ID it is 72 + * safe to do so, then lift the restriction, letting the devices negotiate 73 + * a higher speed. Also check for a similar 2.5GT/s speed restriction the 74 + * firmware may have already arranged and lift it with ports that already 75 + * report their data link being up. 76 + * 77 + * Return TRUE if the link has been successfully retrained, otherwise FALSE. 78 + */ 79 + bool pcie_failed_link_retrain(struct pci_dev *dev) 80 + { 81 + static const struct pci_device_id ids[] = { 82 + { PCI_VDEVICE(ASMEDIA, 0x2824) }, /* ASMedia ASM2824 */ 83 + {} 84 + }; 85 + u16 lnksta, lnkctl2; 86 + 87 + if (!pci_is_pcie(dev) || !pcie_downstream_port(dev) || 88 + !pcie_cap_has_lnkctl2(dev) || !dev->link_active_reporting) 89 + return false; 90 + 91 + pcie_capability_read_word(dev, PCI_EXP_LNKCTL2, &lnkctl2); 92 + pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta); 93 + if ((lnksta & (PCI_EXP_LNKSTA_LBMS | PCI_EXP_LNKSTA_DLLLA)) == 94 + PCI_EXP_LNKSTA_LBMS) { 95 + pci_info(dev, "broken device, retraining non-functional downstream link at 2.5GT/s\n"); 96 + 97 + lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS; 98 + lnkctl2 |= PCI_EXP_LNKCTL2_TLS_2_5GT; 99 + pcie_capability_write_word(dev, PCI_EXP_LNKCTL2, lnkctl2); 100 + 101 + if (pcie_retrain_link(dev, false)) { 102 + pci_info(dev, "retraining failed\n"); 103 + return false; 104 + } 105 + 106 + pcie_capability_read_word(dev, PCI_EXP_LNKSTA, &lnksta); 107 + } 108 + 109 + if ((lnksta & PCI_EXP_LNKSTA_DLLLA) && 110 
+ (lnkctl2 & PCI_EXP_LNKCTL2_TLS) == PCI_EXP_LNKCTL2_TLS_2_5GT && 111 + pci_match_id(ids, dev)) { 112 + u32 lnkcap; 113 + 114 + pci_info(dev, "removing 2.5GT/s downstream link speed restriction\n"); 115 + pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap); 116 + lnkctl2 &= ~PCI_EXP_LNKCTL2_TLS; 117 + lnkctl2 |= lnkcap & PCI_EXP_LNKCAP_SLS; 118 + pcie_capability_write_word(dev, PCI_EXP_LNKCTL2, lnkctl2); 119 + 120 + if (pcie_retrain_link(dev, false)) { 121 + pci_info(dev, "retraining failed\n"); 122 + return false; 123 + } 124 + } 125 + 126 + return true; 127 + } 128 + 36 129 static ktime_t fixup_debug_start(struct pci_dev *dev, 37 130 void (*fn)(struct pci_dev *dev)) 38 131 { ··· 2513 2420 dev->clear_retrain_link = 1; 2514 2421 pci_info(dev, "Enable PCIe Retrain Link quirk\n"); 2515 2422 } 2516 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_PERICOM, 0xe110, quirk_enable_clear_retrain_link); 2517 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_PERICOM, 0xe111, quirk_enable_clear_retrain_link); 2518 - DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_PERICOM, 0xe130, quirk_enable_clear_retrain_link); 2423 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_PERICOM, 0xe110, quirk_enable_clear_retrain_link); 2424 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_PERICOM, 0xe111, quirk_enable_clear_retrain_link); 2425 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_PERICOM, 0xe130, quirk_enable_clear_retrain_link); 2519 2426 2520 2427 static void fixup_rev1_53c810(struct pci_dev *dev) 2521 2428 { ··· 4086 3993 } 4087 3994 4088 3995 /* 4089 - * Intel DC P3700 NVMe controller will timeout waiting for ready status 4090 - * to change after NVMe enable if the driver starts interacting with the 4091 - * device too soon after FLR. A 250ms delay after FLR has heuristically 4092 - * proven to produce reliably working results for device assignment cases. 
3996 + * Some NVMe controllers such as Intel DC P3700 and Solidigm P44 Pro will 3997 + * timeout waiting for ready status to change after NVMe enable if the driver 3998 + * starts interacting with the device too soon after FLR. A 250ms delay after 3999 + * FLR has heuristically proven to produce reliably working results for device 4000 + * assignment cases. 4093 4001 */ 4094 4002 static int delay_250ms_after_flr(struct pci_dev *dev, bool probe) 4095 4003 { ··· 4177 4083 { PCI_VENDOR_ID_SAMSUNG, 0xa804, nvme_disable_and_flr }, 4178 4084 { PCI_VENDOR_ID_INTEL, 0x0953, delay_250ms_after_flr }, 4179 4085 { PCI_VENDOR_ID_INTEL, 0x0a54, delay_250ms_after_flr }, 4086 + { PCI_VENDOR_ID_SOLIDIGM, 0xf1ac, delay_250ms_after_flr }, 4180 4087 { PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID, 4181 4088 reset_chelsio_generic_dev }, 4182 4089 { PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_HINIC_VF, ··· 4268 4173 quirk_dma_func1_alias); 4269 4174 /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c49 */ 4270 4175 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9230, 4176 + quirk_dma_func1_alias); 4177 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9235, 4271 4178 quirk_dma_func1_alias); 4272 4179 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TTI, 0x0642, 4273 4180 quirk_dma_func1_alias);
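`pcie_failed_link_retrain()` above keys off a specific Link Status signature: LBMS set (hardware changed speed or width trying to fix the link) while DLLLA stays clear (the data link never came up); an unconnected port leaves both bits clear. A standalone model of that predicate (the `LNKSTA_*` values mirror the spec-defined `PCI_EXP_LNKSTA_*` bits):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Link Status bits, mirroring PCI_EXP_LNKSTA_* (assumed values) */
#define LNKSTA_DLLLA 0x2000	/* Data Link Layer Link Active */
#define LNKSTA_LBMS  0x4000	/* Link Bandwidth Management Status */

/*
 * Failure signature used by the workaround: hardware changed the link
 * (LBMS set) but the data link never reached the active state (DLLLA
 * clear).  A port that is simply unconnected has both bits clear, so
 * it is not mistaken for a broken link.
 */
static bool toy_retrain_needed(uint16_t lnksta)
{
	return (lnksta & (LNKSTA_LBMS | LNKSTA_DLLLA)) == LNKSTA_LBMS;
}
```

Using this signature instead of polling the Link Training bit avoids having to watch for LT flips during the ~84%-active training churn the comment describes.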
-4
include/linux/aer.h
··· 45 45 int pci_enable_pcie_error_reporting(struct pci_dev *dev); 46 46 int pci_disable_pcie_error_reporting(struct pci_dev *dev); 47 47 int pci_aer_clear_nonfatal_status(struct pci_dev *dev); 48 - void pci_save_aer_state(struct pci_dev *dev); 49 - void pci_restore_aer_state(struct pci_dev *dev); 50 48 #else 51 49 static inline int pci_enable_pcie_error_reporting(struct pci_dev *dev) 52 50 { ··· 58 60 { 59 61 return -EINVAL; 60 62 } 61 - static inline void pci_save_aer_state(struct pci_dev *dev) {} 62 - static inline void pci_restore_aer_state(struct pci_dev *dev) {} 63 63 #endif 64 64 65 65 void cper_print_aer(struct pci_dev *dev, int aer_severity,
+2
include/linux/pci-epc.h
··· 203 203 int pci_epc_add_epf(struct pci_epc *epc, struct pci_epf *epf, 204 204 enum pci_epc_interface_type type); 205 205 void pci_epc_linkup(struct pci_epc *epc); 206 + void pci_epc_linkdown(struct pci_epc *epc); 206 207 void pci_epc_init_notify(struct pci_epc *epc); 208 + void pci_epc_bme_notify(struct pci_epc *epc); 207 209 void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf, 208 210 enum pci_epc_interface_type type); 209 211 int pci_epc_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
+8 -3
include/linux/pci-epf.h
··· 71 71 * struct pci_epf_event_ops - Callbacks for capturing the EPC events 72 72 * @core_init: Callback for the EPC initialization complete event 73 73 * @link_up: Callback for the EPC link up event 74 + * @link_down: Callback for the EPC link down event 75 + * @bme: Callback for the EPC BME (Bus Master Enable) event 74 76 */ 75 77 struct pci_epc_event_ops { 76 78 int (*core_init)(struct pci_epf *epf); 77 79 int (*link_up)(struct pci_epf *epf); 80 + int (*link_down)(struct pci_epf *epf); 81 + int (*bme)(struct pci_epf *epf); 78 82 }; 79 83 80 84 /** ··· 93 89 * @id_table: identifies EPF devices for probing 94 90 */ 95 91 struct pci_epf_driver { 96 - int (*probe)(struct pci_epf *epf); 92 + int (*probe)(struct pci_epf *epf, 93 + const struct pci_epf_device_id *id); 97 94 void (*remove)(struct pci_epf *epf); 98 95 99 96 struct device_driver driver; ··· 136 131 * @epc: the EPC device to which this EPF device is bound 137 132 * @epf_pf: the physical EPF device to which this virtual EPF device is bound 138 133 * @driver: the EPF driver to which this EPF device is bound 134 + * @id: Pointer to the EPF device ID 139 135 * @list: to add pci_epf as a list of PCI endpoint functions to pci_epc 140 136 * @lock: mutex to protect pci_epf_ops 141 137 * @sec_epc: the secondary EPC device to which this EPF device is bound ··· 164 158 struct pci_epc *epc; 165 159 struct pci_epf *epf_pf; 166 160 struct pci_epf_driver *driver; 161 + const struct pci_epf_device_id *id; 167 162 struct list_head list; 168 163 /* mutex to protect against concurrent access of pci_epf_ops */ 169 164 struct mutex lock; ··· 221 214 enum pci_epc_interface_type type); 222 215 int pci_epf_bind(struct pci_epf *epf); 223 216 void pci_epf_unbind(struct pci_epf *epf); 224 - struct config_group *pci_epf_type_add_cfs(struct pci_epf *epf, 225 - struct config_group *group); 226 217 int pci_epf_add_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf); 227 218 void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct 
pci_epf *epf_vf); 228 219 #endif /* __LINUX_PCI_EPF_H */
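With the `probe()` signature change above, an EPF driver receives the `pci_epf_device_id` entry that matched (also cached in `epf->id`) instead of re-walking its own `id_table` inside probe. A toy name-based match loop in that spirit (struct layout and names are illustrative only, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-in for an EPF device ID table entry (illustrative only) */
struct toy_epf_id {
	const char *name;
	unsigned long driver_data;
};

/*
 * With the new probe(epf, id) signature the core hands the driver the
 * table entry that matched, so the driver no longer has to repeat a
 * lookup like this one inside its probe callback.
 */
static const struct toy_epf_id *toy_epf_match(const struct toy_epf_id *table,
					      const char *name)
{
	for (; table->name; table++)
		if (strcmp(table->name, name) == 0)
			return table;
	return NULL;
}

static const struct toy_epf_id toy_table[] = {
	{ "toy_epf_test", 1 },
	{ "toy_epf_ntb",  2 },
	{ NULL, 0 },	/* table terminator */
};
```

Passing the matched entry down lets drivers pick per-device behavior from `driver_data` without duplicating the core's matching logic.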
+1
include/linux/pci.h
··· 1903 1903 #define pci_dev_put(dev) do { } while (0) 1904 1904 1905 1905 static inline void pci_set_master(struct pci_dev *dev) { } 1906 + static inline void pci_clear_master(struct pci_dev *dev) { } 1906 1907 static inline int pci_enable_device(struct pci_dev *dev) { return -EIO; } 1907 1908 static inline void pci_disable_device(struct pci_dev *dev) { } 1908 1909 static inline int pcim_enable_device(struct pci_dev *pdev) { return -EIO; }
+3 -1
include/linux/pci_ids.h
··· 2 2 /* 3 3 * PCI Class, Vendor and Device IDs 4 4 * 5 - * Please keep sorted. 5 + * Please keep sorted by numeric Vendor ID and Device ID. 6 6 * 7 7 * Do not add new entries to this file unless the definitions 8 8 * are shared between multiple drivers. ··· 163 163 164 164 #define PCI_DEVICE_ID_LOONGSON_HDA 0x7a07 165 165 #define PCI_DEVICE_ID_LOONGSON_HDMI 0x7a37 166 + 167 + #define PCI_VENDOR_ID_SOLIDIGM 0x025e 166 168 167 169 #define PCI_VENDOR_ID_TTTECH 0x0357 168 170 #define PCI_DEVICE_ID_TTTECH_MC322 0x000a
+1
include/uapi/linux/pci_regs.h
··· 738 738 #define PCI_EXT_CAP_ID_DVSEC 0x23 /* Designated Vendor-Specific */ 739 739 #define PCI_EXT_CAP_ID_DLF 0x25 /* Data Link Feature */ 740 740 #define PCI_EXT_CAP_ID_PL_16GT 0x26 /* Physical Layer 16.0 GT/s */ 741 + #define PCI_EXT_CAP_ID_PL_32GT 0x2A /* Physical Layer 32.0 GT/s */ 741 742 #define PCI_EXT_CAP_ID_DOE 0x2E /* Data Object Exchange */ 742 743 #define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_DOE 743 744