Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pci-v6.11-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:

- Define PCIE_RESET_CONFIG_DEVICE_WAIT_MS for the generic 100ms
required after reset before config access (Kevin Xie)

- Define PCIE_T_RRS_READY_MS for the generic 100ms required after
reset before config access (probably should be unified with
PCIE_RESET_CONFIG_DEVICE_WAIT_MS) (Damien Le Moal)

Resource management:

- Rename find_resource() to find_resource_space() to be more
descriptive (Ilpo Järvinen)

- Export find_resource_space() for use by PCI core, which needs to
learn whether there is available space for a bridge window (Ilpo
Järvinen)

- Prevent double counting of resources so window size doesn't grow on
each remove/rescan cycle (Ilpo Järvinen)

- Relax bridge window sizing algorithm so a device doesn't break
simply because it was removed and rescanned (Ilpo Järvinen)

- Evaluate the ACPI PRESERVE_BOOT_CONFIG _DSM in
pci_register_host_bridge() (not acpi_pci_root_create()) so we can
unify it with similar DT functionality (Vidya Sagar)

- Extend use of DT "linux,pci-probe-only" property so it works
per-host bridge as well as globally (Vidya Sagar)

- Unify support for ACPI PRESERVE_BOOT_CONFIG _DSM and the DT
"linux,pci-probe-only" property in pci_preserve_config() (Vidya
Sagar)

Driver binding:

- Add devres infrastructure for managed request and map of partial
BAR resources (Philipp Stanner)

- Deprecate pcim_iomap_table() because uses like
"pcim_iomap_table()[0]" have no good way to return errors (Philipp
Stanner)

- Add an always-managed pcim_request_region() for use instead of
pci_request_region() and similar, which are sometimes managed
depending on whether pcim_enable_device() has been called
previously (Philipp Stanner)

- Reimplement pcim_set_mwi() so it doesn't need to store MWI state
(Philipp Stanner)

- Add pcim_intx() for use instead of pci_intx(), which is sometimes
managed depending on whether pcim_enable_device() has been called
previously (Philipp Stanner)

- Add managed pcim_iomap_range() to allow mapping of a partial BAR
(Philipp Stanner)

- Fix a devres mapping leak in drm/vboxvideo (Philipp Stanner)

Error handling:

- Add missing bridge locking in device reset path and add a warning
for other possible lock issues (Dan Williams)

- Fix use-after-free on concurrent DPC and hot-removal (Lukas Wunner)

Power management:

- Disable AER and DPC during suspend to avoid spurious wakeups if
they share an interrupt with PME (Kai-Heng Feng)

PCIe native device hotplug:

- Detect if a device was removed or replaced during system sleep so
we don't assume a new device is the one that used to be there
(Lukas Wunner)

Virtualization:

- Add an ACS quirk for Broadcom BCM5760X multi-function NIC; it
prevents transactions between functions even though it doesn't
advertise ACS, so the functions can be attached individually via
VFIO (Ajit Khaparde)

Peer-to-peer DMA:

- Add a "pci=config_acs=" kernel command-line parameter to relax
default ACS settings to enable additional peer-to-peer
configurations. Requires expert knowledge of topology and ACS
operation (Vidya Sagar)
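The flag string described in the new kernel-parameters documentation is read most-significant-bit first, so "10x" sets bit 2, clears bit 1, and leaves bit 0 alone. As a rough illustration of those semantics (a hypothetical Python sketch, not the kernel's actual parser):

```python
# Hypothetical sketch of how a "pci=config_acs=" flags string maps onto
# the ACS capability bits documented for the parameter. Illustration
# only; the real parsing happens inside the kernel's PCI core.
ACS_CAPS = [
    "Source Validation",        # bit 0
    "Translation Blocking",     # bit 1
    "P2P Request Redirect",     # bit 2
    "P2P Completion Redirect",  # bit 3
    "Upstream Forwarding",      # bit 4
    "P2P Egress Control",       # bit 5
    "Direct Translated P2P",    # bit 6
]
ACTIONS = {"0": "force disabled", "1": "force enabled", "x": "unchanged"}

def decode_acs_flags(flags: str) -> dict:
    """Map each covered capability to its requested action.

    The string is most-significant-bit first, so the last character
    corresponds to bit 0 (Source Validation).
    """
    return {ACS_CAPS[bit]: ACTIONS[ch]
            for bit, ch in enumerate(reversed(flags))}
```

For example, decode_acs_flags("10x") reports P2P Request Redirect force enabled, Translation Blocking force disabled, and Source Validation unchanged, matching the worked example in the documentation below.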

Endpoint framework:

- Remove unused struct pci_epf_group.type_group (Christophe JAILLET)

- Fix error handling in vpci_scan_bus() and epf_ntb_epc_cleanup()
(Dan Carpenter)

- Make struct pci_epc_class constant (Greg Kroah-Hartman)

- Remove unused pci_endpoint_test_bar_{readl,writel} functions
(Jiapeng Chong)

- Rename "BME" to "Bus Master Enable" (Manivannan Sadhasivam)

- Rename struct pci_epc_event_ops.core_init() callback to epc_init()
(Manivannan Sadhasivam)

- Move DMA init to MHI .epc_init() callback for uniformity
(Manivannan Sadhasivam)

- Cancel EPF test delayed work when link goes down (Manivannan
Sadhasivam)

- Add struct pci_epc_event_ops.epc_deinit() callback for cleanup
needed on fundamental reset (Manivannan Sadhasivam)

- Add 64KB alignment to endpoint test to support Rockchip rk3588
(Niklas Cassel)

- Optimize endpoint test by using memcpy() instead of readl() (Niklas
Cassel)

Device tree bindings:

- Add generic "ats-supported" property to advertise that a PCIe Root
Complex supports ATS (Jean-Philippe Brucker)

Amazon Annapurna Labs PCIe controller driver:

- Validate IORESOURCE_BUS presence to avoid NULL pointer dereference
(Aleksandr Mishin)

Axis ARTPEC-6 PCIe controller driver:

- Rename .cpu_addr_fixup() parameter to reflect that it is a PCI
address, not a CPU address (Niklas Cassel)

Freescale i.MX6 PCIe controller driver:

- Convert to agnostic GPIO API (Andy Shevchenko)

Freescale Layerscape PCIe controller driver:

- Make struct mobiveil_rp_ops constant (Christophe JAILLET)

- Use new generic dw_pcie_ep_linkdown() to handle link-down events
(Manivannan Sadhasivam)

HiSilicon Kirin PCIe controller driver:

- Convert to agnostic GPIO API (Andy Shevchenko)

- Use _scoped() iterator for OF children to ensure refcounts are
decremented at loop exit (Javier Carrasco)

Intel VMD host bridge driver:

- Create sysfs "domain" symlink before downstream devices are exposed
to userspace by pci_bus_add_devices() (Jiwei Sun)

Loongson PCIe controller driver:

- Enable MSI when LS7A is used with new CPUs that have integrated
PCIe Root Complex, e.g., Loongson-3C6000, so downstream devices can
use MSI (Huacai Chen)

Microchip AXI PolarFlare PCIe controller driver:

- Move pcie-microchip-host.c to a new PLDA directory (Minda Chen)

- Factor PLDA generic items out to a common
plda,xpressrich3-axi-common.yaml binding (Minda Chen)

- Factor PLDA generic data structures and code out to shared
pcie-plda.h, pcie-plda-host.c (Minda Chen)

- Add PLDA generic interrupt handling with a .request_event_irq()
callback for vendor-specific events (Minda Chen)

- Add PLDA generic host init/deinit and map bus functions for use by
vendor-specific drivers (Minda Chen)

- Rework to use PLDA core (Minda Chen)

Microsoft Hyper-V host bridge driver:

- Return zero, not garbage, when reading PCI_INTERRUPT_PIN (Wei Liu)

NVIDIA Tegra194 PCIe controller driver:

- Remove unused struct tegra_pcie_soc (Dr. David Alan Gilbert)

- Set 64KB inbound ATU alignment restriction (Jon Hunter)

Qualcomm PCIe controller driver:

- Make the MHI reg region mandatory for X1E80100, since all PCIe
controllers have it (Abel Vesa)

- Prevent use of uninitialized data and possible error pointer
dereference (Dan Carpenter)

- Return error, not success, if dev_pm_opp_find_freq_floor() fails
(Dan Carpenter)

- Add Operating Performance Points (OPP) support to scale performance
state based on aggregate link bandwidth to improve SoC power
efficiency (Krishna chaitanya chundru)

- Vote for the CPU-PCIe ICC (interconnect) path to ensure it stays
active even if other drivers don't vote for it (Krishna chaitanya
chundru)

- Use devm_clk_bulk_get_all() to get all the clocks from DT to avoid
writing out all the clock names (Manivannan Sadhasivam)

- Add DT binding and driver support for the SA8775P SoC (Mrinmay
Sarkar)

- Add HDMA support for the SA8775P SoC (Mrinmay Sarkar)

- Override the SA8775P NO_SNOOP default to avoid possible memory
corruption (Mrinmay Sarkar)

- Make sure resources are disabled during PERST# assertion, even if
the link is already disabled (Manivannan Sadhasivam)

- Use new generic dw_pcie_ep_linkdown() to handle link-down events
(Manivannan Sadhasivam)

- Add DT and endpoint driver support for the SA8775P SoC (Mrinmay
Sarkar)

- Add Hyper DMA (HDMA) support for the SA8775P SoC and enable it in
the EPF MHI driver (Mrinmay Sarkar)

- Set PCIE_PARF_NO_SNOOP_OVERIDE to override the default NO_SNOOP
attribute on the SA8775P SoC (both Root Complex and Endpoint mode)
to avoid possible memory corruption (Mrinmay Sarkar)

Renesas R-Car PCIe controller driver:

- Demote WARN() to dev_warn_ratelimited() in rcar_pcie_wakeup() to
avoid unnecessary backtrace (Marek Vasut)

- Add DT and driver support for R-Car V4H (R8A779G0) host and
endpoint. This requires separate proprietary firmware (Yoshihiro
Shimoda)

Rockchip PCIe controller driver:

- Assert PERST# for 100ms after power is stable (Damien Le Moal)

- Wait PCIE_T_RRS_READY_MS (100ms) after reset before starting
configuration (Damien Le Moal)

- Use GPIOD_OUT_LOW flag while requesting ep_gpio to fix a firmware
crash on Qcom-based modems attached to the Rockpro64 board (Manivannan
Sadhasivam)

Rockchip DesignWare PCIe controller driver:

- Factor common parts of rockchip-dw-pcie DT binding to be shared by
Root Complex and Endpoint mode (Niklas Cassel)

- Add missing INTx signals to common DT binding (Niklas Cassel)

- Add eDMA items to DT binding for Endpoint controller (Niklas
Cassel)

- Fix initial dw-rockchip PERST# GPIO value to prevent unnecessary
short assert/deassert that causes issues with some WLAN controllers
(Niklas Cassel)

- Refactor dw-rockchip and add support for Endpoint mode (Niklas
Cassel)

- Call pci_epc_init_notify() and drop dw_pcie_ep_init_notify()
wrapper (Niklas Cassel)

- Add error messages in .probe() error paths to improve user
experience (Uwe Kleine-König)

Samsung Exynos PCIe controller driver:

- Use bulk clock APIs to simplify clock setup (Shradha Todi)

StarFive PCIe controller driver:

- Add DT binding and driver support for the StarFive JH7110
PLDA-based PCIe controller (Minda Chen)

Synopsys DesignWare PCIe controller driver:

- Add generic support for sending PME_Turn_Off when system suspends
(Frank Li)

- Fix incorrect interpretation of iATU slot 0 after PERST#
assert/deassert (Frank Li)

- Use msleep() instead of usleep_range() while waiting for link
(Konrad Dybcio)

- Refactor dw_pcie_edma_find_chip() to enable adding support for
Hyper DMA (HDMA) (Manivannan Sadhasivam)

- Enable drivers to supply the eDMA channel count since some can't
auto detect this (Manivannan Sadhasivam)

- Call pci_epc_init_notify() and drop dw_pcie_ep_init_notify()
wrapper (Manivannan Sadhasivam)

- Pass the eDMA mapping format directly from drivers instead of
maintaining a capability for it (Manivannan Sadhasivam)

- Add generic dw_pcie_ep_linkdown() to notify EPF drivers about
link-down events and restore non-sticky DWC registers lost on link
down (Manivannan Sadhasivam)

- Add vendor-specific "apb" reg name, interrupt names, INTx names to
generic binding (Niklas Cassel)

- Enforce DWC restriction that 64-bit BARs must start with an
even-numbered BAR (Niklas Cassel)
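The restriction follows from a 64-bit BAR occupying two consecutive 32-bit BAR slots (the second slot holds the upper address bits), so on DWC it must begin at BAR 0, 2, or 4. A minimal illustrative check (not the DWC driver's code):

```python
# Illustrative only: a 64-bit BAR consumes two consecutive BAR slots,
# so under the DWC restriction it must start at an even-numbered BAR
# (0, 2, or 4); BAR 5 leaves no following slot for the upper half.
def dwc_64bit_bar_ok(bar: int) -> bool:
    return bar % 2 == 0 and bar <= 4
```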

- Consolidate args of dw_pcie_prog_outbound_atu() into a structure
(Yoshihiro Shimoda)

- Add support for endpoints to send Message TLPs, e.g., for INTx
emulation (Yoshihiro Shimoda)

TI DRA7xx PCIe controller driver:

- Rename .cpu_addr_fixup() parameter to reflect that it is a PCI
address, not a CPU address (Niklas Cassel)

TI Keystone PCIe controller driver:

- Validate IORESOURCE_BUS presence to avoid NULL pointer dereference
(Aleksandr Mishin)

- Work around AM65x/DRA80xM Errata #i2037 that corrupts TLPs and
causes processor hangs by limiting Max_Read_Request_Size (MRRS) and
Max_Payload_Size (MPS) (Kishon Vijay Abraham I)
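For background on what "limiting" means here: the PCIe Device Control register encodes Max_Payload_Size and Max_Read_Request_Size as 3-bit fields where the size is 128 bytes shifted left by the field value. A small illustrative sketch of that encoding and of clamping a size to a limit (not the Keystone driver's code):

```python
# Illustrative sketch of the PCIe Device Control MPS/MRRS encoding:
# size = 128 << field, so 128 B encodes as 0, 256 B as 1, 512 B as 2, etc.
# A workaround like the i2037 erratum fix amounts to clamping these sizes.
def encode_size(size_bytes: int) -> int:
    """Return the 3-bit field value for a valid MPS/MRRS size."""
    field = (size_bytes // 128).bit_length() - 1
    assert 128 << field == size_bytes, "size must be a power of two >= 128"
    return field

def clamp_size(size_bytes: int, limit_bytes: int) -> int:
    """Clamp a requested MPS/MRRS size to a platform limit."""
    return min(size_bytes, limit_bytes)
```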

- Leave BAR 0 disabled for AM654x to fix a regression caused by
6ab15b5e7057 ("PCI: dwc: keystone: Convert .scan_bus() callback to
use add_bus"), which caused a 45-second boot delay (Siddharth
Vadapalli)

Xilinx Versal CPM PCIe controller driver:

- Fix overlapping bridge registers and 32-bit BAR addresses in DT
binding (Thippeswamy Havalige)

MicroSemi Switchtec management driver:

- Make struct switchtec_class constant (Greg Kroah-Hartman)

Miscellaneous:

- Remove unused struct acpi_handle_node (Dr. David Alan Gilbert)

- Add missing MODULE_DESCRIPTION() macros (Jeff Johnson)"

* tag 'pci-v6.11-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (154 commits)
PCI: loongson: Enable MSI in LS7A Root Complex
PCI: Extend ACS configurability
PCI: Add missing bridge lock to pci_bus_lock()
drm/vboxvideo: fix mapping leaks
PCI: Add managed pcim_iomap_range()
PCI: Remove legacy pcim_release()
PCI: Add managed pcim_intx()
PCI: vmd: Create domain symlink before pci_bus_add_devices()
PCI: qcom: Prevent use of uninitialized data in qcom_pcie_suspend_noirq()
PCI: qcom: Prevent potential error pointer dereference
PCI: qcom: Fix missing error code in qcom_pcie_probe()
PCI: Give pcim_set_mwi() its own devres cleanup callback
PCI: Move struct pci_devres.pinned bit to struct pci_dev
PCI: Remove struct pci_devres.enabled status bit
PCI: Document hybrid devres hazards
PCI: Add managed pcim_request_region()
PCI: Deprecate pcim_iomap_table(), pcim_iomap_regions_request_all()
PCI: Add managed partial-BAR request and map infrastructure
PCI: Add devres helpers for iomap table
PCI: Add and use devres helper for bit masks
...

+5202 -1932
+2 -2
Documentation/PCI/endpoint/pci-endpoint.rst
···
     * bind: ops to perform when a EPC device has been bound to EPF device
     * unbind: ops to perform when a binding has been lost between a EPC
       device and EPF device
-    * linkup: ops to perform when the EPC device has established a
-      connection with a host system
+    * add_cfs: optional ops to create function specific configfs
+      attributes
 
  The PCI Function driver can then register the PCI EPF driver by using
  pci_epf_register_driver().
+1 -1
Documentation/PCI/pciebus-howto.rst
···
 
  static struct pcie_port_service_driver root_aerdrv = {
  	.name = (char *)device_name,
- 	.id_table = &service_id[0],
+ 	.id_table = service_id,
 
  	.probe = aerdrv_load,
  	.remove = aerdrv_unload,
+32
Documentation/admin-guide/kernel-parameters.txt
···
 		bridges without forcing it upstream. Note:
 		this removes isolation between devices and
 		may put more devices in an IOMMU group.
+	config_acs=
+		Format:
+		<ACS flags>@<pci_dev>[; ...]
+		Specify one or more PCI devices (in the format
+		specified above) optionally prepended with flags
+		and separated by semicolons. The respective
+		capabilities will be enabled, disabled or
+		unchanged based on what is specified in
+		flags.
+
+		ACS Flags is defined as follows:
+		  bit-0 : ACS Source Validation
+		  bit-1 : ACS Translation Blocking
+		  bit-2 : ACS P2P Request Redirect
+		  bit-3 : ACS P2P Completion Redirect
+		  bit-4 : ACS Upstream Forwarding
+		  bit-5 : ACS P2P Egress Control
+		  bit-6 : ACS Direct Translated P2P
+		Each bit can be marked as:
+		  '0' – force disabled
+		  '1' – force enabled
+		  'x' – unchanged
+		For example,
+		  pci=config_acs=10x
+		would configure all devices that support
+		ACS to enable P2P Request Redirect, disable
+		Translation Blocking, and leave Source
+		Validation unchanged from whatever power-up
+		or firmware set it to.
+
+		Note: this may remove isolation between devices
+		and may put more devices in an IOMMU group.
 	force_floating	[S390] Force usage of floating interrupts.
 	nomio		[S390] Do not use MIO instructions.
 	norid		[S390] ignore the RID field and force use of
+29
Documentation/devicetree/bindings/pci/mediatek,mt7621-pcie.yaml
···
   MediaTek MT7621 PCIe subsys supports a single Root Complex (RC)
   with 3 Root Ports. Each Root Port supports a Gen1 1-lane Link
 
+  MT7621 PCIe HOST Topology
+
+                                  .-------.
+                                  |       |
+                                  |  CPU  |
+                                  |       |
+                                  '-------'
+                                      |
+                                      |
+                                      |
+                                      v
+                            .------------------.
+                .-----------| HOST/PCI Bridge  |------------.
+                |           '------------------'            | Type1
+           BUS0 |                     |                      | Access
+                v                     v                      v On Bus0
+        .-------------.       .-------------.       .-------------.
+        | VIRTUAL P2P |       | VIRTUAL P2P |       | VIRTUAL P2P |
+        |    BUS0     |       |    BUS0     |       |    BUS0     |
+        |    DEV0     |       |    DEV1     |       |    DEV2     |
+        '-------------'       '-------------'       '-------------'
+   Type0       |        Type0        |        Type0        |
+   Access BUS1 |        Access BUS2  |        Access BUS3  |
+   On Bus1     v        On Bus2      v        On Bus3      v
+     .----------.         .----------.          .----------.
+     | Device 0 |         | Device 0 |          | Device 0 |
+     |  Func 0  |         |  Func 0  |          |  Func 0  |
+     '----------'         '----------'          '----------'
+
 allOf:
   - $ref: /schemas/pci/pci-host-bridge.yaml#
 
+1 -54
Documentation/devicetree/bindings/pci/microchip,pcie-host.yaml
···
   - Daire McNamara <daire.mcnamara@microchip.com>
 
 allOf:
-  - $ref: /schemas/pci/pci-host-bridge.yaml#
+  - $ref: plda,xpressrich3-axi-common.yaml#
   - $ref: /schemas/interrupt-controller/msi-controller.yaml#
 
 properties:
   compatible:
     const: microchip,pcie-host-1.0 # PolarFire
-
-  reg:
-    maxItems: 2
-
-  reg-names:
-    items:
-      - const: cfg
-      - const: apb
 
   clocks:
     description:
···
     items:
       pattern: '^fic[0-3]$'
 
-  interrupts:
-    minItems: 1
-    items:
-      - description: PCIe host controller
-      - description: builtin MSI controller
-
-  interrupt-names:
-    minItems: 1
-    items:
-      - const: pcie
-      - const: msi
-
   ranges:
     minItems: 1
     maxItems: 3
···
   dma-ranges:
     minItems: 1
     maxItems: 6
-
-  msi-controller:
-    description: Identifies the node as an MSI controller.
-
-  msi-parent:
-    description: MSI controller the device is capable of using.
-
-  interrupt-controller:
-    type: object
-    properties:
-      '#address-cells':
-        const: 0
-
-      '#interrupt-cells':
-        const: 1
-
-      interrupt-controller: true
-
-    required:
-      - '#address-cells'
-      - '#interrupt-cells'
-      - interrupt-controller
-
-    additionalProperties: false
-
-required:
-  - reg
-  - reg-names
-  - "#interrupt-cells"
-  - interrupts
-  - interrupt-map-mask
-  - interrupt-map
-  - msi-controller
 
 unevaluatedProperties: false
 
+75
Documentation/devicetree/bindings/pci/plda,xpressrich3-axi-common.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/plda,xpressrich3-axi-common.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: PLDA XpressRICH PCIe host common properties
+
+maintainers:
+  - Daire McNamara <daire.mcnamara@microchip.com>
+  - Kevin Xie <kevin.xie@starfivetech.com>
+
+description:
+  Generic PLDA XpressRICH PCIe host common properties.
+
+allOf:
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
+
+properties:
+  reg:
+    maxItems: 2
+
+  reg-names:
+    items:
+      - const: cfg
+      - const: apb
+
+  interrupts:
+    minItems: 1
+    items:
+      - description: PCIe host controller
+      - description: builtin MSI controller
+
+  interrupt-names:
+    minItems: 1
+    items:
+      - const: pcie
+      - const: msi
+
+  msi-controller:
+    description: Identifies the node as an MSI controller.
+
+  msi-parent:
+    description: MSI controller the device is capable of using.
+
+  interrupt-controller:
+    type: object
+    properties:
+      '#address-cells':
+        const: 0
+
+      '#interrupt-cells':
+        const: 1
+
+      interrupt-controller: true
+
+    required:
+      - '#address-cells'
+      - '#interrupt-cells'
+      - interrupt-controller
+
+    additionalProperties: false
+
+required:
+  - reg
+  - reg-names
+  - interrupts
+  - msi-controller
+  - "#interrupt-cells"
+  - interrupt-map-mask
+  - interrupt-map
+
+additionalProperties: true
+
+...
+62 -2
Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml
···
   compatible:
     oneOf:
       - enum:
+          - qcom,sa8775p-pcie-ep
           - qcom,sdx55-pcie-ep
           - qcom,sm8450-pcie-ep
       - items:
···
           - const: qcom,sdx55-pcie-ep
 
   reg:
+    minItems: 6
     items:
       - description: Qualcomm-specific PARF configuration registers
       - description: DesignWare PCIe registers
···
       - description: Address Translation Unit (ATU) registers
       - description: Memory region used to map remote RC address space
       - description: BAR memory region
+      - description: DMA register space
 
   reg-names:
+    minItems: 6
     items:
       - const: parf
       - const: dbi
···
       - const: atu
       - const: addr_space
       - const: mmio
+      - const: dma
 
   clocks:
-    minItems: 7
+    minItems: 5
     maxItems: 8
 
   clock-names:
-    minItems: 7
+    minItems: 5
     maxItems: 8
 
   qcom,perst-regs:
···
       - description: Perst separation enable offset
 
   interrupts:
+    minItems: 2
     items:
       - description: PCIe Global interrupt
       - description: PCIe Doorbell interrupt
+      - description: DMA interrupt
 
   interrupt-names:
+    minItems: 2
     items:
       - const: global
       - const: doorbell
+      - const: dma
 
   reset-gpios:
     description: GPIO used as PERST# input signal
···
               - qcom,sdx55-pcie-ep
     then:
       properties:
+        reg:
+          maxItems: 6
+        reg-names:
+          maxItems: 6
         clocks:
           items:
             - description: PCIe Auxiliary clock
···
             - const: slave_q2a
             - const: sleep
             - const: ref
+        interrupts:
+          maxItems: 2
+        interrupt-names:
+          maxItems: 2
 
   - if:
       properties:
···
               - qcom,sm8450-pcie-ep
     then:
       properties:
+        reg:
+          maxItems: 6
+        reg-names:
+          maxItems: 6
         clocks:
           items:
             - description: PCIe Auxiliary clock
···
             - const: ref
             - const: ddrss_sf_tbu
             - const: aggre_noc_axi
+        interrupts:
+          maxItems: 2
+        interrupt-names:
+          maxItems: 2
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - qcom,sa8775p-pcie-ep
+    then:
+      properties:
+        reg:
+          minItems: 7
+          maxItems: 7
+        reg-names:
+          minItems: 7
+          maxItems: 7
+        clocks:
+          items:
+            - description: PCIe Auxiliary clock
+            - description: PCIe CFG AHB clock
+            - description: PCIe Master AXI clock
+            - description: PCIe Slave AXI clock
+            - description: PCIe Slave Q2A AXI clock
+        clock-names:
+          items:
+            - const: aux
+            - const: cfg
+            - const: bus_master
+            - const: bus_slave
+            - const: slave_q2a
+        interrupts:
+          minItems: 3
+          maxItems: 3
+        interrupt-names:
+          minItems: 3
+          maxItems: 3
 
 unevaluatedProperties: false
 
+4
Documentation/devicetree/bindings/pci/qcom,pcie-sm8450.yaml
···
       - const: msi6
       - const: msi7
 
+  operating-points-v2: true
+  opp-table:
+    type: object
+
   resets:
     maxItems: 1
 
+1 -2
Documentation/devicetree/bindings/pci/qcom,pcie-x1e80100.yaml
···
     const: qcom,pcie-x1e80100
 
   reg:
-    minItems: 5
+    minItems: 6
     maxItems: 6
 
   reg-names:
-    minItems: 5
     items:
       - const: parf # Qualcomm specific registers
       - const: dbi # DesignWare PCIe registers
+126
Documentation/devicetree/bindings/pci/rockchip-dw-pcie-common.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/rockchip-dw-pcie-common.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: DesignWare based PCIe RC/EP controller on Rockchip SoCs
+
+maintainers:
+  - Shawn Lin <shawn.lin@rock-chips.com>
+  - Simon Xue <xxm@rock-chips.com>
+  - Heiko Stuebner <heiko@sntech.de>
+
+description: |+
+  Generic properties for the DesignWare based PCIe RC/EP controller on Rockchip
+  SoCs.
+
+properties:
+  clocks:
+    minItems: 5
+    items:
+      - description: AHB clock for PCIe master
+      - description: AHB clock for PCIe slave
+      - description: AHB clock for PCIe dbi
+      - description: APB clock for PCIe
+      - description: Auxiliary clock for PCIe
+      - description: PIPE clock
+      - description: Reference clock for PCIe
+
+  clock-names:
+    minItems: 5
+    items:
+      - const: aclk_mst
+      - const: aclk_slv
+      - const: aclk_dbi
+      - const: pclk
+      - const: aux
+      - const: pipe
+      - const: ref
+
+  interrupts:
+    minItems: 5
+    items:
+      - description:
+          Combined system interrupt, which is used to signal the following
+          interrupts - phy_link_up, dll_link_up, link_req_rst_not, hp_pme,
+          hp, hp_msi, link_auto_bw, link_auto_bw_msi, bw_mgt, bw_mgt_msi,
+          edma_wr, edma_rd, dpa_sub_upd, rbar_update, link_eq_req, ep_elbi_app
+      - description:
+          Combined PM interrupt, which is used to signal the following
+          interrupts - linkst_in_l1sub, linkst_in_l1, linkst_in_l2,
+          linkst_in_l0s, linkst_out_l1sub, linkst_out_l1, linkst_out_l2,
+          linkst_out_l0s, pm_dstate_update
+      - description:
+          Combined message interrupt, which is used to signal the following
+          interrupts - ven_msg, unlock_msg, ltr_msg, cfg_pme, cfg_pme_msi,
+          pm_pme, pm_to_ack, pm_turnoff, obff_idle, obff_obff, obff_cpu_active
+      - description:
+          Combined legacy interrupt, which is used to signal the following
+          interrupts - inta, intb, intc, intd, tx_inta, tx_intb, tx_intc,
+          tx_intd
+      - description:
+          Combined error interrupt, which is used to signal the following
+          interrupts - aer_rc_err, aer_rc_err_msi, rx_cpl_timeout,
+          tx_cpl_timeout, cor_err_sent, nf_err_sent, f_err_sent, cor_err_rx,
+          nf_err_rx, f_err_rx, radm_qoverflow
+      - description:
+          eDMA write channel 0 interrupt
+      - description:
+          eDMA write channel 1 interrupt
+      - description:
+          eDMA read channel 0 interrupt
+      - description:
+          eDMA read channel 1 interrupt
+
+  interrupt-names:
+    minItems: 5
+    items:
+      - const: sys
+      - const: pmc
+      - const: msg
+      - const: legacy
+      - const: err
+      - const: dma0
+      - const: dma1
+      - const: dma2
+      - const: dma3
+
+  num-lanes: true
+
+  phys:
+    maxItems: 1
+
+  phy-names:
+    const: pcie-phy
+
+  power-domains:
+    maxItems: 1
+
+  resets:
+    minItems: 1
+    maxItems: 2
+
+  reset-names:
+    oneOf:
+      - const: pipe
+      - items:
+          - const: pwr
+          - const: pipe
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - clocks
+  - clock-names
+  - num-lanes
+  - phys
+  - phy-names
+  - power-domains
+  - resets
+  - reset-names
+
+additionalProperties: true
+
+...
+95
Documentation/devicetree/bindings/pci/rockchip-dw-pcie-ep.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/rockchip-dw-pcie-ep.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: DesignWare based PCIe Endpoint controller on Rockchip SoCs
+
+maintainers:
+  - Niklas Cassel <cassel@kernel.org>
+
+description: |+
+  RK3588 SoC PCIe Endpoint controller is based on the Synopsys DesignWare
+  PCIe IP and thus inherits all the common properties defined in
+  snps,dw-pcie-ep.yaml.
+
+allOf:
+  - $ref: /schemas/pci/snps,dw-pcie-ep.yaml#
+  - $ref: /schemas/pci/rockchip-dw-pcie-common.yaml#
+
+properties:
+  compatible:
+    enum:
+      - rockchip,rk3568-pcie-ep
+      - rockchip,rk3588-pcie-ep
+
+  reg:
+    items:
+      - description: Data Bus Interface (DBI) registers
+      - description: Data Bus Interface (DBI) shadow registers
+      - description: Rockchip designed configuration registers
+      - description: Memory region used to map remote RC address space
+      - description: Internal Address Translation Unit (iATU) registers
+
+  reg-names:
+    items:
+      - const: dbi
+      - const: dbi2
+      - const: apb
+      - const: addr_space
+      - const: atu
+
+required:
+  - interrupts
+  - interrupt-names
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/rockchip,rk3588-cru.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/interrupt-controller/irq.h>
+    #include <dt-bindings/power/rk3588-power.h>
+    #include <dt-bindings/reset/rockchip,rk3588-cru.h>
+
+    soc {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        pcie3x4_ep: pcie-ep@fe150000 {
+            compatible = "rockchip,rk3588-pcie-ep";
+            reg = <0xa 0x40000000 0x0 0x00100000>,
+                  <0xa 0x40100000 0x0 0x00100000>,
+                  <0x0 0xfe150000 0x0 0x00010000>,
+                  <0x9 0x00000000 0x0 0x40000000>,
+                  <0xa 0x40300000 0x0 0x00100000>;
+            reg-names = "dbi", "dbi2", "apb", "addr_space", "atu";
+            clocks = <&cru ACLK_PCIE_4L_MSTR>, <&cru ACLK_PCIE_4L_SLV>,
+                     <&cru ACLK_PCIE_4L_DBI>, <&cru PCLK_PCIE_4L>,
+                     <&cru CLK_PCIE_AUX0>, <&cru CLK_PCIE4L_PIPE>;
+            clock-names = "aclk_mst", "aclk_slv",
+                          "aclk_dbi", "pclk",
+                          "aux", "pipe";
+            interrupts = <GIC_SPI 263 IRQ_TYPE_LEVEL_HIGH 0>,
+                         <GIC_SPI 262 IRQ_TYPE_LEVEL_HIGH 0>,
+                         <GIC_SPI 261 IRQ_TYPE_LEVEL_HIGH 0>,
+                         <GIC_SPI 260 IRQ_TYPE_LEVEL_HIGH 0>,
+                         <GIC_SPI 259 IRQ_TYPE_LEVEL_HIGH 0>,
+                         <GIC_SPI 271 IRQ_TYPE_LEVEL_HIGH 0>,
+                         <GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH 0>,
+                         <GIC_SPI 269 IRQ_TYPE_LEVEL_HIGH 0>,
+                         <GIC_SPI 270 IRQ_TYPE_LEVEL_HIGH 0>;
+            interrupt-names = "sys", "pmc", "msg", "legacy", "err",
+                              "dma0", "dma1", "dma2", "dma3";
+            max-link-speed = <3>;
+            num-lanes = <4>;
+            phys = <&pcie30phy>;
+            phy-names = "pcie-phy";
+            power-domains = <&power RK3588_PD_PCIE>;
+            resets = <&cru SRST_PCIE0_POWER_UP>, <&cru SRST_P_PCIE0>;
+            reset-names = "pwr", "pipe";
+        };
+    };
+...
+3 -90
Documentation/devicetree/bindings/pci/rockchip-dw-pcie.yaml
···
 $id: http://devicetree.org/schemas/pci/rockchip-dw-pcie.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#
 
-title: DesignWare based PCIe controller on Rockchip SoCs
+title: DesignWare based PCIe Root Complex controller on Rockchip SoCs
 
 maintainers:
   - Shawn Lin <shawn.lin@rock-chips.com>
···
   - Heiko Stuebner <heiko@sntech.de>
 
 description: |+
-  RK3568 SoC PCIe host controller is based on the Synopsys DesignWare
+  RK3568 SoC PCIe Root Complex controller is based on the Synopsys DesignWare
   PCIe IP and thus inherits all the common properties defined in
   snps,dw-pcie.yaml.
 
 allOf:
   - $ref: /schemas/pci/snps,dw-pcie.yaml#
+  - $ref: /schemas/pci/rockchip-dw-pcie-common.yaml#
 
 properties:
   compatible:
···
       - const: dbi
       - const: apb
       - const: config
-
-  clocks:
-    minItems: 5
-    items:
-      - description: AHB clock for PCIe master
-      - description: AHB clock for PCIe slave
-      - description: AHB clock for PCIe dbi
-      - description: APB clock for PCIe
-      - description: Auxiliary clock for PCIe
-      - description: PIPE clock
-      - description: Reference clock for PCIe
-
-  clock-names:
-    minItems: 5
-    items:
-      - const: aclk_mst
-      - const: aclk_slv
-      - const: aclk_dbi
-      - const: pclk
-      - const: aux
-      - const: pipe
-      - const: ref
-
-  interrupts:
-    items:
-      - description:
-          Combined system interrupt, which is used to signal the following
-          interrupts - phy_link_up, dll_link_up, link_req_rst_not, hp_pme,
-          hp, hp_msi, link_auto_bw, link_auto_bw_msi, bw_mgt, bw_mgt_msi,
-          edma_wr, edma_rd, dpa_sub_upd, rbar_update, link_eq_req, ep_elbi_app
-      - description:
-          Combined PM interrupt, which is used to signal the following
-          interrupts - linkst_in_l1sub, linkst_in_l1, linkst_in_l2,
-          linkst_in_l0s, linkst_out_l1sub, linkst_out_l1, linkst_out_l2,
-          linkst_out_l0s, pm_dstate_update
-      - description:
-          Combined message interrupt, which is used to signal the following
-          interrupts - ven_msg, unlock_msg, ltr_msg, cfg_pme, cfg_pme_msi,
-          pm_pme, pm_to_ack, pm_turnoff, obff_idle, obff_obff, obff_cpu_active
-      - description:
-          Combined legacy interrupt, which is used to signal the following
-          interrupts - inta, intb, intc, intd
-      - description:
-          Combined error interrupt, which is used to signal the following
-          interrupts - aer_rc_err, aer_rc_err_msi, rx_cpl_timeout,
-          tx_cpl_timeout, cor_err_sent, nf_err_sent, f_err_sent, cor_err_rx,
-          nf_err_rx, f_err_rx, radm_qoverflow
-
-  interrupt-names:
-    items:
-      - const: sys
-      - const: pmc
-      - const: msg
-      - const: legacy
-      - const: err
 
   legacy-interrupt-controller:
     description: Interrupt controller node for handling legacy PCI interrupts.
···
 
   msi-map: true
 
-  num-lanes: true
-
-  phys:
-    maxItems: 1
-
-  phy-names:
-    const: pcie-phy
-
-  power-domains:
-    maxItems: 1
-
   ranges:
     minItems: 2
     maxItems: 3
 
-  resets:
-    minItems: 1
-    maxItems: 2
-
-  reset-names:
-    oneOf:
-      - const: pipe
-      - items:
-          - const: pwr
-          - const: pipe
-
   vpcie3v3-supply: true
 
 required:
-  - compatible
-  - reg
-  - reg-names
-  - clocks
-  - clock-names
   - msi-map
-  - num-lanes
-  - phys
-  - phy-names
-  - power-domains
-  - resets
-  - reset-names
 
 unevaluatedProperties: false
 
+11 -2
Documentation/devicetree/bindings/pci/snps,dw-pcie-ep.yaml
··· 100 100 for new bindings. 101 101 oneOf: 102 102 - description: See native 'elbi/app' CSR region for details. 103 - enum: [ link, appl ] 103 + enum: [ apb, link, appl ] 104 104 - description: See native 'atu' CSR region for details. 105 105 enum: [ atu_dma ] 106 106 allOf: ··· 152 152 events basis. 153 153 const: app 154 154 - description: 155 + Interrupts triggered when the controller itself (in Endpoint mode) 156 + has sent an Assert_INT{A,B,C,D}/Deassert_INT{A,B,C,D} message to 157 + the upstream device. 158 + pattern: "^tx_int(a|b|c|d)$" 159 + - description: 160 + Combined interrupt signal raised when the controller has sent an 161 + Assert_INT{A,B,C,D} message. See "^tx_int(a|b|c|d)$" for details. 162 + const: legacy 163 + - description: 155 164 Vendor-specific IRQ names. Consider using the generic names above 156 165 for new bindings. 157 166 oneOf: 158 167 - description: See native "app" IRQ for details 159 168 enum: [ intr, sys, pmc, msg, err ] 160 169 161 170 max-functions: 162 171 maximum: 32
+120
Documentation/devicetree/bindings/pci/starfive,jh7110-pcie.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/starfive,jh7110-pcie.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: StarFive JH7110 PCIe host controller 8 + 9 + maintainers: 10 + - Kevin Xie <kevin.xie@starfivetech.com> 11 + 12 + allOf: 13 + - $ref: plda,xpressrich3-axi-common.yaml# 14 + 15 + properties: 16 + compatible: 17 + const: starfive,jh7110-pcie 18 + 19 + clocks: 20 + items: 21 + - description: NOC bus clock 22 + - description: Transport layer clock 23 + - description: AXI MST0 clock 24 + - description: APB clock 25 + 26 + clock-names: 27 + items: 28 + - const: noc 29 + - const: tl 30 + - const: axi_mst0 31 + - const: apb 32 + 33 + resets: 34 + items: 35 + - description: AXI MST0 reset 36 + - description: AXI SLAVE0 reset 37 + - description: AXI SLAVE reset 38 + - description: PCIE BRIDGE reset 39 + - description: PCIE CORE reset 40 + - description: PCIE APB reset 41 + 42 + reset-names: 43 + items: 44 + - const: mst0 45 + - const: slv0 46 + - const: slv 47 + - const: brg 48 + - const: core 49 + - const: apb 50 + 51 + starfive,stg-syscon: 52 + $ref: /schemas/types.yaml#/definitions/phandle-array 53 + description: 54 + The phandle to System Register Controller syscon node. 55 + 56 + perst-gpios: 57 + description: GPIO controlled connection to PERST# signal 58 + maxItems: 1 59 + 60 + phys: 61 + description: 62 + Specified PHY is attached to PCIe controller. 
63 + maxItems: 1 64 + 65 + required: 66 + - clocks 67 + - resets 68 + - starfive,stg-syscon 69 + 70 + unevaluatedProperties: false 71 + 72 + examples: 73 + - | 74 + #include <dt-bindings/gpio/gpio.h> 75 + soc { 76 + #address-cells = <2>; 77 + #size-cells = <2>; 78 + 79 + pcie@940000000 { 80 + compatible = "starfive,jh7110-pcie"; 81 + reg = <0x9 0x40000000 0x0 0x10000000>, 82 + <0x0 0x2b000000 0x0 0x1000000>; 83 + reg-names = "cfg", "apb"; 84 + #address-cells = <3>; 85 + #size-cells = <2>; 86 + #interrupt-cells = <1>; 87 + device_type = "pci"; 88 + ranges = <0x82000000 0x0 0x30000000 0x0 0x30000000 0x0 0x08000000>, 89 + <0xc3000000 0x9 0x00000000 0x9 0x00000000 0x0 0x40000000>; 90 + starfive,stg-syscon = <&stg_syscon>; 91 + bus-range = <0x0 0xff>; 92 + interrupt-parent = <&plic>; 93 + interrupts = <56>; 94 + interrupt-map-mask = <0x0 0x0 0x0 0x7>; 95 + interrupt-map = <0x0 0x0 0x0 0x1 &pcie_intc0 0x1>, 96 + <0x0 0x0 0x0 0x2 &pcie_intc0 0x2>, 97 + <0x0 0x0 0x0 0x3 &pcie_intc0 0x3>, 98 + <0x0 0x0 0x0 0x4 &pcie_intc0 0x4>; 99 + msi-controller; 100 + clocks = <&syscrg 86>, 101 + <&stgcrg 10>, 102 + <&stgcrg 8>, 103 + <&stgcrg 9>; 104 + clock-names = "noc", "tl", "axi_mst0", "apb"; 105 + resets = <&stgcrg 11>, 106 + <&stgcrg 12>, 107 + <&stgcrg 13>, 108 + <&stgcrg 14>, 109 + <&stgcrg 15>, 110 + <&stgcrg 16>; 111 + perst-gpios = <&gpios 26 GPIO_ACTIVE_LOW>; 112 + phys = <&pciephy0>; 113 + 114 + pcie_intc0: interrupt-controller { 115 + #address-cells = <0>; 116 + #interrupt-cells = <1>; 117 + interrupt-controller; 118 + }; 119 + }; 120 + };
+1 -1
Documentation/devicetree/bindings/pci/xilinx-versal-cpm.yaml
··· 92 92 <0 0 0 3 &pcie_intc_0 2>, 93 93 <0 0 0 4 &pcie_intc_0 3>; 94 94 bus-range = <0x00 0xff>; 95 - ranges = <0x02000000 0x0 0xe0000000 0x0 0xe0000000 0x0 0x10000000>, 95 + ranges = <0x02000000 0x0 0xe0010000 0x0 0xe0010000 0x0 0x10000000>, 96 96 <0x43000000 0x80 0x00000000 0x80 0x00000000 0x0 0x80000000>; 97 97 msi-map = <0x0 &its_gic 0x0 0x10000>; 98 98 reg = <0x0 0xfca10000 0x0 0x1000>,
+1 -1
Documentation/translations/zh_CN/PCI/pciebus-howto.rst
··· 124 124 125 125 static struct pcie_port_service_driver root_aerdrv = { 126 126 .name = (char *)device_name, 127 - .id_table = &service_id[0], 127 + .id_table = service_id, 128 128 129 129 .probe = aerdrv_load, 130 130 .remove = aerdrv_unload,
+17 -2
MAINTAINERS
··· 17456 17456 F: Documentation/devicetree/bindings/pci/layerscape-pcie-gen4.txt 17457 17457 F: drivers/pci/controller/mobiveil/pcie-layerscape-gen4.c 17458 17458 17459 + PCI DRIVER FOR PLDA PCIE IP 17460 + M: Daire McNamara <daire.mcnamara@microchip.com> 17461 + L: linux-pci@vger.kernel.org 17462 + S: Maintained 17463 + F: Documentation/devicetree/bindings/pci/plda,xpressrich3-axi-common.yaml 17464 + F: drivers/pci/controller/plda/pcie-plda-host.c 17465 + F: drivers/pci/controller/plda/pcie-plda.h 17466 + 17459 17467 PCI DRIVER FOR RENESAS R-CAR 17460 17468 M: Marek Vasut <marek.vasut+renesas@gmail.com> 17461 17469 M: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> ··· 17702 17694 L: linux-pci@vger.kernel.org 17703 17695 S: Supported 17704 17696 F: Documentation/devicetree/bindings/pci/microchip* 17705 - F: drivers/pci/controller/*microchip* 17697 + F: drivers/pci/controller/plda/*microchip* 17706 17698 17707 17699 PCIE DRIVER FOR QUALCOMM MSM 17708 17700 M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> ··· 17731 17723 L: linux-pci@vger.kernel.org 17732 17724 S: Maintained 17733 17725 F: drivers/pci/controller/dwc/*spear* 17726 + 17727 + PCIE DRIVER FOR STARFIVE JH71x0 17728 + M: Kevin Xie <kevin.xie@starfivetech.com> 17729 + L: linux-pci@vger.kernel.org 17730 + S: Maintained 17731 + F: Documentation/devicetree/bindings/pci/starfive,jh7110-pcie.yaml 17732 + F: drivers/pci/controller/plda/pcie-starfive.c 17734 17733 17735 17734 PCIE ENDPOINT DRIVER FOR QUALCOMM 17736 17735 M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> ··· 19579 19564 F: drivers/firmware/microchip/mpfs-auto-update.c 19580 19565 F: drivers/i2c/busses/i2c-microchip-corei2c.c 19581 19566 F: drivers/mailbox/mailbox-mpfs.c 19582 - F: drivers/pci/controller/pcie-microchip-host.c 19567 + F: drivers/pci/controller/plda/pcie-microchip-host.c 19583 19568 F: drivers/pwm/pwm-microchip-core.c 19584 19569 F: drivers/reset/reset-mpfs.c 19585 19570 F: drivers/rtc/rtc-mpfs.c
-17
drivers/acpi/pci_root.c
··· 293 293 } 294 294 EXPORT_SYMBOL_GPL(acpi_pci_find_root); 295 295 296 - struct acpi_handle_node { 297 - struct list_head node; 298 - acpi_handle handle; 299 - }; 300 - 301 296 /** 302 297 * acpi_get_pci_dev - convert ACPI CA handle to struct pci_dev 303 298 * @handle: the handle in question ··· 1003 1008 int node = acpi_get_node(device->handle); 1004 1009 struct pci_bus *bus; 1005 1010 struct pci_host_bridge *host_bridge; 1006 - union acpi_object *obj; 1007 1011 1008 1012 info->root = root; 1009 1013 info->bridge = device; ··· 1043 1049 1044 1050 if (!(root->osc_ext_control_set & OSC_CXL_ERROR_REPORTING_CONTROL)) 1045 1051 host_bridge->native_cxl_error = 0; 1046 - 1047 - /* 1048 - * Evaluate the "PCI Boot Configuration" _DSM Function. If it 1049 - * exists and returns 0, we must preserve any PCI resource 1050 - * assignments made by firmware for this host bridge. 1051 - */ 1052 - obj = acpi_evaluate_dsm_typed(ACPI_HANDLE(bus->bridge), &pci_acpi_dsm_guid, 1, 1053 - DSM_PCI_PRESERVE_BOOT_CONFIG, NULL, ACPI_TYPE_INTEGER); 1054 - if (obj && obj->integer.value == 0) 1055 - host_bridge->preserve_config = 1; 1056 - ACPI_FREE(obj); 1057 1052 1058 1053 acpi_dev_power_up_children_with_adr(device); 1059 1054
+9 -11
drivers/gpu/drm/vboxvideo/vbox_main.c
··· 42 42 /* Take a command buffer for each screen from the end of usable VRAM. */ 43 43 vbox->available_vram_size -= vbox->num_crtcs * VBVA_MIN_BUFFER_SIZE; 44 44 45 - vbox->vbva_buffers = pci_iomap_range(pdev, 0, 46 - vbox->available_vram_size, 47 - vbox->num_crtcs * 48 - VBVA_MIN_BUFFER_SIZE); 49 - if (!vbox->vbva_buffers) 50 - return -ENOMEM; 45 + vbox->vbva_buffers = pcim_iomap_range( 46 + pdev, 0, vbox->available_vram_size, 47 + vbox->num_crtcs * VBVA_MIN_BUFFER_SIZE); 48 + if (IS_ERR(vbox->vbva_buffers)) 49 + return PTR_ERR(vbox->vbva_buffers); 51 50 52 51 for (i = 0; i < vbox->num_crtcs; ++i) { 53 52 vbva_setup_buffer_context(&vbox->vbva_info[i], ··· 115 116 DRM_INFO("VRAM %08x\n", vbox->full_vram_size); 116 117 117 118 /* Map guest-heap at end of vram */ 118 - vbox->guest_heap = 119 - pci_iomap_range(pdev, 0, GUEST_HEAP_OFFSET(vbox), 120 - GUEST_HEAP_SIZE); 121 - if (!vbox->guest_heap) 122 - return -ENOMEM; 119 + vbox->guest_heap = pcim_iomap_range(pdev, 0, 120 + GUEST_HEAP_OFFSET(vbox), GUEST_HEAP_SIZE); 121 + if (IS_ERR(vbox->guest_heap)) 122 + return PTR_ERR(vbox->guest_heap); 123 123 124 124 /* Create guest-heap mem-pool use 2^4 = 16 byte chunks */ 125 125 vbox->guest_pool = devm_gen_pool_create(vbox->ddev.dev, 4, -1,
+58 -29
drivers/misc/pci_endpoint_test.c
··· 7 7 */ 8 8 9 9 #include <linux/crc32.h> 10 + #include <linux/cleanup.h> 10 11 #include <linux/delay.h> 11 12 #include <linux/fs.h> 12 13 #include <linux/io.h> ··· 85 84 #define PCI_DEVICE_ID_RENESAS_R8A774E1 0x0025 86 85 #define PCI_DEVICE_ID_RENESAS_R8A779F0 0x0031 87 86 87 + #define PCI_VENDOR_ID_ROCKCHIP 0x1d87 88 + #define PCI_DEVICE_ID_ROCKCHIP_RK3588 0x3588 89 + 88 90 static DEFINE_IDA(pci_endpoint_test_ida); 89 91 90 92 #define to_endpoint_test(priv) container_of((priv), struct pci_endpoint_test, \ ··· 142 138 u32 offset, u32 value) 143 139 { 144 140 writel(value, test->base + offset); 145 - } 146 - 147 - static inline u32 pci_endpoint_test_bar_readl(struct pci_endpoint_test *test, 148 - int bar, int offset) 149 - { 150 - return readl(test->bar[bar] + offset); 151 - } 152 - 153 - static inline void pci_endpoint_test_bar_writel(struct pci_endpoint_test *test, 154 - int bar, u32 offset, u32 value) 155 - { 156 - writel(value, test->bar[bar] + offset); 157 141 } 158 142 159 143 static irqreturn_t pci_endpoint_test_irqhandler(int irq, void *dev_id) ··· 264 272 0xA5A5A5A5, 265 273 }; 266 274 275 + static int pci_endpoint_test_bar_memcmp(struct pci_endpoint_test *test, 276 + enum pci_barno barno, int offset, 277 + void *write_buf, void *read_buf, 278 + int size) 279 + { 280 + memset(write_buf, bar_test_pattern[barno], size); 281 + memcpy_toio(test->bar[barno] + offset, write_buf, size); 282 + 283 + memcpy_fromio(read_buf, test->bar[barno] + offset, size); 284 + 285 + return memcmp(write_buf, read_buf, size); 286 + } 287 + 267 288 static bool pci_endpoint_test_bar(struct pci_endpoint_test *test, 268 289 enum pci_barno barno) 269 290 { 270 - int j; 271 - u32 val; 272 - int size; 291 + int j, bar_size, buf_size, iters, remain; 292 + void *write_buf __free(kfree) = NULL; 293 + void *read_buf __free(kfree) = NULL; 273 294 struct pci_dev *pdev = test->pdev; 274 295 275 296 if (!test->bar[barno]) 276 297 return false; 277 298 278 - size = pci_resource_len(pdev, 
barno); 299 + bar_size = pci_resource_len(pdev, barno); 279 300 280 301 if (barno == test->test_reg_bar) 281 - size = 0x4; 302 + bar_size = 0x4; 282 303 283 - for (j = 0; j < size; j += 4) 284 - pci_endpoint_test_bar_writel(test, barno, j, 285 - bar_test_pattern[barno]); 304 + /* 305 + * Allocate a buffer of max size 1MB, and reuse that buffer while 306 + * iterating over the whole BAR size (which might be much larger). 307 + */ 308 + buf_size = min(SZ_1M, bar_size); 286 309 287 - for (j = 0; j < size; j += 4) { 288 - val = pci_endpoint_test_bar_readl(test, barno, j); 289 - if (val != bar_test_pattern[barno]) 310 + write_buf = kmalloc(buf_size, GFP_KERNEL); 311 + if (!write_buf) 312 + return false; 313 + 314 + read_buf = kmalloc(buf_size, GFP_KERNEL); 315 + if (!read_buf) 316 + return false; 317 + 318 + iters = bar_size / buf_size; 319 + for (j = 0; j < iters; j++) 320 + if (pci_endpoint_test_bar_memcmp(test, barno, buf_size * j, 321 + write_buf, read_buf, buf_size)) 290 322 return false; 291 - } 323 + 324 + remain = bar_size % buf_size; 325 + if (remain) 326 + if (pci_endpoint_test_bar_memcmp(test, barno, buf_size * iters, 327 + write_buf, read_buf, remain)) 328 + return false; 292 329 293 330 return true; 294 331 } ··· 845 824 init_completion(&test->irq_raised); 846 825 mutex_init(&test->mutex); 847 826 848 - if ((dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48)) != 0) && 849 - dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)) != 0) { 850 - dev_err(dev, "Cannot set DMA mask\n"); 851 - return -EINVAL; 852 - } 827 + dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48)); 853 828 854 829 err = pci_enable_device(pdev); 855 830 if (err) { ··· 997 980 .irq_type = IRQ_TYPE_MSI, 998 981 }; 999 982 983 + static const struct pci_endpoint_test_data rk3588_data = { 984 + .alignment = SZ_64K, 985 + .irq_type = IRQ_TYPE_MSI, 986 + }; 987 + 988 + /* 989 + * If the controller's Vendor/Device ID are programmable, you may be able to 990 + * use one of the existing 
entries for testing instead of adding a new one. 991 + */ 1000 992 static const struct pci_device_id pci_endpoint_test_tbl[] = { 1001 993 { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x), 1002 994 .driver_data = (kernel_ulong_t)&default_data, ··· 1042 1016 }, 1043 1017 { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_J721S2), 1044 1018 .driver_data = (kernel_ulong_t)&j721e_data, 1019 + }, 1020 + { PCI_DEVICE(PCI_VENDOR_ID_ROCKCHIP, PCI_DEVICE_ID_ROCKCHIP_RK3588), 1021 + .driver_data = (kernel_ulong_t)&rk3588_data, 1045 1022 }, 1046 1023 { } 1047 1024 };
+1 -1
drivers/ntb/hw/mscc/ntb_hw_switchtec.c
··· 1565 1565 1566 1566 static int __init switchtec_ntb_init(void) 1567 1567 { 1568 - switchtec_interface.class = switchtec_class; 1568 + switchtec_interface.class = &switchtec_class; 1569 1569 return class_interface_register(&switchtec_interface); 1570 1570 } 1571 1571 module_init(switchtec_ntb_init);
+2 -8
drivers/pci/bus.c
··· 177 177 static int pci_bus_alloc_from_region(struct pci_bus *bus, struct resource *res, 178 178 resource_size_t size, resource_size_t align, 179 179 resource_size_t min, unsigned long type_mask, 180 - resource_size_t (*alignf)(void *, 181 - const struct resource *, 182 - resource_size_t, 183 - resource_size_t), 180 + resource_alignf alignf, 184 181 void *alignf_data, 185 182 struct pci_bus_region *region) 186 183 { ··· 248 251 int pci_bus_alloc_resource(struct pci_bus *bus, struct resource *res, 249 252 resource_size_t size, resource_size_t align, 250 253 resource_size_t min, unsigned long type_mask, 251 - resource_size_t (*alignf)(void *, 252 - const struct resource *, 253 - resource_size_t, 254 - resource_size_t), 254 + resource_alignf alignf, 255 255 void *alignf_data) 256 256 { 257 257 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+1 -8
drivers/pci/controller/Kconfig
··· 215 215 help 216 216 This selects a driver for the MediaTek MT7621 PCIe Controller. 217 217 218 - config PCIE_MICROCHIP_HOST 219 - tristate "Microchip AXI PCIe controller" 220 - depends on PCI_MSI && OF 221 - select PCI_HOST_COMMON 222 - help 223 - Say Y here if you want kernel to support the Microchip AXI PCIe 224 - Host Bridge driver. 225 - 226 218 config PCI_HYPERV_INTERFACE 227 219 tristate "Microsoft Hyper-V PCI Interface" 228 220 depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI ··· 348 356 source "drivers/pci/controller/cadence/Kconfig" 349 357 source "drivers/pci/controller/dwc/Kconfig" 350 358 source "drivers/pci/controller/mobiveil/Kconfig" 359 + source "drivers/pci/controller/plda/Kconfig" 351 360 endmenu
+1 -1
drivers/pci/controller/Makefile
··· 33 33 obj-$(CONFIG_PCIE_ROCKCHIP_HOST) += pcie-rockchip-host.o 34 34 obj-$(CONFIG_PCIE_MEDIATEK) += pcie-mediatek.o 35 35 obj-$(CONFIG_PCIE_MEDIATEK_GEN3) += pcie-mediatek-gen3.o 36 - obj-$(CONFIG_PCIE_MICROCHIP_HOST) += pcie-microchip-host.o 37 36 obj-$(CONFIG_VMD) += vmd.o 38 37 obj-$(CONFIG_PCIE_BRCMSTB) += pcie-brcmstb.o 39 38 obj-$(CONFIG_PCI_LOONGSON) += pci-loongson.o ··· 43 44 # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW 44 45 obj-y += dwc/ 45 46 obj-y += mobiveil/ 47 + obj-y += plda/ 46 48 47 49 48 50 # The following drivers are for devices that use the generic ACPI
+18 -4
drivers/pci/controller/dwc/Kconfig
··· 311 311 SoCs. To compile this driver as a module, choose M here: the module 312 312 will be called pcie-rcar-gen4.ko. This uses the DesignWare core. 313 313 314 + config PCIE_ROCKCHIP_DW 315 + bool 316 + 314 317 config PCIE_ROCKCHIP_DW_HOST 315 - bool "Rockchip DesignWare PCIe controller" 316 - select PCIE_DW 317 - select PCIE_DW_HOST 318 + bool "Rockchip DesignWare PCIe controller (host mode)" 318 319 depends on PCI_MSI 319 320 depends on ARCH_ROCKCHIP || COMPILE_TEST 320 321 depends on OF 322 + select PCIE_DW_HOST 323 + select PCIE_ROCKCHIP_DW 321 324 help 322 325 Enables support for the DesignWare PCIe controller in the 323 - Rockchip SoC except RK3399. 326 + Rockchip SoC (except RK3399) to work in host mode. 327 + 328 + config PCIE_ROCKCHIP_DW_EP 329 + bool "Rockchip DesignWare PCIe controller (endpoint mode)" 330 + depends on ARCH_ROCKCHIP || COMPILE_TEST 331 + depends on OF 332 + depends on PCI_ENDPOINT 333 + select PCIE_DW_EP 334 + select PCIE_ROCKCHIP_DW 335 + help 336 + Enables support for the DesignWare PCIe controller in the 337 + Rockchip SoC (except RK3399) to work in endpoint mode. 324 338 325 339 config PCI_EXYNOS 326 340 tristate "Samsung Exynos PCIe controller"
+1 -1
drivers/pci/controller/dwc/Makefile
··· 16 16 obj-$(CONFIG_PCIE_QCOM_EP) += pcie-qcom-ep.o 17 17 obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o 18 18 obj-$(CONFIG_PCIE_ARTPEC6) += pcie-artpec6.o 19 - obj-$(CONFIG_PCIE_ROCKCHIP_DW_HOST) += pcie-dw-rockchip.o 19 + obj-$(CONFIG_PCIE_ROCKCHIP_DW) += pcie-dw-rockchip.o 20 20 obj-$(CONFIG_PCIE_INTEL_GW) += pcie-intel-gw.o 21 21 obj-$(CONFIG_PCIE_KEEMBAY) += pcie-keembay.o 22 22 obj-$(CONFIG_PCIE_KIRIN) += pcie-kirin.o
+4 -4
drivers/pci/controller/dwc/pci-dra7xx.c
··· 13 13 #include <linux/err.h> 14 14 #include <linux/interrupt.h> 15 15 #include <linux/irq.h> 16 + #include <linux/irqchip/chained_irq.h> 16 17 #include <linux/irqdomain.h> 17 18 #include <linux/kernel.h> 18 19 #include <linux/module.h> 19 20 #include <linux/of.h> 20 - #include <linux/of_gpio.h> 21 21 #include <linux/of_pci.h> 22 22 #include <linux/pci.h> 23 23 #include <linux/phy/phy.h> ··· 113 113 writel(value, pcie->base + offset); 114 114 } 115 115 116 - static u64 dra7xx_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 pci_addr) 116 + static u64 dra7xx_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 cpu_addr) 117 117 { 118 - return pci_addr & DRA7XX_CPU_TO_BUS_ADDR; 118 + return cpu_addr & DRA7XX_CPU_TO_BUS_ADDR; 119 119 } 120 120 121 121 static int dra7xx_pcie_link_up(struct dw_pcie *pci) ··· 474 474 return ret; 475 475 } 476 476 477 - dw_pcie_ep_init_notify(ep); 477 + pci_epc_init_notify(ep->epc); 478 478 479 479 return 0; 480 480 }
+5 -50
drivers/pci/controller/dwc/pci-exynos.c
··· 54 54 struct exynos_pcie { 55 55 struct dw_pcie pci; 56 56 void __iomem *elbi_base; 57 - struct clk *clk; 58 - struct clk *bus_clk; 57 + struct clk_bulk_data *clks; 59 58 struct phy *phy; 60 59 struct regulator_bulk_data supplies[2]; 61 60 }; 62 - 63 - static int exynos_pcie_init_clk_resources(struct exynos_pcie *ep) 64 - { 65 - struct device *dev = ep->pci.dev; 66 - int ret; 67 - 68 - ret = clk_prepare_enable(ep->clk); 69 - if (ret) { 70 - dev_err(dev, "cannot enable pcie rc clock"); 71 - return ret; 72 - } 73 - 74 - ret = clk_prepare_enable(ep->bus_clk); 75 - if (ret) { 76 - dev_err(dev, "cannot enable pcie bus clock"); 77 - goto err_bus_clk; 78 - } 79 - 80 - return 0; 81 - 82 - err_bus_clk: 83 - clk_disable_unprepare(ep->clk); 84 - 85 - return ret; 86 - } 87 - 88 - static void exynos_pcie_deinit_clk_resources(struct exynos_pcie *ep) 89 - { 90 - clk_disable_unprepare(ep->bus_clk); 91 - clk_disable_unprepare(ep->clk); 92 - } 93 61 94 62 static void exynos_pcie_writel(void __iomem *base, u32 val, u32 reg) 95 63 { ··· 300 332 if (IS_ERR(ep->elbi_base)) 301 333 return PTR_ERR(ep->elbi_base); 302 334 303 - ep->clk = devm_clk_get(dev, "pcie"); 304 - if (IS_ERR(ep->clk)) { 305 - dev_err(dev, "Failed to get pcie rc clock\n"); 306 - return PTR_ERR(ep->clk); 307 - } 308 - 309 - ep->bus_clk = devm_clk_get(dev, "pcie_bus"); 310 - if (IS_ERR(ep->bus_clk)) { 311 - dev_err(dev, "Failed to get pcie bus clock\n"); 312 - return PTR_ERR(ep->bus_clk); 313 - } 335 + ret = devm_clk_bulk_get_all_enable(dev, &ep->clks); 336 + if (ret < 0) 337 + return ret; 314 338 315 339 ep->supplies[0].supply = "vdd18"; 316 340 ep->supplies[1].supply = "vdd10"; 317 341 ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(ep->supplies), 318 342 ep->supplies); 319 - if (ret) 320 - return ret; 321 - 322 - ret = exynos_pcie_init_clk_resources(ep); 323 343 if (ret) 324 344 return ret; 325 345 ··· 325 369 326 370 fail_probe: 327 371 phy_exit(ep->phy); 328 - exynos_pcie_deinit_clk_resources(ep); 329 372 
regulator_bulk_disable(ARRAY_SIZE(ep->supplies), ep->supplies); 330 373 331 374 return ret; ··· 338 383 exynos_pcie_assert_core_reset(ep); 339 384 phy_power_off(ep->phy); 340 385 phy_exit(ep->phy); 341 - exynos_pcie_deinit_clk_resources(ep); 342 386 regulator_bulk_disable(ARRAY_SIZE(ep->supplies), ep->supplies); 343 387 } 344 388 ··· 391 437 }, 392 438 }; 393 439 module_platform_driver(exynos_pcie_driver); 440 + MODULE_DESCRIPTION("Samsung Exynos PCIe host controller driver"); 394 441 MODULE_LICENSE("GPL v2"); 395 442 MODULE_DEVICE_TABLE(of, exynos_pcie_of_match);
+11 -27
drivers/pci/controller/dwc/pci-imx6.c
··· 11 11 #include <linux/bitfield.h> 12 12 #include <linux/clk.h> 13 13 #include <linux/delay.h> 14 - #include <linux/gpio.h> 14 + #include <linux/gpio/consumer.h> 15 15 #include <linux/kernel.h> 16 16 #include <linux/mfd/syscon.h> 17 17 #include <linux/mfd/syscon/imx6q-iomuxc-gpr.h> 18 18 #include <linux/mfd/syscon/imx7-iomuxc-gpr.h> 19 19 #include <linux/module.h> 20 20 #include <linux/of.h> 21 - #include <linux/of_gpio.h> 22 21 #include <linux/of_address.h> 23 22 #include <linux/pci.h> 24 23 #include <linux/platform_device.h> ··· 106 107 107 108 struct imx6_pcie { 108 109 struct dw_pcie *pci; 109 - int reset_gpio; 110 - bool gpio_active_high; 110 + struct gpio_desc *reset_gpiod; 111 111 bool link_is_up; 112 112 struct clk_bulk_data clks[IMX6_PCIE_MAX_CLKS]; 113 113 struct regmap *iomuxc_gpr; ··· 719 721 } 720 722 721 723 /* Some boards don't have PCIe reset GPIO. */ 722 - if (gpio_is_valid(imx6_pcie->reset_gpio)) 723 - gpio_set_value_cansleep(imx6_pcie->reset_gpio, 724 - imx6_pcie->gpio_active_high); 724 + gpiod_set_value_cansleep(imx6_pcie->reset_gpiod, 1); 725 725 } 726 726 727 727 static int imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie) ··· 767 771 } 768 772 769 773 /* Some boards don't have PCIe reset GPIO. */ 770 - if (gpio_is_valid(imx6_pcie->reset_gpio)) { 774 + if (imx6_pcie->reset_gpiod) { 771 775 msleep(100); 772 - gpio_set_value_cansleep(imx6_pcie->reset_gpio, 773 - !imx6_pcie->gpio_active_high); 776 + gpiod_set_value_cansleep(imx6_pcie->reset_gpiod, 0); 774 777 /* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */ 775 778 msleep(100); 776 779 } ··· 1126 1131 return ret; 1127 1132 } 1128 1133 1129 - dw_pcie_ep_init_notify(ep); 1134 + pci_epc_init_notify(ep->epc); 1130 1135 1131 1136 /* Start LTSSM. 
*/ 1132 1137 imx6_pcie_ltssm_enable(dev); ··· 1280 1285 return PTR_ERR(pci->dbi_base); 1281 1286 1282 1287 /* Fetch GPIOs */ 1283 - imx6_pcie->reset_gpio = of_get_named_gpio(node, "reset-gpio", 0); 1284 - imx6_pcie->gpio_active_high = of_property_read_bool(node, 1285 - "reset-gpio-active-high"); 1286 - if (gpio_is_valid(imx6_pcie->reset_gpio)) { 1287 - ret = devm_gpio_request_one(dev, imx6_pcie->reset_gpio, 1288 - imx6_pcie->gpio_active_high ? 1289 - GPIOF_OUT_INIT_HIGH : 1290 - GPIOF_OUT_INIT_LOW, 1291 - "PCIe reset"); 1292 - if (ret) { 1293 - dev_err(dev, "unable to get reset gpio\n"); 1294 - return ret; 1295 - } 1296 - } else if (imx6_pcie->reset_gpio == -EPROBE_DEFER) { 1297 - return imx6_pcie->reset_gpio; 1298 - } 1288 + imx6_pcie->reset_gpiod = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH); 1289 + if (IS_ERR(imx6_pcie->reset_gpiod)) 1290 + return dev_err_probe(dev, PTR_ERR(imx6_pcie->reset_gpiod), 1291 + "unable to get reset gpio\n"); 1292 + gpiod_set_consumer_name(imx6_pcie->reset_gpiod, "PCIe reset"); 1299 1293 1300 1294 if (imx6_pcie->drvdata->clks_cnt >= IMX6_PCIE_MAX_CLKS) 1301 1295 return dev_err_probe(dev, -ENOMEM, "clks_cnt is too big\n");
+119 -83
drivers/pci/controller/dwc/pci-keystone.c
··· 34 34 #define PCIE_DEVICEID_SHIFT 16 35 35 36 36 /* Application registers */ 37 + #define PID 0x000 38 + #define RTL GENMASK(15, 11) 39 + #define RTL_SHIFT 11 40 + #define AM6_PCI_PG1_RTL_VER 0x15 41 + 37 42 #define CMD_STATUS 0x004 38 43 #define LTSSM_EN_VAL BIT(0) 39 44 #define OB_XLAT_EN_VAL BIT(1) ··· 108 103 #define APP_ADDR_SPACE_0 (16 * SZ_1K) 109 104 110 105 #define to_keystone_pcie(x) dev_get_drvdata((x)->dev) 106 + 107 + #define PCI_DEVICE_ID_TI_AM654X 0xb00c 111 108 112 109 struct ks_pcie_of_data { 113 110 enum dw_pcie_device_mode mode; ··· 252 245 .irq_unmask = ks_pcie_msi_unmask, 253 246 }; 254 247 248 + /** 249 + * ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask registers 250 + * @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone 251 + * PCIe host controller driver information. 252 + * 253 + * Since modification of dbi_cs2 involves different clock domain, read the 254 + * status back to ensure the transition is complete. 255 + */ 256 + static void ks_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie) 257 + { 258 + u32 val; 259 + 260 + val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 261 + val |= DBI_CS2; 262 + ks_pcie_app_writel(ks_pcie, CMD_STATUS, val); 263 + 264 + do { 265 + val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 266 + } while (!(val & DBI_CS2)); 267 + } 268 + 269 + /** 270 + * ks_pcie_clear_dbi_mode() - Disable DBI mode 271 + * @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone 272 + * PCIe host controller driver information. 273 + * 274 + * Since modification of dbi_cs2 involves different clock domain, read the 275 + * status back to ensure the transition is complete. 
276 + */ 277 + static void ks_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie) 278 + { 279 + u32 val; 280 + 281 + val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 282 + val &= ~DBI_CS2; 283 + ks_pcie_app_writel(ks_pcie, CMD_STATUS, val); 284 + 285 + do { 286 + val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 287 + } while (val & DBI_CS2); 288 + } 289 + 255 290 static int ks_pcie_msi_host_init(struct dw_pcie_rp *pp) 256 291 { 292 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 293 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 294 + 295 + /* Configure and set up BAR0 */ 296 + ks_pcie_set_dbi_mode(ks_pcie); 297 + 298 + /* Enable BAR0 */ 299 + dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1); 300 + dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1); 301 + 302 + ks_pcie_clear_dbi_mode(ks_pcie); 303 + 304 + /* 305 + * For BAR0, just setting bus address for inbound writes (MSI) should 306 + * be sufficient. Use physical address to avoid any conflicts. 307 + */ 308 + dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start); 309 + 257 310 pp->msi_irq_chip = &ks_pcie_msi_irq_chip; 258 311 return dw_pcie_allocate_domains(pp); 259 312 } ··· 407 340 .xlate = irq_domain_xlate_onetwocell, 408 341 }; 409 342 410 - /** 411 - * ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask registers 412 - * @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone 413 - * PCIe host controller driver information. 414 - * 415 - * Since modification of dbi_cs2 involves different clock domain, read the 416 - * status back to ensure the transition is complete. 
417 - */ 418 - static void ks_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie) 419 - { 420 - u32 val; 421 - 422 - val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 423 - val |= DBI_CS2; 424 - ks_pcie_app_writel(ks_pcie, CMD_STATUS, val); 425 - 426 - do { 427 - val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 428 - } while (!(val & DBI_CS2)); 429 - } 430 - 431 - /** 432 - * ks_pcie_clear_dbi_mode() - Disable DBI mode 433 - * @ks_pcie: A pointer to the keystone_pcie structure which holds the KeyStone 434 - * PCIe host controller driver information. 435 - * 436 - * Since modification of dbi_cs2 involves different clock domain, read the 437 - * status back to ensure the transition is complete. 438 - */ 439 - static void ks_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie) 440 - { 441 - u32 val; 442 - 443 - val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 444 - val &= ~DBI_CS2; 445 - ks_pcie_app_writel(ks_pcie, CMD_STATUS, val); 446 - 447 - do { 448 - val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 449 - } while (val & DBI_CS2); 450 - } 451 - 452 - static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie) 343 + static int ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie) 453 344 { 454 345 u32 val; 455 346 u32 num_viewport = ks_pcie->num_viewport; 456 347 struct dw_pcie *pci = ks_pcie->pci; 457 348 struct dw_pcie_rp *pp = &pci->pp; 458 - u64 start, end; 349 + struct resource_entry *entry; 459 350 struct resource *mem; 351 + u64 start, end; 460 352 int i; 461 353 462 - mem = resource_list_first_type(&pp->bridge->windows, IORESOURCE_MEM)->res; 354 + entry = resource_list_first_type(&pp->bridge->windows, IORESOURCE_MEM); 355 + if (!entry) 356 + return -ENODEV; 357 + 358 + mem = entry->res; 463 359 start = mem->start; 464 360 end = mem->end; 465 361 ··· 433 403 ks_pcie_clear_dbi_mode(ks_pcie); 434 404 435 405 if (ks_pcie->is_am6) 436 - return; 406 + return 0; 437 407 438 408 val = ilog2(OB_WIN_SIZE); 439 409 ks_pcie_app_writel(ks_pcie, OB_SIZE, val); ··· 450 420 val = 
ks_pcie_app_readl(ks_pcie, CMD_STATUS); 451 421 val |= OB_XLAT_EN_VAL; 452 422 ks_pcie_app_writel(ks_pcie, CMD_STATUS, val); 423 + 424 + return 0; 453 425 } 454 426 455 427 static void __iomem *ks_pcie_other_map_bus(struct pci_bus *bus, ··· 477 445 .write = pci_generic_config_write, 478 446 }; 479 447 480 - /** 481 - * ks_pcie_v3_65_add_bus() - keystone add_bus post initialization 482 - * @bus: A pointer to the PCI bus structure. 483 - * 484 - * This sets BAR0 to enable inbound access for MSI_IRQ register 485 - */ 486 - static int ks_pcie_v3_65_add_bus(struct pci_bus *bus) 487 - { 488 - struct dw_pcie_rp *pp = bus->sysdata; 489 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 490 - struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 491 - 492 - if (!pci_is_root_bus(bus)) 493 - return 0; 494 - 495 - /* Configure and set up BAR0 */ 496 - ks_pcie_set_dbi_mode(ks_pcie); 497 - 498 - /* Enable BAR0 */ 499 - dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1); 500 - dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1); 501 - 502 - ks_pcie_clear_dbi_mode(ks_pcie); 503 - 504 - /* 505 - * For BAR0, just setting bus address for inbound writes (MSI) should 506 - * be sufficient. Use physical address to avoid any conflicts. 
507 - */ 508 - dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start); 509 - 510 - return 0; 511 - } 512 - 513 448 static struct pci_ops ks_pcie_ops = { 514 449 .map_bus = dw_pcie_own_conf_map_bus, 515 450 .read = pci_generic_config_read, 516 451 .write = pci_generic_config_write, 517 - .add_bus = ks_pcie_v3_65_add_bus, 518 452 }; 519 453 520 454 /** ··· 523 525 static void ks_pcie_quirk(struct pci_dev *dev) 524 526 { 525 527 struct pci_bus *bus = dev->bus; 528 + struct keystone_pcie *ks_pcie; 529 + struct device *bridge_dev; 526 530 struct pci_dev *bridge; 531 + u32 val; 532 + 527 533 static const struct pci_device_id rc_pci_devids[] = { 528 534 { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2HK), 529 535 .class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, }, ··· 537 535 .class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, }, 538 536 { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2G), 539 537 .class = PCI_CLASS_BRIDGE_PCI_NORMAL, .class_mask = ~0, }, 538 + { 0, }, 539 + }; 540 + static const struct pci_device_id am6_pci_devids[] = { 541 + { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM654X), 542 + .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, 540 543 { 0, }, 541 544 }; 542 545 ··· 565 558 */ 566 559 if (pci_match_id(rc_pci_devids, bridge)) { 567 560 if (pcie_get_readrq(dev) > 256) { 568 - dev_info(&dev->dev, "limiting MRRS to 256\n"); 561 + dev_info(&dev->dev, "limiting MRRS to 256 bytes\n"); 569 562 pcie_set_readrq(dev, 256); 563 + } 564 + } 565 + 566 + /* 567 + * Memory transactions fail with PCI controller in AM654 PG1.0 568 + * when MRRS is set to more than 128 bytes. Force the MRRS to 569 + * 128 bytes in all downstream devices. 
570 + */ 571 + if (pci_match_id(am6_pci_devids, bridge)) { 572 + bridge_dev = pci_get_host_bridge_device(dev); 573 + if (!bridge_dev || !bridge_dev->parent) 574 + return; 575 + 576 + ks_pcie = dev_get_drvdata(bridge_dev->parent); 577 + if (!ks_pcie) 578 + return; 579 + 580 + val = ks_pcie_app_readl(ks_pcie, PID); 581 + val &= RTL; 582 + val >>= RTL_SHIFT; 583 + if (val != AM6_PCI_PG1_RTL_VER) 584 + return; 585 + 586 + if (pcie_get_readrq(dev) > 128) { 587 + dev_info(&dev->dev, "limiting MRRS to 128 bytes\n"); 588 + pcie_set_readrq(dev, 128); 570 589 } 571 590 } 572 591 } ··· 847 814 return ret; 848 815 849 816 ks_pcie_stop_link(pci); 850 - ks_pcie_setup_rc_app_regs(ks_pcie); 817 + ret = ks_pcie_setup_rc_app_regs(ks_pcie); 818 + if (ret) 819 + return ret; 820 + 851 821 writew(PCI_IO_RANGE_TYPE_32 | (PCI_IO_RANGE_TYPE_32 << 8), 852 822 pci->dbi_base + PCI_IO_BASE); 853 823 ··· 1329 1293 goto err_ep_init; 1330 1294 } 1331 1295 1332 - dw_pcie_ep_init_notify(&pci->ep); 1296 + pci_epc_init_notify(pci->ep.epc); 1333 1297 1334 1298 break; 1335 1299 default:
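The AM654 PG1.0 quirk above gates the MRRS clamp on a revision field extracted from the PID register ("val &= RTL; val >>= RTL_SHIFT;"). A standalone sketch of that mask-and-shift-then-clamp pattern; the mask, shift, and PG1 revision values here are placeholders, not the real pci-keystone.c definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder register layout for illustration only. */
#define RTL_MASK    0x0000f800u
#define RTL_SHIFT   11
#define PG1_RTL_VER 0x0u

/* Extract the RTL revision field from a PID register value. */
static uint32_t pid_to_rtl(uint32_t pid)
{
	return (pid & RTL_MASK) >> RTL_SHIFT;
}

/* Clamp Max Read Request Size to 128 bytes only on the affected
 * silicon revision, as the quirk does for AM654 PG1.0. */
static int clamp_mrrs(uint32_t pid, int mrrs)
{
	if (pid_to_rtl(pid) != PG1_RTL_VER)
		return mrrs;	/* later revisions: leave MRRS alone */
	return mrrs > 128 ? 128 : mrrs;
}
```

Later revisions pass through unchanged, so only PG1.0 parts pay the bandwidth penalty.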
+2 -2
drivers/pci/controller/dwc/pci-layerscape-ep.c
··· 104 104 dev_dbg(pci->dev, "Link up\n"); 105 105 } else if (val & PEX_PF0_PME_MES_DR_LDD) { 106 106 dev_dbg(pci->dev, "Link down\n"); 107 - pci_epc_linkdown(pci->ep.epc); 107 + dw_pcie_ep_linkdown(&pci->ep); 108 108 } else if (val & PEX_PF0_PME_MES_DR_HRD) { 109 109 dev_dbg(pci->dev, "Hot reset\n"); 110 110 } ··· 286 286 return ret; 287 287 } 288 288 289 - dw_pcie_ep_init_notify(&pci->ep); 289 + pci_epc_init_notify(pci->ep.epc); 290 290 291 291 return ls_pcie_ep_interrupt_init(pcie, pdev); 292 292 }
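The layerscape-ep hunk reroutes the Link Down event through dw_pcie_ep_linkdown() so the DWC core can reinitialize non-sticky registers before notifying EPF drivers; the surrounding bit-flag dispatch is unchanged. A minimal model of that dispatch (the event bit values are invented, not the real PEX_PF0_PME_MES_DR_* definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder event bits for illustration. */
#define EV_LINK_UP   (1u << 0)
#define EV_LINK_DOWN (1u << 1)
#define EV_HOT_RESET (1u << 2)

enum action { ACT_NONE, ACT_LINKUP, ACT_LINKDOWN, ACT_HOT_RESET };

/* Dispatch on the first matching status bit, mirroring the
 * if/else-if chain in the interrupt handler above. */
static enum action dispatch(uint32_t val)
{
	if (val & EV_LINK_UP)
		return ACT_LINKUP;
	if (val & EV_LINK_DOWN)
		return ACT_LINKDOWN;
	if (val & EV_HOT_RESET)
		return ACT_HOT_RESET;
	return ACT_NONE;
}
```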
-1
drivers/pci/controller/dwc/pci-meson.c
··· 9 9 #include <linux/clk.h> 10 10 #include <linux/delay.h> 11 11 #include <linux/gpio/consumer.h> 12 - #include <linux/of_gpio.h> 13 12 #include <linux/pci.h> 14 13 #include <linux/platform_device.h> 15 14 #include <linux/reset.h>
+13 -3
drivers/pci/controller/dwc/pcie-al.c
··· 242 242 .write = pci_generic_config_write, 243 243 }; 244 244 245 - static void al_pcie_config_prepare(struct al_pcie *pcie) 245 + static int al_pcie_config_prepare(struct al_pcie *pcie) 246 246 { 247 247 struct al_pcie_target_bus_cfg *target_bus_cfg; 248 248 struct dw_pcie_rp *pp = &pcie->pci->pp; 249 249 unsigned int ecam_bus_mask; 250 + struct resource_entry *ft; 250 251 u32 cfg_control_offset; 252 + struct resource *bus; 251 253 u8 subordinate_bus; 252 254 u8 secondary_bus; 253 255 u32 cfg_control; 254 256 u32 reg; 255 - struct resource *bus = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS)->res; 256 257 258 + ft = resource_list_first_type(&pp->bridge->windows, IORESOURCE_BUS); 259 + if (!ft) 260 + return -ENODEV; 261 + 262 + bus = ft->res; 257 263 target_bus_cfg = &pcie->target_bus_cfg; 258 264 259 265 ecam_bus_mask = (pcie->ecam_size >> PCIE_ECAM_BUS_SHIFT) - 1; ··· 293 287 FIELD_PREP(CFG_CONTROL_SEC_BUS_MASK, secondary_bus); 294 288 295 289 al_pcie_controller_writel(pcie, cfg_control_offset, reg); 290 + 291 + return 0; 296 292 } 297 293 298 294 static int al_pcie_host_init(struct dw_pcie_rp *pp) ··· 313 305 if (rc) 314 306 return rc; 315 307 316 - al_pcie_config_prepare(pcie); 308 + rc = al_pcie_config_prepare(pcie); 309 + if (rc) 310 + return rc; 317 311 318 312 return 0; 319 313 }
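The pcie-al change makes al_pcie_config_prepare() tolerate a missing bus window instead of unconditionally dereferencing the resource_list_first_type() result. The defensive pattern, sketched standalone (the struct and helper names below are made up for illustration):

```c
#include <assert.h>
#include <stddef.h>

#define ENODEV 19

#define RES_BUS 1
#define RES_MEM 2

struct res_entry { int type; int start; int end; };

/* Sketch of resource_list_first_type(): return the first entry of
 * the requested type, or NULL when no such window exists. */
static const struct res_entry *first_of_type(const struct res_entry *v,
					     size_t n, int type)
{
	for (size_t i = 0; i < n; i++)
		if (v[i].type == type)
			return &v[i];
	return NULL;
}

/* Mirrors the fixed flow: bail out with -ENODEV rather than
 * dereference a missing bus window. */
static int config_prepare(const struct res_entry *v, size_t n, int *sub_bus)
{
	const struct res_entry *bus = first_of_type(v, n, RES_BUS);

	if (!bus)
		return -ENODEV;
	*sub_bus = bus->end;
	return 0;
}
```

The caller then propagates the error, which is why al_pcie_config_prepare() changed from void to int in the hunk above.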
+5 -5
drivers/pci/controller/dwc/pcie-artpec6.c
··· 94 94 regmap_write(artpec6_pcie->regmap, offset, val); 95 95 } 96 96 97 - static u64 artpec6_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 pci_addr) 97 + static u64 artpec6_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 cpu_addr) 98 98 { 99 99 struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pci); 100 100 struct dw_pcie_rp *pp = &pci->pp; ··· 102 102 103 103 switch (artpec6_pcie->mode) { 104 104 case DW_PCIE_RC_TYPE: 105 - return pci_addr - pp->cfg0_base; 105 + return cpu_addr - pp->cfg0_base; 106 106 case DW_PCIE_EP_TYPE: 107 - return pci_addr - ep->phys_base; 107 + return cpu_addr - ep->phys_base; 108 108 default: 109 109 dev_err(pci->dev, "UNKNOWN device type\n"); 110 110 } 111 - return pci_addr; 111 + return cpu_addr; 112 112 } 113 113 114 114 static int artpec6_pcie_establish_link(struct dw_pcie *pci) ··· 452 452 return ret; 453 453 } 454 454 455 - dw_pcie_ep_init_notify(&pci->ep); 455 + pci_epc_init_notify(pci->ep.epc); 456 456 457 457 break; 458 458 default:
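The artpec6 rename clarifies that the cpu_addr_fixup() hook receives a CPU address and returns the bus-visible address by subtracting a mode-dependent base. A toy version under assumed base addresses (the constants are placeholders):

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder bases standing in for pp->cfg0_base / ep->phys_base. */
#define RC_CFG_BASE  0xf8000000ull
#define EP_PHYS_BASE 0xf0000000ull

enum mode { RC_MODE, EP_MODE };

/* Translate a CPU address to the address the controller should
 * program into the iATU, as the fixup hook above does. */
static uint64_t cpu_addr_fixup(enum mode m, uint64_t cpu_addr)
{
	switch (m) {
	case RC_MODE:
		return cpu_addr - RC_CFG_BASE;
	case EP_MODE:
		return cpu_addr - EP_PHYS_BASE;
	}
	return cpu_addr;	/* unknown mode: pass through */
}
```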
+97 -58
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 16 16 #include <linux/pci-epf.h> 17 17 18 18 /** 19 - * dw_pcie_ep_linkup - Notify EPF drivers about Link Up event 20 - * @ep: DWC EP device 21 - */ 22 - void dw_pcie_ep_linkup(struct dw_pcie_ep *ep) 23 - { 24 - struct pci_epc *epc = ep->epc; 25 - 26 - pci_epc_linkup(epc); 27 - } 28 - EXPORT_SYMBOL_GPL(dw_pcie_ep_linkup); 29 - 30 - /** 31 - * dw_pcie_ep_init_notify - Notify EPF drivers about EPC initialization complete 32 - * @ep: DWC EP device 33 - */ 34 - void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep) 35 - { 36 - struct pci_epc *epc = ep->epc; 37 - 38 - pci_epc_init_notify(epc); 39 - } 40 - EXPORT_SYMBOL_GPL(dw_pcie_ep_init_notify); 41 - 42 - /** 43 19 * dw_pcie_ep_get_func_from_ep - Get the struct dw_pcie_ep_func corresponding to 44 20 * the endpoint function 45 21 * @ep: DWC EP device ··· 137 161 if (!ep->bar_to_atu[bar]) 138 162 free_win = find_first_zero_bit(ep->ib_window_map, pci->num_ib_windows); 139 163 else 140 - free_win = ep->bar_to_atu[bar]; 164 + free_win = ep->bar_to_atu[bar] - 1; 141 165 142 166 if (free_win >= pci->num_ib_windows) { 143 167 dev_err(pci->dev, "No free inbound window\n"); ··· 151 175 return ret; 152 176 } 153 177 154 - ep->bar_to_atu[bar] = free_win; 178 + /* 179 + * Always increment free_win before assignment, since value 0 is used to identify 180 + * unallocated mapping. 
181 + */ 182 + ep->bar_to_atu[bar] = free_win + 1; 155 183 set_bit(free_win, ep->ib_window_map); 156 184 157 185 return 0; 158 186 } 159 187 160 - static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, u8 func_no, 161 - phys_addr_t phys_addr, 162 - u64 pci_addr, size_t size) 188 + static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, 189 + struct dw_pcie_ob_atu_cfg *atu) 163 190 { 164 191 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 165 192 u32 free_win; ··· 174 195 return -EINVAL; 175 196 } 176 197 177 - ret = dw_pcie_prog_ep_outbound_atu(pci, func_no, free_win, PCIE_ATU_TYPE_MEM, 178 - phys_addr, pci_addr, size); 198 + atu->index = free_win; 199 + ret = dw_pcie_prog_outbound_atu(pci, atu); 179 200 if (ret) 180 201 return ret; 181 202 182 203 set_bit(free_win, ep->ob_window_map); 183 - ep->outbound_addr[free_win] = phys_addr; 204 + ep->outbound_addr[free_win] = atu->cpu_addr; 184 205 185 206 return 0; 186 207 } ··· 191 212 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 192 213 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 193 214 enum pci_barno bar = epf_bar->barno; 194 - u32 atu_index = ep->bar_to_atu[bar]; 215 + u32 atu_index = ep->bar_to_atu[bar] - 1; 216 + 217 + if (!ep->bar_to_atu[bar]) 218 + return; 195 219 196 220 __dw_pcie_ep_reset_bar(pci, func_no, bar, epf_bar->flags); 197 221 ··· 214 232 int flags = epf_bar->flags; 215 233 int ret, type; 216 234 u32 reg; 235 + 236 + /* 237 + * DWC does not allow BAR pairs to overlap, e.g. you cannot combine BARs 238 + * 1 and 2 to form a 64-bit BAR. 
239 + */ 240 + if ((flags & PCI_BASE_ADDRESS_MEM_TYPE_64) && (bar & 1)) 241 + return -EINVAL; 217 242 218 243 reg = PCI_BASE_ADDRESS_0 + (4 * bar); 219 244 ··· 290 301 int ret; 291 302 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 292 303 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 304 + struct dw_pcie_ob_atu_cfg atu = { 0 }; 293 305 294 - ret = dw_pcie_ep_outbound_atu(ep, func_no, addr, pci_addr, size); 306 + atu.func_no = func_no; 307 + atu.type = PCIE_ATU_TYPE_MEM; 308 + atu.cpu_addr = addr; 309 + atu.pci_addr = pci_addr; 310 + atu.size = size; 311 + ret = dw_pcie_ep_outbound_atu(ep, &atu); 295 312 if (ret) { 296 313 dev_err(pci->dev, "Failed to enable address\n"); 297 314 return ret; ··· 627 632 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 628 633 629 634 dw_pcie_edma_remove(pci); 630 - ep->epc->init_complete = false; 631 635 } 632 636 EXPORT_SYMBOL_GPL(dw_pcie_ep_cleanup); 633 637 ··· 668 674 return 0; 669 675 } 670 676 677 + static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci) 678 + { 679 + unsigned int offset; 680 + unsigned int nbars; 681 + u32 reg, i; 682 + 683 + offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR); 684 + 685 + dw_pcie_dbi_ro_wr_en(pci); 686 + 687 + if (offset) { 688 + reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 689 + nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >> 690 + PCI_REBAR_CTRL_NBAR_SHIFT; 691 + 692 + /* 693 + * PCIe r6.0, sec 7.8.6.2 require us to support at least one 694 + * size in the range from 1 MB to 512 GB. Advertise support 695 + * for 1 MB BAR size only. 
696 + */ 697 + for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL) 698 + dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0); 699 + } 700 + 701 + dw_pcie_setup(pci); 702 + dw_pcie_dbi_ro_wr_dis(pci); 703 + } 704 + 671 705 /** 672 706 * dw_pcie_ep_init_registers - Initialize DWC EP specific registers 673 707 * @ep: DWC EP device ··· 710 688 struct dw_pcie_ep_func *ep_func; 711 689 struct device *dev = pci->dev; 712 690 struct pci_epc *epc = ep->epc; 713 - unsigned int offset, ptm_cap_base; 714 - unsigned int nbars; 691 + u32 ptm_cap_base, reg; 715 692 u8 hdr_type; 716 693 u8 func_no; 717 - int i, ret; 718 694 void *addr; 719 - u32 reg; 695 + int ret; 720 696 721 697 hdr_type = dw_pcie_readb_dbi(pci, PCI_HEADER_TYPE) & 722 698 PCI_HEADER_TYPE_MASK; ··· 777 757 if (ep->ops->init) 778 758 ep->ops->init(ep); 779 759 780 - offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR); 781 760 ptm_cap_base = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_PTM); 782 - 783 - dw_pcie_dbi_ro_wr_en(pci); 784 - 785 - if (offset) { 786 - reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 787 - nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >> 788 - PCI_REBAR_CTRL_NBAR_SHIFT; 789 - 790 - /* 791 - * PCIe r6.0, sec 7.8.6.2 require us to support at least one 792 - * size in the range from 1 MB to 512 GB. Advertise support 793 - * for 1 MB BAR size only. 
794 - */ 795 - for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL) 796 - dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, BIT(4)); 797 - } 798 761 799 762 /* 800 763 * PTM responder capability can be disabled only after disabling ··· 795 792 dw_pcie_dbi_ro_wr_dis(pci); 796 793 } 797 794 798 - dw_pcie_setup(pci); 799 - dw_pcie_dbi_ro_wr_dis(pci); 795 + dw_pcie_ep_init_non_sticky_registers(pci); 800 796 801 797 return 0; 802 798 ··· 805 803 return ret; 806 804 } 807 805 EXPORT_SYMBOL_GPL(dw_pcie_ep_init_registers); 806 + 807 + /** 808 + * dw_pcie_ep_linkup - Notify EPF drivers about Link Up event 809 + * @ep: DWC EP device 810 + */ 811 + void dw_pcie_ep_linkup(struct dw_pcie_ep *ep) 812 + { 813 + struct pci_epc *epc = ep->epc; 814 + 815 + pci_epc_linkup(epc); 816 + } 817 + EXPORT_SYMBOL_GPL(dw_pcie_ep_linkup); 818 + 819 + /** 820 + * dw_pcie_ep_linkdown - Notify EPF drivers about Link Down event 821 + * @ep: DWC EP device 822 + * 823 + * Non-sticky registers are also initialized before sending the notification to 824 + * the EPF drivers. This is needed since the registers need to be initialized 825 + * before the link comes back again. 826 + */ 827 + void dw_pcie_ep_linkdown(struct dw_pcie_ep *ep) 828 + { 829 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 830 + struct pci_epc *epc = ep->epc; 831 + 832 + /* 833 + * Initialize the non-sticky DWC registers as they would've reset post 834 + * Link Down. This is specifically needed for drivers not supporting 835 + * PERST# as they have no way to reinitialize the registers before the 836 + * link comes back again. 837 + */ 838 + dw_pcie_ep_init_non_sticky_registers(pci); 839 + 840 + pci_epc_linkdown(epc); 841 + } 842 + EXPORT_SYMBOL_GPL(dw_pcie_ep_linkdown); 808 843 809 844 /** 810 845 * dw_pcie_ep_init - Initialize the endpoint device
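The designware-ep changes store "index + 1" in bar_to_atu[] so that the value 0 can unambiguously mean "no inbound window allocated" — previously a BAR mapped to window 0 was indistinguishable from an unmapped BAR. A minimal model of that sentinel convention:

```c
#include <assert.h>

#define NUM_BARS 6

/* 0 = unallocated, otherwise window index + 1 (as in the patch). */
static int bar_to_atu[NUM_BARS];

static void map_bar(int bar, int win)
{
	bar_to_atu[bar] = win + 1;
}

/* Returns the window index, or -1 when the BAR has no window. */
static int bar_window(int bar)
{
	return bar_to_atu[bar] ? bar_to_atu[bar] - 1 : -1;
}
```

With this encoding, window 0 is representable as "mapped", which is exactly the ambiguity the `free_win = ep->bar_to_atu[bar] - 1` hunks resolve.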
+125 -20
drivers/pci/controller/dwc/pcie-designware-host.c
··· 398 398 return 0; 399 399 } 400 400 401 + static void dw_pcie_host_request_msg_tlp_res(struct dw_pcie_rp *pp) 402 + { 403 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 404 + struct resource_entry *win; 405 + struct resource *res; 406 + 407 + win = resource_list_first_type(&pp->bridge->windows, IORESOURCE_MEM); 408 + if (win) { 409 + res = devm_kzalloc(pci->dev, sizeof(*res), GFP_KERNEL); 410 + if (!res) 411 + return; 412 + 413 + /* 414 + * Allocate MSG TLP region of size 'region_align' at the end of 415 + * the host bridge window. 416 + */ 417 + res->start = win->res->end - pci->region_align + 1; 418 + res->end = win->res->end; 419 + res->name = "msg"; 420 + res->flags = win->res->flags | IORESOURCE_BUSY; 421 + 422 + if (!devm_request_resource(pci->dev, win->res, res)) 423 + pp->msg_res = res; 424 + } 425 + } 426 + 401 427 int dw_pcie_host_init(struct dw_pcie_rp *pp) 402 428 { 403 429 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); ··· 510 484 511 485 dw_pcie_iatu_detect(pci); 512 486 487 + /* 488 + * Allocate the resource for MSG TLP before programming the iATU 489 + * outbound window in dw_pcie_setup_rc(). Since the allocation depends 490 + * on the value of 'region_align', this has to be done after 491 + * dw_pcie_iatu_detect(). 492 + * 493 + * Glue drivers need to set 'use_atu_msg' before dw_pcie_host_init() to 494 + * make use of the generic MSG TLP implementation. 
495 + */ 496 + if (pp->use_atu_msg) 497 + dw_pcie_host_request_msg_tlp_res(pp); 498 + 513 499 ret = dw_pcie_edma_detect(pci); 514 500 if (ret) 515 501 goto err_free_msi; ··· 592 554 { 593 555 struct dw_pcie_rp *pp = bus->sysdata; 594 556 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 557 + struct dw_pcie_ob_atu_cfg atu = { 0 }; 595 558 int type, ret; 596 559 u32 busdev; 597 560 ··· 615 576 else 616 577 type = PCIE_ATU_TYPE_CFG1; 617 578 618 - ret = dw_pcie_prog_outbound_atu(pci, 0, type, pp->cfg0_base, busdev, 619 - pp->cfg0_size); 579 + atu.type = type; 580 + atu.cpu_addr = pp->cfg0_base; 581 + atu.pci_addr = busdev; 582 + atu.size = pp->cfg0_size; 583 + 584 + ret = dw_pcie_prog_outbound_atu(pci, &atu); 620 585 if (ret) 621 586 return NULL; 622 587 ··· 632 589 { 633 590 struct dw_pcie_rp *pp = bus->sysdata; 634 591 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 592 + struct dw_pcie_ob_atu_cfg atu = { 0 }; 635 593 int ret; 636 594 637 595 ret = pci_generic_config_read(bus, devfn, where, size, val); ··· 640 596 return ret; 641 597 642 598 if (pp->cfg0_io_shared) { 643 - ret = dw_pcie_prog_outbound_atu(pci, 0, PCIE_ATU_TYPE_IO, 644 - pp->io_base, pp->io_bus_addr, 645 - pp->io_size); 599 + atu.type = PCIE_ATU_TYPE_IO; 600 + atu.cpu_addr = pp->io_base; 601 + atu.pci_addr = pp->io_bus_addr; 602 + atu.size = pp->io_size; 603 + 604 + ret = dw_pcie_prog_outbound_atu(pci, &atu); 646 605 if (ret) 647 606 return PCIBIOS_SET_FAILED; 648 607 } ··· 658 611 { 659 612 struct dw_pcie_rp *pp = bus->sysdata; 660 613 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 614 + struct dw_pcie_ob_atu_cfg atu = { 0 }; 661 615 int ret; 662 616 663 617 ret = pci_generic_config_write(bus, devfn, where, size, val); ··· 666 618 return ret; 667 619 668 620 if (pp->cfg0_io_shared) { 669 - ret = dw_pcie_prog_outbound_atu(pci, 0, PCIE_ATU_TYPE_IO, 670 - pp->io_base, pp->io_bus_addr, 671 - pp->io_size); 621 + atu.type = PCIE_ATU_TYPE_IO; 622 + atu.cpu_addr = pp->io_base; 623 + atu.pci_addr = pp->io_bus_addr; 
624 + atu.size = pp->io_size; 625 + 626 + ret = dw_pcie_prog_outbound_atu(pci, &atu); 672 627 if (ret) 673 628 return PCIBIOS_SET_FAILED; 674 629 } ··· 706 655 static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp) 707 656 { 708 657 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 658 + struct dw_pcie_ob_atu_cfg atu = { 0 }; 709 659 struct resource_entry *entry; 710 660 int i, ret; 711 661 ··· 734 682 if (pci->num_ob_windows <= ++i) 735 683 break; 736 684 737 - ret = dw_pcie_prog_outbound_atu(pci, i, PCIE_ATU_TYPE_MEM, 738 - entry->res->start, 739 - entry->res->start - entry->offset, 740 - resource_size(entry->res)); 685 + atu.index = i; 686 + atu.type = PCIE_ATU_TYPE_MEM; 687 + atu.cpu_addr = entry->res->start; 688 + atu.pci_addr = entry->res->start - entry->offset; 689 + 690 + /* Adjust iATU size if MSG TLP region was allocated before */ 691 + if (pp->msg_res && pp->msg_res->parent == entry->res) 692 + atu.size = resource_size(entry->res) - 693 + resource_size(pp->msg_res); 694 + else 695 + atu.size = resource_size(entry->res); 696 + 697 + ret = dw_pcie_prog_outbound_atu(pci, &atu); 741 698 if (ret) { 742 699 dev_err(pci->dev, "Failed to set MEM range %pr\n", 743 700 entry->res); ··· 756 695 757 696 if (pp->io_size) { 758 697 if (pci->num_ob_windows > ++i) { 759 - ret = dw_pcie_prog_outbound_atu(pci, i, PCIE_ATU_TYPE_IO, 760 - pp->io_base, 761 - pp->io_bus_addr, 762 - pp->io_size); 698 + atu.index = i; 699 + atu.type = PCIE_ATU_TYPE_IO; 700 + atu.cpu_addr = pp->io_base; 701 + atu.pci_addr = pp->io_bus_addr; 702 + atu.size = pp->io_size; 703 + 704 + ret = dw_pcie_prog_outbound_atu(pci, &atu); 763 705 if (ret) { 764 706 dev_err(pci->dev, "Failed to set IO range %pr\n", 765 707 entry->res); ··· 776 712 if (pci->num_ob_windows <= i) 777 713 dev_warn(pci->dev, "Ranges exceed outbound iATU size (%d)\n", 778 714 pci->num_ob_windows); 715 + 716 + pp->msg_atu_index = i; 779 717 780 718 i = 0; 781 719 resource_list_for_each_entry(entry, &pp->bridge->dma_ranges) { ··· 884 818 } 
885 819 EXPORT_SYMBOL_GPL(dw_pcie_setup_rc); 886 820 821 + static int dw_pcie_pme_turn_off(struct dw_pcie *pci) 822 + { 823 + struct dw_pcie_ob_atu_cfg atu = { 0 }; 824 + void __iomem *mem; 825 + int ret; 826 + 827 + if (pci->num_ob_windows <= pci->pp.msg_atu_index) 828 + return -ENOSPC; 829 + 830 + if (!pci->pp.msg_res) 831 + return -ENOSPC; 832 + 833 + atu.code = PCIE_MSG_CODE_PME_TURN_OFF; 834 + atu.routing = PCIE_MSG_TYPE_R_BC; 835 + atu.type = PCIE_ATU_TYPE_MSG; 836 + atu.size = resource_size(pci->pp.msg_res); 837 + atu.index = pci->pp.msg_atu_index; 838 + 839 + atu.cpu_addr = pci->pp.msg_res->start; 840 + 841 + ret = dw_pcie_prog_outbound_atu(pci, &atu); 842 + if (ret) 843 + return ret; 844 + 845 + mem = ioremap(atu.cpu_addr, pci->region_align); 846 + if (!mem) 847 + return -ENOMEM; 848 + 849 + /* A dummy write is converted to a Msg TLP */ 850 + writel(0, mem); 851 + 852 + iounmap(mem); 853 + 854 + return 0; 855 + } 856 + 887 857 int dw_pcie_suspend_noirq(struct dw_pcie *pci) 888 858 { 889 859 u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 890 860 u32 val; 891 - int ret; 861 + int ret = 0; 892 862 893 863 /* 894 864 * If L1SS is supported, then do not put the link into L2 as some ··· 936 834 if (dw_pcie_get_ltssm(pci) <= DW_PCIE_LTSSM_DETECT_ACT) 937 835 return 0; 938 836 939 - if (!pci->pp.ops->pme_turn_off) 940 - return 0; 837 + if (pci->pp.ops->pme_turn_off) 838 + pci->pp.ops->pme_turn_off(&pci->pp); 839 + else 840 + ret = dw_pcie_pme_turn_off(pci); 941 841 942 - pci->pp.ops->pme_turn_off(&pci->pp); 842 + if (ret) 843 + return ret; 943 844 944 845 ret = read_poll_timeout(dw_pcie_get_ltssm, val, val == DW_PCIE_LTSSM_L2_IDLE, 945 846 PCIE_PME_TO_L2_TIMEOUT_US/10,
+1 -1
drivers/pci/controller/dwc/pcie-designware-plat.c
··· 154 154 dw_pcie_ep_deinit(&pci->ep); 155 155 } 156 156 157 - dw_pcie_ep_init_notify(&pci->ep); 157 + pci_epc_init_notify(pci->ep.epc); 158 158 159 159 break; 160 160 default:
+72 -49
drivers/pci/controller/dwc/pcie-designware.c
··· 465 465 return val | PCIE_ATU_TD; 466 466 } 467 467 468 - static int __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no, 469 - int index, int type, u64 cpu_addr, 470 - u64 pci_addr, u64 size) 468 + int dw_pcie_prog_outbound_atu(struct dw_pcie *pci, 469 + const struct dw_pcie_ob_atu_cfg *atu) 471 470 { 471 + u64 cpu_addr = atu->cpu_addr; 472 472 u32 retries, val; 473 473 u64 limit_addr; 474 474 475 475 if (pci->ops && pci->ops->cpu_addr_fixup) 476 476 cpu_addr = pci->ops->cpu_addr_fixup(pci, cpu_addr); 477 477 478 - limit_addr = cpu_addr + size - 1; 478 + limit_addr = cpu_addr + atu->size - 1; 479 479 480 480 if ((limit_addr & ~pci->region_limit) != (cpu_addr & ~pci->region_limit) || 481 481 !IS_ALIGNED(cpu_addr, pci->region_align) || 482 - !IS_ALIGNED(pci_addr, pci->region_align) || !size) { 482 + !IS_ALIGNED(atu->pci_addr, pci->region_align) || !atu->size) { 483 483 return -EINVAL; 484 484 } 485 485 486 - dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_LOWER_BASE, 486 + dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_LOWER_BASE, 487 487 lower_32_bits(cpu_addr)); 488 - dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_UPPER_BASE, 488 + dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_UPPER_BASE, 489 489 upper_32_bits(cpu_addr)); 490 490 491 - dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_LIMIT, 491 + dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_LIMIT, 492 492 lower_32_bits(limit_addr)); 493 493 if (dw_pcie_ver_is_ge(pci, 460A)) 494 - dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_UPPER_LIMIT, 494 + dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_UPPER_LIMIT, 495 495 upper_32_bits(limit_addr)); 496 496 497 - dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_LOWER_TARGET, 498 - lower_32_bits(pci_addr)); 499 - dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_UPPER_TARGET, 500 - upper_32_bits(pci_addr)); 497 + dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_LOWER_TARGET, 498 + lower_32_bits(atu->pci_addr)); 499 + dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_UPPER_TARGET, 500 + 
upper_32_bits(atu->pci_addr)); 501 501 502 - val = type | PCIE_ATU_FUNC_NUM(func_no); 502 + val = atu->type | atu->routing | PCIE_ATU_FUNC_NUM(atu->func_no); 503 503 if (upper_32_bits(limit_addr) > upper_32_bits(cpu_addr) && 504 504 dw_pcie_ver_is_ge(pci, 460A)) 505 505 val |= PCIE_ATU_INCREASE_REGION_SIZE; 506 506 if (dw_pcie_ver_is(pci, 490A)) 507 507 val = dw_pcie_enable_ecrc(val); 508 - dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_REGION_CTRL1, val); 508 + dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_REGION_CTRL1, val); 509 509 510 - dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_REGION_CTRL2, PCIE_ATU_ENABLE); 510 + val = PCIE_ATU_ENABLE; 511 + if (atu->type == PCIE_ATU_TYPE_MSG) { 512 + /* The data-less messages only for now */ 513 + val |= PCIE_ATU_INHIBIT_PAYLOAD | atu->code; 514 + } 515 + dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_REGION_CTRL2, val); 511 516 512 517 /* 513 518 * Make sure ATU enable takes effect before any subsequent config 514 519 * and I/O accesses. 515 520 */ 516 521 for (retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; retries++) { 517 - val = dw_pcie_readl_atu_ob(pci, index, PCIE_ATU_REGION_CTRL2); 522 + val = dw_pcie_readl_atu_ob(pci, atu->index, PCIE_ATU_REGION_CTRL2); 518 523 if (val & PCIE_ATU_ENABLE) 519 524 return 0; 520 525 ··· 529 524 dev_err(pci->dev, "Outbound iATU is not being enabled\n"); 530 525 531 526 return -ETIMEDOUT; 532 - } 533 - 534 - int dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type, 535 - u64 cpu_addr, u64 pci_addr, u64 size) 536 - { 537 - return __dw_pcie_prog_outbound_atu(pci, 0, index, type, 538 - cpu_addr, pci_addr, size); 539 - } 540 - 541 - int dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index, 542 - int type, u64 cpu_addr, u64 pci_addr, 543 - u64 size) 544 - { 545 - return __dw_pcie_prog_outbound_atu(pci, func_no, index, type, 546 - cpu_addr, pci_addr, size); 547 527 } 548 528 549 529 static inline u32 dw_pcie_readl_atu_ib(struct dw_pcie *pci, u32 index, u32 
reg) ··· 645 655 if (dw_pcie_link_up(pci)) 646 656 break; 647 657 648 - usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX); 658 + msleep(LINK_WAIT_SLEEP_MS); 649 659 } 650 660 651 661 if (retries >= LINK_WAIT_MAX_RETRIES) { ··· 870 880 .irq_vector = dw_pcie_edma_irq_vector, 871 881 }; 872 882 873 - static int dw_pcie_edma_find_chip(struct dw_pcie *pci) 883 + static void dw_pcie_edma_init_data(struct dw_pcie *pci) 884 + { 885 + pci->edma.dev = pci->dev; 886 + 887 + if (!pci->edma.ops) 888 + pci->edma.ops = &dw_pcie_edma_ops; 889 + 890 + pci->edma.flags |= DW_EDMA_CHIP_LOCAL; 891 + } 892 + 893 + static int dw_pcie_edma_find_mf(struct dw_pcie *pci) 874 894 { 875 895 u32 val; 896 + 897 + /* 898 + * Bail out finding the mapping format if it is already set by the glue 899 + * driver. Also ensure that the edma.reg_base is pointing to a valid 900 + * memory region. 901 + */ 902 + if (pci->edma.mf != EDMA_MF_EDMA_LEGACY) 903 + return pci->edma.reg_base ? 0 : -ENODEV; 876 904 877 905 /* 878 906 * Indirect eDMA CSRs access has been completely removed since v5.40a 879 907 * thus no space is now reserved for the eDMA channels viewport and 880 908 * former DMA CTRL register is no longer fixed to FFs. 881 - * 882 - * Note that Renesas R-Car S4-8's PCIe controllers for unknown reason 883 - * have zeros in the eDMA CTRL register even though the HW-manual 884 - * explicitly states there must FFs if the unrolled mapping is enabled. 885 - * For such cases the low-level drivers are supposed to manually 886 - * activate the unrolled mapping to bypass the auto-detection procedure. 
887 909 */ 888 - if (dw_pcie_ver_is_ge(pci, 540A) || dw_pcie_cap_is(pci, EDMA_UNROLL)) 910 + if (dw_pcie_ver_is_ge(pci, 540A)) 889 911 val = 0xFFFFFFFF; 890 912 else 891 913 val = dw_pcie_readl_dbi(pci, PCIE_DMA_VIEWPORT_BASE + PCIE_DMA_CTRL); 892 914 893 915 if (val == 0xFFFFFFFF && pci->edma.reg_base) { 894 916 pci->edma.mf = EDMA_MF_EDMA_UNROLL; 895 - 896 - val = dw_pcie_readl_dma(pci, PCIE_DMA_CTRL); 897 917 } else if (val != 0xFFFFFFFF) { 898 918 pci->edma.mf = EDMA_MF_EDMA_LEGACY; 899 919 ··· 912 912 return -ENODEV; 913 913 } 914 914 915 - pci->edma.dev = pci->dev; 915 + return 0; 916 + } 916 917 917 - if (!pci->edma.ops) 918 - pci->edma.ops = &dw_pcie_edma_ops; 918 + static int dw_pcie_edma_find_channels(struct dw_pcie *pci) 919 + { 920 + u32 val; 919 921 920 - pci->edma.flags |= DW_EDMA_CHIP_LOCAL; 922 + /* 923 + * Autodetect the read/write channels count only for non-HDMA platforms. 924 + * HDMA platforms with native CSR mapping doesn't support autodetect, 925 + * so the glue drivers should've passed the valid count already. If not, 926 + * the below sanity check will catch it. 
927 + */ 928 + if (pci->edma.mf != EDMA_MF_HDMA_NATIVE) { 929 + val = dw_pcie_readl_dma(pci, PCIE_DMA_CTRL); 921 930 922 - pci->edma.ll_wr_cnt = FIELD_GET(PCIE_DMA_NUM_WR_CHAN, val); 923 - pci->edma.ll_rd_cnt = FIELD_GET(PCIE_DMA_NUM_RD_CHAN, val); 931 + pci->edma.ll_wr_cnt = FIELD_GET(PCIE_DMA_NUM_WR_CHAN, val); 932 + pci->edma.ll_rd_cnt = FIELD_GET(PCIE_DMA_NUM_RD_CHAN, val); 933 + } 924 934 925 935 /* Sanity check the channels count if the mapping was incorrect */ 926 936 if (!pci->edma.ll_wr_cnt || pci->edma.ll_wr_cnt > EDMA_MAX_WR_CH || ··· 938 928 return -EINVAL; 939 929 940 930 return 0; 931 + } 932 + 933 + static int dw_pcie_edma_find_chip(struct dw_pcie *pci) 934 + { 935 + int ret; 936 + 937 + dw_pcie_edma_init_data(pci); 938 + 939 + ret = dw_pcie_edma_find_mf(pci); 940 + if (ret) 941 + return ret; 942 + 943 + return dw_pcie_edma_find_channels(pci); 941 944 } 942 945 943 946 static int dw_pcie_edma_irq_verify(struct dw_pcie *pci)
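dw_pcie_edma_find_channels() above pulls the write/read channel counts out of DMA_CTRL bit fields with FIELD_GET(). A self-contained equivalent of that extraction; the field positions are placeholders, not the real PCIE_DMA_NUM_WR_CHAN/PCIE_DMA_NUM_RD_CHAN masks:

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder field masks for illustration. */
#define NUM_WR_CHAN_MASK 0x0000000fu	/* bits [3:0]   */
#define NUM_RD_CHAN_MASK 0x000f0000u	/* bits [19:16] */

/* FIELD_GET()-style extraction: mask, then shift down by the mask's
 * lowest set bit (mask & -mask isolates it). */
static unsigned int field_get(uint32_t mask, uint32_t val)
{
	return (val & mask) / (mask & -mask);
}
```

The subsequent sanity check in the driver (count must be nonzero and within EDMA_MAX_*_CH) then catches glue drivers that failed to pass valid counts on HDMA platforms, where this autodetection is skipped.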
+32 -14
drivers/pci/controller/dwc/pcie-designware.h
··· 51 51 52 52 /* DWC PCIe controller capabilities */ 53 53 #define DW_PCIE_CAP_REQ_RES 0 54 - #define DW_PCIE_CAP_EDMA_UNROLL 1 55 - #define DW_PCIE_CAP_IATU_UNROLL 2 56 - #define DW_PCIE_CAP_CDM_CHECK 3 54 + #define DW_PCIE_CAP_IATU_UNROLL 1 55 + #define DW_PCIE_CAP_CDM_CHECK 2 57 56 58 57 #define dw_pcie_cap_is(_pci, _cap) \ 59 58 test_bit(DW_PCIE_CAP_ ## _cap, &(_pci)->caps) ··· 62 63 63 64 /* Parameters for the waiting for link up routine */ 64 65 #define LINK_WAIT_MAX_RETRIES 10 65 - #define LINK_WAIT_USLEEP_MIN 90000 66 - #define LINK_WAIT_USLEEP_MAX 100000 66 + #define LINK_WAIT_SLEEP_MS 90 67 67 68 68 /* Parameters for the waiting for iATU enabled routine */ 69 69 #define LINK_WAIT_MAX_IATU_RETRIES 5 70 70 #define LINK_WAIT_IATU 9 71 71 72 72 /* Synopsys-specific PCIe configuration registers */ 73 + #define PCIE_PORT_FORCE 0x708 74 + #define PORT_FORCE_DO_DESKEW_FOR_SRIS BIT(23) 75 + 73 76 #define PCIE_PORT_AFR 0x70C 74 77 #define PORT_AFR_N_FTS_MASK GENMASK(15, 8) 75 78 #define PORT_AFR_N_FTS(n) FIELD_PREP(PORT_AFR_N_FTS_MASK, n) ··· 92 91 #define PORT_LINK_MODE_2_LANES PORT_LINK_MODE(0x3) 93 92 #define PORT_LINK_MODE_4_LANES PORT_LINK_MODE(0x7) 94 93 #define PORT_LINK_MODE_8_LANES PORT_LINK_MODE(0xf) 94 + 95 + #define PCIE_PORT_LANE_SKEW 0x714 96 + #define PORT_LANE_SKEW_INSERT_MASK GENMASK(23, 0) 95 97 96 98 #define PCIE_PORT_DEBUG0 0x728 97 99 #define PORT_LOGIC_LTSSM_STATE_MASK 0x1f ··· 152 148 #define PCIE_ATU_TYPE_IO 0x2 153 149 #define PCIE_ATU_TYPE_CFG0 0x4 154 150 #define PCIE_ATU_TYPE_CFG1 0x5 151 + #define PCIE_ATU_TYPE_MSG 0x10 155 152 #define PCIE_ATU_TD BIT(8) 156 153 #define PCIE_ATU_FUNC_NUM(pf) ((pf) << 20) 157 154 #define PCIE_ATU_REGION_CTRL2 0x004 158 155 #define PCIE_ATU_ENABLE BIT(31) 159 156 #define PCIE_ATU_BAR_MODE_ENABLE BIT(30) 157 + #define PCIE_ATU_INHIBIT_PAYLOAD BIT(22) 160 158 #define PCIE_ATU_FUNC_NUM_MATCH_EN BIT(19) 161 159 #define PCIE_ATU_LOWER_BASE 0x008 162 160 #define PCIE_ATU_UPPER_BASE 0x00C ··· 305 299 
DW_PCIE_LTSSM_UNKNOWN = 0xFFFFFFFF, 306 300 }; 307 301 302 + struct dw_pcie_ob_atu_cfg { 303 + int index; 304 + int type; 305 + u8 func_no; 306 + u8 code; 307 + u8 routing; 308 + u64 cpu_addr; 309 + u64 pci_addr; 310 + u64 size; 311 + }; 312 + 308 313 struct dw_pcie_host_ops { 309 314 int (*init)(struct dw_pcie_rp *pp); 310 315 void (*deinit)(struct dw_pcie_rp *pp); ··· 345 328 struct pci_host_bridge *bridge; 346 329 raw_spinlock_t lock; 347 330 DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS); 331 + bool use_atu_msg; 332 + int msg_atu_index; 333 + struct resource *msg_res; 348 334 }; 349 335 350 336 struct dw_pcie_ep_ops { ··· 453 433 int dw_pcie_link_up(struct dw_pcie *pci); 454 434 void dw_pcie_upconfig_setup(struct dw_pcie *pci); 455 435 int dw_pcie_wait_for_link(struct dw_pcie *pci); 456 - int dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type, 457 - u64 cpu_addr, u64 pci_addr, u64 size); 458 - int dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index, 459 - int type, u64 cpu_addr, u64 pci_addr, u64 size); 436 + int dw_pcie_prog_outbound_atu(struct dw_pcie *pci, 437 + const struct dw_pcie_ob_atu_cfg *atu); 460 438 int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type, 461 439 u64 cpu_addr, u64 pci_addr, u64 size); 462 440 int dw_pcie_prog_ep_inbound_atu(struct dw_pcie *pci, u8 func_no, int index, ··· 686 668 687 669 #ifdef CONFIG_PCIE_DW_EP 688 670 void dw_pcie_ep_linkup(struct dw_pcie_ep *ep); 671 + void dw_pcie_ep_linkdown(struct dw_pcie_ep *ep); 689 672 int dw_pcie_ep_init(struct dw_pcie_ep *ep); 690 673 int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep); 691 - void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep); 692 674 void dw_pcie_ep_deinit(struct dw_pcie_ep *ep); 693 675 void dw_pcie_ep_cleanup(struct dw_pcie_ep *ep); 694 676 int dw_pcie_ep_raise_intx_irq(struct dw_pcie_ep *ep, u8 func_no); ··· 706 688 { 707 689 } 708 690 691 + static inline void dw_pcie_ep_linkdown(struct dw_pcie_ep *ep) 692 + { 693 + } 
694 + 709 695 static inline int dw_pcie_ep_init(struct dw_pcie_ep *ep) 710 696 { 711 697 return 0; ··· 718 696 static inline int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep) 719 697 { 720 698 return 0; 721 - } 722 - 723 - static inline void dw_pcie_ep_init_notify(struct dw_pcie_ep *ep) 724 - { 725 699 } 726 700 727 701 static inline void dw_pcie_ep_deinit(struct dw_pcie_ep *ep)
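The new struct dw_pcie_ob_atu_cfg in the header replaces six positional arguments to dw_pcie_prog_outbound_atu(); because callers zero-initialize it (`= { 0 }`), the MSG-only fields (code, routing) default to 0 at every pre-existing call site. A local sketch of that calling pattern, using a copy of the struct rather than the kernel header:

```c
#include <assert.h>
#include <stdint.h>

/* Field set mirrors the dw_pcie_ob_atu_cfg diff above. */
struct ob_atu_cfg {
	int index;
	int type;
	uint8_t func_no;
	uint8_t code;
	uint8_t routing;
	uint64_t cpu_addr;
	uint64_t pci_addr;
	uint64_t size;
};

#define ATU_TYPE_MEM 0x0

/* Zero the struct, then fill only the fields relevant to a MEM
 * window; new members stay 0 without touching old callers. */
static struct ob_atu_cfg mem_window(int index, uint64_t cpu, uint64_t pci,
				    uint64_t size)
{
	struct ob_atu_cfg atu = { 0 };

	atu.index = index;
	atu.type = ATU_TYPE_MEM;
	atu.cpu_addr = cpu;
	atu.pci_addr = pci;
	atu.size = size;
	return atu;
}
```

This is the usual config-struct trick for growing an API: new capabilities (here, PCIE_ATU_TYPE_MSG with code/routing) add fields instead of changing every caller's argument list.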
+292 -38
drivers/pci/controller/dwc/pcie-dw-rockchip.c
··· 34 34 #define to_rockchip_pcie(x) dev_get_drvdata((x)->dev) 35 35 36 36 #define PCIE_CLIENT_RC_MODE HIWORD_UPDATE_BIT(0x40) 37 + #define PCIE_CLIENT_EP_MODE HIWORD_UPDATE(0xf0, 0x0) 37 38 #define PCIE_CLIENT_ENABLE_LTSSM HIWORD_UPDATE_BIT(0xc) 39 + #define PCIE_CLIENT_DISABLE_LTSSM HIWORD_UPDATE(0x0c, 0x8) 40 + #define PCIE_CLIENT_INTR_STATUS_MISC 0x10 41 + #define PCIE_CLIENT_INTR_MASK_MISC 0x24 38 42 #define PCIE_SMLH_LINKUP BIT(16) 39 43 #define PCIE_RDLH_LINKUP BIT(17) 40 44 #define PCIE_LINKUP (PCIE_SMLH_LINKUP | PCIE_RDLH_LINKUP) 45 + #define PCIE_RDLH_LINK_UP_CHGED BIT(1) 46 + #define PCIE_LINK_REQ_RST_NOT_INT BIT(2) 41 47 #define PCIE_L0S_ENTRY 0x11 42 48 #define PCIE_CLIENT_GENERAL_CONTROL 0x0 43 49 #define PCIE_CLIENT_INTR_STATUS_LEGACY 0x8 ··· 55 49 #define PCIE_LTSSM_STATUS_MASK GENMASK(5, 0) 56 50 57 51 struct rockchip_pcie { 58 - struct dw_pcie pci; 59 - void __iomem *apb_base; 60 - struct phy *phy; 61 - struct clk_bulk_data *clks; 62 - unsigned int clk_cnt; 63 - struct reset_control *rst; 64 - struct gpio_desc *rst_gpio; 65 - struct regulator *vpcie3v3; 66 - struct irq_domain *irq_domain; 52 + struct dw_pcie pci; 53 + void __iomem *apb_base; 54 + struct phy *phy; 55 + struct clk_bulk_data *clks; 56 + unsigned int clk_cnt; 57 + struct reset_control *rst; 58 + struct gpio_desc *rst_gpio; 59 + struct regulator *vpcie3v3; 60 + struct irq_domain *irq_domain; 61 + const struct rockchip_pcie_of_data *data; 67 62 }; 68 63 69 - static int rockchip_pcie_readl_apb(struct rockchip_pcie *rockchip, 70 - u32 reg) 64 + struct rockchip_pcie_of_data { 65 + enum dw_pcie_device_mode mode; 66 + const struct pci_epc_features *epc_features; 67 + }; 68 + 69 + static int rockchip_pcie_readl_apb(struct rockchip_pcie *rockchip, u32 reg) 71 70 { 72 71 return readl_relaxed(rockchip->apb_base + reg); 73 72 } 74 73 75 - static void rockchip_pcie_writel_apb(struct rockchip_pcie *rockchip, 76 - u32 val, u32 reg) 74 + static void rockchip_pcie_writel_apb(struct rockchip_pcie 
*rockchip, u32 val, 75 + u32 reg) 77 76 { 78 77 writel_relaxed(val, rockchip->apb_base + reg); 79 78 } ··· 155 144 return 0; 156 145 } 157 146 147 + static u32 rockchip_pcie_get_ltssm(struct rockchip_pcie *rockchip) 148 + { 149 + return rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_LTSSM_STATUS); 150 + } 151 + 158 152 static void rockchip_pcie_enable_ltssm(struct rockchip_pcie *rockchip) 159 153 { 160 154 rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_ENABLE_LTSSM, 161 155 PCIE_CLIENT_GENERAL_CONTROL); 162 156 } 163 157 158 + static void rockchip_pcie_disable_ltssm(struct rockchip_pcie *rockchip) 159 + { 160 + rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_DISABLE_LTSSM, 161 + PCIE_CLIENT_GENERAL_CONTROL); 162 + } 163 + 164 164 static int rockchip_pcie_link_up(struct dw_pcie *pci) 165 165 { 166 166 struct rockchip_pcie *rockchip = to_rockchip_pcie(pci); 167 - u32 val = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_LTSSM_STATUS); 167 + u32 val = rockchip_pcie_get_ltssm(rockchip); 168 168 169 169 if ((val & PCIE_LINKUP) == PCIE_LINKUP && 170 170 (val & PCIE_LTSSM_STATUS_MASK) == PCIE_L0S_ENTRY) ··· 208 186 return 0; 209 187 } 210 188 189 + static void rockchip_pcie_stop_link(struct dw_pcie *pci) 190 + { 191 + struct rockchip_pcie *rockchip = to_rockchip_pcie(pci); 192 + 193 + rockchip_pcie_disable_ltssm(rockchip); 194 + } 195 + 211 196 static int rockchip_pcie_host_init(struct dw_pcie_rp *pp) 212 197 { 213 198 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 214 199 struct rockchip_pcie *rockchip = to_rockchip_pcie(pci); 215 200 struct device *dev = rockchip->pci.dev; 216 - u32 val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE); 217 201 int irq, ret; 218 202 219 203 irq = of_irq_get_byname(dev->of_node, "legacy"); ··· 233 205 irq_set_chained_handler_and_data(irq, rockchip_pcie_intx_handler, 234 206 rockchip); 235 207 236 - /* LTSSM enable control mode */ 237 - rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL); 238 - 239 - 
rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_RC_MODE, 240 - PCIE_CLIENT_GENERAL_CONTROL); 241 - 242 208 return 0; 243 209 } 244 210 245 211 static const struct dw_pcie_host_ops rockchip_pcie_host_ops = { 246 212 .init = rockchip_pcie_host_init, 213 + }; 214 + 215 + static void rockchip_pcie_ep_init(struct dw_pcie_ep *ep) 216 + { 217 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 218 + enum pci_barno bar; 219 + 220 + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 221 + dw_pcie_ep_reset_bar(pci, bar); 222 + }; 223 + 224 + static int rockchip_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no, 225 + unsigned int type, u16 interrupt_num) 226 + { 227 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 228 + 229 + switch (type) { 230 + case PCI_IRQ_INTX: 231 + return dw_pcie_ep_raise_intx_irq(ep, func_no); 232 + case PCI_IRQ_MSI: 233 + return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num); 234 + case PCI_IRQ_MSIX: 235 + return dw_pcie_ep_raise_msix_irq(ep, func_no, interrupt_num); 236 + default: 237 + dev_err(pci->dev, "UNKNOWN IRQ type\n"); 238 + } 239 + 240 + return 0; 241 + } 242 + 243 + static const struct pci_epc_features rockchip_pcie_epc_features_rk3568 = { 244 + .linkup_notifier = true, 245 + .msi_capable = true, 246 + .msix_capable = true, 247 + .align = SZ_64K, 248 + .bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M, }, 249 + .bar[BAR_1] = { .type = BAR_FIXED, .fixed_size = SZ_1M, }, 250 + .bar[BAR_2] = { .type = BAR_FIXED, .fixed_size = SZ_1M, }, 251 + .bar[BAR_3] = { .type = BAR_FIXED, .fixed_size = SZ_1M, }, 252 + .bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = SZ_1M, }, 253 + .bar[BAR_5] = { .type = BAR_FIXED, .fixed_size = SZ_1M, }, 254 + }; 255 + 256 + /* 257 + * BAR4 on rk3588 exposes the ATU Port Logic Structure to the host regardless of 258 + * iATU settings for BAR4. This means that BAR4 cannot be used by an EPF driver, 259 + * so mark it as RESERVED. (rockchip_pcie_ep_init() will disable all BARs by 260 + * default.) 
If the host could write to BAR4, the iATU settings (for all other 261 + * BARs) would be overwritten, resulting in (all other BARs) no longer working. 262 + */ 263 + static const struct pci_epc_features rockchip_pcie_epc_features_rk3588 = { 264 + .linkup_notifier = true, 265 + .msi_capable = true, 266 + .msix_capable = true, 267 + .align = SZ_64K, 268 + .bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M, }, 269 + .bar[BAR_1] = { .type = BAR_FIXED, .fixed_size = SZ_1M, }, 270 + .bar[BAR_2] = { .type = BAR_FIXED, .fixed_size = SZ_1M, }, 271 + .bar[BAR_3] = { .type = BAR_FIXED, .fixed_size = SZ_1M, }, 272 + .bar[BAR_4] = { .type = BAR_RESERVED, }, 273 + .bar[BAR_5] = { .type = BAR_FIXED, .fixed_size = SZ_1M, }, 274 + }; 275 + 276 + static const struct pci_epc_features * 277 + rockchip_pcie_get_features(struct dw_pcie_ep *ep) 278 + { 279 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 280 + struct rockchip_pcie *rockchip = to_rockchip_pcie(pci); 281 + 282 + return rockchip->data->epc_features; 283 + } 284 + 285 + static const struct dw_pcie_ep_ops rockchip_pcie_ep_ops = { 286 + .init = rockchip_pcie_ep_init, 287 + .raise_irq = rockchip_pcie_raise_irq, 288 + .get_features = rockchip_pcie_get_features, 247 289 }; 248 290 249 291 static int rockchip_pcie_clk_init(struct rockchip_pcie *rockchip) ··· 323 225 324 226 ret = devm_clk_bulk_get_all(dev, &rockchip->clks); 325 227 if (ret < 0) 326 - return ret; 228 + return dev_err_probe(dev, ret, "failed to get clocks\n"); 327 229 328 230 rockchip->clk_cnt = ret; 329 231 330 - return clk_bulk_prepare_enable(rockchip->clk_cnt, rockchip->clks); 232 + ret = clk_bulk_prepare_enable(rockchip->clk_cnt, rockchip->clks); 233 + if (ret) 234 + return dev_err_probe(dev, ret, "failed to enable clocks\n"); 235 + 236 + return 0; 331 237 } 332 238 333 239 static int rockchip_pcie_resource_get(struct platform_device *pdev, ··· 339 237 { 340 238 rockchip->apb_base = devm_platform_ioremap_resource_byname(pdev, "apb"); 341 239 if 
(IS_ERR(rockchip->apb_base)) 342 - return PTR_ERR(rockchip->apb_base); 240 + return dev_err_probe(&pdev->dev, PTR_ERR(rockchip->apb_base), 241 + "failed to map apb registers\n"); 343 242 344 243 rockchip->rst_gpio = devm_gpiod_get_optional(&pdev->dev, "reset", 345 - GPIOD_OUT_HIGH); 244 + GPIOD_OUT_LOW); 346 245 if (IS_ERR(rockchip->rst_gpio)) 347 - return PTR_ERR(rockchip->rst_gpio); 246 + return dev_err_probe(&pdev->dev, PTR_ERR(rockchip->rst_gpio), 247 + "failed to get reset gpio\n"); 348 248 349 249 rockchip->rst = devm_reset_control_array_get_exclusive(&pdev->dev); 350 250 if (IS_ERR(rockchip->rst)) ··· 386 282 static const struct dw_pcie_ops dw_pcie_ops = { 387 283 .link_up = rockchip_pcie_link_up, 388 284 .start_link = rockchip_pcie_start_link, 285 + .stop_link = rockchip_pcie_stop_link, 389 286 }; 287 + 288 + static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg) 289 + { 290 + struct rockchip_pcie *rockchip = arg; 291 + struct dw_pcie *pci = &rockchip->pci; 292 + struct device *dev = pci->dev; 293 + u32 reg, val; 294 + 295 + reg = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_INTR_STATUS_MISC); 296 + rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC); 297 + 298 + dev_dbg(dev, "PCIE_CLIENT_INTR_STATUS_MISC: %#x\n", reg); 299 + dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm(rockchip)); 300 + 301 + if (reg & PCIE_LINK_REQ_RST_NOT_INT) { 302 + dev_dbg(dev, "hot reset or link-down reset\n"); 303 + dw_pcie_ep_linkdown(&pci->ep); 304 + } 305 + 306 + if (reg & PCIE_RDLH_LINK_UP_CHGED) { 307 + val = rockchip_pcie_get_ltssm(rockchip); 308 + if ((val & PCIE_LINKUP) == PCIE_LINKUP) { 309 + dev_dbg(dev, "link up\n"); 310 + dw_pcie_ep_linkup(&pci->ep); 311 + } 312 + } 313 + 314 + return IRQ_HANDLED; 315 + } 316 + 317 + static int rockchip_pcie_configure_rc(struct rockchip_pcie *rockchip) 318 + { 319 + struct dw_pcie_rp *pp; 320 + u32 val; 321 + 322 + if (!IS_ENABLED(CONFIG_PCIE_ROCKCHIP_DW_HOST)) 323 + return -ENODEV; 324 + 
325 + /* LTSSM enable control mode */ 326 + val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE); 327 + rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL); 328 + 329 + rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_RC_MODE, 330 + PCIE_CLIENT_GENERAL_CONTROL); 331 + 332 + pp = &rockchip->pci.pp; 333 + pp->ops = &rockchip_pcie_host_ops; 334 + 335 + return dw_pcie_host_init(pp); 336 + } 337 + 338 + static int rockchip_pcie_configure_ep(struct platform_device *pdev, 339 + struct rockchip_pcie *rockchip) 340 + { 341 + struct device *dev = &pdev->dev; 342 + int irq, ret; 343 + u32 val; 344 + 345 + if (!IS_ENABLED(CONFIG_PCIE_ROCKCHIP_DW_EP)) 346 + return -ENODEV; 347 + 348 + irq = platform_get_irq_byname(pdev, "sys"); 349 + if (irq < 0) { 350 + dev_err(dev, "missing sys IRQ resource\n"); 351 + return irq; 352 + } 353 + 354 + ret = devm_request_threaded_irq(dev, irq, NULL, 355 + rockchip_pcie_ep_sys_irq_thread, 356 + IRQF_ONESHOT, "pcie-sys", rockchip); 357 + if (ret) { 358 + dev_err(dev, "failed to request PCIe sys IRQ\n"); 359 + return ret; 360 + } 361 + 362 + /* LTSSM enable control mode */ 363 + val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE); 364 + rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_HOT_RESET_CTRL); 365 + 366 + rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_EP_MODE, 367 + PCIE_CLIENT_GENERAL_CONTROL); 368 + 369 + rockchip->pci.ep.ops = &rockchip_pcie_ep_ops; 370 + rockchip->pci.ep.page_size = SZ_64K; 371 + 372 + dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)); 373 + 374 + ret = dw_pcie_ep_init(&rockchip->pci.ep); 375 + if (ret) { 376 + dev_err(dev, "failed to initialize endpoint\n"); 377 + return ret; 378 + } 379 + 380 + ret = dw_pcie_ep_init_registers(&rockchip->pci.ep); 381 + if (ret) { 382 + dev_err(dev, "failed to initialize DWC endpoint registers\n"); 383 + dw_pcie_ep_deinit(&rockchip->pci.ep); 384 + return ret; 385 + } 386 + 387 + pci_epc_init_notify(rockchip->pci.ep.epc); 388 + 389 + /* unmask DLL up/down indicator and hot 
reset/link-down reset */ 390 + rockchip_pcie_writel_apb(rockchip, 0x60000, PCIE_CLIENT_INTR_MASK_MISC); 391 + 392 + return ret; 393 + } 390 394 391 395 static int rockchip_pcie_probe(struct platform_device *pdev) 392 396 { 393 397 struct device *dev = &pdev->dev; 394 398 struct rockchip_pcie *rockchip; 395 - struct dw_pcie_rp *pp; 399 + const struct rockchip_pcie_of_data *data; 396 400 int ret; 401 + 402 + data = of_device_get_match_data(dev); 403 + if (!data) 404 + return -EINVAL; 397 405 398 406 rockchip = devm_kzalloc(dev, sizeof(*rockchip), GFP_KERNEL); 399 407 if (!rockchip) ··· 515 299 516 300 rockchip->pci.dev = dev; 517 301 rockchip->pci.ops = &dw_pcie_ops; 518 - 519 - pp = &rockchip->pci.pp; 520 - pp->ops = &rockchip_pcie_host_ops; 302 + rockchip->data = data; 521 303 522 304 ret = rockchip_pcie_resource_get(pdev, rockchip); 523 305 if (ret) ··· 534 320 rockchip->vpcie3v3 = NULL; 535 321 } else { 536 322 ret = regulator_enable(rockchip->vpcie3v3); 537 - if (ret) { 538 - dev_err(dev, "failed to enable vpcie3v3 regulator\n"); 539 - return ret; 540 - } 323 + if (ret) 324 + return dev_err_probe(dev, ret, 325 + "failed to enable vpcie3v3 regulator\n"); 541 326 } 542 327 543 328 ret = rockchip_pcie_phy_init(rockchip); ··· 551 338 if (ret) 552 339 goto deinit_phy; 553 340 554 - ret = dw_pcie_host_init(pp); 555 - if (!ret) 556 - return 0; 341 + switch (data->mode) { 342 + case DW_PCIE_RC_TYPE: 343 + ret = rockchip_pcie_configure_rc(rockchip); 344 + if (ret) 345 + goto deinit_clk; 346 + break; 347 + case DW_PCIE_EP_TYPE: 348 + ret = rockchip_pcie_configure_ep(pdev, rockchip); 349 + if (ret) 350 + goto deinit_clk; 351 + break; 352 + default: 353 + dev_err(dev, "INVALID device type %d\n", data->mode); 354 + ret = -EINVAL; 355 + goto deinit_clk; 356 + } 557 357 358 + return 0; 359 + 360 + deinit_clk: 558 361 clk_bulk_disable_unprepare(rockchip->clk_cnt, rockchip->clks); 559 362 deinit_phy: 560 363 rockchip_pcie_phy_deinit(rockchip); ··· 581 352 return ret; 582 353 } 
583 354 355 + static const struct rockchip_pcie_of_data rockchip_pcie_rc_of_data_rk3568 = { 356 + .mode = DW_PCIE_RC_TYPE, 357 + }; 358 + 359 + static const struct rockchip_pcie_of_data rockchip_pcie_ep_of_data_rk3568 = { 360 + .mode = DW_PCIE_EP_TYPE, 361 + .epc_features = &rockchip_pcie_epc_features_rk3568, 362 + }; 363 + 364 + static const struct rockchip_pcie_of_data rockchip_pcie_ep_of_data_rk3588 = { 365 + .mode = DW_PCIE_EP_TYPE, 366 + .epc_features = &rockchip_pcie_epc_features_rk3588, 367 + }; 368 + 584 369 static const struct of_device_id rockchip_pcie_of_match[] = { 585 - { .compatible = "rockchip,rk3568-pcie", }, 370 + { 371 + .compatible = "rockchip,rk3568-pcie", 372 + .data = &rockchip_pcie_rc_of_data_rk3568, 373 + }, 374 + { 375 + .compatible = "rockchip,rk3568-pcie-ep", 376 + .data = &rockchip_pcie_ep_of_data_rk3568, 377 + }, 378 + { 379 + .compatible = "rockchip,rk3588-pcie-ep", 380 + .data = &rockchip_pcie_ep_of_data_rk3588, 381 + }, 586 382 {}, 587 383 }; 588 384
+1 -1
drivers/pci/controller/dwc/pcie-keembay.c
··· 442 442 return ret; 443 443 } 444 444 445 - dw_pcie_ep_init_notify(&pci->ep); 445 + pci_epc_init_notify(pci->ep.epc); 446 446 447 447 break; 448 448 default:
+41 -85
drivers/pci/controller/dwc/pcie-kirin.c
··· 12 12 #include <linux/compiler.h> 13 13 #include <linux/delay.h> 14 14 #include <linux/err.h> 15 - #include <linux/gpio.h> 16 15 #include <linux/gpio/consumer.h> 17 16 #include <linux/interrupt.h> 18 17 #include <linux/mfd/syscon.h> 19 18 #include <linux/of.h> 20 - #include <linux/of_gpio.h> 21 19 #include <linux/of_pci.h> 22 20 #include <linux/phy/phy.h> 23 21 #include <linux/pci.h> ··· 76 78 void *phy_priv; /* only for PCIE_KIRIN_INTERNAL_PHY */ 77 79 78 80 /* DWC PERST# */ 79 - int gpio_id_dwc_perst; 81 + struct gpio_desc *id_dwc_perst_gpio; 80 82 81 83 /* Per-slot PERST# */ 82 84 int num_slots; 83 - int gpio_id_reset[MAX_PCI_SLOTS]; 85 + struct gpio_desc *id_reset_gpio[MAX_PCI_SLOTS]; 84 86 const char *reset_names[MAX_PCI_SLOTS]; 85 87 86 88 /* Per-slot clkreq */ 87 89 int n_gpio_clkreq; 88 - int gpio_id_clkreq[MAX_PCI_SLOTS]; 90 + struct gpio_desc *id_clkreq_gpio[MAX_PCI_SLOTS]; 89 91 const char *clkreq_names[MAX_PCI_SLOTS]; 90 92 }; 91 93 ··· 379 381 pcie->n_gpio_clkreq = ret; 380 382 381 383 for (i = 0; i < pcie->n_gpio_clkreq; i++) { 382 - pcie->gpio_id_clkreq[i] = of_get_named_gpio(dev->of_node, 383 - "hisilicon,clken-gpios", i); 384 - if (pcie->gpio_id_clkreq[i] < 0) 385 - return pcie->gpio_id_clkreq[i]; 384 + pcie->id_clkreq_gpio[i] = devm_gpiod_get_index(dev, 385 + "hisilicon,clken", i, 386 + GPIOD_OUT_LOW); 387 + if (IS_ERR(pcie->id_clkreq_gpio[i])) 388 + return dev_err_probe(dev, PTR_ERR(pcie->id_clkreq_gpio[i]), 389 + "unable to get a valid clken gpio\n"); 386 390 387 391 pcie->clkreq_names[i] = devm_kasprintf(dev, GFP_KERNEL, 388 392 "pcie_clkreq_%d", i); 389 393 if (!pcie->clkreq_names[i]) 390 394 return -ENOMEM; 395 + 396 + gpiod_set_consumer_name(pcie->id_clkreq_gpio[i], 397 + pcie->clkreq_names[i]); 391 398 } 392 399 393 400 return 0; ··· 403 400 struct device_node *node) 404 401 { 405 402 struct device *dev = &pdev->dev; 406 - struct device_node *parent, *child; 407 403 int ret, slot, i; 408 404 409 - for_each_available_child_of_node(node, 
parent) { 410 - for_each_available_child_of_node(parent, child) { 405 + for_each_available_child_of_node_scoped(node, parent) { 406 + for_each_available_child_of_node_scoped(parent, child) { 411 407 i = pcie->num_slots; 412 408 413 - pcie->gpio_id_reset[i] = of_get_named_gpio(child, 414 - "reset-gpios", 0); 415 - if (pcie->gpio_id_reset[i] < 0) 416 - continue; 409 + pcie->id_reset_gpio[i] = devm_fwnode_gpiod_get_index(dev, 410 + of_fwnode_handle(child), 411 + "reset", 0, GPIOD_OUT_LOW, 412 + NULL); 413 + if (IS_ERR(pcie->id_reset_gpio[i])) { 414 + if (PTR_ERR(pcie->id_reset_gpio[i]) == -ENOENT) 415 + continue; 416 + return dev_err_probe(dev, PTR_ERR(pcie->id_reset_gpio[i]), 417 + "unable to get a valid reset gpio\n"); 418 + } 417 419 418 420 pcie->num_slots++; 419 421 if (pcie->num_slots > MAX_PCI_SLOTS) { 420 422 dev_err(dev, "Too many PCI slots!\n"); 421 - ret = -EINVAL; 422 - goto put_node; 423 + return -EINVAL; 423 424 } 424 425 425 426 ret = of_pci_get_devfn(child); 426 427 if (ret < 0) { 427 428 dev_err(dev, "failed to parse devfn: %d\n", ret); 428 - goto put_node; 429 + return ret; 429 430 } 430 431 431 432 slot = PCI_SLOT(ret); ··· 437 430 pcie->reset_names[i] = devm_kasprintf(dev, GFP_KERNEL, 438 431 "pcie_perst_%d", 439 432 slot); 440 - if (!pcie->reset_names[i]) { 441 - ret = -ENOMEM; 442 - goto put_node; 443 - } 433 + if (!pcie->reset_names[i]) 434 + return -ENOMEM; 435 + 436 + gpiod_set_consumer_name(pcie->id_reset_gpio[i], 437 + pcie->reset_names[i]); 444 438 } 445 439 } 446 440 447 441 return 0; 448 - 449 - put_node: 450 - of_node_put(child); 451 - of_node_put(parent); 452 - return ret; 453 442 } 454 443 455 444 static long kirin_pcie_get_resource(struct kirin_pcie *kirin_pcie, ··· 466 463 return PTR_ERR(kirin_pcie->apb); 467 464 468 465 /* pcie internal PERST# gpio */ 469 - kirin_pcie->gpio_id_dwc_perst = of_get_named_gpio(dev->of_node, 470 - "reset-gpios", 0); 471 - if (kirin_pcie->gpio_id_dwc_perst == -EPROBE_DEFER) { 472 - return -EPROBE_DEFER; 
473 - } else if (!gpio_is_valid(kirin_pcie->gpio_id_dwc_perst)) { 474 - dev_err(dev, "unable to get a valid gpio pin\n"); 475 - return -ENODEV; 476 - } 466 + kirin_pcie->id_dwc_perst_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW); 467 + if (IS_ERR(kirin_pcie->id_dwc_perst_gpio)) 468 + return dev_err_probe(dev, PTR_ERR(kirin_pcie->id_dwc_perst_gpio), 469 + "unable to get a valid gpio pin\n"); 470 + gpiod_set_consumer_name(kirin_pcie->id_dwc_perst_gpio, "pcie_perst_bridge"); 477 471 478 472 ret = kirin_pcie_get_gpio_enable(kirin_pcie, pdev); 479 473 if (ret) ··· 553 553 554 554 /* Send PERST# to each slot */ 555 555 for (i = 0; i < kirin_pcie->num_slots; i++) { 556 - ret = gpio_direction_output(kirin_pcie->gpio_id_reset[i], 1); 556 + ret = gpiod_direction_output_raw(kirin_pcie->id_reset_gpio[i], 1); 557 557 if (ret) { 558 558 dev_err(pci->dev, "PERST# %s error: %d\n", 559 559 kirin_pcie->reset_names[i], ret); ··· 623 623 return 0; 624 624 } 625 625 626 - static int kirin_pcie_gpio_request(struct kirin_pcie *kirin_pcie, 627 - struct device *dev) 628 - { 629 - int ret, i; 630 - 631 - for (i = 0; i < kirin_pcie->num_slots; i++) { 632 - if (!gpio_is_valid(kirin_pcie->gpio_id_reset[i])) { 633 - dev_err(dev, "unable to get a valid %s gpio\n", 634 - kirin_pcie->reset_names[i]); 635 - return -ENODEV; 636 - } 637 - 638 - ret = devm_gpio_request(dev, kirin_pcie->gpio_id_reset[i], 639 - kirin_pcie->reset_names[i]); 640 - if (ret) 641 - return ret; 642 - } 643 - 644 - for (i = 0; i < kirin_pcie->n_gpio_clkreq; i++) { 645 - if (!gpio_is_valid(kirin_pcie->gpio_id_clkreq[i])) { 646 - dev_err(dev, "unable to get a valid %s gpio\n", 647 - kirin_pcie->clkreq_names[i]); 648 - return -ENODEV; 649 - } 650 - 651 - ret = devm_gpio_request(dev, kirin_pcie->gpio_id_clkreq[i], 652 - kirin_pcie->clkreq_names[i]); 653 - if (ret) 654 - return ret; 655 - 656 - ret = gpio_direction_output(kirin_pcie->gpio_id_clkreq[i], 0); 657 - if (ret) 658 - return ret; 659 - } 660 - 661 - return 0; 662 - } 
663 - 664 626 static const struct dw_pcie_ops kirin_dw_pcie_ops = { 665 627 .read_dbi = kirin_pcie_read_dbi, 666 628 .write_dbi = kirin_pcie_write_dbi, ··· 642 680 return hi3660_pcie_phy_power_off(kirin_pcie); 643 681 644 682 for (i = 0; i < kirin_pcie->n_gpio_clkreq; i++) 645 - gpio_direction_output(kirin_pcie->gpio_id_clkreq[i], 1); 683 + gpiod_direction_output_raw(kirin_pcie->id_clkreq_gpio[i], 1); 646 684 647 685 phy_power_off(kirin_pcie->phy); 648 686 phy_exit(kirin_pcie->phy); ··· 669 707 if (IS_ERR(kirin_pcie->phy)) 670 708 return PTR_ERR(kirin_pcie->phy); 671 709 672 - ret = kirin_pcie_gpio_request(kirin_pcie, dev); 673 - if (ret) 674 - return ret; 675 - 676 710 ret = phy_init(kirin_pcie->phy); 677 711 if (ret) 678 712 goto err; ··· 681 723 /* perst assert Endpoint */ 682 724 usleep_range(REF_2_PERST_MIN, REF_2_PERST_MAX); 683 725 684 - if (!gpio_request(kirin_pcie->gpio_id_dwc_perst, "pcie_perst_bridge")) { 685 - ret = gpio_direction_output(kirin_pcie->gpio_id_dwc_perst, 1); 686 - if (ret) 687 - goto err; 688 - } 726 + ret = gpiod_direction_output_raw(kirin_pcie->id_dwc_perst_gpio, 1); 727 + if (ret) 728 + goto err; 689 729 690 730 usleep_range(PERST_2_ACCESS_MIN, PERST_2_ACCESS_MAX); 691 731
+40 -10
drivers/pci/controller/dwc/pcie-qcom-ep.c
··· 47 47 #define PARF_DBI_BASE_ADDR_HI 0x354 48 48 #define PARF_SLV_ADDR_SPACE_SIZE 0x358 49 49 #define PARF_SLV_ADDR_SPACE_SIZE_HI 0x35c 50 + #define PARF_NO_SNOOP_OVERIDE 0x3d4 50 51 #define PARF_ATU_BASE_ADDR 0x634 51 52 #define PARF_ATU_BASE_ADDR_HI 0x638 52 53 #define PARF_SRIS_MODE 0x644 ··· 86 85 #define PARF_DEBUG_INT_PM_DSTATE_CHANGE BIT(1) 87 86 #define PARF_DEBUG_INT_CFG_BUS_MASTER_EN BIT(2) 88 87 #define PARF_DEBUG_INT_RADM_PM_TURNOFF BIT(3) 88 + 89 + /* PARF_NO_SNOOP_OVERIDE register fields */ 90 + #define WR_NO_SNOOP_OVERIDE_EN BIT(1) 91 + #define RD_NO_SNOOP_OVERIDE_EN BIT(3) 89 92 90 93 /* PARF_DEVICE_TYPE register fields */ 91 94 #define PARF_DEVICE_TYPE_EP 0x0 ··· 155 150 }; 156 151 157 152 /** 153 + * struct qcom_pcie_ep_cfg - Per SoC config struct 154 + * @hdma_support: HDMA support on this SoC 155 + * @override_no_snoop: Override NO_SNOOP attribute in TLP to enable cache snooping 156 + */ 157 + struct qcom_pcie_ep_cfg { 158 + bool hdma_support; 159 + bool override_no_snoop; 160 + }; 161 + 162 + /** 158 163 * struct qcom_pcie_ep - Qualcomm PCIe Endpoint Controller 159 164 * @pci: Designware PCIe controller struct 160 165 * @parf: Qualcomm PCIe specific PARF register base ··· 182 167 * @num_clks: PCIe clocks count 183 168 * @perst_en: Flag for PERST enable 184 169 * @perst_sep_en: Flag for PERST separation enable 170 + * @cfg: PCIe EP config struct 185 171 * @link_status: PCIe Link status 186 172 * @global_irq: Qualcomm PCIe specific Global IRQ 187 173 * @perst_irq: PERST# IRQ ··· 210 194 u32 perst_en; 211 195 u32 perst_sep_en; 212 196 197 + const struct qcom_pcie_ep_cfg *cfg; 213 198 enum qcom_pcie_ep_link_status link_status; 214 199 int global_irq; 215 200 int perst_irq; ··· 499 482 val &= ~PARF_MSTR_AXI_CLK_EN; 500 483 writel_relaxed(val, pcie_ep->parf + PARF_MHI_CLOCK_RESET_CTRL); 501 484 502 - dw_pcie_ep_init_notify(&pcie_ep->pci.ep); 485 + pci_epc_init_notify(pcie_ep->pci.ep.epc); 503 486 504 487 /* Enable LTSSM */ 505 488 val = 
readl_relaxed(pcie_ep->parf + PARF_LTSSM); 506 489 val |= BIT(8); 507 490 writel_relaxed(val, pcie_ep->parf + PARF_LTSSM); 491 + 492 + if (pcie_ep->cfg && pcie_ep->cfg->override_no_snoop) 493 + writel_relaxed(WR_NO_SNOOP_OVERIDE_EN | RD_NO_SNOOP_OVERIDE_EN, 494 + pcie_ep->parf + PARF_NO_SNOOP_OVERIDE); 508 495 509 496 return 0; 510 497 ··· 521 500 static void qcom_pcie_perst_assert(struct dw_pcie *pci) 522 501 { 523 502 struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci); 524 - struct device *dev = pci->dev; 525 503 526 - if (pcie_ep->link_status == QCOM_PCIE_EP_LINK_DISABLED) { 527 - dev_dbg(dev, "Link is already disabled\n"); 528 - return; 529 - } 530 - 504 + pci_epc_deinit_notify(pci->ep.epc); 531 505 dw_pcie_ep_cleanup(&pci->ep); 532 506 qcom_pcie_disable_resources(pcie_ep); 533 507 pcie_ep->link_status = QCOM_PCIE_EP_LINK_DISABLED; ··· 656 640 if (FIELD_GET(PARF_INT_ALL_LINK_DOWN, status)) { 657 641 dev_dbg(dev, "Received Linkdown event\n"); 658 642 pcie_ep->link_status = QCOM_PCIE_EP_LINK_DOWN; 659 - pci_epc_linkdown(pci->ep.epc); 643 + dw_pcie_ep_linkdown(&pci->ep); 660 644 } else if (FIELD_GET(PARF_INT_ALL_BME, status)) { 661 - dev_dbg(dev, "Received BME event. Link is enabled!\n"); 645 + dev_dbg(dev, "Received Bus Master Enable event\n"); 662 646 pcie_ep->link_status = QCOM_PCIE_EP_LINK_ENABLED; 663 647 qcom_pcie_ep_icc_update(pcie_ep); 664 - pci_epc_bme_notify(pci->ep.epc); 648 + pci_epc_bus_master_enable_notify(pci->ep.epc); 665 649 } else if (FIELD_GET(PARF_INT_ALL_PM_TURNOFF, status)) { 666 650 dev_dbg(dev, "Received PM Turn-off event! 
Entering L23\n"); 667 651 val = readl_relaxed(pcie_ep->parf + PARF_PM_CTRL); ··· 832 816 pcie_ep->pci.ops = &pci_ops; 833 817 pcie_ep->pci.ep.ops = &pci_ep_ops; 834 818 pcie_ep->pci.edma.nr_irqs = 1; 819 + 820 + pcie_ep->cfg = of_device_get_match_data(dev); 821 + if (pcie_ep->cfg && pcie_ep->cfg->hdma_support) { 822 + pcie_ep->pci.edma.ll_wr_cnt = 8; 823 + pcie_ep->pci.edma.ll_rd_cnt = 8; 824 + pcie_ep->pci.edma.mf = EDMA_MF_HDMA_NATIVE; 825 + } 826 + 835 827 platform_set_drvdata(pdev, pcie_ep); 836 828 837 829 ret = qcom_pcie_ep_get_resources(pdev, pcie_ep); ··· 898 874 qcom_pcie_disable_resources(pcie_ep); 899 875 } 900 876 877 + static const struct qcom_pcie_ep_cfg cfg_1_34_0 = { 878 + .hdma_support = true, 879 + .override_no_snoop = true, 880 + }; 881 + 901 882 static const struct of_device_id qcom_pcie_ep_match[] = { 883 + { .compatible = "qcom,sa8775p-pcie-ep", .data = &cfg_1_34_0}, 902 884 { .compatible = "qcom,sdx55-pcie-ep", }, 903 885 { .compatible = "qcom,sm8450-pcie-ep", }, 904 886 { }
+203 -143
drivers/pci/controller/dwc/pcie-qcom.c
··· 18 18 #include <linux/io.h> 19 19 #include <linux/iopoll.h> 20 20 #include <linux/kernel.h> 21 + #include <linux/limits.h> 21 22 #include <linux/init.h> 22 23 #include <linux/of.h> 23 - #include <linux/of_gpio.h> 24 24 #include <linux/pci.h> 25 + #include <linux/pm_opp.h> 25 26 #include <linux/pm_runtime.h> 26 27 #include <linux/platform_device.h> 27 28 #include <linux/phy/pcie.h> ··· 31 30 #include <linux/reset.h> 32 31 #include <linux/slab.h> 33 32 #include <linux/types.h> 33 + #include <linux/units.h> 34 34 35 35 #include "../../pci.h" 36 36 #include "pcie-designware.h" ··· 53 51 #define PARF_SID_OFFSET 0x234 54 52 #define PARF_BDF_TRANSLATE_CFG 0x24c 55 53 #define PARF_SLV_ADDR_SPACE_SIZE 0x358 54 + #define PARF_NO_SNOOP_OVERIDE 0x3d4 56 55 #define PARF_DEVICE_TYPE 0x1000 57 56 #define PARF_BDF_TO_SID_TABLE_N 0x2000 58 57 #define PARF_BDF_TO_SID_CFG 0x2c00 ··· 121 118 /* PARF_LTSSM register fields */ 122 119 #define LTSSM_EN BIT(8) 123 120 121 + /* PARF_NO_SNOOP_OVERIDE register fields */ 122 + #define WR_NO_SNOOP_OVERIDE_EN BIT(1) 123 + #define RD_NO_SNOOP_OVERIDE_EN BIT(3) 124 + 124 125 /* PARF_DEVICE_TYPE register fields */ 125 126 #define DEVICE_TYPE_RC 0x4 126 127 ··· 161 154 #define QCOM_PCIE_LINK_SPEED_TO_BW(speed) \ 162 155 Mbps_to_icc(PCIE_SPEED2MBS_ENC(pcie_link_speed[speed])) 163 156 164 - #define QCOM_PCIE_1_0_0_MAX_CLOCKS 4 165 157 struct qcom_pcie_resources_1_0_0 { 166 - struct clk_bulk_data clks[QCOM_PCIE_1_0_0_MAX_CLOCKS]; 158 + struct clk_bulk_data *clks; 159 + int num_clks; 167 160 struct reset_control *core; 168 161 struct regulator *vdda; 169 162 }; 170 163 171 - #define QCOM_PCIE_2_1_0_MAX_CLOCKS 5 172 164 #define QCOM_PCIE_2_1_0_MAX_RESETS 6 173 165 #define QCOM_PCIE_2_1_0_MAX_SUPPLY 3 174 166 struct qcom_pcie_resources_2_1_0 { 175 - struct clk_bulk_data clks[QCOM_PCIE_2_1_0_MAX_CLOCKS]; 167 + struct clk_bulk_data *clks; 168 + int num_clks; 176 169 struct reset_control_bulk_data resets[QCOM_PCIE_2_1_0_MAX_RESETS]; 177 170 int 
num_resets; 178 171 struct regulator_bulk_data supplies[QCOM_PCIE_2_1_0_MAX_SUPPLY]; 179 172 }; 180 173 181 - #define QCOM_PCIE_2_3_2_MAX_CLOCKS 4 182 174 #define QCOM_PCIE_2_3_2_MAX_SUPPLY 2 183 175 struct qcom_pcie_resources_2_3_2 { 184 - struct clk_bulk_data clks[QCOM_PCIE_2_3_2_MAX_CLOCKS]; 176 + struct clk_bulk_data *clks; 177 + int num_clks; 185 178 struct regulator_bulk_data supplies[QCOM_PCIE_2_3_2_MAX_SUPPLY]; 186 179 }; 187 180 188 - #define QCOM_PCIE_2_3_3_MAX_CLOCKS 5 189 181 #define QCOM_PCIE_2_3_3_MAX_RESETS 7 190 182 struct qcom_pcie_resources_2_3_3 { 191 - struct clk_bulk_data clks[QCOM_PCIE_2_3_3_MAX_CLOCKS]; 183 + struct clk_bulk_data *clks; 184 + int num_clks; 192 185 struct reset_control_bulk_data rst[QCOM_PCIE_2_3_3_MAX_RESETS]; 193 186 }; 194 187 195 - #define QCOM_PCIE_2_4_0_MAX_CLOCKS 4 196 188 #define QCOM_PCIE_2_4_0_MAX_RESETS 12 197 189 struct qcom_pcie_resources_2_4_0 { 198 - struct clk_bulk_data clks[QCOM_PCIE_2_4_0_MAX_CLOCKS]; 190 + struct clk_bulk_data *clks; 199 191 int num_clks; 200 192 struct reset_control_bulk_data resets[QCOM_PCIE_2_4_0_MAX_RESETS]; 201 193 int num_resets; 202 194 }; 203 195 204 - #define QCOM_PCIE_2_7_0_MAX_CLOCKS 15 205 196 #define QCOM_PCIE_2_7_0_MAX_SUPPLIES 2 206 197 struct qcom_pcie_resources_2_7_0 { 207 - struct clk_bulk_data clks[QCOM_PCIE_2_7_0_MAX_CLOCKS]; 198 + struct clk_bulk_data *clks; 208 199 int num_clks; 209 200 struct regulator_bulk_data supplies[QCOM_PCIE_2_7_0_MAX_SUPPLIES]; 210 201 struct reset_control *rst; 211 202 }; 212 203 213 - #define QCOM_PCIE_2_9_0_MAX_CLOCKS 5 214 204 struct qcom_pcie_resources_2_9_0 { 215 - struct clk_bulk_data clks[QCOM_PCIE_2_9_0_MAX_CLOCKS]; 205 + struct clk_bulk_data *clks; 206 + int num_clks; 216 207 struct reset_control *rst; 217 208 }; 218 209 ··· 236 231 int (*config_sid)(struct qcom_pcie *pcie); 237 232 }; 238 233 234 + /** 235 + * struct qcom_pcie_cfg - Per SoC config struct 236 + * @ops: qcom PCIe ops structure 237 + * @override_no_snoop: Override 
NO_SNOOP attribute in TLP to enable cache 238 + * snooping 239 + */ 239 240 struct qcom_pcie_cfg { 240 241 const struct qcom_pcie_ops *ops; 242 + bool override_no_snoop; 241 243 bool no_l0s; 242 244 }; 243 245 ··· 257 245 struct phy *phy; 258 246 struct gpio_desc *reset; 259 247 struct icc_path *icc_mem; 248 + struct icc_path *icc_cpu; 260 249 const struct qcom_pcie_cfg *cfg; 261 250 struct dentry *debugfs; 262 251 bool suspended; ··· 350 337 if (ret) 351 338 return ret; 352 339 353 - res->clks[0].id = "iface"; 354 - res->clks[1].id = "core"; 355 - res->clks[2].id = "phy"; 356 - res->clks[3].id = "aux"; 357 - res->clks[4].id = "ref"; 358 - 359 - /* iface, core, phy are required */ 360 - ret = devm_clk_bulk_get(dev, 3, res->clks); 361 - if (ret < 0) 362 - return ret; 363 - 364 - /* aux, ref are optional */ 365 - ret = devm_clk_bulk_get_optional(dev, 2, res->clks + 3); 366 - if (ret < 0) 367 - return ret; 340 + res->num_clks = devm_clk_bulk_get_all(dev, &res->clks); 341 + if (res->num_clks < 0) { 342 + dev_err(dev, "Failed to get clocks\n"); 343 + return res->num_clks; 344 + } 368 345 369 346 res->resets[0].id = "pci"; 370 347 res->resets[1].id = "axi"; ··· 376 373 { 377 374 struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0; 378 375 379 - clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks); 376 + clk_bulk_disable_unprepare(res->num_clks, res->clks); 380 377 reset_control_bulk_assert(res->num_resets, res->resets); 381 378 382 379 writel(1, pcie->parf + PARF_PHY_CTRL); ··· 428 425 val &= ~PHY_TEST_PWR_DOWN; 429 426 writel(val, pcie->parf + PARF_PHY_CTRL); 430 427 431 - ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks); 428 + ret = clk_bulk_prepare_enable(res->num_clks, res->clks); 432 429 if (ret) 433 430 return ret; 434 431 ··· 479 476 struct qcom_pcie_resources_1_0_0 *res = &pcie->res.v1_0_0; 480 477 struct dw_pcie *pci = pcie->pci; 481 478 struct device *dev = pci->dev; 482 - int ret; 483 479 484 480 res->vdda = devm_regulator_get(dev, 
"vdda"); 485 481 if (IS_ERR(res->vdda)) 486 482 return PTR_ERR(res->vdda); 487 483 488 - res->clks[0].id = "iface"; 489 - res->clks[1].id = "aux"; 490 - res->clks[2].id = "master_bus"; 491 - res->clks[3].id = "slave_bus"; 492 - 493 - ret = devm_clk_bulk_get(dev, ARRAY_SIZE(res->clks), res->clks); 494 - if (ret < 0) 495 - return ret; 484 + res->num_clks = devm_clk_bulk_get_all(dev, &res->clks); 485 + if (res->num_clks < 0) { 486 + dev_err(dev, "Failed to get clocks\n"); 487 + return res->num_clks; 488 + } 496 489 497 490 res->core = devm_reset_control_get_exclusive(dev, "core"); 498 491 return PTR_ERR_OR_ZERO(res->core); ··· 499 500 struct qcom_pcie_resources_1_0_0 *res = &pcie->res.v1_0_0; 500 501 501 502 reset_control_assert(res->core); 502 - clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks); 503 + clk_bulk_disable_unprepare(res->num_clks, res->clks); 503 504 regulator_disable(res->vdda); 504 505 } 505 506 ··· 516 517 return ret; 517 518 } 518 519 519 - ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks); 520 + ret = clk_bulk_prepare_enable(res->num_clks, res->clks); 520 521 if (ret) { 521 522 dev_err(dev, "cannot prepare/enable clocks\n"); 522 523 goto err_assert_reset; ··· 531 532 return 0; 532 533 533 534 err_disable_clks: 534 - clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks); 535 + clk_bulk_disable_unprepare(res->num_clks, res->clks); 535 536 err_assert_reset: 536 537 reset_control_assert(res->core); 537 538 ··· 579 580 if (ret) 580 581 return ret; 581 582 582 - res->clks[0].id = "aux"; 583 - res->clks[1].id = "cfg"; 584 - res->clks[2].id = "bus_master"; 585 - res->clks[3].id = "bus_slave"; 586 - 587 - ret = devm_clk_bulk_get(dev, ARRAY_SIZE(res->clks), res->clks); 588 - if (ret < 0) 589 - return ret; 583 + res->num_clks = devm_clk_bulk_get_all(dev, &res->clks); 584 + if (res->num_clks < 0) { 585 + dev_err(dev, "Failed to get clocks\n"); 586 + return res->num_clks; 587 + } 590 588 591 589 return 0; 592 590 } ··· 592 596 { 
593 597 struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2; 594 598 595 - clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks); 599 + clk_bulk_disable_unprepare(res->num_clks, res->clks); 596 600 regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); 597 601 } 598 602 ··· 609 613 return ret; 610 614 } 611 615 612 - ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks); 616 + ret = clk_bulk_prepare_enable(res->num_clks, res->clks); 613 617 if (ret) { 614 618 dev_err(dev, "cannot prepare/enable clocks\n"); 615 619 regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); ··· 657 661 bool is_ipq = of_device_is_compatible(dev->of_node, "qcom,pcie-ipq4019"); 658 662 int ret; 659 663 660 - res->clks[0].id = "aux"; 661 - res->clks[1].id = "master_bus"; 662 - res->clks[2].id = "slave_bus"; 663 - res->clks[3].id = "iface"; 664 - 665 - /* qcom,pcie-ipq4019 is defined without "iface" */ 666 - res->num_clks = is_ipq ? 3 : 4; 667 - 668 - ret = devm_clk_bulk_get(dev, res->num_clks, res->clks); 669 - if (ret < 0) 670 - return ret; 664 + res->num_clks = devm_clk_bulk_get_all(dev, &res->clks); 665 + if (res->num_clks < 0) { 666 + dev_err(dev, "Failed to get clocks\n"); 667 + return res->num_clks; 668 + } 671 669 672 670 res->resets[0].id = "axi_m"; 673 671 res->resets[1].id = "axi_s"; ··· 732 742 struct device *dev = pci->dev; 733 743 int ret; 734 744 735 - res->clks[0].id = "iface"; 736 - res->clks[1].id = "axi_m"; 737 - res->clks[2].id = "axi_s"; 738 - res->clks[3].id = "ahb"; 739 - res->clks[4].id = "aux"; 740 - 741 - ret = devm_clk_bulk_get(dev, ARRAY_SIZE(res->clks), res->clks); 742 - if (ret < 0) 743 - return ret; 745 + res->num_clks = devm_clk_bulk_get_all(dev, &res->clks); 746 + if (res->num_clks < 0) { 747 + dev_err(dev, "Failed to get clocks\n"); 748 + return res->num_clks; 749 + } 744 750 745 751 res->rst[0].id = "axi_m"; 746 752 res->rst[1].id = "axi_s"; ··· 757 771 { 758 772 struct qcom_pcie_resources_2_3_3 *res = 
&pcie->res.v2_3_3; 759 773 760 - clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks); 774 + clk_bulk_disable_unprepare(res->num_clks, res->clks); 761 775 } 762 776 763 777 static int qcom_pcie_init_2_3_3(struct qcom_pcie *pcie) ··· 787 801 */ 788 802 usleep_range(2000, 2500); 789 803 790 - ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks); 804 + ret = clk_bulk_prepare_enable(res->num_clks, res->clks); 791 805 if (ret) { 792 806 dev_err(dev, "cannot prepare/enable clocks\n"); 793 807 goto err_assert_resets; ··· 848 862 struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0; 849 863 struct dw_pcie *pci = pcie->pci; 850 864 struct device *dev = pci->dev; 851 - unsigned int num_clks, num_opt_clks; 852 - unsigned int idx; 853 865 int ret; 854 866 855 867 res->rst = devm_reset_control_array_get_exclusive(dev); ··· 861 877 if (ret) 862 878 return ret; 863 879 864 - idx = 0; 865 - res->clks[idx++].id = "aux"; 866 - res->clks[idx++].id = "cfg"; 867 - res->clks[idx++].id = "bus_master"; 868 - res->clks[idx++].id = "bus_slave"; 869 - res->clks[idx++].id = "slave_q2a"; 870 - 871 - num_clks = idx; 872 - 873 - ret = devm_clk_bulk_get(dev, num_clks, res->clks); 874 - if (ret < 0) 875 - return ret; 876 - 877 - res->clks[idx++].id = "tbu"; 878 - res->clks[idx++].id = "ddrss_sf_tbu"; 879 - res->clks[idx++].id = "aggre0"; 880 - res->clks[idx++].id = "aggre1"; 881 - res->clks[idx++].id = "noc_aggr"; 882 - res->clks[idx++].id = "noc_aggr_4"; 883 - res->clks[idx++].id = "noc_aggr_south_sf"; 884 - res->clks[idx++].id = "cnoc_qx"; 885 - res->clks[idx++].id = "sleep"; 886 - res->clks[idx++].id = "cnoc_sf_axi"; 887 - 888 - num_opt_clks = idx - num_clks; 889 - res->num_clks = idx; 890 - 891 - ret = devm_clk_bulk_get_optional(dev, num_opt_clks, res->clks + num_clks); 892 - if (ret < 0) 893 - return ret; 880 + res->num_clks = devm_clk_bulk_get_all(dev, &res->clks); 881 + if (res->num_clks < 0) { 882 + dev_err(dev, "Failed to get clocks\n"); 883 + return res->num_clks; 
884 + } 894 885 895 886 return 0; 896 887 } ··· 945 986 946 987 static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie) 947 988 { 989 + const struct qcom_pcie_cfg *pcie_cfg = pcie->cfg; 990 + 991 + if (pcie_cfg->override_no_snoop) 992 + writel(WR_NO_SNOOP_OVERIDE_EN | RD_NO_SNOOP_OVERIDE_EN, 993 + pcie->parf + PARF_NO_SNOOP_OVERIDE); 994 + 948 995 qcom_pcie_clear_aspm_l0s(pcie->pci); 949 996 qcom_pcie_clear_hpc(pcie->pci); 950 997 ··· 1066 1101 struct qcom_pcie_resources_2_9_0 *res = &pcie->res.v2_9_0; 1067 1102 struct dw_pcie *pci = pcie->pci; 1068 1103 struct device *dev = pci->dev; 1069 - int ret; 1070 1104 1071 - res->clks[0].id = "iface"; 1072 - res->clks[1].id = "axi_m"; 1073 - res->clks[2].id = "axi_s"; 1074 - res->clks[3].id = "axi_bridge"; 1075 - res->clks[4].id = "rchng"; 1076 - 1077 - ret = devm_clk_bulk_get(dev, ARRAY_SIZE(res->clks), res->clks); 1078 - if (ret < 0) 1079 - return ret; 1105 + res->num_clks = devm_clk_bulk_get_all(dev, &res->clks); 1106 + if (res->num_clks < 0) { 1107 + dev_err(dev, "Failed to get clocks\n"); 1108 + return res->num_clks; 1109 + } 1080 1110 1081 1111 res->rst = devm_reset_control_array_get_exclusive(dev); 1082 1112 if (IS_ERR(res->rst)) ··· 1084 1124 { 1085 1125 struct qcom_pcie_resources_2_9_0 *res = &pcie->res.v2_9_0; 1086 1126 1087 - clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks); 1127 + clk_bulk_disable_unprepare(res->num_clks, res->clks); 1088 1128 } 1089 1129 1090 1130 static int qcom_pcie_init_2_9_0(struct qcom_pcie *pcie) ··· 1113 1153 1114 1154 usleep_range(2000, 2500); 1115 1155 1116 - return clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks); 1156 + return clk_bulk_prepare_enable(res->num_clks, res->clks); 1117 1157 } 1118 1158 1119 1159 static int qcom_pcie_post_init_2_9_0(struct qcom_pcie *pcie) ··· 1326 1366 .ops = &ops_1_9_0, 1327 1367 }; 1328 1368 1369 + static const struct qcom_pcie_cfg cfg_1_34_0 = { 1370 + .ops = &ops_1_9_0, 1371 + .override_no_snoop = true, 1372 + }; 1373 + 
1329 1374 static const struct qcom_pcie_cfg cfg_2_1_0 = { 1330 1375 .ops = &ops_2_1_0, 1331 1376 }; ··· 1374 1409 if (IS_ERR(pcie->icc_mem)) 1375 1410 return PTR_ERR(pcie->icc_mem); 1376 1411 1412 + pcie->icc_cpu = devm_of_icc_get(pci->dev, "cpu-pcie"); 1413 + if (IS_ERR(pcie->icc_cpu)) 1414 + return PTR_ERR(pcie->icc_cpu); 1377 1415 /* 1378 1416 * Some Qualcomm platforms require interconnect bandwidth constraints 1379 1417 * to be set before enabling interconnect clocks. ··· 1386 1418 */ 1387 1419 ret = icc_set_bw(pcie->icc_mem, 0, QCOM_PCIE_LINK_SPEED_TO_BW(1)); 1388 1420 if (ret) { 1389 - dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n", 1421 + dev_err(pci->dev, "Failed to set bandwidth for PCIe-MEM interconnect path: %d\n", 1390 1422 ret); 1423 + return ret; 1424 + } 1425 + 1426 + /* 1427 + * Since the CPU-PCIe path is only used for activities like register 1428 + * access of the host controller and endpoint Config/BAR space access, 1429 + * HW team has recommended to use a minimal bandwidth of 1KBps just to 1430 + * keep the path active. 
1431 + */ 1432 + ret = icc_set_bw(pcie->icc_cpu, 0, kBps_to_icc(1)); 1433 + if (ret) { 1434 + dev_err(pci->dev, "Failed to set bandwidth for CPU-PCIe interconnect path: %d\n", 1435 + ret); 1436 + icc_set_bw(pcie->icc_mem, 0, 0); 1391 1437 return ret; 1392 1438 } 1393 1439 1394 1440 return 0; 1395 1441 } 1396 1442 1397 - static void qcom_pcie_icc_update(struct qcom_pcie *pcie) 1443 + static void qcom_pcie_icc_opp_update(struct qcom_pcie *pcie) 1398 1444 { 1445 + u32 offset, status, width, speed; 1399 1446 struct dw_pcie *pci = pcie->pci; 1400 - u32 offset, status; 1401 - int speed, width; 1402 - int ret; 1403 - 1404 - if (!pcie->icc_mem) 1405 - return; 1447 + unsigned long freq_kbps; 1448 + struct dev_pm_opp *opp; 1449 + int ret, freq_mbps; 1406 1450 1407 1451 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 1408 1452 status = readw(pci->dbi_base + offset + PCI_EXP_LNKSTA); ··· 1426 1446 speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, status); 1427 1447 width = FIELD_GET(PCI_EXP_LNKSTA_NLW, status); 1428 1448 1429 - ret = icc_set_bw(pcie->icc_mem, 0, width * QCOM_PCIE_LINK_SPEED_TO_BW(speed)); 1430 - if (ret) { 1431 - dev_err(pci->dev, "failed to set interconnect bandwidth: %d\n", 1432 - ret); 1449 + if (pcie->icc_mem) { 1450 + ret = icc_set_bw(pcie->icc_mem, 0, 1451 + width * QCOM_PCIE_LINK_SPEED_TO_BW(speed)); 1452 + if (ret) { 1453 + dev_err(pci->dev, "Failed to set bandwidth for PCIe-MEM interconnect path: %d\n", 1454 + ret); 1455 + } 1456 + } else { 1457 + freq_mbps = pcie_dev_speed_mbps(pcie_link_speed[speed]); 1458 + if (freq_mbps < 0) 1459 + return; 1460 + 1461 + freq_kbps = freq_mbps * KILO; 1462 + opp = dev_pm_opp_find_freq_exact(pci->dev, freq_kbps * width, 1463 + true); 1464 + if (!IS_ERR(opp)) { 1465 + ret = dev_pm_opp_set_opp(pci->dev, opp); 1466 + if (ret) 1467 + dev_err(pci->dev, "Failed to set OPP for freq (%lu): %d\n", 1468 + freq_kbps * width, ret); 1469 + dev_pm_opp_put(opp); 1470 + } 1433 1471 } 1434 1472 } 1435 1473 ··· 1491 1493 static int 
qcom_pcie_probe(struct platform_device *pdev) 1492 1494 { 1493 1495 const struct qcom_pcie_cfg *pcie_cfg; 1496 + unsigned long max_freq = ULONG_MAX; 1494 1497 struct device *dev = &pdev->dev; 1498 + struct dev_pm_opp *opp; 1495 1499 struct qcom_pcie *pcie; 1496 1500 struct dw_pcie_rp *pp; 1497 1501 struct resource *res; ··· 1561 1561 goto err_pm_runtime_put; 1562 1562 } 1563 1563 1564 - ret = qcom_pcie_icc_init(pcie); 1565 - if (ret) 1564 + /* OPP table is optional */ 1565 + ret = devm_pm_opp_of_add_table(dev); 1566 + if (ret && ret != -ENODEV) { 1567 + dev_err_probe(dev, ret, "Failed to add OPP table\n"); 1566 1568 goto err_pm_runtime_put; 1569 + } 1570 + 1571 + /* 1572 + * Before the PCIe link is initialized, vote for highest OPP in the OPP 1573 + * table, so that we are voting for maximum voltage corner for the 1574 + * link to come up in maximum supported speed. At the end of the 1575 + * probe(), OPP will be updated using qcom_pcie_icc_opp_update(). 1576 + */ 1577 + if (!ret) { 1578 + opp = dev_pm_opp_find_freq_floor(dev, &max_freq); 1579 + if (IS_ERR(opp)) { 1580 + ret = PTR_ERR(opp); 1581 + dev_err_probe(pci->dev, ret, 1582 + "Unable to find max freq OPP\n"); 1583 + goto err_pm_runtime_put; 1584 + } else { 1585 + ret = dev_pm_opp_set_opp(dev, opp); 1586 + } 1587 + 1588 + dev_pm_opp_put(opp); 1589 + if (ret) { 1590 + dev_err_probe(pci->dev, ret, 1591 + "Failed to set OPP for freq %lu\n", 1592 + max_freq); 1593 + goto err_pm_runtime_put; 1594 + } 1595 + } else { 1596 + /* Skip ICC init if OPP is supported as it is handled by OPP */ 1597 + ret = qcom_pcie_icc_init(pcie); 1598 + if (ret) 1599 + goto err_pm_runtime_put; 1600 + } 1567 1601 1568 1602 ret = pcie->cfg->ops->get_resources(pcie); 1569 1603 if (ret) ··· 1617 1583 goto err_phy_exit; 1618 1584 } 1619 1585 1620 - qcom_pcie_icc_update(pcie); 1586 + qcom_pcie_icc_opp_update(pcie); 1621 1587 1622 1588 if (pcie->mhi) 1623 1589 qcom_pcie_init_debugfs(pcie); ··· 1636 1602 static int 
qcom_pcie_suspend_noirq(struct device *dev) 1637 1603 { 1638 1604 struct qcom_pcie *pcie = dev_get_drvdata(dev); 1639 - int ret; 1605 + int ret = 0; 1640 1606 1641 1607 /* 1642 1608 * Set minimum bandwidth required to keep data path functional during 1643 1609 * suspend. 1644 1610 */ 1645 - ret = icc_set_bw(pcie->icc_mem, 0, kBps_to_icc(1)); 1646 - if (ret) { 1647 - dev_err(dev, "Failed to set interconnect bandwidth: %d\n", ret); 1648 - return ret; 1611 + if (pcie->icc_mem) { 1612 + ret = icc_set_bw(pcie->icc_mem, 0, kBps_to_icc(1)); 1613 + if (ret) { 1614 + dev_err(dev, 1615 + "Failed to set bandwidth for PCIe-MEM interconnect path: %d\n", 1616 + ret); 1617 + return ret; 1618 + } 1649 1619 } 1650 1620 1651 1621 /* ··· 1672 1634 pcie->suspended = true; 1673 1635 } 1674 1636 1675 - return 0; 1637 + /* 1638 + * Only disable CPU-PCIe interconnect path if the suspend is non-S2RAM. 1639 + * Because on some platforms, DBI access can happen very late during the 1640 + * S2RAM and a non-active CPU-PCIe interconnect path may lead to NoC 1641 + * error. 
1642 + */ 1643 + if (pm_suspend_target_state != PM_SUSPEND_MEM) { 1644 + ret = icc_disable(pcie->icc_cpu); 1645 + if (ret) 1646 + dev_err(dev, "Failed to disable CPU-PCIe interconnect path: %d\n", ret); 1647 + 1648 + if (!pcie->icc_mem) 1649 + dev_pm_opp_set_opp(pcie->pci->dev, NULL); 1650 + } 1651 + return ret; 1676 1652 } 1677 1653 1678 1654 static int qcom_pcie_resume_noirq(struct device *dev) 1679 1655 { 1680 1656 struct qcom_pcie *pcie = dev_get_drvdata(dev); 1681 1657 int ret; 1658 + 1659 + if (pm_suspend_target_state != PM_SUSPEND_MEM) { 1660 + ret = icc_enable(pcie->icc_cpu); 1661 + if (ret) { 1662 + dev_err(dev, "Failed to enable CPU-PCIe interconnect path: %d\n", ret); 1663 + return ret; 1664 + } 1665 + } 1682 1666 1683 1667 if (pcie->suspended) { 1684 1668 ret = qcom_pcie_host_init(&pcie->pci->pp); ··· 1710 1650 pcie->suspended = false; 1711 1651 } 1712 1652 1713 - qcom_pcie_icc_update(pcie); 1653 + qcom_pcie_icc_opp_update(pcie); 1714 1654 1715 1655 return 0; 1716 1656 } ··· 1727 1667 { .compatible = "qcom,pcie-msm8996", .data = &cfg_2_3_2 }, 1728 1668 { .compatible = "qcom,pcie-qcs404", .data = &cfg_2_4_0 }, 1729 1669 { .compatible = "qcom,pcie-sa8540p", .data = &cfg_sc8280xp }, 1730 - { .compatible = "qcom,pcie-sa8775p", .data = &cfg_1_9_0}, 1670 + { .compatible = "qcom,pcie-sa8775p", .data = &cfg_1_34_0}, 1731 1671 { .compatible = "qcom,pcie-sc7280", .data = &cfg_1_9_0 }, 1732 1672 { .compatible = "qcom,pcie-sc8180x", .data = &cfg_1_9_0 }, 1733 1673 { .compatible = "qcom,pcie-sc8280xp", .data = &cfg_sc8280xp },
+272 -36
drivers/pci/controller/dwc/pcie-rcar-gen4.c
··· 2 2 /* 3 3 * PCIe controller driver for Renesas R-Car Gen4 Series SoCs 4 4 * Copyright (C) 2022-2023 Renesas Electronics Corporation 5 + * 6 + * The r8a779g0 (R-Car V4H) controller requires a specific firmware to be 7 + * provided, to initialize the PHY. Otherwise, the PCIe controller will not 8 + * work. 5 9 */ 6 10 7 11 #include <linux/delay.h> 12 + #include <linux/firmware.h> 8 13 #include <linux/interrupt.h> 9 14 #include <linux/io.h> 15 + #include <linux/iopoll.h> 10 16 #include <linux/module.h> 11 17 #include <linux/of.h> 12 18 #include <linux/pci.h> ··· 26 20 /* Renesas-specific */ 27 21 /* PCIe Mode Setting Register 0 */ 28 22 #define PCIEMSR0 0x0000 29 - #define BIFUR_MOD_SET_ON BIT(0) 23 + #define APP_SRIS_MODE BIT(6) 30 24 #define DEVICE_TYPE_EP 0 31 25 #define DEVICE_TYPE_RC BIT(4) 26 + #define BIFUR_MOD_SET_ON BIT(0) 32 27 33 28 /* PCIe Interrupt Status 0 */ 34 29 #define PCIEINTSTS0 0x0084 ··· 44 37 #define PCIEDMAINTSTSEN 0x0314 45 38 #define PCIEDMAINTSTSEN_INIT GENMASK(15, 0) 46 39 40 + /* Port Logic Registers 89 */ 41 + #define PRTLGC89 0x0b70 42 + 43 + /* Port Logic Registers 90 */ 44 + #define PRTLGC90 0x0b74 45 + 47 46 /* PCIe Reset Control Register 1 */ 48 47 #define PCIERSTCTRL1 0x0014 49 48 #define APP_HOLD_PHY_RST BIT(16) 50 49 #define APP_LTSSM_ENABLE BIT(0) 50 + 51 + /* PCIe Power Management Control */ 52 + #define PCIEPWRMNGCTRL 0x0070 53 + #define APP_CLK_REQ_N BIT(11) 54 + #define APP_CLK_PM_EN BIT(10) 51 55 52 56 #define RCAR_NUM_SPEED_CHANGE_RETRIES 10 53 57 #define RCAR_MAX_LINK_SPEED 4 ··· 66 48 #define RCAR_GEN4_PCIE_EP_FUNC_DBI_OFFSET 0x1000 67 49 #define RCAR_GEN4_PCIE_EP_FUNC_DBI2_OFFSET 0x800 68 50 51 + #define RCAR_GEN4_PCIE_FIRMWARE_NAME "rcar_gen4_pcie.bin" 52 + #define RCAR_GEN4_PCIE_FIRMWARE_BASE_ADDR 0xc000 53 + MODULE_FIRMWARE(RCAR_GEN4_PCIE_FIRMWARE_NAME); 54 + 55 + struct rcar_gen4_pcie; 56 + struct rcar_gen4_pcie_drvdata { 57 + void (*additional_common_init)(struct rcar_gen4_pcie *rcar); 58 + int 
(*ltssm_control)(struct rcar_gen4_pcie *rcar, bool enable); 59 + enum dw_pcie_device_mode mode; 60 + }; 61 + 69 62 struct rcar_gen4_pcie { 70 63 struct dw_pcie dw; 71 64 void __iomem *base; 65 + void __iomem *phy_base; 72 66 struct platform_device *pdev; 73 - enum dw_pcie_device_mode mode; 67 + const struct rcar_gen4_pcie_drvdata *drvdata; 74 68 }; 75 69 #define to_rcar_gen4_pcie(_dw) container_of(_dw, struct rcar_gen4_pcie, dw) 76 70 77 71 /* Common */ 78 - static void rcar_gen4_pcie_ltssm_enable(struct rcar_gen4_pcie *rcar, 79 - bool enable) 80 - { 81 - u32 val; 82 - 83 - val = readl(rcar->base + PCIERSTCTRL1); 84 - if (enable) { 85 - val |= APP_LTSSM_ENABLE; 86 - val &= ~APP_HOLD_PHY_RST; 87 - } else { 88 - /* 89 - * Since the datasheet of R-Car doesn't mention how to assert 90 - * the APP_HOLD_PHY_RST, don't assert it again. Otherwise, 91 - * hang-up issue happened in the dw_edma_core_off() when 92 - * the controller didn't detect a PCI device. 93 - */ 94 - val &= ~APP_LTSSM_ENABLE; 95 - } 96 - writel(val, rcar->base + PCIERSTCTRL1); 97 - } 98 - 99 72 static int rcar_gen4_pcie_link_up(struct dw_pcie *dw) 100 73 { 101 74 struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw); ··· 132 123 static int rcar_gen4_pcie_start_link(struct dw_pcie *dw) 133 124 { 134 125 struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw); 135 - int i, changes; 126 + int i, changes, ret; 136 127 137 - rcar_gen4_pcie_ltssm_enable(rcar, true); 128 + if (rcar->drvdata->ltssm_control) { 129 + ret = rcar->drvdata->ltssm_control(rcar, true); 130 + if (ret) 131 + return ret; 132 + } 138 133 139 134 /* 140 135 * Require direct speed change with retrying here if the link_gen is ··· 150 137 * Since dw_pcie_setup_rc() sets it once, PCIe Gen2 will be trained. 151 138 * So, this needs remaining times for up to PCIe Gen4 if RC mode. 
152 139 */ 153 - if (changes && rcar->mode == DW_PCIE_RC_TYPE) 140 + if (changes && rcar->drvdata->mode == DW_PCIE_RC_TYPE) 154 141 changes--; 155 142 156 143 for (i = 0; i < changes; i++) { ··· 166 153 { 167 154 struct rcar_gen4_pcie *rcar = to_rcar_gen4_pcie(dw); 168 155 169 - rcar_gen4_pcie_ltssm_enable(rcar, false); 156 + if (rcar->drvdata->ltssm_control) 157 + rcar->drvdata->ltssm_control(rcar, false); 170 158 } 171 159 172 160 static int rcar_gen4_pcie_common_init(struct rcar_gen4_pcie *rcar) ··· 186 172 reset_control_assert(dw->core_rsts[DW_PCIE_PWR_RST].rstc); 187 173 188 174 val = readl(rcar->base + PCIEMSR0); 189 - if (rcar->mode == DW_PCIE_RC_TYPE) { 175 + if (rcar->drvdata->mode == DW_PCIE_RC_TYPE) { 190 176 val |= DEVICE_TYPE_RC; 191 - } else if (rcar->mode == DW_PCIE_EP_TYPE) { 177 + } else if (rcar->drvdata->mode == DW_PCIE_EP_TYPE) { 192 178 val |= DEVICE_TYPE_EP; 193 179 } else { 194 180 ret = -EINVAL; ··· 203 189 ret = reset_control_deassert(dw->core_rsts[DW_PCIE_PWR_RST].rstc); 204 190 if (ret) 205 191 goto err_unprepare; 192 + 193 + if (rcar->drvdata->additional_common_init) 194 + rcar->drvdata->additional_common_init(rcar); 206 195 207 196 return 0; 208 197 ··· 248 231 249 232 static int rcar_gen4_pcie_get_resources(struct rcar_gen4_pcie *rcar) 250 233 { 234 + rcar->phy_base = devm_platform_ioremap_resource_byname(rcar->pdev, "phy"); 235 + if (IS_ERR(rcar->phy_base)) 236 + return PTR_ERR(rcar->phy_base); 237 + 251 238 /* Renesas-specific registers */ 252 239 rcar->base = devm_platform_ioremap_resource_byname(rcar->pdev, "app"); 253 240 ··· 276 255 rcar->dw.ops = &dw_pcie_ops; 277 256 rcar->dw.dev = dev; 278 257 rcar->pdev = pdev; 279 - dw_pcie_cap_set(&rcar->dw, EDMA_UNROLL); 258 + rcar->dw.edma.mf = EDMA_MF_EDMA_UNROLL; 280 259 dw_pcie_cap_set(&rcar->dw, REQ_RES); 281 260 platform_set_drvdata(pdev, rcar); 282 261 ··· 458 437 rcar_gen4_pcie_ep_deinit(rcar); 459 438 } 460 439 461 - dw_pcie_ep_init_notify(ep); 440 + pci_epc_init_notify(ep->epc); 
462 441 463 442 return ret; 464 443 } ··· 472 451 /* Common */ 473 452 static int rcar_gen4_add_dw_pcie(struct rcar_gen4_pcie *rcar) 474 453 { 475 - rcar->mode = (uintptr_t)of_device_get_match_data(&rcar->pdev->dev); 454 + rcar->drvdata = of_device_get_match_data(&rcar->pdev->dev); 455 + if (!rcar->drvdata) 456 + return -EINVAL; 476 457 477 - switch (rcar->mode) { 458 + switch (rcar->drvdata->mode) { 478 459 case DW_PCIE_RC_TYPE: 479 460 return rcar_gen4_add_dw_pcie_rp(rcar); 480 461 case DW_PCIE_EP_TYPE: ··· 517 494 518 495 static void rcar_gen4_remove_dw_pcie(struct rcar_gen4_pcie *rcar) 519 496 { 520 - switch (rcar->mode) { 497 + switch (rcar->drvdata->mode) { 521 498 case DW_PCIE_RC_TYPE: 522 499 rcar_gen4_remove_dw_pcie_rp(rcar); 523 500 break; ··· 537 514 rcar_gen4_pcie_unprepare(rcar); 538 515 } 539 516 517 + static int r8a779f0_pcie_ltssm_control(struct rcar_gen4_pcie *rcar, bool enable) 518 + { 519 + u32 val; 520 + 521 + val = readl(rcar->base + PCIERSTCTRL1); 522 + if (enable) { 523 + val |= APP_LTSSM_ENABLE; 524 + val &= ~APP_HOLD_PHY_RST; 525 + } else { 526 + /* 527 + * Since the datasheet of R-Car doesn't mention how to assert 528 + * the APP_HOLD_PHY_RST, don't assert it again. Otherwise, 529 + * hang-up issue happened in the dw_edma_core_off() when 530 + * the controller didn't detect a PCI device. 
531 + */ 532 + val &= ~APP_LTSSM_ENABLE; 533 + } 534 + writel(val, rcar->base + PCIERSTCTRL1); 535 + 536 + return 0; 537 + } 538 + 539 + static void rcar_gen4_pcie_additional_common_init(struct rcar_gen4_pcie *rcar) 540 + { 541 + struct dw_pcie *dw = &rcar->dw; 542 + u32 val; 543 + 544 + val = dw_pcie_readl_dbi(dw, PCIE_PORT_LANE_SKEW); 545 + val &= ~PORT_LANE_SKEW_INSERT_MASK; 546 + if (dw->num_lanes < 4) 547 + val |= BIT(6); 548 + dw_pcie_writel_dbi(dw, PCIE_PORT_LANE_SKEW, val); 549 + 550 + val = readl(rcar->base + PCIEPWRMNGCTRL); 551 + val |= APP_CLK_REQ_N | APP_CLK_PM_EN; 552 + writel(val, rcar->base + PCIEPWRMNGCTRL); 553 + } 554 + 555 + static void rcar_gen4_pcie_phy_reg_update_bits(struct rcar_gen4_pcie *rcar, 556 + u32 offset, u32 mask, u32 val) 557 + { 558 + u32 tmp; 559 + 560 + tmp = readl(rcar->phy_base + offset); 561 + tmp &= ~mask; 562 + tmp |= val; 563 + writel(tmp, rcar->phy_base + offset); 564 + } 565 + 566 + /* 567 + * SoC datasheet suggests checking port logic register bits during firmware 568 + * write. If read returns non-zero value, then this function returns -EAGAIN 569 + * indicating that the write needs to be done again. If read returns zero, 570 + * then return 0 to indicate success. 
571 + */ 572 + static int rcar_gen4_pcie_reg_test_bit(struct rcar_gen4_pcie *rcar, 573 + u32 offset, u32 mask) 574 + { 575 + struct dw_pcie *dw = &rcar->dw; 576 + 577 + if (dw_pcie_readl_dbi(dw, offset) & mask) 578 + return -EAGAIN; 579 + 580 + return 0; 581 + } 582 + 583 + static int rcar_gen4_pcie_download_phy_firmware(struct rcar_gen4_pcie *rcar) 584 + { 585 + /* The check_addr values are magical numbers in the datasheet */ 586 + const u32 check_addr[] = { 0x00101018, 0x00101118, 0x00101021, 0x00101121}; 587 + struct dw_pcie *dw = &rcar->dw; 588 + const struct firmware *fw; 589 + unsigned int i, timeout; 590 + u32 data; 591 + int ret; 592 + 593 + ret = request_firmware(&fw, RCAR_GEN4_PCIE_FIRMWARE_NAME, dw->dev); 594 + if (ret) { 595 + dev_err(dw->dev, "Failed to load firmware (%s): %d\n", 596 + RCAR_GEN4_PCIE_FIRMWARE_NAME, ret); 597 + return ret; 598 + } 599 + 600 + for (i = 0; i < (fw->size / 2); i++) { 601 + data = fw->data[(i * 2) + 1] << 8 | fw->data[i * 2]; 602 + timeout = 100; 603 + do { 604 + dw_pcie_writel_dbi(dw, PRTLGC89, RCAR_GEN4_PCIE_FIRMWARE_BASE_ADDR + i); 605 + dw_pcie_writel_dbi(dw, PRTLGC90, data); 606 + if (!rcar_gen4_pcie_reg_test_bit(rcar, PRTLGC89, BIT(30))) 607 + break; 608 + if (!(--timeout)) { 609 + ret = -ETIMEDOUT; 610 + goto exit; 611 + } 612 + usleep_range(100, 200); 613 + } while (1); 614 + } 615 + 616 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x0f8, BIT(17), BIT(17)); 617 + 618 + for (i = 0; i < ARRAY_SIZE(check_addr); i++) { 619 + timeout = 100; 620 + do { 621 + dw_pcie_writel_dbi(dw, PRTLGC89, check_addr[i]); 622 + ret = rcar_gen4_pcie_reg_test_bit(rcar, PRTLGC89, BIT(30)); 623 + ret |= rcar_gen4_pcie_reg_test_bit(rcar, PRTLGC90, BIT(0)); 624 + if (!ret) 625 + break; 626 + if (!(--timeout)) { 627 + ret = -ETIMEDOUT; 628 + goto exit; 629 + } 630 + usleep_range(100, 200); 631 + } while (1); 632 + } 633 + 634 + exit: 635 + release_firmware(fw); 636 + 637 + return ret; 638 + } 639 + 640 + static int 
rcar_gen4_pcie_ltssm_control(struct rcar_gen4_pcie *rcar, bool enable) 641 + { 642 + struct dw_pcie *dw = &rcar->dw; 643 + u32 val; 644 + int ret; 645 + 646 + if (!enable) { 647 + val = readl(rcar->base + PCIERSTCTRL1); 648 + val &= ~APP_LTSSM_ENABLE; 649 + writel(val, rcar->base + PCIERSTCTRL1); 650 + 651 + return 0; 652 + } 653 + 654 + val = dw_pcie_readl_dbi(dw, PCIE_PORT_FORCE); 655 + val |= PORT_FORCE_DO_DESKEW_FOR_SRIS; 656 + dw_pcie_writel_dbi(dw, PCIE_PORT_FORCE, val); 657 + 658 + val = readl(rcar->base + PCIEMSR0); 659 + val |= APP_SRIS_MODE; 660 + writel(val, rcar->base + PCIEMSR0); 661 + 662 + /* 663 + * The R-Car Gen4 datasheet doesn't describe the PHY registers' name. 664 + * But, the initialization procedure describes these offsets. So, 665 + * this driver has magical offset numbers. 666 + */ 667 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x700, BIT(28), 0); 668 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x700, BIT(20), 0); 669 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x700, BIT(12), 0); 670 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x700, BIT(4), 0); 671 + 672 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(23, 22), BIT(22)); 673 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(18, 16), GENMASK(17, 16)); 674 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(7, 6), BIT(6)); 675 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x148, GENMASK(2, 0), GENMASK(11, 0)); 676 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x1d4, GENMASK(16, 15), GENMASK(16, 15)); 677 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x514, BIT(26), BIT(26)); 678 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x0f8, BIT(16), 0); 679 + rcar_gen4_pcie_phy_reg_update_bits(rcar, 0x0f8, BIT(19), BIT(19)); 680 + 681 + val = readl(rcar->base + PCIERSTCTRL1); 682 + val &= ~APP_HOLD_PHY_RST; 683 + writel(val, rcar->base + PCIERSTCTRL1); 684 + 685 + ret = readl_poll_timeout(rcar->phy_base + 0x0f8, val, !(val & BIT(18)), 100, 10000); 686 + if (ret < 0) 687 + return ret; 688 + 
689 + ret = rcar_gen4_pcie_download_phy_firmware(rcar); 690 + if (ret) 691 + return ret; 692 + 693 + val = readl(rcar->base + PCIERSTCTRL1); 694 + val |= APP_LTSSM_ENABLE; 695 + writel(val, rcar->base + PCIERSTCTRL1); 696 + 697 + return 0; 698 + } 699 + 700 + static struct rcar_gen4_pcie_drvdata drvdata_r8a779f0_pcie = { 701 + .ltssm_control = r8a779f0_pcie_ltssm_control, 702 + .mode = DW_PCIE_RC_TYPE, 703 + }; 704 + 705 + static struct rcar_gen4_pcie_drvdata drvdata_r8a779f0_pcie_ep = { 706 + .ltssm_control = r8a779f0_pcie_ltssm_control, 707 + .mode = DW_PCIE_EP_TYPE, 708 + }; 709 + 710 + static struct rcar_gen4_pcie_drvdata drvdata_rcar_gen4_pcie = { 711 + .additional_common_init = rcar_gen4_pcie_additional_common_init, 712 + .ltssm_control = rcar_gen4_pcie_ltssm_control, 713 + .mode = DW_PCIE_RC_TYPE, 714 + }; 715 + 716 + static struct rcar_gen4_pcie_drvdata drvdata_rcar_gen4_pcie_ep = { 717 + .additional_common_init = rcar_gen4_pcie_additional_common_init, 718 + .ltssm_control = rcar_gen4_pcie_ltssm_control, 719 + .mode = DW_PCIE_EP_TYPE, 720 + }; 721 + 540 722 static const struct of_device_id rcar_gen4_pcie_of_match[] = { 541 723 { 724 + .compatible = "renesas,r8a779f0-pcie", 725 + .data = &drvdata_r8a779f0_pcie, 726 + }, 727 + { 728 + .compatible = "renesas,r8a779f0-pcie-ep", 729 + .data = &drvdata_r8a779f0_pcie_ep, 730 + }, 731 + { 542 732 .compatible = "renesas,rcar-gen4-pcie", 543 - .data = (void *)DW_PCIE_RC_TYPE, 733 + .data = &drvdata_rcar_gen4_pcie, 544 734 }, 545 735 { 546 736 .compatible = "renesas,rcar-gen4-pcie-ep", 547 - .data = (void *)DW_PCIE_EP_TYPE, 737 + .data = &drvdata_rcar_gen4_pcie_ep, 548 738 }, 549 739 {}, 550 740 };
+3 -7
drivers/pci/controller/dwc/pcie-tegra194.c
··· 13 13 #include <linux/clk.h> 14 14 #include <linux/debugfs.h> 15 15 #include <linux/delay.h> 16 - #include <linux/gpio.h> 17 16 #include <linux/gpio/consumer.h> 18 17 #include <linux/interconnect.h> 19 18 #include <linux/interrupt.h> ··· 20 21 #include <linux/kernel.h> 21 22 #include <linux/module.h> 22 23 #include <linux/of.h> 23 - #include <linux/of_gpio.h> 24 24 #include <linux/of_pci.h> 25 25 #include <linux/pci.h> 26 26 #include <linux/phy/phy.h> ··· 305 307 { 306 308 return readl_relaxed(pcie->appl_base + reg); 307 309 } 308 - 309 - struct tegra_pcie_soc { 310 - enum dw_pcie_device_mode mode; 311 - }; 312 310 313 311 static void tegra_pcie_icc_set(struct tegra_pcie_dw *pcie) 314 312 { ··· 1709 1715 if (ret) 1710 1716 dev_err(pcie->dev, "Failed to go Detect state: %d\n", ret); 1711 1717 1718 + pci_epc_deinit_notify(pcie->pci.ep.epc); 1712 1719 dw_pcie_ep_cleanup(&pcie->pci.ep); 1713 1720 1714 1721 reset_control_assert(pcie->core_rst); ··· 1898 1903 goto fail_init_complete; 1899 1904 } 1900 1905 1901 - dw_pcie_ep_init_notify(ep); 1906 + pci_epc_init_notify(ep->epc); 1902 1907 1903 1908 /* Program the private control to allow sending LTR upstream */ 1904 1909 if (pcie->of_data->has_ltr_req_fix) { ··· 2010 2015 .bar[BAR_3] = { .type = BAR_RESERVED, }, 2011 2016 .bar[BAR_4] = { .type = BAR_RESERVED, }, 2012 2017 .bar[BAR_5] = { .type = BAR_RESERVED, }, 2018 + .align = SZ_64K, 2013 2019 }; 2014 2020 2015 2021 static const struct pci_epc_features*
+1 -1
drivers/pci/controller/dwc/pcie-uniphier-ep.c
··· 410 410 return ret; 411 411 } 412 412 413 - dw_pcie_ep_init_notify(&priv->pci.ep); 413 + pci_epc_init_notify(priv->pci.ep.epc); 414 414 415 415 return 0; 416 416 }
+1 -1
drivers/pci/controller/mobiveil/pcie-layerscape-gen4.c
··· 190 190 ls_g4_pcie_enable_interrupt(pcie); 191 191 } 192 192 193 - static struct mobiveil_rp_ops ls_g4_pcie_rp_ops = { 193 + static const struct mobiveil_rp_ops ls_g4_pcie_rp_ops = { 194 194 .interrupt_init = ls_g4_pcie_interrupt_init, 195 195 }; 196 196
+1 -1
drivers/pci/controller/mobiveil/pcie-mobiveil.h
··· 151 151 struct mobiveil_root_port { 152 152 void __iomem *config_axi_slave_base; /* endpoint config base */ 153 153 struct resource *ob_io_res; 154 - struct mobiveil_rp_ops *ops; 154 + const struct mobiveil_rp_ops *ops; 155 155 int irq; 156 156 raw_spinlock_t intx_mask_lock; 157 157 struct irq_domain *intx_domain;
-1
drivers/pci/controller/pci-aardvark.c
··· 23 23 #include <linux/platform_device.h> 24 24 #include <linux/msi.h> 25 25 #include <linux/of_address.h> 26 - #include <linux/of_gpio.h> 27 26 #include <linux/of_pci.h> 28 27 29 28 #include "../pci.h"
+1 -4
drivers/pci/controller/pci-host-common.c
··· 73 73 if (IS_ERR(cfg)) 74 74 return PTR_ERR(cfg); 75 75 76 - /* Do not reassign resources if probe only */ 77 - if (!pci_has_flag(PCI_PROBE_ONLY)) 78 - pci_add_flags(PCI_REASSIGN_ALL_BUS); 79 - 80 76 bridge->sysdata = cfg; 81 77 bridge->ops = (struct pci_ops *)&ops->pci_ops; 82 78 bridge->msi_domain = true; ··· 92 96 } 93 97 EXPORT_SYMBOL_GPL(pci_host_common_remove); 94 98 99 + MODULE_DESCRIPTION("Generic PCI host common driver"); 95 100 MODULE_LICENSE("GPL v2");
+1
drivers/pci/controller/pci-host-generic.c
··· 86 86 }; 87 87 module_platform_driver(gen_pci_driver); 88 88 89 + MODULE_DESCRIPTION("Generic PCI host controller driver"); 89 90 MODULE_LICENSE("GPL v2");
+2 -2
drivers/pci/controller/pci-hyperv.c
··· 1130 1130 PCI_CAPABILITY_LIST) { 1131 1131 /* ROM BARs are unimplemented */ 1132 1132 *val = 0; 1133 - } else if (where >= PCI_INTERRUPT_LINE && where + size <= 1134 - PCI_INTERRUPT_PIN) { 1133 + } else if ((where >= PCI_INTERRUPT_LINE && where + size <= PCI_INTERRUPT_PIN) || 1134 + (where >= PCI_INTERRUPT_PIN && where + size <= PCI_MIN_GNT)) { 1135 1135 /* 1136 1136 * Interrupt Line and Interrupt PIN are hard-wired to zero 1137 1137 * because this front-end only supports message-signaled
+13
drivers/pci/controller/pci-loongson.c
··· 163 163 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, 164 164 DEV_LS7A_HDMI, loongson_pci_pin_quirk); 165 165 166 + static void loongson_pci_msi_quirk(struct pci_dev *dev) 167 + { 168 + u16 val, class = dev->class >> 8; 169 + 170 + if (class != PCI_CLASS_BRIDGE_HOST) 171 + return; 172 + 173 + pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &val); 174 + val |= PCI_MSI_FLAGS_ENABLE; 175 + pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, val); 176 + } 177 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, DEV_LS7A_PCIE_PORT5, loongson_pci_msi_quirk); 178 + 166 179 static struct loongson_pci *pci_bus_to_loongson_pci(struct pci_bus *bus) 167 180 { 168 181 struct pci_config_window *cfg;
+1
drivers/pci/controller/pcie-altera-msi.c
···
 subsys_initcall(altera_msi_init);
 MODULE_DEVICE_TABLE(of, altera_msi_of_match);
 module_exit(altera_msi_exit);
+MODULE_DESCRIPTION("Altera PCIe MSI support driver");
 MODULE_LICENSE("GPL v2");
+1
drivers/pci/controller/pcie-altera.c
···
 
 MODULE_DEVICE_TABLE(of, altera_pcie_of_match);
 module_platform_driver(altera_pcie_driver);
+MODULE_DESCRIPTION("Altera PCIe host controller driver");
 MODULE_LICENSE("GPL v2");
+1
drivers/pci/controller/pcie-apple.c
···
 };
 module_platform_driver(apple_pcie_driver);
 
+MODULE_DESCRIPTION("Apple PCIe host bridge driver");
 MODULE_LICENSE("GPL v2");
+1
drivers/pci/controller/pcie-mediatek-gen3.c
···
 };
 
 module_platform_driver(mtk_pcie_driver);
+MODULE_DESCRIPTION("MediaTek Gen3 PCIe host controller driver");
 MODULE_LICENSE("GPL v2");
+1
drivers/pci/controller/pcie-mediatek.c
···
 	},
 };
 module_platform_driver(mtk_pcie_driver);
+MODULE_DESCRIPTION("MediaTek PCIe host controller driver");
 MODULE_LICENSE("GPL v2");
+64 -551
drivers/pci/controller/pcie-microchip-host.c → drivers/pci/controller/plda/pcie-microchip-host.c (renamed)
···
 #include <linux/pci-ecam.h>
 #include <linux/platform_device.h>
 
-#include "../pci.h"
-
-/* Number of MSI IRQs */
-#define MC_MAX_NUM_MSI_IRQS		32
+#include "../../pci.h"
+#include "pcie-plda.h"
 
 /* PCIe Bridge Phy and Controller Phy offsets */
 #define MC_PCIE1_BRIDGE_ADDR		0x00008000u
···
 
 #define MC_PCIE_BRIDGE_ADDR		(MC_PCIE1_BRIDGE_ADDR)
 #define MC_PCIE_CTRL_ADDR		(MC_PCIE1_CTRL_ADDR)
-
-/* PCIe Bridge Phy Regs */
-#define PCIE_PCI_IRQ_DW0		0xa8
-#define  MSIX_CAP_MASK			BIT(31)
-#define  NUM_MSI_MSGS_MASK		GENMASK(6, 4)
-#define  NUM_MSI_MSGS_SHIFT		4
-
-#define IMASK_LOCAL			0x180
-#define  DMA_END_ENGINE_0_MASK		0x00000000u
-#define  DMA_END_ENGINE_0_SHIFT		0
-#define  DMA_END_ENGINE_1_MASK		0x00000000u
-#define  DMA_END_ENGINE_1_SHIFT		1
-#define  DMA_ERROR_ENGINE_0_MASK	0x00000100u
-#define  DMA_ERROR_ENGINE_0_SHIFT	8
-#define  DMA_ERROR_ENGINE_1_MASK	0x00000200u
-#define  DMA_ERROR_ENGINE_1_SHIFT	9
-#define  A_ATR_EVT_POST_ERR_MASK	0x00010000u
-#define  A_ATR_EVT_POST_ERR_SHIFT	16
-#define  A_ATR_EVT_FETCH_ERR_MASK	0x00020000u
-#define  A_ATR_EVT_FETCH_ERR_SHIFT	17
-#define  A_ATR_EVT_DISCARD_ERR_MASK	0x00040000u
-#define  A_ATR_EVT_DISCARD_ERR_SHIFT	18
-#define  A_ATR_EVT_DOORBELL_MASK	0x00000000u
-#define  A_ATR_EVT_DOORBELL_SHIFT	19
-#define  P_ATR_EVT_POST_ERR_MASK	0x00100000u
-#define  P_ATR_EVT_POST_ERR_SHIFT	20
-#define  P_ATR_EVT_FETCH_ERR_MASK	0x00200000u
-#define  P_ATR_EVT_FETCH_ERR_SHIFT	21
-#define  P_ATR_EVT_DISCARD_ERR_MASK	0x00400000u
-#define  P_ATR_EVT_DISCARD_ERR_SHIFT	22
-#define  P_ATR_EVT_DOORBELL_MASK	0x00000000u
-#define  P_ATR_EVT_DOORBELL_SHIFT	23
-#define  PM_MSI_INT_INTA_MASK		0x01000000u
-#define  PM_MSI_INT_INTA_SHIFT		24
-#define  PM_MSI_INT_INTB_MASK		0x02000000u
-#define  PM_MSI_INT_INTB_SHIFT		25
-#define  PM_MSI_INT_INTC_MASK		0x04000000u
-#define  PM_MSI_INT_INTC_SHIFT		26
-#define  PM_MSI_INT_INTD_MASK		0x08000000u
-#define  PM_MSI_INT_INTD_SHIFT		27
-#define  PM_MSI_INT_INTX_MASK		0x0f000000u
-#define  PM_MSI_INT_INTX_SHIFT		24
-#define  PM_MSI_INT_MSI_MASK		0x10000000u
-#define  PM_MSI_INT_MSI_SHIFT		28
-#define  PM_MSI_INT_AER_EVT_MASK	0x20000000u
-#define  PM_MSI_INT_AER_EVT_SHIFT	29
-#define  PM_MSI_INT_EVENTS_MASK		0x40000000u
-#define  PM_MSI_INT_EVENTS_SHIFT	30
-#define  PM_MSI_INT_SYS_ERR_MASK	0x80000000u
-#define  PM_MSI_INT_SYS_ERR_SHIFT	31
-#define  NUM_LOCAL_EVENTS		15
-#define ISTATUS_LOCAL			0x184
-#define IMASK_HOST			0x188
-#define ISTATUS_HOST			0x18c
-#define IMSI_ADDR			0x190
-#define ISTATUS_MSI			0x194
-
-/* PCIe Master table init defines */
-#define ATR0_PCIE_WIN0_SRCADDR_PARAM	0x600u
-#define  ATR0_PCIE_ATR_SIZE		0x25
-#define  ATR0_PCIE_ATR_SIZE_SHIFT	1
-#define ATR0_PCIE_WIN0_SRC_ADDR		0x604u
-#define ATR0_PCIE_WIN0_TRSL_ADDR_LSB	0x608u
-#define ATR0_PCIE_WIN0_TRSL_ADDR_UDW	0x60cu
-#define ATR0_PCIE_WIN0_TRSL_PARAM	0x610u
-
-/* PCIe AXI slave table init defines */
-#define ATR0_AXI4_SLV0_SRCADDR_PARAM	0x800u
-#define  ATR_SIZE_SHIFT			1
-#define  ATR_IMPL_ENABLE		1
-#define ATR0_AXI4_SLV0_SRC_ADDR		0x804u
-#define ATR0_AXI4_SLV0_TRSL_ADDR_LSB	0x808u
-#define ATR0_AXI4_SLV0_TRSL_ADDR_UDW	0x80cu
-#define ATR0_AXI4_SLV0_TRSL_PARAM	0x810u
-#define  PCIE_TX_RX_INTERFACE		0x00000000u
-#define  PCIE_CONFIG_INTERFACE		0x00000001u
-
-#define ATR_ENTRY_SIZE			32
 
 /* PCIe Controller Phy Regs */
 #define SEC_ERROR_EVENT_CNT		0x20
···
 #define EVENT_LOCAL_DMA_END_ENGINE_1	12
 #define EVENT_LOCAL_DMA_ERROR_ENGINE_0	13
 #define EVENT_LOCAL_DMA_ERROR_ENGINE_1	14
-#define EVENT_LOCAL_A_ATR_EVT_POST_ERR	15
-#define EVENT_LOCAL_A_ATR_EVT_FETCH_ERR	16
-#define EVENT_LOCAL_A_ATR_EVT_DISCARD_ERR	17
-#define EVENT_LOCAL_A_ATR_EVT_DOORBELL	18
-#define EVENT_LOCAL_P_ATR_EVT_POST_ERR	19
-#define EVENT_LOCAL_P_ATR_EVT_FETCH_ERR	20
-#define EVENT_LOCAL_P_ATR_EVT_DISCARD_ERR	21
-#define EVENT_LOCAL_P_ATR_EVT_DOORBELL	22
-#define EVENT_LOCAL_PM_MSI_INT_INTX	23
-#define EVENT_LOCAL_PM_MSI_INT_MSI	24
-#define EVENT_LOCAL_PM_MSI_INT_AER_EVT	25
-#define EVENT_LOCAL_PM_MSI_INT_EVENTS	26
-#define EVENT_LOCAL_PM_MSI_INT_SYS_ERR	27
-#define NUM_EVENTS			28
+#define NUM_MC_EVENTS			15
+#define EVENT_LOCAL_A_ATR_EVT_POST_ERR	(NUM_MC_EVENTS + PLDA_AXI_POST_ERR)
+#define EVENT_LOCAL_A_ATR_EVT_FETCH_ERR	(NUM_MC_EVENTS + PLDA_AXI_FETCH_ERR)
+#define EVENT_LOCAL_A_ATR_EVT_DISCARD_ERR	(NUM_MC_EVENTS + PLDA_AXI_DISCARD_ERR)
+#define EVENT_LOCAL_A_ATR_EVT_DOORBELL	(NUM_MC_EVENTS + PLDA_AXI_DOORBELL)
+#define EVENT_LOCAL_P_ATR_EVT_POST_ERR	(NUM_MC_EVENTS + PLDA_PCIE_POST_ERR)
+#define EVENT_LOCAL_P_ATR_EVT_FETCH_ERR	(NUM_MC_EVENTS + PLDA_PCIE_FETCH_ERR)
+#define EVENT_LOCAL_P_ATR_EVT_DISCARD_ERR	(NUM_MC_EVENTS + PLDA_PCIE_DISCARD_ERR)
+#define EVENT_LOCAL_P_ATR_EVT_DOORBELL	(NUM_MC_EVENTS + PLDA_PCIE_DOORBELL)
+#define EVENT_LOCAL_PM_MSI_INT_INTX	(NUM_MC_EVENTS + PLDA_INTX)
+#define EVENT_LOCAL_PM_MSI_INT_MSI	(NUM_MC_EVENTS + PLDA_MSI)
+#define EVENT_LOCAL_PM_MSI_INT_AER_EVT	(NUM_MC_EVENTS + PLDA_AER_EVENT)
+#define EVENT_LOCAL_PM_MSI_INT_EVENTS	(NUM_MC_EVENTS + PLDA_MISC_EVENTS)
+#define EVENT_LOCAL_PM_MSI_INT_SYS_ERR	(NUM_MC_EVENTS + PLDA_SYS_ERR)
+#define NUM_EVENTS			(NUM_MC_EVENTS + PLDA_INT_EVENT_NUM)
 
 #define PCIE_EVENT_CAUSE(x, s)	\
 	[EVENT_PCIE_ ## x] = { __stringify(x), s }
···
 	u32 event_bit;
 };
 
-struct mc_msi {
-	struct mutex lock;		/* Protect used bitmap */
-	struct irq_domain *msi_domain;
-	struct irq_domain *dev_domain;
-	u32 num_vectors;
-	u64 vector_phy;
-	DECLARE_BITMAP(used, MC_MAX_NUM_MSI_IRQS);
-};
 
 struct mc_pcie {
+	struct plda_pcie_rp plda;
 	void __iomem *axi_base_addr;
-	struct device *dev;
-	struct irq_domain *intx_domain;
-	struct irq_domain *event_domain;
-	raw_spinlock_t lock;
-	struct mc_msi msi;
 };
 
 struct cause {
···
 static void mc_pcie_enable_msi(struct mc_pcie *port, void __iomem *ecam)
 {
-	struct mc_msi *msi = &port->msi;
+	struct plda_msi *msi = &port->plda.msi;
 	u16 reg;
 	u8 queue_size;
···
 	writel_relaxed(upper_32_bits(msi->vector_phy),
 		       ecam + MC_MSI_CAP_CTRL_OFFSET + PCI_MSI_ADDRESS_HI);
 }
-
-static void mc_handle_msi(struct irq_desc *desc)
-{
-	struct mc_pcie *port = irq_desc_get_handler_data(desc);
-	struct irq_chip *chip = irq_desc_get_chip(desc);
-	struct device *dev = port->dev;
-	struct mc_msi *msi = &port->msi;
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	unsigned long status;
-	u32 bit;
-	int ret;
-
-	chained_irq_enter(chip, desc);
-
-	status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
-	if (status & PM_MSI_INT_MSI_MASK) {
-		writel_relaxed(status & PM_MSI_INT_MSI_MASK, bridge_base_addr + ISTATUS_LOCAL);
-		status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
-		for_each_set_bit(bit, &status, msi->num_vectors) {
-			ret = generic_handle_domain_irq(msi->dev_domain, bit);
-			if (ret)
-				dev_err_ratelimited(dev, "bad MSI IRQ %d\n",
-						    bit);
-		}
-	}
-
-	chained_irq_exit(chip, desc);
-}
-
-static void mc_msi_bottom_irq_ack(struct irq_data *data)
-{
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	u32 bitpos = data->hwirq;
-
-	writel_relaxed(BIT(bitpos), bridge_base_addr + ISTATUS_MSI);
-}
-
-static void mc_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
-{
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
-	phys_addr_t addr = port->msi.vector_phy;
-
-	msg->address_lo = lower_32_bits(addr);
-	msg->address_hi = upper_32_bits(addr);
-	msg->data = data->hwirq;
-
-	dev_dbg(port->dev, "msi#%x address_hi %#x address_lo %#x\n",
-		(int)data->hwirq, msg->address_hi, msg->address_lo);
-}
-
-static int mc_msi_set_affinity(struct irq_data *irq_data,
-			       const struct cpumask *mask, bool force)
-{
-	return -EINVAL;
-}
-
-static struct irq_chip mc_msi_bottom_irq_chip = {
-	.name = "Microchip MSI",
-	.irq_ack = mc_msi_bottom_irq_ack,
-	.irq_compose_msi_msg = mc_compose_msi_msg,
-	.irq_set_affinity = mc_msi_set_affinity,
-};
-
-static int mc_irq_msi_domain_alloc(struct irq_domain *domain, unsigned int virq,
-				   unsigned int nr_irqs, void *args)
-{
-	struct mc_pcie *port = domain->host_data;
-	struct mc_msi *msi = &port->msi;
-	unsigned long bit;
-
-	mutex_lock(&msi->lock);
-	bit = find_first_zero_bit(msi->used, msi->num_vectors);
-	if (bit >= msi->num_vectors) {
-		mutex_unlock(&msi->lock);
-		return -ENOSPC;
-	}
-
-	set_bit(bit, msi->used);
-
-	irq_domain_set_info(domain, virq, bit, &mc_msi_bottom_irq_chip,
-			    domain->host_data, handle_edge_irq, NULL, NULL);
-
-	mutex_unlock(&msi->lock);
-
-	return 0;
-}
-
-static void mc_irq_msi_domain_free(struct irq_domain *domain, unsigned int virq,
-				   unsigned int nr_irqs)
-{
-	struct irq_data *d = irq_domain_get_irq_data(domain, virq);
-	struct mc_pcie *port = irq_data_get_irq_chip_data(d);
-	struct mc_msi *msi = &port->msi;
-
-	mutex_lock(&msi->lock);
-
-	if (test_bit(d->hwirq, msi->used))
-		__clear_bit(d->hwirq, msi->used);
-	else
-		dev_err(port->dev, "trying to free unused MSI%lu\n", d->hwirq);
-
-	mutex_unlock(&msi->lock);
-}
-
-static const struct irq_domain_ops msi_domain_ops = {
-	.alloc	= mc_irq_msi_domain_alloc,
-	.free	= mc_irq_msi_domain_free,
-};
-
-static struct irq_chip mc_msi_irq_chip = {
-	.name = "Microchip PCIe MSI",
-	.irq_ack = irq_chip_ack_parent,
-	.irq_mask = pci_msi_mask_irq,
-	.irq_unmask = pci_msi_unmask_irq,
-};
-
-static struct msi_domain_info mc_msi_domain_info = {
-	.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
-		  MSI_FLAG_PCI_MSIX),
-	.chip = &mc_msi_irq_chip,
-};
-
-static int mc_allocate_msi_domains(struct mc_pcie *port)
-{
-	struct device *dev = port->dev;
-	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
-	struct mc_msi *msi = &port->msi;
-
-	mutex_init(&port->msi.lock);
-
-	msi->dev_domain = irq_domain_add_linear(NULL, msi->num_vectors,
-						&msi_domain_ops, port);
-	if (!msi->dev_domain) {
-		dev_err(dev, "failed to create IRQ domain\n");
-		return -ENOMEM;
-	}
-
-	msi->msi_domain = pci_msi_create_irq_domain(fwnode, &mc_msi_domain_info,
-						    msi->dev_domain);
-	if (!msi->msi_domain) {
-		dev_err(dev, "failed to create MSI domain\n");
-		irq_domain_remove(msi->dev_domain);
-		return -ENOMEM;
-	}
-
-	return 0;
-}
-
-static void mc_handle_intx(struct irq_desc *desc)
-{
-	struct mc_pcie *port = irq_desc_get_handler_data(desc);
-	struct irq_chip *chip = irq_desc_get_chip(desc);
-	struct device *dev = port->dev;
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	unsigned long status;
-	u32 bit;
-	int ret;
-
-	chained_irq_enter(chip, desc);
-
-	status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
-	if (status & PM_MSI_INT_INTX_MASK) {
-		status &= PM_MSI_INT_INTX_MASK;
-		status >>= PM_MSI_INT_INTX_SHIFT;
-		for_each_set_bit(bit, &status, PCI_NUM_INTX) {
-			ret = generic_handle_domain_irq(port->intx_domain, bit);
-			if (ret)
-				dev_err_ratelimited(dev, "bad INTx IRQ %d\n",
-						    bit);
-		}
-	}
-
-	chained_irq_exit(chip, desc);
-}
-
-static void mc_ack_intx_irq(struct irq_data *data)
-{
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
-
-	writel_relaxed(mask, bridge_base_addr + ISTATUS_LOCAL);
-}
-
-static void mc_mask_intx_irq(struct irq_data *data)
-{
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	unsigned long flags;
-	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
-	u32 val;
-
-	raw_spin_lock_irqsave(&port->lock, flags);
-	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
-	val &= ~mask;
-	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
-	raw_spin_unlock_irqrestore(&port->lock, flags);
-}
-
-static void mc_unmask_intx_irq(struct irq_data *data)
-{
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	unsigned long flags;
-	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
-	u32 val;
-
-	raw_spin_lock_irqsave(&port->lock, flags);
-	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
-	val |= mask;
-	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
-	raw_spin_unlock_irqrestore(&port->lock, flags);
-}
-
-static struct irq_chip mc_intx_irq_chip = {
-	.name = "Microchip PCIe INTx",
-	.irq_ack = mc_ack_intx_irq,
-	.irq_mask = mc_mask_intx_irq,
-	.irq_unmask = mc_unmask_intx_irq,
-};
-
-static int mc_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
-			    irq_hw_number_t hwirq)
-{
-	irq_set_chip_and_handler(irq, &mc_intx_irq_chip, handle_level_irq);
-	irq_set_chip_data(irq, domain->host_data);
-
-	return 0;
-}
-
-static const struct irq_domain_ops intx_domain_ops = {
-	.map = mc_pcie_intx_map,
-};
 
 static inline u32 reg_to_event(u32 reg, struct event_map field)
 {
···
 	return val;
 }
 
-static u32 get_events(struct mc_pcie *port)
+static u32 mc_get_events(struct plda_pcie_rp *port)
 {
+	struct mc_pcie *mc_port = container_of(port, struct mc_pcie, plda);
 	u32 events = 0;
 
-	events |= pcie_events(port);
-	events |= sec_errors(port);
-	events |= ded_errors(port);
-	events |= local_events(port);
+	events |= pcie_events(mc_port);
+	events |= sec_errors(mc_port);
+	events |= ded_errors(mc_port);
+	events |= local_events(mc_port);
 
 	return events;
 }
 
 static irqreturn_t mc_event_handler(int irq, void *dev_id)
 {
-	struct mc_pcie *port = dev_id;
+	struct plda_pcie_rp *port = dev_id;
 	struct device *dev = port->dev;
 	struct irq_data *data;
 
···
 	return IRQ_HANDLED;
 }
 
-static void mc_handle_event(struct irq_desc *desc)
-{
-	struct mc_pcie *port = irq_desc_get_handler_data(desc);
-	unsigned long events;
-	u32 bit;
-	struct irq_chip *chip = irq_desc_get_chip(desc);
-
-	chained_irq_enter(chip, desc);
-
-	events = get_events(port);
-
-	for_each_set_bit(bit, &events, NUM_EVENTS)
-		generic_handle_domain_irq(port->event_domain, bit);
-
-	chained_irq_exit(chip, desc);
-}
-
 static void mc_ack_event_irq(struct irq_data *data)
 {
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
+	struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
+	struct mc_pcie *mc_port = container_of(port, struct mc_pcie, plda);
 	u32 event = data->hwirq;
 	void __iomem *addr;
 	u32 mask;
 
-	addr = port->axi_base_addr + event_descs[event].base +
+	addr = mc_port->axi_base_addr + event_descs[event].base +
 		event_descs[event].offset;
 	mask = event_descs[event].mask;
 	mask |= event_descs[event].enb_mask;
···
 
 static void mc_mask_event_irq(struct irq_data *data)
 {
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
+	struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
+	struct mc_pcie *mc_port = container_of(port, struct mc_pcie, plda);
 	u32 event = data->hwirq;
 	void __iomem *addr;
 	u32 mask;
 	u32 val;
 
-	addr = port->axi_base_addr + event_descs[event].base +
+	addr = mc_port->axi_base_addr + event_descs[event].base +
 		event_descs[event].mask_offset;
 	mask = event_descs[event].mask;
 	if (event_descs[event].enb_mask) {
···
 
 static void mc_unmask_event_irq(struct irq_data *data)
 {
-	struct mc_pcie *port = irq_data_get_irq_chip_data(data);
+	struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
+	struct mc_pcie *mc_port = container_of(port, struct mc_pcie, plda);
 	u32 event = data->hwirq;
 	void __iomem *addr;
 	u32 mask;
 	u32 val;
 
-	addr = port->axi_base_addr + event_descs[event].base +
+	addr = mc_port->axi_base_addr + event_descs[event].base +
 		event_descs[event].mask_offset;
 	mask = event_descs[event].mask;
 
···
 	.irq_ack = mc_ack_event_irq,
 	.irq_mask = mc_mask_event_irq,
 	.irq_unmask = mc_unmask_event_irq,
-};
-
-static int mc_pcie_event_map(struct irq_domain *domain, unsigned int irq,
-			     irq_hw_number_t hwirq)
-{
-	irq_set_chip_and_handler(irq, &mc_event_irq_chip, handle_level_irq);
-	irq_set_chip_data(irq, domain->host_data);
-
-	return 0;
-}
-
-static const struct irq_domain_ops event_domain_ops = {
-	.map = mc_pcie_event_map,
 };
 
 static inline void mc_pcie_deinit_clk(void *data)
···
 	return 0;
 }
 
-static int mc_pcie_init_irq_domains(struct mc_pcie *port)
+static int mc_request_event_irq(struct plda_pcie_rp *plda, int event_irq,
+				int event)
 {
-	struct device *dev = port->dev;
-	struct device_node *node = dev->of_node;
-	struct device_node *pcie_intc_node;
-
-	/* Setup INTx */
-	pcie_intc_node = of_get_next_child(node, NULL);
-	if (!pcie_intc_node) {
-		dev_err(dev, "failed to find PCIe Intc node\n");
-		return -EINVAL;
-	}
-
-	port->event_domain = irq_domain_add_linear(pcie_intc_node, NUM_EVENTS,
-						   &event_domain_ops, port);
-	if (!port->event_domain) {
-		dev_err(dev, "failed to get event domain\n");
-		of_node_put(pcie_intc_node);
-		return -ENOMEM;
-	}
-
-	irq_domain_update_bus_token(port->event_domain, DOMAIN_BUS_NEXUS);
-
-	port->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
-						  &intx_domain_ops, port);
-	if (!port->intx_domain) {
-		dev_err(dev, "failed to get an INTx IRQ domain\n");
-		of_node_put(pcie_intc_node);
-		return -ENOMEM;
-	}
-
-	irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED);
-
-	of_node_put(pcie_intc_node);
-	raw_spin_lock_init(&port->lock);
-
-	return mc_allocate_msi_domains(port);
+	return devm_request_irq(plda->dev, event_irq, mc_event_handler,
+				0, event_cause[event].sym, plda);
 }
 
-static void mc_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
-				 phys_addr_t axi_addr, phys_addr_t pci_addr,
-				 size_t size)
-{
-	u32 atr_sz = ilog2(size) - 1;
-	u32 val;
+static const struct plda_event_ops mc_event_ops = {
+	.get_events = mc_get_events,
+};
 
-	if (index == 0)
-		val = PCIE_CONFIG_INTERFACE;
-	else
-		val = PCIE_TX_RX_INTERFACE;
-
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_TRSL_PARAM);
-
-	val = lower_32_bits(axi_addr) | (atr_sz << ATR_SIZE_SHIFT) |
-			    ATR_IMPL_ENABLE;
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_SRCADDR_PARAM);
-
-	val = upper_32_bits(axi_addr);
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_SRC_ADDR);
-
-	val = lower_32_bits(pci_addr);
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_TRSL_ADDR_LSB);
-
-	val = upper_32_bits(pci_addr);
-	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
-	       ATR0_AXI4_SLV0_TRSL_ADDR_UDW);
-
-	val = readl(bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
-	val |= (ATR0_PCIE_ATR_SIZE << ATR0_PCIE_ATR_SIZE_SHIFT);
-	writel(val, bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
-	writel(0, bridge_base_addr + ATR0_PCIE_WIN0_SRC_ADDR);
-}
-
-static int mc_pcie_setup_windows(struct platform_device *pdev,
-				 struct mc_pcie *port)
-{
-	void __iomem *bridge_base_addr =
-		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
-	struct pci_host_bridge *bridge = platform_get_drvdata(pdev);
-	struct resource_entry *entry;
-	u64 pci_addr;
-	u32 index = 1;
-
-	resource_list_for_each_entry(entry, &bridge->windows) {
-		if (resource_type(entry->res) == IORESOURCE_MEM) {
-			pci_addr = entry->res->start - entry->offset;
-			mc_pcie_setup_window(bridge_base_addr, index,
-					     entry->res->start, pci_addr,
-					     resource_size(entry->res));
-			index++;
-		}
-	}
-
-	return 0;
-}
+static const struct plda_event mc_event = {
+	.request_event_irq = mc_request_event_irq,
+	.intx_event = EVENT_LOCAL_PM_MSI_INT_INTX,
+	.msi_event = EVENT_LOCAL_PM_MSI_INT_MSI,
+};
 
 static inline void mc_clear_secs(struct mc_pcie *port)
 {
···
 	writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_HOST);
 }
 
-static int mc_init_interrupts(struct platform_device *pdev, struct mc_pcie *port)
-{
-	struct device *dev = &pdev->dev;
-	int irq;
-	int i, intx_irq, msi_irq, event_irq;
-	int ret;
-
-	ret = mc_pcie_init_irq_domains(port);
-	if (ret) {
-		dev_err(dev, "failed creating IRQ domains\n");
-		return ret;
-	}
-
-	irq = platform_get_irq(pdev, 0);
-	if (irq < 0)
-		return -ENODEV;
-
-	for (i = 0; i < NUM_EVENTS; i++) {
-		event_irq = irq_create_mapping(port->event_domain, i);
-		if (!event_irq) {
-			dev_err(dev, "failed to map hwirq %d\n", i);
-			return -ENXIO;
-		}
-
-		ret = devm_request_irq(dev, event_irq, mc_event_handler,
-				       0, event_cause[i].sym, port);
-		if (ret) {
-			dev_err(dev, "failed to request IRQ %d\n", event_irq);
-			return ret;
-		}
-	}
-
-	intx_irq = irq_create_mapping(port->event_domain,
-				      EVENT_LOCAL_PM_MSI_INT_INTX);
-	if (!intx_irq) {
-		dev_err(dev, "failed to map INTx interrupt\n");
-		return -ENXIO;
-	}
-
-	/* Plug the INTx chained handler */
-	irq_set_chained_handler_and_data(intx_irq, mc_handle_intx, port);
-
-	msi_irq = irq_create_mapping(port->event_domain,
-				     EVENT_LOCAL_PM_MSI_INT_MSI);
-	if (!msi_irq)
-		return -ENXIO;
-
-	/* Plug the MSI chained handler */
-	irq_set_chained_handler_and_data(msi_irq, mc_handle_msi, port);
-
-	/* Plug the main event chained handler */
-	irq_set_chained_handler_and_data(irq, mc_handle_event, port);
-
-	return 0;
-}
-
 static int mc_platform_init(struct pci_config_window *cfg)
 {
 	struct device *dev = cfg->parent;
 	struct platform_device *pdev = to_platform_device(dev);
+	struct pci_host_bridge *bridge = platform_get_drvdata(pdev);
 	void __iomem *bridge_base_addr =
 		port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
 	int ret;
 
 	/* Configure address translation table 0 for PCIe config space */
-	mc_pcie_setup_window(bridge_base_addr, 0, cfg->res.start,
-			     cfg->res.start,
-			     resource_size(&cfg->res));
+	plda_pcie_setup_window(bridge_base_addr, 0, cfg->res.start,
+			       cfg->res.start,
+			       resource_size(&cfg->res));
 
 	/* Need some fixups in config space */
 	mc_pcie_enable_msi(port, cfg->win);
 
 	/* Configure non-config space outbound ranges */
-	ret = mc_pcie_setup_windows(pdev, port);
+	ret = plda_pcie_setup_iomems(bridge, &port->plda);
 	if (ret)
 		return ret;
 
+	port->plda.event_ops = &mc_event_ops;
+	port->plda.event_irq_chip = &mc_event_irq_chip;
+	port->plda.events_bitmap = GENMASK(NUM_EVENTS - 1, 0);
+
 	/* Address translation is up; safe to enable interrupts */
-	ret = mc_init_interrupts(pdev, port);
+	ret = plda_init_interrupts(pdev, &port->plda, &mc_event);
 	if (ret)
 		return ret;
 
···
 {
 	struct device *dev = &pdev->dev;
 	void __iomem *bridge_base_addr;
+	struct plda_pcie_rp *plda;
 	int ret;
 	u32 val;
 
···
 	if (!port)
 		return -ENOMEM;
 
-	port->dev = dev;
+	plda = &port->plda;
+	plda->dev = dev;
 
 	port->axi_base_addr = devm_platform_ioremap_resource(pdev, 1);
 	if (IS_ERR(port->axi_base_addr))
···
 	mc_disable_interrupts(port);
 
 	bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+	plda->bridge_addr = bridge_base_addr;
+	plda->num_events = NUM_EVENTS;
 
 	/* Allow enabling MSI by disabling MSI-X */
 	val = readl(bridge_base_addr + PCIE_PCI_IRQ_DW0);
···
 	val &= NUM_MSI_MSGS_MASK;
 	val >>= NUM_MSI_MSGS_SHIFT;
 
-	port->msi.num_vectors = 1 << val;
+	plda->msi.num_vectors = 1 << val;
 
 	/* Pick vector address from design */
-	port->msi.vector_phy = readl_relaxed(bridge_base_addr + IMSI_ADDR);
+	plda->msi.vector_phy = readl_relaxed(bridge_base_addr + IMSI_ADDR);
 
 	ret = mc_pcie_init_clks(dev);
 	if (ret) {
+1
drivers/pci/controller/pcie-mt7621.c
···
 };
 builtin_platform_driver(mt7621_pcie_driver);
 
+MODULE_DESCRIPTION("MediaTek MT7621 PCIe host controller driver");
 MODULE_LICENSE("GPL v2");
+5 -1
drivers/pci/controller/pcie-rcar-host.c
···
 	writel(L1IATN, pcie_base + PMCTLR);
 	ret = readl_poll_timeout_atomic(pcie_base + PMSR, val,
 					val & L1FAEG, 10, 1000);
-	WARN(ret, "Timeout waiting for L1 link state, ret=%d\n", ret);
+	if (ret) {
+		dev_warn_ratelimited(pcie_dev,
+				     "Timeout waiting for L1 link state, ret=%d\n",
+				     ret);
+	}
 	writel(L1FAEG | PMEL1RX, pcie_base + PMSR);
 }
 
+3
drivers/pci/controller/pcie-rockchip-host.c
···
 	rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE,
 			    PCIE_CLIENT_CONFIG);
 
+	msleep(PCIE_T_PVPERL_MS);
 	gpiod_set_value_cansleep(rockchip->ep_gpio, 1);
+
+	msleep(PCIE_T_RRS_READY_MS);
 
 	/* 500ms timeout value should be enough for Gen1/2 training */
 	err = readl_poll_timeout(rockchip->apb_base + PCIE_CLIENT_BASIC_STATUS1,
+1 -1
drivers/pci/controller/pcie-rockchip.c
···
 
 	if (rockchip->is_rc) {
 		rockchip->ep_gpio = devm_gpiod_get_optional(dev, "ep",
-							    GPIOD_OUT_HIGH);
+							    GPIOD_OUT_LOW);
 		if (IS_ERR(rockchip->ep_gpio))
 			return dev_err_probe(dev, PTR_ERR(rockchip->ep_gpio),
 					     "failed to get ep GPIO\n");
+30
drivers/pci/controller/plda/Kconfig
+# SPDX-License-Identifier: GPL-2.0
+
+menu "PLDA-based PCIe controllers"
+	depends on PCI
+
+config PCIE_PLDA_HOST
+	bool
+
+config PCIE_MICROCHIP_HOST
+	tristate "Microchip AXI PCIe controller"
+	depends on PCI_MSI && OF
+	select PCI_HOST_COMMON
+	select PCIE_PLDA_HOST
+	help
+	  Say Y here if you want kernel to support the Microchip AXI PCIe
+	  Host Bridge driver.
+
+config PCIE_STARFIVE_HOST
+	tristate "StarFive PCIe host controller"
+	depends on PCI_MSI && OF
+	depends on ARCH_STARFIVE || COMPILE_TEST
+	select PCIE_PLDA_HOST
+	help
+	  Say Y here if you want to support the StarFive PCIe controller in
+	  host mode. StarFive PCIe controller uses PLDA PCIe core.
+
+	  If you choose to build this driver as module it will be dynamically
+	  linked and module will be called pcie-starfive.ko.
+
+endmenu
+4
drivers/pci/controller/plda/Makefile
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_PCIE_PLDA_HOST)		+= pcie-plda-host.o
+obj-$(CONFIG_PCIE_MICROCHIP_HOST)	+= pcie-microchip-host.o
+obj-$(CONFIG_PCIE_STARFIVE_HOST)	+= pcie-starfive.o
+651
drivers/pci/controller/plda/pcie-plda-host.c
// SPDX-License-Identifier: GPL-2.0
/*
 * PLDA PCIe XpressRich host controller driver
 *
 * Copyright (C) 2023 Microchip Co. Ltd
 *                    StarFive Co. Ltd
 *
 * Author: Daire McNamara <daire.mcnamara@microchip.com>
 */

#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <linux/pci_regs.h>
#include <linux/pci-ecam.h>

#include "pcie-plda.h"

void __iomem *plda_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
				int where)
{
	struct plda_pcie_rp *pcie = bus->sysdata;

	return pcie->config_base + PCIE_ECAM_OFFSET(bus->number, devfn, where);
}
EXPORT_SYMBOL_GPL(plda_pcie_map_bus);

static void plda_handle_msi(struct irq_desc *desc)
{
	struct plda_pcie_rp *port = irq_desc_get_handler_data(desc);
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct device *dev = port->dev;
	struct plda_msi *msi = &port->msi;
	void __iomem *bridge_base_addr = port->bridge_addr;
	unsigned long status;
	u32 bit;
	int ret;

	chained_irq_enter(chip, desc);

	status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
	if (status & PM_MSI_INT_MSI_MASK) {
		writel_relaxed(status & PM_MSI_INT_MSI_MASK,
			       bridge_base_addr + ISTATUS_LOCAL);
		status = readl_relaxed(bridge_base_addr + ISTATUS_MSI);
		for_each_set_bit(bit, &status, msi->num_vectors) {
			ret = generic_handle_domain_irq(msi->dev_domain, bit);
			if (ret)
				dev_err_ratelimited(dev, "bad MSI IRQ %d\n",
						    bit);
		}
	}

	chained_irq_exit(chip, desc);
}

static void plda_msi_bottom_irq_ack(struct irq_data *data)
{
	struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
	void __iomem *bridge_base_addr = port->bridge_addr;
	u32 bitpos = data->hwirq;

	writel_relaxed(BIT(bitpos), bridge_base_addr + ISTATUS_MSI);
}

static void plda_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
	struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
	phys_addr_t addr = port->msi.vector_phy;

	msg->address_lo = lower_32_bits(addr);
	msg->address_hi = upper_32_bits(addr);
	msg->data = data->hwirq;

	dev_dbg(port->dev, "msi#%x address_hi %#x address_lo %#x\n",
		(int)data->hwirq, msg->address_hi, msg->address_lo);
}

static int plda_msi_set_affinity(struct irq_data *irq_data,
				 const struct cpumask *mask, bool force)
{
	return -EINVAL;
}

static struct irq_chip plda_msi_bottom_irq_chip = {
	.name = "PLDA MSI",
	.irq_ack = plda_msi_bottom_irq_ack,
	.irq_compose_msi_msg = plda_compose_msi_msg,
	.irq_set_affinity = plda_msi_set_affinity,
};

static int plda_irq_msi_domain_alloc(struct irq_domain *domain,
				     unsigned int virq,
				     unsigned int nr_irqs,
				     void *args)
{
	struct plda_pcie_rp *port = domain->host_data;
	struct plda_msi *msi = &port->msi;
	unsigned long bit;

	mutex_lock(&msi->lock);
	bit = find_first_zero_bit(msi->used, msi->num_vectors);
	if (bit >= msi->num_vectors) {
		mutex_unlock(&msi->lock);
		return -ENOSPC;
	}

	set_bit(bit, msi->used);

	irq_domain_set_info(domain, virq, bit, &plda_msi_bottom_irq_chip,
			    domain->host_data, handle_edge_irq, NULL, NULL);

	mutex_unlock(&msi->lock);

	return 0;
}

static void plda_irq_msi_domain_free(struct irq_domain *domain,
				     unsigned int virq,
				     unsigned int nr_irqs)
{
	struct irq_data *d = irq_domain_get_irq_data(domain, virq);
	struct plda_pcie_rp *port = irq_data_get_irq_chip_data(d);
	struct plda_msi *msi = &port->msi;

	mutex_lock(&msi->lock);

	if (test_bit(d->hwirq, msi->used))
		__clear_bit(d->hwirq, msi->used);
	else
		dev_err(port->dev, "trying to free unused MSI%lu\n", d->hwirq);

	mutex_unlock(&msi->lock);
}

static const struct irq_domain_ops msi_domain_ops = {
	.alloc	= plda_irq_msi_domain_alloc,
	.free	= plda_irq_msi_domain_free,
};

static struct irq_chip plda_msi_irq_chip = {
	.name = "PLDA PCIe MSI",
	.irq_ack = irq_chip_ack_parent,
	.irq_mask = pci_msi_mask_irq,
	.irq_unmask = pci_msi_unmask_irq,
};

static struct msi_domain_info plda_msi_domain_info = {
	.flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
		  MSI_FLAG_PCI_MSIX),
	.chip = &plda_msi_irq_chip,
};

static int plda_allocate_msi_domains(struct plda_pcie_rp *port)
{
	struct device *dev = port->dev;
	struct fwnode_handle *fwnode = of_node_to_fwnode(dev->of_node);
	struct plda_msi *msi = &port->msi;

	mutex_init(&port->msi.lock);

	msi->dev_domain = irq_domain_add_linear(NULL, msi->num_vectors,
						&msi_domain_ops, port);
	if (!msi->dev_domain) {
		dev_err(dev, "failed to create IRQ domain\n");
		return -ENOMEM;
	}

	msi->msi_domain = pci_msi_create_irq_domain(fwnode,
						    &plda_msi_domain_info,
						    msi->dev_domain);
	if (!msi->msi_domain) {
		dev_err(dev, "failed to create MSI domain\n");
		irq_domain_remove(msi->dev_domain);
		return -ENOMEM;
	}

	return 0;
}

static void plda_handle_intx(struct irq_desc *desc)
{
	struct plda_pcie_rp *port = irq_desc_get_handler_data(desc);
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct device *dev = port->dev;
	void __iomem *bridge_base_addr = port->bridge_addr;
	unsigned long status;
	u32 bit;
	int ret;

	chained_irq_enter(chip, desc);

	status = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
	if (status & PM_MSI_INT_INTX_MASK) {
		status &= PM_MSI_INT_INTX_MASK;
		status >>= PM_MSI_INT_INTX_SHIFT;
		for_each_set_bit(bit, &status, PCI_NUM_INTX) {
			ret = generic_handle_domain_irq(port->intx_domain, bit);
			if (ret)
				dev_err_ratelimited(dev, "bad INTx IRQ %d\n",
						    bit);
		}
	}

	chained_irq_exit(chip, desc);
}

static void plda_ack_intx_irq(struct irq_data *data)
{
	struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
	void __iomem *bridge_base_addr = port->bridge_addr;
	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);

	writel_relaxed(mask, bridge_base_addr + ISTATUS_LOCAL);
}

static void plda_mask_intx_irq(struct irq_data *data)
{
	struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
	void __iomem *bridge_base_addr = port->bridge_addr;
	unsigned long flags;
	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
	u32 val;

	raw_spin_lock_irqsave(&port->lock, flags);
	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
	val &= ~mask;
	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
	raw_spin_unlock_irqrestore(&port->lock, flags);
}

static void plda_unmask_intx_irq(struct irq_data *data)
{
	struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
	void __iomem *bridge_base_addr = port->bridge_addr;
	unsigned long flags;
	u32 mask = BIT(data->hwirq + PM_MSI_INT_INTX_SHIFT);
	u32 val;

	raw_spin_lock_irqsave(&port->lock, flags);
	val = readl_relaxed(bridge_base_addr + IMASK_LOCAL);
	val |= mask;
	writel_relaxed(val, bridge_base_addr + IMASK_LOCAL);
	raw_spin_unlock_irqrestore(&port->lock, flags);
}

static struct irq_chip plda_intx_irq_chip = {
	.name = "PLDA PCIe INTx",
	.irq_ack = plda_ack_intx_irq,
	.irq_mask = plda_mask_intx_irq,
	.irq_unmask = plda_unmask_intx_irq,
};

static int plda_pcie_intx_map(struct irq_domain *domain, unsigned int irq,
			      irq_hw_number_t hwirq)
{
	irq_set_chip_and_handler(irq, &plda_intx_irq_chip, handle_level_irq);
	irq_set_chip_data(irq, domain->host_data);

	return 0;
}

static const struct irq_domain_ops intx_domain_ops = {
	.map = plda_pcie_intx_map,
};

static u32 plda_get_events(struct plda_pcie_rp *port)
{
	u32 events, val, origin;

	origin = readl_relaxed(port->bridge_addr + ISTATUS_LOCAL);

	/* MSI event and system events */
	val = (origin & SYS_AND_MSI_MASK) >> PM_MSI_INT_MSI_SHIFT;
	events = val << (PM_MSI_INT_MSI_SHIFT - PCI_NUM_INTX + 1);

	/* INTx events */
	if (origin & PM_MSI_INT_INTX_MASK)
		events |= BIT(PM_MSI_INT_INTX_SHIFT);

	/* the remaining events map 1:1 to the register bits */
	events |= origin & GENMASK(P_ATR_EVT_DOORBELL_SHIFT, 0);

	return events;
}

static irqreturn_t plda_event_handler(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static void plda_handle_event(struct irq_desc *desc)
{
	struct plda_pcie_rp *port = irq_desc_get_handler_data(desc);
	unsigned long events;
	u32 bit;
	struct irq_chip *chip = irq_desc_get_chip(desc);

	chained_irq_enter(chip, desc);

	events = port->event_ops->get_events(port);

	events &= port->events_bitmap;
	for_each_set_bit(bit, &events, port->num_events)
		generic_handle_domain_irq(port->event_domain, bit);

	chained_irq_exit(chip, desc);
}

static u32 plda_hwirq_to_mask(int hwirq)
{
	u32 mask;

	/* hwirqs 0 - 23 map 1:1 to the register bits */
	if (hwirq < EVENT_PM_MSI_INT_INTX)
		mask = BIT(hwirq);
	else if (hwirq == EVENT_PM_MSI_INT_INTX)
		mask = PM_MSI_INT_INTX_MASK;
	else
		mask = BIT(hwirq + PCI_NUM_INTX - 1);

	return mask;
}

static void plda_ack_event_irq(struct irq_data *data)
{
	struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);

	writel_relaxed(plda_hwirq_to_mask(data->hwirq),
		       port->bridge_addr + ISTATUS_LOCAL);
}

static void plda_mask_event_irq(struct irq_data *data)
{
	struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
	u32 mask, val;

	mask = plda_hwirq_to_mask(data->hwirq);

	raw_spin_lock(&port->lock);
	val = readl_relaxed(port->bridge_addr + IMASK_LOCAL);
	val &= ~mask;
	writel_relaxed(val, port->bridge_addr + IMASK_LOCAL);
	raw_spin_unlock(&port->lock);
}

static void plda_unmask_event_irq(struct irq_data *data)
{
	struct plda_pcie_rp *port = irq_data_get_irq_chip_data(data);
	u32 mask, val;

	mask = plda_hwirq_to_mask(data->hwirq);

	raw_spin_lock(&port->lock);
	val = readl_relaxed(port->bridge_addr + IMASK_LOCAL);
	val |= mask;
	writel_relaxed(val, port->bridge_addr + IMASK_LOCAL);
	raw_spin_unlock(&port->lock);
}

static struct irq_chip plda_event_irq_chip = {
	.name = "PLDA PCIe EVENT",
	.irq_ack = plda_ack_event_irq,
	.irq_mask = plda_mask_event_irq,
	.irq_unmask = plda_unmask_event_irq,
};

static const struct plda_event_ops plda_event_ops = {
	.get_events = plda_get_events,
};

static int plda_pcie_event_map(struct irq_domain *domain, unsigned int irq,
			       irq_hw_number_t hwirq)
{
	struct plda_pcie_rp *port = (void *)domain->host_data;

	irq_set_chip_and_handler(irq, port->event_irq_chip, handle_level_irq);
	irq_set_chip_data(irq, domain->host_data);

	return 0;
}

static const struct irq_domain_ops plda_event_domain_ops = {
	.map = plda_pcie_event_map,
};

static int plda_pcie_init_irq_domains(struct plda_pcie_rp *port)
{
	struct device *dev = port->dev;
	struct device_node *node = dev->of_node;
	struct device_node *pcie_intc_node;

	/* Setup INTx */
	pcie_intc_node = of_get_next_child(node, NULL);
	if (!pcie_intc_node) {
		dev_err(dev, "failed to find PCIe Intc node\n");
		return -EINVAL;
	}

	port->event_domain = irq_domain_add_linear(pcie_intc_node,
						   port->num_events,
						   &plda_event_domain_ops,
						   port);
	if (!port->event_domain) {
		dev_err(dev, "failed to get event domain\n");
		of_node_put(pcie_intc_node);
		return -ENOMEM;
	}

	irq_domain_update_bus_token(port->event_domain, DOMAIN_BUS_NEXUS);

	port->intx_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
						  &intx_domain_ops, port);
	if (!port->intx_domain) {
		dev_err(dev, "failed to get an INTx IRQ domain\n");
		of_node_put(pcie_intc_node);
		return -ENOMEM;
	}

	irq_domain_update_bus_token(port->intx_domain, DOMAIN_BUS_WIRED);

	of_node_put(pcie_intc_node);
	raw_spin_lock_init(&port->lock);

	return plda_allocate_msi_domains(port);
}

int plda_init_interrupts(struct platform_device *pdev,
			 struct plda_pcie_rp *port,
			 const struct plda_event *event)
{
	struct device *dev = &pdev->dev;
	int event_irq, ret;
	u32 i;

	if (!port->event_ops)
		port->event_ops = &plda_event_ops;

	if (!port->event_irq_chip)
		port->event_irq_chip = &plda_event_irq_chip;

	ret = plda_pcie_init_irq_domains(port);
	if (ret) {
		dev_err(dev, "failed creating IRQ domains\n");
		return ret;
	}

	port->irq = platform_get_irq(pdev, 0);
	if (port->irq < 0)
		return -ENODEV;

	for_each_set_bit(i, &port->events_bitmap, port->num_events) {
		event_irq = irq_create_mapping(port->event_domain, i);
		if (!event_irq) {
			dev_err(dev, "failed to map hwirq %d\n", i);
			return -ENXIO;
		}

		if (event->request_event_irq)
			ret = event->request_event_irq(port, event_irq, i);
		else
			ret = devm_request_irq(dev, event_irq,
					       plda_event_handler,
					       0, NULL, port);

		if (ret) {
			dev_err(dev, "failed to request IRQ %d\n", event_irq);
			return ret;
		}
	}

	port->intx_irq = irq_create_mapping(port->event_domain,
					    event->intx_event);
	if (!port->intx_irq) {
		dev_err(dev, "failed to map INTx interrupt\n");
		return -ENXIO;
	}

	/* Plug the INTx chained handler */
	irq_set_chained_handler_and_data(port->intx_irq, plda_handle_intx, port);

	port->msi_irq = irq_create_mapping(port->event_domain,
					   event->msi_event);
	if (!port->msi_irq)
		return -ENXIO;

	/* Plug the MSI chained handler */
	irq_set_chained_handler_and_data(port->msi_irq, plda_handle_msi, port);

	/* Plug the main event chained handler */
	irq_set_chained_handler_and_data(port->irq, plda_handle_event, port);

	return 0;
}
EXPORT_SYMBOL_GPL(plda_init_interrupts);

void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
			    phys_addr_t axi_addr, phys_addr_t pci_addr,
			    size_t size)
{
	u32 atr_sz = ilog2(size) - 1;
	u32 val;

	if (index == 0)
		val = PCIE_CONFIG_INTERFACE;
	else
		val = PCIE_TX_RX_INTERFACE;

	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
	       ATR0_AXI4_SLV0_TRSL_PARAM);

	val = lower_32_bits(axi_addr) | (atr_sz << ATR_SIZE_SHIFT) |
	      ATR_IMPL_ENABLE;
	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
	       ATR0_AXI4_SLV0_SRCADDR_PARAM);

	val = upper_32_bits(axi_addr);
	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
	       ATR0_AXI4_SLV0_SRC_ADDR);

	val = lower_32_bits(pci_addr);
	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
	       ATR0_AXI4_SLV0_TRSL_ADDR_LSB);

	val = upper_32_bits(pci_addr);
	writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
	       ATR0_AXI4_SLV0_TRSL_ADDR_UDW);

	val = readl(bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
	val |= (ATR0_PCIE_ATR_SIZE << ATR0_PCIE_ATR_SIZE_SHIFT);
	writel(val, bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
	writel(0, bridge_base_addr + ATR0_PCIE_WIN0_SRC_ADDR);
}
EXPORT_SYMBOL_GPL(plda_pcie_setup_window);

int plda_pcie_setup_iomems(struct pci_host_bridge *bridge,
			   struct plda_pcie_rp *port)
{
	void __iomem *bridge_base_addr = port->bridge_addr;
	struct resource_entry *entry;
	u64 pci_addr;
	u32 index = 1;

	resource_list_for_each_entry(entry, &bridge->windows) {
		if (resource_type(entry->res) == IORESOURCE_MEM) {
			pci_addr = entry->res->start - entry->offset;
			plda_pcie_setup_window(bridge_base_addr, index,
					       entry->res->start, pci_addr,
					       resource_size(entry->res));
			index++;
		}
	}

	return 0;
}
EXPORT_SYMBOL_GPL(plda_pcie_setup_iomems);

static void plda_pcie_irq_domain_deinit(struct plda_pcie_rp *pcie)
{
	irq_set_chained_handler_and_data(pcie->irq, NULL, NULL);
	irq_set_chained_handler_and_data(pcie->msi_irq, NULL, NULL);
	irq_set_chained_handler_and_data(pcie->intx_irq, NULL, NULL);

	irq_domain_remove(pcie->msi.msi_domain);
	irq_domain_remove(pcie->msi.dev_domain);

	irq_domain_remove(pcie->intx_domain);
	irq_domain_remove(pcie->event_domain);
}

int plda_pcie_host_init(struct plda_pcie_rp *port, struct pci_ops *ops,
			const struct plda_event *plda_event)
{
	struct device *dev = port->dev;
	struct pci_host_bridge *bridge;
	struct platform_device *pdev = to_platform_device(dev);
	struct resource *cfg_res;
	int ret;

	port->bridge_addr =
		devm_platform_ioremap_resource_byname(pdev, "apb");

	if (IS_ERR(port->bridge_addr))
		return dev_err_probe(dev, PTR_ERR(port->bridge_addr),
				     "failed to map reg memory\n");

	cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "cfg");
	if (!cfg_res)
		return dev_err_probe(dev, -ENODEV,
				     "failed to get config memory\n");

	port->config_base = devm_ioremap_resource(dev, cfg_res);
	if (IS_ERR(port->config_base))
		return dev_err_probe(dev, PTR_ERR(port->config_base),
				     "failed to map config memory\n");

	bridge = devm_pci_alloc_host_bridge(dev, 0);
	if (!bridge)
		return dev_err_probe(dev, -ENOMEM,
				     "failed to alloc bridge\n");

	if (port->host_ops && port->host_ops->host_init) {
		ret = port->host_ops->host_init(port);
		if (ret)
			return ret;
	}

	port->bridge = bridge;
	plda_pcie_setup_window(port->bridge_addr, 0, cfg_res->start, 0,
			       resource_size(cfg_res));
	plda_pcie_setup_iomems(bridge, port);
	plda_set_default_msi(&port->msi);
	ret = plda_init_interrupts(pdev, port, plda_event);
	if (ret)
		goto err_host;

	/* Set default bus ops */
	bridge->ops = ops;
	bridge->sysdata = port;

	ret = pci_host_probe(bridge);
	if (ret < 0) {
		dev_err_probe(dev, ret, "failed to probe pci host\n");
		goto err_probe;
	}

	return ret;

err_probe:
	plda_pcie_irq_domain_deinit(port);
err_host:
	if (port->host_ops && port->host_ops->host_deinit)
		port->host_ops->host_deinit(port);

	return ret;
}
EXPORT_SYMBOL_GPL(plda_pcie_host_init);

void plda_pcie_host_deinit(struct plda_pcie_rp *port)
{
	pci_stop_root_bus(port->bridge->bus);
	pci_remove_root_bus(port->bridge->bus);

	plda_pcie_irq_domain_deinit(port);

	if (port->host_ops && port->host_ops->host_deinit)
		port->host_ops->host_deinit(port);
}
EXPORT_SYMBOL_GPL(plda_pcie_host_deinit);
drivers/pci/controller/plda/pcie-plda.h
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * PLDA PCIe host controller driver
 */

#ifndef _PCIE_PLDA_H
#define _PCIE_PLDA_H

/* Number of MSI IRQs */
#define PLDA_MAX_NUM_MSI_IRQS		32

/* PCIe Bridge Phy Regs */
#define GEN_SETTINGS			0x80
#define  RP_ENABLE			1
#define PCIE_PCI_IDS_DW1		0x9c
#define  IDS_CLASS_CODE_SHIFT		16
#define  REVISION_ID_MASK		GENMASK(7, 0)
#define  CLASS_CODE_ID_MASK		GENMASK(31, 8)
#define PCIE_PCI_IRQ_DW0		0xa8
#define  MSIX_CAP_MASK			BIT(31)
#define  NUM_MSI_MSGS_MASK		GENMASK(6, 4)
#define  NUM_MSI_MSGS_SHIFT		4
#define PCI_MISC			0xb4
#define  PHY_FUNCTION_DIS		BIT(15)
#define PCIE_WINROM			0xfc
#define  PREF_MEM_WIN_64_SUPPORT	BIT(3)

#define IMASK_LOCAL			0x180
#define  DMA_END_ENGINE_0_MASK		0x00000000u
#define  DMA_END_ENGINE_0_SHIFT		0
#define  DMA_END_ENGINE_1_MASK		0x00000000u
#define  DMA_END_ENGINE_1_SHIFT		1
#define  DMA_ERROR_ENGINE_0_MASK	0x00000100u
#define  DMA_ERROR_ENGINE_0_SHIFT	8
#define  DMA_ERROR_ENGINE_1_MASK	0x00000200u
#define  DMA_ERROR_ENGINE_1_SHIFT	9
#define  A_ATR_EVT_POST_ERR_MASK	0x00010000u
#define  A_ATR_EVT_POST_ERR_SHIFT	16
#define  A_ATR_EVT_FETCH_ERR_MASK	0x00020000u
#define  A_ATR_EVT_FETCH_ERR_SHIFT	17
#define  A_ATR_EVT_DISCARD_ERR_MASK	0x00040000u
#define  A_ATR_EVT_DISCARD_ERR_SHIFT	18
#define  A_ATR_EVT_DOORBELL_MASK	0x00000000u
#define  A_ATR_EVT_DOORBELL_SHIFT	19
#define  P_ATR_EVT_POST_ERR_MASK	0x00100000u
#define  P_ATR_EVT_POST_ERR_SHIFT	20
#define  P_ATR_EVT_FETCH_ERR_MASK	0x00200000u
#define  P_ATR_EVT_FETCH_ERR_SHIFT	21
#define  P_ATR_EVT_DISCARD_ERR_MASK	0x00400000u
#define  P_ATR_EVT_DISCARD_ERR_SHIFT	22
#define  P_ATR_EVT_DOORBELL_MASK	0x00000000u
#define  P_ATR_EVT_DOORBELL_SHIFT	23
#define  PM_MSI_INT_INTA_MASK		0x01000000u
#define  PM_MSI_INT_INTA_SHIFT		24
#define  PM_MSI_INT_INTB_MASK		0x02000000u
#define  PM_MSI_INT_INTB_SHIFT		25
#define  PM_MSI_INT_INTC_MASK		0x04000000u
#define  PM_MSI_INT_INTC_SHIFT		26
#define  PM_MSI_INT_INTD_MASK		0x08000000u
#define  PM_MSI_INT_INTD_SHIFT		27
#define  PM_MSI_INT_INTX_MASK		0x0f000000u
#define  PM_MSI_INT_INTX_SHIFT		24
#define  PM_MSI_INT_MSI_MASK		0x10000000u
#define  PM_MSI_INT_MSI_SHIFT		28
#define  PM_MSI_INT_AER_EVT_MASK	0x20000000u
#define  PM_MSI_INT_AER_EVT_SHIFT	29
#define  PM_MSI_INT_EVENTS_MASK		0x40000000u
#define  PM_MSI_INT_EVENTS_SHIFT	30
#define  PM_MSI_INT_SYS_ERR_MASK	0x80000000u
#define  PM_MSI_INT_SYS_ERR_SHIFT	31
#define  SYS_AND_MSI_MASK		GENMASK(31, 28)
#define  NUM_LOCAL_EVENTS		15
#define ISTATUS_LOCAL			0x184
#define IMASK_HOST			0x188
#define ISTATUS_HOST			0x18c
#define IMSI_ADDR			0x190
#define ISTATUS_MSI			0x194
#define PMSG_SUPPORT_RX			0x3f0
#define  PMSG_LTR_SUPPORT		BIT(2)

/* PCIe Master table init defines */
#define ATR0_PCIE_WIN0_SRCADDR_PARAM	0x600u
#define  ATR0_PCIE_ATR_SIZE		0x25
#define  ATR0_PCIE_ATR_SIZE_SHIFT	1
#define ATR0_PCIE_WIN0_SRC_ADDR		0x604u
#define ATR0_PCIE_WIN0_TRSL_ADDR_LSB	0x608u
#define ATR0_PCIE_WIN0_TRSL_ADDR_UDW	0x60cu
#define ATR0_PCIE_WIN0_TRSL_PARAM	0x610u

/* PCIe AXI slave table init defines */
#define ATR0_AXI4_SLV0_SRCADDR_PARAM	0x800u
#define  ATR_SIZE_SHIFT			1
#define  ATR_IMPL_ENABLE		1
#define ATR0_AXI4_SLV0_SRC_ADDR		0x804u
#define ATR0_AXI4_SLV0_TRSL_ADDR_LSB	0x808u
#define ATR0_AXI4_SLV0_TRSL_ADDR_UDW	0x80cu
#define ATR0_AXI4_SLV0_TRSL_PARAM	0x810u
#define  PCIE_TX_RX_INTERFACE		0x00000000u
#define  PCIE_CONFIG_INTERFACE		0x00000001u

#define CONFIG_SPACE_ADDR_OFFSET	0x1000u

#define ATR_ENTRY_SIZE			32

enum plda_int_event {
	PLDA_AXI_POST_ERR,
	PLDA_AXI_FETCH_ERR,
	PLDA_AXI_DISCARD_ERR,
	PLDA_AXI_DOORBELL,
	PLDA_PCIE_POST_ERR,
	PLDA_PCIE_FETCH_ERR,
	PLDA_PCIE_DISCARD_ERR,
	PLDA_PCIE_DOORBELL,
	PLDA_INTX,
	PLDA_MSI,
	PLDA_AER_EVENT,
	PLDA_MISC_EVENTS,
	PLDA_SYS_ERR,
	PLDA_INT_EVENT_NUM
};

#define PLDA_NUM_DMA_EVENTS		16

#define EVENT_PM_MSI_INT_INTX		(PLDA_NUM_DMA_EVENTS + PLDA_INTX)
#define EVENT_PM_MSI_INT_MSI		(PLDA_NUM_DMA_EVENTS + PLDA_MSI)
#define PLDA_MAX_EVENT_NUM		(PLDA_NUM_DMA_EVENTS + PLDA_INT_EVENT_NUM)

/*
 * PLDA interrupt register
 *
 * 31         27     23              15           7          0
 * +--+--+--+-+------+-+-+-+-+-+-+-+-+-----------+-----------+
 * |12|11|10|9| intx |7|6|5|4|3|2|1|0| DMA error | DMA end   |
 * +--+--+--+-+------+-+-+-+-+-+-+-+-+-----------+-----------+
 * event  bit
 * 0-7   (0-7)   DMA interrupt end  : reserved for vendor implementation
 * 8-15  (8-15)  DMA error          : reserved for vendor implementation
 * 16    (16)    AXI post error     (PLDA_AXI_POST_ERR)
 * 17    (17)    AXI fetch error    (PLDA_AXI_FETCH_ERR)
 * 18    (18)    AXI discard error  (PLDA_AXI_DISCARD_ERR)
 * 19    (19)    AXI doorbell       (PLDA_AXI_DOORBELL)
 * 20    (20)    PCIe post error    (PLDA_PCIE_POST_ERR)
 * 21    (21)    PCIe fetch error   (PLDA_PCIE_FETCH_ERR)
 * 22    (22)    PCIe discard error (PLDA_PCIE_DISCARD_ERR)
 * 23    (23)    PCIe doorbell      (PLDA_PCIE_DOORBELL)
 * 24    (27-24) INTx interrupts    (PLDA_INTX)
 * 25    (28)    MSI interrupt      (PLDA_MSI)
 * 26    (29)    AER event          (PLDA_AER_EVENT)
 * 27    (30)    PM/LTR/Hotplug     (PLDA_MISC_EVENTS)
 * 28    (31)    System error       (PLDA_SYS_ERR)
 */

struct plda_pcie_rp;

struct plda_event_ops {
	u32 (*get_events)(struct plda_pcie_rp *pcie);
};

struct plda_pcie_host_ops {
	int (*host_init)(struct plda_pcie_rp *pcie);
	void (*host_deinit)(struct plda_pcie_rp *pcie);
};

struct plda_msi {
	struct mutex lock;		/* Protect used bitmap */
	struct irq_domain *msi_domain;
	struct irq_domain *dev_domain;
	u32 num_vectors;
	u64 vector_phy;
	DECLARE_BITMAP(used, PLDA_MAX_NUM_MSI_IRQS);
};

struct plda_pcie_rp {
	struct device *dev;
	struct pci_host_bridge *bridge;
	struct irq_domain *intx_domain;
	struct irq_domain *event_domain;
	raw_spinlock_t lock;
	struct plda_msi msi;
	const struct plda_event_ops *event_ops;
	const struct irq_chip *event_irq_chip;
	const struct plda_pcie_host_ops *host_ops;
	void __iomem *bridge_addr;
	void __iomem *config_base;
	unsigned long events_bitmap;
	int irq;
	int msi_irq;
	int intx_irq;
	int num_events;
};

struct plda_event {
	int (*request_event_irq)(struct plda_pcie_rp *pcie,
				 int event_irq, int event);
	int intx_event;
	int msi_event;
};

void __iomem *plda_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
				int where);
int plda_init_interrupts(struct platform_device *pdev,
			 struct plda_pcie_rp *port,
			 const struct plda_event *event);
void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
			    phys_addr_t axi_addr, phys_addr_t pci_addr,
			    size_t size);
int plda_pcie_setup_iomems(struct pci_host_bridge *bridge,
			   struct plda_pcie_rp *port);
int plda_pcie_host_init(struct plda_pcie_rp *port, struct pci_ops *ops,
			const struct plda_event *plda_event);
void plda_pcie_host_deinit(struct plda_pcie_rp *pcie);

static inline void plda_set_default_msi(struct plda_msi *msi)
{
	msi->vector_phy = IMSI_ADDR;
	msi->num_vectors = PLDA_MAX_NUM_MSI_IRQS;
}

static inline void plda_pcie_enable_root_port(struct plda_pcie_rp *plda)
{
	u32 value;

	value = readl_relaxed(plda->bridge_addr + GEN_SETTINGS);
	value |= RP_ENABLE;
	writel_relaxed(value, plda->bridge_addr + GEN_SETTINGS);
}
drivers/pci/controller/plda/pcie-starfive.c
// SPDX-License-Identifier: GPL-2.0+
/*
 * PCIe host controller driver for the StarFive JH7110 SoC.
 *
 * Copyright (C) 2023 StarFive Technology Co., Ltd.
 */

#include <linux/bitfield.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <linux/reset.h>
#include "../../pci.h"

#include "pcie-plda.h"

#define PCIE_FUNC_NUM			4

/* system control */
#define STG_SYSCON_PCIE0_BASE		0x48
#define STG_SYSCON_PCIE1_BASE		0x1f8

#define STG_SYSCON_AR_OFFSET		0x78
#define  STG_SYSCON_AXI4_SLVL_AR_MASK	GENMASK(22, 8)
#define  STG_SYSCON_AXI4_SLVL_PHY_AR(x)	FIELD_PREP(GENMASK(20, 17), x)
#define STG_SYSCON_AW_OFFSET		0x7c
#define  STG_SYSCON_AXI4_SLVL_AW_MASK	GENMASK(14, 0)
#define  STG_SYSCON_AXI4_SLVL_PHY_AW(x)	FIELD_PREP(GENMASK(12, 9), x)
#define  STG_SYSCON_CLKREQ		BIT(22)
#define  STG_SYSCON_CKREF_SRC_MASK	GENMASK(19, 18)
#define STG_SYSCON_RP_NEP_OFFSET	0xe8
#define  STG_SYSCON_K_RP_NEP		BIT(8)
#define STG_SYSCON_LNKSTA_OFFSET	0x170
#define  DATA_LINK_ACTIVE		BIT(5)

/* Parameters for the waiting-for-link-up routine */
#define LINK_WAIT_MAX_RETRIES		10
#define LINK_WAIT_USLEEP_MIN		90000
#define LINK_WAIT_USLEEP_MAX		100000

struct starfive_jh7110_pcie {
	struct plda_pcie_rp plda;
	struct reset_control *resets;
	struct clk_bulk_data *clks;
	struct regmap *reg_syscon;
	struct gpio_desc *power_gpio;
	struct gpio_desc *reset_gpio;
	struct phy *phy;

	unsigned int stg_pcie_base;
	int num_clks;
};

/*
 * JH7110 PCIe port BAR0/1 can be configured as 64-bit prefetchable memory
 * space. PCIe read and write requests targeting BAR0/1 are routed to the
 * so-called 'Bridge Configuration space' of the PLDA IP datasheet, which
 * contains the bridge's internal registers, such as the interrupt, DMA and
 * ATU registers. The JH7110 accesses the Bridge Configuration space through
 * its local bus, and we don't want the bridge internal registers to be
 * accessed by DMA from EP devices. Thus, these BARs are left unimplemented
 * and hidden here.
 */
static bool starfive_pcie_hide_rc_bar(struct pci_bus *bus, unsigned int devfn,
				      int offset)
{
	if (pci_is_root_bus(bus) && !devfn &&
	    (offset == PCI_BASE_ADDRESS_0 || offset == PCI_BASE_ADDRESS_1))
		return true;

	return false;
}

static int starfive_pcie_config_write(struct pci_bus *bus, unsigned int devfn,
				      int where, int size, u32 value)
{
	if (starfive_pcie_hide_rc_bar(bus, devfn, where))
		return PCIBIOS_SUCCESSFUL;

	return pci_generic_config_write(bus, devfn, where, size, value);
}

static int starfive_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
				     int where, int size, u32 *value)
{
	if (starfive_pcie_hide_rc_bar(bus, devfn, where)) {
		*value = 0;
		return PCIBIOS_SUCCESSFUL;
	}

	return pci_generic_config_read(bus, devfn, where, size, value);
}

static int starfive_pcie_parse_dt(struct starfive_jh7110_pcie *pcie,
				  struct device *dev)
{
	int domain_nr;

	pcie->num_clks = devm_clk_bulk_get_all(dev, &pcie->clks);
	if (pcie->num_clks < 0)
		return dev_err_probe(dev, pcie->num_clks,
				     "failed to get pcie clocks\n");

	pcie->resets = devm_reset_control_array_get_exclusive(dev);
	if (IS_ERR(pcie->resets))
		return dev_err_probe(dev, PTR_ERR(pcie->resets),
				     "failed to get pcie resets\n");

	pcie->reg_syscon =
		syscon_regmap_lookup_by_phandle(dev->of_node,
						"starfive,stg-syscon");

	if (IS_ERR(pcie->reg_syscon))
		return dev_err_probe(dev, PTR_ERR(pcie->reg_syscon),
				     "failed to parse starfive,stg-syscon\n");

	pcie->phy = devm_phy_optional_get(dev, NULL);
	if (IS_ERR(pcie->phy))
		return dev_err_probe(dev, PTR_ERR(pcie->phy),
				     "failed to get pcie phy\n");

	/*
	 * The PCIe domain numbers are set statically in the JH7110 DTS.
	 * As the STG system controller defines different bases for PCIe RP0
	 * and RP1, we use them to identify which controller is doing the
	 * hardware initialization.
	 */
	domain_nr = of_get_pci_domain_nr(dev->of_node);

	if (domain_nr < 0 || domain_nr > 1)
		return dev_err_probe(dev, -ENODEV,
				     "failed to get valid pcie domain\n");

	if (domain_nr == 0)
		pcie->stg_pcie_base = STG_SYSCON_PCIE0_BASE;
	else
		pcie->stg_pcie_base = STG_SYSCON_PCIE1_BASE;

	pcie->reset_gpio = devm_gpiod_get_optional(dev, "perst",
						   GPIOD_OUT_HIGH);
	if (IS_ERR(pcie->reset_gpio))
		return dev_err_probe(dev, PTR_ERR(pcie->reset_gpio),
				     "failed to get perst-gpio\n");

	pcie->power_gpio = devm_gpiod_get_optional(dev, "enable",
						   GPIOD_OUT_LOW);
	if (IS_ERR(pcie->power_gpio))
		return dev_err_probe(dev, PTR_ERR(pcie->power_gpio),
				     "failed to get power-gpio\n");

	return 0;
}

static struct pci_ops starfive_pcie_ops = {
	.map_bus	= plda_pcie_map_bus,
	.read		= starfive_pcie_config_read,
	.write		= starfive_pcie_config_write,
};

static int starfive_pcie_clk_rst_init(struct starfive_jh7110_pcie *pcie)
{
	struct device *dev = pcie->plda.dev;
	int ret;

	ret = clk_bulk_prepare_enable(pcie->num_clks, pcie->clks);
	if (ret)
		return dev_err_probe(dev, ret, "failed to enable clocks\n");

	ret = reset_control_deassert(pcie->resets);
	if (ret) {
		clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
		dev_err_probe(dev, ret, "failed to deassert resets\n");
	}

	return ret;
}

static void starfive_pcie_clk_rst_deinit(struct starfive_jh7110_pcie *pcie)
{
	reset_control_assert(pcie->resets);
	clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
}

static bool starfive_pcie_link_up(struct plda_pcie_rp *plda)
{
	struct starfive_jh7110_pcie *pcie =
		container_of(plda, struct starfive_jh7110_pcie, plda);
	int ret;
	u32 stg_reg_val;

	ret = regmap_read(pcie->reg_syscon,
			  pcie->stg_pcie_base + STG_SYSCON_LNKSTA_OFFSET,
			  &stg_reg_val);
	if (ret) {
		dev_err(pcie->plda.dev, "failed to read link status\n");
		return false;
	}

	return !!(stg_reg_val & DATA_LINK_ACTIVE);
}

static int starfive_pcie_host_wait_for_link(struct starfive_jh7110_pcie *pcie)
{
	int retries;

	/* Check if the link is up or not */
	for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) {
		if (starfive_pcie_link_up(&pcie->plda)) {
			dev_info(pcie->plda.dev, "port link up\n");
			return 0;
		}
		usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX);
	}

	return -ETIMEDOUT;
}

static int starfive_pcie_enable_phy(struct device *dev,
				    struct starfive_jh7110_pcie *pcie)
{
	int ret;

	if (!pcie->phy)
		return 0;

	ret = phy_init(pcie->phy);
	if (ret)
		return dev_err_probe(dev, ret,
				     "failed to initialize pcie phy\n");

	ret = phy_set_mode(pcie->phy, PHY_MODE_PCIE);
	if (ret) {
		dev_err_probe(dev, ret, "failed to set pcie mode\n");
		goto err_phy_on;
	}

	ret = phy_power_on(pcie->phy);
	if (ret) {
		dev_err_probe(dev, ret, "failed to power on pcie phy\n");
		goto err_phy_on;
	}

	return 0;

err_phy_on:
	phy_exit(pcie->phy);
	return ret;
}

static void starfive_pcie_disable_phy(struct starfive_jh7110_pcie *pcie)
{
	phy_power_off(pcie->phy);
	phy_exit(pcie->phy);
}

static void starfive_pcie_host_deinit(struct plda_pcie_rp *plda)
{
	struct starfive_jh7110_pcie *pcie =
		container_of(plda, struct starfive_jh7110_pcie, plda);

	starfive_pcie_clk_rst_deinit(pcie);
	if (pcie->power_gpio)
		gpiod_set_value_cansleep(pcie->power_gpio, 0);
	starfive_pcie_disable_phy(pcie);
}

static int starfive_pcie_host_init(struct plda_pcie_rp *plda)
{
	struct starfive_jh7110_pcie *pcie =
		container_of(plda, struct starfive_jh7110_pcie, plda);
	struct device *dev = plda->dev;
	int ret;
	int i;

	ret = starfive_pcie_enable_phy(dev, pcie);
	if (ret)
		return ret;

	regmap_update_bits(pcie->reg_syscon,
			   pcie->stg_pcie_base + STG_SYSCON_RP_NEP_OFFSET,
			   STG_SYSCON_K_RP_NEP, STG_SYSCON_K_RP_NEP);

	regmap_update_bits(pcie->reg_syscon,
			   pcie->stg_pcie_base + STG_SYSCON_AW_OFFSET,
			   STG_SYSCON_CKREF_SRC_MASK,
			   FIELD_PREP(STG_SYSCON_CKREF_SRC_MASK, 2));

	regmap_update_bits(pcie->reg_syscon,
			   pcie->stg_pcie_base + STG_SYSCON_AW_OFFSET,
			   STG_SYSCON_CLKREQ, STG_SYSCON_CLKREQ);

	ret = starfive_pcie_clk_rst_init(pcie);
	if (ret)
		return ret;

	if (pcie->power_gpio)
		gpiod_set_value_cansleep(pcie->power_gpio, 1);

	if (pcie->reset_gpio)
		gpiod_set_value_cansleep(pcie->reset_gpio, 1);

	/* Disable physical functions except #0 */
	for (i = 1; i < PCIE_FUNC_NUM; i++) {
		regmap_update_bits(pcie->reg_syscon,
pcie->stg_pcie_base + STG_SYSCON_AR_OFFSET, 317 + STG_SYSCON_AXI4_SLVL_AR_MASK, 318 + STG_SYSCON_AXI4_SLVL_PHY_AR(i)); 319 + 320 + regmap_update_bits(pcie->reg_syscon, 321 + pcie->stg_pcie_base + STG_SYSCON_AW_OFFSET, 322 + STG_SYSCON_AXI4_SLVL_AW_MASK, 323 + STG_SYSCON_AXI4_SLVL_PHY_AW(i)); 324 + 325 + plda_pcie_disable_func(plda); 326 + } 327 + 328 + regmap_update_bits(pcie->reg_syscon, 329 + pcie->stg_pcie_base + STG_SYSCON_AR_OFFSET, 330 + STG_SYSCON_AXI4_SLVL_AR_MASK, 0); 331 + regmap_update_bits(pcie->reg_syscon, 332 + pcie->stg_pcie_base + STG_SYSCON_AW_OFFSET, 333 + STG_SYSCON_AXI4_SLVL_AW_MASK, 0); 334 + 335 + plda_pcie_enable_root_port(plda); 336 + plda_pcie_write_rc_bar(plda, 0); 337 + 338 + /* PCIe PCI Standard Configuration Identification Settings. */ 339 + plda_pcie_set_standard_class(plda); 340 + 341 + /* 342 + * LTR message reception is enabled by default in the "PCIe Message 343 + * Reception" register, but the forward ID & address are uninitialized. 344 + * If we do not disable LTR message forwarding here, or set a legal 345 + * forwarding address, the kernel will get stuck. 346 + * To work around this, disable LTR message forwarding here before using 347 + * this feature. 348 + */ 349 + plda_pcie_disable_ltr(plda); 350 + 351 + /* 352 + * Enable 64-bit addressing for the prefetchable memory window in JH7110. 353 + * The 64-bit prefetchable address translation in the ATU only works 354 + * after the register setting below is enabled. 355 + */ 356 + plda_pcie_set_pref_win_64bit(plda); 357 + 358 + /* 359 + * Ensure that PERST has been asserted for at least 100 ms; 360 + * the sleep value is T_PVPERL from PCIe CEM spec r2.0 (Table 2-4). 361 + */ 362 + msleep(100); 363 + if (pcie->reset_gpio) 364 + gpiod_set_value_cansleep(pcie->reset_gpio, 0); 365 + 366 + /* 367 + * With a Downstream Port (<=5GT/s), software must wait a minimum 368 + * of 100ms following exit from a conventional reset before 369 + * sending a configuration request to the device.
370 + */ 371 + msleep(PCIE_RESET_CONFIG_DEVICE_WAIT_MS); 372 + 373 + if (starfive_pcie_host_wait_for_link(pcie)) 374 + dev_info(dev, "port link down\n"); 375 + 376 + return 0; 377 + } 378 + 379 + static const struct plda_pcie_host_ops sf_host_ops = { 380 + .host_init = starfive_pcie_host_init, 381 + .host_deinit = starfive_pcie_host_deinit, 382 + }; 383 + 384 + static const struct plda_event stf_pcie_event = { 385 + .intx_event = EVENT_PM_MSI_INT_INTX, 386 + .msi_event = EVENT_PM_MSI_INT_MSI 387 + }; 388 + 389 + static int starfive_pcie_probe(struct platform_device *pdev) 390 + { 391 + struct starfive_jh7110_pcie *pcie; 392 + struct device *dev = &pdev->dev; 393 + struct plda_pcie_rp *plda; 394 + int ret; 395 + 396 + pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 397 + if (!pcie) 398 + return -ENOMEM; 399 + 400 + plda = &pcie->plda; 401 + plda->dev = dev; 402 + 403 + ret = starfive_pcie_parse_dt(pcie, dev); 404 + if (ret) 405 + return ret; 406 + 407 + plda->host_ops = &sf_host_ops; 408 + plda->num_events = PLDA_MAX_EVENT_NUM; 409 + /* mask doorbell event */ 410 + plda->events_bitmap = GENMASK(PLDA_INT_EVENT_NUM - 1, 0) 411 + & ~BIT(PLDA_AXI_DOORBELL) 412 + & ~BIT(PLDA_PCIE_DOORBELL); 413 + plda->events_bitmap <<= PLDA_NUM_DMA_EVENTS; 414 + ret = plda_pcie_host_init(&pcie->plda, &starfive_pcie_ops, 415 + &stf_pcie_event); 416 + if (ret) 417 + return ret; 418 + 419 + pm_runtime_enable(&pdev->dev); 420 + pm_runtime_get_sync(&pdev->dev); 421 + platform_set_drvdata(pdev, pcie); 422 + 423 + return 0; 424 + } 425 + 426 + static void starfive_pcie_remove(struct platform_device *pdev) 427 + { 428 + struct starfive_jh7110_pcie *pcie = platform_get_drvdata(pdev); 429 + 430 + pm_runtime_put(&pdev->dev); 431 + pm_runtime_disable(&pdev->dev); 432 + plda_pcie_host_deinit(&pcie->plda); 433 + platform_set_drvdata(pdev, NULL); 434 + } 435 + 436 + static int starfive_pcie_suspend_noirq(struct device *dev) 437 + { 438 + struct starfive_jh7110_pcie *pcie = dev_get_drvdata(dev); 
439 + 440 + clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks); 441 + starfive_pcie_disable_phy(pcie); 442 + 443 + return 0; 444 + } 445 + 446 + static int starfive_pcie_resume_noirq(struct device *dev) 447 + { 448 + struct starfive_jh7110_pcie *pcie = dev_get_drvdata(dev); 449 + int ret; 450 + 451 + ret = starfive_pcie_enable_phy(dev, pcie); 452 + if (ret) 453 + return ret; 454 + 455 + ret = clk_bulk_prepare_enable(pcie->num_clks, pcie->clks); 456 + if (ret) { 457 + dev_err(dev, "failed to enable clocks\n"); 458 + starfive_pcie_disable_phy(pcie); 459 + return ret; 460 + } 461 + 462 + return 0; 463 + } 464 + 465 + static const struct dev_pm_ops starfive_pcie_pm_ops = { 466 + NOIRQ_SYSTEM_SLEEP_PM_OPS(starfive_pcie_suspend_noirq, 467 + starfive_pcie_resume_noirq) 468 + }; 469 + 470 + static const struct of_device_id starfive_pcie_of_match[] = { 471 + { .compatible = "starfive,jh7110-pcie", }, 472 + { /* sentinel */ } 473 + }; 474 + MODULE_DEVICE_TABLE(of, starfive_pcie_of_match); 475 + 476 + static struct platform_driver starfive_pcie_driver = { 477 + .driver = { 478 + .name = "pcie-starfive", 479 + .of_match_table = of_match_ptr(starfive_pcie_of_match), 480 + .pm = pm_sleep_ptr(&starfive_pcie_pm_ops), 481 + }, 482 + .probe = starfive_pcie_probe, 483 + .remove_new = starfive_pcie_remove, 484 + }; 485 + module_platform_driver(starfive_pcie_driver); 486 + 487 + MODULE_DESCRIPTION("StarFive JH7110 PCIe host driver"); 488 + MODULE_LICENSE("GPL v2");
+5 -4
drivers/pci/controller/vmd.c
··· 925 925 dev_set_msi_domain(&vmd->bus->dev, 926 926 dev_get_msi_domain(&vmd->dev->dev)); 927 927 928 + WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj, 929 + "domain"), "Can't create symlink to domain\n"); 930 + 928 931 vmd_acpi_begin(); 929 932 930 933 pci_scan_child_bus(vmd->bus); ··· 967 964 pci_bus_add_devices(vmd->bus); 968 965 969 966 vmd_acpi_end(); 970 - 971 - WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj, 972 - "domain"), "Can't create symlink to domain\n"); 973 967 return 0; 974 968 } 975 969 ··· 1042 1042 { 1043 1043 struct vmd_dev *vmd = pci_get_drvdata(dev); 1044 1044 1045 - sysfs_remove_link(&vmd->dev->dev.kobj, "domain"); 1046 1045 pci_stop_root_bus(vmd->bus); 1046 + sysfs_remove_link(&vmd->dev->dev.kobj, "domain"); 1047 1047 pci_remove_root_bus(vmd->bus); 1048 1048 vmd_cleanup_srcu(vmd); 1049 1049 vmd_detach_resources(vmd); ··· 1128 1128 module_pci_driver(vmd_drv); 1129 1129 1130 1130 MODULE_AUTHOR("Intel Corporation"); 1131 + MODULE_DESCRIPTION("Volume Management Device driver"); 1131 1132 MODULE_LICENSE("GPL v2"); 1132 1133 MODULE_VERSION("0.6");
+773 -148
drivers/pci/devres.c
··· 4 4 #include "pci.h" 5 5 6 6 /* 7 - * PCI iomap devres 7 + * On the state of PCI's devres implementation: 8 + * 9 + * The older devres API for PCI has two significant problems: 10 + * 11 + * 1. It is very strongly tied to the statically allocated mapping table in 12 + * struct pcim_iomap_devres below. This is mostly solved in the sense of the 13 + * pcim_ functions in this file providing things like ranged mapping by 14 + * bypassing this table, whereas the functions that were present in the old 15 + * API still enter the mapping addresses into the table for users of the old 16 + * API. 17 + * 18 + * 2. The region-request-functions in pci.c do become managed IF the device has 19 + * been enabled with pcim_enable_device() instead of pci_enable_device(). 20 + * This resulted in the API becoming inconsistent: Some functions have an 21 + * obviously managed counter-part (e.g., pci_iomap() <-> pcim_iomap()), 22 + * whereas some don't and are never managed, while others don't and are 23 + * _sometimes_ managed (e.g. pci_request_region()). 24 + * 25 + * Consequently, in the new API, region requests performed by the pcim_ 26 + * functions are automatically cleaned up through the devres callback 27 + * pcim_addr_resource_release(). 28 + * 29 + * Users of pcim_enable_device() + pci_*region*() are redirected in 30 + * pci.c to the managed functions here in this file. This isn't exactly 31 + * perfect, but the only alternative way would be to port ALL drivers 32 + * using said combination to pcim_ functions. 33 + * 34 + * TODO: 35 + * Remove the legacy table entirely once all calls to pcim_iomap_table() in 36 + * the kernel have been removed. 8 37 */ 9 - #define PCIM_IOMAP_MAX PCI_STD_NUM_BARS 10 38 39 + /* 40 + * Legacy struct storing addresses to whole mapped BARs. 41 + */ 11 42 struct pcim_iomap_devres { 12 - void __iomem *table[PCIM_IOMAP_MAX]; 43 + void __iomem *table[PCI_STD_NUM_BARS]; 13 44 }; 14 45 46 + /* Used to restore the old INTx state on driver detach. 
*/ 47 + struct pcim_intx_devres { 48 + int orig_intx; 49 + }; 50 + 51 + enum pcim_addr_devres_type { 52 + /* Default initializer. */ 53 + PCIM_ADDR_DEVRES_TYPE_INVALID, 54 + 55 + /* A requested region spanning an entire BAR. */ 56 + PCIM_ADDR_DEVRES_TYPE_REGION, 57 + 58 + /* 59 + * A requested region spanning an entire BAR, and a mapping for 60 + * the entire BAR. 61 + */ 62 + PCIM_ADDR_DEVRES_TYPE_REGION_MAPPING, 63 + 64 + /* 65 + * A mapping within a BAR, either spanning the whole BAR or just a 66 + * range. Without a requested region. 67 + */ 68 + PCIM_ADDR_DEVRES_TYPE_MAPPING, 69 + }; 70 + 71 + /* 72 + * This struct envelops IO or MEM addresses, i.e., mappings and region 73 + * requests, because those are very frequently requested and released 74 + * together. 75 + */ 76 + struct pcim_addr_devres { 77 + enum pcim_addr_devres_type type; 78 + void __iomem *baseaddr; 79 + unsigned long offset; 80 + unsigned long len; 81 + int bar; 82 + }; 83 + 84 + static inline void pcim_addr_devres_clear(struct pcim_addr_devres *res) 85 + { 86 + memset(res, 0, sizeof(*res)); 87 + res->bar = -1; 88 + } 89 + 90 + /* 91 + * The following functions, __pcim_*_region*, exist as counterparts to the 92 + * versions from pci.c - which, unfortunately, can be in "hybrid mode", i.e., 93 + * sometimes managed, sometimes not. 94 + * 95 + * To separate the APIs cleanly, we define our own, simplified versions here. 96 + */ 97 + 98 + /** 99 + * __pcim_request_region_range - Request a ranged region 100 + * @pdev: PCI device the region belongs to 101 + * @bar: BAR the range is within 102 + * @offset: offset from the BAR's start address 103 + * @maxlen: length in bytes, beginning at @offset 104 + * @name: name associated with the request 105 + * @req_flags: flags for the request, e.g., for kernel-exclusive requests 106 + * 107 + * Returns: 0 on success, a negative error code on failure. 108 + * 109 + * Request a range within a device's PCI BAR. Sanity check the input. 
110 + */ 111 + static int __pcim_request_region_range(struct pci_dev *pdev, int bar, 112 + unsigned long offset, 113 + unsigned long maxlen, 114 + const char *name, int req_flags) 115 + { 116 + resource_size_t start = pci_resource_start(pdev, bar); 117 + resource_size_t len = pci_resource_len(pdev, bar); 118 + unsigned long dev_flags = pci_resource_flags(pdev, bar); 119 + 120 + if (start == 0 || len == 0) /* Unused BAR. */ 121 + return 0; 122 + if (len <= offset) 123 + return -EINVAL; 124 + 125 + start += offset; 126 + len -= offset; 127 + 128 + if (len > maxlen && maxlen != 0) 129 + len = maxlen; 130 + 131 + if (dev_flags & IORESOURCE_IO) { 132 + if (!request_region(start, len, name)) 133 + return -EBUSY; 134 + } else if (dev_flags & IORESOURCE_MEM) { 135 + if (!__request_mem_region(start, len, name, req_flags)) 136 + return -EBUSY; 137 + } else { 138 + /* That's not a device we can request anything on. */ 139 + return -ENODEV; 140 + } 141 + 142 + return 0; 143 + } 144 + 145 + static void __pcim_release_region_range(struct pci_dev *pdev, int bar, 146 + unsigned long offset, 147 + unsigned long maxlen) 148 + { 149 + resource_size_t start = pci_resource_start(pdev, bar); 150 + resource_size_t len = pci_resource_len(pdev, bar); 151 + unsigned long flags = pci_resource_flags(pdev, bar); 152 + 153 + if (len <= offset || start == 0) 154 + return; 155 + 156 + if (len == 0 || maxlen == 0) /* This is an unused BAR. Do nothing.
*/ 157 + return; 158 + 159 + start += offset; 160 + len -= offset; 161 + 162 + if (len > maxlen) 163 + len = maxlen; 164 + 165 + if (flags & IORESOURCE_IO) 166 + release_region(start, len); 167 + else if (flags & IORESOURCE_MEM) 168 + release_mem_region(start, len); 169 + } 170 + 171 + static int __pcim_request_region(struct pci_dev *pdev, int bar, 172 + const char *name, int flags) 173 + { 174 + unsigned long offset = 0; 175 + unsigned long len = pci_resource_len(pdev, bar); 176 + 177 + return __pcim_request_region_range(pdev, bar, offset, len, name, flags); 178 + } 179 + 180 + static void __pcim_release_region(struct pci_dev *pdev, int bar) 181 + { 182 + unsigned long offset = 0; 183 + unsigned long len = pci_resource_len(pdev, bar); 184 + 185 + __pcim_release_region_range(pdev, bar, offset, len); 186 + } 187 + 188 + static void pcim_addr_resource_release(struct device *dev, void *resource_raw) 189 + { 190 + struct pci_dev *pdev = to_pci_dev(dev); 191 + struct pcim_addr_devres *res = resource_raw; 192 + 193 + switch (res->type) { 194 + case PCIM_ADDR_DEVRES_TYPE_REGION: 195 + __pcim_release_region(pdev, res->bar); 196 + break; 197 + case PCIM_ADDR_DEVRES_TYPE_REGION_MAPPING: 198 + pci_iounmap(pdev, res->baseaddr); 199 + __pcim_release_region(pdev, res->bar); 200 + break; 201 + case PCIM_ADDR_DEVRES_TYPE_MAPPING: 202 + pci_iounmap(pdev, res->baseaddr); 203 + break; 204 + default: 205 + break; 206 + } 207 + } 208 + 209 + static struct pcim_addr_devres *pcim_addr_devres_alloc(struct pci_dev *pdev) 210 + { 211 + struct pcim_addr_devres *res; 212 + 213 + res = devres_alloc_node(pcim_addr_resource_release, sizeof(*res), 214 + GFP_KERNEL, dev_to_node(&pdev->dev)); 215 + if (res) 216 + pcim_addr_devres_clear(res); 217 + return res; 218 + } 219 + 220 + /* Just for consistency and readability. 
*/ 221 + static inline void pcim_addr_devres_free(struct pcim_addr_devres *res) 222 + { 223 + devres_free(res); 224 + } 225 + 226 + /* 227 + * Used by devres to identify a pcim_addr_devres. 228 + */ 229 + static int pcim_addr_resources_match(struct device *dev, 230 + void *a_raw, void *b_raw) 231 + { 232 + struct pcim_addr_devres *a, *b; 233 + 234 + a = a_raw; 235 + b = b_raw; 236 + 237 + if (a->type != b->type) 238 + return 0; 239 + 240 + switch (a->type) { 241 + case PCIM_ADDR_DEVRES_TYPE_REGION: 242 + case PCIM_ADDR_DEVRES_TYPE_REGION_MAPPING: 243 + return a->bar == b->bar; 244 + case PCIM_ADDR_DEVRES_TYPE_MAPPING: 245 + return a->baseaddr == b->baseaddr; 246 + default: 247 + return 0; 248 + } 249 + } 15 250 16 251 static void devm_pci_unmap_iospace(struct device *dev, void *ptr) 17 252 { ··· 327 92 * 328 93 * All operations are managed and will be undone on driver detach. 329 94 * 330 - * Returns a pointer to the remapped memory or an ERR_PTR() encoded error code 331 - * on failure. Usage example:: 95 + * Returns a pointer to the remapped memory or an IOMEM_ERR_PTR() encoded error 96 + * code on failure. Usage example:: 332 97 * 333 98 * res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 334 99 * base = devm_pci_remap_cfg_resource(&pdev->dev, res); ··· 375 140 } 376 141 EXPORT_SYMBOL(devm_pci_remap_cfg_resource); 377 142 143 + static void __pcim_clear_mwi(void *pdev_raw) 144 + { 145 + struct pci_dev *pdev = pdev_raw; 146 + 147 + pci_clear_mwi(pdev); 148 + } 149 + 378 150 /** 379 151 * pcim_set_mwi - a device-managed pci_set_mwi() 380 - * @dev: the PCI device for which MWI is enabled 152 + * @pdev: the PCI device for which MWI is enabled 381 153 * 382 154 * Managed pci_set_mwi(). 383 155 * 384 156 * RETURNS: An appropriate -ERRNO error value on error, or zero for success. 
385 157 */ 386 - int pcim_set_mwi(struct pci_dev *dev) 158 + int pcim_set_mwi(struct pci_dev *pdev) 387 159 { 388 - struct pci_devres *dr; 160 + int ret; 389 161 390 - dr = find_pci_dr(dev); 391 - if (!dr) 392 - return -ENOMEM; 162 + ret = devm_add_action(&pdev->dev, __pcim_clear_mwi, pdev); 163 + if (ret != 0) 164 + return ret; 393 165 394 - dr->mwi = 1; 395 - return pci_set_mwi(dev); 166 + ret = pci_set_mwi(pdev); 167 + if (ret != 0) 168 + devm_remove_action(&pdev->dev, __pcim_clear_mwi, pdev); 169 + 170 + return ret; 396 171 } 397 172 EXPORT_SYMBOL(pcim_set_mwi); 398 173 399 - 400 - static void pcim_release(struct device *gendev, void *res) 174 + static inline bool mask_contains_bar(int mask, int bar) 401 175 { 402 - struct pci_dev *dev = to_pci_dev(gendev); 403 - struct pci_devres *this = res; 404 - int i; 405 - 406 - for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) 407 - if (this->region_mask & (1 << i)) 408 - pci_release_region(dev, i); 409 - 410 - if (this->mwi) 411 - pci_clear_mwi(dev); 412 - 413 - if (this->restore_intx) 414 - pci_intx(dev, this->orig_intx); 415 - 416 - if (this->enabled && !this->pinned) 417 - pci_disable_device(dev); 176 + return mask & BIT(bar); 418 177 } 419 178 420 179 /* 421 - * TODO: After the last four callers in pci.c are ported, find_pci_dr() 422 - * needs to be made static again. 180 + * This is a copy of pci_intx() used to bypass the problem of recursive 181 + * function calls due to the hybrid nature of pci_intx(). 
423 182 */ 424 - struct pci_devres *find_pci_dr(struct pci_dev *pdev) 183 + static void __pcim_intx(struct pci_dev *pdev, int enable) 425 184 { 426 - if (pci_is_managed(pdev)) 427 - return devres_find(&pdev->dev, pcim_release, NULL, NULL); 428 - return NULL; 185 + u16 pci_command, new; 186 + 187 + pci_read_config_word(pdev, PCI_COMMAND, &pci_command); 188 + 189 + if (enable) 190 + new = pci_command & ~PCI_COMMAND_INTX_DISABLE; 191 + else 192 + new = pci_command | PCI_COMMAND_INTX_DISABLE; 193 + 194 + if (new != pci_command) 195 + pci_write_config_word(pdev, PCI_COMMAND, new); 429 196 } 430 197 431 - static struct pci_devres *get_pci_dr(struct pci_dev *pdev) 198 + static void pcim_intx_restore(struct device *dev, void *data) 432 199 { 433 - struct pci_devres *dr, *new_dr; 200 + struct pci_dev *pdev = to_pci_dev(dev); 201 + struct pcim_intx_devres *res = data; 434 202 435 - dr = devres_find(&pdev->dev, pcim_release, NULL, NULL); 436 - if (dr) 437 - return dr; 203 + __pcim_intx(pdev, res->orig_intx); 204 + } 438 205 439 - new_dr = devres_alloc(pcim_release, sizeof(*new_dr), GFP_KERNEL); 440 - if (!new_dr) 441 - return NULL; 442 - return devres_get(&pdev->dev, new_dr, NULL, NULL); 206 + static struct pcim_intx_devres *get_or_create_intx_devres(struct device *dev) 207 + { 208 + struct pcim_intx_devres *res; 209 + 210 + res = devres_find(dev, pcim_intx_restore, NULL, NULL); 211 + if (res) 212 + return res; 213 + 214 + res = devres_alloc(pcim_intx_restore, sizeof(*res), GFP_KERNEL); 215 + if (res) 216 + devres_add(dev, res); 217 + 218 + return res; 219 + } 220 + 221 + /** 222 + * pcim_intx - managed pci_intx() 223 + * @pdev: the PCI device to operate on 224 + * @enable: boolean: whether to enable or disable PCI INTx 225 + * 226 + * Returns: 0 on success, -ENOMEM on error. 227 + * 228 + * Enable/disable PCI INTx for device @pdev. 229 + * Restore the original state on driver detach. 
230 + */ 231 + int pcim_intx(struct pci_dev *pdev, int enable) 232 + { 233 + struct pcim_intx_devres *res; 234 + 235 + res = get_or_create_intx_devres(&pdev->dev); 236 + if (!res) 237 + return -ENOMEM; 238 + 239 + res->orig_intx = !enable; 240 + __pcim_intx(pdev, enable); 241 + 242 + return 0; 243 + } 244 + 245 + static void pcim_disable_device(void *pdev_raw) 246 + { 247 + struct pci_dev *pdev = pdev_raw; 248 + 249 + if (!pdev->pinned) 250 + pci_disable_device(pdev); 443 251 } 444 252 445 253 /** 446 254 * pcim_enable_device - Managed pci_enable_device() 447 255 * @pdev: PCI device to be initialized 448 256 * 449 - * Managed pci_enable_device(). 257 + * Returns: 0 on success, negative error code on failure. 258 + * 259 + * Managed pci_enable_device(). Device will automatically be disabled on 260 + * driver detach. 450 261 */ 451 262 int pcim_enable_device(struct pci_dev *pdev) 452 263 { 453 - struct pci_devres *dr; 454 - int rc; 264 + int ret; 455 265 456 - dr = get_pci_dr(pdev); 457 - if (unlikely(!dr)) 458 - return -ENOMEM; 459 - if (dr->enabled) 460 - return 0; 266 + ret = devm_add_action(&pdev->dev, pcim_disable_device, pdev); 267 + if (ret != 0) 268 + return ret; 461 269 462 - rc = pci_enable_device(pdev); 463 - if (!rc) { 464 - pdev->is_managed = 1; 465 - dr->enabled = 1; 270 + /* 271 + * We prefer removing the action in case of an error over 272 + * devm_add_action_or_reset() because the latter could theoretically be 273 + * disturbed by users having pinned the device too soon. 274 + */ 275 + ret = pci_enable_device(pdev); 276 + if (ret != 0) { 277 + devm_remove_action(&pdev->dev, pcim_disable_device, pdev); 278 + return ret; 466 279 } 467 - return rc; 280 + 281 + pdev->is_managed = true; 282 + 283 + return ret; 468 284 } 469 285 EXPORT_SYMBOL(pcim_enable_device); 470 286 ··· 523 237 * pcim_pin_device - Pin managed PCI device 524 238 * @pdev: PCI device to pin 525 239 * 526 - * Pin managed PCI device @pdev. 
Pinned device won't be disabled on 527 - * driver detach. @pdev must have been enabled with 528 - * pcim_enable_device(). 240 + * Pin managed PCI device @pdev. Pinned device won't be disabled on driver 241 + * detach. @pdev must have been enabled with pcim_enable_device(). 529 242 */ 530 243 void pcim_pin_device(struct pci_dev *pdev) 531 244 { 532 - struct pci_devres *dr; 533 - 534 - dr = find_pci_dr(pdev); 535 - WARN_ON(!dr || !dr->enabled); 536 - if (dr) 537 - dr->pinned = 1; 245 + pdev->pinned = true; 538 246 } 539 247 EXPORT_SYMBOL(pcim_pin_device); 540 248 541 249 static void pcim_iomap_release(struct device *gendev, void *res) 542 250 { 543 - struct pci_dev *dev = to_pci_dev(gendev); 544 - struct pcim_iomap_devres *this = res; 545 - int i; 546 - 547 - for (i = 0; i < PCIM_IOMAP_MAX; i++) 548 - if (this->table[i]) 549 - pci_iounmap(dev, this->table[i]); 251 + /* 252 + * Do nothing. This is legacy code. 253 + * 254 + * Cleanup of the mappings is now done directly through the callbacks 255 + * registered when creating them. 256 + */ 550 257 } 551 258 552 259 /** 553 - * pcim_iomap_table - access iomap allocation table 260 + * pcim_iomap_table - access iomap allocation table (DEPRECATED) 554 261 * @pdev: PCI device to access iomap table for 262 + * 263 + * Returns: 264 + * Const pointer to array of __iomem pointers on success, NULL on failure. 555 265 * 556 266 * Access iomap allocation table for @dev. If iomap table doesn't 557 267 * exist and @pdev is managed, it will be allocated. All iomaps ··· 557 275 * This function might sleep when the table is first allocated but can 558 276 * be safely called without context and guaranteed to succeed once 559 277 * allocated. 278 + * 279 + * This function is DEPRECATED. Do not use it in new code. Instead, obtain a 280 + * mapping's address directly from one of the pcim_* mapping functions. 
For 281 + * example: 282 + * void __iomem \*mappy = pcim_iomap(pdev, bar, length); 560 283 */ 561 284 void __iomem * const *pcim_iomap_table(struct pci_dev *pdev) 562 285 { ··· 580 293 } 581 294 EXPORT_SYMBOL(pcim_iomap_table); 582 295 296 + /* 297 + * Fill the legacy mapping-table, so that drivers using the old API can 298 + * still get a BAR's mapping address through pcim_iomap_table(). 299 + */ 300 + static int pcim_add_mapping_to_legacy_table(struct pci_dev *pdev, 301 + void __iomem *mapping, int bar) 302 + { 303 + void __iomem **legacy_iomap_table; 304 + 305 + if (bar >= PCI_STD_NUM_BARS) 306 + return -EINVAL; 307 + 308 + legacy_iomap_table = (void __iomem **)pcim_iomap_table(pdev); 309 + if (!legacy_iomap_table) 310 + return -ENOMEM; 311 + 312 + /* The legacy mechanism doesn't allow for duplicate mappings. */ 313 + WARN_ON(legacy_iomap_table[bar]); 314 + 315 + legacy_iomap_table[bar] = mapping; 316 + 317 + return 0; 318 + } 319 + 320 + /* 321 + * Remove a mapping. The table only contains whole-BAR mappings, so this will 322 + * never interfere with ranged mappings. 323 + */ 324 + static void pcim_remove_mapping_from_legacy_table(struct pci_dev *pdev, 325 + void __iomem *addr) 326 + { 327 + int bar; 328 + void __iomem **legacy_iomap_table; 329 + 330 + legacy_iomap_table = (void __iomem **)pcim_iomap_table(pdev); 331 + if (!legacy_iomap_table) 332 + return; 333 + 334 + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { 335 + if (legacy_iomap_table[bar] == addr) { 336 + legacy_iomap_table[bar] = NULL; 337 + return; 338 + } 339 + } 340 + } 341 + 342 + /* 343 + * The same as pcim_remove_mapping_from_legacy_table(), but identifies the 344 + * mapping by its BAR index. 
345 + */ 346 + static void pcim_remove_bar_from_legacy_table(struct pci_dev *pdev, int bar) 347 + { 348 + void __iomem **legacy_iomap_table; 349 + 350 + if (bar >= PCI_STD_NUM_BARS) 351 + return; 352 + 353 + legacy_iomap_table = (void __iomem **)pcim_iomap_table(pdev); 354 + if (!legacy_iomap_table) 355 + return; 356 + 357 + legacy_iomap_table[bar] = NULL; 358 + } 359 + 583 360 /** 584 361 * pcim_iomap - Managed pcim_iomap() 585 362 * @pdev: PCI device to iomap for 586 363 * @bar: BAR to iomap 587 364 * @maxlen: Maximum length of iomap 588 365 * 589 - * Managed pci_iomap(). Map is automatically unmapped on driver 590 - * detach. 366 + * Returns: __iomem pointer on success, NULL on failure. 367 + * 368 + * Managed pci_iomap(). Map is automatically unmapped on driver detach. If 369 + * desired, unmap manually only with pcim_iounmap(). 370 + * 371 + * This SHOULD only be used once per BAR. 372 + * 373 + * NOTE: 374 + * Contrary to the other pcim_* functions, this function does not return an 375 + * IOMEM_ERR_PTR() on failure, but a simple NULL. This is done for backwards 376 + * compatibility. 
591 377 */ 592 378 void __iomem *pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen) 593 379 { 594 - void __iomem **tbl; 380 + void __iomem *mapping; 381 + struct pcim_addr_devres *res; 595 382 596 - BUG_ON(bar >= PCIM_IOMAP_MAX); 597 - 598 - tbl = (void __iomem **)pcim_iomap_table(pdev); 599 - if (!tbl || tbl[bar]) /* duplicate mappings not allowed */ 383 + res = pcim_addr_devres_alloc(pdev); 384 + if (!res) 600 385 return NULL; 386 + res->type = PCIM_ADDR_DEVRES_TYPE_MAPPING; 601 387 602 - tbl[bar] = pci_iomap(pdev, bar, maxlen); 603 - return tbl[bar]; 388 + mapping = pci_iomap(pdev, bar, maxlen); 389 + if (!mapping) 390 + goto err_iomap; 391 + res->baseaddr = mapping; 392 + 393 + if (pcim_add_mapping_to_legacy_table(pdev, mapping, bar) != 0) 394 + goto err_table; 395 + 396 + devres_add(&pdev->dev, res); 397 + return mapping; 398 + 399 + err_table: 400 + pci_iounmap(pdev, mapping); 401 + err_iomap: 402 + pcim_addr_devres_free(res); 403 + return NULL; 604 404 } 605 405 EXPORT_SYMBOL(pcim_iomap); 606 406 ··· 696 322 * @pdev: PCI device to iounmap for 697 323 * @addr: Address to unmap 698 324 * 699 - * Managed pci_iounmap(). @addr must have been mapped using pcim_iomap(). 325 + * Managed pci_iounmap(). @addr must have been mapped using a pcim_* mapping 326 + * function. 700 327 */ 701 328 void pcim_iounmap(struct pci_dev *pdev, void __iomem *addr) 702 329 { 703 - void __iomem **tbl; 704 - int i; 330 + struct pcim_addr_devres res_searched; 705 331 706 - pci_iounmap(pdev, addr); 332 + pcim_addr_devres_clear(&res_searched); 333 + res_searched.type = PCIM_ADDR_DEVRES_TYPE_MAPPING; 334 + res_searched.baseaddr = addr; 707 335 708 - tbl = (void __iomem **)pcim_iomap_table(pdev); 709 - BUG_ON(!tbl); 336 + if (devres_release(&pdev->dev, pcim_addr_resource_release, 337 + pcim_addr_resources_match, &res_searched) != 0) { 338 + /* Doesn't exist. User passed nonsense. 
*/ 339 + return; 340 + } 710 341 711 - for (i = 0; i < PCIM_IOMAP_MAX; i++) 712 - if (tbl[i] == addr) { 713 - tbl[i] = NULL; 714 - return; 715 - } 716 - WARN_ON(1); 342 + pcim_remove_mapping_from_legacy_table(pdev, addr); 717 343 } 718 344 EXPORT_SYMBOL(pcim_iounmap); 345 + 346 + /** 347 + * pcim_iomap_region - Request and iomap a PCI BAR 348 + * @pdev: PCI device to map IO resources for 349 + * @bar: Index of a BAR to map 350 + * @name: Name associated with the request 351 + * 352 + * Returns: __iomem pointer on success, an IOMEM_ERR_PTR on failure. 353 + * 354 + * Mapping and region will get automatically released on driver detach. If 355 + * desired, release manually only with pcim_iounmap_region(). 356 + */ 357 + static void __iomem *pcim_iomap_region(struct pci_dev *pdev, int bar, 358 + const char *name) 359 + { 360 + int ret; 361 + struct pcim_addr_devres *res; 362 + 363 + res = pcim_addr_devres_alloc(pdev); 364 + if (!res) 365 + return IOMEM_ERR_PTR(-ENOMEM); 366 + 367 + res->type = PCIM_ADDR_DEVRES_TYPE_REGION_MAPPING; 368 + res->bar = bar; 369 + 370 + ret = __pcim_request_region(pdev, bar, name, 0); 371 + if (ret != 0) 372 + goto err_region; 373 + 374 + res->baseaddr = pci_iomap(pdev, bar, 0); 375 + if (!res->baseaddr) { 376 + ret = -EINVAL; 377 + goto err_iomap; 378 + } 379 + 380 + devres_add(&pdev->dev, res); 381 + return res->baseaddr; 382 + 383 + err_iomap: 384 + __pcim_release_region(pdev, bar); 385 + err_region: 386 + pcim_addr_devres_free(res); 387 + 388 + return IOMEM_ERR_PTR(ret); 389 + } 390 + 391 + /** 392 + * pcim_iounmap_region - Unmap and release a PCI BAR 393 + * @pdev: PCI device to operate on 394 + * @bar: Index of BAR to unmap and release 395 + * 396 + * Unmap a BAR and release its region manually. Only pass BARs that were 397 + * previously mapped by pcim_iomap_region(). 
398 + */ 399 + static void pcim_iounmap_region(struct pci_dev *pdev, int bar) 400 + { 401 + struct pcim_addr_devres res_searched; 402 + 403 + pcim_addr_devres_clear(&res_searched); 404 + res_searched.type = PCIM_ADDR_DEVRES_TYPE_REGION_MAPPING; 405 + res_searched.bar = bar; 406 + 407 + devres_release(&pdev->dev, pcim_addr_resource_release, 408 + pcim_addr_resources_match, &res_searched); 409 + } 719 410 720 411 /** 721 412 * pcim_iomap_regions - Request and iomap PCI BARs 722 413 * @pdev: PCI device to map IO resources for 723 414 * @mask: Mask of BARs to request and iomap 724 - * @name: Name used when requesting regions 415 + * @name: Name associated with the requests 416 + * 417 + * Returns: 0 on success, negative error code on failure. 725 418 * 726 419 * Request and iomap regions specified by @mask. 727 420 */ 728 421 int pcim_iomap_regions(struct pci_dev *pdev, int mask, const char *name) 729 422 { 730 - void __iomem * const *iomap; 731 - int i, rc; 423 + int ret; 424 + int bar; 425 + void __iomem *mapping; 732 426 733 - iomap = pcim_iomap_table(pdev); 734 - if (!iomap) 735 - return -ENOMEM; 736 - 737 - for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { 738 - unsigned long len; 739 - 740 - if (!(mask & (1 << i))) 427 + for (bar = 0; bar < DEVICE_COUNT_RESOURCE; bar++) { 428 + if (!mask_contains_bar(mask, bar)) 741 429 continue; 742 430 743 - rc = -EINVAL; 744 - len = pci_resource_len(pdev, i); 745 - if (!len) 746 - goto err_inval; 747 - 748 - rc = pci_request_region(pdev, i, name); 749 - if (rc) 750 - goto err_inval; 751 - 752 - rc = -ENOMEM; 753 - if (!pcim_iomap(pdev, i, 0)) 754 - goto err_region; 431 + mapping = pcim_iomap_region(pdev, bar, name); 432 + if (IS_ERR(mapping)) { 433 + ret = PTR_ERR(mapping); 434 + goto err; 435 + } 436 + ret = pcim_add_mapping_to_legacy_table(pdev, mapping, bar); 437 + if (ret != 0) 438 + goto err; 755 439 } 756 440 757 441 return 0; 758 442 759 - err_region: 760 - pci_release_region(pdev, i); 761 - err_inval: 762 - while (--i >= 
0) { 763 - if (!(mask & (1 << i))) 764 - continue; 765 - pcim_iounmap(pdev, iomap[i]); 766 - pci_release_region(pdev, i); 443 + err: 444 + while (--bar >= 0) { 445 + pcim_iounmap_region(pdev, bar); 446 + pcim_remove_bar_from_legacy_table(pdev, bar); 767 447 } 768 448 769 - return rc; 449 + return ret; 770 450 } 771 451 EXPORT_SYMBOL(pcim_iomap_regions); 772 452 453 + static int _pcim_request_region(struct pci_dev *pdev, int bar, const char *name, 454 + int request_flags) 455 + { 456 + int ret; 457 + struct pcim_addr_devres *res; 458 + 459 + res = pcim_addr_devres_alloc(pdev); 460 + if (!res) 461 + return -ENOMEM; 462 + res->type = PCIM_ADDR_DEVRES_TYPE_REGION; 463 + res->bar = bar; 464 + 465 + ret = __pcim_request_region(pdev, bar, name, request_flags); 466 + if (ret != 0) { 467 + pcim_addr_devres_free(res); 468 + return ret; 469 + } 470 + 471 + devres_add(&pdev->dev, res); 472 + return 0; 473 + } 474 + 475 + /** 476 + * pcim_request_region - Request a PCI BAR 477 + * @pdev: PCI device to request region for 478 + * @bar: Index of BAR to request 479 + * @name: Name associated with the request 480 + * 481 + * Returns: 0 on success, a negative error code on failure. 482 + * 483 + * Request region specified by @bar. 484 + * 485 + * The region will automatically be released on driver detach. If desired, 486 + * release manually only with pcim_release_region(). 487 + */ 488 + int pcim_request_region(struct pci_dev *pdev, int bar, const char *name) 489 + { 490 + return _pcim_request_region(pdev, bar, name, 0); 491 + } 492 + 493 + /** 494 + * pcim_request_region_exclusive - Request a PCI BAR exclusively 495 + * @pdev: PCI device to request region for 496 + * @bar: Index of BAR to request 497 + * @name: Name associated with the request 498 + * 499 + * Returns: 0 on success, a negative error code on failure. 500 + * 501 + * Request region specified by @bar exclusively. 502 + * 503 + * The region will automatically be released on driver detach. 
If desired, 504 + * release manually only with pcim_release_region(). 505 + */ 506 + int pcim_request_region_exclusive(struct pci_dev *pdev, int bar, const char *name) 507 + { 508 + return _pcim_request_region(pdev, bar, name, IORESOURCE_EXCLUSIVE); 509 + } 510 + 511 + /** 512 + * pcim_release_region - Release a PCI BAR 513 + * @pdev: PCI device to operate on 514 + * @bar: Index of BAR to release 515 + * 516 + * Release a region manually that was previously requested by 517 + * pcim_request_region(). 518 + */ 519 + void pcim_release_region(struct pci_dev *pdev, int bar) 520 + { 521 + struct pcim_addr_devres res_searched; 522 + 523 + pcim_addr_devres_clear(&res_searched); 524 + res_searched.type = PCIM_ADDR_DEVRES_TYPE_REGION; 525 + res_searched.bar = bar; 526 + 527 + devres_release(&pdev->dev, pcim_addr_resource_release, 528 + pcim_addr_resources_match, &res_searched); 529 + } 530 + 531 + 532 + /** 533 + * pcim_release_all_regions - Release all regions of a PCI-device 534 + * @pdev: the PCI device 535 + * 536 + * Release all regions previously requested through pcim_request_region() 537 + * or pcim_request_all_regions(). 538 + * 539 + * Can be called from any context, i.e., not necessarily as a counterpart to 540 + * pcim_request_all_regions(). 541 + */ 542 + static void pcim_release_all_regions(struct pci_dev *pdev) 543 + { 544 + int bar; 545 + 546 + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 547 + pcim_release_region(pdev, bar); 548 + } 549 + 550 + /** 551 + * pcim_request_all_regions - Request all regions 552 + * @pdev: PCI device to map IO resources for 553 + * @name: name associated with the request 554 + * 555 + * Returns: 0 on success, negative error code on failure. 556 + * 557 + * Requested regions will automatically be released at driver detach. If 558 + * desired, release individual regions with pcim_release_region() or all of 559 + * them at once with pcim_release_all_regions(). 
560 + */ 561 + static int pcim_request_all_regions(struct pci_dev *pdev, const char *name) 562 + { 563 + int ret; 564 + int bar; 565 + 566 + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { 567 + ret = pcim_request_region(pdev, bar, name); 568 + if (ret != 0) 569 + goto err; 570 + } 571 + 572 + return 0; 573 + 574 + err: 575 + pcim_release_all_regions(pdev); 576 + 577 + return ret; 578 + } 579 + 773 580 /** 774 581 * pcim_iomap_regions_request_all - Request all BARs and iomap specified ones 582 + * (DEPRECATED) 775 583 * @pdev: PCI device to map IO resources for 776 584 * @mask: Mask of BARs to iomap 777 - * @name: Name used when requesting regions 585 + * @name: Name associated with the requests 586 + * 587 + * Returns: 0 on success, negative error code on failure. 778 588 * 779 589 * Request all PCI BARs and iomap regions specified by @mask. 590 + * 591 + * To release these resources manually, call pcim_release_region() for the 592 + * regions and pcim_iounmap() for the mappings. 593 + * 594 + * This function is DEPRECATED. Don't use it in new code. Instead, use one 595 + * of the pcim_* region request functions in combination with a pcim_* 596 + * mapping function. 
780 597 */ 781 598 int pcim_iomap_regions_request_all(struct pci_dev *pdev, int mask, 782 599 const char *name) 783 600 { 784 - int request_mask = ((1 << 6) - 1) & ~mask; 785 - int rc; 601 + int bar; 602 + int ret; 603 + void __iomem **legacy_iomap_table; 786 604 787 - rc = pci_request_selected_regions(pdev, request_mask, name); 788 - if (rc) 789 - return rc; 605 + ret = pcim_request_all_regions(pdev, name); 606 + if (ret != 0) 607 + return ret; 790 608 791 - rc = pcim_iomap_regions(pdev, mask, name); 792 - if (rc) 793 - pci_release_selected_regions(pdev, request_mask); 794 - return rc; 609 + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { 610 + if (!mask_contains_bar(mask, bar)) 611 + continue; 612 + if (!pcim_iomap(pdev, bar, 0)) 613 + goto err; 614 + } 615 + 616 + return 0; 617 + 618 + err: 619 + /* 620 + * If bar is larger than 0, then pcim_iomap() above has most likely 621 + * failed because of -EINVAL. If it is equal 0, most likely the table 622 + * couldn't be created, indicating -ENOMEM. 623 + */ 624 + ret = bar > 0 ? 
-EINVAL : -ENOMEM; 625 + legacy_iomap_table = (void __iomem **)pcim_iomap_table(pdev); 626 + 627 + while (--bar >= 0) 628 + pcim_iounmap(pdev, legacy_iomap_table[bar]); 629 + 630 + pcim_release_all_regions(pdev); 631 + 632 + return ret; 795 633 } 796 634 EXPORT_SYMBOL(pcim_iomap_regions_request_all); 797 635 ··· 1016 430 */ 1017 431 void pcim_iounmap_regions(struct pci_dev *pdev, int mask) 1018 432 { 1019 - void __iomem * const *iomap; 1020 433 int i; 1021 434 1022 - iomap = pcim_iomap_table(pdev); 1023 - if (!iomap) 1024 - return; 1025 - 1026 - for (i = 0; i < PCIM_IOMAP_MAX; i++) { 1027 - if (!(mask & (1 << i))) 435 + for (i = 0; i < PCI_STD_NUM_BARS; i++) { 436 + if (!mask_contains_bar(mask, i)) 1028 437 continue; 1029 438 1030 - pcim_iounmap(pdev, iomap[i]); 1031 - pci_release_region(pdev, i); 439 + pcim_iounmap_region(pdev, i); 440 + pcim_remove_bar_from_legacy_table(pdev, i); 1032 441 } 1033 442 } 1034 443 EXPORT_SYMBOL(pcim_iounmap_regions); 444 + 445 + /** 446 + * pcim_iomap_range - Create a ranged __iomem mapping within a PCI BAR 447 + * @pdev: PCI device to map IO resources for 448 + * @bar: Index of the BAR 449 + * @offset: Offset from the beginning of the BAR 450 + * @len: Length in bytes for the mapping 451 + * 452 + * Returns: __iomem pointer on success, an IOMEM_ERR_PTR on failure. 453 + * 454 + * Creates a new IO-Mapping within the specified @bar, ranging from @offset to 455 + * @offset + @len. 456 + * 457 + * The mapping will automatically get unmapped on driver detach. If desired, 458 + * release manually only with pcim_iounmap(). 
459 + */ 460 + void __iomem *pcim_iomap_range(struct pci_dev *pdev, int bar, 461 + unsigned long offset, unsigned long len) 462 + { 463 + void __iomem *mapping; 464 + struct pcim_addr_devres *res; 465 + 466 + res = pcim_addr_devres_alloc(pdev); 467 + if (!res) 468 + return IOMEM_ERR_PTR(-ENOMEM); 469 + 470 + mapping = pci_iomap_range(pdev, bar, offset, len); 471 + if (!mapping) { 472 + pcim_addr_devres_free(res); 473 + return IOMEM_ERR_PTR(-EINVAL); 474 + } 475 + 476 + res->type = PCIM_ADDR_DEVRES_TYPE_MAPPING; 477 + res->baseaddr = mapping; 478 + 479 + /* 480 + * Ranged mappings don't get added to the legacy-table, since the table 481 + * only ever keeps track of whole BARs. 482 + */ 483 + 484 + devres_add(&pdev->dev, res); 485 + return mapping; 486 + } 487 + EXPORT_SYMBOL(pcim_iomap_range);
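The devres.c rework above stores every managed request and mapping as a `pcim_addr_devres` entry that is later found again by a search key (type, BAR, or base address) and released automatically at driver detach. As a rough illustration of that pattern (a minimal sketch; the `demo_*` names and flat list are invented here, not kernel API), a per-device resource list with keyed release looks like:

```c
#include <stdlib.h>

/* Invented stand-ins for struct device and its devres list. */
struct demo_res {
	struct demo_res *next;
	int bar;			/* match key, like res->bar in devres.c */
	void (*release)(struct demo_res *res);
};

struct demo_device {
	struct demo_res *head;		/* newest resource first */
};

/* Like devres_add(): attach a resource to the owning device. */
void demo_devres_add(struct demo_device *dev, struct demo_res *res)
{
	res->next = dev->head;
	dev->head = res;
}

/* Like devres_release(): find a matching entry, call its release hook,
 * then free it. Returns 0 on success, -1 if no such resource exists. */
int demo_devres_release(struct demo_device *dev, int bar)
{
	struct demo_res **p;

	for (p = &dev->head; *p; p = &(*p)->next) {
		if ((*p)->bar == bar) {
			struct demo_res *res = *p;

			*p = res->next;
			if (res->release)
				res->release(res);
			free(res);
			return 0;
		}
	}
	return -1;	/* caller passed a BAR that was never requested */
}

/* Like driver detach: drop every remaining resource, newest first. */
void demo_detach(struct demo_device *dev)
{
	while (dev->head)
		demo_devres_release(dev, dev->head->bar);
}
```

pcim_release_region() and pcim_iounmap() above follow the same shape: build a search key (`res_searched`), then let devres_release() match, release, and free the real entry.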
+34 -14
drivers/pci/endpoint/functions/pci-epf-mhi.c
··· 137 137 .epf_flags = PCI_BASE_ADDRESS_MEM_TYPE_32, 138 138 .msi_count = 32, 139 139 .mru = 0x8000, 140 + .flags = MHI_EPF_USE_DMA, 140 141 }; 141 142 142 143 struct pci_epf_mhi { ··· 717 716 epf_mhi->dma_chan_rx = NULL; 718 717 } 719 718 720 - static int pci_epf_mhi_core_init(struct pci_epf *epf) 719 + static int pci_epf_mhi_epc_init(struct pci_epf *epf) 721 720 { 722 721 struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf); 723 722 const struct pci_epf_mhi_ep_info *info = epf_mhi->info; ··· 754 753 if (!epf_mhi->epc_features) 755 754 return -ENODATA; 756 755 756 + if (info->flags & MHI_EPF_USE_DMA) { 757 + ret = pci_epf_mhi_dma_init(epf_mhi); 758 + if (ret) { 759 + dev_err(dev, "Failed to initialize DMA: %d\n", ret); 760 + return ret; 761 + } 762 + } 763 + 757 764 return 0; 765 + } 766 + 767 + static void pci_epf_mhi_epc_deinit(struct pci_epf *epf) 768 + { 769 + struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf); 770 + const struct pci_epf_mhi_ep_info *info = epf_mhi->info; 771 + struct pci_epf_bar *epf_bar = &epf->bar[info->bar_num]; 772 + struct mhi_ep_cntrl *mhi_cntrl = &epf_mhi->mhi_cntrl; 773 + struct pci_epc *epc = epf->epc; 774 + 775 + if (mhi_cntrl->mhi_dev) { 776 + mhi_ep_power_down(mhi_cntrl); 777 + if (info->flags & MHI_EPF_USE_DMA) 778 + pci_epf_mhi_dma_deinit(epf_mhi); 779 + mhi_ep_unregister_controller(mhi_cntrl); 780 + } 781 + 782 + pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, epf_bar); 758 783 } 759 784 760 785 static int pci_epf_mhi_link_up(struct pci_epf *epf) ··· 791 764 struct pci_epc *epc = epf->epc; 792 765 struct device *dev = &epf->dev; 793 766 int ret; 794 - 795 - if (info->flags & MHI_EPF_USE_DMA) { 796 - ret = pci_epf_mhi_dma_init(epf_mhi); 797 - if (ret) { 798 - dev_err(dev, "Failed to initialize DMA: %d\n", ret); 799 - return ret; 800 - } 801 - } 802 767 803 768 mhi_cntrl->mmio = epf_mhi->mmio; 804 769 mhi_cntrl->irq = epf_mhi->irq; ··· 838 819 return 0; 839 820 } 840 821 841 - static int pci_epf_mhi_bme(struct pci_epf *epf) 
822 + static int pci_epf_mhi_bus_master_enable(struct pci_epf *epf) 842 823 { 843 824 struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf); 844 825 const struct pci_epf_mhi_ep_info *info = epf_mhi->info; ··· 901 882 902 883 /* 903 884 * Forcefully power down the MHI EP stack. Only way to bring the MHI EP 904 - * stack back to working state after successive bind is by getting BME 905 - * from host. 885 + * stack back to working state after successive bind is by getting Bus 886 + * Master Enable event from host. 906 887 */ 907 888 if (mhi_cntrl->mhi_dev) { 908 889 mhi_ep_power_down(mhi_cntrl); ··· 916 897 } 917 898 918 899 static const struct pci_epc_event_ops pci_epf_mhi_event_ops = { 919 - .core_init = pci_epf_mhi_core_init, 900 + .epc_init = pci_epf_mhi_epc_init, 901 + .epc_deinit = pci_epf_mhi_epc_deinit, 920 902 .link_up = pci_epf_mhi_link_up, 921 903 .link_down = pci_epf_mhi_link_down, 922 - .bme = pci_epf_mhi_bme, 904 + .bus_master_enable = pci_epf_mhi_bus_master_enable, 923 905 }; 924 906 925 907 static int pci_epf_mhi_probe(struct pci_epf *epf,
+74 -43
drivers/pci/endpoint/functions/pci-epf-test.c
··· 686 686 msecs_to_jiffies(1)); 687 687 } 688 688 689 - static void pci_epf_test_unbind(struct pci_epf *epf) 690 - { 691 - struct pci_epf_test *epf_test = epf_get_drvdata(epf); 692 - struct pci_epc *epc = epf->epc; 693 - int bar; 694 - 695 - cancel_delayed_work(&epf_test->cmd_handler); 696 - pci_epf_test_clean_dma_chan(epf_test); 697 - for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { 698 - if (!epf_test->reg[bar]) 699 - continue; 700 - 701 - pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, 702 - &epf->bar[bar]); 703 - pci_epf_free_space(epf, epf_test->reg[bar], bar, 704 - PRIMARY_INTERFACE); 705 - } 706 - } 707 - 708 689 static int pci_epf_test_set_bar(struct pci_epf *epf) 709 690 { 710 691 int bar, ret; ··· 712 731 return 0; 713 732 } 714 733 715 - static int pci_epf_test_core_init(struct pci_epf *epf) 734 + static void pci_epf_test_clear_bar(struct pci_epf *epf) 735 + { 736 + struct pci_epf_test *epf_test = epf_get_drvdata(epf); 737 + struct pci_epc *epc = epf->epc; 738 + int bar; 739 + 740 + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { 741 + if (!epf_test->reg[bar]) 742 + continue; 743 + 744 + pci_epc_clear_bar(epc, epf->func_no, epf->vfunc_no, 745 + &epf->bar[bar]); 746 + } 747 + } 748 + 749 + static int pci_epf_test_epc_init(struct pci_epf *epf) 716 750 { 717 751 struct pci_epf_test *epf_test = epf_get_drvdata(epf); 718 752 struct pci_epf_header *header = epf->header; 719 - const struct pci_epc_features *epc_features; 753 + const struct pci_epc_features *epc_features = epf_test->epc_features; 720 754 struct pci_epc *epc = epf->epc; 721 755 struct device *dev = &epf->dev; 722 756 bool linkup_notifier = false; 723 - bool msix_capable = false; 724 - bool msi_capable = true; 725 757 int ret; 726 758 727 - epc_features = pci_epc_get_features(epc, epf->func_no, epf->vfunc_no); 728 - if (epc_features) { 729 - msix_capable = epc_features->msix_capable; 730 - msi_capable = epc_features->msi_capable; 731 - } 759 + epf_test->dma_supported = true; 760 + 761 + ret = 
pci_epf_test_init_dma_chan(epf_test); 762 + if (ret) 763 + epf_test->dma_supported = false; 732 764 733 765 if (epf->vfunc_no <= 1) { 734 766 ret = pci_epc_write_header(epc, epf->func_no, epf->vfunc_no, header); ··· 755 761 if (ret) 756 762 return ret; 757 763 758 - if (msi_capable) { 764 + if (epc_features->msi_capable) { 759 765 ret = pci_epc_set_msi(epc, epf->func_no, epf->vfunc_no, 760 766 epf->msi_interrupts); 761 767 if (ret) { ··· 764 770 } 765 771 } 766 772 767 - if (msix_capable) { 773 + if (epc_features->msix_capable) { 768 774 ret = pci_epc_set_msix(epc, epf->func_no, epf->vfunc_no, 769 775 epf->msix_interrupts, 770 776 epf_test->test_reg_bar, ··· 782 788 return 0; 783 789 } 784 790 791 + static void pci_epf_test_epc_deinit(struct pci_epf *epf) 792 + { 793 + struct pci_epf_test *epf_test = epf_get_drvdata(epf); 794 + 795 + cancel_delayed_work(&epf_test->cmd_handler); 796 + pci_epf_test_clean_dma_chan(epf_test); 797 + pci_epf_test_clear_bar(epf); 798 + } 799 + 785 800 static int pci_epf_test_link_up(struct pci_epf *epf) 786 801 { 787 802 struct pci_epf_test *epf_test = epf_get_drvdata(epf); ··· 801 798 return 0; 802 799 } 803 800 801 + static int pci_epf_test_link_down(struct pci_epf *epf) 802 + { 803 + struct pci_epf_test *epf_test = epf_get_drvdata(epf); 804 + 805 + cancel_delayed_work_sync(&epf_test->cmd_handler); 806 + 807 + return 0; 808 + } 809 + 804 810 static const struct pci_epc_event_ops pci_epf_test_event_ops = { 805 - .core_init = pci_epf_test_core_init, 811 + .epc_init = pci_epf_test_epc_init, 812 + .epc_deinit = pci_epf_test_epc_deinit, 806 813 .link_up = pci_epf_test_link_up, 814 + .link_down = pci_epf_test_link_down, 807 815 }; 808 816 809 817 static int pci_epf_test_alloc_space(struct pci_epf *epf) ··· 824 810 size_t msix_table_size = 0; 825 811 size_t test_reg_bar_size; 826 812 size_t pba_size = 0; 827 - bool msix_capable; 828 813 void *base; 829 814 enum pci_barno test_reg_bar = epf_test->test_reg_bar; 830 815 enum pci_barno bar; 831 - 
const struct pci_epc_features *epc_features; 816 + const struct pci_epc_features *epc_features = epf_test->epc_features; 832 817 size_t test_reg_size; 833 - 834 - epc_features = epf_test->epc_features; 835 818 836 819 test_reg_bar_size = ALIGN(sizeof(struct pci_epf_test_reg), 128); 837 820 838 - msix_capable = epc_features->msix_capable; 839 - if (msix_capable) { 821 + if (epc_features->msix_capable) { 840 822 msix_table_size = PCI_MSIX_ENTRY_SIZE * epf->msix_interrupts; 841 823 epf_test->msix_table_offset = test_reg_bar_size; 842 824 /* Align to QWORD or 8 Bytes */ ··· 867 857 return 0; 868 858 } 869 859 860 + static void pci_epf_test_free_space(struct pci_epf *epf) 861 + { 862 + struct pci_epf_test *epf_test = epf_get_drvdata(epf); 863 + int bar; 864 + 865 + for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { 866 + if (!epf_test->reg[bar]) 867 + continue; 868 + 869 + pci_epf_free_space(epf, epf_test->reg[bar], bar, 870 + PRIMARY_INTERFACE); 871 + } 872 + } 873 + 870 874 static int pci_epf_test_bind(struct pci_epf *epf) 871 875 { 872 876 int ret; ··· 909 885 if (ret) 910 886 return ret; 911 887 912 - epf_test->dma_supported = true; 913 - 914 - ret = pci_epf_test_init_dma_chan(epf_test); 915 - if (ret) 916 - epf_test->dma_supported = false; 917 - 918 888 return 0; 889 + } 890 + 891 + static void pci_epf_test_unbind(struct pci_epf *epf) 892 + { 893 + struct pci_epf_test *epf_test = epf_get_drvdata(epf); 894 + struct pci_epc *epc = epf->epc; 895 + 896 + cancel_delayed_work(&epf_test->cmd_handler); 897 + if (epc->init_complete) { 898 + pci_epf_test_clean_dma_chan(epf_test); 899 + pci_epf_test_clear_bar(epf); 900 + } 901 + pci_epf_test_free_space(epf); 919 902 } 920 903 921 904 static const struct pci_epf_device_id pci_epf_test_ids[] = {
+14 -5
drivers/pci/endpoint/functions/pci-epf-vntb.c
··· 799 799 */ 800 800 static void epf_ntb_epc_cleanup(struct epf_ntb *ntb) 801 801 { 802 - epf_ntb_db_bar_clear(ntb); 803 802 epf_ntb_mw_bar_clear(ntb, ntb->num_mws); 803 + epf_ntb_db_bar_clear(ntb); 804 + epf_ntb_config_sspad_bar_clear(ntb); 804 805 } 805 806 806 807 #define EPF_NTB_R(_name) \ ··· 1019 1018 struct epf_ntb *ndev = sysdata; 1020 1019 1021 1020 vpci_bus = pci_scan_bus(ndev->vbus_number, &vpci_ops, sysdata); 1022 - if (vpci_bus) 1023 - pr_err("create pci bus\n"); 1021 + if (!vpci_bus) { 1022 + pr_err("create pci bus failed\n"); 1023 + return -EINVAL; 1024 + } 1024 1025 1025 1026 pci_bus_add_devices(vpci_bus); 1026 1027 ··· 1338 1335 ret = pci_register_driver(&vntb_pci_driver); 1339 1336 if (ret) { 1340 1337 dev_err(dev, "failure register vntb pci driver\n"); 1341 - goto err_bar_alloc; 1338 + goto err_epc_cleanup; 1342 1339 } 1343 1340 1344 - vpci_scan_bus(ntb); 1341 + ret = vpci_scan_bus(ntb); 1342 + if (ret) 1343 + goto err_unregister; 1345 1344 1346 1345 return 0; 1347 1346 1347 + err_unregister: 1348 + pci_unregister_driver(&vntb_pci_driver); 1349 + err_epc_cleanup: 1350 + epf_ntb_epc_cleanup(ntb); 1348 1351 err_bar_alloc: 1349 1352 epf_ntb_config_spad_bar_free(ntb); 1350 1353
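The epf_ntb_bind() fix above is the standard kernel unwind idiom: each failing step jumps to a label that undoes everything initialized so far, and the labels must mirror the setup sequence in reverse (hence the new err_unregister/err_epc_cleanup labels). A compilable toy version of the shape (step names invented for the demo):

```c
/* Records setup/teardown calls so the unwind order can be checked. */
static int log_buf[16];
static int log_len;

static void log_op(int op) { log_buf[log_len++] = op; }

static int step_a(void) { log_op(1); return 0; }
static void undo_a(void) { log_op(-1); }
static int step_b(void) { log_op(2); return 0; }
static void undo_b(void) { log_op(-2); }
static int step_c(int fail) { if (fail) return -1; log_op(3); return 0; }

/* Mirrors the epf_ntb_bind() structure: a failure in a later step
 * unwinds the earlier ones via goto labels in reverse order. */
int demo_bind(int fail_c)
{
	int ret;

	ret = step_a();
	if (ret)
		return ret;

	ret = step_b();
	if (ret)
		goto err_undo_a;

	ret = step_c(fail_c);
	if (ret)
		goto err_undo_b;

	return 0;

err_undo_b:
	undo_b();
err_undo_a:
	undo_a();
	return ret;
}
```

The bug being fixed is exactly a mismatch in this mirroring: jumping to a label that skips a cleanup (or runs cleanups in the wrong order) leaks the intermediate state.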
-1
drivers/pci/endpoint/pci-ep-cfs.c
··· 23 23 struct config_group group; 24 24 struct config_group primary_epc_group; 25 25 struct config_group secondary_epc_group; 26 - struct config_group *type_group; 27 26 struct delayed_work cfs_work; 28 27 struct pci_epf *epf; 29 28 int index;
+50 -29
drivers/pci/endpoint/pci-epc-core.c
··· 14 14 #include <linux/pci-epf.h> 15 15 #include <linux/pci-ep-cfs.h> 16 16 17 - static struct class *pci_epc_class; 17 + static const struct class pci_epc_class = { 18 + .name = "pci_epc", 19 + }; 18 20 19 21 static void devm_pci_epc_release(struct device *dev, void *res) 20 22 { ··· 62 60 struct device *dev; 63 61 struct class_dev_iter iter; 64 62 65 - class_dev_iter_init(&iter, pci_epc_class, NULL, NULL); 63 + class_dev_iter_init(&iter, &pci_epc_class, NULL, NULL); 66 64 while ((dev = class_dev_iter_next(&iter))) { 67 65 if (strcmp(epc_name, dev_name(dev))) 68 66 continue; ··· 729 727 EXPORT_SYMBOL_GPL(pci_epc_linkdown); 730 728 731 729 /** 732 - * pci_epc_init_notify() - Notify the EPF device that EPC device's core 733 - * initialization is completed. 734 - * @epc: the EPC device whose core initialization is completed 730 + * pci_epc_init_notify() - Notify the EPF device that EPC device initialization 731 + * is completed. 732 + * @epc: the EPC device whose initialization is completed 735 733 * 736 734 * Invoke to Notify the EPF device that the EPC device's initialization 737 735 * is completed. 
··· 746 744 mutex_lock(&epc->list_lock); 747 745 list_for_each_entry(epf, &epc->pci_epf, list) { 748 746 mutex_lock(&epf->lock); 749 - if (epf->event_ops && epf->event_ops->core_init) 750 - epf->event_ops->core_init(epf); 747 + if (epf->event_ops && epf->event_ops->epc_init) 748 + epf->event_ops->epc_init(epf); 751 749 mutex_unlock(&epf->lock); 752 750 } 753 751 epc->init_complete = true; ··· 758 756 /** 759 757 * pci_epc_notify_pending_init() - Notify the pending EPC device initialization 760 758 * complete to the EPF device 761 - * @epc: the EPC device whose core initialization is pending to be notified 759 + * @epc: the EPC device whose initialization is pending to be notified 762 760 * @epf: the EPF device to be notified 763 761 * 764 762 * Invoke to notify the pending EPC device initialization complete to the EPF ··· 769 767 { 770 768 if (epc->init_complete) { 771 769 mutex_lock(&epf->lock); 772 - if (epf->event_ops && epf->event_ops->core_init) 773 - epf->event_ops->core_init(epf); 770 + if (epf->event_ops && epf->event_ops->epc_init) 771 + epf->event_ops->epc_init(epf); 774 772 mutex_unlock(&epf->lock); 775 773 } 776 774 } 777 775 EXPORT_SYMBOL_GPL(pci_epc_notify_pending_init); 778 776 779 777 /** 780 - * pci_epc_bme_notify() - Notify the EPF device that the EPC device has received 781 - * the BME event from the Root complex 782 - * @epc: the EPC device that received the BME event 778 + * pci_epc_deinit_notify() - Notify the EPF device about EPC deinitialization 779 + * @epc: the EPC device whose deinitialization is completed 783 780 * 784 - * Invoke to Notify the EPF device that the EPC device has received the Bus 785 - * Master Enable (BME) event from the Root complex 781 + * Invoke to notify the EPF device that the EPC deinitialization is completed. 
786 782 */ 787 - void pci_epc_bme_notify(struct pci_epc *epc) 783 + void pci_epc_deinit_notify(struct pci_epc *epc) 788 784 { 789 785 struct pci_epf *epf; 790 786 ··· 792 792 mutex_lock(&epc->list_lock); 793 793 list_for_each_entry(epf, &epc->pci_epf, list) { 794 794 mutex_lock(&epf->lock); 795 - if (epf->event_ops && epf->event_ops->bme) 796 - epf->event_ops->bme(epf); 795 + if (epf->event_ops && epf->event_ops->epc_deinit) 796 + epf->event_ops->epc_deinit(epf); 797 + mutex_unlock(&epf->lock); 798 + } 799 + epc->init_complete = false; 800 + mutex_unlock(&epc->list_lock); 801 + } 802 + EXPORT_SYMBOL_GPL(pci_epc_deinit_notify); 803 + 804 + /** 805 + * pci_epc_bus_master_enable_notify() - Notify the EPF device that the EPC 806 + * device has received the Bus Master 807 + * Enable event from the Root complex 808 + * @epc: the EPC device that received the Bus Master Enable event 809 + * 810 + * Notify the EPF device that the EPC device has generated the Bus Master Enable 811 + * event due to host setting the Bus Master Enable bit in the Command register. 
812 + */ 813 + void pci_epc_bus_master_enable_notify(struct pci_epc *epc) 814 + { 815 + struct pci_epf *epf; 816 + 817 + if (IS_ERR_OR_NULL(epc)) 818 + return; 819 + 820 + mutex_lock(&epc->list_lock); 821 + list_for_each_entry(epf, &epc->pci_epf, list) { 822 + mutex_lock(&epf->lock); 823 + if (epf->event_ops && epf->event_ops->bus_master_enable) 824 + epf->event_ops->bus_master_enable(epf); 797 825 mutex_unlock(&epf->lock); 798 826 } 799 827 mutex_unlock(&epc->list_lock); 800 828 } 801 - EXPORT_SYMBOL_GPL(pci_epc_bme_notify); 829 + EXPORT_SYMBOL_GPL(pci_epc_bus_master_enable_notify); 802 830 803 831 /** 804 832 * pci_epc_destroy() - destroy the EPC device ··· 895 867 INIT_LIST_HEAD(&epc->pci_epf); 896 868 897 869 device_initialize(&epc->dev); 898 - epc->dev.class = pci_epc_class; 870 + epc->dev.class = &pci_epc_class; 899 871 epc->dev.parent = dev; 900 872 epc->dev.release = pci_epc_release; 901 873 epc->ops = ops; ··· 955 927 956 928 static int __init pci_epc_init(void) 957 929 { 958 - pci_epc_class = class_create("pci_epc"); 959 - if (IS_ERR(pci_epc_class)) { 960 - pr_err("failed to create pci epc class --> %ld\n", 961 - PTR_ERR(pci_epc_class)); 962 - return PTR_ERR(pci_epc_class); 963 - } 964 - 965 - return 0; 930 + return class_register(&pci_epc_class); 966 931 } 967 932 module_init(pci_epc_init); 968 933 969 934 static void __exit pci_epc_exit(void) 970 935 { 971 - class_destroy(pci_epc_class); 936 + class_unregister(&pci_epc_class); 972 937 } 973 938 module_exit(pci_epc_exit); 974 939
+1
drivers/pci/hotplug/acpiphp_ampere_altra.c
··· 124 124 module_platform_driver(altra_led_driver); 125 125 126 126 MODULE_AUTHOR("D Scott Phillips <scott@os.amperecomputing.com>"); 127 + MODULE_DESCRIPTION("ACPI PCI Hot Plug Extension for Ampere Altra"); 127 128 MODULE_LICENSE("GPL");
+4
drivers/pci/hotplug/pciehp.h
··· 46 46 /** 47 47 * struct controller - PCIe hotplug controller 48 48 * @pcie: pointer to the controller's PCIe port service device 49 + * @dsn: cached copy of Device Serial Number of Function 0 in the hotplug slot 50 + * (PCIe r6.2 sec 7.9.3); used to determine whether a hotplugged device 51 + * was replaced with a different one during system sleep 49 52 * @slot_cap: cached copy of the Slot Capabilities register 50 53 * @inband_presence_disabled: In-Band Presence Detect Disable supported by 51 54 * controller and disabled per spec recommendation (PCIe r5.0, appendix I ··· 90 87 */ 91 88 struct controller { 92 89 struct pcie_device *pcie; 90 + u64 dsn; 93 91 94 92 u32 slot_cap; /* capabilities and quirks */ 95 93 unsigned int inband_presence_disabled:1;
+41 -1
drivers/pci/hotplug/pciehp_core.c
··· 284 284 return 0; 285 285 } 286 286 287 + static bool pciehp_device_replaced(struct controller *ctrl) 288 + { 289 + struct pci_dev *pdev __free(pci_dev_put); 290 + u32 reg; 291 + 292 + pdev = pci_get_slot(ctrl->pcie->port->subordinate, PCI_DEVFN(0, 0)); 293 + if (!pdev) 294 + return true; 295 + 296 + if (pci_read_config_dword(pdev, PCI_VENDOR_ID, &reg) || 297 + reg != (pdev->vendor | (pdev->device << 16)) || 298 + pci_read_config_dword(pdev, PCI_CLASS_REVISION, &reg) || 299 + reg != (pdev->revision | (pdev->class << 8))) 300 + return true; 301 + 302 + if (pdev->hdr_type == PCI_HEADER_TYPE_NORMAL && 303 + (pci_read_config_dword(pdev, PCI_SUBSYSTEM_VENDOR_ID, &reg) || 304 + reg != (pdev->subsystem_vendor | (pdev->subsystem_device << 16)))) 305 + return true; 306 + 307 + if (pci_get_dsn(pdev) != ctrl->dsn) 308 + return true; 309 + 310 + return false; 311 + } 312 + 287 313 static int pciehp_resume_noirq(struct pcie_device *dev) 288 314 { 289 315 struct controller *ctrl = get_service_data(dev); ··· 319 293 ctrl->cmd_busy = true; 320 294 321 295 /* clear spurious events from rediscovery of inserted card */ 322 - if (ctrl->state == ON_STATE || ctrl->state == BLINKINGOFF_STATE) 296 + if (ctrl->state == ON_STATE || ctrl->state == BLINKINGOFF_STATE) { 323 297 pcie_clear_hotplug_events(ctrl); 298 + 299 + /* 300 + * If hotplugged device was replaced with a different one 301 + * during system sleep, mark the old device disconnected 302 + * (to prevent its driver from accessing the new device) 303 + * and synthesize a Presence Detect Changed event. 304 + */ 305 + if (pciehp_device_replaced(ctrl)) { 306 + ctrl_dbg(ctrl, "device replaced during system sleep\n"); 307 + pci_walk_bus(ctrl->pcie->port->subordinate, 308 + pci_dev_set_disconnected, NULL); 309 + pciehp_request(ctrl, PCI_EXP_SLTSTA_PDC); 310 + } 311 + } 324 312 325 313 return 0; 326 314 }
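pciehp_device_replaced() above leans on the config-space header layout: the dword at offset 0 packs the Vendor ID into bits 15:0 and the Device ID into bits 31:16, and the dword at PCI_CLASS_REVISION packs the Revision ID into bits 7:0 below the 24-bit class code. A standalone sketch of those comparisons (helper names invented; the kernel reads the dwords via pci_read_config_dword()):

```c
#include <stdint.h>

/* Dword at config offset 0x00: Vendor ID low half, Device ID high half. */
uint32_t pack_vendor_device(uint16_t vendor, uint16_t device)
{
	return (uint32_t)vendor | ((uint32_t)device << 16);
}

/* Dword at config offset 0x08: Revision ID low byte, class code above it. */
uint32_t pack_class_revision(uint32_t class_code, uint8_t revision)
{
	return (uint32_t)revision | (class_code << 8);
}

/* Mirrors the check in pciehp_device_replaced(): a mismatch between the
 * dword read back after resume and the cached IDs means the card in the
 * slot was swapped while the system slept. */
int ids_replaced(uint32_t read_back, uint16_t cached_vendor,
		 uint16_t cached_device)
{
	return read_back != pack_vendor_device(cached_vendor, cached_device);
}
```

The Device Serial Number comparison (`pci_get_dsn(pdev) != ctrl->dsn`) then catches the harder case of a replacement card with identical IDs.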
+5
drivers/pci/hotplug/pciehp_hpc.c
··· 1055 1055 } 1056 1056 } 1057 1057 1058 + pdev = pci_get_slot(subordinate, PCI_DEVFN(0, 0)); 1059 + if (pdev) 1060 + ctrl->dsn = pci_get_dsn(pdev); 1061 + pci_dev_put(pdev); 1062 + 1058 1063 return ctrl; 1059 1064 } 1060 1065
+4
drivers/pci/hotplug/pciehp_pci.c
··· 72 72 pci_bus_add_devices(parent); 73 73 down_read_nested(&ctrl->reset_lock, ctrl->depth); 74 74 75 + dev = pci_get_slot(parent, PCI_DEVFN(0, 0)); 76 + ctrl->dsn = pci_get_dsn(dev); 77 + pci_dev_put(dev); 78 + 75 79 out: 76 80 pci_unlock_rescan_remove(); 77 81 return ret;
+16
drivers/pci/iomap.c
··· 23 23 * 24 24 * @maxlen specifies the maximum length to map. If you want to get access to 25 25 * the complete BAR from offset to the end, pass %0 here. 26 + * 27 + * NOTE: 28 + * This function is never managed, even if you initialized with 29 + * pcim_enable_device(). 26 30 * */ 27 31 void __iomem *pci_iomap_range(struct pci_dev *dev, 28 32 int bar, ··· 67 63 * 68 64 * @maxlen specifies the maximum length to map. If you want to get access to 69 65 * the complete BAR from offset to the end, pass %0 here. 66 + * 67 + * NOTE: 68 + * This function is never managed, even if you initialized with 69 + * pcim_enable_device(). 70 70 * */ 71 71 void __iomem *pci_iomap_wc_range(struct pci_dev *dev, 72 72 int bar, ··· 114 106 * 115 107 * @maxlen specifies the maximum length to map. If you want to get access to 116 108 * the complete BAR without checking for its length first, pass %0 here. 109 + * 110 + * NOTE: 111 + * This function is never managed, even if you initialized with 112 + * pcim_enable_device(). If you need automatic cleanup, use pcim_iomap(). 117 113 * */ 118 114 void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long maxlen) 119 115 { ··· 139 127 * 140 128 * @maxlen specifies the maximum length to map. If you want to get access to 141 129 * the complete BAR without checking for its length first, pass %0 here. 130 + * 131 + * NOTE: 132 + * This function is never managed, even if you initialized with 133 + * pcim_enable_device(). 142 134 * */ 143 135 void __iomem *pci_iomap_wc(struct pci_dev *dev, int bar, unsigned long maxlen) 144 136 {
+47 -13
drivers/pci/of.c
@@ -240 +240 @@
 EXPORT_SYMBOL_GPL(of_get_pci_domain_nr);
 
 /**
+ * of_pci_preserve_config - Return true if the boot configuration needs to
+ *                          be preserved
+ * @node: Device tree node.
+ *
+ * Look for "linux,pci-probe-only" property for a given PCI controller's
+ * node and return true if found. Also look in the chosen node if the
+ * property is not found in the given controller's node. Having this
+ * property ensures that the kernel doesn't reconfigure the BARs and bridge
+ * windows that are already done by the platform firmware.
+ *
+ * Return: true if the property exists; false otherwise.
+ */
+bool of_pci_preserve_config(struct device_node *node)
+{
+	u32 val = 0;
+	int ret;
+
+	if (!node) {
+		pr_warn("device node is NULL, trying with of_chosen\n");
+		node = of_chosen;
+	}
+
+retry:
+	ret = of_property_read_u32(node, "linux,pci-probe-only", &val);
+	if (ret) {
+		if (ret == -ENODATA || ret == -EOVERFLOW) {
+			pr_warn("Incorrect value for linux,pci-probe-only in %pOF, ignoring\n",
+				node);
+			return false;
+		}
+		if (ret == -EINVAL) {
+			if (node == of_chosen)
+				return false;
+
+			node = of_chosen;
+			goto retry;
+		}
+	}
+
+	if (val)
+		return true;
+	else
+		return false;
+}
+
+/**
  * of_pci_check_probe_only - Setup probe only mode if linux,pci-probe-only
  *                           is present and valid
  */
 void of_pci_check_probe_only(void)
 {
-	u32 val;
-	int ret;
-
-	ret = of_property_read_u32(of_chosen, "linux,pci-probe-only", &val);
-	if (ret) {
-		if (ret == -ENODATA || ret == -EOVERFLOW)
-			pr_warn("linux,pci-probe-only without valid value, ignoring\n");
-		return;
-	}
-
-	if (val)
+	if (of_pci_preserve_config(of_chosen))
 		pci_add_flags(PCI_PROBE_ONLY);
 	else
 		pci_clear_flags(PCI_PROBE_ONLY);
-
-	pr_info("PROBE_ONLY %s\n", val ? "enabled" : "disabled");
 }
 EXPORT_SYMBOL_GPL(of_pci_check_probe_only);
+22
drivers/pci/pci-acpi.c
@@ -119 +119 @@
 	return (phys_addr_t)mcfg_addr;
 }
 
+bool pci_acpi_preserve_config(struct pci_host_bridge *host_bridge)
+{
+	if (ACPI_HANDLE(&host_bridge->dev)) {
+		union acpi_object *obj;
+
+		/*
+		 * Evaluate the "PCI Boot Configuration" _DSM Function. If it
+		 * exists and returns 0, we must preserve any PCI resource
+		 * assignments made by firmware for this host bridge.
+		 */
+		obj = acpi_evaluate_dsm_typed(ACPI_HANDLE(&host_bridge->dev),
+					      &pci_acpi_dsm_guid,
+					      1, DSM_PCI_PRESERVE_BOOT_CONFIG,
+					      NULL, ACPI_TYPE_INTEGER);
+		if (obj && obj->integer.value == 0)
+			return true;
+		ACPI_FREE(obj);
+	}
+
+	return false;
+}
+
 /* _HPX PCI Setting Record (Type 0); same as _HPP */
 struct hpx_type0 {
 	u32 revision;	/* Not present in _HPP */
+2 -2
drivers/pci/pci-mid.c
@@ -38 +38 @@
  * arch/x86/platform/intel-mid/pwr.c.
  */
 static const struct x86_cpu_id lpss_cpu_ids[] = {
-	X86_MATCH_INTEL_FAM6_MODEL(ATOM_SALTWELL_MID, NULL),
-	X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT_MID, NULL),
+	X86_MATCH_VFM(INTEL_ATOM_SALTWELL_MID, NULL),
+	X86_MATCH_VFM(INTEL_ATOM_SILVERMONT_MID, NULL),
 	{}
 };
+1
drivers/pci/pci-pf-stub.c
@@ -39 +39 @@
 };
 module_pci_driver(pf_stub_driver);
 
+MODULE_DESCRIPTION("SR-IOV PF stub driver with no functionality");
 MODULE_LICENSE("GPL");
+1
drivers/pci/pci-stub.c
@@ -92 +92 @@
 module_init(pci_stub_init);
 module_exit(pci_stub_exit);
 
+MODULE_DESCRIPTION("VM device assignment stub driver");
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Chris Wright <chrisw@sous-sol.org>");
+187 -121
drivers/pci/pci.c
@@ -946 +946 @@
 }
 
 static const char *disable_acs_redir_param;
+static const char *config_acs_param;
 
-/**
- * pci_disable_acs_redir - disable ACS redirect capabilities
- * @dev: the PCI device
- *
- * For only devices specified in the disable_acs_redir parameter.
- */
-static void pci_disable_acs_redir(struct pci_dev *dev)
-{
-	int ret = 0;
-	const char *p;
-	int pos;
+struct pci_acs {
+	u16 cap;
 	u16 ctrl;
+	u16 fw_ctrl;
+};
 
-	if (!disable_acs_redir_param)
+static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
+			     const char *p, u16 mask, u16 flags)
+{
+	char *delimit;
+	int ret = 0;
+
+	if (!p)
 		return;
 
-	p = disable_acs_redir_param;
 	while (*p) {
+		if (!mask) {
+			/* Check for ACS flags */
+			delimit = strstr(p, "@");
+			if (delimit) {
+				int end;
+				u32 shift = 0;
+
+				end = delimit - p - 1;
+
+				while (end > -1) {
+					if (*(p + end) == '0') {
+						mask |= 1 << shift;
+						shift++;
+						end--;
+					} else if (*(p + end) == '1') {
+						mask |= 1 << shift;
+						flags |= 1 << shift;
+						shift++;
+						end--;
+					} else if ((*(p + end) == 'x') || (*(p + end) == 'X')) {
+						shift++;
+						end--;
+					} else {
+						pci_err(dev, "Invalid ACS flags... Ignoring\n");
+						return;
+					}
+				}
+				p = delimit + 1;
+			} else {
+				pci_err(dev, "ACS Flags missing\n");
+				return;
+			}
+		}
+
+		if (mask & ~(PCI_ACS_SV | PCI_ACS_TB | PCI_ACS_RR | PCI_ACS_CR |
+			    PCI_ACS_UF | PCI_ACS_EC | PCI_ACS_DT)) {
+			pci_err(dev, "Invalid ACS flags specified\n");
+			return;
+		}
+
 		ret = pci_dev_str_match(dev, p, &p);
 		if (ret < 0) {
-			pr_info_once("PCI: Can't parse disable_acs_redir parameter: %s\n",
-				     disable_acs_redir_param);
-
+			pr_info_once("PCI: Can't parse ACS command line parameter\n");
 			break;
 		} else if (ret == 1) {
 			/* Found a match */
@@ -1026 +989 @@
 	if (!pci_dev_specific_disable_acs_redir(dev))
 		return;
 
-	pos = dev->acs_cap;
-	if (!pos) {
-		pci_warn(dev, "cannot disable ACS redirect for this hardware as it does not have ACS capabilities\n");
-		return;
-	}
+	pci_dbg(dev, "ACS mask  = %#06x\n", mask);
+	pci_dbg(dev, "ACS flags = %#06x\n", flags);
 
-	pci_read_config_word(dev, pos + PCI_ACS_CTRL, &ctrl);
+	/* If mask is 0 then we copy the bit from the firmware setting. */
+	caps->ctrl = (caps->ctrl & ~mask) | (caps->fw_ctrl & mask);
+	caps->ctrl |= flags;
 
-	/* P2P Request & Completion Redirect */
-	ctrl &= ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC);
-
-	pci_write_config_word(dev, pos + PCI_ACS_CTRL, ctrl);
-
-	pci_info(dev, "disabled ACS redirect\n");
+	pci_info(dev, "Configured ACS to %#06x\n", caps->ctrl);
 }
 
 /**
  * pci_std_enable_acs - enable ACS on devices using standard ACS capabilities
  * @dev: the PCI device
+ * @caps: default ACS controls
  */
-static void pci_std_enable_acs(struct pci_dev *dev)
+static void pci_std_enable_acs(struct pci_dev *dev, struct pci_acs *caps)
 {
-	int pos;
-	u16 cap;
-	u16 ctrl;
-
-	pos = dev->acs_cap;
-	if (!pos)
-		return;
-
-	pci_read_config_word(dev, pos + PCI_ACS_CAP, &cap);
-	pci_read_config_word(dev, pos + PCI_ACS_CTRL, &ctrl);
-
 	/* Source Validation */
-	ctrl |= (cap & PCI_ACS_SV);
+	caps->ctrl |= (caps->cap & PCI_ACS_SV);
 
 	/* P2P Request Redirect */
-	ctrl |= (cap & PCI_ACS_RR);
+	caps->ctrl |= (caps->cap & PCI_ACS_RR);
 
 	/* P2P Completion Redirect */
-	ctrl |= (cap & PCI_ACS_CR);
+	caps->ctrl |= (caps->cap & PCI_ACS_CR);
 
 	/* Upstream Forwarding */
-	ctrl |= (cap & PCI_ACS_UF);
+	caps->ctrl |= (caps->cap & PCI_ACS_UF);
 
 	/* Enable Translation Blocking for external devices and noats */
 	if (pci_ats_disabled() || dev->external_facing || dev->untrusted)
-		ctrl |= (cap & PCI_ACS_TB);
-
-	pci_write_config_word(dev, pos + PCI_ACS_CTRL, ctrl);
+		caps->ctrl |= (caps->cap & PCI_ACS_TB);
 }
 
 /**
@@ -1066 +1047 @@
  */
 static void pci_enable_acs(struct pci_dev *dev)
 {
-	if (!pci_acs_enable)
-		goto disable_acs_redir;
+	struct pci_acs caps;
+	int pos;
 
-	if (!pci_dev_specific_enable_acs(dev))
-		goto disable_acs_redir;
+	pos = dev->acs_cap;
+	if (!pos)
+		return;
 
-	pci_std_enable_acs(dev);
+	pci_read_config_word(dev, pos + PCI_ACS_CAP, &caps.cap);
+	pci_read_config_word(dev, pos + PCI_ACS_CTRL, &caps.ctrl);
+	caps.fw_ctrl = caps.ctrl;
 
-disable_acs_redir:
+	/* If an iommu is present we start with kernel default caps */
+	if (pci_acs_enable) {
+		if (pci_dev_specific_enable_acs(dev))
+			pci_std_enable_acs(dev, &caps);
+	}
+
 	/*
-	 * Note: pci_disable_acs_redir() must be called even if ACS was not
-	 * enabled by the kernel because it may have been enabled by
-	 * platform firmware.  So if we are told to disable it, we should
-	 * always disable it after setting the kernel's default
-	 * preferences.
+	 * Always apply caps from the command line, even if there is no iommu.
+	 * Trust that the admin has a reason to change the ACS settings.
 	 */
-	pci_disable_acs_redir(dev);
+	__pci_config_acs(dev, &caps, disable_acs_redir_param,
+			 PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC,
+			 ~(PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC));
+	__pci_config_acs(dev, &caps, config_acs_param, 0, 0);
+
+	pci_write_config_word(dev, pos + PCI_ACS_CTRL, caps.ctrl);
 }
 
 /**
@@ -2247 +2218 @@
  */
 void pci_disable_device(struct pci_dev *dev)
 {
-	struct pci_devres *dr;
-
-	dr = find_pci_dr(dev);
-	if (dr)
-		dr->enabled = 0;
-
 	dev_WARN_ONCE(&dev->dev, atomic_read(&dev->enable_cnt) <= 0,
 		      "disabling already-disabled device");
@@ -3895 +3872 @@
  */
 void pci_release_region(struct pci_dev *pdev, int bar)
 {
-	struct pci_devres *dr;
+	/*
+	 * This is done for backwards compatibility, because the old PCI devres
+	 * API had a mode in which the function became managed if it had been
+	 * enabled with pcim_enable_device() instead of pci_enable_device().
+	 */
+	if (pci_is_managed(pdev)) {
+		pcim_release_region(pdev, bar);
+		return;
+	}
 
 	if (pci_resource_len(pdev, bar) == 0)
 		return;
@@ -3913 +3882 @@
 	else if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM)
 		release_mem_region(pci_resource_start(pdev, bar),
 				   pci_resource_len(pdev, bar));
-
-	dr = find_pci_dr(pdev);
-	if (dr)
-		dr->region_mask &= ~(1 << bar);
 }
 EXPORT_SYMBOL(pci_release_region);
@@ -3922 +3895 @@
  * @bar: BAR to be reserved
  * @res_name: Name to be associated with resource.
  * @exclusive: whether the region access is exclusive or not
+ *
+ * Returns: 0 on success, negative error code on failure.
  *
  * Mark the PCI region associated with PCI device @pdev BAR @bar as
  * being reserved by owner @res_name.  Do not access any
@@ -3940 +3911 @@
 static int __pci_request_region(struct pci_dev *pdev, int bar,
 				const char *res_name, int exclusive)
 {
-	struct pci_devres *dr;
+	if (pci_is_managed(pdev)) {
+		if (exclusive == IORESOURCE_EXCLUSIVE)
+			return pcim_request_region_exclusive(pdev, bar, res_name);
+
+		return pcim_request_region(pdev, bar, res_name);
+	}
 
 	if (pci_resource_len(pdev, bar) == 0)
 		return 0;
@@ -3961 +3927 @@
 			goto err_out;
 	}
 
-	dr = find_pci_dr(pdev);
-	if (dr)
-		dr->region_mask |= 1 << bar;
-
 	return 0;
 
 err_out:
@@ -3975 +3945 @@
  * @bar: BAR to be reserved
  * @res_name: Name to be associated with resource
  *
+ * Returns: 0 on success, negative error code on failure.
+ *
  * Mark the PCI region associated with PCI device @pdev BAR @bar as
  * being reserved by owner @res_name.  Do not access any
  * address inside the PCI regions unless this call returns
@@ -3984 +3952 @@
  *
  * Returns 0 on success, or %EBUSY on error.  A warning
  * message is also printed on failure.
+ *
+ * NOTE:
+ * This is a "hybrid" function: It's normally unmanaged, but becomes managed
+ * when pcim_enable_device() has been called in advance. This hybrid feature is
+ * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
 int pci_request_region(struct pci_dev *pdev, int bar, const char *res_name)
 {
@@ -4039 +4002 @@
  * @pdev: PCI device whose resources are to be reserved
  * @bars: Bitmask of BARs to be requested
  * @res_name: Name to be associated with resource
+ *
+ * Returns: 0 on success, negative error code on failure.
+ *
+ * NOTE:
+ * This is a "hybrid" function: It's normally unmanaged, but becomes managed
+ * when pcim_enable_device() has been called in advance. This hybrid feature is
+ * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
 int pci_request_selected_regions(struct pci_dev *pdev, int bars,
 				 const char *res_name)
@@ -4054 +4010 @@
 }
 EXPORT_SYMBOL(pci_request_selected_regions);
 
+/**
+ * pci_request_selected_regions_exclusive - Request regions exclusively
+ * @pdev: PCI device to request regions from
+ * @bars: bit mask of BARs to request
+ * @res_name: name to be associated with the requests
+ *
+ * Returns: 0 on success, negative error code on failure.
+ *
+ * NOTE:
+ * This is a "hybrid" function: It's normally unmanaged, but becomes managed
+ * when pcim_enable_device() has been called in advance. This hybrid feature is
+ * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
+ */
 int pci_request_selected_regions_exclusive(struct pci_dev *pdev, int bars,
 					   const char *res_name)
 {
@@ -4084 +4027 @@
  * successful call to pci_request_regions().  Call this function only
  * after all use of the PCI regions has ceased.
  */
-
 void pci_release_regions(struct pci_dev *pdev)
 {
 	pci_release_selected_regions(pdev, (1 << PCI_STD_NUM_BARS) - 1);
@@ -4102 +4046 @@
  *
  * Returns 0 on success, or %EBUSY on error.  A warning
  * message is also printed on failure.
+ *
+ * NOTE:
+ * This is a "hybrid" function: It's normally unmanaged, but becomes managed
+ * when pcim_enable_device() has been called in advance. This hybrid feature is
+ * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
 int pci_request_regions(struct pci_dev *pdev, const char *res_name)
 {
@@ -4120 +4059 @@
  * @pdev: PCI device whose resources are to be reserved
  * @res_name: Name to be associated with resource.
  *
+ * Returns: 0 on success, negative error code on failure.
+ *
  * Mark all PCI regions associated with PCI device @pdev as being reserved
  * by owner @res_name.  Do not access any address inside the PCI regions
  * unless this call returns successfully.
@@ -4131 +4068 @@
  *
  * Returns 0 on success, or %EBUSY on error.  A warning message is also
  * printed on failure.
+ *
+ * NOTE:
+ * This is a "hybrid" function: It's normally unmanaged, but becomes managed
+ * when pcim_enable_device() has been called in advance. This hybrid feature is
+ * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
 int pci_request_regions_exclusive(struct pci_dev *pdev, const char *res_name)
 {
@@ -4467 +4399 @@
  * @enable: boolean: whether to enable or disable PCI INTx
  *
  * Enables/disables PCI INTx for device @pdev
+ *
+ * NOTE:
+ * This is a "hybrid" function: It's normally unmanaged, but becomes managed
+ * when pcim_enable_device() has been called in advance. This hybrid feature is
+ * DEPRECATED! If you want managed cleanup, use pcim_intx() instead.
  */
 void pci_intx(struct pci_dev *pdev, int enable)
 {
 	u16 pci_command, new;
+
+	/* Preserve the "hybrid" behavior for backwards compatibility */
+	if (pci_is_managed(pdev)) {
+		WARN_ON_ONCE(pcim_intx(pdev, enable) != 0);
+		return;
+	}
 
 	pci_read_config_word(pdev, PCI_COMMAND, &pci_command);
@@ -4490 +4411 @@
 	else
 		new = pci_command | PCI_COMMAND_INTX_DISABLE;
 
-	if (new != pci_command) {
-		struct pci_devres *dr;
-
+	if (new != pci_command)
 		pci_write_config_word(pdev, PCI_COMMAND, new);
-
-		dr = find_pci_dr(pdev);
-		if (dr && !dr->restore_intx) {
-			dr->restore_intx = 1;
-			dr->orig_intx = !enable;
-		}
-	}
 }
 EXPORT_SYMBOL_GPL(pci_intx);
@@ -4823 +4753 @@
  */
 int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type)
 {
-	struct pci_dev *child;
+	struct pci_dev *child __free(pci_dev_put) = NULL;
 	int delay;
 
 	if (pci_dev_is_disconnected(dev))
@@ -4852 +4782 @@
 		return 0;
 	}
 
-	child = list_first_entry(&dev->subordinate->devices, struct pci_dev,
-				 bus_list);
+	child = pci_dev_get(list_first_entry(&dev->subordinate->devices,
+					     struct pci_dev, bus_list));
 	up_read(&pci_bus_sem);
 
 	/*
@@ -4953 +4883 @@
  */
 int pci_bridge_secondary_bus_reset(struct pci_dev *dev)
 {
+	if (!dev->block_cfg_access)
+		pci_warn_once(dev, "unlocked secondary bus reset via: %pS\n",
+			      __builtin_return_address(0));
 	pcibios_reset_secondary_bus(dev);
 
 	return pci_bridge_wait_for_secondary_bus(dev, "bus reset");
@@ -5514 +5441 @@
 {
 	struct pci_dev *dev;
 
+	pci_dev_lock(bus->self);
 	list_for_each_entry(dev, &bus->devices, bus_list) {
-		pci_dev_lock(dev);
 		if (dev->subordinate)
 			pci_bus_lock(dev->subordinate);
+		else
+			pci_dev_lock(dev);
 	}
 }
@@ -5531 +5456 @@
 	list_for_each_entry(dev, &bus->devices, bus_list) {
 		if (dev->subordinate)
 			pci_bus_unlock(dev->subordinate);
-		pci_dev_unlock(dev);
+		else
+			pci_dev_unlock(dev);
 	}
+	pci_dev_unlock(bus->self);
 }
 
 /* Return 1 on successful lock, 0 on contention */
@@ -5542 +5465 @@
 {
 	struct pci_dev *dev;
 
+	if (!pci_dev_trylock(bus->self))
+		return 0;
+
 	list_for_each_entry(dev, &bus->devices, bus_list) {
-		if (!pci_dev_trylock(dev))
-			goto unlock;
 		if (dev->subordinate) {
-			if (!pci_bus_trylock(dev->subordinate)) {
-				pci_dev_unlock(dev);
+			if (!pci_bus_trylock(dev->subordinate))
 				goto unlock;
-			}
-		}
+		} else if (!pci_dev_trylock(dev))
+			goto unlock;
 	}
 	return 1;
@@ -5558 +5481 @@
 	list_for_each_entry_continue_reverse(dev, &bus->devices, bus_list) {
 		if (dev->subordinate)
 			pci_bus_unlock(dev->subordinate);
-		pci_dev_unlock(dev);
+		else
+			pci_dev_unlock(dev);
 	}
+	pci_dev_unlock(bus->self);
 	return 0;
 }
@@ -5593 +5514 @@
 	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
 		if (!dev->slot || dev->slot != slot)
 			continue;
-		pci_dev_lock(dev);
 		if (dev->subordinate)
 			pci_bus_lock(dev->subordinate);
+		else
+			pci_dev_lock(dev);
 	}
 }
@@ -5622 +5542 @@
 	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
 		if (!dev->slot || dev->slot != slot)
 			continue;
-		if (!pci_dev_trylock(dev))
-			goto unlock;
 		if (dev->subordinate) {
 			if (!pci_bus_trylock(dev->subordinate)) {
 				pci_dev_unlock(dev);
 				goto unlock;
 			}
-		}
+		} else if (!pci_dev_trylock(dev))
+			goto unlock;
 	}
 	return 1;
@@ -5639 +5560 @@
 			continue;
 		if (dev->subordinate)
 			pci_bus_unlock(dev->subordinate);
-		pci_dev_unlock(dev);
+		else
+			pci_dev_unlock(dev);
 	}
 	return 0;
 }
@@ -6099 +6019 @@
 	if (err)
 		return err;
 
-	switch (to_pcie_link_speed(lnksta)) {
-	case PCIE_SPEED_2_5GT:
-		return 2500;
-	case PCIE_SPEED_5_0GT:
-		return 5000;
-	case PCIE_SPEED_8_0GT:
-		return 8000;
-	case PCIE_SPEED_16_0GT:
-		return 16000;
-	case PCIE_SPEED_32_0GT:
-		return 32000;
-	case PCIE_SPEED_64_0GT:
-		return 64000;
-	default:
-		break;
-	}
-
-	return -EINVAL;
+	return pcie_dev_speed_mbps(to_pcie_link_speed(lnksta));
 }
 EXPORT_SYMBOL(pcie_link_speed_mbps);
@@ -6902 +6839 @@
 		pci_add_flags(PCI_SCAN_ALL_PCIE_DEVS);
 	} else if (!strncmp(str, "disable_acs_redir=", 18)) {
 		disable_acs_redir_param = str + 18;
+	} else if (!strncmp(str, "config_acs=", 11)) {
+		config_acs_param = str + 11;
 	} else {
 		pr_err("PCI: Unknown option `%s'\n", str);
 	}
@@ -6928 +6863 @@
 	resource_alignment_param = kstrdup(resource_alignment_param,
 					   GFP_KERNEL);
 	disable_acs_redir_param = kstrdup(disable_acs_redir_param, GFP_KERNEL);
+	config_acs_param = kstrdup(config_acs_param, GFP_KERNEL);
 
 	return 0;
 }
+81 -19
drivers/pci/pci.h
@@ -17 +17 @@
 #define PCIE_T_PVPERL_MS	100
 
 /*
+ * End of conventional reset (PERST# de-asserted) to first configuration
+ * request (device able to respond with a "Request Retry Status" completion),
+ * from PCIe r6.0, sec 6.6.1.
+ */
+#define PCIE_T_RRS_READY_MS	100
+
+/*
  * PCIe r6.0, sec 5.3.3.2.1 <PME Synchronization>
  * Recommends 1ms to 10ms timeout to check L2 ready.
  */
 #define PCIE_PME_TO_L2_TIMEOUT_US	10000
+
+/*
+ * PCIe r6.0, sec 6.6.1 <Conventional Reset>
+ *
+ * - "With a Downstream Port that does not support Link speeds greater
+ *    than 5.0 GT/s, software must wait a minimum of 100 ms following exit
+ *    from a Conventional Reset before sending a Configuration Request to
+ *    the device immediately below that Port."
+ *
+ * - "With a Downstream Port that supports Link speeds greater than
+ *    5.0 GT/s, software must wait a minimum of 100 ms after Link training
+ *    completes before sending a Configuration Request to the device
+ *    immediately below that Port."
+ */
+#define PCIE_RESET_CONFIG_DEVICE_WAIT_MS	100
+
+/* Message Routing (r[2:0]); PCIe r6.0, sec 2.2.8 */
+#define PCIE_MSG_TYPE_R_RC	0
+#define PCIE_MSG_TYPE_R_ADDR	1
+#define PCIE_MSG_TYPE_R_ID	2
+#define PCIE_MSG_TYPE_R_BC	3
+#define PCIE_MSG_TYPE_R_LOCAL	4
+#define PCIE_MSG_TYPE_R_GATHER	5
+
+/* Power Management Messages; PCIe r6.0, sec 2.2.8.2 */
+#define PCIE_MSG_CODE_PME_TURN_OFF	0x19
+
+/* INTx Mechanism Messages; PCIe r6.0, sec 2.2.8.1 */
+#define PCIE_MSG_CODE_ASSERT_INTA	0x20
+#define PCIE_MSG_CODE_ASSERT_INTB	0x21
+#define PCIE_MSG_CODE_ASSERT_INTC	0x22
+#define PCIE_MSG_CODE_ASSERT_INTD	0x23
+#define PCIE_MSG_CODE_DEASSERT_INTA	0x24
+#define PCIE_MSG_CODE_DEASSERT_INTB	0x25
+#define PCIE_MSG_CODE_DEASSERT_INTC	0x26
+#define PCIE_MSG_CODE_DEASSERT_INTD	0x27
 
 extern const unsigned char pcie_link_speed[];
 extern bool pci_early_dump;
@@ -332 +289 @@
 	 (speed) == PCIE_SPEED_5_0GT ?  5000*8/10 : \
 	 (speed) == PCIE_SPEED_2_5GT ?  2500*8/10 : \
 	 0)
+
+static inline int pcie_dev_speed_mbps(enum pci_bus_speed speed)
+{
+	switch (speed) {
+	case PCIE_SPEED_2_5GT:
+		return 2500;
+	case PCIE_SPEED_5_0GT:
+		return 5000;
+	case PCIE_SPEED_8_0GT:
+		return 8000;
+	case PCIE_SPEED_16_0GT:
+		return 16000;
+	case PCIE_SPEED_32_0GT:
+		return 32000;
+	case PCIE_SPEED_64_0GT:
+		return 64000;
+	default:
+		break;
+	}
+
+	return -EINVAL;
+}
 
 const char *pci_speed_string(enum pci_bus_speed speed);
 enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev);
@@ -713 +648 @@
 u32 of_pci_get_slot_power_limit(struct device_node *node,
 				u8 *slot_power_limit_value,
 				u8 *slot_power_limit_scale);
+bool of_pci_preserve_config(struct device_node *node);
 int pci_set_of_node(struct pci_dev *dev);
 void pci_release_of_node(struct pci_dev *dev);
 void pci_set_bus_of_node(struct pci_bus *bus);
@@ -750 +684 @@
 	if (slot_power_limit_scale)
 		*slot_power_limit_scale = 0;
 	return 0;
+}
+
+static inline bool of_pci_preserve_config(struct device_node *node)
+{
+	return false;
 }
 
 static inline int pci_set_of_node(struct pci_dev *dev) { return 0; }
@@ -803 +732 @@
 #endif
 
 #ifdef CONFIG_ACPI
+bool pci_acpi_preserve_config(struct pci_host_bridge *bridge);
 int pci_acpi_program_hp_params(struct pci_dev *dev);
 extern const struct attribute_group pci_dev_acpi_attr_group;
 void pci_set_acpi_fwnode(struct pci_dev *dev);
@@ -817 +745 @@
 bool acpi_pci_need_resume(struct pci_dev *dev);
 pci_power_t acpi_pci_choose_state(struct pci_dev *pdev);
 #else
+static inline bool pci_acpi_preserve_config(struct pci_host_bridge *bridge)
+{
+	return false;
+}
 static inline int pci_dev_acpi_reset(struct pci_dev *dev, bool probe)
 {
 	return -ENOTTY;
@@ -886 +810 @@
 }
 #endif
 
-/*
- * Managed PCI resources.  This manages device on/off, INTx/MSI/MSI-X
- * on/off and BAR regions.  pci_dev itself records MSI/MSI-X status, so
- * there's no need to track it separately.  pci_devres is initialized
- * when a device is enabled using managed PCI device enable interface.
- *
- * TODO: Struct pci_devres and find_pci_dr() only need to be here because
- * they're used in pci.c.  Port or move these functions to devres.c and
- * then remove them from here.
- */
-struct pci_devres {
-	unsigned int enabled:1;
-	unsigned int pinned:1;
-	unsigned int orig_intx:1;
-	unsigned int restore_intx:1;
-	unsigned int mwi:1;
-	u32 region_mask;
-};
+int pcim_intx(struct pci_dev *dev, int enable);
 
-struct pci_devres *find_pci_dr(struct pci_dev *pdev);
+int pcim_request_region(struct pci_dev *pdev, int bar, const char *name);
+int pcim_request_region_exclusive(struct pci_dev *pdev, int bar,
+				  const char *name);
+void pcim_release_region(struct pci_dev *pdev, int bar);
 
 /*
  * Config Address for PCI Configuration Mechanism #1
+18
drivers/pci/pcie/aer.c
@@ -1497 +1497 @@
 	return 0;
 }
 
+static int aer_suspend(struct pcie_device *dev)
+{
+	struct aer_rpc *rpc = get_service_data(dev);
+
+	aer_disable_rootport(rpc);
+	return 0;
+}
+
+static int aer_resume(struct pcie_device *dev)
+{
+	struct aer_rpc *rpc = get_service_data(dev);
+
+	aer_enable_rootport(rpc);
+	return 0;
+}
+
 /**
  * aer_root_reset - reset Root Port hierarchy, RCEC, or RCiEP
  * @dev: pointer to Root Port, RCEC, or RCiEP
@@ -1577 +1561 @@
 	.service	= PCIE_PORT_SERVICE_AER,
 
 	.probe		= aer_probe,
+	.suspend	= aer_suspend,
+	.resume		= aer_resume,
 	.remove		= aer_remove,
 };
+48 -12
drivers/pci/pcie/dpc.c
@@ -412 +412 @@
 	}
 }
 
+static void dpc_enable(struct pcie_device *dev)
+{
+	struct pci_dev *pdev = dev->port;
+	int dpc = pdev->dpc_cap;
+	u16 ctl;
+
+	/*
+	 * Clear DPC Interrupt Status so we don't get an interrupt for an
+	 * old event when setting DPC Interrupt Enable.
+	 */
+	pci_write_config_word(pdev, dpc + PCI_EXP_DPC_STATUS,
+			      PCI_EXP_DPC_STATUS_INTERRUPT);
+
+	pci_read_config_word(pdev, dpc + PCI_EXP_DPC_CTL, &ctl);
+	ctl &= ~PCI_EXP_DPC_CTL_EN_MASK;
+	ctl |= PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN;
+	pci_write_config_word(pdev, dpc + PCI_EXP_DPC_CTL, ctl);
+}
+
+static void dpc_disable(struct pcie_device *dev)
+{
+	struct pci_dev *pdev = dev->port;
+	int dpc = pdev->dpc_cap;
+	u16 ctl;
+
+	/* Disable DPC triggering and DPC interrupts */
+	pci_read_config_word(pdev, dpc + PCI_EXP_DPC_CTL, &ctl);
+	ctl &= ~(PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN);
+	pci_write_config_word(pdev, dpc + PCI_EXP_DPC_CTL, ctl);
+}
+
 #define FLAG(x, y) (((x) & (y)) ? '+' : '-')
 static int dpc_probe(struct pcie_device *dev)
 {
 	struct pci_dev *pdev = dev->port;
 	struct device *device = &dev->device;
 	int status;
-	u16 ctl, cap;
+	u16 cap;
 
 	if (!pcie_aer_is_native(pdev) && !pcie_ports_dpc_native)
 		return -ENOTSUPP;
@@ -464 +433 @@
 	}
 
 	pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CAP, &cap);
-
-	pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, &ctl);
-	ctl &= ~PCI_EXP_DPC_CTL_EN_MASK;
-	ctl |= PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN;
-	pci_write_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, ctl);
+	dpc_enable(dev);
 
 	pci_info(pdev, "enabled with IRQ %d\n", dev->irq);
 	pci_info(pdev, "error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n",
@@ -477 +450 @@
 	return status;
 }
 
+static int dpc_suspend(struct pcie_device *dev)
+{
+	dpc_disable(dev);
+	return 0;
+}
+
+static int dpc_resume(struct pcie_device *dev)
+{
+	dpc_enable(dev);
+	return 0;
+}
+
 static void dpc_remove(struct pcie_device *dev)
 {
-	struct pci_dev *pdev = dev->port;
-	u16 ctl;
-
-	pci_read_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, &ctl);
-	ctl &= ~(PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN);
-	pci_write_config_word(pdev, pdev->dpc_cap + PCI_EXP_DPC_CTL, ctl);
+	dpc_disable(dev);
 }
 
 static struct pcie_port_service_driver dpcdriver = {
@@ -499 +465 @@
 	.port_type	= PCIE_ANY_PORT,
 	.service	= PCIE_PORT_SERVICE_DPC,
 	.probe		= dpc_probe,
+	.suspend	= dpc_suspend,
+	.resume		= dpc_resume,
 	.remove		= dpc_remove,
 };
+1 -1
drivers/pci/pcie/portdrv.c
@@ -786 +786 @@
 
 static struct pci_driver pcie_portdriver = {
 	.name		= "pcieport",
-	.id_table	= &port_pci_ids[0],
+	.id_table	= port_pci_ids,
 
 	.probe		= pcie_portdrv_probe,
 	.remove		= pcie_portdrv_remove,
+24 -12
drivers/pci/probe.c
@@ -889 +889 @@
 	dev_set_msi_domain(&bus->dev, d);
 }
 
+static bool pci_preserve_config(struct pci_host_bridge *host_bridge)
+{
+	if (pci_acpi_preserve_config(host_bridge))
+		return true;
+
+	if (host_bridge->dev.parent && host_bridge->dev.parent->of_node)
+		return of_pci_preserve_config(host_bridge->dev.parent->of_node);
+
+	return false;
+}
+
 static int pci_register_host_bridge(struct pci_host_bridge *bridge)
 {
 	struct device *parent = bridge->dev.parent;
@@ -993 +982 @@
 
 	if (nr_node_ids > 1 && pcibus_to_node(bus) == NUMA_NO_NODE)
 		dev_warn(&bus->dev, "Unknown NUMA node; performance will be reduced\n");
+
+	/* Check if the boot configuration by FW needs to be preserved */
+	bridge->preserve_config = pci_preserve_config(bridge);
 
 	/* Coalesce contiguous windows */
 	resource_list_for_each_entry_safe(window, n, &resources) {
@@ -3093 +3079 @@
 
 	bus = bridge->bus;
 
-	/*
-	 * We insert PCI resources into the iomem_resource and
-	 * ioport_resource trees in either pci_bus_claim_resources()
-	 * or pci_bus_assign_resources().
-	 */
-	if (pci_has_flag(PCI_PROBE_ONLY)) {
+	/* If we must preserve the resource configuration, claim now */
+	if (bridge->preserve_config)
 		pci_bus_claim_resources(bus);
-	} else {
-		pci_bus_size_bridges(bus);
-		pci_bus_assign_resources(bus);
 
-		list_for_each_entry(child, &bus->children, node)
-			pcie_bus_configure_settings(child);
-	}
+	/*
+	 * Assign whatever was left unassigned. If we didn't claim above,
+	 * this will reassign everything.
+	 */
+	pci_assign_unassigned_root_bus_resources(bus);
+
+	list_for_each_entry(child, &bus->children, node)
+		pcie_bus_configure_settings(child);
 
 	pci_bus_add_devices(bus);
 	return 0;
+4
drivers/pci/quirks.c
@@ -5099 +5099 @@
 	{ PCI_VENDOR_ID_BROADCOM, 0x1750, pci_quirk_mf_endpoint_acs },
 	{ PCI_VENDOR_ID_BROADCOM, 0x1751, pci_quirk_mf_endpoint_acs },
 	{ PCI_VENDOR_ID_BROADCOM, 0x1752, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_BROADCOM, 0x1760, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_BROADCOM, 0x1761, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_BROADCOM, 0x1762, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_BROADCOM, 0x1763, pci_quirk_mf_endpoint_acs },
 	{ PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs },
 	/* Amazon Annapurna Labs */
 	{ PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, pci_quirk_al_acs },
drivers/pci/setup-bus.c (+83 -8)
···
  * tighter packing. Prefetchable range support.
  */
 
+#include <linux/bitops.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
···
 #include <linux/errno.h>
 #include <linux/ioport.h>
 #include <linux/cache.h>
+#include <linux/limits.h>
+#include <linux/sizes.h>
 #include <linux/slab.h>
 #include <linux/acpi.h>
 #include "pci.h"
···
     size = min_size;
     if (old_size == 1)
         old_size = 0;
-    if (size < old_size)
-        size = old_size;
 
-    size = ALIGN(max(size, add_size) + children_add_size, align);
-    return size;
+    size = max(size, add_size) + children_add_size;
+    return ALIGN(max(size, old_size), align);
 }
 
 resource_size_t __weak pcibios_window_alignment(struct pci_bus *bus,
···
     for (order = 0; order <= max_order; order++) {
         resource_size_t align1 = 1;
 
-        align1 <<= (order + 20);
+        align1 <<= order + __ffs(SZ_1M);
 
         if (!align)
             min_align = align1;
···
     }
 
     return min_align;
+}
+
+/**
+ * pbus_upstream_space_available - Check no upstream resource limits allocation
+ * @bus: The bus
+ * @mask: Mask the resource flag, then compare it with type
+ * @type: The type of resource from bridge
+ * @size: The size required from the bridge window
+ * @align: Required alignment for the resource
+ *
+ * Checks that @size can fit inside the upstream bridge resources that are
+ * already assigned.
+ *
+ * Return: %true if enough space is available on all assigned upstream
+ * resources.
+ */
+static bool pbus_upstream_space_available(struct pci_bus *bus, unsigned long mask,
+                                          unsigned long type, resource_size_t size,
+                                          resource_size_t align)
+{
+    struct resource_constraint constraint = {
+        .max = RESOURCE_SIZE_MAX,
+        .align = align,
+    };
+    struct pci_bus *downstream = bus;
+    struct resource *r;
+
+    while ((bus = bus->parent)) {
+        if (pci_is_root_bus(bus))
+            break;
+
+        pci_bus_for_each_resource(bus, r) {
+            if (!r || !r->parent || (r->flags & mask) != type)
+                continue;
+
+            if (resource_size(r) >= size) {
+                struct resource gap = {};
+
+                if (find_resource_space(r, &gap, size, &constraint) == 0) {
+                    gap.flags = type;
+                    pci_dbg(bus->self,
+                            "Assigned bridge window %pR to %pR free space at %pR\n",
+                            r, &bus->busn_res, &gap);
+                    return true;
+                }
+            }
+
+            if (bus->self) {
+                pci_info(bus->self,
+                         "Assigned bridge window %pR to %pR cannot fit 0x%llx required for %s bridging to %pR\n",
+                         r, &bus->busn_res,
+                         (unsigned long long)size,
+                         pci_name(downstream->self),
+                         &downstream->busn_res);
+            }
+
+            return false;
+        }
+    }
+
+    return true;
 }
···
                          struct list_head *realloc_head)
 {
     struct pci_dev *dev;
-    resource_size_t min_align, align, size, size0, size1;
+    resource_size_t min_align, win_align, align, size, size0, size1;
     resource_size_t aligns[24]; /* Alignments from 1MB to 8TB */
     int order, max_order;
     struct resource *b_res = find_bus_resource_of_type(bus,
···
          * resources.
          */
         align = pci_resource_alignment(dev, r);
-        order = __ffs(align) - 20;
+        order = __ffs(align) - __ffs(SZ_1M);
         if (order < 0)
             order = 0;
         if (order >= ARRAY_SIZE(aligns)) {
···
         }
     }
 
+    win_align = window_alignment(bus, b_res->flags);
     min_align = calculate_mem_align(aligns, max_order);
-    min_align = max(min_align, window_alignment(bus, b_res->flags));
+    min_align = max(min_align, win_align);
     size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), min_align);
     add_align = max(min_align, add_align);
+
+    if (bus->self && size0 &&
+        !pbus_upstream_space_available(bus, mask | IORESOURCE_PREFETCH, type,
+                                       size0, add_align)) {
+        min_align = 1ULL << (max_order + __ffs(SZ_1M));
+        min_align = max(min_align, win_align);
+        size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), win_align);
+        add_align = win_align;
+        pci_info(bus->self, "bridge window %pR to %pR requires relaxed alignment rules\n",
+                 b_res, &bus->busn_res);
+    }
+
     size1 = (!realloc_head || (realloc_head && !add_size && !children_add_size)) ? size0 :
         calculate_memsize(size, min_size, add_size, children_add_size,
                           resource_size(b_res), add_align);
drivers/pci/switch/switchtec.c (+8 -8)
···
 static dev_t switchtec_devt;
 static DEFINE_IDA(switchtec_minor_ida);
 
-struct class *switchtec_class;
+const struct class switchtec_class = {
+    .name = "switchtec",
+};
 EXPORT_SYMBOL_GPL(switchtec_class);
 
 enum mrpc_state {
···
     dev = &stdev->dev;
     device_initialize(dev);
-    dev->class = switchtec_class;
+    dev->class = &switchtec_class;
     dev->parent = &pdev->dev;
     dev->groups = switchtec_device_groups;
     dev->release = stdev_release;
···
     if (rc)
         return rc;
 
-    switchtec_class = class_create("switchtec");
-    if (IS_ERR(switchtec_class)) {
-        rc = PTR_ERR(switchtec_class);
+    rc = class_register(&switchtec_class);
+    if (rc)
         goto err_create_class;
-    }
 
     rc = pci_register_driver(&switchtec_pci_driver);
     if (rc)
···
     return 0;
 
 err_pci_register:
-    class_destroy(switchtec_class);
+    class_unregister(&switchtec_class);
 
 err_create_class:
     unregister_chrdev_region(switchtec_devt, max_devices);
···
 static void __exit switchtec_exit(void)
 {
     pci_unregister_driver(&switchtec_pci_driver);
-    class_destroy(switchtec_class);
+    class_unregister(&switchtec_class);
     unregister_chrdev_region(switchtec_devt, max_devices);
     ida_destroy(&switchtec_minor_ida);
 
drivers/usb/cdns3/cdnsp-pci.c (+1 -1)
···
 
 static struct pci_driver cdnsp_pci_driver = {
     .name = "cdnsp-pci",
-    .id_table = &cdnsp_pci_ids[0],
+    .id_table = cdnsp_pci_ids,
     .probe = cdnsp_pci_probe,
     .remove = cdnsp_pci_remove,
     .driver = {
drivers/usb/gadget/udc/cdns2/cdns2-pci.c (+1 -1)
···
 
 static struct pci_driver cdns2_pci_driver = {
     .name = "cdns2-pci",
-    .id_table = &cdns2_pci_ids[0],
+    .id_table = cdns2_pci_ids,
     .probe = cdns2_pci_probe,
     .remove = cdns2_pci_remove,
     .driver = {
include/linux/ioport.h (+40 -4)
···
 #define DEFINE_RES_DMA(_dma) \
     DEFINE_RES_DMA_NAMED((_dma), NULL)
 
+/**
+ * typedef resource_alignf - Resource alignment callback
+ * @data: Private data used by the callback
+ * @res: Resource candidate range (an empty resource space)
+ * @size: The minimum size of the empty space
+ * @align: Alignment from the constraints
+ *
+ * Callback allows calculating resource placement and alignment beyond min,
+ * max, and align fields in the struct resource_constraint.
+ *
+ * Return: Start address for the resource.
+ */
+typedef resource_size_t (*resource_alignf)(void *data,
+                                           const struct resource *res,
+                                           resource_size_t size,
+                                           resource_size_t align);
+
+/**
+ * struct resource_constraint - constraints to be met while searching empty
+ *                              resource space
+ * @min: The minimum address for the memory range
+ * @max: The maximum address for the memory range
+ * @align: Alignment for the start address of the empty space
+ * @alignf: Additional alignment constraints callback
+ * @alignf_data: Data provided for @alignf callback
+ *
+ * Contains the range and alignment constraints that have to be met during
+ * find_resource_space(). @alignf can be NULL indicating no alignment beyond
+ * @align is necessary.
+ */
+struct resource_constraint {
+    resource_size_t min, max, align;
+    resource_alignf alignf;
+    void *alignf_data;
+};
+
 /* PC/ISA/whatever - the normal PC address spaces: IO and memory */
 extern struct resource ioport_resource;
 extern struct resource iomem_resource;
···
 extern int allocate_resource(struct resource *root, struct resource *new,
                              resource_size_t size, resource_size_t min,
                              resource_size_t max, resource_size_t align,
-                             resource_size_t (*alignf)(void *,
-                                                       const struct resource *,
-                                                       resource_size_t,
-                                                       resource_size_t),
+                             resource_alignf alignf,
                              void *alignf_data);
 struct resource *lookup_resource(struct resource *root, resource_size_t start);
 int adjust_resource(struct resource *res, resource_size_t start,
···
     r->end = max(r1->end, r2->end);
     return true;
 }
+
+int find_resource_space(struct resource *root, struct resource *new,
+                        resource_size_t size, struct resource_constraint *constraint);
 
 /* Convenience shorthand with allocation */
 #define request_region(start,n,name) __request_region(&ioport_resource, (start), (n), (name), 0)
include/linux/pci-epc.h (+14 -1)
···
 
 #define to_pci_epc(device) container_of((device), struct pci_epc, dev)
 
+#ifdef CONFIG_PCI_ENDPOINT
+
 #define pci_epc_create(dev, ops) \
     __pci_epc_create((dev), (ops), THIS_MODULE)
 #define devm_pci_epc_create(dev, ops) \
···
 void pci_epc_linkdown(struct pci_epc *epc);
 void pci_epc_init_notify(struct pci_epc *epc);
 void pci_epc_notify_pending_init(struct pci_epc *epc, struct pci_epf *epf);
-void pci_epc_bme_notify(struct pci_epc *epc);
+void pci_epc_deinit_notify(struct pci_epc *epc);
+void pci_epc_bus_master_enable_notify(struct pci_epc *epc);
 void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
                         enum pci_epc_interface_type type);
 int pci_epc_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
···
                            phys_addr_t *phys_addr, size_t size);
 void pci_epc_mem_free_addr(struct pci_epc *epc, phys_addr_t phys_addr,
                            void __iomem *virt_addr, size_t size);
+
+#else
+static inline void pci_epc_init_notify(struct pci_epc *epc)
+{
+}
+
+static inline void pci_epc_deinit_notify(struct pci_epc *epc)
+{
+}
+#endif /* CONFIG_PCI_ENDPOINT */
 #endif /* __LINUX_PCI_EPC_H */
include/linux/pci-epf.h (+6 -4)
···
 
 /**
  * struct pci_epc_event_ops - Callbacks for capturing the EPC events
- * @core_init: Callback for the EPC initialization complete event
+ * @epc_init: Callback for the EPC initialization complete event
+ * @epc_deinit: Callback for the EPC deinitialization event
  * @link_up: Callback for the EPC link up event
  * @link_down: Callback for the EPC link down event
- * @bme: Callback for the EPC BME (Bus Master Enable) event
+ * @bus_master_enable: Callback for the EPC Bus Master Enable event
  */
 struct pci_epc_event_ops {
-    int (*core_init)(struct pci_epf *epf);
+    int (*epc_init)(struct pci_epf *epf);
+    void (*epc_deinit)(struct pci_epf *epf);
     int (*link_up)(struct pci_epf *epf);
     int (*link_down)(struct pci_epf *epf);
-    int (*bme)(struct pci_epf *epf);
+    int (*bus_master_enable)(struct pci_epf *epf);
 };
 
 /**
include/linux/pci.h (+5 -5)
···
                                        this is D0-D3, D0 being fully
                                        functional, and D3 being off. */
     u8           pm_cap;            /* PM capability offset */
-    unsigned int imm_ready:1;       /* Supports Immediate Readiness */
     unsigned int pme_support:5;     /* Bitmask of states from which PME#
                                        can be generated */
     unsigned int pme_poll:1;        /* Poll device's PME status bit */
+    unsigned int pinned:1;          /* Whether this dev is pinned */
+    unsigned int imm_ready:1;       /* Supports Immediate Readiness */
     unsigned int d1_support:1;      /* Low power state D1 is supported */
     unsigned int d2_support:1;      /* Low power state D2 is supported */
     unsigned int no_d1d2:1;         /* D1 and D2 are forbidden */
···
                         struct resource *res, resource_size_t size,
                         resource_size_t align, resource_size_t min,
                         unsigned long type_mask,
-                        resource_size_t (*alignf)(void *,
-                                                  const struct resource *,
-                                                  resource_size_t,
-                                                  resource_size_t),
+                        resource_alignf alignf,
                         void *alignf_data);
 
···
 int pcim_iomap_regions_request_all(struct pci_dev *pdev, int mask,
                                    const char *name);
 void pcim_iounmap_regions(struct pci_dev *pdev, int mask);
+void __iomem *pcim_iomap_range(struct pci_dev *pdev, int bar,
+                               unsigned long offset, unsigned long len);
 
 extern int pci_pci_problems;
 #define PCIPCI_FAIL 1   /* No PCI PCI DMA */
include/linux/switchtec.h (+1 -1)
···
     return container_of(dev, struct switchtec_dev, dev);
 }
 
-extern struct class *switchtec_class;
+extern const struct class switchtec_class;
 
 #endif
kernel/resource.c (+31 -37)
···
 };
 EXPORT_SYMBOL(iomem_resource);
 
-/* constraints to be met while allocating resources */
-struct resource_constraint {
-    resource_size_t min, max, align;
-    resource_size_t (*alignf)(void *, const struct resource *,
-                              resource_size_t, resource_size_t);
-    void *alignf_data;
-};
-
 static DEFINE_RWLOCK(resource_lock);
 
 static struct resource *next_resource(struct resource *p, bool skip_children)
···
 {
 }
 
-static resource_size_t simple_align_resource(void *data,
-                                             const struct resource *avail,
-                                             resource_size_t size,
-                                             resource_size_t align)
-{
-    return avail->start;
-}
-
 static void resource_clip(struct resource *res, resource_size_t min,
                           resource_size_t max)
 {
···
 }
 
 /*
- * Find empty slot in the resource tree with the given range and
+ * Find empty space in the resource tree with the given range and
  * alignment constraints
  */
-static int __find_resource(struct resource *root, struct resource *old,
-                           struct resource *new,
-                           resource_size_t size,
-                           struct resource_constraint *constraint)
+static int __find_resource_space(struct resource *root, struct resource *old,
+                                 struct resource *new, resource_size_t size,
+                                 struct resource_constraint *constraint)
 {
     struct resource *this = root->child;
     struct resource tmp = *new, avail, alloc;
+    resource_alignf alignf = constraint->alignf;
 
     tmp.start = root->start;
     /*
···
         avail.flags = new->flags & ~IORESOURCE_UNSET;
         if (avail.start >= tmp.start) {
             alloc.flags = avail.flags;
-            alloc.start = constraint->alignf(constraint->alignf_data, &avail,
-                                             size, constraint->align);
+            if (alignf) {
+                alloc.start = alignf(constraint->alignf_data,
+                                     &avail, size, constraint->align);
+            } else {
+                alloc.start = avail.start;
+            }
             alloc.end = alloc.start + size - 1;
             if (alloc.start <= alloc.end &&
                 resource_contains(&avail, &alloc)) {
···
     return -EBUSY;
 }
 
-/*
- * Find empty slot in the resource tree given range and alignment.
+/**
+ * find_resource_space - Find empty space in the resource tree
+ * @root: Root resource descriptor
+ * @new: Resource descriptor awaiting an empty resource space
+ * @size: The minimum size of the empty space
+ * @constraint: The range and alignment constraints to be met
+ *
+ * Finds an empty space under @root in the resource tree satisfying range and
+ * alignment @constraints.
+ *
+ * Return:
+ * * %0      - if successful, @new members start, end, and flags are altered.
+ * * %-EBUSY - if no empty space was found.
  */
-static int find_resource(struct resource *root, struct resource *new,
-                         resource_size_t size,
-                         struct resource_constraint *constraint)
+int find_resource_space(struct resource *root, struct resource *new,
+                        resource_size_t size,
+                        struct resource_constraint *constraint)
 {
-    return __find_resource(root, NULL, new, size, constraint);
+    return __find_resource_space(root, NULL, new, size, constraint);
 }
+EXPORT_SYMBOL_GPL(find_resource_space);
 
 /**
  * reallocate_resource - allocate a slot in the resource tree given range & alignment.
···
 
     write_lock(&resource_lock);
 
-    if ((err = __find_resource(root, old, &new, newsize, constraint)))
+    if ((err = __find_resource_space(root, old, &new, newsize, constraint)))
         goto out;
 
     if (resource_contains(&new, old)) {
···
 int allocate_resource(struct resource *root, struct resource *new,
                       resource_size_t size, resource_size_t min,
                       resource_size_t max, resource_size_t align,
-                      resource_size_t (*alignf)(void *,
-                                                const struct resource *,
-                                                resource_size_t,
-                                                resource_size_t),
+                      resource_alignf alignf,
                       void *alignf_data)
 {
     int err;
     struct resource_constraint constraint;
 
-    if (!alignf)
-        alignf = simple_align_resource;
-
     constraint.min = min;
     constraint.max = max;
···
     }
 
     write_lock(&resource_lock);
-    err = find_resource(root, new, size, &constraint);
+    err = find_resource_space(root, new, size, &constraint);
     if (err >= 0 && __request_resource(root, new))
         err = -EBUSY;
     write_unlock(&resource_lock);