Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pci-v5.20-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:

- Consolidate duplicated 'next function' scanning and extend to allow
'isolated functions' on s390, similar to existing hypervisor support
(Niklas Schnelle)

Resource management:

- Implement pci_iobar_pfn() for sparc, which allows us to remove the
sparc-specific pci_mmap_page_range() and pci_mmap_resource_range().

This removes the ability to map the entire PCI I/O space using
/proc/bus/pci, but we believe that's already been broken since
v2.6.28 (Arnd Bergmann)

- Move common PCI definitions to asm-generic/pci.h and rework others
to be more specific and more encapsulated in arches that need
them (Stafford Horne)

Power management:

- Convert drivers to new *_PM_OPS macros to avoid need for '#ifdef
CONFIG_PM_SLEEP' or '__maybe_unused' (Bjorn Helgaas)

Virtualization:

- Add ACS quirk for Broadcom BCM5750x multifunction NICs that isolate
the functions but don't advertise an ACS capability (Pavan Chebbi)

Error handling:

- Clear PCI Status register during enumeration in case firmware left
errors logged (Kai-Heng Feng)

- When we have native control of AER, enable error reporting for all
devices that support AER. Previously only a few drivers enabled
this (Stefan Roese)

- Keep AER error reporting enabled for switches. Previously we
enabled this during enumeration but immediately disabled it (Stefan
Roese)

- Iterate over error counters instead of error strings to avoid
printing junk in AER sysfs counters (Mohamed Khalfella)

ASPM:

- Remove pcie_aspm_pm_state_change() so ASPM config changes, e.g.,
via sysfs, are not lost across power state changes (Kai-Heng Feng)

Endpoint framework:

- Don't stop an EPC when unbinding an EPF from it (Shunsuke Mie)

Endpoint embedded DMA controller driver:

- Simplify and clean up support for the DesignWare embedded DMA
(eDMA) controller (Frank Li, Serge Semin)

Broadcom STB PCIe controller driver:

- Avoid config space accesses when link is down because we can't
recover from the CPU aborts these cause (Jim Quinlan)

- Look for power regulators described under Root Ports in DT and
enable them before scanning the secondary bus (Jim Quinlan)

- Disable/enable regulators in suspend/resume (Jim Quinlan)

Freescale i.MX6 PCIe controller driver:

- Simplify and clean up clock and PHY management (Richard Zhu)

- Disable/enable regulators in suspend/resume (Richard Zhu)

- Set PCIE_DBI_RO_WR_EN before writing DBI registers (Richard Zhu)

- Allow speeds faster than Gen2 (Richard Zhu)

- Make link being down a non-fatal error so controller probe doesn't
fail if there are no Endpoints connected (Richard Zhu)

Loongson PCIe controller driver:

- Add ACPI and MCFG support for Loongson LS7A (Huacai Chen)

- Avoid config reads to non-existent LS2K/LS7A devices because a
hardware defect causes machine hangs (Huacai Chen)

- Work around LS7A integrated devices that report incorrect Interrupt
Pin values (Jianmin Lv)

Marvell Aardvark PCIe controller driver:

- Add support for AER and Slot capability on emulated bridge (Pali
Rohár)

MediaTek PCIe controller driver:

- Add Airoha EN7523 to DT binding (John Crispin)

- Allow building of driver for ARCH_AIROHA (Felix Fietkau)

MediaTek PCIe Gen3 controller driver:

- Print decoded LTSSM state when the link doesn't come up (Jianjun
Wang)

NVIDIA Tegra194 PCIe controller driver:

- Convert DT binding to json-schema (Vidya Sagar)

- Add DT bindings and driver support for Tegra234 Root Port and
Endpoint mode (Vidya Sagar)

- Fix some Root Port interrupt handling issues (Vidya Sagar)

- Set default Max Payload Size to 256 bytes (Vidya Sagar)

- Fix Data Link Feature capability programming (Vidya Sagar)

- Extend Endpoint mode support to devices beyond Controller-5 (Vidya
Sagar)

Qualcomm PCIe controller driver:

- Rework clock, reset, PHY power-on ordering to avoid hangs and
improve consistency (Robert Marko, Christian Marangi)

- Move pipe_clk handling to PHY drivers (Dmitry Baryshkov)

- Add IPQ60xx support (Selvam Sathappan Periakaruppan)

- Allow ASPM L1 and substates for 2.7.0 (Krishna chaitanya chundru)

- Add support for more than 32 MSI interrupts (Dmitry Baryshkov)

Renesas R-Car PCIe controller driver:

- Convert DT binding to json-schema (Herve Codina)

- Add Renesas RZ/N1D (R9A06G032) to rcar-gen2 DT binding and driver
(Herve Codina)

Samsung Exynos PCIe controller driver:

- Fix phy-exynos-pcie driver so it follows the 'phy_init() before
phy_power_on()' PHY programming model (Marek Szyprowski)

Synopsys DesignWare PCIe controller driver:

- Simplify and clean up the DWC core extensively (Serge Semin)

- Fix an issue with programming the ATU for regions that cross a 4GB
boundary (Serge Semin)

- Enable the CDM check if 'snps,enable-cdm-check' exists; previously
we skipped it if 'num-lanes' was absent (Serge Semin)

- Allocate a 32-bit DMA-able page to be MSI target instead of using a
driver data structure that may not be addressable with 32-bit
address (Will McVicker)

- Add DWC core support for more than 32 MSI interrupts (Dmitry
Baryshkov)

Xilinx Versal CPM PCIe controller driver:

- Add DT binding and driver support for Versal CPM5 Gen5 Root Port
(Bharat Kumar Gogada)"

* tag 'pci-v5.20-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (150 commits)
PCI: imx6: Support more than Gen2 speed link mode
PCI: imx6: Set PCIE_DBI_RO_WR_EN before writing DBI registers
PCI: imx6: Reformat suspend callback to keep symmetric with resume
PCI: imx6: Move the imx6_pcie_ltssm_disable() earlier
PCI: imx6: Disable clocks in reverse order of enable
PCI: imx6: Do not hide PHY driver callbacks and refine the error handling
PCI: imx6: Reduce resume time by only starting link if it was up before suspend
PCI: imx6: Mark the link down as non-fatal error
PCI: imx6: Move regulator enable out of imx6_pcie_deassert_core_reset()
PCI: imx6: Turn off regulator when system is in suspend mode
PCI: imx6: Call host init function directly in resume
PCI: imx6: Disable i.MX6QDL clock when disabling ref clocks
PCI: imx6: Propagate .host_init() errors to caller
PCI: imx6: Collect clock enables in imx6_pcie_clk_enable()
PCI: imx6: Factor out ref clock disable to match enable
PCI: imx6: Move imx6_pcie_clk_disable() earlier
PCI: imx6: Move imx6_pcie_enable_ref_clk() earlier
PCI: imx6: Move PHY management functions together
PCI: imx6: Move imx6_pcie_grp_offset(), imx6_pcie_configure_type() earlier
PCI: imx6: Convert to NOIRQ_SYSTEM_SLEEP_PM_OPS()
...

+4042 -2760
+3 -4
Documentation/PCI/pci-iov-howto.rst
···
 	...
 }

-static int dev_suspend(struct pci_dev *dev, pm_message_t state)
+static int dev_suspend(struct device *dev)
 {
 	...

 	return 0;
 }

-static int dev_resume(struct pci_dev *dev)
+static int dev_resume(struct device *dev)
 {
 	...
···
 	.id_table = dev_id_table,
 	.probe = dev_probe,
 	.remove = dev_remove,
-	.suspend = dev_suspend,
-	.resume = dev_resume,
+	.driver.pm = &dev_pm_ops,
 	.shutdown = dev_shutdown,
 	.sriov_configure = dev_sriov_configure,
 };
+1 -1
Documentation/PCI/sysfs-pci.rst
···
 mmap() through files in /proc/bus/pci, platforms may also set HAVE_PCI_MMAP.

 Alternatively, platforms which set HAVE_PCI_MMAP may provide their own
-implementation of pci_mmap_page_range() instead of defining
+implementation of pci_mmap_resource_range() instead of defining
 ARCH_GENERIC_PCI_MMAP_RESOURCE.

 Platforms which support write-combining maps of PCI resources must define
···
+1
Documentation/devicetree/bindings/pci/mediatek-pcie.txt
···
 	"mediatek,mt7622-pcie"
 	"mediatek,mt7623-pcie"
 	"mediatek,mt7629-pcie"
+	"airoha,en7523-pcie"
 - device_type: Must be "pci"
 - reg: Base addresses and lengths of the root ports.
 - reg-names: Names of the above areas to use during resource lookup.
···
+319
Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie-ep.yaml
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/nvidia,tegra194-pcie-ep.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: NVIDIA Tegra194 (and later) PCIe Endpoint controller (Synopsys DesignWare Core based)

maintainers:
  - Thierry Reding <thierry.reding@gmail.com>
  - Jon Hunter <jonathanh@nvidia.com>
  - Vidya Sagar <vidyas@nvidia.com>

description: |
  This PCIe controller is based on the Synopsys DesignWare PCIe IP and thus
  inherits all the common properties defined in snps,dw-pcie-ep.yaml. Some
  of the controller instances are dual mode; they can work either in Root
  Port mode or Endpoint mode but one at a time.

  On Tegra194, controllers C0, C4 and C5 support Endpoint mode.
  On Tegra234, controllers C5, C6, C7 and C10 support Endpoint mode.

  Note: On Tegra194's P2972-0000 platform, only C5 controller can be enabled to
  operate in the Endpoint mode because of the way the platform is designed.

properties:
  compatible:
    enum:
      - nvidia,tegra194-pcie-ep
      - nvidia,tegra234-pcie-ep

  reg:
    items:
      - description: controller's application logic registers
      - description: iATU and DMA registers. This is where the iATU (internal
          Address Translation Unit) registers of the PCIe core are made
          available for software access.
      - description: aperture where the Root Port's own configuration
          registers are available.
      - description: aperture used to map the remote Root Complex address space

  reg-names:
    items:
      - const: appl
      - const: atu_dma
      - const: dbi
      - const: addr_space

  interrupts:
    items:
      - description: controller interrupt

  interrupt-names:
    items:
      - const: intr

  clocks:
    items:
      - description: module clock

  clock-names:
    items:
      - const: core

  resets:
    items:
      - description: APB bus interface reset
      - description: module reset

  reset-names:
    items:
      - const: apb
      - const: core

  reset-gpios:
    description: Must contain a phandle to a GPIO controller followed by GPIO
      that is being used as PERST input signal. Please refer to pci.txt.

  phys:
    minItems: 1
    maxItems: 8

  phy-names:
    minItems: 1
    items:
      - const: p2u-0
      - const: p2u-1
      - const: p2u-2
      - const: p2u-3
      - const: p2u-4
      - const: p2u-5
      - const: p2u-6
      - const: p2u-7

  power-domains:
    maxItems: 1
    description: |
      A phandle to the node that controls power to the respective PCIe
      controller and a specifier name for the PCIe controller.

      Tegra194 specifiers are defined in "include/dt-bindings/power/tegra194-powergate.h"
      Tegra234 specifiers are defined in "include/dt-bindings/power/tegra234-powergate.h"

  interconnects:
    items:
      - description: memory read client
      - description: memory write client

  interconnect-names:
    items:
      - const: dma-mem # read
      - const: write

  dma-coherent: true

  nvidia,bpmp:
    $ref: /schemas/types.yaml#/definitions/phandle-array
    description: |
      Must contain a pair of phandles to BPMP controller node followed by
      controller ID. Following are the controller IDs for each controller:

      Tegra194

        0: C0
        1: C1
        2: C2
        3: C3
        4: C4
        5: C5

      Tegra234

        0 : C0
        1 : C1
        2 : C2
        3 : C3
        4 : C4
        5 : C5
        6 : C6
        7 : C7
        8 : C8
        9 : C9
        10: C10

    items:
      - items:
          - description: phandle to BPMP controller node
          - description: PCIe controller ID
            maximum: 10

  nvidia,aspm-cmrt-us:
    description: Common Mode Restore Time for proper operation of ASPM to be
      specified in microseconds

  nvidia,aspm-pwr-on-t-us:
    description: Power On time for proper operation of ASPM to be specified in
      microseconds

  nvidia,aspm-l0s-entrance-latency-us:
    description: ASPM L0s entrance latency to be specified in microseconds

  vddio-pex-ctl-supply:
    description: A phandle to the regulator supply for PCIe side band signals

  nvidia,refclk-select-gpios:
    maxItems: 1
    description: GPIO used to enable REFCLK to controller from the host

  nvidia,enable-ext-refclk:
    description: |
      This boolean property needs to be present if the controller is configured
      to receive Reference Clock from the host.
      NOTE: This is applicable only for Tegra234.

    $ref: /schemas/types.yaml#/definitions/flag

  nvidia,enable-srns:
    description: |
      This boolean property needs to be present if the controller is
      configured to operate in SRNS (Separate Reference Clocks with No
      Spread-Spectrum Clocking). NOTE: This is applicable only for
      Tegra234.

    $ref: /schemas/types.yaml#/definitions/flag

allOf:
  - $ref: /schemas/pci/snps,dw-pcie-ep.yaml#

unevaluatedProperties: false

required:
  - interrupts
  - interrupt-names
  - clocks
  - clock-names
  - resets
  - reset-names
  - power-domains
  - reset-gpios
  - vddio-pex-ctl-supply
  - num-lanes
  - phys
  - phy-names
  - nvidia,bpmp

examples:
  - |
    #include <dt-bindings/clock/tegra194-clock.h>
    #include <dt-bindings/gpio/tegra194-gpio.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/power/tegra194-powergate.h>
    #include <dt-bindings/reset/tegra194-reset.h>

    bus@0 {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges = <0x0 0x0 0x0 0x8 0x0>;

        pcie-ep@141a0000 {
            compatible = "nvidia,tegra194-pcie-ep";
            reg = <0x00 0x141a0000 0x0 0x00020000>, /* appl registers (128K) */
                  <0x00 0x3a040000 0x0 0x00040000>, /* iATU_DMA reg space (256K) */
                  <0x00 0x3a080000 0x0 0x00040000>, /* DBI reg space (256K) */
                  <0x1c 0x00000000 0x4 0x00000000>; /* Address Space (16G) */
            reg-names = "appl", "atu_dma", "dbi", "addr_space";
            interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>; /* controller interrupt */
            interrupt-names = "intr";

            clocks = <&bpmp TEGRA194_CLK_PEX1_CORE_5>;
            clock-names = "core";

            resets = <&bpmp TEGRA194_RESET_PEX1_CORE_5_APB>,
                     <&bpmp TEGRA194_RESET_PEX1_CORE_5>;
            reset-names = "apb", "core";

            power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8A>;
            pinctrl-names = "default";
            pinctrl-0 = <&clkreq_c5_bi_dir_state>;

            nvidia,bpmp = <&bpmp 5>;

            nvidia,aspm-cmrt-us = <60>;
            nvidia,aspm-pwr-on-t-us = <20>;
            nvidia,aspm-l0s-entrance-latency-us = <3>;

            vddio-pex-ctl-supply = <&vdd_1v8ao>;

            reset-gpios = <&gpio TEGRA194_MAIN_GPIO(GG, 1) GPIO_ACTIVE_LOW>;

            nvidia,refclk-select-gpios = <&gpio_aon TEGRA194_AON_GPIO(AA, 5)
                                          GPIO_ACTIVE_HIGH>;

            num-lanes = <8>;

            phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>,
                   <&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>,
                   <&p2u_nvhs_6>, <&p2u_nvhs_7>;

            phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3", "p2u-4",
                        "p2u-5", "p2u-6", "p2u-7";
        };
    };

  - |
    #include <dt-bindings/clock/tegra234-clock.h>
    #include <dt-bindings/gpio/tegra234-gpio.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/power/tegra234-powergate.h>
    #include <dt-bindings/reset/tegra234-reset.h>

    bus@0 {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges = <0x0 0x0 0x0 0x8 0x0>;

        pcie-ep@141a0000 {
            compatible = "nvidia,tegra234-pcie-ep";
            power-domains = <&bpmp TEGRA234_POWER_DOMAIN_PCIEX8A>;
            reg = <0x00 0x141a0000 0x0 0x00020000>, /* appl registers (128K) */
                  <0x00 0x3a040000 0x0 0x00040000>, /* iATU_DMA reg space (256K) */
                  <0x00 0x3a080000 0x0 0x00040000>, /* DBI reg space (256K) */
                  <0x27 0x40000000 0x4 0x00000000>; /* Address Space (16G) */
            reg-names = "appl", "atu_dma", "dbi", "addr_space";

            interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>; /* controller interrupt */
            interrupt-names = "intr";

            clocks = <&bpmp TEGRA234_CLK_PEX1_C5_CORE>;
            clock-names = "core";

            resets = <&bpmp TEGRA234_RESET_PEX1_CORE_5_APB>,
                     <&bpmp TEGRA234_RESET_PEX1_CORE_5>;
            reset-names = "apb", "core";

            nvidia,bpmp = <&bpmp 5>;

            nvidia,enable-ext-refclk;
            nvidia,aspm-cmrt-us = <60>;
            nvidia,aspm-pwr-on-t-us = <20>;
            nvidia,aspm-l0s-entrance-latency-us = <3>;

            vddio-pex-ctl-supply = <&p3701_vdd_1v8_ls>;

            reset-gpios = <&gpio TEGRA234_MAIN_GPIO(AF, 1) GPIO_ACTIVE_LOW>;

            nvidia,refclk-select-gpios = <&gpio_aon
                                          TEGRA234_AON_GPIO(AA, 4)
                                          GPIO_ACTIVE_HIGH>;

            num-lanes = <8>;

            phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>,
                   <&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>,
                   <&p2u_nvhs_6>, <&p2u_nvhs_7>;

            phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3", "p2u-4",
                        "p2u-5", "p2u-6", "p2u-7";
        };
    };
-245
Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
NVIDIA Tegra PCIe controller (Synopsys DesignWare Core based)

This PCIe controller is based on the Synopsis Designware PCIe IP
and thus inherits all the common properties defined in snps,dw-pcie.yaml and
snps,dw-pcie-ep.yaml.
Some of the controller instances are dual mode where in they can work either
in root port mode or endpoint mode but one at a time.

Required properties:
- power-domains: A phandle to the node that controls power to the respective
  PCIe controller and a specifier name for the PCIe controller. Following are
  the specifiers for the different PCIe controllers
    TEGRA194_POWER_DOMAIN_PCIEX8B: C0
    TEGRA194_POWER_DOMAIN_PCIEX1A: C1
    TEGRA194_POWER_DOMAIN_PCIEX1A: C2
    TEGRA194_POWER_DOMAIN_PCIEX1A: C3
    TEGRA194_POWER_DOMAIN_PCIEX4A: C4
    TEGRA194_POWER_DOMAIN_PCIEX8A: C5
  these specifiers are defined in
  "include/dt-bindings/power/tegra194-powergate.h" file.
- reg: A list of physical base address and length pairs for each set of
  controller registers. Must contain an entry for each entry in the reg-names
  property.
- reg-names: Must include the following entries:
  "appl": Controller's application logic registers
  "config": As per the definition in snps,dw-pcie.yaml
  "atu_dma": iATU and DMA registers. This is where the iATU (internal Address
             Translation Unit) registers of the PCIe core are made available
             for SW access.
  "dbi": The aperture where root port's own configuration registers are
         available
- interrupts: A list of interrupt outputs of the controller. Must contain an
  entry for each entry in the interrupt-names property.
- interrupt-names: Must include the following entries:
  "intr": The Tegra interrupt that is asserted for controller interrupts
- clocks: Must contain an entry for each entry in clock-names.
  See ../clocks/clock-bindings.txt for details.
- clock-names: Must include the following entries:
  - core
- resets: Must contain an entry for each entry in reset-names.
  See ../reset/reset.txt for details.
- reset-names: Must include the following entries:
  - apb
  - core
- phys: Must contain a phandle to P2U PHY for each entry in phy-names.
- phy-names: Must include an entry for each active lane.
  "p2u-N": where N ranges from 0 to one less than the total number of lanes
- nvidia,bpmp: Must contain a pair of phandle to BPMP controller node followed
  by controller-id. Following are the controller ids for each controller.
    0: C0
    1: C1
    2: C2
    3: C3
    4: C4
    5: C5
- vddio-pex-ctl-supply: Regulator supply for PCIe side band signals

RC mode:
- compatible: Tegra19x must contain "nvidia,tegra194-pcie"
- device_type: Must be "pci" for RC mode
- interrupt-names: Must include the following entries:
  "msi": The Tegra interrupt that is asserted when an MSI is received
- bus-range: Range of bus numbers associated with this controller
- #address-cells: Address representation for root ports (must be 3)
  - cell 0 specifies the bus and device numbers of the root port:
    [23:16]: bus number
    [15:11]: device number
  - cell 1 denotes the upper 32 address bits and should be 0
  - cell 2 contains the lower 32 address bits and is used to translate to the
    CPU address space
- #size-cells: Size representation for root ports (must be 2)
- ranges: Describes the translation of addresses for root ports and standard
  PCI regions. The entries must be 7 cells each, where the first three cells
  correspond to the address as described for the #address-cells property
  above, the fourth and fifth cells are for the physical CPU address to
  translate to and the sixth and seventh cells are as described for the
  #size-cells property above.
  - Entries setup the mapping for the standard I/O, memory and
    prefetchable PCI regions. The first cell determines the type of region
    that is setup:
    - 0x81000000: I/O memory region
    - 0x82000000: non-prefetchable memory region
    - 0xc2000000: prefetchable memory region
  Please refer to the standard PCI bus binding document for a more detailed
  explanation.
- #interrupt-cells: Size representation for interrupts (must be 1)
- interrupt-map-mask and interrupt-map: Standard PCI IRQ mapping properties
  Please refer to the standard PCI bus binding document for a more detailed
  explanation.

EP mode:
In Tegra194, Only controllers C0, C4 & C5 support EP mode.
- compatible: Tegra19x must contain "nvidia,tegra194-pcie-ep"
- reg-names: Must include the following entries:
  "addr_space": Used to map remote RC address space
- reset-gpios: Must contain a phandle to a GPIO controller followed by
  GPIO that is being used as PERST input signal. Please refer to pci.txt
  document.

Optional properties:
- pinctrl-names: A list of pinctrl state names.
  It is mandatory for C5 controller and optional for other controllers.
  - "default": Configures PCIe I/O for proper operation.
- pinctrl-0: phandle for the 'default' state of pin configuration.
  It is mandatory for C5 controller and optional for other controllers.
- supports-clkreq: Refer to Documentation/devicetree/bindings/pci/pci.txt
- nvidia,update-fc-fixup: This is a boolean property and needs to be present to
  improve performance when a platform is designed in such a way that it
  satisfies at least one of the following conditions thereby enabling root
  port to exchange optimum number of FC (Flow Control) credits with
  downstream devices
  1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
  2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and
     a) speed is Gen-2 and MPS is 256B
     b) speed is >= Gen-3 with any MPS
- nvidia,aspm-cmrt-us: Common Mode Restore Time for proper operation of ASPM
  to be specified in microseconds
- nvidia,aspm-pwr-on-t-us: Power On time for proper operation of ASPM to be
  specified in microseconds
- nvidia,aspm-l0s-entrance-latency-us: ASPM L0s entrance latency to be
  specified in microseconds

RC mode:
- vpcie3v3-supply: A phandle to the regulator node that supplies 3.3V to the slot
  if the platform has one such slot. (Ex:- x16 slot owned by C5 controller
  in p2972-0000 platform).
- vpcie12v-supply: A phandle to the regulator node that supplies 12V to the slot
  if the platform has one such slot. (Ex:- x16 slot owned by C5 controller
  in p2972-0000 platform).

EP mode:
- nvidia,refclk-select-gpios: Must contain a phandle to a GPIO controller
  followed by GPIO that is being used to enable REFCLK to controller from host

NOTE:- On Tegra194's P2972-0000 platform, only C5 controller can be enabled to
operate in the endpoint mode because of the way the platform is designed.

Examples:
=========

Tegra194 RC mode:
-----------------

	pcie@14180000 {
		compatible = "nvidia,tegra194-pcie";
		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
		reg = <0x00 0x14180000 0x0 0x00020000    /* appl registers (128K)      */
		       0x00 0x38000000 0x0 0x00040000    /* configuration space (256K) */
		       0x00 0x38040000 0x0 0x00040000>;  /* iATU_DMA reg space (256K)  */
		reg-names = "appl", "config", "atu_dma";

		#address-cells = <3>;
		#size-cells = <2>;
		device_type = "pci";
		num-lanes = <8>;
		linux,pci-domain = <0>;

		pinctrl-names = "default";
		pinctrl-0 = <&pex_rst_c5_out_state>, <&clkreq_c5_bi_dir_state>;

		clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>;
		clock-names = "core";

		resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>,
			 <&bpmp TEGRA194_RESET_PEX0_CORE_0>;
		reset-names = "apb", "core";

		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,	/* controller interrupt */
			     <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>;	/* MSI interrupt */
		interrupt-names = "intr", "msi";

		#interrupt-cells = <1>;
		interrupt-map-mask = <0 0 0 0>;
		interrupt-map = <0 0 0 0 &gic GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>;

		nvidia,bpmp = <&bpmp 0>;

		supports-clkreq;
		nvidia,aspm-cmrt-us = <60>;
		nvidia,aspm-pwr-on-t-us = <20>;
		nvidia,aspm-l0s-entrance-latency-us = <3>;

		bus-range = <0x0 0xff>;
		ranges = <0x81000000 0x0  0x38100000 0x0  0x38100000 0x0 0x00100000    /* downstream I/O (1MB) */
			  0x82000000 0x0  0x38200000 0x0  0x38200000 0x0 0x01E00000    /* non-prefetchable memory (30MB) */
			  0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>;  /* prefetchable memory (16GB) */

		vddio-pex-ctl-supply = <&vdd_1v8ao>;
		vpcie3v3-supply = <&vdd_3v3_pcie>;
		vpcie12v-supply = <&vdd_12v_pcie>;

		phys = <&p2u_hsio_2>, <&p2u_hsio_3>, <&p2u_hsio_4>,
		       <&p2u_hsio_5>;
		phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3";
	};

Tegra194 EP mode:
-----------------

	pcie-ep@141a0000 {
		compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8A>;
		reg = <0x00 0x141a0000 0x0 0x00020000   /* appl registers (128K)      */
		       0x00 0x3a040000 0x0 0x00040000   /* iATU_DMA reg space (256K)  */
		       0x00 0x3a080000 0x0 0x00040000   /* DBI reg space (256K)       */
		       0x1c 0x00000000 0x4 0x00000000>; /* Address Space (16G)        */
		reg-names = "appl", "atu_dma", "dbi", "addr_space";

		num-lanes = <8>;
		num-ib-windows = <2>;
		num-ob-windows = <8>;

		pinctrl-names = "default";
		pinctrl-0 = <&clkreq_c5_bi_dir_state>;

		clocks = <&bpmp TEGRA194_CLK_PEX1_CORE_5>;
		clock-names = "core";

		resets = <&bpmp TEGRA194_RESET_PEX1_CORE_5_APB>,
			 <&bpmp TEGRA194_RESET_PEX1_CORE_5>;
		reset-names = "apb", "core";

		interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;	/* controller interrupt */
		interrupt-names = "intr";

		nvidia,bpmp = <&bpmp 5>;

		nvidia,aspm-cmrt-us = <60>;
		nvidia,aspm-pwr-on-t-us = <20>;
		nvidia,aspm-l0s-entrance-latency-us = <3>;

		vddio-pex-ctl-supply = <&vdd_1v8ao>;

		reset-gpios = <&gpio TEGRA194_MAIN_GPIO(GG, 1) GPIO_ACTIVE_LOW>;

		nvidia,refclk-select-gpios = <&gpio_aon TEGRA194_AON_GPIO(AA, 5)
					     GPIO_ACTIVE_HIGH>;

		phys = <&p2u_nvhs_0>, <&p2u_nvhs_1>, <&p2u_nvhs_2>,
		       <&p2u_nvhs_3>, <&p2u_nvhs_4>, <&p2u_nvhs_5>,
		       <&p2u_nvhs_6>, <&p2u_nvhs_7>;

		phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3", "p2u-4",
			    "p2u-5", "p2u-6", "p2u-7";
	};
+350
Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.yaml
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/nvidia,tegra194-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: NVIDIA Tegra194 (and later) PCIe controller (Synopsys DesignWare Core based)

maintainers:
  - Thierry Reding <thierry.reding@gmail.com>
  - Jon Hunter <jonathanh@nvidia.com>
  - Vidya Sagar <vidyas@nvidia.com>

description: |
  This PCIe controller is based on the Synopsys DesignWare PCIe IP and thus
  inherits all the common properties defined in snps,dw-pcie.yaml. Some of
  the controller instances are dual mode where in they can work either in
  Root Port mode or Endpoint mode but one at a time.

  See nvidia,tegra194-pcie-ep.yaml for details on the Endpoint mode device
  tree bindings.

properties:
  compatible:
    enum:
      - nvidia,tegra194-pcie
      - nvidia,tegra234-pcie

  reg:
    items:
      - description: controller's application logic registers
      - description: configuration registers
      - description: iATU and DMA registers. This is where the iATU (internal
          Address Translation Unit) registers of the PCIe core are made
          available for software access.
      - description: aperture where the Root Port's own configuration
          registers are available.

  reg-names:
    items:
      - const: appl
      - const: config
      - const: atu_dma
      - const: dbi

  interrupts:
    items:
      - description: controller interrupt
      - description: MSI interrupt

  interrupt-names:
    items:
      - const: intr
      - const: msi

  clocks:
    items:
      - description: module clock

  clock-names:
    items:
      - const: core

  resets:
    items:
      - description: APB bus interface reset
      - description: module reset

  reset-names:
    items:
      - const: apb
      - const: core

  phys:
    minItems: 1
    maxItems: 8

  phy-names:
    minItems: 1
    items:
      - const: p2u-0
      - const: p2u-1
      - const: p2u-2
      - const: p2u-3
      - const: p2u-4
      - const: p2u-5
      - const: p2u-6
      - const: p2u-7

  power-domains:
    maxItems: 1
    description: |
      A phandle to the node that controls power to the respective PCIe
      controller and a specifier name for the PCIe controller.

      Tegra194 specifiers defined in "include/dt-bindings/power/tegra194-powergate.h"
      Tegra234 specifiers defined in "include/dt-bindings/power/tegra234-powergate.h"

  interconnects:
    items:
      - description: memory read client
      - description: memory write client

  interconnect-names:
    items:
      - const: dma-mem # read
      - const: write

  dma-coherent: true

  nvidia,bpmp:
    $ref: /schemas/types.yaml#/definitions/phandle-array
    description: |
      Must contain a pair of phandles to BPMP controller node followed by
      controller ID. Following are the controller IDs for each controller:

      Tegra194

        0: C0
        1: C1
        2: C2
        3: C3
        4: C4
        5: C5

      Tegra234

        0 : C0
        1 : C1
        2 : C2
        3 : C3
        4 : C4
        5 : C5
        6 : C6
        7 : C7
        8 : C8
        9 : C9
        10: C10

    items:
      - items:
          - description: phandle to BPMP controller node
          - description: PCIe controller ID
            maximum: 10

  nvidia,update-fc-fixup:
    description: |
      This is a boolean property and needs to be present to improve performance
      when a platform is designed in such a way that it satisfies at least one
      of the following conditions thereby enabling Root Port to exchange
      optimum number of FC (Flow Control) credits with downstream devices:

      NOTE: This is applicable only for Tegra194.

      1. If C0/C4/C5 run at x1/x2 link widths (irrespective of speed and MPS)
      2. If C0/C1/C2/C3/C4/C5 operate at their respective max link widths and
         a) speed is Gen-2 and MPS is 256B
         b) speed is >= Gen-3 with any MPS

    $ref: /schemas/types.yaml#/definitions/flag

  nvidia,aspm-cmrt-us:
    description: Common Mode Restore Time for proper operation of ASPM to be
      specified in microseconds

  nvidia,aspm-pwr-on-t-us:
    description: Power On time for proper operation of ASPM to be specified in
      microseconds

  nvidia,aspm-l0s-entrance-latency-us:
    description: ASPM L0s entrance latency to be specified in microseconds

  vddio-pex-ctl-supply:
    description: A phandle to the regulator supply for PCIe side band signals.

  vpcie3v3-supply:
    description: A phandle to the regulator node that supplies 3.3V to the slot
      if the platform has one such slot, e.g., x16 slot owned by C5 controller
      in p2972-0000 platform.

  vpcie12v-supply:
    description: A phandle to the regulator node that supplies 12V to the slot
      if the platform has one such slot, e.g., x16 slot owned by C5 controller
      in p2972-0000 platform.

  nvidia,enable-srns:
    description: |
      This boolean property needs to be present if the controller is
      configured to operate in SRNS (Separate Reference Clocks with No
      Spread-Spectrum Clocking). NOTE: This is applicable only for
      Tegra234.

    $ref: /schemas/types.yaml#/definitions/flag

  nvidia,enable-ext-refclk:
    description: |
      This boolean property needs to be present if the controller is
      configured to use the reference clocking coming in from an external
      clock source instead of using the internal clock source.

    $ref: /schemas/types.yaml#/definitions/flag

allOf:
  - $ref: /schemas/pci/snps,dw-pcie.yaml#

unevaluatedProperties: false

required:
  - interrupts
  - interrupt-names
  - interrupt-map
  - interrupt-map-mask
  - clocks
  - clock-names
  - resets
  - reset-names
  - power-domains
  - vddio-pex-ctl-supply
  - num-lanes
  - phys
  - phy-names
  - nvidia,bpmp

examples:
  - |
    #include <dt-bindings/clock/tegra194-clock.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/power/tegra194-powergate.h>
    #include <dt-bindings/reset/tegra194-reset.h>

    bus@0 {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges = <0x0 0x0 0x0 0x8 0x0>;

        pcie@14180000 {
            compatible = "nvidia,tegra194-pcie";
            power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
            reg = <0x0 0x14180000 0x0 0x00020000>, /* appl registers (128K) */
                  <0x0 0x38000000 0x0 0x00040000>, /* configuration space (256K) */
                  <0x0 0x38040000 0x0 0x00040000>, /* iATU_DMA reg space (256K) */
<0x0 0x38080000 0x0 0x00040000>; /* DBI reg space (256K) */ 243 + reg-names = "appl", "config", "atu_dma", "dbi"; 244 + 245 + #address-cells = <3>; 246 + #size-cells = <2>; 247 + device_type = "pci"; 248 + num-lanes = <8>; 249 + linux,pci-domain = <0>; 250 + 251 + pinctrl-names = "default"; 252 + pinctrl-0 = <&pex_rst_c5_out_state>, <&clkreq_c5_bi_dir_state>; 253 + 254 + clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>; 255 + clock-names = "core"; 256 + 257 + resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>, 258 + <&bpmp TEGRA194_RESET_PEX0_CORE_0>; 259 + reset-names = "apb", "core"; 260 + 261 + interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */ 262 + <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */ 263 + interrupt-names = "intr", "msi"; 264 + 265 + #interrupt-cells = <1>; 266 + interrupt-map-mask = <0 0 0 0>; 267 + interrupt-map = <0 0 0 0 &gic GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>; 268 + 269 + nvidia,bpmp = <&bpmp 0>; 270 + 271 + supports-clkreq; 272 + nvidia,aspm-cmrt-us = <60>; 273 + nvidia,aspm-pwr-on-t-us = <20>; 274 + nvidia,aspm-l0s-entrance-latency-us = <3>; 275 + 276 + bus-range = <0x0 0xff>; 277 + ranges = <0x81000000 0x0 0x38100000 0x0 0x38100000 0x0 0x00100000>, /* downstream I/O */ 278 + <0x82000000 0x0 0x38200000 0x0 0x38200000 0x0 0x01e00000>, /* non-prefetch memory */ 279 + <0xc2000000 0x18 0x00000000 0x18 0x00000000 0x4 0x00000000>; /* prefetchable memory */ 280 + 281 + vddio-pex-ctl-supply = <&vdd_1v8ao>; 282 + vpcie3v3-supply = <&vdd_3v3_pcie>; 283 + vpcie12v-supply = <&vdd_12v_pcie>; 284 + 285 + phys = <&p2u_hsio_2>, <&p2u_hsio_3>, <&p2u_hsio_4>, 286 + <&p2u_hsio_5>; 287 + phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3"; 288 + }; 289 + }; 290 + 291 + - | 292 + #include <dt-bindings/clock/tegra234-clock.h> 293 + #include <dt-bindings/interrupt-controller/arm-gic.h> 294 + #include <dt-bindings/power/tegra234-powergate.h> 295 + #include <dt-bindings/reset/tegra234-reset.h> 296 + 297 + bus@0 { 298 + #address-cells = <2>; 299 + 
#size-cells = <2>; 300 + ranges = <0x0 0x0 0x0 0x8 0x0>; 301 + 302 + pcie@14160000 { 303 + compatible = "nvidia,tegra234-pcie"; 304 + power-domains = <&bpmp TEGRA234_POWER_DOMAIN_PCIEX4BB>; 305 + reg = <0x00 0x14160000 0x0 0x00020000>, /* appl registers (128K) */ 306 + <0x00 0x36000000 0x0 0x00040000>, /* configuration space (256K) */ 307 + <0x00 0x36040000 0x0 0x00040000>, /* iATU_DMA reg space (256K) */ 308 + <0x00 0x36080000 0x0 0x00040000>; /* DBI reg space (256K) */ 309 + reg-names = "appl", "config", "atu_dma", "dbi"; 310 + 311 + #address-cells = <3>; 312 + #size-cells = <2>; 313 + device_type = "pci"; 314 + num-lanes = <4>; 315 + num-viewport = <8>; 316 + linux,pci-domain = <4>; 317 + 318 + clocks = <&bpmp TEGRA234_CLK_PEX0_C4_CORE>; 319 + clock-names = "core"; 320 + 321 + resets = <&bpmp TEGRA234_RESET_PEX0_CORE_4_APB>, 322 + <&bpmp TEGRA234_RESET_PEX0_CORE_4>; 323 + reset-names = "apb", "core"; 324 + 325 + interrupts = <GIC_SPI 51 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */ 326 + <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */ 327 + interrupt-names = "intr", "msi"; 328 + 329 + #interrupt-cells = <1>; 330 + interrupt-map-mask = <0 0 0 0>; 331 + interrupt-map = <0 0 0 0 &gic GIC_SPI 51 IRQ_TYPE_LEVEL_HIGH>; 332 + 333 + nvidia,bpmp = <&bpmp 4>; 334 + 335 + nvidia,aspm-cmrt-us = <60>; 336 + nvidia,aspm-pwr-on-t-us = <20>; 337 + nvidia,aspm-l0s-entrance-latency-us = <3>; 338 + 339 + bus-range = <0x0 0xff>; 340 + ranges = <0x43000000 0x21 0x40000000 0x21 0x40000000 0x2 0xe8000000>, /* prefetchable */ 341 + <0x02000000 0x0 0x40000000 0x24 0x28000000 0x0 0x08000000>, /* non-prefetchable */ 342 + <0x01000000 0x0 0x36100000 0x00 0x36100000 0x0 0x00100000>; /* downstream I/O */ 343 + 344 + vddio-pex-ctl-supply = <&p3701_vdd_AO_1v8>; 345 + 346 + phys = <&p2u_hsio_4>, <&p2u_hsio_5>, <&p2u_hsio_6>, 347 + <&p2u_hsio_7>; 348 + phy-names = "p2u-0", "p2u-1", "p2u-2", "p2u-3"; 349 + }; 350 + };
-84
Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt
··· 1 - Renesas AHB to PCI bridge 2 - ------------------------- 3 - 4 - This is the bridge used internally to connect the USB controllers to the 5 - AHB. There is one bridge instance per USB port connected to the internal 6 - OHCI and EHCI controllers. 7 - 8 - Required properties: 9 - - compatible: "renesas,pci-r8a7742" for the R8A7742 SoC; 10 - "renesas,pci-r8a7743" for the R8A7743 SoC; 11 - "renesas,pci-r8a7744" for the R8A7744 SoC; 12 - "renesas,pci-r8a7745" for the R8A7745 SoC; 13 - "renesas,pci-r8a7790" for the R8A7790 SoC; 14 - "renesas,pci-r8a7791" for the R8A7791 SoC; 15 - "renesas,pci-r8a7793" for the R8A7793 SoC; 16 - "renesas,pci-r8a7794" for the R8A7794 SoC; 17 - "renesas,pci-rcar-gen2" for a generic R-Car Gen2 or 18 - RZ/G1 compatible device. 19 - 20 - 21 - When compatible with the generic version, nodes must list the 22 - SoC-specific version corresponding to the platform first 23 - followed by the generic version. 24 - 25 - - reg: A list of physical regions to access the device: the first is 26 - the operational registers for the OHCI/EHCI controllers and the 27 - second is for the bridge configuration and control registers. 28 - - interrupts: interrupt for the device. 29 - - clocks: The reference to the device clock. 30 - - bus-range: The PCI bus number range; as this is a single bus, the range 31 - should be specified as the same value twice. 32 - - #address-cells: must be 3. 33 - - #size-cells: must be 2. 34 - - #interrupt-cells: must be 1. 35 - - interrupt-map: standard property used to define the mapping of the PCI 36 - interrupts to the GIC interrupts. 37 - - interrupt-map-mask: standard property that helps to define the interrupt 38 - mapping. 39 - 40 - Optional properties: 41 - - dma-ranges: a single range for the inbound memory region. If not supplied, 42 - defaults to 1GiB at 0x40000000. Note there are hardware restrictions on the 43 - allowed combinations of address and size. 
44 - 45 - Example SoC configuration: 46 - 47 - pci0: pci@ee090000 { 48 - compatible = "renesas,pci-r8a7790", "renesas,pci-rcar-gen2"; 49 - clocks = <&mstp7_clks R8A7790_CLK_EHCI>; 50 - reg = <0x0 0xee090000 0x0 0xc00>, 51 - <0x0 0xee080000 0x0 0x1100>; 52 - interrupts = <0 108 IRQ_TYPE_LEVEL_HIGH>; 53 - status = "disabled"; 54 - 55 - bus-range = <0 0>; 56 - #address-cells = <3>; 57 - #size-cells = <2>; 58 - #interrupt-cells = <1>; 59 - dma-ranges = <0x42000000 0 0x40000000 0 0x40000000 0 0x40000000>; 60 - interrupt-map-mask = <0xff00 0 0 0x7>; 61 - interrupt-map = <0x0000 0 0 1 &gic 0 108 IRQ_TYPE_LEVEL_HIGH 62 - 0x0800 0 0 1 &gic 0 108 IRQ_TYPE_LEVEL_HIGH 63 - 0x1000 0 0 2 &gic 0 108 IRQ_TYPE_LEVEL_HIGH>; 64 - 65 - usb@1,0 { 66 - reg = <0x800 0 0 0 0>; 67 - phys = <&usb0 0>; 68 - phy-names = "usb"; 69 - }; 70 - 71 - usb@2,0 { 72 - reg = <0x1000 0 0 0 0>; 73 - phys = <&usb0 0>; 74 - phy-names = "usb"; 75 - }; 76 - }; 77 - 78 - Example board setup: 79 - 80 - &pci0 { 81 - status = "okay"; 82 - pinctrl-0 = <&usb0_pins>; 83 - pinctrl-names = "default"; 84 - };
+50 -5
Documentation/devicetree/bindings/pci/qcom,pcie.yaml
··· 11 11 - Stanimir Varbanov <svarbanov@mm-sol.com> 12 12 13 13 description: | 14 - Qualcomm PCIe root complex controller is bansed on the Synopsys DesignWare 14 + Qualcomm PCIe root complex controller is based on the Synopsys DesignWare 15 15 PCIe IP. 16 16 17 17 properties: ··· 43 43 maxItems: 5 44 44 45 45 interrupts: 46 - maxItems: 1 46 + minItems: 1 47 + maxItems: 8 47 48 48 49 interrupt-names: 49 - items: 50 - - const: msi 50 + minItems: 1 51 + maxItems: 8 51 52 52 53 # Common definitions for clocks, clock-names and reset. 53 54 # Platform constraints are described later. ··· 615 614 - if: 616 615 not: 617 616 properties: 618 - compatibles: 617 + compatible: 619 618 contains: 620 619 enum: 621 620 - qcom,pcie-msm8996 ··· 623 622 required: 624 623 - resets 625 624 - reset-names 625 + 626 + # Newer chipsets support either 1 or 8 MSI vectors 627 + # On older chipsets it's always 1 MSI vector 628 + - if: 629 + properties: 630 + compatible: 631 + contains: 632 + enum: 633 + - qcom,pcie-msm8996 634 + - qcom,pcie-sc7280 635 + - qcom,pcie-sc8180x 636 + - qcom,pcie-sdm845 637 + - qcom,pcie-sm8150 638 + - qcom,pcie-sm8250 639 + - qcom,pcie-sm8450-pcie0 640 + - qcom,pcie-sm8450-pcie1 641 + then: 642 + oneOf: 643 + - properties: 644 + interrupts: 645 + maxItems: 1 646 + interrupt-names: 647 + items: 648 + - const: msi 649 + - properties: 650 + interrupts: 651 + minItems: 8 652 + interrupt-names: 653 + items: 654 + - const: msi0 655 + - const: msi1 656 + - const: msi2 657 + - const: msi3 658 + - const: msi4 659 + - const: msi5 660 + - const: msi6 661 + - const: msi7 662 + else: 663 + properties: 664 + interrupts: 665 + maxItems: 1 666 + interrupt-names: 667 + items: 668 + - const: msi 626 669 627 670 unevaluatedProperties: false 628 671
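To make the two allowed MSI shapes in the qcom,pcie schema concrete, here is a hedged devicetree sketch of the eight-vector form. The node name, unit address, and GIC SPI numbers are invented for illustration and are not taken from the binding; only the `msi0`…`msi7` interrupt-names come from the schema above.

```dts
pcie@1c00000 {
	compatible = "qcom,pcie-sm8250";
	/* ... other required properties elided ... */
	interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>,
		     <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
	interrupt-names = "msi0", "msi1", "msi2", "msi3",
			  "msi4", "msi5", "msi6", "msi7";
};
```

On the listed older chipsets the single-vector `interrupts`/`interrupt-names = "msi"` form remains the only valid shape.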
+186
Documentation/devicetree/bindings/pci/renesas,pci-rcar-gen2.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/renesas,pci-rcar-gen2.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Renesas AHB to PCI bridge 8 + 9 + maintainers: 10 + - Marek Vasut <marek.vasut+renesas@gmail.com> 11 + - Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> 12 + 13 + description: | 14 + This is the bridge used internally to connect the USB controllers to the 15 + AHB. There is one bridge instance per USB port connected to the internal 16 + OHCI and EHCI controllers. 17 + 18 + properties: 19 + compatible: 20 + oneOf: 21 + - items: 22 + - enum: 23 + - renesas,pci-r8a7742 # RZ/G1H 24 + - renesas,pci-r8a7743 # RZ/G1M 25 + - renesas,pci-r8a7744 # RZ/G1N 26 + - renesas,pci-r8a7745 # RZ/G1E 27 + - renesas,pci-r8a7790 # R-Car H2 28 + - renesas,pci-r8a7791 # R-Car M2-W 29 + - renesas,pci-r8a7793 # R-Car M2-N 30 + - renesas,pci-r8a7794 # R-Car E2 31 + - const: renesas,pci-rcar-gen2 # R-Car Gen2 and RZ/G1 32 + - items: 33 + - enum: 34 + - renesas,pci-r9a06g032 # RZ/N1D 35 + - const: renesas,pci-rzn1 # RZ/N1 36 + 37 + reg: 38 + items: 39 + - description: Operational registers for the OHCI/EHCI controllers. 40 + - description: Bridge configuration and control registers. 41 + 42 + interrupts: 43 + maxItems: 1 44 + 45 + clocks: true 46 + 47 + clock-names: true 48 + 49 + resets: 50 + maxItems: 1 51 + 52 + power-domains: 53 + maxItems: 1 54 + 55 + bus-range: 56 + description: | 57 + The PCI bus number range; as this is a single bus, the range 58 + should be specified as the same value twice. 59 + 60 + dma-ranges: 61 + description: | 62 + A single range for the inbound memory region. If not supplied, 63 + defaults to 1GiB at 0x40000000. Note there are hardware restrictions on 64 + the allowed combinations of address and size. 
65 + maxItems: 1 66 + 67 + patternProperties: 68 + 'usb@[0-1],0': 69 + type: object 70 + 71 + description: 72 + This a USB controller PCI device 73 + 74 + properties: 75 + reg: 76 + description: 77 + Identify the correct bus, device and function number in the 78 + form <bdf 0 0 0 0>. 79 + 80 + items: 81 + minItems: 5 82 + maxItems: 5 83 + 84 + phys: 85 + description: 86 + Reference to the USB phy 87 + maxItems: 1 88 + 89 + phy-names: 90 + maxItems: 1 91 + 92 + required: 93 + - reg 94 + - phys 95 + - phy-names 96 + 97 + unevaluatedProperties: false 98 + 99 + required: 100 + - compatible 101 + - reg 102 + - interrupts 103 + - interrupt-map 104 + - interrupt-map-mask 105 + - clocks 106 + - power-domains 107 + - bus-range 108 + - "#address-cells" 109 + - "#size-cells" 110 + - "#interrupt-cells" 111 + 112 + allOf: 113 + - $ref: /schemas/pci/pci-bus.yaml# 114 + 115 + - if: 116 + properties: 117 + compatible: 118 + contains: 119 + enum: 120 + - renesas,pci-rzn1 121 + then: 122 + properties: 123 + clocks: 124 + items: 125 + - description: Internal bus clock (AHB) for HOST 126 + - description: Internal bus clock (AHB) Power Management 127 + - description: PCI clock for USB subsystem 128 + clock-names: 129 + items: 130 + - const: hclkh 131 + - const: hclkpm 132 + - const: pciclk 133 + required: 134 + - clock-names 135 + else: 136 + properties: 137 + clocks: 138 + items: 139 + - description: Device clock 140 + clock-names: 141 + items: 142 + - const: pclk 143 + required: 144 + - resets 145 + 146 + unevaluatedProperties: false 147 + 148 + examples: 149 + - | 150 + #include <dt-bindings/interrupt-controller/arm-gic.h> 151 + #include <dt-bindings/clock/r8a7790-cpg-mssr.h> 152 + #include <dt-bindings/power/r8a7790-sysc.h> 153 + 154 + pci@ee090000 { 155 + compatible = "renesas,pci-r8a7790", "renesas,pci-rcar-gen2"; 156 + device_type = "pci"; 157 + reg = <0xee090000 0xc00>, 158 + <0xee080000 0x1100>; 159 + clocks = <&cpg CPG_MOD 703>; 160 + power-domains = <&sysc 
R8A7790_PD_ALWAYS_ON>; 161 + resets = <&cpg 703>; 162 + interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>; 163 + 164 + bus-range = <0 0>; 165 + #address-cells = <3>; 166 + #size-cells = <2>; 167 + #interrupt-cells = <1>; 168 + ranges = <0x02000000 0 0xee080000 0xee080000 0 0x00010000>; 169 + dma-ranges = <0x42000000 0 0x40000000 0x40000000 0 0x40000000>; 170 + interrupt-map-mask = <0xf800 0 0 0x7>; 171 + interrupt-map = <0x0000 0 0 1 &gic GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, 172 + <0x0800 0 0 1 &gic GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, 173 + <0x1000 0 0 2 &gic GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>; 174 + 175 + usb@1,0 { 176 + reg = <0x800 0 0 0 0>; 177 + phys = <&usb0 0>; 178 + phy-names = "usb"; 179 + }; 180 + 181 + usb@2,0 { 182 + reg = <0x1000 0 0 0 0>; 183 + phys = <&usb0 0>; 184 + phy-names = "usb"; 185 + }; 186 + };
+2 -2
Documentation/devicetree/bindings/pci/snps,dw-pcie.yaml
···
34 34        minItems: 2
35 35        maxItems: 5
36 36        items:
37    -        enum: [ dbi, dbi2, config, atu, app, elbi, mgmt, ctrl, parf, cfg, link,
38    -                ulreg, smu, mpu, apb, phy ]
   37 +        enum: [ dbi, dbi2, config, atu, atu_dma, app, appl, elbi, mgmt, ctrl,
   38 +                parf, cfg, link, ulreg, smu, mpu, apb, phy ]
39 39   
40 40     num-lanes:
41 41       description: |
+37 -1
Documentation/devicetree/bindings/pci/xilinx-versal-cpm.yaml
··· 14 14 15 15 properties: 16 16 compatible: 17 - const: xlnx,versal-cpm-host-1.00 17 + enum: 18 + - xlnx,versal-cpm-host-1.00 19 + - xlnx,versal-cpm5-host 18 20 19 21 reg: 20 22 items: 21 23 - description: CPM system level control and status registers. 22 24 - description: Configuration space region and bridge registers. 25 + - description: CPM5 control and status registers. 26 + minItems: 2 23 27 24 28 reg-names: 25 29 items: 26 30 - const: cpm_slcr 27 31 - const: cfg 32 + - const: cpm_csr 33 + minItems: 2 28 34 29 35 interrupts: 30 36 maxItems: 1 ··· 101 95 interrupt-controller; 102 96 }; 103 97 }; 98 + 99 + cpm5_pcie: pcie@fcdd0000 { 100 + compatible = "xlnx,versal-cpm5-host"; 101 + device_type = "pci"; 102 + #address-cells = <3>; 103 + #interrupt-cells = <1>; 104 + #size-cells = <2>; 105 + interrupts = <0 72 4>; 106 + interrupt-parent = <&gic>; 107 + interrupt-map-mask = <0 0 0 7>; 108 + interrupt-map = <0 0 0 1 &pcie_intc_1 0>, 109 + <0 0 0 2 &pcie_intc_1 1>, 110 + <0 0 0 3 &pcie_intc_1 2>, 111 + <0 0 0 4 &pcie_intc_1 3>; 112 + bus-range = <0x00 0xff>; 113 + ranges = <0x02000000 0x0 0xe0000000 0x0 0xe0000000 0x0 0x10000000>, 114 + <0x43000000 0x80 0x00000000 0x80 0x00000000 0x0 0x80000000>; 115 + msi-map = <0x0 &its_gic 0x0 0x10000>; 116 + reg = <0x00 0xfcdd0000 0x00 0x1000>, 117 + <0x06 0x00000000 0x00 0x1000000>, 118 + <0x00 0xfce20000 0x00 0x1000000>; 119 + reg-names = "cpm_slcr", "cfg", "cpm_csr"; 120 + 121 + pcie_intc_1: interrupt-controller { 122 + #address-cells = <0>; 123 + #interrupt-cells = <1>; 124 + interrupt-controller; 125 + }; 126 + }; 127 + 104 128 };
+8
MAINTAINERS
···
15862 15862   S:	Maintained
15863 15863   F:	drivers/pci/controller/dwc/*spear*
15864 15864   
      15865 +  PCI DRIVER FOR XILINX VERSAL CPM
      15866 +  M:	Bharat Kumar Gogada <bharat.kumar.gogada@amd.com>
      15867 +  M:	Michal Simek <michal.simek@amd.com>
      15868 +  L:	linux-pci@vger.kernel.org
      15869 +  S:	Maintained
      15870 +  F:	Documentation/devicetree/bindings/pci/xilinx-versal-cpm.yaml
      15871 +  F:	drivers/pci/controller/pcie-xilinx-cpm.c
      15872 +  
15865 15873   PCMCIA SUBSYSTEM
15866 15874   M:	Dominik Brodowski <linux@dominikbrodowski.net>
15867 15875   S:	Odd Fixes
-9
arch/alpha/include/asm/dma.h
···
365 365   #define KERNEL_HAVE_CHECK_DMA
366 366   extern int check_dma(unsigned int dmanr);
367 367   
368     -  /* From PCI */
369     -  
370     -  #ifdef CONFIG_PCI
371     -  extern int isa_dma_bridge_buggy;
372     -  #else
373     -  #define isa_dma_bridge_buggy   (0)
374     -  #endif
375     -  
376     -  
377 368   #endif /* _ASM_DMA_H */
-6
arch/alpha/include/asm/pci.h
···
56 56   
57 57   /* IOMMU controls. */
58 58   
59    -  /* TODO: integrate with include/asm-generic/pci.h ? */
60    -  static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
61    -  {
62    -  	return channel ? 15 : 14;
63    -  }
64    -  
65 59   #define pci_domain_nr(bus) ((struct pci_controller *)(bus)->sysdata)->index
66 60   
67 61   static inline int pci_proc_domain(struct pci_bus *bus)
-5
arch/arc/include/asm/dma.h
···
7 7    #define ASM_ARC_DMA_H
8 8    
9 9    #define MAX_DMA_ADDRESS 0xC0000000
10   -  #ifdef CONFIG_PCI
11   -  extern int isa_dma_bridge_buggy;
12   -  #else
13   -  #define isa_dma_bridge_buggy	0
14   -  #endif
15 10   
16 11   #endif
-6
arch/arm/include/asm/dma.h
···
143 143   
144 144   #endif /* CONFIG_ISA_DMA_API */
145 145   
146     -  #ifdef CONFIG_PCI
147     -  extern int isa_dma_bridge_buggy;
148     -  #else
149     -  #define isa_dma_bridge_buggy	(0)
150     -  #endif
151     -  
152 146   #endif /* __ASM_ARM_DMA_H */
-5
arch/arm/include/asm/pci.h
···
22 22   #define HAVE_PCI_MMAP
23 23   #define ARCH_GENERIC_PCI_MMAP_RESOURCE
24 24   
25    -  static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
26    -  {
27    -  	return channel ? 15 : 14;
28    -  }
29    -  
30 25   extern void pcibios_report_status(unsigned int status_mask, int warn);
31 26   
32 27   #endif /* __KERNEL__ */
+2 -16
arch/arm64/include/asm/pci.h
···
9 9    #include <asm/io.h>
10 10   
11 11   #define PCIBIOS_MIN_IO		0x1000
12    -  #define PCIBIOS_MIN_MEM		0
13 12   
14 13   /*
15 14    * Set to 1 if the kernel should re-assign all PCI bus numbers
···
17 18   	(pci_has_flag(PCI_REASSIGN_ALL_BUS))
18 19   
19 20   #define arch_can_pci_mmap_wc()	1
20    -  #define ARCH_GENERIC_PCI_MMAP_RESOURCE	1
21 21   
22    -  extern int isa_dma_bridge_buggy;
23    -  
24    -  #ifdef CONFIG_PCI
25    -  static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
26    -  {
27    -  	/* no legacy IRQ on arm64 */
28    -  	return -ENODEV;
29    -  }
30    -  
31    -  static inline int pci_proc_domain(struct pci_bus *bus)
32    -  {
33    -  	return 1;
34    -  }
35    -  #endif  /* CONFIG_PCI */
   22 +  /* Generic PCI */
   23 +  #include <asm-generic/pci.h>
36 24   
37 25   #endif  /* __ASM_PCI_H */
+2 -21
arch/csky/include/asm/pci.h
···
9 9    
10 10   #include <asm/io.h>
11 11   
12    -  #define PCIBIOS_MIN_IO		0
13    -  #define PCIBIOS_MIN_MEM		0
14    -  
15    -  /* C-SKY shim does not initialize PCI bus */
16    -  #define pcibios_assign_all_busses()	1
17    -  
18    -  extern int isa_dma_bridge_buggy;
19    -  
20    -  #ifdef CONFIG_PCI
21    -  static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
22    -  {
23    -  	/* no legacy IRQ on csky */
24    -  	return -ENODEV;
25    -  }
26    -  
27    -  static inline int pci_proc_domain(struct pci_bus *bus)
28    -  {
29    -  	/* always show the domain in /proc */
30    -  	return 1;
31    -  }
32    -  #endif  /* CONFIG_PCI */
   12 +  /* Generic PCI */
   13 +  #include <asm-generic/pci.h>
33 14   
34 15   #endif  /* __ASM_CSKY_PCI_H */
-2
arch/ia64/include/asm/dma.h
···
12 12   
13 13   extern unsigned long MAX_DMA_ADDRESS;
14 14   
15    -  extern int isa_dma_bridge_buggy;
16    -  
17 15   #define free_dma(x)
18 16   
19 17   #endif /* _ASM_IA64_DMA_H */
-6
arch/ia64/include/asm/pci.h
···
63 63   	return (pci_domain_nr(bus) != 0);
64 64   }
65 65   
66    -  #define HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ
67    -  static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
68    -  {
69    -  	return channel ? isa_irq_to_vector(15) : isa_irq_to_vector(14);
70    -  }
71    -  
72 66   #endif /* _ASM_IA64_PCI_H */
-6
arch/m68k/include/asm/dma.h
···
6 6      bootmem allocator (but this should do it for this) */
7 7    #define MAX_DMA_ADDRESS PAGE_OFFSET
8 8    
9    -  #ifdef CONFIG_PCI
10   -  extern int isa_dma_bridge_buggy;
11   -  #else
12   -  #define isa_dma_bridge_buggy	(0)
13   -  #endif
14   -  
15 9    #endif /* _M68K_DMA_H */
-2
arch/m68k/include/asm/pci.h
···
2 2    #ifndef _ASM_M68K_PCI_H
3 3    #define _ASM_M68K_PCI_H
4 4    
5    -  #include <asm-generic/pci.h>
6    -  
7 5    #define	pcibios_assign_all_busses()	1
8 6    
9 7    #define	PCIBIOS_MIN_IO		0x00000100
-6
arch/microblaze/include/asm/dma.h
···
9 9    /* Virtual address corresponding to last available physical memory address.  */
10 10   #define MAX_DMA_ADDRESS (CONFIG_KERNEL_START + memory_size - 1)
11 11   
12    -  #ifdef CONFIG_PCI
13    -  extern int isa_dma_bridge_buggy;
14    -  #else
15    -  #define isa_dma_bridge_buggy	(0)
16    -  #endif
17    -  
18 12   #endif /* _ASM_MICROBLAZE_DMA_H */
-8
arch/mips/include/asm/dma.h
···
307 307   extern int request_dma(unsigned int dmanr, const char * device_id);	/* reserve a DMA channel */
308 308   extern void free_dma(unsigned int dmanr);	/* release it again */
309 309   
310     -  /* From PCI */
311     -  
312     -  #ifdef CONFIG_PCI
313     -  extern int isa_dma_bridge_buggy;
314     -  #else
315     -  #define isa_dma_bridge_buggy	(0)
316     -  #endif
317     -  
318 310   #endif /* _ASM_DMA_H */
-6
arch/mips/include/asm/pci.h
···
139 139   /* Do platform specific device initialization at pci_enable_device() time */
140 140   extern int pcibios_plat_dev_init(struct pci_dev *dev);
141 141   
142     -  /* Chances are this interrupt is wired PC-style ...  */
143     -  static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
144     -  {
145     -  	return channel ? 15 : 14;
146     -  }
147     -  
148 142   #endif /* _ASM_PCI_H */
-6
arch/parisc/include/asm/dma.h
···
176 176   
177 177   #define free_dma(dmanr)
178 178   
179     -  #ifdef CONFIG_PCI
180     -  extern int isa_dma_bridge_buggy;
181     -  #else
182     -  #define isa_dma_bridge_buggy	(0)
183     -  #endif
184     -  
185 179   #endif /* _ASM_DMA_H */
-5
arch/parisc/include/asm/pci.h
···
162 162   #define PCIBIOS_MIN_IO          0x10
163 163   #define PCIBIOS_MIN_MEM         0x1000	/* NBPG - but pci/setup-res.c dies */
164 164   
165     -  static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
166     -  {
167     -  	return channel ? 15 : 14;
168     -  }
169     -  
170 165   #define HAVE_PCI_MMAP
171 166   #define ARCH_GENERIC_PCI_MMAP_RESOURCE
172 167   
-6
arch/powerpc/include/asm/dma.h
···
340 340   /* release it again */
341 341   extern void free_dma(unsigned int dmanr);
342 342   
343     -  #ifdef CONFIG_PCI
344     -  extern int isa_dma_bridge_buggy;
345     -  #else
346     -  #define isa_dma_bridge_buggy	(0)
347     -  #endif
348     -  
349 343   #endif /* __KERNEL__ */
350 344   #endif	/* _ASM_POWERPC_DMA_H */
-1
arch/powerpc/include/asm/pci.h
···
39 39   #define pcibios_assign_all_busses() \
40 40   	(pci_has_flag(PCI_REASSIGN_ALL_BUS))
41 41   
42    -  #define HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ
43 42   static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
44 43   {
45 44   	if (ppc_md.pci_get_legacy_ide_irq)
+4 -27
arch/riscv/include/asm/pci.h
···
12 12   
13 13   #include <asm/io.h>
14 14   
15    -  #define PCIBIOS_MIN_IO		0
16    -  #define PCIBIOS_MIN_MEM		0
17    -  
18    -  /* RISC-V shim does not initialize PCI bus */
19    -  #define pcibios_assign_all_busses() 1
20    -  
21    -  #define ARCH_GENERIC_PCI_MMAP_RESOURCE 1
22    -  
23    -  extern int isa_dma_bridge_buggy;
24    -  
25    -  #ifdef CONFIG_PCI
26    -  static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
27    -  {
28    -  	/* no legacy IRQ on risc-v */
29    -  	return -ENODEV;
30    -  }
31    -  
32    -  static inline int pci_proc_domain(struct pci_bus *bus)
33    -  {
34    -  	/* always show the domain in /proc */
35    -  	return 1;
36    -  }
37    -  
38    -  #ifdef CONFIG_NUMA
39    -  
   15 +  #if defined(CONFIG_PCI) && defined(CONFIG_NUMA)
40 16   static inline int pcibus_to_node(struct pci_bus *bus)
41 17   {
42 18   	return dev_to_node(&bus->dev);
···
22 46   				 cpu_all_mask :				\
23 47   				 cpumask_of_node(pcibus_to_node(bus)))
24 48   #endif
25    -  #endif /* CONFIG_NUMA */
   49 +  #endif /* defined(CONFIG_PCI) && defined(CONFIG_NUMA) */
26 50   
27    -  #endif  /* CONFIG_PCI */
   51 +  /* Generic PCI */
   52 +  #include <asm-generic/pci.h>
28 53   
29 54   #endif  /* _ASM_RISCV_PCI_H */
-6
arch/s390/include/asm/dma.h
···
11 11    */
12 12   #define MAX_DMA_ADDRESS         0x80000000
13 13   
14    -  #ifdef CONFIG_PCI
15    -  extern int isa_dma_bridge_buggy;
16    -  #else
17    -  #define isa_dma_bridge_buggy	(0)
18    -  #endif
19    -  
20 14   #endif /* _ASM_S390_DMA_H */
-1
arch/s390/include/asm/pci.h
···
6 6    #include <linux/mutex.h>
7 7    #include <linux/iommu.h>
8 8    #include <linux/pci_hotplug.h>
9    -  #include <asm-generic/pci.h>
10 9    #include <asm/pci_clp.h>
11 10   #include <asm/pci_debug.h>
12 11   #include <asm/pci_insn.h>
+20 -62
arch/s390/pci/pci_bus.c
··· 145 145 struct zpci_dev *zdev; 146 146 int devfn, rc, ret = 0; 147 147 148 - if (!zbus->function[0]) 149 - return 0; 150 - 151 148 for (devfn = 0; devfn < ZPCI_FUNCTIONS_PER_BUS; devfn++) { 152 149 zdev = zbus->function[devfn]; 153 150 if (zdev && zdev->state == ZPCI_FN_STATE_CONFIGURED) { ··· 181 184 182 185 /* zpci_bus_create_pci_bus - Create the PCI bus associated with this zbus 183 186 * @zbus: the zbus holding the zdevices 184 - * @f0: function 0 of the bus 187 + * @fr: PCI root function that will determine the bus's domain, and bus speeed 185 188 * @ops: the pci operations 186 189 * 187 - * Function zero is taken as a parameter as this is used to determine the 188 - * domain, multifunction property and maximum bus speed of the entire bus. 190 + * The PCI function @fr determines the domain (its UID), multifunction property 191 + * and maximum bus speed of the entire bus. 189 192 * 190 193 * Return: 0 on success, an error code otherwise 191 194 */ 192 - static int zpci_bus_create_pci_bus(struct zpci_bus *zbus, struct zpci_dev *f0, struct pci_ops *ops) 195 + static int zpci_bus_create_pci_bus(struct zpci_bus *zbus, struct zpci_dev *fr, struct pci_ops *ops) 193 196 { 194 197 struct pci_bus *bus; 195 198 int domain; 196 199 197 - domain = zpci_alloc_domain((u16)f0->uid); 200 + domain = zpci_alloc_domain((u16)fr->uid); 198 201 if (domain < 0) 199 202 return domain; 200 203 201 204 zbus->domain_nr = domain; 202 - zbus->multifunction = f0->rid_available; 203 - zbus->max_bus_speed = f0->max_bus_speed; 205 + zbus->multifunction = fr->rid_available; 206 + zbus->max_bus_speed = fr->max_bus_speed; 204 207 205 208 /* 206 209 * Note that the zbus->resources are taken over and zbus->resources ··· 300 303 } 301 304 } 302 305 303 - /* zpci_bus_create_hotplug_slots - Add hotplug slot(s) for device added to bus 304 - * @zdev: the zPCI device that was newly added 305 - * 306 - * Add the hotplug slot(s) for the newly added PCI function. 
Normally this is 307 - * simply the slot for the function itself. If however we are adding the 308 - * function 0 on a zbus, it might be that we already registered functions on 309 - * that zbus but could not create their hotplug slots yet so add those now too. 310 - * 311 - * Return: 0 on success, an error code otherwise 312 - */ 313 - static int zpci_bus_create_hotplug_slots(struct zpci_dev *zdev) 314 - { 315 - struct zpci_bus *zbus = zdev->zbus; 316 - int devfn, rc = 0; 317 - 318 - rc = zpci_init_slot(zdev); 319 - if (rc) 320 - return rc; 321 - zdev->has_hp_slot = 1; 322 - 323 - if (zdev->devfn == 0 && zbus->multifunction) { 324 - /* Now that function 0 is there we can finally create the 325 - * hotplug slots for those functions with devfn != 0 that have 326 - * been parked in zbus->function[] waiting for us to be able to 327 - * create the PCI bus. 328 - */ 329 - for (devfn = 1; devfn < ZPCI_FUNCTIONS_PER_BUS; devfn++) { 330 - zdev = zbus->function[devfn]; 331 - if (zdev && !zdev->has_hp_slot) { 332 - rc = zpci_init_slot(zdev); 333 - if (rc) 334 - return rc; 335 - zdev->has_hp_slot = 1; 336 - } 337 - } 338 - 339 - } 340 - 341 - return rc; 342 - } 343 - 344 306 static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev) 345 307 { 346 308 int rc = -EINVAL; ··· 308 352 pr_err("devfn %04x is already assigned\n", zdev->devfn); 309 353 return rc; 310 354 } 355 + 311 356 zdev->zbus = zbus; 312 357 zbus->function[zdev->devfn] = zdev; 313 358 zpci_nb_devices++; 314 359 315 - if (zbus->bus) { 316 - if (zbus->multifunction && !zdev->rid_available) { 317 - WARN_ONCE(1, "rid_available not set for multifunction\n"); 318 - goto error; 319 - } 320 - 321 - zpci_bus_create_hotplug_slots(zdev); 322 - } else { 323 - /* Hotplug slot will be created once function 0 appears */ 324 - zbus->multifunction = 1; 360 + if (zbus->multifunction && !zdev->rid_available) { 361 + WARN_ONCE(1, "rid_available not set for multifunction\n"); 362 + goto error; 325 363 } 364 + rc = 
zpci_init_slot(zdev); 365 + if (rc) 366 + goto error; 367 + zdev->has_hp_slot = 1; 326 368 327 369 return 0; 328 370 ··· 354 400 return -ENOMEM; 355 401 } 356 402 357 - if (zdev->devfn == 0) { 403 + if (!zbus->bus) { 404 + /* The UID of the first PCI function registered with a zpci_bus 405 + * is used as the domain number for that bus. Currently there 406 + * is exactly one zpci_bus per domain. 407 + */ 358 408 rc = zpci_bus_create_pci_bus(zbus, zdev, ops); 359 409 if (rc) 360 410 goto error;
-6
arch/sh/include/asm/dma.h
···
137 137   extern int dma_create_sysfs_files(struct dma_channel *, struct dma_info *);
138 138   extern void dma_remove_sysfs_files(struct dma_channel *, struct dma_info *);
139 139   
140     -  #ifdef CONFIG_PCI
141     -  extern int isa_dma_bridge_buggy;
142     -  #else
143     -  #define isa_dma_bridge_buggy	(0)
144     -  #endif
145     -  
146 140   #endif /* __ASM_SH_DMA_H */
-6
arch/sh/include/asm/pci.h
···
88 88   	return hose->need_domain_info;
89 89   }
90 90   
91    -  /* Chances are this interrupt is wired PC-style ...  */
92    -  static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
93    -  {
94    -  	return channel ? 15 : 14;
95    -  }
96    -  
97 91   #endif /* __ASM_SH_PCI_H */
-8
arch/sparc/include/asm/dma.h
···
82 82   #define DMA_BURST64	0x40
83 83   #define DMA_BURSTBITS	0x7f
84 84   
85    -  /* From PCI */
86    -  
87    -  #ifdef CONFIG_PCI
88    -  extern int isa_dma_bridge_buggy;
89    -  #else
90    -  #define isa_dma_bridge_buggy	(0)
91    -  #endif
92    -  
93 85   #ifdef CONFIG_SPARC32
94 86   struct device;
95 87   
+1 -9
arch/sparc/include/asm/pci.h
···
37 37   #define HAVE_PCI_MMAP
38 38   #define arch_can_pci_mmap_io()	1
39 39   #define HAVE_ARCH_PCI_GET_UNMAPPED_AREA
   40 +  #define ARCH_GENERIC_PCI_MMAP_RESOURCE
40 41   #define get_pci_unmapped_area get_fb_unmapped_area
41 42   #endif /* CONFIG_SPARC64 */
42    -  
43    -  #if defined(CONFIG_SPARC64) || defined(CONFIG_LEON_PCI)
44    -  static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
45    -  {
46    -  	return PCI_IRQ_NONE;
47    -  }
48    -  #else
49    -  #include <asm-generic/pci.h>
50    -  #endif
51 43   
52 44   #endif /* ___ASM_SPARC_PCI_H */
arch/sparc/kernel/pci.c (+4 -145)

 }

 /* Platform support for /proc/bus/pci/X/Y mmap()s. */
-
-/* If the user uses a host-bridge as the PCI device, he may use
- * this to perform a raw mmap() of the I/O or MEM space behind
- * that controller.
- *
- * This can be useful for execution of x86 PCI bios initialization code
- * on a PCI card, like the xfree86 int10 stuff does.
- */
-static int __pci_mmap_make_offset_bus(struct pci_dev *pdev, struct vm_area_struct *vma,
-				      enum pci_mmap_state mmap_state)
+int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma)
 {
 	struct pci_pbm_info *pbm = pdev->dev.archdata.host_controller;
-	unsigned long space_size, user_offset, user_size;
+	resource_size_t ioaddr = pci_resource_start(pdev, bar);

-	if (mmap_state == pci_mmap_io) {
-		space_size = resource_size(&pbm->io_space);
-	} else {
-		space_size = resource_size(&pbm->mem_space);
-	}
-
-	/* Make sure the request is in range. */
-	user_offset = vma->vm_pgoff << PAGE_SHIFT;
-	user_size = vma->vm_end - vma->vm_start;
-
-	if (user_offset >= space_size ||
-	    (user_offset + user_size) > space_size)
+	if (!pbm)
 		return -EINVAL;

-	if (mmap_state == pci_mmap_io) {
-		vma->vm_pgoff = (pbm->io_space.start +
-				 user_offset) >> PAGE_SHIFT;
-	} else {
-		vma->vm_pgoff = (pbm->mem_space.start +
-				 user_offset) >> PAGE_SHIFT;
-	}
-
-	return 0;
-}
-
-/* Adjust vm_pgoff of VMA such that it is the physical page offset
- * corresponding to the 32-bit pci bus offset for DEV requested by the user.
- *
- * Basically, the user finds the base address for his device which he wishes
- * to mmap.  They read the 32-bit value from the config space base register,
- * add whatever PAGE_SIZE multiple offset they wish, and feed this into the
- * offset parameter of mmap on /proc/bus/pci/XXX for that device.
- *
- * Returns negative error code on failure, zero on success.
- */
-static int __pci_mmap_make_offset(struct pci_dev *pdev,
-				  struct vm_area_struct *vma,
-				  enum pci_mmap_state mmap_state)
-{
-	unsigned long user_paddr, user_size;
-	int i, err;
-
-	/* First compute the physical address in vma->vm_pgoff,
-	 * making sure the user offset is within range in the
-	 * appropriate PCI space.
-	 */
-	err = __pci_mmap_make_offset_bus(pdev, vma, mmap_state);
-	if (err)
-		return err;
-
-	/* If this is a mapping on a host bridge, any address
-	 * is OK.
-	 */
-	if ((pdev->class >> 8) == PCI_CLASS_BRIDGE_HOST)
-		return err;
-
-	/* Otherwise make sure it's in the range for one of the
-	 * device's resources.
-	 */
-	user_paddr = vma->vm_pgoff << PAGE_SHIFT;
-	user_size = vma->vm_end - vma->vm_start;
-
-	for (i = 0; i <= PCI_ROM_RESOURCE; i++) {
-		struct resource *rp = &pdev->resource[i];
-		resource_size_t aligned_end;
-
-		/* Active? */
-		if (!rp->flags)
-			continue;
-
-		/* Same type? */
-		if (i == PCI_ROM_RESOURCE) {
-			if (mmap_state != pci_mmap_mem)
-				continue;
-		} else {
-			if ((mmap_state == pci_mmap_io &&
-			     (rp->flags & IORESOURCE_IO) == 0) ||
-			    (mmap_state == pci_mmap_mem &&
-			     (rp->flags & IORESOURCE_MEM) == 0))
-				continue;
-		}
-
-		/* Align the resource end to the next page address.
-		 * PAGE_SIZE intentionally added instead of (PAGE_SIZE - 1),
-		 * because actually we need the address of the next byte
-		 * after rp->end.
-		 */
-		aligned_end = (rp->end + PAGE_SIZE) & PAGE_MASK;
-
-		if ((rp->start <= user_paddr) &&
-		    (user_paddr + user_size) <= aligned_end)
-			break;
-	}
-
-	if (i > PCI_ROM_RESOURCE)
-		return -EINVAL;
-
-	return 0;
-}
-
-/* Set vm_page_prot of VMA, as appropriate for this architecture, for a pci
- * device mapping.
- */
-static void __pci_mmap_set_pgprot(struct pci_dev *dev, struct vm_area_struct *vma,
-				  enum pci_mmap_state mmap_state)
-{
-	/* Our io_remap_pfn_range takes care of this, do nothing. */
-}
-
-/* Perform the actual remap of the pages for a PCI device mapping, as appropriate
- * for this architecture.  The region in the process to map is described by vm_start
- * and vm_end members of VMA, the base physical address is found in vm_pgoff.
- * The pci device structure is provided so that architectures may make mapping
- * decisions on a per-device or per-bus basis.
- *
- * Returns a negative error code on failure, zero on success.
- */
-int pci_mmap_page_range(struct pci_dev *dev, int bar,
-			struct vm_area_struct *vma,
-			enum pci_mmap_state mmap_state, int write_combine)
-{
-	int ret;
-
-	ret = __pci_mmap_make_offset(dev, vma, mmap_state);
-	if (ret < 0)
-		return ret;
-
-	__pci_mmap_set_pgprot(dev, vma, mmap_state);
-
-	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-	ret = io_remap_pfn_range(vma, vma->vm_start,
-				 vma->vm_pgoff,
-				 vma->vm_end - vma->vm_start,
-				 vma->vm_page_prot);
-	if (ret)
-		return ret;
+	vma->vm_pgoff += (ioaddr + pbm->io_space.start) >> PAGE_SHIFT;

 	return 0;
 }
arch/um/include/asm/pci.h (+2 -22)

 #include <linux/types.h>
 #include <asm/io.h>

-#define PCIBIOS_MIN_IO		0
-#define PCIBIOS_MIN_MEM		0
-
-#define pcibios_assign_all_busses() 1
-
-extern int isa_dma_bridge_buggy;
-
-#ifdef CONFIG_PCI
-static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
-{
-	/* no legacy IRQs */
-	return -ENODEV;
-}
-#endif
-
-#ifdef CONFIG_PCI_DOMAINS
-static inline int pci_proc_domain(struct pci_bus *bus)
-{
-	/* always show the domain in /proc */
-	return 1;
-}
-#endif  /* CONFIG_PCI */
+/* Generic PCI */
+#include <asm-generic/pci.h>

 #ifdef CONFIG_PCI_MSI_IRQ_DOMAIN
 /*
arch/x86/include/asm/dma.h (-8)

 extern void free_dma(unsigned int dmanr);
 #endif

-/* From PCI */
-
-#ifdef CONFIG_PCI
-extern int isa_dma_bridge_buggy;
-#else
-#define isa_dma_bridge_buggy	(0)
-#endif
-
 #endif /* _ASM_X86_DMA_H */
arch/x86/include/asm/pci.h (-3)

 extern void pci_iommu_alloc(void);

-/* generic pci stuff */
-#include <asm-generic/pci.h>
-
 #ifdef CONFIG_NUMA
 /* Returns the node based on pci bus */
 static inline int __pcibus_to_node(const struct pci_bus *bus)
arch/x86/kernel/cpu/cyrix.c (+1)

 // SPDX-License-Identifier: GPL-2.0
 #include <linux/bitops.h>
 #include <linux/delay.h>
+#include <linux/isa-dma.h>
 #include <linux/pci.h>
 #include <asm/dma.h>
 #include <linux/io.h>
arch/xtensa/include/asm/dma.h (-7)

 extern int request_dma(unsigned int dmanr, const char * device_id);
 extern void free_dma(unsigned int dmanr);

-#ifdef CONFIG_PCI
-extern int isa_dma_bridge_buggy;
-#else
-#define isa_dma_bridge_buggy	(0)
-#endif
-
-
 #endif
arch/xtensa/include/asm/pci.h (-3)

 #define ARCH_GENERIC_PCI_MMAP_RESOURCE	1
 #define arch_can_pci_mmap_io()		1

-/* Generic PCI */
-#include <asm-generic/pci.h>
-
 #endif	/* _XTENSA_PCI_H */
drivers/acpi/pci_mcfg.c (+13)

 static struct mcfg_fixup mcfg_quirks[] = {
 /*	{ OEM_ID, OEM_TABLE_ID, REV, SEGMENT, BUS_RANGE, ops, cfgres }, */

+#ifdef CONFIG_ARM64
+
 #define AL_ECAM(table_id, rev, seg, ops) \
 	{ "AMAZON", table_id, rev, seg, MCFG_BUS_ANY, ops }

···
 	ALTRA_ECAM_QUIRK(1, 13),
 	ALTRA_ECAM_QUIRK(1, 14),
 	ALTRA_ECAM_QUIRK(1, 15),
+#endif /* ARM64 */
+
+#ifdef CONFIG_LOONGARCH
+#define LOONGSON_ECAM_MCFG(table_id, seg) \
+	{ "LOONGS", table_id, 1, seg, MCFG_BUS_ANY, &loongson_pci_ecam_ops }
+
+	LOONGSON_ECAM_MCFG("\0", 0),
+	LOONGSON_ECAM_MCFG("LOONGSON", 0),
+	LOONGSON_ECAM_MCFG("\0", 1),
+	LOONGSON_ECAM_MCFG("LOONGSON", 1),
+#endif /* LOONGARCH */
 };

 static char mcfg_oem_id[ACPI_OEM_ID_SIZE];
drivers/comedi/drivers/comedi_isadma.c (+1 -1)

 #include <linux/slab.h>
 #include <linux/delay.h>
 #include <linux/dma-mapping.h>
-#include <asm/dma.h>
+#include <linux/isa-dma.h>
 #include <linux/comedi/comedidev.h>
 #include <linux/comedi/comedi_isadma.h>
drivers/dma/dw-edma/dw-edma-core.c (+84 -57)

 static struct dw_edma_chunk *dw_edma_alloc_chunk(struct dw_edma_desc *desc)
 {
+	struct dw_edma_chip *chip = desc->chan->dw->chip;
 	struct dw_edma_chan *chan = desc->chan;
-	struct dw_edma *dw = chan->chip->dw;
 	struct dw_edma_chunk *chunk;

 	chunk = kzalloc(sizeof(*chunk), GFP_NOWAIT);
···
 	 */
 	chunk->cb = !(desc->chunks_alloc % 2);
 	if (chan->dir == EDMA_DIR_WRITE) {
-		chunk->ll_region.paddr = dw->ll_region_wr[chan->id].paddr;
-		chunk->ll_region.vaddr = dw->ll_region_wr[chan->id].vaddr;
+		chunk->ll_region.paddr = chip->ll_region_wr[chan->id].paddr;
+		chunk->ll_region.vaddr = chip->ll_region_wr[chan->id].vaddr;
 	} else {
-		chunk->ll_region.paddr = dw->ll_region_rd[chan->id].paddr;
-		chunk->ll_region.vaddr = dw->ll_region_rd[chan->id].vaddr;
+		chunk->ll_region.paddr = chip->ll_region_rd[chan->id].paddr;
+		chunk->ll_region.vaddr = chip->ll_region_rd[chan->id].vaddr;
 	}

 	if (desc->chunk) {
···
 	if (!chan->configured)
 		return NULL;

-	switch (chan->config.direction) {
-	case DMA_DEV_TO_MEM: /* local DMA */
-		if (dir == DMA_DEV_TO_MEM && chan->dir == EDMA_DIR_READ)
-			break;
-		return NULL;
-	case DMA_MEM_TO_DEV: /* local DMA */
-		if (dir == DMA_MEM_TO_DEV && chan->dir == EDMA_DIR_WRITE)
-			break;
-		return NULL;
-	default: /* remote DMA */
-		if (dir == DMA_MEM_TO_DEV && chan->dir == EDMA_DIR_READ)
-			break;
-		if (dir == DMA_DEV_TO_MEM && chan->dir == EDMA_DIR_WRITE)
-			break;
-		return NULL;
+	/*
+	 * Local Root Port/End-point              Remote End-point
+	 * +-----------------------+ PCIe bus +----------------------+
+	 * |                       |    +-+   |                      |
+	 * |    DEV_TO_MEM   Rx Ch <----+ +---+ Tx Ch  DEV_TO_MEM    |
+	 * |                       |    | |   |                      |
+	 * |    MEM_TO_DEV   Tx Ch +----+ +---> Rx Ch  MEM_TO_DEV    |
+	 * |                       |    +-+   |                      |
+	 * +-----------------------+          +----------------------+
+	 *
+	 * 1. Normal logic:
+	 * If eDMA is embedded into the DW PCIe RP/EP and controlled from the
+	 * CPU/Application side, the Rx channel (EDMA_DIR_READ) will be used
+	 * for the device read operations (DEV_TO_MEM) and the Tx channel
+	 * (EDMA_DIR_WRITE) - for the write operations (MEM_TO_DEV).
+	 *
+	 * 2. Inverted logic:
+	 * If eDMA is embedded into a Remote PCIe EP and is controlled by the
+	 * MWr/MRd TLPs sent from the CPU's PCIe host controller, the Tx
+	 * channel (EDMA_DIR_WRITE) will be used for the device read operations
+	 * (DEV_TO_MEM) and the Rx channel (EDMA_DIR_READ) - for the write
+	 * operations (MEM_TO_DEV).
+	 *
+	 * It is the client driver responsibility to choose a proper channel
+	 * for the DMA transfers.
+	 */
+	if (chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL) {
+		if ((chan->dir == EDMA_DIR_READ && dir != DMA_DEV_TO_MEM) ||
+		    (chan->dir == EDMA_DIR_WRITE && dir != DMA_MEM_TO_DEV))
+			return NULL;
+	} else {
+		if ((chan->dir == EDMA_DIR_WRITE && dir != DMA_DEV_TO_MEM) ||
+		    (chan->dir == EDMA_DIR_READ && dir != DMA_MEM_TO_DEV))
+			return NULL;
 	}

 	if (xfer->type == EDMA_XFER_CYCLIC) {
···
 		chunk->ll_region.sz += burst->sz;
 		desc->alloc_sz += burst->sz;

-		if (chan->dir == EDMA_DIR_WRITE) {
+		if (dir == DMA_DEV_TO_MEM) {
 			burst->sar = src_addr;
 			if (xfer->type == EDMA_XFER_CYCLIC) {
 				burst->dar = xfer->xfer.cyclic.paddr;
···
 	if (chan->status != EDMA_ST_IDLE)
 		return -EBUSY;

-	pm_runtime_get(chan->chip->dev);
+	pm_runtime_get(chan->dw->chip->dev);

 	return 0;
 }
···
 		cpu_relax();
 	}

-	pm_runtime_put(chan->chip->dev);
+	pm_runtime_put(chan->dw->chip->dev);
 }

-static int dw_edma_channel_setup(struct dw_edma_chip *chip, bool write,
+static int dw_edma_channel_setup(struct dw_edma *dw, bool write,
 				 u32 wr_alloc, u32 rd_alloc)
 {
+	struct dw_edma_chip *chip = dw->chip;
 	struct dw_edma_region *dt_region;
 	struct device *dev = chip->dev;
-	struct dw_edma *dw = chip->dw;
 	struct dw_edma_chan *chan;
 	struct dw_edma_irq *irq;
 	struct dma_device *dma;
···
 		chan->vc.chan.private = dt_region;

-		chan->chip = chip;
+		chan->dw = dw;
 		chan->id = j;
 		chan->dir = write ? EDMA_DIR_WRITE : EDMA_DIR_READ;
 		chan->configured = false;
···
 		chan->status = EDMA_ST_IDLE;

 		if (write)
-			chan->ll_max = (dw->ll_region_wr[j].sz / EDMA_LL_SZ);
+			chan->ll_max = (chip->ll_region_wr[j].sz / EDMA_LL_SZ);
 		else
-			chan->ll_max = (dw->ll_region_rd[j].sz / EDMA_LL_SZ);
+			chan->ll_max = (chip->ll_region_rd[j].sz / EDMA_LL_SZ);
 		chan->ll_max -= 1;

 		dev_vdbg(dev, "L. List:\tChannel %s[%u] max_cnt=%u\n",
···
 		vchan_init(&chan->vc, dma);

 		if (write) {
-			dt_region->paddr = dw->dt_region_wr[j].paddr;
-			dt_region->vaddr = dw->dt_region_wr[j].vaddr;
-			dt_region->sz = dw->dt_region_wr[j].sz;
+			dt_region->paddr = chip->dt_region_wr[j].paddr;
+			dt_region->vaddr = chip->dt_region_wr[j].vaddr;
+			dt_region->sz = chip->dt_region_wr[j].sz;
 		} else {
-			dt_region->paddr = dw->dt_region_rd[j].paddr;
-			dt_region->vaddr = dw->dt_region_rd[j].vaddr;
-			dt_region->sz = dw->dt_region_rd[j].sz;
+			dt_region->paddr = chip->dt_region_rd[j].paddr;
+			dt_region->vaddr = chip->dt_region_rd[j].vaddr;
+			dt_region->sz = chip->dt_region_rd[j].sz;
 		}

 		dw_edma_v0_core_device_config(chan);
···
 		(*mask)++;
 }

-static int dw_edma_irq_request(struct dw_edma_chip *chip,
+static int dw_edma_irq_request(struct dw_edma *dw,
 			       u32 *wr_alloc, u32 *rd_alloc)
 {
-	struct device *dev = chip->dev;
-	struct dw_edma *dw = chip->dw;
+	struct dw_edma_chip *chip = dw->chip;
+	struct device *dev = dw->chip->dev;
 	u32 wr_mask = 1;
 	u32 rd_mask = 1;
 	int i, err = 0;
···
 	ch_cnt = dw->wr_ch_cnt + dw->rd_ch_cnt;

-	if (dw->nr_irqs < 1)
+	if (chip->nr_irqs < 1 || !chip->ops->irq_vector)
 		return -EINVAL;

-	if (dw->nr_irqs == 1) {
+	dw->irq = devm_kcalloc(dev, chip->nr_irqs, sizeof(*dw->irq), GFP_KERNEL);
+	if (!dw->irq)
+		return -ENOMEM;
+
+	if (chip->nr_irqs == 1) {
 		/* Common IRQ shared among all channels */
-		irq = dw->ops->irq_vector(dev, 0);
+		irq = chip->ops->irq_vector(dev, 0);
 		err = request_irq(irq, dw_edma_interrupt_common,
 				  IRQF_SHARED, dw->name, &dw->irq[0]);
 		if (err) {
···
 		if (irq_get_msi_desc(irq))
 			get_cached_msi_msg(irq, &dw->irq[0].msi);
+
+		dw->nr_irqs = 1;
 	} else {
 		/* Distribute IRQs equally among all channels */
-		int tmp = dw->nr_irqs;
+		int tmp = chip->nr_irqs;

 		while (tmp && (*wr_alloc + *rd_alloc) < ch_cnt) {
 			dw_edma_dec_irq_alloc(&tmp, wr_alloc, dw->wr_ch_cnt);
···
 		dw_edma_add_irq_mask(&rd_mask, *rd_alloc, dw->rd_ch_cnt);

 		for (i = 0; i < (*wr_alloc + *rd_alloc); i++) {
-			irq = dw->ops->irq_vector(dev, i);
+			irq = chip->ops->irq_vector(dev, i);
 			err = request_irq(irq,
 					  i < *wr_alloc ?
 						dw_edma_interrupt_write :
···
 		return -EINVAL;

 	dev = chip->dev;
-	if (!dev)
+	if (!dev || !chip->ops)
 		return -EINVAL;

-	dw = chip->dw;
-	if (!dw || !dw->irq || !dw->ops || !dw->ops->irq_vector)
-		return -EINVAL;
+	dw = devm_kzalloc(dev, sizeof(*dw), GFP_KERNEL);
+	if (!dw)
+		return -ENOMEM;
+
+	dw->chip = chip;

 	raw_spin_lock_init(&dw->lock);

-	dw->wr_ch_cnt = min_t(u16, dw->wr_ch_cnt,
+	dw->wr_ch_cnt = min_t(u16, chip->ll_wr_cnt,
 			      dw_edma_v0_core_ch_count(dw, EDMA_DIR_WRITE));
 	dw->wr_ch_cnt = min_t(u16, dw->wr_ch_cnt, EDMA_MAX_WR_CH);

-	dw->rd_ch_cnt = min_t(u16, dw->rd_ch_cnt,
+	dw->rd_ch_cnt = min_t(u16, chip->ll_rd_cnt,
 			      dw_edma_v0_core_ch_count(dw, EDMA_DIR_READ));
 	dw->rd_ch_cnt = min_t(u16, dw->rd_ch_cnt, EDMA_MAX_RD_CH);

···
 	dw_edma_v0_core_off(dw);

 	/* Request IRQs */
-	err = dw_edma_irq_request(chip, &wr_alloc, &rd_alloc);
+	err = dw_edma_irq_request(dw, &wr_alloc, &rd_alloc);
 	if (err)
 		return err;

 	/* Setup write channels */
-	err = dw_edma_channel_setup(chip, true, wr_alloc, rd_alloc);
+	err = dw_edma_channel_setup(dw, true, wr_alloc, rd_alloc);
 	if (err)
 		goto err_irq_free;

 	/* Setup read channels */
-	err = dw_edma_channel_setup(chip, false, wr_alloc, rd_alloc);
+	err = dw_edma_channel_setup(dw, false, wr_alloc, rd_alloc);
 	if (err)
 		goto err_irq_free;

···
 	pm_runtime_enable(dev);

 	/* Turn debugfs on */
-	dw_edma_v0_core_debugfs_on(chip);
+	dw_edma_v0_core_debugfs_on(dw);
+
+	chip->dw = dw;

 	return 0;

 err_irq_free:
 	for (i = (dw->nr_irqs - 1); i >= 0; i--)
-		free_irq(dw->ops->irq_vector(dev, i), &dw->irq[i]);
-
-	dw->nr_irqs = 0;
+		free_irq(chip->ops->irq_vector(dev, i), &dw->irq[i]);

 	return err;
 }
···
 	/* Free irqs */
 	for (i = (dw->nr_irqs - 1); i >= 0; i--)
-		free_irq(dw->ops->irq_vector(dev, i), &dw->irq[i]);
+		free_irq(chip->ops->irq_vector(dev, i), &dw->irq[i]);

 	/* Power management */
 	pm_runtime_disable(dev);
···
 	}

 	/* Turn debugfs off */
-	dw_edma_v0_core_debugfs_off(chip);
+	dw_edma_v0_core_debugfs_off(dw);

 	return 0;
 }
drivers/dma/dw-edma/dw-edma-core.h (+3 -28)

 #include "../virt-dma.h"

 #define EDMA_LL_SZ					24
-#define EDMA_MAX_WR_CH					8
-#define EDMA_MAX_RD_CH					8

 enum dw_edma_dir {
 	EDMA_DIR_WRITE = 0,
 	EDMA_DIR_READ
-};
-
-enum dw_edma_map_format {
-	EDMA_MF_EDMA_LEGACY = 0x0,
-	EDMA_MF_EDMA_UNROLL = 0x1,
-	EDMA_MF_HDMA_COMPAT = 0x5
 };

 enum dw_edma_request {
···
 	u32 sz;
 };

-struct dw_edma_region {
-	phys_addr_t paddr;
-	void __iomem *vaddr;
-	size_t sz;
-};
-
 struct dw_edma_chunk {
 	struct list_head list;
 	struct dw_edma_chan *chan;
···
 struct dw_edma_chan {
 	struct virt_dma_chan vc;
-	struct dw_edma_chip *chip;
+	struct dw_edma *dw;
 	int id;
 	enum dw_edma_dir dir;
···
 	struct dw_edma *dw;
 };

-struct dw_edma_core_ops {
-	int (*irq_vector)(struct device *dev, unsigned int nr);
-};
-
 struct dw_edma {
 	char name[20];
···
 	struct dma_device rd_edma;
 	u16 rd_ch_cnt;

-	struct dw_edma_region rg_region;	/* Registers */
-	struct dw_edma_region ll_region_wr[EDMA_MAX_WR_CH];
-	struct dw_edma_region ll_region_rd[EDMA_MAX_RD_CH];
-	struct dw_edma_region dt_region_wr[EDMA_MAX_WR_CH];
-	struct dw_edma_region dt_region_rd[EDMA_MAX_RD_CH];
-
 	struct dw_edma_irq *irq;
 	int nr_irqs;

-	enum dw_edma_map_format mf;
-
 	struct dw_edma_chan *chan;
-	const struct dw_edma_core_ops *ops;

 	raw_spinlock_t lock;		/* Only for legacy */
+
+	struct dw_edma_chip *chip;
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *debugfs;
 #endif /* CONFIG_DEBUG_FS */
drivers/dma/dw-edma/dw-edma-pcie.c (+34 -49)

 	struct dw_edma_pcie_data vsec_data;
 	struct device *dev = &pdev->dev;
 	struct dw_edma_chip *chip;
-	struct dw_edma *dw;
 	int err, nr_irqs;
 	int i, mask;

···
 	if (!chip)
 		return -ENOMEM;

-	dw = devm_kzalloc(dev, sizeof(*dw), GFP_KERNEL);
-	if (!dw)
-		return -ENOMEM;
-
 	/* IRQs allocation */
 	nr_irqs = pci_alloc_irq_vectors(pdev, 1, vsec_data.irqs,
 					PCI_IRQ_MSI | PCI_IRQ_MSIX);
···
 	}

 	/* Data structure initialization */
-	chip->dw = dw;
 	chip->dev = dev;
 	chip->id = pdev->devfn;
-	chip->irq = pdev->irq;

-	dw->mf = vsec_data.mf;
-	dw->nr_irqs = nr_irqs;
-	dw->ops = &dw_edma_pcie_core_ops;
-	dw->wr_ch_cnt = vsec_data.wr_ch_cnt;
-	dw->rd_ch_cnt = vsec_data.rd_ch_cnt;
+	chip->mf = vsec_data.mf;
+	chip->nr_irqs = nr_irqs;
+	chip->ops = &dw_edma_pcie_core_ops;

-	dw->rg_region.vaddr = pcim_iomap_table(pdev)[vsec_data.rg.bar];
-	if (!dw->rg_region.vaddr)
+	chip->ll_wr_cnt = vsec_data.wr_ch_cnt;
+	chip->ll_rd_cnt = vsec_data.rd_ch_cnt;
+
+	chip->reg_base = pcim_iomap_table(pdev)[vsec_data.rg.bar];
+	if (!chip->reg_base)
 		return -ENOMEM;

-	dw->rg_region.vaddr += vsec_data.rg.off;
-	dw->rg_region.paddr = pdev->resource[vsec_data.rg.bar].start;
-	dw->rg_region.paddr += vsec_data.rg.off;
-	dw->rg_region.sz = vsec_data.rg.sz;
-
-	for (i = 0; i < dw->wr_ch_cnt; i++) {
-		struct dw_edma_region *ll_region = &dw->ll_region_wr[i];
-		struct dw_edma_region *dt_region = &dw->dt_region_wr[i];
+	for (i = 0; i < chip->ll_wr_cnt; i++) {
+		struct dw_edma_region *ll_region = &chip->ll_region_wr[i];
+		struct dw_edma_region *dt_region = &chip->dt_region_wr[i];
 		struct dw_edma_block *ll_block = &vsec_data.ll_wr[i];
 		struct dw_edma_block *dt_block = &vsec_data.dt_wr[i];

···
 		dt_region->sz = dt_block->sz;
 	}

-	for (i = 0; i < dw->rd_ch_cnt; i++) {
-		struct dw_edma_region *ll_region = &dw->ll_region_rd[i];
-		struct dw_edma_region *dt_region = &dw->dt_region_rd[i];
+	for (i = 0; i < chip->ll_rd_cnt; i++) {
+		struct dw_edma_region *ll_region = &chip->ll_region_rd[i];
+		struct dw_edma_region *dt_region = &chip->dt_region_rd[i];
 		struct dw_edma_block *ll_block = &vsec_data.ll_rd[i];
 		struct dw_edma_block *dt_block = &vsec_data.dt_rd[i];

···
 	}

 	/* Debug info */
-	if (dw->mf == EDMA_MF_EDMA_LEGACY)
-		pci_dbg(pdev, "Version:\teDMA Port Logic (0x%x)\n", dw->mf);
-	else if (dw->mf == EDMA_MF_EDMA_UNROLL)
-		pci_dbg(pdev, "Version:\teDMA Unroll (0x%x)\n", dw->mf);
-	else if (dw->mf == EDMA_MF_HDMA_COMPAT)
-		pci_dbg(pdev, "Version:\tHDMA Compatible (0x%x)\n", dw->mf);
+	if (chip->mf == EDMA_MF_EDMA_LEGACY)
+		pci_dbg(pdev, "Version:\teDMA Port Logic (0x%x)\n", chip->mf);
+	else if (chip->mf == EDMA_MF_EDMA_UNROLL)
+		pci_dbg(pdev, "Version:\teDMA Unroll (0x%x)\n", chip->mf);
+	else if (chip->mf == EDMA_MF_HDMA_COMPAT)
+		pci_dbg(pdev, "Version:\tHDMA Compatible (0x%x)\n", chip->mf);
 	else
-		pci_dbg(pdev, "Version:\tUnknown (0x%x)\n", dw->mf);
+		pci_dbg(pdev, "Version:\tUnknown (0x%x)\n", chip->mf);

-	pci_dbg(pdev, "Registers:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
+	pci_dbg(pdev, "Registers:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p)\n",
 		vsec_data.rg.bar, vsec_data.rg.off, vsec_data.rg.sz,
-		dw->rg_region.vaddr, &dw->rg_region.paddr);
+		chip->reg_base);


-	for (i = 0; i < dw->wr_ch_cnt; i++) {
+	for (i = 0; i < chip->ll_wr_cnt; i++) {
 		pci_dbg(pdev, "L. List:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
 			i, vsec_data.ll_wr[i].bar,
-			vsec_data.ll_wr[i].off, dw->ll_region_wr[i].sz,
-			dw->ll_region_wr[i].vaddr, &dw->ll_region_wr[i].paddr);
+			vsec_data.ll_wr[i].off, chip->ll_region_wr[i].sz,
+			chip->ll_region_wr[i].vaddr, &chip->ll_region_wr[i].paddr);

 		pci_dbg(pdev, "Data:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
 			i, vsec_data.dt_wr[i].bar,
-			vsec_data.dt_wr[i].off, dw->dt_region_wr[i].sz,
-			dw->dt_region_wr[i].vaddr, &dw->dt_region_wr[i].paddr);
+			vsec_data.dt_wr[i].off, chip->dt_region_wr[i].sz,
+			chip->dt_region_wr[i].vaddr, &chip->dt_region_wr[i].paddr);
 	}

-	for (i = 0; i < dw->rd_ch_cnt; i++) {
+	for (i = 0; i < chip->ll_rd_cnt; i++) {
 		pci_dbg(pdev, "L. List:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
 			i, vsec_data.ll_rd[i].bar,
-			vsec_data.ll_rd[i].off, dw->ll_region_rd[i].sz,
-			dw->ll_region_rd[i].vaddr, &dw->ll_region_rd[i].paddr);
+			vsec_data.ll_rd[i].off, chip->ll_region_rd[i].sz,
+			chip->ll_region_rd[i].vaddr, &chip->ll_region_rd[i].paddr);

 		pci_dbg(pdev, "Data:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
 			i, vsec_data.dt_rd[i].bar,
-			vsec_data.dt_rd[i].off, dw->dt_region_rd[i].sz,
-			dw->dt_region_rd[i].vaddr, &dw->dt_region_rd[i].paddr);
+			vsec_data.dt_rd[i].off, chip->dt_region_rd[i].sz,
+			chip->dt_region_rd[i].vaddr, &chip->dt_region_rd[i].paddr);
 	}

-	pci_dbg(pdev, "Nr. IRQs:\t%u\n", dw->nr_irqs);
+	pci_dbg(pdev, "Nr. IRQs:\t%u\n", chip->nr_irqs);

 	/* Validating if PCI interrupts were enabled */
 	if (!pci_dev_msi_enabled(pdev)) {
 		pci_err(pdev, "enable interrupt failed\n");
 		return -EPERM;
 	}
-
-	dw->irq = devm_kcalloc(dev, nr_irqs, sizeof(*dw->irq), GFP_KERNEL);
-	if (!dw->irq)
-		return -ENOMEM;

 	/* Starting eDMA driver */
 	err = dw_edma_probe(chip);
drivers/dma/dw-edma/dw-edma-v0-core.c (+22 -19)

 static inline struct dw_edma_v0_regs __iomem *__dw_regs(struct dw_edma *dw)
 {
-	return dw->rg_region.vaddr;
+	return dw->chip->reg_base;
 }

 #define SET_32(dw, name, value)				\
···
 static inline struct dw_edma_v0_ch_regs __iomem *
 __dw_ch_regs(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch)
 {
-	if (dw->mf == EDMA_MF_EDMA_LEGACY)
+	if (dw->chip->mf == EDMA_MF_EDMA_LEGACY)
 		return &(__dw_regs(dw)->type.legacy.ch);

 	if (dir == EDMA_DIR_WRITE)
···
 static inline void writel_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch,
 			     u32 value, void __iomem *addr)
 {
-	if (dw->mf == EDMA_MF_EDMA_LEGACY) {
+	if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) {
 		u32 viewport_sel;
 		unsigned long flags;

···
 {
 	u32 value;

-	if (dw->mf == EDMA_MF_EDMA_LEGACY) {
+	if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) {
 		u32 viewport_sel;
 		unsigned long flags;

···
 static inline void writeq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch,
 			     u64 value, void __iomem *addr)
 {
-	if (dw->mf == EDMA_MF_EDMA_LEGACY) {
+	if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) {
 		u32 viewport_sel;
 		unsigned long flags;

···
 {
 	u32 value;

-	if (dw->mf == EDMA_MF_EDMA_LEGACY) {
+	if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) {
 		u32 viewport_sel;
 		unsigned long flags;

···
 enum dma_status dw_edma_v0_core_ch_status(struct dw_edma_chan *chan)
 {
-	struct dw_edma *dw = chan->chip->dw;
+	struct dw_edma *dw = chan->dw;
 	u32 tmp;

 	tmp = FIELD_GET(EDMA_V0_CH_STATUS_MASK,
···
 void dw_edma_v0_core_clear_done_int(struct dw_edma_chan *chan)
 {
-	struct dw_edma *dw = chan->chip->dw;
+	struct dw_edma *dw = chan->dw;

 	SET_RW_32(dw, chan->dir, int_clear,
 		  FIELD_PREP(EDMA_V0_DONE_INT_MASK, BIT(chan->id)));
···
 void dw_edma_v0_core_clear_abort_int(struct dw_edma_chan *chan)
 {
-	struct dw_edma *dw = chan->chip->dw;
+	struct dw_edma *dw = chan->dw;

 	SET_RW_32(dw, chan->dir, int_clear,
 		  FIELD_PREP(EDMA_V0_ABORT_INT_MASK, BIT(chan->id)));
···
 static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk)
 {
 	struct dw_edma_burst *child;
+	struct dw_edma_chan *chan = chunk->chan;
 	struct dw_edma_v0_lli __iomem *lli;
 	struct dw_edma_v0_llp __iomem *llp;
 	u32 control = 0, i = 0;
···
 	j = chunk->bursts_alloc;
 	list_for_each_entry(child, &chunk->burst->list, list) {
 		j--;
-		if (!j)
-			control |= (DW_EDMA_V0_LIE | DW_EDMA_V0_RIE);
-
+		if (!j) {
+			control |= DW_EDMA_V0_LIE;
+			if (!(chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL))
+				control |= DW_EDMA_V0_RIE;
+		}
 		/* Channel control */
 		SET_LL_32(&lli[i].control, control);
 		/* Transfer size */
···
 void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
 {
 	struct dw_edma_chan *chan = chunk->chan;
-	struct dw_edma *dw = chan->chip->dw;
+	struct dw_edma *dw = chan->dw;
 	u32 tmp;

 	dw_edma_v0_core_write_chunk(chunk);
···
 	if (first) {
 		/* Enable engine */
 		SET_RW_32(dw, chan->dir, engine_en, BIT(0));
-		if (dw->mf == EDMA_MF_HDMA_COMPAT) {
+		if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) {
 			switch (chan->id) {
 			case 0:
 				SET_RW_COMPAT(dw, chan->dir, ch0_pwr_en,
···
 int dw_edma_v0_core_device_config(struct dw_edma_chan *chan)
 {
-	struct dw_edma *dw = chan->chip->dw;
+	struct dw_edma *dw = chan->dw;
 	u32 tmp = 0;

 	/* MSI done addr - low, high */
···
 }

 /* eDMA debugfs callbacks */
-void dw_edma_v0_core_debugfs_on(struct dw_edma_chip *chip)
+void dw_edma_v0_core_debugfs_on(struct dw_edma *dw)
 {
-	dw_edma_v0_debugfs_on(chip);
+	dw_edma_v0_debugfs_on(dw);
 }

-void dw_edma_v0_core_debugfs_off(struct dw_edma_chip *chip)
+void dw_edma_v0_core_debugfs_off(struct dw_edma *dw)
 {
-	dw_edma_v0_debugfs_off(chip);
+	dw_edma_v0_debugfs_off(dw);
 }
drivers/dma/dw-edma/dw-edma-v0-core.h (+2 -2)

 void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first);
 int dw_edma_v0_core_device_config(struct dw_edma_chan *chan);
 /* eDMA debug fs callbacks */
-void dw_edma_v0_core_debugfs_on(struct dw_edma_chip *chip);
-void dw_edma_v0_core_debugfs_off(struct dw_edma_chip *chip);
+void dw_edma_v0_core_debugfs_on(struct dw_edma *dw);
+void dw_edma_v0_core_debugfs_off(struct dw_edma *dw);

 #endif /* _DW_EDMA_V0_CORE_H */
+9 -9
drivers/dma/dw-edma/dw-edma-v0-debugfs.c
···
54 54 static int dw_edma_debugfs_u32_get(void *data, u64 *val)
55 55 {
56 56 	void __iomem *reg = (void __force __iomem *)data;
57 - 	if (dw->mf == EDMA_MF_EDMA_LEGACY &&
57 + 	if (dw->chip->mf == EDMA_MF_EDMA_LEGACY &&
58 58 	    reg >= (void __iomem *)&regs->type.legacy.ch) {
59 59 		void __iomem *ptr = &regs->type.legacy.ch;
60 60 		u32 viewport_sel = 0;
···
173 173 	nr_entries = ARRAY_SIZE(debugfs_regs);
174 174 	dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir);
175 175 
176 - 	if (dw->mf == EDMA_MF_HDMA_COMPAT) {
176 + 	if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) {
177 177 		nr_entries = ARRAY_SIZE(debugfs_unroll_regs);
178 178 		dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries,
179 179 					   regs_dir);
···
242 242 	nr_entries = ARRAY_SIZE(debugfs_regs);
243 243 	dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir);
244 244 
245 - 	if (dw->mf == EDMA_MF_HDMA_COMPAT) {
245 + 	if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) {
246 246 		nr_entries = ARRAY_SIZE(debugfs_unroll_regs);
247 247 		dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries,
248 248 					   regs_dir);
···
282 282 	dw_edma_debugfs_regs_rd(regs_dir);
283 283 }
284 284 
285 - void dw_edma_v0_debugfs_on(struct dw_edma_chip *chip)
285 + void dw_edma_v0_debugfs_on(struct dw_edma *_dw)
286 286 {
287 - 	dw = chip->dw;
287 + 	dw = _dw;
288 288 	if (!dw)
289 289 		return;
290 290 
291 - 	regs = dw->rg_region.vaddr;
291 + 	regs = dw->chip->reg_base;
292 292 	if (!regs)
293 293 		return;
294 294 
···
296 296 	if (!dw->debugfs)
297 297 		return;
298 298 
299 - 	debugfs_create_u32("mf", 0444, dw->debugfs, &dw->mf);
299 + 	debugfs_create_u32("mf", 0444, dw->debugfs, &dw->chip->mf);
300 300 	debugfs_create_u16("wr_ch_cnt", 0444, dw->debugfs, &dw->wr_ch_cnt);
301 301 	debugfs_create_u16("rd_ch_cnt", 0444, dw->debugfs, &dw->rd_ch_cnt);
302 302 
303 303 	dw_edma_debugfs_regs();
304 304 }
305 305 
306 - void dw_edma_v0_debugfs_off(struct dw_edma_chip *chip)
306 + void dw_edma_v0_debugfs_off(struct dw_edma *_dw)
307 307 {
308 - 	dw = chip->dw;
308 + 	dw = _dw;
309 309 	if (!dw)
310 310 		return;
311 311 
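The hunks above change the debugfs entry points to take the core `struct dw_edma *` directly instead of digging it out of a `struct dw_edma_chip`, and to reach chip-wide state through a back-pointer (`dw->chip->mf`, `dw->chip->reg_base`). A minimal userspace sketch of that layout, with hypothetical miniature types (not the kernel structs), shows why only the core pointer needs to be passed around:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of the dw-edma layout after the refactor:
 * the core object keeps a back-pointer to the chip description, so
 * helpers that receive the core need nothing else. */
struct chip_desc {
	int mf;            /* mapping format, cf. EDMA_MF_EDMA_LEGACY */
	void *reg_base;    /* controller register block */
};

struct edma_core {
	struct chip_desc *chip;        /* back-pointer, cf. dw->chip */
	unsigned short wr_ch_cnt;
	unsigned short rd_ch_cnt;
};

/* Old style took the chip and fetched the core out of it; new style
 * (sketched here) takes the core and follows the back-pointer. */
static int core_mapping_format(const struct edma_core *dw)
{
	return dw->chip->mf;
}

/* Sample instances used below. */
static struct chip_desc demo_chip = { .mf = 2, .reg_base = NULL };
static struct edma_core demo_core = {
	.chip = &demo_chip, .wr_ch_cnt = 4, .rd_ch_cnt = 4,
};
```

The back-pointer keeps one source of truth for chip-wide fields while letting every helper share the single `struct edma_core *` handle.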
+4 -4
drivers/dma/dw-edma/dw-edma-v0-debugfs.h
···
12 12 #include <linux/dma/edma.h>
13 13 
14 14 #ifdef CONFIG_DEBUG_FS
15 - void dw_edma_v0_debugfs_on(struct dw_edma_chip *chip);
16 - void dw_edma_v0_debugfs_off(struct dw_edma_chip *chip);
15 + void dw_edma_v0_debugfs_on(struct dw_edma *dw);
16 + void dw_edma_v0_debugfs_off(struct dw_edma *dw);
17 17 #else
18 - static inline void dw_edma_v0_debugfs_on(struct dw_edma_chip *chip)
18 + static inline void dw_edma_v0_debugfs_on(struct dw_edma *dw)
19 19 {
20 20 }
21 21 
22 - static inline void dw_edma_v0_debugfs_off(struct dw_edma_chip *chip)
22 + static inline void dw_edma_v0_debugfs_off(struct dw_edma *dw)
23 23 {
24 24 }
25 25 #endif /* CONFIG_DEBUG_FS */
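The header above keeps the common kernel pattern of pairing real prototypes with empty `static inline` stubs behind `#ifdef CONFIG_DEBUG_FS`, so call sites never need their own preprocessor guards. A self-contained sketch of the same pattern, with `MY_DEBUGFS` standing in for the config symbol:

```c
#include <assert.h>

/* Stand-in for CONFIG_DEBUG_FS: flip to 0 and callers still compile,
 * they just hit the no-op stubs. */
#define MY_DEBUGFS 1

static int debugfs_calls;

#if MY_DEBUGFS
static inline void my_debugfs_on(void)  { debugfs_calls++; }
static inline void my_debugfs_off(void) { debugfs_calls--; }
#else
/* Compiled-out variant: same signatures, empty bodies. */
static inline void my_debugfs_on(void)  { }
static inline void my_debugfs_off(void) { }
#endif

/* A caller written once, with no #ifdefs of its own. */
static int my_debugfs_demo(void)
{
	my_debugfs_on();
	my_debugfs_on();
	my_debugfs_off();
	return debugfs_calls; /* 1 when the feature is on, 0 when stubbed */
}
```

Because the stubs are `static inline`, the disabled build produces no code and no unused-function warnings at the call sites.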
+2 -2
drivers/pci/controller/Kconfig
···
237 237 
238 238 config PCIE_MEDIATEK
239 239 	tristate "MediaTek PCIe controller"
240 - 	depends on ARCH_MEDIATEK || COMPILE_TEST
240 + 	depends on ARCH_AIROHA || ARCH_MEDIATEK || COMPILE_TEST
241 241 	depends on OF
242 242 	depends on PCI_MSI_IRQ_DOMAIN
243 243 	help
···
293 293 config PCI_LOONGSON
294 294 	bool "LOONGSON PCI Controller"
295 295 	depends on MACH_LOONGSON64 || COMPILE_TEST
296 - 	depends on OF
296 + 	depends on OF || ACPI
297 297 	depends on PCI_QUIRKS
298 298 	default MACH_LOONGSON64
299 299 	help
+2 -4
drivers/pci/controller/cadence/pcie-cadence.c
···
243 243 		return ret;
244 244 }
245 245 
246 - #ifdef CONFIG_PM_SLEEP
247 246 static int cdns_pcie_suspend_noirq(struct device *dev)
248 247 {
249 248 	struct cdns_pcie *pcie = dev_get_drvdata(dev);
···
265 266 
266 267 	return 0;
267 268 }
268 - #endif
269 269 
270 270 const struct dev_pm_ops cdns_pcie_pm_ops = {
271 - 	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(cdns_pcie_suspend_noirq,
272 - 				      cdns_pcie_resume_noirq)
271 + 	NOIRQ_SYSTEM_SLEEP_PM_OPS(cdns_pcie_suspend_noirq,
272 + 				  cdns_pcie_resume_noirq)
273 273 };
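The cadence hunk above is the pattern repeated across this pull: drop the `#ifdef CONFIG_PM_SLEEP` guard and switch from `SET_NOIRQ_SYSTEM_SLEEP_PM_OPS` to `NOIRQ_SYSTEM_SLEEP_PM_OPS`, whose design lets the callbacks be always compiled and discarded by the compiler when sleep support is off. A self-contained userspace mimic of that macro idea (`HAVE_SLEEP` and all `MY_*`/`demo_*` names are hypothetical stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for CONFIG_PM_SLEEP. */
#define HAVE_SLEEP 1

struct pm_ops {
	int (*suspend_noirq)(void);
	int (*resume_noirq)(void);
};

/* The trick: always *reference* the callback so it is never "unused",
 * but store NULL when sleep support is off, letting dead-code
 * elimination drop the function body. */
#if HAVE_SLEEP
#define SLEEP_PTR(fn) (fn)
#else
#define SLEEP_PTR(fn) ((void)(fn), NULL)
#endif

#define MY_NOIRQ_SLEEP_PM_OPS(s, r) \
	.suspend_noirq = SLEEP_PTR(s), .resume_noirq = SLEEP_PTR(r)

static int demo_suspend(void) { return 0; }
static int demo_resume(void)  { return 0; }

/* No #ifdef around the callbacks, no __maybe_unused annotations. */
static const struct pm_ops demo_pm_ops = {
	MY_NOIRQ_SLEEP_PM_OPS(demo_suspend, demo_resume)
};
```

This is why the same series can also delete `__maybe_unused` from drivers such as pci-exynos.c below: the reference through the macro keeps the compiler quiet in both configurations.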
+10 -12
drivers/pci/controller/dwc/pci-dra7xx.c
···
178 178 	dra7xx_pcie_enable_msi_interrupts(dra7xx);
179 179 }
180 180 
181 - static int dra7xx_pcie_host_init(struct pcie_port *pp)
181 + static int dra7xx_pcie_host_init(struct dw_pcie_rp *pp)
182 182 {
183 183 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
184 184 	struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci);
···
202 202 	.xlate = pci_irqd_intx_xlate,
203 203 };
204 204 
205 - static int dra7xx_pcie_handle_msi(struct pcie_port *pp, int index)
205 + static int dra7xx_pcie_handle_msi(struct dw_pcie_rp *pp, int index)
206 206 {
207 207 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
208 208 	unsigned long val;
···
224 224 	return 1;
225 225 }
226 226 
227 - static void dra7xx_pcie_handle_msi_irq(struct pcie_port *pp)
227 + static void dra7xx_pcie_handle_msi_irq(struct dw_pcie_rp *pp)
228 228 {
229 229 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
230 230 	int ret, i, count, num_ctrls;
···
255 255 {
256 256 	struct irq_chip *chip = irq_desc_get_chip(desc);
257 257 	struct dra7xx_pcie *dra7xx;
258 + 	struct dw_pcie_rp *pp;
258 259 	struct dw_pcie *pci;
259 - 	struct pcie_port *pp;
260 260 	unsigned long reg;
261 261 	u32 bit;
262 262 
···
344 344 	return IRQ_HANDLED;
345 345 }
346 346 
347 - static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp)
347 + static int dra7xx_pcie_init_irq_domain(struct dw_pcie_rp *pp)
348 348 {
349 349 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
350 350 	struct device *dev = pci->dev;
···
475 475 {
476 476 	int ret;
477 477 	struct dw_pcie *pci = dra7xx->pci;
478 - 	struct pcie_port *pp = &pci->pp;
478 + 	struct dw_pcie_rp *pp = &pci->pp;
479 479 	struct device *dev = pci->dev;
480 480 
481 481 	pp->irq = platform_get_irq(pdev, 1);
···
483 483 		return pp->irq;
484 484 
485 485 	/* MSI IRQ is muxed */
486 - 	pp->msi_irq = -ENODEV;
486 + 	pp->msi_irq[0] = -ENODEV;
487 487 
488 488 	ret = dra7xx_pcie_init_irq_domain(pp);
489 489 	if (ret < 0)
···
862 862 	return ret;
863 863 }
864 864 
865 - #ifdef CONFIG_PM_SLEEP
866 865 static int dra7xx_pcie_suspend(struct device *dev)
867 866 {
868 867 	struct dra7xx_pcie *dra7xx = dev_get_drvdata(dev);
···
918 919 
919 920 	return 0;
920 921 }
921 - #endif
922 922 
923 923 static void dra7xx_pcie_shutdown(struct platform_device *pdev)
924 924 {
···
938 940 }
939 941 
940 942 static const struct dev_pm_ops dra7xx_pcie_pm_ops = {
941 - 	SET_SYSTEM_SLEEP_PM_OPS(dra7xx_pcie_suspend, dra7xx_pcie_resume)
942 - 	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(dra7xx_pcie_suspend_noirq,
943 - 				      dra7xx_pcie_resume_noirq)
943 + 	SYSTEM_SLEEP_PM_OPS(dra7xx_pcie_suspend, dra7xx_pcie_resume)
944 + 	NOIRQ_SYSTEM_SLEEP_PM_OPS(dra7xx_pcie_suspend_noirq,
945 + 				  dra7xx_pcie_resume_noirq)
944 946 };
945 947 
946 948 static struct platform_driver dra7xx_pcie_driver = {
+9 -10
drivers/pci/controller/dwc/pci-exynos.c
···
249 249 	return (val & PCIE_ELBI_XMLH_LINKUP);
250 250 }
251 251 
252 - static int exynos_pcie_host_init(struct pcie_port *pp)
252 + static int exynos_pcie_host_init(struct dw_pcie_rp *pp)
253 253 {
254 254 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
255 255 	struct exynos_pcie *ep = to_exynos_pcie(pci);
···
258 258 
259 259 	exynos_pcie_assert_core_reset(ep);
260 260 
261 - 	phy_reset(ep->phy);
262 - 	phy_power_on(ep->phy);
263 261 	phy_init(ep->phy);
262 + 	phy_power_on(ep->phy);
264 263 
265 264 	exynos_pcie_deassert_core_reset(ep);
266 265 	exynos_pcie_enable_irq_pulse(ep);
···
275 276 					  struct platform_device *pdev)
276 277 {
277 278 	struct dw_pcie *pci = &ep->pci;
278 - 	struct pcie_port *pp = &pci->pp;
279 + 	struct dw_pcie_rp *pp = &pci->pp;
279 280 	struct device *dev = &pdev->dev;
280 281 	int ret;
281 282 
···
291 292 	}
292 293 
293 294 	pp->ops = &exynos_pcie_host_ops;
294 - 	pp->msi_irq = -ENODEV;
295 + 	pp->msi_irq[0] = -ENODEV;
295 296 
296 297 	ret = dw_pcie_host_init(pp);
297 298 	if (ret) {
···
389 390 	return 0;
390 391 }
391 392 
392 - static int __maybe_unused exynos_pcie_suspend_noirq(struct device *dev)
393 + static int exynos_pcie_suspend_noirq(struct device *dev)
393 394 {
394 395 	struct exynos_pcie *ep = dev_get_drvdata(dev);
395 396 
···
401 402 	return 0;
402 403 }
403 404 
404 - static int __maybe_unused exynos_pcie_resume_noirq(struct device *dev)
405 + static int exynos_pcie_resume_noirq(struct device *dev)
405 406 {
406 407 	struct exynos_pcie *ep = dev_get_drvdata(dev);
407 408 	struct dw_pcie *pci = &ep->pci;
408 - 	struct pcie_port *pp = &pci->pp;
409 + 	struct dw_pcie_rp *pp = &pci->pp;
409 410 	int ret;
410 411 
411 412 	ret = regulator_bulk_enable(ARRAY_SIZE(ep->supplies), ep->supplies);
···
420 421 }
421 422 
422 423 static const struct dev_pm_ops exynos_pcie_pm_ops = {
423 - 	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(exynos_pcie_suspend_noirq,
424 - 				      exynos_pcie_resume_noirq)
424 + 	NOIRQ_SYSTEM_SLEEP_PM_OPS(exynos_pcie_suspend_noirq,
425 + 				  exynos_pcie_resume_noirq)
425 426 };
426 427 
427 428 static const struct of_device_id exynos_pcie_of_match[] = {
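Besides the `dw_pcie_rp` rename and the PM_OPS conversion, the exynos hunk reorders the generic PHY calls so that `phy_init()` runs before `phy_power_on()`, matching the lifecycle the PHY framework expects. A toy state machine (hypothetical `toy_*` names, not the kernel's `phy_*` API) illustrating why the ordering is enforced:

```c
#include <assert.h>

/* Toy PHY lifecycle: a PHY must be initialized before it can be
 * powered on. This mirrors the init -> power_on ordering the hunk
 * above switches to; it is an illustration, not the kernel API. */
enum toy_phy_state { PHY_IDLE, PHY_INITED, PHY_POWERED };

static enum toy_phy_state toy_state = PHY_IDLE;

static int toy_phy_init(void)
{
	if (toy_state != PHY_IDLE)
		return -1;
	toy_state = PHY_INITED;
	return 0;
}

static int toy_phy_power_on(void)
{
	/* Powering on an uninitialized PHY is rejected. */
	if (toy_state != PHY_INITED)
		return -1;
	toy_state = PHY_POWERED;
	return 0;
}
```

Calling `toy_phy_power_on()` first would fail, which is the kind of latent misordering the diff removes from the driver.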
+369 -313
drivers/pci/controller/dwc/pci-imx6.c
···
67 67 	struct dw_pcie *pci;
68 68 	int reset_gpio;
69 69 	bool gpio_active_high;
70 + 	bool link_is_up;
70 71 	struct clk *pcie_bus;
71 72 	struct clk *pcie_phy;
72 73 	struct clk *pcie_inbound_axi;
···
146 145 #define PHY_RX_OVRD_IN_LO 0x1005
147 146 #define PHY_RX_OVRD_IN_LO_RX_DATA_EN BIT(5)
148 147 #define PHY_RX_OVRD_IN_LO_RX_PLL_EN BIT(3)
148 + 
149 + static unsigned int imx6_pcie_grp_offset(const struct imx6_pcie *imx6_pcie)
150 + {
151 + 	WARN_ON(imx6_pcie->drvdata->variant != IMX8MQ &&
152 + 		imx6_pcie->drvdata->variant != IMX8MM);
153 + 	return imx6_pcie->controller_id == 1 ? IOMUXC_GPR16 : IOMUXC_GPR14;
154 + }
155 + 
156 + static void imx6_pcie_configure_type(struct imx6_pcie *imx6_pcie)
157 + {
158 + 	unsigned int mask, val;
159 + 
160 + 	if (imx6_pcie->drvdata->variant == IMX8MQ &&
161 + 	    imx6_pcie->controller_id == 1) {
162 + 		mask = IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE;
163 + 		val = FIELD_PREP(IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE,
164 + 				 PCI_EXP_TYPE_ROOT_PORT);
165 + 	} else {
166 + 		mask = IMX6Q_GPR12_DEVICE_TYPE;
167 + 		val = FIELD_PREP(IMX6Q_GPR12_DEVICE_TYPE,
168 + 				 PCI_EXP_TYPE_ROOT_PORT);
169 + 	}
170 + 
171 + 	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, mask, val);
172 + }
149 173 
150 174 static int pcie_phy_poll_ack(struct imx6_pcie *imx6_pcie, bool exp_val)
151 175 {
···
297 271 	return 0;
298 272 }
299 273 
274 + static void imx6_pcie_init_phy(struct imx6_pcie *imx6_pcie)
275 + {
276 + 	switch (imx6_pcie->drvdata->variant) {
277 + 	case IMX8MM:
278 + 		/*
279 + 		 * The PHY initialization had been done in the PHY
280 + 		 * driver, break here directly.
281 + 		 */
282 + 		break;
283 + 	case IMX8MQ:
284 + 		/*
285 + 		 * TODO: Currently this code assumes external
286 + 		 * oscillator is being used
287 + 		 */
288 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr,
289 + 				   imx6_pcie_grp_offset(imx6_pcie),
290 + 				   IMX8MQ_GPR_PCIE_REF_USE_PAD,
291 + 				   IMX8MQ_GPR_PCIE_REF_USE_PAD);
292 + 		/*
293 + 		 * Regarding the datasheet, the PCIE_VPH is suggested
294 + 		 * to be 1.8V. If the PCIE_VPH is supplied by 3.3V, the
295 + 		 * VREG_BYPASS should be cleared to zero.
296 + 		 */
297 + 		if (imx6_pcie->vph &&
298 + 		    regulator_get_voltage(imx6_pcie->vph) > 3000000)
299 + 			regmap_update_bits(imx6_pcie->iomuxc_gpr,
300 + 					   imx6_pcie_grp_offset(imx6_pcie),
301 + 					   IMX8MQ_GPR_PCIE_VREG_BYPASS,
302 + 					   0);
303 + 		break;
304 + 	case IMX7D:
305 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
306 + 				   IMX7D_GPR12_PCIE_PHY_REFCLK_SEL, 0);
307 + 		break;
308 + 	case IMX6SX:
309 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
310 + 				   IMX6SX_GPR12_PCIE_RX_EQ_MASK,
311 + 				   IMX6SX_GPR12_PCIE_RX_EQ_2);
312 + 		fallthrough;
313 + 	default:
314 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
315 + 				   IMX6Q_GPR12_PCIE_CTL_2, 0 << 10);
316 + 
317 + 		/* configure constant input signal to the pcie ctrl and phy */
318 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
319 + 				   IMX6Q_GPR12_LOS_LEVEL, 9 << 4);
320 + 
321 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
322 + 				   IMX6Q_GPR8_TX_DEEMPH_GEN1,
323 + 				   imx6_pcie->tx_deemph_gen1 << 0);
324 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
325 + 				   IMX6Q_GPR8_TX_DEEMPH_GEN2_3P5DB,
326 + 				   imx6_pcie->tx_deemph_gen2_3p5db << 6);
327 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
328 + 				   IMX6Q_GPR8_TX_DEEMPH_GEN2_6DB,
329 + 				   imx6_pcie->tx_deemph_gen2_6db << 12);
330 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
331 + 				   IMX6Q_GPR8_TX_SWING_FULL,
332 + 				   imx6_pcie->tx_swing_full << 18);
333 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
334 + 				   IMX6Q_GPR8_TX_SWING_LOW,
335 + 				   imx6_pcie->tx_swing_low << 25);
336 + 		break;
337 + 	}
338 + 
339 + 	imx6_pcie_configure_type(imx6_pcie);
340 + }
341 + 
342 + static void imx7d_pcie_wait_for_phy_pll_lock(struct imx6_pcie *imx6_pcie)
343 + {
344 + 	u32 val;
345 + 	struct device *dev = imx6_pcie->pci->dev;
346 + 
347 + 	if (regmap_read_poll_timeout(imx6_pcie->iomuxc_gpr,
348 + 				     IOMUXC_GPR22, val,
349 + 				     val & IMX7D_GPR22_PCIE_PHY_PLL_LOCKED,
350 + 				     PHY_PLL_LOCK_WAIT_USLEEP_MAX,
351 + 				     PHY_PLL_LOCK_WAIT_TIMEOUT))
352 + 		dev_err(dev, "PCIe PLL lock timeout\n");
353 + }
354 + 
355 + static int imx6_setup_phy_mpll(struct imx6_pcie *imx6_pcie)
356 + {
357 + 	unsigned long phy_rate = clk_get_rate(imx6_pcie->pcie_phy);
358 + 	int mult, div;
359 + 	u16 val;
360 + 
361 + 	if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_IMX6_PHY))
362 + 		return 0;
363 + 
364 + 	switch (phy_rate) {
365 + 	case 125000000:
366 + 		/*
367 + 		 * The default settings of the MPLL are for a 125MHz input
368 + 		 * clock, so no need to reconfigure anything in that case.
369 + 		 */
370 + 		return 0;
371 + 	case 100000000:
372 + 		mult = 25;
373 + 		div = 0;
374 + 		break;
375 + 	case 200000000:
376 + 		mult = 25;
377 + 		div = 1;
378 + 		break;
379 + 	default:
380 + 		dev_err(imx6_pcie->pci->dev,
381 + 			"Unsupported PHY reference clock rate %lu\n", phy_rate);
382 + 		return -EINVAL;
383 + 	}
384 + 
385 + 	pcie_phy_read(imx6_pcie, PCIE_PHY_MPLL_OVRD_IN_LO, &val);
386 + 	val &= ~(PCIE_PHY_MPLL_MULTIPLIER_MASK <<
387 + 		 PCIE_PHY_MPLL_MULTIPLIER_SHIFT);
388 + 	val |= mult << PCIE_PHY_MPLL_MULTIPLIER_SHIFT;
389 + 	val |= PCIE_PHY_MPLL_MULTIPLIER_OVRD;
390 + 	pcie_phy_write(imx6_pcie, PCIE_PHY_MPLL_OVRD_IN_LO, val);
391 + 
392 + 	pcie_phy_read(imx6_pcie, PCIE_PHY_ATEOVRD, &val);
393 + 	val &= ~(PCIE_PHY_ATEOVRD_REF_CLKDIV_MASK <<
394 + 		 PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT);
395 + 	val |= div << PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT;
396 + 	val |= PCIE_PHY_ATEOVRD_EN;
397 + 	pcie_phy_write(imx6_pcie, PCIE_PHY_ATEOVRD, val);
398 + 
399 + 	return 0;
400 + }
401 + 
300 402 static void imx6_pcie_reset_phy(struct imx6_pcie *imx6_pcie)
301 403 {
302 404 	u16 tmp;
···
521 367 	return 0;
522 368 }
523 369 
524 - static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie)
525 - {
526 - 	struct device *dev = imx6_pcie->pci->dev;
527 - 
528 - 	switch (imx6_pcie->drvdata->variant) {
529 - 	case IMX7D:
530 - 	case IMX8MQ:
531 - 		reset_control_assert(imx6_pcie->pciephy_reset);
532 - 		fallthrough;
533 - 	case IMX8MM:
534 - 		reset_control_assert(imx6_pcie->apps_reset);
535 - 		break;
536 - 	case IMX6SX:
537 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
538 - 				   IMX6SX_GPR12_PCIE_TEST_POWERDOWN,
539 - 				   IMX6SX_GPR12_PCIE_TEST_POWERDOWN);
540 - 		/* Force PCIe PHY reset */
541 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR5,
542 - 				   IMX6SX_GPR5_PCIE_BTNRST_RESET,
543 - 				   IMX6SX_GPR5_PCIE_BTNRST_RESET);
544 - 		break;
545 - 	case IMX6QP:
546 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
547 - 				   IMX6Q_GPR1_PCIE_SW_RST,
548 - 				   IMX6Q_GPR1_PCIE_SW_RST);
549 - 		break;
550 - 	case IMX6Q:
551 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
552 - 				   IMX6Q_GPR1_PCIE_TEST_PD, 1 << 18);
553 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
554 - 				   IMX6Q_GPR1_PCIE_REF_CLK_EN, 0 << 16);
555 - 		break;
556 - 	}
557 - 
558 - 	if (imx6_pcie->vpcie && regulator_is_enabled(imx6_pcie->vpcie) > 0) {
559 - 		int ret = regulator_disable(imx6_pcie->vpcie);
560 - 
561 - 		if (ret)
562 - 			dev_err(dev, "failed to disable vpcie regulator: %d\n",
563 - 				ret);
564 - 	}
565 - 
566 - 	/* Some boards don't have PCIe reset GPIO. */
567 - 	if (gpio_is_valid(imx6_pcie->reset_gpio))
568 - 		gpio_set_value_cansleep(imx6_pcie->reset_gpio,
569 - 					imx6_pcie->gpio_active_high);
570 - }
571 - 
572 - static unsigned int imx6_pcie_grp_offset(const struct imx6_pcie *imx6_pcie)
573 - {
574 - 	WARN_ON(imx6_pcie->drvdata->variant != IMX8MQ &&
575 - 		imx6_pcie->drvdata->variant != IMX8MM);
576 - 	return imx6_pcie->controller_id == 1 ? IOMUXC_GPR16 : IOMUXC_GPR14;
577 - }
578 - 
579 370 static int imx6_pcie_enable_ref_clk(struct imx6_pcie *imx6_pcie)
580 371 {
581 372 	struct dw_pcie *pci = imx6_pcie->pci;
···
581 482 	return ret;
582 483 }
583 484 
584 - static void imx7d_pcie_wait_for_phy_pll_lock(struct imx6_pcie *imx6_pcie)
485 + static void imx6_pcie_disable_ref_clk(struct imx6_pcie *imx6_pcie)
585 486 {
586 - 	u32 val;
587 - 	struct device *dev = imx6_pcie->pci->dev;
588 - 
589 - 	if (regmap_read_poll_timeout(imx6_pcie->iomuxc_gpr,
590 - 				     IOMUXC_GPR22, val,
591 - 				     val & IMX7D_GPR22_PCIE_PHY_PLL_LOCKED,
592 - 				     PHY_PLL_LOCK_WAIT_USLEEP_MAX,
593 - 				     PHY_PLL_LOCK_WAIT_TIMEOUT))
594 - 		dev_err(dev, "PCIe PLL lock timeout\n");
487 + 	switch (imx6_pcie->drvdata->variant) {
488 + 	case IMX6SX:
489 + 		clk_disable_unprepare(imx6_pcie->pcie_inbound_axi);
490 + 		break;
491 + 	case IMX6QP:
492 + 	case IMX6Q:
493 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
494 + 				   IMX6Q_GPR1_PCIE_REF_CLK_EN, 0);
495 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
496 + 				   IMX6Q_GPR1_PCIE_TEST_PD,
497 + 				   IMX6Q_GPR1_PCIE_TEST_PD);
498 + 		break;
499 + 	case IMX7D:
500 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
501 + 				   IMX7D_GPR12_PCIE_PHY_REFCLK_SEL,
502 + 				   IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
503 + 		break;
504 + 	case IMX8MM:
505 + 	case IMX8MQ:
506 + 		clk_disable_unprepare(imx6_pcie->pcie_aux);
507 + 		break;
508 + 	default:
509 + 		break;
510 + 	}
595 511 }
596 512 
597 - static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
513 + static int imx6_pcie_clk_enable(struct imx6_pcie *imx6_pcie)
598 514 {
599 515 	struct dw_pcie *pci = imx6_pcie->pci;
600 516 	struct device *dev = pci->dev;
601 517 	int ret;
602 518 
603 - 	if (imx6_pcie->vpcie && !regulator_is_enabled(imx6_pcie->vpcie)) {
604 - 		ret = regulator_enable(imx6_pcie->vpcie);
605 - 		if (ret) {
606 - 			dev_err(dev, "failed to enable vpcie regulator: %d\n",
607 - 				ret);
608 - 			return;
609 - 		}
610 - 	}
611 - 
612 519 	ret = clk_prepare_enable(imx6_pcie->pcie_phy);
613 520 	if (ret) {
614 521 		dev_err(dev, "unable to enable pcie_phy clock\n");
615 - 		goto err_pcie_phy;
522 + 		return ret;
616 523 	}
617 524 
618 525 	ret = clk_prepare_enable(imx6_pcie->pcie_bus);
···
639 534 		goto err_ref_clk;
640 535 	}
641 536 
642 - 	switch (imx6_pcie->drvdata->variant) {
643 - 	case IMX8MM:
644 - 		if (phy_power_on(imx6_pcie->phy))
645 - 			dev_err(dev, "unable to power on PHY\n");
646 - 		break;
647 - 	default:
648 - 		break;
649 - 	}
650 537 	/* allow the clocks to stabilize */
651 538 	usleep_range(200, 500);
539 + 	return 0;
540 + 
541 + err_ref_clk:
542 + 	clk_disable_unprepare(imx6_pcie->pcie);
543 + err_pcie:
544 + 	clk_disable_unprepare(imx6_pcie->pcie_bus);
545 + err_pcie_bus:
546 + 	clk_disable_unprepare(imx6_pcie->pcie_phy);
547 + 
548 + 	return ret;
549 + }
550 + 
551 + static void imx6_pcie_clk_disable(struct imx6_pcie *imx6_pcie)
552 + {
553 + 	imx6_pcie_disable_ref_clk(imx6_pcie);
554 + 	clk_disable_unprepare(imx6_pcie->pcie);
555 + 	clk_disable_unprepare(imx6_pcie->pcie_bus);
556 + 	clk_disable_unprepare(imx6_pcie->pcie_phy);
557 + }
558 + 
559 + static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie)
560 + {
561 + 	switch (imx6_pcie->drvdata->variant) {
562 + 	case IMX7D:
563 + 	case IMX8MQ:
564 + 		reset_control_assert(imx6_pcie->pciephy_reset);
565 + 		fallthrough;
566 + 	case IMX8MM:
567 + 		reset_control_assert(imx6_pcie->apps_reset);
568 + 		break;
569 + 	case IMX6SX:
570 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
571 + 				   IMX6SX_GPR12_PCIE_TEST_POWERDOWN,
572 + 				   IMX6SX_GPR12_PCIE_TEST_POWERDOWN);
573 + 		/* Force PCIe PHY reset */
574 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR5,
575 + 				   IMX6SX_GPR5_PCIE_BTNRST_RESET,
576 + 				   IMX6SX_GPR5_PCIE_BTNRST_RESET);
577 + 		break;
578 + 	case IMX6QP:
579 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
580 + 				   IMX6Q_GPR1_PCIE_SW_RST,
581 + 				   IMX6Q_GPR1_PCIE_SW_RST);
582 + 		break;
583 + 	case IMX6Q:
584 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
585 + 				   IMX6Q_GPR1_PCIE_TEST_PD, 1 << 18);
586 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
587 + 				   IMX6Q_GPR1_PCIE_REF_CLK_EN, 0 << 16);
588 + 		break;
589 + 	}
590 + 
591 + 	/* Some boards don't have PCIe reset GPIO. */
592 + 	if (gpio_is_valid(imx6_pcie->reset_gpio))
593 + 		gpio_set_value_cansleep(imx6_pcie->reset_gpio,
594 + 					imx6_pcie->gpio_active_high);
595 + }
596 + 
597 + static int imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
598 + {
599 + 	struct dw_pcie *pci = imx6_pcie->pci;
600 + 	struct device *dev = pci->dev;
652 601 
653 602 	switch (imx6_pcie->drvdata->variant) {
654 603 	case IMX8MQ:
655 604 		reset_control_deassert(imx6_pcie->pciephy_reset);
656 - 		break;
657 - 	case IMX8MM:
658 - 		if (phy_init(imx6_pcie->phy))
659 - 			dev_err(dev, "waiting for phy ready timeout!\n");
660 605 		break;
661 606 	case IMX7D:
662 607 		reset_control_deassert(imx6_pcie->pciephy_reset);
···
743 588 		usleep_range(200, 500);
744 589 		break;
745 590 	case IMX6Q: /* Nothing to do */
591 + 	case IMX8MM:
746 592 		break;
747 593 	}
748 594 
···
755 599 		/* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */
756 600 		msleep(100);
757 601 	}
758 - 
759 - 	return;
760 - 
761 - err_ref_clk:
762 - 	clk_disable_unprepare(imx6_pcie->pcie);
763 - err_pcie:
764 - 	clk_disable_unprepare(imx6_pcie->pcie_bus);
765 - err_pcie_bus:
766 - 	clk_disable_unprepare(imx6_pcie->pcie_phy);
767 - err_pcie_phy:
768 - 	if (imx6_pcie->vpcie && regulator_is_enabled(imx6_pcie->vpcie) > 0) {
769 - 		ret = regulator_disable(imx6_pcie->vpcie);
770 - 		if (ret)
771 - 			dev_err(dev, "failed to disable vpcie regulator: %d\n",
772 - 				ret);
773 - 	}
774 - }
775 - 
776 - static void imx6_pcie_configure_type(struct imx6_pcie *imx6_pcie)
777 - {
778 - 	unsigned int mask, val;
779 - 
780 - 	if (imx6_pcie->drvdata->variant == IMX8MQ &&
781 - 	    imx6_pcie->controller_id == 1) {
782 - 		mask = IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE;
783 - 		val = FIELD_PREP(IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE,
784 - 				 PCI_EXP_TYPE_ROOT_PORT);
785 - 	} else {
786 - 		mask = IMX6Q_GPR12_DEVICE_TYPE;
787 - 		val = FIELD_PREP(IMX6Q_GPR12_DEVICE_TYPE,
788 - 				 PCI_EXP_TYPE_ROOT_PORT);
789 - 	}
790 - 
791 - 	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, mask, val);
792 - }
793 - 
794 - static void imx6_pcie_init_phy(struct imx6_pcie *imx6_pcie)
795 - {
796 - 	switch (imx6_pcie->drvdata->variant) {
797 - 	case IMX8MM:
798 - 		/*
799 - 		 * The PHY initialization had been done in the PHY
800 - 		 * driver, break here directly.
801 - 		 */
802 - 		break;
803 - 	case IMX8MQ:
804 - 		/*
805 - 		 * TODO: Currently this code assumes external
806 - 		 * oscillator is being used
807 - 		 */
808 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr,
809 - 				   imx6_pcie_grp_offset(imx6_pcie),
810 - 				   IMX8MQ_GPR_PCIE_REF_USE_PAD,
811 - 				   IMX8MQ_GPR_PCIE_REF_USE_PAD);
812 - 		/*
813 - 		 * Regarding the datasheet, the PCIE_VPH is suggested
814 - 		 * to be 1.8V. If the PCIE_VPH is supplied by 3.3V, the
815 - 		 * VREG_BYPASS should be cleared to zero.
816 - 		 */
817 - 		if (imx6_pcie->vph &&
818 - 		    regulator_get_voltage(imx6_pcie->vph) > 3000000)
819 - 			regmap_update_bits(imx6_pcie->iomuxc_gpr,
820 - 					   imx6_pcie_grp_offset(imx6_pcie),
821 - 					   IMX8MQ_GPR_PCIE_VREG_BYPASS,
822 - 					   0);
823 - 		break;
824 - 	case IMX7D:
825 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
826 - 				   IMX7D_GPR12_PCIE_PHY_REFCLK_SEL, 0);
827 - 		break;
828 - 	case IMX6SX:
829 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
830 - 				   IMX6SX_GPR12_PCIE_RX_EQ_MASK,
831 - 				   IMX6SX_GPR12_PCIE_RX_EQ_2);
832 - 		fallthrough;
833 - 	default:
834 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
835 - 				   IMX6Q_GPR12_PCIE_CTL_2, 0 << 10);
836 - 
837 - 		/* configure constant input signal to the pcie ctrl and phy */
838 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
839 - 				   IMX6Q_GPR12_LOS_LEVEL, 9 << 4);
840 - 
841 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
842 - 				   IMX6Q_GPR8_TX_DEEMPH_GEN1,
843 - 				   imx6_pcie->tx_deemph_gen1 << 0);
844 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
845 - 				   IMX6Q_GPR8_TX_DEEMPH_GEN2_3P5DB,
846 - 				   imx6_pcie->tx_deemph_gen2_3p5db << 6);
847 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
848 - 				   IMX6Q_GPR8_TX_DEEMPH_GEN2_6DB,
849 - 				   imx6_pcie->tx_deemph_gen2_6db << 12);
850 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
851 - 				   IMX6Q_GPR8_TX_SWING_FULL,
852 - 				   imx6_pcie->tx_swing_full << 18);
853 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR8,
854 - 				   IMX6Q_GPR8_TX_SWING_LOW,
855 - 				   imx6_pcie->tx_swing_low << 25);
856 - 		break;
857 - 	}
858 - 
859 - 	imx6_pcie_configure_type(imx6_pcie);
860 - }
861 - 
862 - static int imx6_setup_phy_mpll(struct imx6_pcie *imx6_pcie)
863 - {
864 - 	unsigned long phy_rate = clk_get_rate(imx6_pcie->pcie_phy);
865 - 	int mult, div;
866 - 	u16 val;
867 - 
868 - 	if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_IMX6_PHY))
869 - 		return 0;
870 - 
871 - 	switch (phy_rate) {
872 - 	case 125000000:
873 - 		/*
874 - 		 * The default settings of the MPLL are for a 125MHz input
875 - 		 * clock, so no need to reconfigure anything in that case.
876 - 		 */
877 - 		return 0;
878 - 	case 100000000:
879 - 		mult = 25;
880 - 		div = 0;
881 - 		break;
882 - 	case 200000000:
883 - 		mult = 25;
884 - 		div = 1;
885 - 		break;
886 - 	default:
887 - 		dev_err(imx6_pcie->pci->dev,
888 - 			"Unsupported PHY reference clock rate %lu\n", phy_rate);
889 - 		return -EINVAL;
890 - 	}
891 - 
892 - 	pcie_phy_read(imx6_pcie, PCIE_PHY_MPLL_OVRD_IN_LO, &val);
893 - 	val &= ~(PCIE_PHY_MPLL_MULTIPLIER_MASK <<
894 - 		 PCIE_PHY_MPLL_MULTIPLIER_SHIFT);
895 - 	val |= mult << PCIE_PHY_MPLL_MULTIPLIER_SHIFT;
896 - 	val |= PCIE_PHY_MPLL_MULTIPLIER_OVRD;
897 - 	pcie_phy_write(imx6_pcie, PCIE_PHY_MPLL_OVRD_IN_LO, val);
898 - 
899 - 	pcie_phy_read(imx6_pcie, PCIE_PHY_ATEOVRD, &val);
900 - 	val &= ~(PCIE_PHY_ATEOVRD_REF_CLKDIV_MASK <<
901 - 		 PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT);
902 - 	val |= div << PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT;
903 - 	val |= PCIE_PHY_ATEOVRD_EN;
904 - 	pcie_phy_write(imx6_pcie, PCIE_PHY_ATEOVRD, val);
905 602 
906 603 	return 0;
907 604 }
···
798 789 	}
799 790 }
800 791 
792 + static void imx6_pcie_ltssm_disable(struct device *dev)
793 + {
794 + 	struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
795 + 
796 + 	switch (imx6_pcie->drvdata->variant) {
797 + 	case IMX6Q:
798 + 	case IMX6SX:
799 + 	case IMX6QP:
800 + 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
801 + 				   IMX6Q_GPR12_PCIE_CTL_2, 0);
802 + 		break;
803 + 	case IMX7D:
804 + 	case IMX8MQ:
805 + 	case IMX8MM:
806 + 		reset_control_assert(imx6_pcie->apps_reset);
807 + 		break;
808 + 	}
809 + }
810 + 
801 811 static int imx6_pcie_start_link(struct dw_pcie *pci)
802 812 {
803 813 	struct imx6_pcie *imx6_pcie = to_imx6_pcie(pci);
···
830 802 	 * started in Gen2 mode, there is a possibility the devices on the
831 803 	 * bus will not be detected at all.  This happens with PCIe switches.
832 804 	 */
805 + 	dw_pcie_dbi_ro_wr_en(pci);
833 806 	tmp = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
834 807 	tmp &= ~PCI_EXP_LNKCAP_SLS;
835 808 	tmp |= PCI_EXP_LNKCAP_SLS_2_5GB;
836 809 	dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, tmp);
810 + 	dw_pcie_dbi_ro_wr_dis(pci);
837 811 
838 812 	/* Start LTSSM. */
839 813 	imx6_pcie_ltssm_enable(dev);
840 814 
841 - 	dw_pcie_wait_for_link(pci);
815 + 	ret = dw_pcie_wait_for_link(pci);
816 + 	if (ret)
817 + 		goto err_reset_phy;
842 818 
843 - 	if (pci->link_gen == 2) {
844 - 		/* Allow Gen2 mode after the link is up. */
819 + 	if (pci->link_gen > 1) {
820 + 		/* Allow faster modes after the link is up */
821 + 		dw_pcie_dbi_ro_wr_en(pci);
845 822 		tmp = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
846 823 		tmp &= ~PCI_EXP_LNKCAP_SLS;
847 - 		tmp |= PCI_EXP_LNKCAP_SLS_5_0GB;
824 + 		tmp |= pci->link_gen;
848 825 		dw_pcie_writel_dbi(pci, offset + PCI_EXP_LNKCAP, tmp);
849 826 
850 827 		/*
···
859 826 		tmp = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL);
860 827 		tmp |= PORT_LOGIC_SPEED_CHANGE;
861 828 		dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, tmp);
829 + 		dw_pcie_dbi_ro_wr_dis(pci);
862 830 
863 831 		if (imx6_pcie->drvdata->flags &
864 832 		    IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE) {
···
880 846 		}
881 847 
882 848 		/* Make sure link training is finished as well! */
883 - 		dw_pcie_wait_for_link(pci);
849 + 		ret = dw_pcie_wait_for_link(pci);
850 + 		if (ret)
851 + 			goto err_reset_phy;
884 852 	} else {
885 - 		dev_info(dev, "Link: Gen2 disabled\n");
853 + 		dev_info(dev, "Link: Only Gen1 is enabled\n");
886 854 	}
887 855 
856 + 	imx6_pcie->link_is_up = true;
888 857 	tmp = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA);
889 858 	dev_info(dev, "Link up, Gen%i\n", tmp & PCI_EXP_LNKSTA_CLS);
890 859 	return 0;
891 860 
892 861 err_reset_phy:
862 + 	imx6_pcie->link_is_up = false;
893 863 	dev_dbg(dev, "PHY DEBUG_R0=0x%08x DEBUG_R1=0x%08x\n",
894 864 		dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG0),
895 865 		dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG1));
896 866 	imx6_pcie_reset_phy(imx6_pcie);
867 + 	return 0;
868 + }
869 + 
870 + static void imx6_pcie_stop_link(struct dw_pcie *pci)
871 + {
872 + 	struct device *dev = pci->dev;
873 + 
874 + 	/* Turn off PCIe LTSSM */
875 + 	imx6_pcie_ltssm_disable(dev);
876 + }
877 + 
878 + static int imx6_pcie_host_init(struct dw_pcie_rp *pp)
879 + {
880 + 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
881 + 	struct device *dev = pci->dev;
882 + 	struct imx6_pcie *imx6_pcie = to_imx6_pcie(pci);
883 + 	int ret;
884 + 
885 + 	if (imx6_pcie->vpcie) {
886 + 		ret = regulator_enable(imx6_pcie->vpcie);
887 + 		if (ret) {
888 + 			dev_err(dev, "failed to enable vpcie regulator: %d\n",
889 + 				ret);
890 + 			return ret;
891 + 		}
892 + 	}
893 + 
894 + 	imx6_pcie_assert_core_reset(imx6_pcie);
895 + 	imx6_pcie_init_phy(imx6_pcie);
896 + 
897 + 	ret = imx6_pcie_clk_enable(imx6_pcie);
898 + 	if (ret) {
899 + 		dev_err(dev, "unable to enable pcie clocks: %d\n", ret);
900 + 		goto err_reg_disable;
901 + 	}
902 + 
903 + 	if (imx6_pcie->phy) {
904 + 		ret = phy_power_on(imx6_pcie->phy);
905 + 		if (ret) {
906 + 			dev_err(dev, "pcie PHY power up failed\n");
907 + 			goto err_clk_disable;
908 + 		}
909 + 	}
910 + 
911 + 	ret = imx6_pcie_deassert_core_reset(imx6_pcie);
912 + 	if (ret < 0) {
913 + 		dev_err(dev, "pcie deassert core reset failed: %d\n", ret);
914 + 		goto err_phy_off;
915 + 	}
916 + 
917 + 	if (imx6_pcie->phy) {
918 + 		ret = phy_init(imx6_pcie->phy);
919 + 		if (ret) {
920 + 			dev_err(dev, "waiting for PHY ready timeout!\n");
921 + 			goto err_phy_off;
922 + 		}
923 + 	}
924 + 	imx6_setup_phy_mpll(imx6_pcie);
925 + 
926 + 	return 0;
927 + 
928 + err_phy_off:
929 + 	if (imx6_pcie->phy)
930 + 		phy_power_off(imx6_pcie->phy);
931 + err_clk_disable:
932 + 	imx6_pcie_clk_disable(imx6_pcie);
933 + err_reg_disable:
934 + 	if (imx6_pcie->vpcie)
935 + 		regulator_disable(imx6_pcie->vpcie);
897 936 	return ret;
898 937 }
899 938 
900 - static int imx6_pcie_host_init(struct pcie_port *pp)
939 + static void imx6_pcie_host_exit(struct dw_pcie_rp *pp)
901 940 {
902 941 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
903 942 	struct imx6_pcie *imx6_pcie = to_imx6_pcie(pci);
904 943 
905 - 	imx6_pcie_assert_core_reset(imx6_pcie);
906 - 	imx6_pcie_init_phy(imx6_pcie);
907 - 	imx6_pcie_deassert_core_reset(imx6_pcie);
908 - 	imx6_setup_phy_mpll(imx6_pcie);
944 + 	if (imx6_pcie->phy) {
945 + 		if (phy_power_off(imx6_pcie->phy))
946 + 			dev_err(pci->dev, "unable to power off PHY\n");
947 + 		phy_exit(imx6_pcie->phy);
948 + 	}
949 + 	imx6_pcie_clk_disable(imx6_pcie);
909 950 
910 - 	return 0;
951 + 	if (imx6_pcie->vpcie)
952 + 		regulator_disable(imx6_pcie->vpcie);
911 953 }
912 954 
913 955 static const struct dw_pcie_host_ops imx6_pcie_host_ops = {
···
993 883 static const struct dw_pcie_ops dw_pcie_ops = {
994 884 	.start_link = imx6_pcie_start_link,
995 885 };
996 - 
997 - #ifdef CONFIG_PM_SLEEP
998 - static void imx6_pcie_ltssm_disable(struct device *dev)
999 - {
1000 - 	struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
1001 - 
1002 - 	switch (imx6_pcie->drvdata->variant) {
1003 - 	case IMX6SX:
1004 - 	case IMX6QP:
1005 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
1006 - 				   IMX6Q_GPR12_PCIE_CTL_2, 0);
1007 - 		break;
1008 - 	case IMX7D:
1009 - 	case IMX8MM:
1010 - 		reset_control_assert(imx6_pcie->apps_reset);
1011 - 		break;
1012 - 	default:
1013 - 		dev_err(dev, "ltssm_disable not supported\n");
1014 - 	}
1015 - }
1016 886 
1017 887 static void imx6_pcie_pm_turnoff(struct imx6_pcie *imx6_pcie)
1018 888 {
···
1031 941 	usleep_range(1000, 10000);
1032 942 }
1033 943 
1034 - static void imx6_pcie_clk_disable(struct imx6_pcie *imx6_pcie)
1035 - {
1036 - 	clk_disable_unprepare(imx6_pcie->pcie);
1037 - 	clk_disable_unprepare(imx6_pcie->pcie_phy);
1038 - 	clk_disable_unprepare(imx6_pcie->pcie_bus);
1039 - 
1040 - 	switch (imx6_pcie->drvdata->variant) {
1041 - 	case IMX6SX:
1042 - 		clk_disable_unprepare(imx6_pcie->pcie_inbound_axi);
1043 - 		break;
1044 - 	case IMX7D:
1045 - 		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12,
1046 - 				   IMX7D_GPR12_PCIE_PHY_REFCLK_SEL,
1047 - 				   IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
1048 - 		break;
1049 - 	case IMX8MQ:
1050 - 	case IMX8MM:
1051 - 		clk_disable_unprepare(imx6_pcie->pcie_aux);
1052 - 		break;
1053 - 	default:
1054 - 		break;
1055 - 	}
1056 - }
1057 - 
1058 944 static int imx6_pcie_suspend_noirq(struct device *dev)
1059 945 {
1060 946 	struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
947 + 	struct dw_pcie_rp *pp = &imx6_pcie->pci->pp;
1061 948 
1062 949 	if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_SUPPORTS_SUSPEND))
1063 950 		return 0;
1064 951 
1065 952 	imx6_pcie_pm_turnoff(imx6_pcie);
1066 - 	imx6_pcie_ltssm_disable(dev);
1067 - 	imx6_pcie_clk_disable(imx6_pcie);
1068 - 	switch (imx6_pcie->drvdata->variant) {
1069 - 	case IMX8MM:
1070 - 		if (phy_power_off(imx6_pcie->phy))
1071 - 			dev_err(dev, "unable to power off PHY\n");
1072 - 		phy_exit(imx6_pcie->phy);
1073 - 		break;
1074 - 	default:
1075 - 		break;
1076 - 	}
953 + 	imx6_pcie_stop_link(imx6_pcie->pci);
954 + 	imx6_pcie_host_exit(pp);
1077 955 
1078 956 	return 0;
1079 957 }
···
1050 992 {
1051 993 	int ret;
1052 994 	struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
1053 - 	struct pcie_port *pp = &imx6_pcie->pci->pp;
995 + 	struct dw_pcie_rp *pp = &imx6_pcie->pci->pp;
1054 996 
1055 997 	if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_SUPPORTS_SUSPEND))
1056 998 		return 0;
1057 999 
1058 - 	imx6_pcie_assert_core_reset(imx6_pcie);
1059 - 	imx6_pcie_init_phy(imx6_pcie);
1060 - 	imx6_pcie_deassert_core_reset(imx6_pcie);
1000 + 	ret = imx6_pcie_host_init(pp);
1001 + 	if (ret)
1002 + 		return ret;
1061 1003 	dw_pcie_setup_rc(pp);
1062 1004 
1063 - 	ret = imx6_pcie_start_link(imx6_pcie->pci);
1064 - 	if (ret < 0)
1065 - 		dev_info(dev, "pcie link is down after resume.\n");
1005 + 	if (imx6_pcie->link_is_up)
1006 + 		imx6_pcie_start_link(imx6_pcie->pci);
1066 1007 
1067 1008 	return 0;
1068 1009 }
1069 - #endif
1070 1010 
1071 1011 static const struct dev_pm_ops imx6_pcie_pm_ops = {
1072 - 	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx6_pcie_suspend_noirq,
1073 - 				      imx6_pcie_resume_noirq)
1012 + 	NOIRQ_SYSTEM_SLEEP_PM_OPS(imx6_pcie_suspend_noirq,
1013 + 				  imx6_pcie_resume_noirq)
1074 1014 };
1075 1015 
1076 1016 static int imx6_pcie_probe(struct platform_device *pdev)
···
1347 1291 static void imx6_pcie_quirk(struct pci_dev *dev)
1348 1292 {
1349 1293 	struct pci_bus *bus = dev->bus;
1350 - 	struct pcie_port *pp = bus->sysdata;
1294 + 	struct dw_pcie_rp *pp = bus->sysdata;
1351 1295 
1352 1296 	/* Bus parent is the PCI bridge, its parent is this platform driver */
1353 1297 	if (!bus->dev.parent || !bus->dev.parent->parent)
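The imx6 link bring-up above always starts the LTSSM with the Link Capabilities "Supported Link Speeds" field capped to 2.5 GT/s, then widens it and retrains once the link is up. The field manipulation itself is plain read-modify-write on the low bits of LNKCAP; a self-contained sketch of that step (the SLS encodings below follow the PCIe spec values used by `PCI_EXP_LNKCAP_SLS_*`, the helper name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Supported Link Speeds: low 4 bits of the Link Capabilities register.
 * 1 = 2.5 GT/s (Gen1), 2 = 5.0 GT/s (Gen2), matching PCI_EXP_LNKCAP_SLS_*. */
#define LNKCAP_SLS_MASK   0xfu
#define LNKCAP_SLS_2_5GB  0x1u
#define LNKCAP_SLS_5_0GB  0x2u

/* Rewrite the advertised supported-speed value, leaving the rest of
 * LNKCAP (link width, ASPM support, etc.) untouched. */
static uint32_t cap_link_speed(uint32_t lnkcap, uint32_t sls)
{
	lnkcap &= ~LNKCAP_SLS_MASK;  /* clear current supported-speeds value */
	lnkcap |= sls;               /* advertise only the requested gen */
	return lnkcap;
}
```

The driver does this twice: once with the Gen1 value before starting the LTSSM (so Gen1-only devices behind switches are detected), and once with the real target (`pci->link_gen`) before setting `PORT_LOGIC_SPEED_CHANGE` to retrain, with the DBI write-enable window (`dw_pcie_dbi_ro_wr_en`/`_dis`) added by this hunk because LNKCAP is read-only otherwise.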
+17 -17
drivers/pci/controller/dwc/pci-keystone.c
··· 109 109 enum dw_pcie_device_mode mode; 110 110 const struct dw_pcie_host_ops *host_ops; 111 111 const struct dw_pcie_ep_ops *ep_ops; 112 - unsigned int version; 112 + u32 version; 113 113 }; 114 114 115 115 struct keystone_pcie { ··· 147 147 148 148 static void ks_pcie_msi_irq_ack(struct irq_data *data) 149 149 { 150 - struct pcie_port *pp = irq_data_get_irq_chip_data(data); 150 + struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(data); 151 151 struct keystone_pcie *ks_pcie; 152 152 u32 irq = data->hwirq; 153 153 struct dw_pcie *pci; ··· 167 167 168 168 static void ks_pcie_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) 169 169 { 170 - struct pcie_port *pp = irq_data_get_irq_chip_data(data); 170 + struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(data); 171 171 struct keystone_pcie *ks_pcie; 172 172 struct dw_pcie *pci; 173 173 u64 msi_target; ··· 192 192 193 193 static void ks_pcie_msi_mask(struct irq_data *data) 194 194 { 195 - struct pcie_port *pp = irq_data_get_irq_chip_data(data); 195 + struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(data); 196 196 struct keystone_pcie *ks_pcie; 197 197 u32 irq = data->hwirq; 198 198 struct dw_pcie *pci; ··· 216 216 217 217 static void ks_pcie_msi_unmask(struct irq_data *data) 218 218 { 219 - struct pcie_port *pp = irq_data_get_irq_chip_data(data); 219 + struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(data); 220 220 struct keystone_pcie *ks_pcie; 221 221 u32 irq = data->hwirq; 222 222 struct dw_pcie *pci; ··· 247 247 .irq_unmask = ks_pcie_msi_unmask, 248 248 }; 249 249 250 - static int ks_pcie_msi_host_init(struct pcie_port *pp) 250 + static int ks_pcie_msi_host_init(struct dw_pcie_rp *pp) 251 251 { 252 252 pp->msi_irq_chip = &ks_pcie_msi_irq_chip; 253 253 return dw_pcie_allocate_domains(pp); ··· 390 390 u32 val; 391 391 u32 num_viewport = ks_pcie->num_viewport; 392 392 struct dw_pcie *pci = ks_pcie->pci; 393 - struct pcie_port *pp = &pci->pp; 393 + struct dw_pcie_rp *pp = &pci->pp; 394 394 u64 start, 
end; 395 395 struct resource *mem; 396 396 int i; ··· 428 428 static void __iomem *ks_pcie_other_map_bus(struct pci_bus *bus, 429 429 unsigned int devfn, int where) 430 430 { 431 - struct pcie_port *pp = bus->sysdata; 431 + struct dw_pcie_rp *pp = bus->sysdata; 432 432 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 433 433 struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 434 434 u32 reg; ··· 456 456 */ 457 457 static int ks_pcie_v3_65_add_bus(struct pci_bus *bus) 458 458 { 459 - struct pcie_port *pp = bus->sysdata; 459 + struct dw_pcie_rp *pp = bus->sysdata; 460 460 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 461 461 struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 462 462 ··· 574 574 struct keystone_pcie *ks_pcie = irq_desc_get_handler_data(desc); 575 575 u32 offset = irq - ks_pcie->msi_host_irq; 576 576 struct dw_pcie *pci = ks_pcie->pci; 577 - struct pcie_port *pp = &pci->pp; 577 + struct dw_pcie_rp *pp = &pci->pp; 578 578 struct device *dev = pci->dev; 579 579 struct irq_chip *chip = irq_desc_get_chip(desc); 580 580 u32 vector, reg, pos; ··· 799 799 return 0; 800 800 } 801 801 802 - static int __init ks_pcie_host_init(struct pcie_port *pp) 802 + static int __init ks_pcie_host_init(struct dw_pcie_rp *pp) 803 803 { 804 804 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 805 805 struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); ··· 1069 1069 1070 1070 static const struct ks_pcie_of_data ks_pcie_rc_of_data = { 1071 1071 .host_ops = &ks_pcie_host_ops, 1072 - .version = 0x365A, 1072 + .version = DW_PCIE_VER_365A, 1073 1073 }; 1074 1074 1075 1075 static const struct ks_pcie_of_data ks_pcie_am654_rc_of_data = { 1076 1076 .host_ops = &ks_pcie_am654_host_ops, 1077 1077 .mode = DW_PCIE_RC_TYPE, 1078 - .version = 0x490A, 1078 + .version = DW_PCIE_VER_490A, 1079 1079 }; 1080 1080 1081 1081 static const struct ks_pcie_of_data ks_pcie_am654_ep_of_data = { 1082 1082 .ep_ops = &ks_pcie_am654_ep_ops, 1083 1083 .mode = DW_PCIE_EP_TYPE, 1084 - .version = 0x490A, 
1084 + .version = DW_PCIE_VER_490A, 1085 1085 }; 1086 1086 1087 1087 static const struct of_device_id ks_pcie_of_match[] = { ··· 1114 1114 struct device_link **link; 1115 1115 struct gpio_desc *gpiod; 1116 1116 struct resource *res; 1117 - unsigned int version; 1118 1117 void __iomem *base; 1119 1118 u32 num_viewport; 1120 1119 struct phy **phy; 1121 1120 u32 num_lanes; 1122 1121 char name[10]; 1122 + u32 version; 1123 1123 int ret; 1124 1124 int irq; 1125 1125 int i; ··· 1233 1233 goto err_get_sync; 1234 1234 } 1235 1235 1236 - if (pci->version >= 0x480A) 1236 + if (dw_pcie_ver_is_ge(pci, 480A)) 1237 1237 ret = ks_pcie_am654_set_mode(dev, mode); 1238 1238 else 1239 1239 ret = ks_pcie_set_mode(dev); ··· 1324 1324 .remove = __exit_p(ks_pcie_remove), 1325 1325 .driver = { 1326 1326 .name = "keystone-pcie", 1327 - .of_match_table = of_match_ptr(ks_pcie_of_match), 1327 + .of_match_table = ks_pcie_of_match, 1328 1328 }, 1329 1329 }; 1330 1330 builtin_platform_driver(ks_pcie_driver);
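The keystone hunks replace raw hex literals (`0x365A`, `0x490A`) and the open-coded `pci->version >= 0x480A` test with named `DW_PCIE_VER_*` constants and `dw_pcie_ver_is_ge()`. The exact encoding of those constants lives in pcie-designware.h and is not reproduced here; this sketch only assumes the property the change relies on — the encoded `u32` values order the same way as the IP releases, so a plain `>=` works as a gate:

```c
#include <assert.h>
#include <stdint.h>

/* FAKE_VER and the FAKE_VER_* values below are invented for this sketch;
 * they are NOT the kernel's DW_PCIE_VER_* encoding, only a stand-in with
 * the same monotonic-ordering property. */
#define FAKE_VER(major, minor, rev)	\
	((uint32_t)(((major) << 16) | ((minor) << 8) | (rev)))

#define FAKE_VER_365A	FAKE_VER(3, 0x65, 'a')
#define FAKE_VER_480A	FAKE_VER(4, 0x80, 'a')
#define FAKE_VER_490A	FAKE_VER(4, 0x90, 'a')

/* analogue of dw_pcie_ver_is_ge(pci, 480A) */
static int fake_ver_is_ge(uint32_t have, uint32_t want)
{
	return have >= want;
}
```

The win is readability and a single definition point: `dw_pcie_ver_is_ge(pci, 480A)` says what is being compared, where `pci->version >= 0x480A` left the encoding implicit at every call site.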
-12
drivers/pci/controller/dwc/pci-layerscape-ep.c
···
 32  32 	const struct ls_pcie_ep_drvdata *drvdata;
 33  33 };
 34  34 
 35 - static int ls_pcie_establish_link(struct dw_pcie *pci)
 36 - {
 37 - 	return 0;
 38 - }
 39 - 
 40 - static const struct dw_pcie_ops dw_ls_pcie_ep_ops = {
 41 - 	.start_link = ls_pcie_establish_link,
 42 - };
 43 - 
 44  35 static const struct pci_epc_features*
 45  36 ls_pcie_ep_get_features(struct dw_pcie_ep *ep)
 46  37 {
···
 97 106 
 98 107 static const struct ls_pcie_ep_drvdata ls1_ep_drvdata = {
 99 108 	.ops = &ls_pcie_ep_ops,
100 - 	.dw_pcie_ops = &dw_ls_pcie_ep_ops,
101 109 };
102 110 
103 111 static const struct ls_pcie_ep_drvdata ls2_ep_drvdata = {
104 112 	.func_offset = 0x20000,
105 113 	.ops = &ls_pcie_ep_ops,
106 - 	.dw_pcie_ops = &dw_ls_pcie_ep_ops,
107 114 };
108 115 
109 116 static const struct ls_pcie_ep_drvdata lx2_ep_drvdata = {
110 117 	.func_offset = 0x8000,
111 118 	.ops = &ls_pcie_ep_ops,
112 - 	.dw_pcie_ops = &dw_ls_pcie_ep_ops,
113 119 };
114 120 
115 121 static const struct of_device_id ls_pcie_ep_of_match[] = {
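The layerscape-ep hunk deletes a `start_link` callback that only returned 0. That works because the core now wraps the callback and treats a missing `.start_link` as "nothing to do, success" — a sketch of that pattern, with illustrative types and names (the real wrapper is `dw_pcie_start_link()` in this series):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical analogue of a NULL-tolerant ops wrapper: drivers that
 * have no link-start work simply register no callback instead of a
 * stub that returns 0. */
struct fake_dw_pcie_ops {
	int (*start_link)(void *pci);
};

struct fake_dw_pcie {
	const struct fake_dw_pcie_ops *ops;
};

static int fake_dw_pcie_start_link(struct fake_dw_pcie *pci)
{
	if (pci->ops && pci->ops->start_link)
		return pci->ops->start_link(pci);

	return 0;	/* no callback registered: report success */
}

static int fake_demo_no_ops(void)
{
	struct fake_dw_pcie pci = { .ops = NULL };

	return fake_dw_pcie_start_link(&pci);
}
```

The same reasoning lets pcie-designware-plat.c drop its identical `dw_plat_pcie_establish_link()` stub further down.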
+1 -1
drivers/pci/controller/dwc/pci-layerscape.c
···
 74  74 	iowrite32(PCIE_ABSERR_SETTING, pci->dbi_base + PCIE_ABSERR);
 75  75 }
 76  76 
 77 - static int ls_pcie_host_init(struct pcie_port *pp)
 77 + static int ls_pcie_host_init(struct dw_pcie_rp *pp)
 78  78 {
 79  79 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 80  80 	struct ls_pcie *pcie = to_ls_pcie(pci);
+1 -1
drivers/pci/controller/dwc/pci-meson.c
···
370 370 	return 0;
371 371 }
372 372 
373 - static int meson_pcie_host_init(struct pcie_port *pp)
373 + static int meson_pcie_host_init(struct dw_pcie_rp *pp)
374 374 {
375 375 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
376 376 	struct meson_pcie *mp = to_meson_pcie(pci);
+3 -3
drivers/pci/controller/dwc/pcie-al.c
···
217 217 static void __iomem *al_pcie_conf_addr_map_bus(struct pci_bus *bus,
218 218 					       unsigned int devfn, int where)
219 219 {
220 - 	struct pcie_port *pp = bus->sysdata;
220 + 	struct dw_pcie_rp *pp = bus->sysdata;
221 221 	struct al_pcie *pcie = to_al_pcie(to_dw_pcie_from_pp(pp));
222 222 	unsigned int busnr = bus->number;
223 223 	struct al_pcie_target_bus_cfg *target_bus_cfg = &pcie->target_bus_cfg;
···
245 245 static void al_pcie_config_prepare(struct al_pcie *pcie)
246 246 {
247 247 	struct al_pcie_target_bus_cfg *target_bus_cfg;
248 - 	struct pcie_port *pp = &pcie->pci->pp;
248 + 	struct dw_pcie_rp *pp = &pcie->pci->pp;
249 249 	unsigned int ecam_bus_mask;
250 250 	u32 cfg_control_offset;
251 251 	u8 subordinate_bus;
···
289 289 	al_pcie_controller_writel(pcie, cfg_control_offset, reg);
290 290 }
291 291 
292 - static int al_pcie_host_init(struct pcie_port *pp)
292 + static int al_pcie_host_init(struct dw_pcie_rp *pp)
293 293 {
294 294 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
295 295 	struct al_pcie *pcie = to_al_pcie(pci);
+3 -3
drivers/pci/controller/dwc/pcie-armada8k.c
···
166 166 	return 0;
167 167 }
168 168 
169 - static int armada8k_pcie_host_init(struct pcie_port *pp)
169 + static int armada8k_pcie_host_init(struct dw_pcie_rp *pp)
170 170 {
171 171 	u32 reg;
172 172 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
···
233 233 				struct platform_device *pdev)
234 234 {
235 235 	struct dw_pcie *pci = pcie->pci;
236 - 	struct pcie_port *pp = &pci->pp;
236 + 	struct dw_pcie_rp *pp = &pci->pp;
237 237 	struct device *dev = &pdev->dev;
238 238 	int ret;
239 239 
···
343 343 	.probe = armada8k_pcie_probe,
344 344 	.driver = {
345 345 		.name = "armada8k-pcie",
346 - 		.of_match_table = of_match_ptr(armada8k_pcie_of_match),
346 + 		.of_match_table = armada8k_pcie_of_match,
347 347 		.suppress_bind_attrs = true,
348 348 	},
349 349 };
+2 -2
drivers/pci/controller/dwc/pcie-artpec6.c
···
 97  97 static u64 artpec6_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 pci_addr)
 98  98 {
 99  99 	struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pci);
100 - 	struct pcie_port *pp = &pci->pp;
100 + 	struct dw_pcie_rp *pp = &pci->pp;
101 101 	struct dw_pcie_ep *ep = &pci->ep;
102 102 
103 103 	switch (artpec6_pcie->mode) {
···
315 315 	usleep_range(100, 200);
316 316 }
317 317 
318 - static int artpec6_pcie_host_init(struct pcie_port *pp)
318 + static int artpec6_pcie_host_init(struct dw_pcie_rp *pp)
319 319 {
320 320 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
321 321 	struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pci);
+45 -37
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 154 154 return 0; 155 155 } 156 156 157 - static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, 158 - enum pci_barno bar, dma_addr_t cpu_addr, 159 - enum dw_pcie_as_type as_type) 157 + static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type, 158 + dma_addr_t cpu_addr, enum pci_barno bar) 160 159 { 161 160 int ret; 162 161 u32 free_win; ··· 167 168 return -EINVAL; 168 169 } 169 170 170 - ret = dw_pcie_prog_inbound_atu(pci, func_no, free_win, bar, cpu_addr, 171 - as_type); 171 + ret = dw_pcie_prog_inbound_atu(pci, func_no, free_win, type, 172 + cpu_addr, bar); 172 173 if (ret < 0) { 173 174 dev_err(pci->dev, "Failed to program IB window\n"); 174 175 return ret; ··· 184 185 phys_addr_t phys_addr, 185 186 u64 pci_addr, size_t size) 186 187 { 187 - u32 free_win; 188 188 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 189 + u32 free_win; 190 + int ret; 189 191 190 192 free_win = find_first_zero_bit(ep->ob_window_map, pci->num_ob_windows); 191 193 if (free_win >= pci->num_ob_windows) { ··· 194 194 return -EINVAL; 195 195 } 196 196 197 - dw_pcie_prog_ep_outbound_atu(pci, func_no, free_win, PCIE_ATU_TYPE_MEM, 198 - phys_addr, pci_addr, size); 197 + ret = dw_pcie_prog_ep_outbound_atu(pci, func_no, free_win, PCIE_ATU_TYPE_MEM, 198 + phys_addr, pci_addr, size); 199 + if (ret) 200 + return ret; 199 201 200 202 set_bit(free_win, ep->ob_window_map); 201 203 ep->outbound_addr[free_win] = phys_addr; ··· 215 213 216 214 __dw_pcie_ep_reset_bar(pci, func_no, bar, epf_bar->flags); 217 215 218 - dw_pcie_disable_atu(pci, atu_index, DW_PCIE_REGION_INBOUND); 216 + dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, atu_index); 219 217 clear_bit(atu_index, ep->ib_window_map); 220 218 ep->epf_bar[bar] = NULL; 221 219 } ··· 223 221 static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 224 222 struct pci_epf_bar *epf_bar) 225 223 { 226 - int ret; 227 224 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 228 225 struct dw_pcie *pci = 
to_dw_pcie_from_ep(ep); 229 226 enum pci_barno bar = epf_bar->barno; 230 227 size_t size = epf_bar->size; 231 228 int flags = epf_bar->flags; 232 - enum dw_pcie_as_type as_type; 233 - u32 reg; 234 229 unsigned int func_offset = 0; 230 + int ret, type; 231 + u32 reg; 235 232 236 233 func_offset = dw_pcie_ep_func_select(ep, func_no); 237 234 238 235 reg = PCI_BASE_ADDRESS_0 + (4 * bar) + func_offset; 239 236 240 237 if (!(flags & PCI_BASE_ADDRESS_SPACE)) 241 - as_type = DW_PCIE_AS_MEM; 238 + type = PCIE_ATU_TYPE_MEM; 242 239 else 243 - as_type = DW_PCIE_AS_IO; 240 + type = PCIE_ATU_TYPE_IO; 244 241 245 - ret = dw_pcie_ep_inbound_atu(ep, func_no, bar, 246 - epf_bar->phys_addr, as_type); 242 + ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar); 247 243 if (ret) 248 244 return ret; 249 245 ··· 289 289 if (ret < 0) 290 290 return; 291 291 292 - dw_pcie_disable_atu(pci, atu_index, DW_PCIE_REGION_OUTBOUND); 292 + dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_OB, atu_index); 293 293 clear_bit(atu_index, ep->ob_window_map); 294 294 } 295 295 ··· 435 435 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 436 436 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 437 437 438 - if (pci->ops && pci->ops->stop_link) 439 - pci->ops->stop_link(pci); 438 + dw_pcie_stop_link(pci); 440 439 } 441 440 442 441 static int dw_pcie_ep_start(struct pci_epc *epc) ··· 443 444 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 444 445 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 445 446 446 - if (!pci->ops || !pci->ops->start_link) 447 - return -EINVAL; 448 - 449 - return pci->ops->start_link(pci); 447 + return dw_pcie_start_link(pci); 450 448 } 451 449 452 450 static const struct pci_epc_features* ··· 695 699 696 700 if (!pci->dbi_base2) { 697 701 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi2"); 698 - if (!res) 702 + if (!res) { 699 703 pci->dbi_base2 = pci->dbi_base + SZ_4K; 700 - else { 704 + } else { 701 705 pci->dbi_base2 = devm_pci_remap_cfg_resource(dev, res); 702 
706 if (IS_ERR(pci->dbi_base2)) 703 707 return PTR_ERR(pci->dbi_base2); 704 708 } 705 709 } 706 - 707 - dw_pcie_iatu_detect(pci); 708 710 709 711 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space"); 710 712 if (!res) ··· 711 717 ep->phys_base = res->start; 712 718 ep->addr_size = resource_size(res); 713 719 714 - ep->ib_window_map = devm_kcalloc(dev, 715 - BITS_TO_LONGS(pci->num_ib_windows), 716 - sizeof(long), 717 - GFP_KERNEL); 720 + dw_pcie_version_detect(pci); 721 + 722 + dw_pcie_iatu_detect(pci); 723 + 724 + ep->ib_window_map = devm_bitmap_zalloc(dev, pci->num_ib_windows, 725 + GFP_KERNEL); 718 726 if (!ep->ib_window_map) 719 727 return -ENOMEM; 720 728 721 - ep->ob_window_map = devm_kcalloc(dev, 722 - BITS_TO_LONGS(pci->num_ob_windows), 723 - sizeof(long), 724 - GFP_KERNEL); 729 + ep->ob_window_map = devm_bitmap_zalloc(dev, pci->num_ob_windows, 730 + GFP_KERNEL); 725 731 if (!ep->ob_window_map) 726 732 return -ENOMEM; 727 733 ··· 774 780 ep->msi_mem = pci_epc_mem_alloc_addr(epc, &ep->msi_mem_phys, 775 781 epc->mem->window.page_size); 776 782 if (!ep->msi_mem) { 783 + ret = -ENOMEM; 777 784 dev_err(dev, "Failed to reserve memory for MSI/MSI-X\n"); 778 - return -ENOMEM; 785 + goto err_exit_epc_mem; 779 786 } 780 787 781 788 if (ep->ops->get_features) { ··· 785 790 return 0; 786 791 } 787 792 788 - return dw_pcie_ep_init_complete(ep); 793 + ret = dw_pcie_ep_init_complete(ep); 794 + if (ret) 795 + goto err_free_epc_mem; 796 + 797 + return 0; 798 + 799 + err_free_epc_mem: 800 + pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem, 801 + epc->mem->window.page_size); 802 + 803 + err_exit_epc_mem: 804 + pci_epc_mem_exit(epc); 805 + 806 + return ret; 789 807 } 790 808 EXPORT_SYMBOL_GPL(dw_pcie_ep_init);
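Among the endpoint changes above is the conversion of the window maps from `devm_kcalloc(BITS_TO_LONGS(n), sizeof(long), ...)` to `devm_bitmap_zalloc(n, ...)`. A userspace analogue showing that both forms allocate the same zero-filled storage and the helper merely hides the bits-to-words arithmetic — the `FAKE_*` names are invented for this sketch:

```c
#include <assert.h>
#include <limits.h>
#include <stdlib.h>

/* Userspace stand-ins for the kernel's BITS_PER_LONG / BITS_TO_LONGS. */
#define FAKE_BITS_PER_LONG	(CHAR_BIT * sizeof(unsigned long))
#define FAKE_BITS_TO_LONGS(n)	\
	(((n) + FAKE_BITS_PER_LONG - 1) / FAKE_BITS_PER_LONG)

/* What devm_bitmap_zalloc(nbits) amounts to, minus devres tracking:
 * a zeroed array of longs big enough for nbits bits. */
static unsigned long *fake_bitmap_zalloc(unsigned int nbits)
{
	return calloc(FAKE_BITS_TO_LONGS(nbits), sizeof(unsigned long));
}

static size_t fake_words_for(unsigned int nbits)
{
	return FAKE_BITS_TO_LONGS(nbits);
}

static int fake_demo_zeroed(void)
{
	unsigned long *map = fake_bitmap_zalloc(128);
	int ok = map && map[0] == 0 && map[FAKE_BITS_TO_LONGS(128) - 1] == 0;

	free(map);
	return ok;
}
```

The hunk also unwinds `dw_pcie_ep_init()` failures through `err_free_epc_mem`/`err_exit_epc_mem` labels in reverse acquisition order, instead of returning early and leaking the EPC address space.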
+276 -128
drivers/pci/controller/dwc/pcie-designware-host.c
··· 53 53 }; 54 54 55 55 /* MSI int handler */ 56 - irqreturn_t dw_handle_msi_irq(struct pcie_port *pp) 56 + irqreturn_t dw_handle_msi_irq(struct dw_pcie_rp *pp) 57 57 { 58 58 int i, pos; 59 59 unsigned long val; ··· 88 88 static void dw_chained_msi_isr(struct irq_desc *desc) 89 89 { 90 90 struct irq_chip *chip = irq_desc_get_chip(desc); 91 - struct pcie_port *pp; 91 + struct dw_pcie_rp *pp; 92 92 93 93 chained_irq_enter(chip, desc); 94 94 ··· 100 100 101 101 static void dw_pci_setup_msi_msg(struct irq_data *d, struct msi_msg *msg) 102 102 { 103 - struct pcie_port *pp = irq_data_get_irq_chip_data(d); 103 + struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(d); 104 104 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 105 105 u64 msi_target; 106 106 ··· 123 123 124 124 static void dw_pci_bottom_mask(struct irq_data *d) 125 125 { 126 - struct pcie_port *pp = irq_data_get_irq_chip_data(d); 126 + struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(d); 127 127 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 128 128 unsigned int res, bit, ctrl; 129 129 unsigned long flags; ··· 142 142 143 143 static void dw_pci_bottom_unmask(struct irq_data *d) 144 144 { 145 - struct pcie_port *pp = irq_data_get_irq_chip_data(d); 145 + struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(d); 146 146 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 147 147 unsigned int res, bit, ctrl; 148 148 unsigned long flags; ··· 161 161 162 162 static void dw_pci_bottom_ack(struct irq_data *d) 163 163 { 164 - struct pcie_port *pp = irq_data_get_irq_chip_data(d); 164 + struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(d); 165 165 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 166 166 unsigned int res, bit, ctrl; 167 167 ··· 185 185 unsigned int virq, unsigned int nr_irqs, 186 186 void *args) 187 187 { 188 - struct pcie_port *pp = domain->host_data; 188 + struct dw_pcie_rp *pp = domain->host_data; 189 189 unsigned long flags; 190 190 u32 i; 191 191 int bit; ··· 213 213 unsigned int virq, unsigned int nr_irqs) 214 
214 { 215 215 struct irq_data *d = irq_domain_get_irq_data(domain, virq); 216 - struct pcie_port *pp = domain->host_data; 216 + struct dw_pcie_rp *pp = domain->host_data; 217 217 unsigned long flags; 218 218 219 219 raw_spin_lock_irqsave(&pp->lock, flags); ··· 229 229 .free = dw_pcie_irq_domain_free, 230 230 }; 231 231 232 - int dw_pcie_allocate_domains(struct pcie_port *pp) 232 + int dw_pcie_allocate_domains(struct dw_pcie_rp *pp) 233 233 { 234 234 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 235 235 struct fwnode_handle *fwnode = of_node_to_fwnode(pci->dev->of_node); ··· 255 255 return 0; 256 256 } 257 257 258 - static void dw_pcie_free_msi(struct pcie_port *pp) 258 + static void dw_pcie_free_msi(struct dw_pcie_rp *pp) 259 259 { 260 - if (pp->msi_irq) 261 - irq_set_chained_handler_and_data(pp->msi_irq, NULL, NULL); 260 + u32 ctrl; 261 + 262 + for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++) { 263 + if (pp->msi_irq[ctrl] > 0) 264 + irq_set_chained_handler_and_data(pp->msi_irq[ctrl], 265 + NULL, NULL); 266 + } 262 267 263 268 irq_domain_remove(pp->msi_domain); 264 269 irq_domain_remove(pp->irq_domain); ··· 272 267 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 273 268 struct device *dev = pci->dev; 274 269 275 - dma_unmap_single_attrs(dev, pp->msi_data, sizeof(pp->msi_msg), 276 - DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC); 270 + dma_unmap_page(dev, pp->msi_data, PAGE_SIZE, DMA_FROM_DEVICE); 271 + if (pp->msi_page) 272 + __free_page(pp->msi_page); 277 273 } 278 274 } 279 275 280 - static void dw_pcie_msi_init(struct pcie_port *pp) 276 + static void dw_pcie_msi_init(struct dw_pcie_rp *pp) 281 277 { 282 278 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 283 279 u64 msi_target = (u64)pp->msi_data; ··· 291 285 dw_pcie_writel_dbi(pci, PCIE_MSI_ADDR_HI, upper_32_bits(msi_target)); 292 286 } 293 287 294 - int dw_pcie_host_init(struct pcie_port *pp) 288 + static int dw_pcie_parse_split_msi_irq(struct dw_pcie_rp *pp) 289 + { 290 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 291 + 
struct device *dev = pci->dev; 292 + struct platform_device *pdev = to_platform_device(dev); 293 + u32 ctrl, max_vectors; 294 + int irq; 295 + 296 + /* Parse any "msiX" IRQs described in the devicetree */ 297 + for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++) { 298 + char msi_name[] = "msiX"; 299 + 300 + msi_name[3] = '0' + ctrl; 301 + irq = platform_get_irq_byname_optional(pdev, msi_name); 302 + if (irq == -ENXIO) 303 + break; 304 + if (irq < 0) 305 + return dev_err_probe(dev, irq, 306 + "Failed to parse MSI IRQ '%s'\n", 307 + msi_name); 308 + 309 + pp->msi_irq[ctrl] = irq; 310 + } 311 + 312 + /* If no "msiX" IRQs, caller should fallback to "msi" IRQ */ 313 + if (ctrl == 0) 314 + return -ENXIO; 315 + 316 + max_vectors = ctrl * MAX_MSI_IRQS_PER_CTRL; 317 + if (pp->num_vectors > max_vectors) { 318 + dev_warn(dev, "Exceeding number of MSI vectors, limiting to %u\n", 319 + max_vectors); 320 + pp->num_vectors = max_vectors; 321 + } 322 + if (!pp->num_vectors) 323 + pp->num_vectors = max_vectors; 324 + 325 + return 0; 326 + } 327 + 328 + static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp) 329 + { 330 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 331 + struct device *dev = pci->dev; 332 + struct platform_device *pdev = to_platform_device(dev); 333 + int ret; 334 + u32 ctrl, num_ctrls; 335 + 336 + for (ctrl = 0; ctrl < MAX_MSI_CTRLS; ctrl++) 337 + pp->irq_mask[ctrl] = ~0; 338 + 339 + if (!pp->msi_irq[0]) { 340 + ret = dw_pcie_parse_split_msi_irq(pp); 341 + if (ret < 0 && ret != -ENXIO) 342 + return ret; 343 + } 344 + 345 + if (!pp->num_vectors) 346 + pp->num_vectors = MSI_DEF_NUM_VECTORS; 347 + num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; 348 + 349 + if (!pp->msi_irq[0]) { 350 + pp->msi_irq[0] = platform_get_irq_byname_optional(pdev, "msi"); 351 + if (pp->msi_irq[0] < 0) { 352 + pp->msi_irq[0] = platform_get_irq(pdev, 0); 353 + if (pp->msi_irq[0] < 0) 354 + return pp->msi_irq[0]; 355 + } 356 + } 357 + 358 + dev_dbg(dev, "Using %d MSI vectors\n", 
pp->num_vectors); 359 + 360 + pp->msi_irq_chip = &dw_pci_msi_bottom_irq_chip; 361 + 362 + ret = dw_pcie_allocate_domains(pp); 363 + if (ret) 364 + return ret; 365 + 366 + for (ctrl = 0; ctrl < num_ctrls; ctrl++) { 367 + if (pp->msi_irq[ctrl] > 0) 368 + irq_set_chained_handler_and_data(pp->msi_irq[ctrl], 369 + dw_chained_msi_isr, pp); 370 + } 371 + 372 + ret = dma_set_mask(dev, DMA_BIT_MASK(32)); 373 + if (ret) 374 + dev_warn(dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n"); 375 + 376 + pp->msi_page = alloc_page(GFP_DMA32); 377 + pp->msi_data = dma_map_page(dev, pp->msi_page, 0, 378 + PAGE_SIZE, DMA_FROM_DEVICE); 379 + ret = dma_mapping_error(dev, pp->msi_data); 380 + if (ret) { 381 + dev_err(pci->dev, "Failed to map MSI data\n"); 382 + __free_page(pp->msi_page); 383 + pp->msi_page = NULL; 384 + pp->msi_data = 0; 385 + dw_pcie_free_msi(pp); 386 + 387 + return ret; 388 + } 389 + 390 + return 0; 391 + } 392 + 393 + int dw_pcie_host_init(struct dw_pcie_rp *pp) 295 394 { 296 395 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 297 396 struct device *dev = pci->dev; ··· 404 293 struct platform_device *pdev = to_platform_device(dev); 405 294 struct resource_entry *win; 406 295 struct pci_host_bridge *bridge; 407 - struct resource *cfg_res; 296 + struct resource *res; 408 297 int ret; 409 298 410 - raw_spin_lock_init(&pci->pp.lock); 299 + raw_spin_lock_init(&pp->lock); 411 300 412 - cfg_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config"); 413 - if (cfg_res) { 414 - pp->cfg0_size = resource_size(cfg_res); 415 - pp->cfg0_base = cfg_res->start; 301 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config"); 302 + if (res) { 303 + pp->cfg0_size = resource_size(res); 304 + pp->cfg0_base = res->start; 416 305 417 - pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, cfg_res); 306 + pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res); 418 307 if (IS_ERR(pp->va_cfg0_base)) 419 308 return 
PTR_ERR(pp->va_cfg0_base); 420 309 } else { ··· 423 312 } 424 313 425 314 if (!pci->dbi_base) { 426 - struct resource *dbi_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); 427 - pci->dbi_base = devm_pci_remap_cfg_resource(dev, dbi_res); 315 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); 316 + pci->dbi_base = devm_pci_remap_cfg_resource(dev, res); 428 317 if (IS_ERR(pci->dbi_base)) 429 318 return PTR_ERR(pci->dbi_base); 430 319 } ··· 461 350 of_property_read_bool(np, "msi-parent") || 462 351 of_property_read_bool(np, "msi-map")); 463 352 464 - if (!pp->num_vectors) { 353 + /* 354 + * For the has_msi_ctrl case the default assignment is handled 355 + * in the dw_pcie_msi_host_init(). 356 + */ 357 + if (!pp->has_msi_ctrl && !pp->num_vectors) { 465 358 pp->num_vectors = MSI_DEF_NUM_VECTORS; 466 359 } else if (pp->num_vectors > MAX_MSI_IRQS) { 467 360 dev_err(dev, "Invalid number of vectors\n"); 468 - return -EINVAL; 361 + ret = -EINVAL; 362 + goto err_deinit_host; 469 363 } 470 364 471 365 if (pp->ops->msi_host_init) { 472 366 ret = pp->ops->msi_host_init(pp); 473 367 if (ret < 0) 474 - return ret; 368 + goto err_deinit_host; 475 369 } else if (pp->has_msi_ctrl) { 476 - u32 ctrl, num_ctrls; 477 - 478 - num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; 479 - for (ctrl = 0; ctrl < num_ctrls; ctrl++) 480 - pp->irq_mask[ctrl] = ~0; 481 - 482 - if (!pp->msi_irq) { 483 - pp->msi_irq = platform_get_irq_byname_optional(pdev, "msi"); 484 - if (pp->msi_irq < 0) { 485 - pp->msi_irq = platform_get_irq(pdev, 0); 486 - if (pp->msi_irq < 0) 487 - return pp->msi_irq; 488 - } 489 - } 490 - 491 - pp->msi_irq_chip = &dw_pci_msi_bottom_irq_chip; 492 - 493 - ret = dw_pcie_allocate_domains(pp); 494 - if (ret) 495 - return ret; 496 - 497 - if (pp->msi_irq > 0) 498 - irq_set_chained_handler_and_data(pp->msi_irq, 499 - dw_chained_msi_isr, 500 - pp); 501 - 502 - ret = dma_set_mask(pci->dev, DMA_BIT_MASK(32)); 503 - if (ret) 504 - dev_warn(pci->dev, "Failed 
to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n"); 505 - 506 - pp->msi_data = dma_map_single_attrs(pci->dev, &pp->msi_msg, 507 - sizeof(pp->msi_msg), 508 - DMA_FROM_DEVICE, 509 - DMA_ATTR_SKIP_CPU_SYNC); 510 - ret = dma_mapping_error(pci->dev, pp->msi_data); 511 - if (ret) { 512 - dev_err(pci->dev, "Failed to map MSI data\n"); 513 - pp->msi_data = 0; 514 - goto err_free_msi; 515 - } 370 + ret = dw_pcie_msi_host_init(pp); 371 + if (ret < 0) 372 + goto err_deinit_host; 516 373 } 517 374 } 518 375 376 + dw_pcie_version_detect(pci); 377 + 519 378 dw_pcie_iatu_detect(pci); 520 379 521 - dw_pcie_setup_rc(pp); 380 + ret = dw_pcie_setup_rc(pp); 381 + if (ret) 382 + goto err_free_msi; 522 383 523 - if (!dw_pcie_link_up(pci) && pci->ops && pci->ops->start_link) { 524 - ret = pci->ops->start_link(pci); 384 + if (!dw_pcie_link_up(pci)) { 385 + ret = dw_pcie_start_link(pci); 525 386 if (ret) 526 387 goto err_free_msi; 527 388 } ··· 504 421 bridge->sysdata = pp; 505 422 506 423 ret = pci_host_probe(bridge); 507 - if (!ret) 508 - return 0; 424 + if (ret) 425 + goto err_stop_link; 426 + 427 + return 0; 428 + 429 + err_stop_link: 430 + dw_pcie_stop_link(pci); 509 431 510 432 err_free_msi: 511 433 if (pp->has_msi_ctrl) 512 434 dw_pcie_free_msi(pp); 435 + 436 + err_deinit_host: 437 + if (pp->ops->host_deinit) 438 + pp->ops->host_deinit(pp); 439 + 513 440 return ret; 514 441 } 515 442 EXPORT_SYMBOL_GPL(dw_pcie_host_init); 516 443 517 - void dw_pcie_host_deinit(struct pcie_port *pp) 444 + void dw_pcie_host_deinit(struct dw_pcie_rp *pp) 518 445 { 446 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 447 + 519 448 pci_stop_root_bus(pp->bridge->bus); 520 449 pci_remove_root_bus(pp->bridge->bus); 450 + 451 + dw_pcie_stop_link(pci); 452 + 521 453 if (pp->has_msi_ctrl) 522 454 dw_pcie_free_msi(pp); 455 + 456 + if (pp->ops->host_deinit) 457 + pp->ops->host_deinit(pp); 523 458 } 524 459 EXPORT_SYMBOL_GPL(dw_pcie_host_deinit); 525 460 526 461 static void 
__iomem *dw_pcie_other_conf_map_bus(struct pci_bus *bus, 527 462 unsigned int devfn, int where) 528 463 { 529 - int type; 530 - u32 busdev; 531 - struct pcie_port *pp = bus->sysdata; 464 + struct dw_pcie_rp *pp = bus->sysdata; 532 465 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 466 + int type, ret; 467 + u32 busdev; 533 468 534 469 /* 535 470 * Checking whether the link is up here is a last line of defense ··· 568 467 else 569 468 type = PCIE_ATU_TYPE_CFG1; 570 469 571 - 572 - dw_pcie_prog_outbound_atu(pci, 0, type, pp->cfg0_base, busdev, pp->cfg0_size); 470 + ret = dw_pcie_prog_outbound_atu(pci, 0, type, pp->cfg0_base, busdev, 471 + pp->cfg0_size); 472 + if (ret) 473 + return NULL; 573 474 574 475 return pp->va_cfg0_base + where; 575 476 } ··· 579 476 static int dw_pcie_rd_other_conf(struct pci_bus *bus, unsigned int devfn, 580 477 int where, int size, u32 *val) 581 478 { 582 - int ret; 583 - struct pcie_port *pp = bus->sysdata; 479 + struct dw_pcie_rp *pp = bus->sysdata; 584 480 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 481 + int ret; 585 482 586 483 ret = pci_generic_config_read(bus, devfn, where, size, val); 484 + if (ret != PCIBIOS_SUCCESSFUL) 485 + return ret; 587 486 588 - if (!ret && pci->io_cfg_atu_shared) 589 - dw_pcie_prog_outbound_atu(pci, 0, PCIE_ATU_TYPE_IO, pp->io_base, 590 - pp->io_bus_addr, pp->io_size); 487 + if (pp->cfg0_io_shared) { 488 + ret = dw_pcie_prog_outbound_atu(pci, 0, PCIE_ATU_TYPE_IO, 489 + pp->io_base, pp->io_bus_addr, 490 + pp->io_size); 491 + if (ret) 492 + return PCIBIOS_SET_FAILED; 493 + } 591 494 592 - return ret; 495 + return PCIBIOS_SUCCESSFUL; 593 496 } 594 497 595 498 static int dw_pcie_wr_other_conf(struct pci_bus *bus, unsigned int devfn, 596 499 int where, int size, u32 val) 597 500 { 598 - int ret; 599 - struct pcie_port *pp = bus->sysdata; 501 + struct dw_pcie_rp *pp = bus->sysdata; 600 502 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 503 + int ret; 601 504 602 505 ret = pci_generic_config_write(bus, devfn, 
where, size, val); 506 + if (ret != PCIBIOS_SUCCESSFUL) 507 + return ret; 603 508 604 - if (!ret && pci->io_cfg_atu_shared) 605 - dw_pcie_prog_outbound_atu(pci, 0, PCIE_ATU_TYPE_IO, pp->io_base, 606 - pp->io_bus_addr, pp->io_size); 509 + if (pp->cfg0_io_shared) { 510 + ret = dw_pcie_prog_outbound_atu(pci, 0, PCIE_ATU_TYPE_IO, 511 + pp->io_base, pp->io_bus_addr, 512 + pp->io_size); 513 + if (ret) 514 + return PCIBIOS_SET_FAILED; 515 + } 607 516 608 - return ret; 517 + return PCIBIOS_SUCCESSFUL; 609 518 } 610 519 611 520 static struct pci_ops dw_child_pcie_ops = { ··· 628 513 629 514 void __iomem *dw_pcie_own_conf_map_bus(struct pci_bus *bus, unsigned int devfn, int where) 630 515 { 631 - struct pcie_port *pp = bus->sysdata; 516 + struct dw_pcie_rp *pp = bus->sysdata; 632 517 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 633 518 634 519 if (PCI_SLOT(devfn) > 0) ··· 644 529 .write = pci_generic_config_write, 645 530 }; 646 531 647 - void dw_pcie_setup_rc(struct pcie_port *pp) 532 + static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp) 648 533 { 649 - int i; 650 - u32 val, ctrl, num_ctrls; 651 534 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 535 + struct resource_entry *entry; 536 + int i, ret; 537 + 538 + /* Note the very first outbound ATU is used for CFG IOs */ 539 + if (!pci->num_ob_windows) { 540 + dev_err(pci->dev, "No outbound iATU found\n"); 541 + return -EINVAL; 542 + } 543 + 544 + /* 545 + * Ensure all outbound windows are disabled before proceeding with 546 + * the MEM/IO ranges setups. 
547 + */ 548 + for (i = 0; i < pci->num_ob_windows; i++) 549 + dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_OB, i); 550 + 551 + i = 0; 552 + resource_list_for_each_entry(entry, &pp->bridge->windows) { 553 + if (resource_type(entry->res) != IORESOURCE_MEM) 554 + continue; 555 + 556 + if (pci->num_ob_windows <= ++i) 557 + break; 558 + 559 + ret = dw_pcie_prog_outbound_atu(pci, i, PCIE_ATU_TYPE_MEM, 560 + entry->res->start, 561 + entry->res->start - entry->offset, 562 + resource_size(entry->res)); 563 + if (ret) { 564 + dev_err(pci->dev, "Failed to set MEM range %pr\n", 565 + entry->res); 566 + return ret; 567 + } 568 + } 569 + 570 + if (pp->io_size) { 571 + if (pci->num_ob_windows > ++i) { 572 + ret = dw_pcie_prog_outbound_atu(pci, i, PCIE_ATU_TYPE_IO, 573 + pp->io_base, 574 + pp->io_bus_addr, 575 + pp->io_size); 576 + if (ret) { 577 + dev_err(pci->dev, "Failed to set IO range %pr\n", 578 + entry->res); 579 + return ret; 580 + } 581 + } else { 582 + pp->cfg0_io_shared = true; 583 + } 584 + } 585 + 586 + if (pci->num_ob_windows <= i) 587 + dev_warn(pci->dev, "Resources exceed number of ATU entries (%d)\n", 588 + pci->num_ob_windows); 589 + 590 + return 0; 591 + } 592 + 593 + int dw_pcie_setup_rc(struct dw_pcie_rp *pp) 594 + { 595 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 596 + u32 val, ctrl, num_ctrls; 597 + int ret; 652 598 653 599 /* 654 600 * Enable DBI read-only registers for writing/updating configuration. ··· 758 582 PCI_COMMAND_MASTER | PCI_COMMAND_SERR; 759 583 dw_pcie_writel_dbi(pci, PCI_COMMAND, val); 760 584 761 - /* Ensure all outbound windows are disabled so there are multiple matches */ 762 - for (i = 0; i < pci->num_ob_windows; i++) 763 - dw_pcie_disable_atu(pci, i, DW_PCIE_REGION_OUTBOUND); 764 - 765 585 /* 766 586 * If the platform provides its own child bus config accesses, it means 767 587 * the platform uses its own address translation component rather than 768 588 * ATU, so we should not program the ATU here. 
769 589 */ 770 590 if (pp->bridge->child_ops == &dw_child_pcie_ops) { 771 - int atu_idx = 0; 772 - struct resource_entry *entry; 773 - 774 - /* Get last memory resource entry */ 775 - resource_list_for_each_entry(entry, &pp->bridge->windows) { 776 - if (resource_type(entry->res) != IORESOURCE_MEM) 777 - continue; 778 - 779 - if (pci->num_ob_windows <= ++atu_idx) 780 - break; 781 - 782 - dw_pcie_prog_outbound_atu(pci, atu_idx, 783 - PCIE_ATU_TYPE_MEM, entry->res->start, 784 - entry->res->start - entry->offset, 785 - resource_size(entry->res)); 786 - } 787 - 788 - if (pp->io_size) { 789 - if (pci->num_ob_windows > ++atu_idx) 790 - dw_pcie_prog_outbound_atu(pci, atu_idx, 791 - PCIE_ATU_TYPE_IO, pp->io_base, 792 - pp->io_bus_addr, pp->io_size); 793 - else 794 - pci->io_cfg_atu_shared = true; 795 - } 796 - 797 - if (pci->num_ob_windows <= atu_idx) 798 - dev_warn(pci->dev, "Resources exceed number of ATU entries (%d)", 799 - pci->num_ob_windows); 591 + ret = dw_pcie_iatu_setup(pp); 592 + if (ret) 593 + return ret; 800 594 } 801 595 802 596 dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0); ··· 779 633 dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val); 780 634 781 635 dw_pcie_dbi_ro_wr_dis(pci); 636 + 637 + return 0; 782 638 } 783 639 EXPORT_SYMBOL_GPL(dw_pcie_setup_rc);
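The `dw_pcie_iatu_setup()` hunk above reserves outbound iATU window 0 for config cycles, hands MEM ranges dedicated windows starting at index 1, and, when the controller runs out of windows, makes the IO range share window 0 with CFG (`cfg0_io_shared`). A minimal sketch of that assignment policy — helper and struct names here are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of the outbound-window assignment in dw_pcie_iatu_setup():
 * window 0 is the CFG window, MEM ranges take windows 1..N-1, and
 * the IO range falls back to sharing window 0 with CFG when no free
 * window remains.  All names are hypothetical stand-ins.
 */
struct atu_plan {
	int mem_windows;    /* MEM ranges that got a dedicated window */
	bool io_shares_cfg; /* IO must reuse the CFG window (window 0) */
};

static struct atu_plan plan_ob_windows(int num_ob_windows,
				       int num_mem_ranges, bool has_io)
{
	struct atu_plan plan = { 0, false };
	int i = 0;	/* index 0 is reserved for CFG accesses */

	/* Mirror the kernel's "if (num_ob_windows <= ++i) break" loop */
	while (plan.mem_windows < num_mem_ranges && ++i < num_ob_windows)
		plan.mem_windows++;

	/* IO gets the next free window, else it shares the CFG one */
	if (has_io)
		plan.io_shares_cfg = !(++i < num_ob_windows);

	return plan;
}
```

With four windows, two MEM ranges and an IO range all fit; with only two windows, one MEM range loses its window and IO has to share with CFG, which is exactly the case the new `dev_warn()` reports.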
+6 -19
drivers/pci/controller/dwc/pcie-designware-plat.c
··· 17 17 #include <linux/platform_device.h> 18 18 #include <linux/resource.h> 19 19 #include <linux/types.h> 20 - #include <linux/regmap.h> 21 20 22 21 #include "pcie-designware.h" 23 22 24 23 struct dw_plat_pcie { 25 24 struct dw_pcie *pci; 26 - struct regmap *regmap; 27 25 enum dw_pcie_device_mode mode; 28 26 }; 29 27 ··· 29 31 enum dw_pcie_device_mode mode; 30 32 }; 31 33 32 - static const struct of_device_id dw_plat_pcie_of_match[]; 33 - 34 34 static const struct dw_pcie_host_ops dw_plat_pcie_host_ops = { 35 - }; 36 - 37 - static int dw_plat_pcie_establish_link(struct dw_pcie *pci) 38 - { 39 - return 0; 40 - } 41 - 42 - static const struct dw_pcie_ops dw_pcie_ops = { 43 - .start_link = dw_plat_pcie_establish_link, 44 35 }; 45 36 46 37 static void dw_plat_pcie_ep_init(struct dw_pcie_ep *ep) ··· 83 96 struct platform_device *pdev) 84 97 { 85 98 struct dw_pcie *pci = dw_plat_pcie->pci; 86 - struct pcie_port *pp = &pci->pp; 99 + struct dw_pcie_rp *pp = &pci->pp; 87 100 struct device *dev = &pdev->dev; 88 101 int ret; 89 102 ··· 127 140 return -ENOMEM; 128 141 129 142 pci->dev = dev; 130 - pci->ops = &dw_pcie_ops; 131 143 132 144 dw_plat_pcie->pci = pci; 133 145 dw_plat_pcie->mode = mode; ··· 139 153 return -ENODEV; 140 154 141 155 ret = dw_plat_add_pcie_port(dw_plat_pcie, pdev); 142 - if (ret < 0) 143 - return ret; 144 156 break; 145 157 case DW_PCIE_EP_TYPE: 146 158 if (!IS_ENABLED(CONFIG_PCIE_DW_PLAT_EP)) 147 159 return -ENODEV; 148 160 149 161 pci->ep.ops = &pcie_ep_ops; 150 - return dw_pcie_ep_init(&pci->ep); 162 + ret = dw_pcie_ep_init(&pci->ep); 163 + break; 151 164 default: 152 165 dev_err(dev, "INVALID device type %d\n", dw_plat_pcie->mode); 166 + ret = -EINVAL; 167 + break; 153 168 } 154 169 155 - return 0; 170 + return ret; 156 171 } 157 172 158 173 static const struct dw_plat_pcie_of_data dw_plat_pcie_rc_of_data = {
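The `pcie-designware-plat.c` change above replaces the mix of early returns and a trailing `return 0` in probe with a single `ret` funneled through the `switch`, so the EP path's status is no longer lost. A small sketch of the pattern, with hypothetical mode values and init stubs:

```c
#include <assert.h>

/*
 * Sketch of the probe() refactor: each mode sets `ret` and breaks,
 * giving one exit point instead of scattered early returns.
 * MODE_* values and the init stubs are illustrative only.
 */
enum mode { MODE_RC, MODE_EP, MODE_BAD };

static int rc_init(void) { return 0; } /* stand-in for dw_plat_add_pcie_port() */
static int ep_init(void) { return 0; } /* stand-in for dw_pcie_ep_init() */

static int probe(enum mode mode)
{
	int ret;

	switch (mode) {
	case MODE_RC:
		ret = rc_init();
		break;
	case MODE_EP:
		ret = ep_init();
		break;
	default:
		ret = -22;	/* -EINVAL */
		break;
	}

	return ret;
}
```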
+215 -291
drivers/pci/controller/dwc/pcie-designware.c
··· 8 8 * Author: Jingoo Han <jg1.han@samsung.com> 9 9 */ 10 10 11 + #include <linux/align.h> 12 + #include <linux/bitops.h> 11 13 #include <linux/delay.h> 12 14 #include <linux/of.h> 13 15 #include <linux/of_platform.h> 16 + #include <linux/sizes.h> 14 17 #include <linux/types.h> 15 18 16 19 #include "../../pci.h" 17 20 #include "pcie-designware.h" 21 + 22 + void dw_pcie_version_detect(struct dw_pcie *pci) 23 + { 24 + u32 ver; 25 + 26 + /* The content of the CSR is zero on DWC PCIe older than v4.70a */ 27 + ver = dw_pcie_readl_dbi(pci, PCIE_VERSION_NUMBER); 28 + if (!ver) 29 + return; 30 + 31 + if (pci->version && pci->version != ver) 32 + dev_warn(pci->dev, "Versions don't match (%08x != %08x)\n", 33 + pci->version, ver); 34 + else 35 + pci->version = ver; 36 + 37 + ver = dw_pcie_readl_dbi(pci, PCIE_VERSION_TYPE); 38 + 39 + if (pci->type && pci->type != ver) 40 + dev_warn(pci->dev, "Types don't match (%08x != %08x)\n", 41 + pci->type, ver); 42 + else 43 + pci->type = ver; 44 + } 18 45 19 46 /* 20 47 * These interfaces resemble the pci_find_*capability() interfaces, but these ··· 208 181 dev_err(pci->dev, "write DBI address failed\n"); 209 182 } 210 183 211 - static u32 dw_pcie_readl_atu(struct dw_pcie *pci, u32 reg) 184 + static inline void __iomem *dw_pcie_select_atu(struct dw_pcie *pci, u32 dir, 185 + u32 index) 212 186 { 187 + if (pci->iatu_unroll_enabled) 188 + return pci->atu_base + PCIE_ATU_UNROLL_BASE(dir, index); 189 + 190 + dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, dir | index); 191 + return pci->atu_base; 192 + } 193 + 194 + static u32 dw_pcie_readl_atu(struct dw_pcie *pci, u32 dir, u32 index, u32 reg) 195 + { 196 + void __iomem *base; 213 197 int ret; 214 198 u32 val; 215 199 216 - if (pci->ops && pci->ops->read_dbi) 217 - return pci->ops->read_dbi(pci, pci->atu_base, reg, 4); 200 + base = dw_pcie_select_atu(pci, dir, index); 218 201 219 - ret = dw_pcie_read(pci->atu_base + reg, 4, &val); 202 + if (pci->ops && pci->ops->read_dbi) 203 + return 
pci->ops->read_dbi(pci, base, reg, 4); 204 + 205 + ret = dw_pcie_read(base + reg, 4, &val); 220 206 if (ret) 221 207 dev_err(pci->dev, "Read ATU address failed\n"); 222 208 223 209 return val; 224 210 } 225 211 226 - static void dw_pcie_writel_atu(struct dw_pcie *pci, u32 reg, u32 val) 212 + static void dw_pcie_writel_atu(struct dw_pcie *pci, u32 dir, u32 index, 213 + u32 reg, u32 val) 227 214 { 215 + void __iomem *base; 228 216 int ret; 229 217 218 + base = dw_pcie_select_atu(pci, dir, index); 219 + 230 220 if (pci->ops && pci->ops->write_dbi) { 231 - pci->ops->write_dbi(pci, pci->atu_base, reg, 4, val); 221 + pci->ops->write_dbi(pci, base, reg, 4, val); 232 222 return; 233 223 } 234 224 235 - ret = dw_pcie_write(pci->atu_base + reg, 4, val); 225 + ret = dw_pcie_write(base + reg, 4, val); 236 226 if (ret) 237 227 dev_err(pci->dev, "Write ATU address failed\n"); 238 228 } 239 229 240 - static u32 dw_pcie_readl_ob_unroll(struct dw_pcie *pci, u32 index, u32 reg) 230 + static inline u32 dw_pcie_readl_atu_ob(struct dw_pcie *pci, u32 index, u32 reg) 241 231 { 242 - u32 offset = PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(index); 243 - 244 - return dw_pcie_readl_atu(pci, offset + reg); 232 + return dw_pcie_readl_atu(pci, PCIE_ATU_REGION_DIR_OB, index, reg); 245 233 } 246 234 247 - static void dw_pcie_writel_ob_unroll(struct dw_pcie *pci, u32 index, u32 reg, 248 - u32 val) 235 + static inline void dw_pcie_writel_atu_ob(struct dw_pcie *pci, u32 index, u32 reg, 236 + u32 val) 249 237 { 250 - u32 offset = PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(index); 251 - 252 - dw_pcie_writel_atu(pci, offset + reg, val); 238 + dw_pcie_writel_atu(pci, PCIE_ATU_REGION_DIR_OB, index, reg, val); 253 239 } 254 240 255 241 static inline u32 dw_pcie_enable_ecrc(u32 val) ··· 306 266 return val | PCIE_ATU_TD; 307 267 } 308 268 309 - static void dw_pcie_prog_outbound_atu_unroll(struct dw_pcie *pci, u8 func_no, 310 - int index, int type, 311 - u64 cpu_addr, u64 pci_addr, 312 - u64 size) 269 + static int 
__dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no, 270 + int index, int type, u64 cpu_addr, 271 + u64 pci_addr, u64 size) 313 272 { 314 273 u32 retries, val; 315 - u64 limit_addr = cpu_addr + size - 1; 316 - 317 - dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LOWER_BASE, 318 - lower_32_bits(cpu_addr)); 319 - dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_BASE, 320 - upper_32_bits(cpu_addr)); 321 - dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LOWER_LIMIT, 322 - lower_32_bits(limit_addr)); 323 - dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_LIMIT, 324 - upper_32_bits(limit_addr)); 325 - dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_LOWER_TARGET, 326 - lower_32_bits(pci_addr)); 327 - dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_UPPER_TARGET, 328 - upper_32_bits(pci_addr)); 329 - val = type | PCIE_ATU_FUNC_NUM(func_no); 330 - val = upper_32_bits(size - 1) ? 331 - val | PCIE_ATU_INCREASE_REGION_SIZE : val; 332 - if (pci->version == 0x490A) 333 - val = dw_pcie_enable_ecrc(val); 334 - dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL1, val); 335 - dw_pcie_writel_ob_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2, 336 - PCIE_ATU_ENABLE); 337 - 338 - /* 339 - * Make sure ATU enable takes effect before any subsequent config 340 - * and I/O accesses. 
341 - */ 342 - for (retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; retries++) { 343 - val = dw_pcie_readl_ob_unroll(pci, index, 344 - PCIE_ATU_UNR_REGION_CTRL2); 345 - if (val & PCIE_ATU_ENABLE) 346 - return; 347 - 348 - mdelay(LINK_WAIT_IATU); 349 - } 350 - dev_err(pci->dev, "Outbound iATU is not being enabled\n"); 351 - } 352 - 353 - static void __dw_pcie_prog_outbound_atu(struct dw_pcie *pci, u8 func_no, 354 - int index, int type, u64 cpu_addr, 355 - u64 pci_addr, u64 size) 356 - { 357 - u32 retries, val; 274 + u64 limit_addr; 358 275 359 276 if (pci->ops && pci->ops->cpu_addr_fixup) 360 277 cpu_addr = pci->ops->cpu_addr_fixup(pci, cpu_addr); 361 278 362 - if (pci->iatu_unroll_enabled) { 363 - dw_pcie_prog_outbound_atu_unroll(pci, func_no, index, type, 364 - cpu_addr, pci_addr, size); 365 - return; 366 - } 279 + limit_addr = cpu_addr + size - 1; 367 280 368 - dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, 369 - PCIE_ATU_REGION_OUTBOUND | index); 370 - dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_BASE, 371 - lower_32_bits(cpu_addr)); 372 - dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_BASE, 373 - upper_32_bits(cpu_addr)); 374 - dw_pcie_writel_dbi(pci, PCIE_ATU_LIMIT, 375 - lower_32_bits(cpu_addr + size - 1)); 376 - if (pci->version >= 0x460A) 377 - dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_LIMIT, 378 - upper_32_bits(cpu_addr + size - 1)); 379 - dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_TARGET, 380 - lower_32_bits(pci_addr)); 381 - dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_TARGET, 382 - upper_32_bits(pci_addr)); 383 - val = type | PCIE_ATU_FUNC_NUM(func_no); 384 - val = ((upper_32_bits(size - 1)) && (pci->version >= 0x460A)) ? 385 - val | PCIE_ATU_INCREASE_REGION_SIZE : val; 386 - if (pci->version == 0x490A) 387 - val = dw_pcie_enable_ecrc(val); 388 - dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, val); 389 - dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, PCIE_ATU_ENABLE); 390 - 391 - /* 392 - * Make sure ATU enable takes effect before any subsequent config 393 - * and I/O accesses. 
394 - */ 395 - for (retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; retries++) { 396 - val = dw_pcie_readl_dbi(pci, PCIE_ATU_CR2); 397 - if (val & PCIE_ATU_ENABLE) 398 - return; 399 - 400 - mdelay(LINK_WAIT_IATU); 401 - } 402 - dev_err(pci->dev, "Outbound iATU is not being enabled\n"); 403 - } 404 - 405 - void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type, 406 - u64 cpu_addr, u64 pci_addr, u64 size) 407 - { 408 - __dw_pcie_prog_outbound_atu(pci, 0, index, type, 409 - cpu_addr, pci_addr, size); 410 - } 411 - 412 - void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index, 413 - int type, u64 cpu_addr, u64 pci_addr, 414 - u64 size) 415 - { 416 - __dw_pcie_prog_outbound_atu(pci, func_no, index, type, 417 - cpu_addr, pci_addr, size); 418 - } 419 - 420 - static u32 dw_pcie_readl_ib_unroll(struct dw_pcie *pci, u32 index, u32 reg) 421 - { 422 - u32 offset = PCIE_GET_ATU_INB_UNR_REG_OFFSET(index); 423 - 424 - return dw_pcie_readl_atu(pci, offset + reg); 425 - } 426 - 427 - static void dw_pcie_writel_ib_unroll(struct dw_pcie *pci, u32 index, u32 reg, 428 - u32 val) 429 - { 430 - u32 offset = PCIE_GET_ATU_INB_UNR_REG_OFFSET(index); 431 - 432 - dw_pcie_writel_atu(pci, offset + reg, val); 433 - } 434 - 435 - static int dw_pcie_prog_inbound_atu_unroll(struct dw_pcie *pci, u8 func_no, 436 - int index, int bar, u64 cpu_addr, 437 - enum dw_pcie_as_type as_type) 438 - { 439 - int type; 440 - u32 retries, val; 441 - 442 - dw_pcie_writel_ib_unroll(pci, index, PCIE_ATU_UNR_LOWER_TARGET, 443 - lower_32_bits(cpu_addr)); 444 - dw_pcie_writel_ib_unroll(pci, index, PCIE_ATU_UNR_UPPER_TARGET, 445 - upper_32_bits(cpu_addr)); 446 - 447 - switch (as_type) { 448 - case DW_PCIE_AS_MEM: 449 - type = PCIE_ATU_TYPE_MEM; 450 - break; 451 - case DW_PCIE_AS_IO: 452 - type = PCIE_ATU_TYPE_IO; 453 - break; 454 - default: 281 + if ((limit_addr & ~pci->region_limit) != (cpu_addr & ~pci->region_limit) || 282 + !IS_ALIGNED(cpu_addr, pci->region_align) || 283 + 
!IS_ALIGNED(pci_addr, pci->region_align) || !size) { 455 284 return -EINVAL; 456 285 } 457 286 458 - dw_pcie_writel_ib_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL1, type | 459 - PCIE_ATU_FUNC_NUM(func_no)); 460 - dw_pcie_writel_ib_unroll(pci, index, PCIE_ATU_UNR_REGION_CTRL2, 461 - PCIE_ATU_FUNC_NUM_MATCH_EN | 462 - PCIE_ATU_ENABLE | 463 - PCIE_ATU_BAR_MODE_ENABLE | (bar << 8)); 287 + dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_LOWER_BASE, 288 + lower_32_bits(cpu_addr)); 289 + dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_UPPER_BASE, 290 + upper_32_bits(cpu_addr)); 291 + 292 + dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_LIMIT, 293 + lower_32_bits(limit_addr)); 294 + if (dw_pcie_ver_is_ge(pci, 460A)) 295 + dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_UPPER_LIMIT, 296 + upper_32_bits(limit_addr)); 297 + 298 + dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_LOWER_TARGET, 299 + lower_32_bits(pci_addr)); 300 + dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_UPPER_TARGET, 301 + upper_32_bits(pci_addr)); 302 + 303 + val = type | PCIE_ATU_FUNC_NUM(func_no); 304 + if (upper_32_bits(limit_addr) > upper_32_bits(cpu_addr) && 305 + dw_pcie_ver_is_ge(pci, 460A)) 306 + val |= PCIE_ATU_INCREASE_REGION_SIZE; 307 + if (dw_pcie_ver_is(pci, 490A)) 308 + val = dw_pcie_enable_ecrc(val); 309 + dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_REGION_CTRL1, val); 310 + 311 + dw_pcie_writel_atu_ob(pci, index, PCIE_ATU_REGION_CTRL2, PCIE_ATU_ENABLE); 464 312 465 313 /* 466 314 * Make sure ATU enable takes effect before any subsequent config 467 315 * and I/O accesses. 
468 316 */ 469 317 for (retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; retries++) { 470 - val = dw_pcie_readl_ib_unroll(pci, index, 471 - PCIE_ATU_UNR_REGION_CTRL2); 318 + val = dw_pcie_readl_atu_ob(pci, index, PCIE_ATU_REGION_CTRL2); 472 319 if (val & PCIE_ATU_ENABLE) 473 320 return 0; 474 321 475 322 mdelay(LINK_WAIT_IATU); 476 323 } 477 - dev_err(pci->dev, "Inbound iATU is not being enabled\n"); 478 324 479 - return -EBUSY; 325 + dev_err(pci->dev, "Outbound iATU is not being enabled\n"); 326 + 327 + return -ETIMEDOUT; 328 + } 329 + 330 + int dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type, 331 + u64 cpu_addr, u64 pci_addr, u64 size) 332 + { 333 + return __dw_pcie_prog_outbound_atu(pci, 0, index, type, 334 + cpu_addr, pci_addr, size); 335 + } 336 + 337 + int dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index, 338 + int type, u64 cpu_addr, u64 pci_addr, 339 + u64 size) 340 + { 341 + return __dw_pcie_prog_outbound_atu(pci, func_no, index, type, 342 + cpu_addr, pci_addr, size); 343 + } 344 + 345 + static inline u32 dw_pcie_readl_atu_ib(struct dw_pcie *pci, u32 index, u32 reg) 346 + { 347 + return dw_pcie_readl_atu(pci, PCIE_ATU_REGION_DIR_IB, index, reg); 348 + } 349 + 350 + static inline void dw_pcie_writel_atu_ib(struct dw_pcie *pci, u32 index, u32 reg, 351 + u32 val) 352 + { 353 + dw_pcie_writel_atu(pci, PCIE_ATU_REGION_DIR_IB, index, reg, val); 480 354 } 481 355 482 356 int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index, 483 - int bar, u64 cpu_addr, 484 - enum dw_pcie_as_type as_type) 357 + int type, u64 cpu_addr, u8 bar) 485 358 { 486 - int type; 487 359 u32 retries, val; 488 360 489 - if (pci->iatu_unroll_enabled) 490 - return dw_pcie_prog_inbound_atu_unroll(pci, func_no, index, bar, 491 - cpu_addr, as_type); 492 - 493 - dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, PCIE_ATU_REGION_INBOUND | 494 - index); 495 - dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_TARGET, lower_32_bits(cpu_addr)); 496 - 
dw_pcie_writel_dbi(pci, PCIE_ATU_UPPER_TARGET, upper_32_bits(cpu_addr)); 497 - 498 - switch (as_type) { 499 - case DW_PCIE_AS_MEM: 500 - type = PCIE_ATU_TYPE_MEM; 501 - break; 502 - case DW_PCIE_AS_IO: 503 - type = PCIE_ATU_TYPE_IO; 504 - break; 505 - default: 361 + if (!IS_ALIGNED(cpu_addr, pci->region_align)) 506 362 return -EINVAL; 507 - } 508 363 509 - dw_pcie_writel_dbi(pci, PCIE_ATU_CR1, type | 510 - PCIE_ATU_FUNC_NUM(func_no)); 511 - dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, PCIE_ATU_ENABLE | 512 - PCIE_ATU_FUNC_NUM_MATCH_EN | 513 - PCIE_ATU_BAR_MODE_ENABLE | (bar << 8)); 364 + dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_LOWER_TARGET, 365 + lower_32_bits(cpu_addr)); 366 + dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_UPPER_TARGET, 367 + upper_32_bits(cpu_addr)); 368 + 369 + dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_REGION_CTRL1, type | 370 + PCIE_ATU_FUNC_NUM(func_no)); 371 + dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_REGION_CTRL2, 372 + PCIE_ATU_ENABLE | PCIE_ATU_FUNC_NUM_MATCH_EN | 373 + PCIE_ATU_BAR_MODE_ENABLE | (bar << 8)); 514 374 515 375 /* 516 376 * Make sure ATU enable takes effect before any subsequent config 517 377 * and I/O accesses. 
518 378 */ 519 379 for (retries = 0; retries < LINK_WAIT_MAX_IATU_RETRIES; retries++) { 520 - val = dw_pcie_readl_dbi(pci, PCIE_ATU_CR2); 380 + val = dw_pcie_readl_atu_ib(pci, index, PCIE_ATU_REGION_CTRL2); 521 381 if (val & PCIE_ATU_ENABLE) 522 382 return 0; 523 383 524 384 mdelay(LINK_WAIT_IATU); 525 385 } 386 + 526 387 dev_err(pci->dev, "Inbound iATU is not being enabled\n"); 527 388 528 - return -EBUSY; 389 + return -ETIMEDOUT; 529 390 } 530 391 531 - void dw_pcie_disable_atu(struct dw_pcie *pci, int index, 532 - enum dw_pcie_region_type type) 392 + void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index) 533 393 { 534 - int region; 535 - 536 - switch (type) { 537 - case DW_PCIE_REGION_INBOUND: 538 - region = PCIE_ATU_REGION_INBOUND; 539 - break; 540 - case DW_PCIE_REGION_OUTBOUND: 541 - region = PCIE_ATU_REGION_OUTBOUND; 542 - break; 543 - default: 544 - return; 545 - } 546 - 547 - dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, region | index); 548 - dw_pcie_writel_dbi(pci, PCIE_ATU_CR2, ~(u32)PCIE_ATU_ENABLE); 394 + dw_pcie_writel_atu(pci, dir, index, PCIE_ATU_REGION_CTRL2, 0); 549 395 } 550 396 551 397 int dw_pcie_wait_for_link(struct dw_pcie *pci) 552 398 { 399 + u32 offset, val; 553 400 int retries; 554 401 555 402 /* Check if the link is up or not */ 556 403 for (retries = 0; retries < LINK_WAIT_MAX_RETRIES; retries++) { 557 - if (dw_pcie_link_up(pci)) { 558 - dev_info(pci->dev, "Link up\n"); 559 - return 0; 560 - } 404 + if (dw_pcie_link_up(pci)) 405 + break; 406 + 561 407 usleep_range(LINK_WAIT_USLEEP_MIN, LINK_WAIT_USLEEP_MAX); 562 408 } 563 409 564 - dev_info(pci->dev, "Phy link never came up\n"); 410 + if (retries >= LINK_WAIT_MAX_RETRIES) { 411 + dev_err(pci->dev, "Phy link never came up\n"); 412 + return -ETIMEDOUT; 413 + } 565 414 566 - return -ETIMEDOUT; 415 + offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 416 + val = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA); 417 + 418 + dev_info(pci->dev, "PCIe Gen.%u x%u link up\n", 419 + 
FIELD_GET(PCI_EXP_LNKSTA_CLS, val), 420 + FIELD_GET(PCI_EXP_LNKSTA_NLW, val)); 421 + 422 + return 0; 567 423 } 568 424 EXPORT_SYMBOL_GPL(dw_pcie_wait_for_link); 569 425 ··· 470 534 if (pci->ops && pci->ops->link_up) 471 535 return pci->ops->link_up(pci); 472 536 473 - val = readl(pci->dbi_base + PCIE_PORT_DEBUG1); 537 + val = dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG1); 474 538 return ((val & PCIE_PORT_DEBUG1_LINK_UP) && 475 539 (!(val & PCIE_PORT_DEBUG1_LINK_IN_TRAINING))); 476 540 } ··· 522 586 523 587 } 524 588 525 - static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci) 589 + static bool dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci) 526 590 { 527 591 u32 val; 528 592 529 593 val = dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT); 530 594 if (val == 0xffffffff) 531 - return 1; 595 + return true; 532 596 533 - return 0; 534 - } 535 - 536 - static void dw_pcie_iatu_detect_regions_unroll(struct dw_pcie *pci) 537 - { 538 - int max_region, i, ob = 0, ib = 0; 539 - u32 val; 540 - 541 - max_region = min((int)pci->atu_size / 512, 256); 542 - 543 - for (i = 0; i < max_region; i++) { 544 - dw_pcie_writel_ob_unroll(pci, i, PCIE_ATU_UNR_LOWER_TARGET, 545 - 0x11110000); 546 - 547 - val = dw_pcie_readl_ob_unroll(pci, i, PCIE_ATU_UNR_LOWER_TARGET); 548 - if (val == 0x11110000) 549 - ob++; 550 - else 551 - break; 552 - } 553 - 554 - for (i = 0; i < max_region; i++) { 555 - dw_pcie_writel_ib_unroll(pci, i, PCIE_ATU_UNR_LOWER_TARGET, 556 - 0x11110000); 557 - 558 - val = dw_pcie_readl_ib_unroll(pci, i, PCIE_ATU_UNR_LOWER_TARGET); 559 - if (val == 0x11110000) 560 - ib++; 561 - else 562 - break; 563 - } 564 - pci->num_ib_windows = ib; 565 - pci->num_ob_windows = ob; 597 + return false; 566 598 } 567 599 568 600 static void dw_pcie_iatu_detect_regions(struct dw_pcie *pci) 569 601 { 570 - int max_region, i, ob = 0, ib = 0; 571 - u32 val; 602 + int max_region, ob, ib; 603 + u32 val, min, dir; 604 + u64 max; 572 605 573 - dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, 0xFF); 574 - max_region = 
dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT) + 1; 606 + if (pci->iatu_unroll_enabled) { 607 + max_region = min((int)pci->atu_size / 512, 256); 608 + } else { 609 + dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, 0xFF); 610 + max_region = dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT) + 1; 611 + } 575 612 576 - for (i = 0; i < max_region; i++) { 577 - dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, PCIE_ATU_REGION_OUTBOUND | i); 578 - dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_TARGET, 0x11110000); 579 - val = dw_pcie_readl_dbi(pci, PCIE_ATU_LOWER_TARGET); 580 - if (val == 0x11110000) 581 - ob++; 582 - else 613 + for (ob = 0; ob < max_region; ob++) { 614 + dw_pcie_writel_atu_ob(pci, ob, PCIE_ATU_LOWER_TARGET, 0x11110000); 615 + val = dw_pcie_readl_atu_ob(pci, ob, PCIE_ATU_LOWER_TARGET); 616 + if (val != 0x11110000) 583 617 break; 584 618 } 585 619 586 - for (i = 0; i < max_region; i++) { 587 - dw_pcie_writel_dbi(pci, PCIE_ATU_VIEWPORT, PCIE_ATU_REGION_INBOUND | i); 588 - dw_pcie_writel_dbi(pci, PCIE_ATU_LOWER_TARGET, 0x11110000); 589 - val = dw_pcie_readl_dbi(pci, PCIE_ATU_LOWER_TARGET); 590 - if (val == 0x11110000) 591 - ib++; 592 - else 620 + for (ib = 0; ib < max_region; ib++) { 621 + dw_pcie_writel_atu_ib(pci, ib, PCIE_ATU_LOWER_TARGET, 0x11110000); 622 + val = dw_pcie_readl_atu_ib(pci, ib, PCIE_ATU_LOWER_TARGET); 623 + if (val != 0x11110000) 593 624 break; 594 625 } 595 626 596 - pci->num_ib_windows = ib; 627 + if (ob) { 628 + dir = PCIE_ATU_REGION_DIR_OB; 629 + } else if (ib) { 630 + dir = PCIE_ATU_REGION_DIR_IB; 631 + } else { 632 + dev_err(pci->dev, "No iATU regions found\n"); 633 + return; 634 + } 635 + 636 + dw_pcie_writel_atu(pci, dir, 0, PCIE_ATU_LIMIT, 0x0); 637 + min = dw_pcie_readl_atu(pci, dir, 0, PCIE_ATU_LIMIT); 638 + 639 + if (dw_pcie_ver_is_ge(pci, 460A)) { 640 + dw_pcie_writel_atu(pci, dir, 0, PCIE_ATU_UPPER_LIMIT, 0xFFFFFFFF); 641 + max = dw_pcie_readl_atu(pci, dir, 0, PCIE_ATU_UPPER_LIMIT); 642 + } else { 643 + max = 0; 644 + } 645 + 597 646 pci->num_ob_windows = 
ob; 647 + pci->num_ib_windows = ib; 648 + pci->region_align = 1 << fls(min); 649 + pci->region_limit = (max << 32) | (SZ_4G - 1); 598 650 } 599 651 600 652 void dw_pcie_iatu_detect(struct dw_pcie *pci) 601 653 { 602 - struct device *dev = pci->dev; 603 - struct platform_device *pdev = to_platform_device(dev); 654 + struct platform_device *pdev = to_platform_device(pci->dev); 604 655 605 - if (pci->version >= 0x480A || (!pci->version && 606 - dw_pcie_iatu_unroll_enabled(pci))) { 607 - pci->iatu_unroll_enabled = true; 656 + pci->iatu_unroll_enabled = dw_pcie_iatu_unroll_enabled(pci); 657 + if (pci->iatu_unroll_enabled) { 608 658 if (!pci->atu_base) { 609 659 struct resource *res = 610 660 platform_get_resource_byname(pdev, IORESOURCE_MEM, "atu"); 611 661 if (res) { 612 662 pci->atu_size = resource_size(res); 613 - pci->atu_base = devm_ioremap_resource(dev, res); 663 + pci->atu_base = devm_ioremap_resource(pci->dev, res); 614 664 } 615 665 if (!pci->atu_base || IS_ERR(pci->atu_base)) 616 666 pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET; ··· 605 683 if (!pci->atu_size) 606 684 /* Pick a minimal default, enough for 8 in and 8 out windows */ 607 685 pci->atu_size = SZ_4K; 686 + } else { 687 + pci->atu_base = pci->dbi_base + PCIE_ATU_VIEWPORT_BASE; 688 + pci->atu_size = PCIE_ATU_VIEWPORT_SIZE; 689 + } 608 690 609 - dw_pcie_iatu_detect_regions_unroll(pci); 610 - } else 611 - dw_pcie_iatu_detect_regions(pci); 691 + dw_pcie_iatu_detect_regions(pci); 612 692 613 693 dev_info(pci->dev, "iATU unroll: %s\n", pci->iatu_unroll_enabled ? 
614 694 "enabled" : "disabled"); 615 695 616 - dev_info(pci->dev, "Detected iATU regions: %u outbound, %u inbound", 617 - pci->num_ob_windows, pci->num_ib_windows); 696 + dev_info(pci->dev, "iATU regions: %u ob, %u ib, align %uK, limit %lluG\n", 697 + pci->num_ob_windows, pci->num_ib_windows, 698 + pci->region_align / SZ_1K, (pci->region_limit + 1) / SZ_1G); 618 699 } 619 700 620 701 void dw_pcie_setup(struct dw_pcie *pci) 621 702 { 703 + struct device_node *np = pci->dev->of_node; 622 704 u32 val; 623 - struct device *dev = pci->dev; 624 - struct device_node *np = dev->of_node; 625 705 626 706 if (pci->link_gen > 0) 627 707 dw_pcie_link_set_max_speed(pci, pci->link_gen); ··· 649 725 val &= ~PORT_LINK_FAST_LINK_MODE; 650 726 val |= PORT_LINK_DLL_LINK_EN; 651 727 dw_pcie_writel_dbi(pci, PCIE_PORT_LINK_CONTROL, val); 728 + 729 + if (of_property_read_bool(np, "snps,enable-cdm-check")) { 730 + val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS); 731 + val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS | 732 + PCIE_PL_CHK_REG_CHK_REG_START; 733 + dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val); 734 + } 652 735 653 736 of_property_read_u32(np, "num-lanes", &pci->num_lanes); 654 737 if (!pci->num_lanes) { ··· 703 772 break; 704 773 } 705 774 dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val); 706 - 707 - if (of_property_read_bool(np, "snps,enable-cdm-check")) { 708 - val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS); 709 - val |= PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS | 710 - PCIE_PL_CHK_REG_CHK_REG_START; 711 - dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val); 712 - } 713 775 }
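The new `dw_pcie_iatu_detect_regions()` above probes the hardware's window granularity and reach: writing 0 to `PCIE_ATU_LIMIT` and reading it back exposes the read-only low limit bits (`min`), and writing all-ones to `PCIE_ATU_UPPER_LIMIT` exposes the writable high bits (`max`); from these it computes `region_align = 1 << fls(min)` and `region_limit = (max << 32) | (SZ_4G - 1)`. A sketch of that arithmetic, assuming readback values rather than real register accesses:

```c
#include <assert.h>
#include <stdint.h>

/* Portable stand-in for the kernel's fls(): 1-based index of the
 * most significant set bit, 0 for x == 0. */
static int fls32(uint32_t x)
{
	int r = 0;

	while (x) {
		r++;
		x >>= 1;
	}
	return r;
}

/* Alignment implied by the forced-to-one low bits of the limit CSR:
 * e.g. a 0xFFFF readback means 64 KiB window granularity. */
static uint32_t region_align(uint32_t min_limit_readback)
{
	return UINT32_C(1) << fls32(min_limit_readback);
}

/* Highest addressable CPU/PCI address: writable upper-limit bits in
 * the high word, the full 4 GiB - 1 span in the low word. */
static uint64_t region_limit(uint64_t max_upper_limit_readback)
{
	return (max_upper_limit_readback << 32) | UINT32_C(0xFFFFFFFF);
}
```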
+122 -62
drivers/pci/controller/dwc/pcie-designware.h
··· 20 20 #include <linux/pci-epc.h> 21 21 #include <linux/pci-epf.h> 22 22 23 + /* DWC PCIe IP-core versions (native support since v4.70a) */ 24 + #define DW_PCIE_VER_365A 0x3336352a 25 + #define DW_PCIE_VER_460A 0x3436302a 26 + #define DW_PCIE_VER_470A 0x3437302a 27 + #define DW_PCIE_VER_480A 0x3438302a 28 + #define DW_PCIE_VER_490A 0x3439302a 29 + #define DW_PCIE_VER_520A 0x3532302a 30 + 31 + #define __dw_pcie_ver_cmp(_pci, _ver, _op) \ 32 + ((_pci)->version _op DW_PCIE_VER_ ## _ver) 33 + 34 + #define dw_pcie_ver_is(_pci, _ver) __dw_pcie_ver_cmp(_pci, _ver, ==) 35 + 36 + #define dw_pcie_ver_is_ge(_pci, _ver) __dw_pcie_ver_cmp(_pci, _ver, >=) 37 + 38 + #define dw_pcie_ver_type_is(_pci, _ver, _type) \ 39 + (__dw_pcie_ver_cmp(_pci, _ver, ==) && \ 40 + __dw_pcie_ver_cmp(_pci, TYPE_ ## _type, ==)) 41 + 42 + #define dw_pcie_ver_type_is_ge(_pci, _ver, _type) \ 43 + (__dw_pcie_ver_cmp(_pci, _ver, ==) && \ 44 + __dw_pcie_ver_cmp(_pci, TYPE_ ## _type, >=)) 45 + 23 46 /* Parameters for the waiting for link up routine */ 24 47 #define LINK_WAIT_MAX_RETRIES 10 25 48 #define LINK_WAIT_USLEEP_MIN 90000 ··· 97 74 #define PCIE_MSI_INTR0_MASK 0x82C 98 75 #define PCIE_MSI_INTR0_STATUS 0x830 99 76 77 + #define GEN3_RELATED_OFF 0x890 78 + #define GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL BIT(0) 79 + #define GEN3_RELATED_OFF_RXEQ_RGRDLESS_RXTS BIT(13) 80 + #define GEN3_RELATED_OFF_GEN3_EQ_DISABLE BIT(16) 81 + #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT 24 82 + #define GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK GENMASK(25, 24) 83 + 100 84 #define PCIE_PORT_MULTI_LANE_CTRL 0x8C0 101 85 #define PORT_MLTI_UPCFG_SUPPORT BIT(7) 102 86 87 + #define PCIE_VERSION_NUMBER 0x8F8 88 + #define PCIE_VERSION_TYPE 0x8FC 89 + 90 + /* 91 + * iATU inbound and outbound windows CSRs. Before the IP-core v4.80a each 92 + * iATU region CSRs had been indirectly accessible by means of the dedicated 93 + * viewport selector. 
The iATU/eDMA CSRs space was re-designed in DWC PCIe 94 + * v4.80a in a way so the viewport was unrolled into the directly accessible 95 + * iATU/eDMA CSRs space. 96 + */ 103 97 #define PCIE_ATU_VIEWPORT 0x900 104 - #define PCIE_ATU_REGION_INBOUND BIT(31) 105 - #define PCIE_ATU_REGION_OUTBOUND 0 106 - #define PCIE_ATU_CR1 0x904 98 + #define PCIE_ATU_REGION_DIR_IB BIT(31) 99 + #define PCIE_ATU_REGION_DIR_OB 0 100 + #define PCIE_ATU_VIEWPORT_BASE 0x904 101 + #define PCIE_ATU_UNROLL_BASE(dir, index) \ 102 + (((index) << 9) | ((dir == PCIE_ATU_REGION_DIR_IB) ? BIT(8) : 0)) 103 + #define PCIE_ATU_VIEWPORT_SIZE 0x2C 104 + #define PCIE_ATU_REGION_CTRL1 0x000 107 105 #define PCIE_ATU_INCREASE_REGION_SIZE BIT(13) 108 106 #define PCIE_ATU_TYPE_MEM 0x0 109 107 #define PCIE_ATU_TYPE_IO 0x2 ··· 132 88 #define PCIE_ATU_TYPE_CFG1 0x5 133 89 #define PCIE_ATU_TD BIT(8) 134 90 #define PCIE_ATU_FUNC_NUM(pf) ((pf) << 20) 135 - #define PCIE_ATU_CR2 0x908 91 + #define PCIE_ATU_REGION_CTRL2 0x004 136 92 #define PCIE_ATU_ENABLE BIT(31) 137 93 #define PCIE_ATU_BAR_MODE_ENABLE BIT(30) 138 94 #define PCIE_ATU_FUNC_NUM_MATCH_EN BIT(19) 139 - #define PCIE_ATU_LOWER_BASE 0x90C 140 - #define PCIE_ATU_UPPER_BASE 0x910 141 - #define PCIE_ATU_LIMIT 0x914 142 - #define PCIE_ATU_LOWER_TARGET 0x918 95 + #define PCIE_ATU_LOWER_BASE 0x008 96 + #define PCIE_ATU_UPPER_BASE 0x00C 97 + #define PCIE_ATU_LIMIT 0x010 98 + #define PCIE_ATU_LOWER_TARGET 0x014 143 99 #define PCIE_ATU_BUS(x) FIELD_PREP(GENMASK(31, 24), x) 144 100 #define PCIE_ATU_DEV(x) FIELD_PREP(GENMASK(23, 19), x) 145 101 #define PCIE_ATU_FUNC(x) FIELD_PREP(GENMASK(18, 16), x) 146 - #define PCIE_ATU_UPPER_TARGET 0x91C 147 - #define PCIE_ATU_UPPER_LIMIT 0x924 102 + #define PCIE_ATU_UPPER_TARGET 0x018 103 + #define PCIE_ATU_UPPER_LIMIT 0x020 148 104 149 105 #define PCIE_MISC_CONTROL_1_OFF 0x8BC 150 106 #define PCIE_DBI_RO_WR_EN BIT(0) ··· 175 131 #define PCIE_ATU_UNR_UPPER_LIMIT 0x20 176 132 177 133 /* 134 + * RAS-DES register definitions 135 + 
*/ 136 + #define PCIE_RAS_DES_EVENT_COUNTER_CONTROL 0x8 137 + #define EVENT_COUNTER_ALL_CLEAR 0x3 138 + #define EVENT_COUNTER_ENABLE_ALL 0x7 139 + #define EVENT_COUNTER_ENABLE_SHIFT 2 140 + #define EVENT_COUNTER_EVENT_SEL_MASK GENMASK(7, 0) 141 + #define EVENT_COUNTER_EVENT_SEL_SHIFT 16 142 + #define EVENT_COUNTER_EVENT_Tx_L0S 0x2 143 + #define EVENT_COUNTER_EVENT_Rx_L0S 0x3 144 + #define EVENT_COUNTER_EVENT_L1 0x5 145 + #define EVENT_COUNTER_EVENT_L1_1 0x7 146 + #define EVENT_COUNTER_EVENT_L1_2 0x8 147 + #define EVENT_COUNTER_GROUP_SEL_SHIFT 24 148 + #define EVENT_COUNTER_GROUP_5 0x5 149 + 150 + #define PCIE_RAS_DES_EVENT_COUNTER_DATA 0xc 151 + 152 + /* 178 153 * The default address offset between dbi_base and atu_base. Root controller 179 154 * drivers are not required to initialize atu_base if the offset matches this 180 155 * default; the driver core automatically derives atu_base from dbi_base using 181 156 * this offset, if atu_base not set. 182 157 */ 183 158 #define DEFAULT_DBI_ATU_OFFSET (0x3 << 20) 184 - 185 - /* Register address builder */ 186 - #define PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(region) \ 187 - ((region) << 9) 188 - 189 - #define PCIE_GET_ATU_INB_UNR_REG_OFFSET(region) \ 190 - (((region) << 9) | BIT(8)) 191 159 192 160 #define MAX_MSI_IRQS 256 193 161 #define MAX_MSI_IRQS_PER_CTRL 32 ··· 211 155 #define MAX_IATU_IN 256 212 156 #define MAX_IATU_OUT 256 213 157 214 - struct pcie_port; 215 158 struct dw_pcie; 159 + struct dw_pcie_rp; 216 160 struct dw_pcie_ep; 217 - 218 - enum dw_pcie_region_type { 219 - DW_PCIE_REGION_UNKNOWN, 220 - DW_PCIE_REGION_INBOUND, 221 - DW_PCIE_REGION_OUTBOUND, 222 - }; 223 161 224 162 enum dw_pcie_device_mode { 225 163 DW_PCIE_UNKNOWN_TYPE, ··· 223 173 }; 224 174 225 175 struct dw_pcie_host_ops { 226 - int (*host_init)(struct pcie_port *pp); 227 - int (*msi_host_init)(struct pcie_port *pp); 176 + int (*host_init)(struct dw_pcie_rp *pp); 177 + void (*host_deinit)(struct dw_pcie_rp *pp); 178 + int (*msi_host_init)(struct 
dw_pcie_rp *pp); 228 179 }; 229 180 230 - struct pcie_port { 181 + struct dw_pcie_rp { 231 182 bool has_msi_ctrl:1; 183 + bool cfg0_io_shared:1; 232 184 u64 cfg0_base; 233 185 void __iomem *va_cfg0_base; 234 186 u32 cfg0_size; ··· 239 187 u32 io_size; 240 188 int irq; 241 189 const struct dw_pcie_host_ops *ops; 242 - int msi_irq; 190 + int msi_irq[MAX_MSI_CTRLS]; 243 191 struct irq_domain *irq_domain; 244 192 struct irq_domain *msi_domain; 245 - u16 msi_msg; 246 193 dma_addr_t msi_data; 194 + struct page *msi_page; 247 195 struct irq_chip *msi_irq_chip; 248 196 u32 num_vectors; 249 197 u32 irq_mask[MAX_MSI_CTRLS]; 250 198 struct pci_host_bridge *bridge; 251 199 raw_spinlock_t lock; 252 200 DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS); 253 - }; 254 - 255 - enum dw_pcie_as_type { 256 - DW_PCIE_AS_UNKNOWN, 257 - DW_PCIE_AS_MEM, 258 - DW_PCIE_AS_IO, 259 201 }; 260 202 261 203 struct dw_pcie_ep_ops { ··· 307 261 struct device *dev; 308 262 void __iomem *dbi_base; 309 263 void __iomem *dbi_base2; 310 - /* Used when iatu_unroll_enabled is true */ 311 264 void __iomem *atu_base; 312 265 size_t atu_size; 313 266 u32 num_ib_windows; 314 267 u32 num_ob_windows; 315 - struct pcie_port pp; 268 + u32 region_align; 269 + u64 region_limit; 270 + struct dw_pcie_rp pp; 316 271 struct dw_pcie_ep ep; 317 272 const struct dw_pcie_ops *ops; 318 - unsigned int version; 273 + u32 version; 274 + u32 type; 319 275 int num_lanes; 320 276 int link_gen; 321 277 u8 n_fts[2]; 322 278 bool iatu_unroll_enabled: 1; 323 - bool io_cfg_atu_shared: 1; 324 279 }; 325 280 326 281 #define to_dw_pcie_from_pp(port) container_of((port), struct dw_pcie, pp) 327 282 328 283 #define to_dw_pcie_from_ep(endpoint) \ 329 284 container_of((endpoint), struct dw_pcie, ep) 285 + 286 + void dw_pcie_version_detect(struct dw_pcie *pci); 330 287 331 288 u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap); 332 289 u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap); ··· 343 294 int dw_pcie_link_up(struct 
dw_pcie *pci); 344 295 void dw_pcie_upconfig_setup(struct dw_pcie *pci); 345 296 int dw_pcie_wait_for_link(struct dw_pcie *pci); 346 - void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, 347 - int type, u64 cpu_addr, u64 pci_addr, 348 - u64 size); 349 - void dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index, 350 - int type, u64 cpu_addr, u64 pci_addr, 351 - u64 size); 297 + int dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index, int type, 298 + u64 cpu_addr, u64 pci_addr, u64 size); 299 + int dw_pcie_prog_ep_outbound_atu(struct dw_pcie *pci, u8 func_no, int index, 300 + int type, u64 cpu_addr, u64 pci_addr, u64 size); 352 301 int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, u8 func_no, int index, 353 - int bar, u64 cpu_addr, 354 - enum dw_pcie_as_type as_type); 355 - void dw_pcie_disable_atu(struct dw_pcie *pci, int index, 356 - enum dw_pcie_region_type type); 302 + int type, u64 cpu_addr, u8 bar); 303 + void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index); 357 304 void dw_pcie_setup(struct dw_pcie *pci); 358 305 void dw_pcie_iatu_detect(struct dw_pcie *pci); 359 306 ··· 410 365 dw_pcie_writel_dbi(pci, reg, val); 411 366 } 412 367 368 + static inline int dw_pcie_start_link(struct dw_pcie *pci) 369 + { 370 + if (pci->ops && pci->ops->start_link) 371 + return pci->ops->start_link(pci); 372 + 373 + return 0; 374 + } 375 + 376 + static inline void dw_pcie_stop_link(struct dw_pcie *pci) 377 + { 378 + if (pci->ops && pci->ops->stop_link) 379 + pci->ops->stop_link(pci); 380 + } 381 + 413 382 #ifdef CONFIG_PCIE_DW_HOST 414 - irqreturn_t dw_handle_msi_irq(struct pcie_port *pp); 415 - void dw_pcie_setup_rc(struct pcie_port *pp); 416 - int dw_pcie_host_init(struct pcie_port *pp); 417 - void dw_pcie_host_deinit(struct pcie_port *pp); 418 - int dw_pcie_allocate_domains(struct pcie_port *pp); 383 + irqreturn_t dw_handle_msi_irq(struct dw_pcie_rp *pp); 384 + int dw_pcie_setup_rc(struct dw_pcie_rp *pp); 385 + int 
dw_pcie_host_init(struct dw_pcie_rp *pp); 386 + void dw_pcie_host_deinit(struct dw_pcie_rp *pp); 387 + int dw_pcie_allocate_domains(struct dw_pcie_rp *pp); 419 388 void __iomem *dw_pcie_own_conf_map_bus(struct pci_bus *bus, unsigned int devfn, 420 389 int where); 421 390 #else 422 - static inline irqreturn_t dw_handle_msi_irq(struct pcie_port *pp) 391 + static inline irqreturn_t dw_handle_msi_irq(struct dw_pcie_rp *pp) 423 392 { 424 393 return IRQ_NONE; 425 394 } 426 395 427 - static inline void dw_pcie_setup_rc(struct pcie_port *pp) 428 - { 429 - } 430 - 431 - static inline int dw_pcie_host_init(struct pcie_port *pp) 396 + static inline int dw_pcie_setup_rc(struct dw_pcie_rp *pp) 432 397 { 433 398 return 0; 434 399 } 435 400 436 - static inline void dw_pcie_host_deinit(struct pcie_port *pp) 401 + static inline int dw_pcie_host_init(struct dw_pcie_rp *pp) 402 + { 403 + return 0; 404 + } 405 + 406 + static inline void dw_pcie_host_deinit(struct dw_pcie_rp *pp) 437 407 { 438 408 } 439 409 440 - static inline int dw_pcie_allocate_domains(struct pcie_port *pp) 410 + static inline int dw_pcie_allocate_domains(struct dw_pcie_rp *pp) 441 411 { 442 412 return 0; 443 413 }
+2 -2
drivers/pci/controller/dwc/pcie-dw-rockchip.c
··· 186 186 return 0; 187 187 } 188 188 189 - static int rockchip_pcie_host_init(struct pcie_port *pp) 189 + static int rockchip_pcie_host_init(struct dw_pcie_rp *pp) 190 190 { 191 191 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 192 192 struct rockchip_pcie *rockchip = to_rockchip_pcie(pci); ··· 288 288 { 289 289 struct device *dev = &pdev->dev; 290 290 struct rockchip_pcie *rockchip; 291 - struct pcie_port *pp; 291 + struct dw_pcie_rp *pp; 292 292 int ret; 293 293 294 294 rockchip = devm_kzalloc(dev, sizeof(*rockchip), GFP_KERNEL);
+1 -3
drivers/pci/controller/dwc/pcie-fu740.c
··· 16 16 #include <linux/gpio.h> 17 17 #include <linux/gpio/consumer.h> 18 18 #include <linux/kernel.h> 19 - #include <linux/mfd/syscon.h> 20 19 #include <linux/module.h> 21 20 #include <linux/pci.h> 22 21 #include <linux/platform_device.h> 23 - #include <linux/regulator/consumer.h> 24 22 #include <linux/resource.h> 25 23 #include <linux/types.h> 26 24 #include <linux/interrupt.h> ··· 234 236 return ret; 235 237 } 236 238 237 - static int fu740_pcie_host_init(struct pcie_port *pp) 239 + static int fu740_pcie_host_init(struct dw_pcie_rp *pp) 238 240 { 239 241 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 240 242 struct fu740_pcie *afp = to_fu740_pcie(pci);
+5 -5
drivers/pci/controller/dwc/pcie-histb.c
··· 74 74 writel(val, histb_pcie->ctrl + reg); 75 75 } 76 76 77 - static void histb_pcie_dbi_w_mode(struct pcie_port *pp, bool enable) 77 + static void histb_pcie_dbi_w_mode(struct dw_pcie_rp *pp, bool enable) 78 78 { 79 79 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 80 80 struct histb_pcie *hipcie = to_histb_pcie(pci); ··· 88 88 histb_pcie_writel(hipcie, PCIE_SYS_CTRL0, val); 89 89 } 90 90 91 - static void histb_pcie_dbi_r_mode(struct pcie_port *pp, bool enable) 91 + static void histb_pcie_dbi_r_mode(struct dw_pcie_rp *pp, bool enable) 92 92 { 93 93 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 94 94 struct histb_pcie *hipcie = to_histb_pcie(pci); ··· 180 180 return 0; 181 181 } 182 182 183 - static int histb_pcie_host_init(struct pcie_port *pp) 183 + static int histb_pcie_host_init(struct dw_pcie_rp *pp) 184 184 { 185 185 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 186 186 struct histb_pcie *hipcie = to_histb_pcie(pci); ··· 219 219 regulator_disable(hipcie->vpcie); 220 220 } 221 221 222 - static int histb_pcie_host_enable(struct pcie_port *pp) 222 + static int histb_pcie_host_enable(struct dw_pcie_rp *pp) 223 223 { 224 224 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 225 225 struct histb_pcie *hipcie = to_histb_pcie(pci); ··· 297 297 { 298 298 struct histb_pcie *hipcie; 299 299 struct dw_pcie *pci; 300 - struct pcie_port *pp; 300 + struct dw_pcie_rp *pp; 301 301 struct device_node *np = pdev->dev.of_node; 302 302 struct device *dev = &pdev->dev; 303 303 enum of_gpio_flags of_flags;
+13 -23
drivers/pci/controller/dwc/pcie-intel-gw.c
··· 58 58 #define BUS_IATU_OFFSET SZ_256M 59 59 #define RESET_INTERVAL_MS 100 60 60 61 - struct intel_pcie_soc { 62 - unsigned int pcie_ver; 63 - }; 64 - 65 61 struct intel_pcie { 66 62 struct dw_pcie pci; 67 63 void __iomem *app_base; ··· 302 306 intel_pcie_ltssm_disable(pcie); 303 307 intel_pcie_link_setup(pcie); 304 308 intel_pcie_init_n_fts(pci); 305 - dw_pcie_setup_rc(&pci->pp); 309 + 310 + ret = dw_pcie_setup_rc(&pci->pp); 311 + if (ret) 312 + goto app_init_err; 313 + 306 314 dw_pcie_upconfig_setup(pci); 307 315 308 316 intel_pcie_device_rst_deassert(pcie); ··· 343 343 static int intel_pcie_remove(struct platform_device *pdev) 344 344 { 345 345 struct intel_pcie *pcie = platform_get_drvdata(pdev); 346 - struct pcie_port *pp = &pcie->pci.pp; 346 + struct dw_pcie_rp *pp = &pcie->pci.pp; 347 347 348 348 dw_pcie_host_deinit(pp); 349 349 __intel_pcie_remove(pcie); ··· 351 351 return 0; 352 352 } 353 353 354 - static int __maybe_unused intel_pcie_suspend_noirq(struct device *dev) 354 + static int intel_pcie_suspend_noirq(struct device *dev) 355 355 { 356 356 struct intel_pcie *pcie = dev_get_drvdata(dev); 357 357 int ret; ··· 366 366 return ret; 367 367 } 368 368 369 - static int __maybe_unused intel_pcie_resume_noirq(struct device *dev) 369 + static int intel_pcie_resume_noirq(struct device *dev) 370 370 { 371 371 struct intel_pcie *pcie = dev_get_drvdata(dev); 372 372 373 373 return intel_pcie_host_setup(pcie); 374 374 } 375 375 376 - static int intel_pcie_rc_init(struct pcie_port *pp) 376 + static int intel_pcie_rc_init(struct dw_pcie_rp *pp) 377 377 { 378 378 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 379 379 struct intel_pcie *pcie = dev_get_drvdata(pci->dev); ··· 394 394 .host_init = intel_pcie_rc_init, 395 395 }; 396 396 397 - static const struct intel_pcie_soc pcie_data = { 398 - .pcie_ver = 0x520A, 399 - }; 400 - 401 397 static int intel_pcie_probe(struct platform_device *pdev) 402 398 { 403 - const struct intel_pcie_soc *data; 404 399 struct device 
*dev = &pdev->dev; 405 400 struct intel_pcie *pcie; 406 - struct pcie_port *pp; 401 + struct dw_pcie_rp *pp; 407 402 struct dw_pcie *pci; 408 403 int ret; 409 404 ··· 419 424 if (ret) 420 425 return ret; 421 426 422 - data = device_get_match_data(dev); 423 - if (!data) 424 - return -ENODEV; 425 - 426 427 pci->ops = &intel_pcie_ops; 427 - pci->version = data->pcie_ver; 428 428 pp->ops = &intel_pcie_dw_ops; 429 429 430 430 ret = dw_pcie_host_init(pp); ··· 432 442 } 433 443 434 444 static const struct dev_pm_ops intel_pcie_pm_ops = { 435 - SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq, 436 - intel_pcie_resume_noirq) 445 + NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pcie_suspend_noirq, 446 + intel_pcie_resume_noirq) 437 447 }; 438 448 439 449 static const struct of_device_id of_intel_pcie_match[] = { 440 - { .compatible = "intel,lgm-pcie", .data = &pcie_data }, 450 + { .compatible = "intel,lgm-pcie" }, 441 451 {} 442 452 }; 443 453
+3 -3
drivers/pci/controller/dwc/pcie-keembay.c
··· 231 231 struct keembay_pcie *pcie = irq_desc_get_handler_data(desc); 232 232 struct irq_chip *chip = irq_desc_get_chip(desc); 233 233 u32 val, mask, status; 234 - struct pcie_port *pp; 234 + struct dw_pcie_rp *pp; 235 235 236 236 /* 237 237 * Keem Bay PCIe Controller provides an additional IP logic on top of ··· 332 332 struct platform_device *pdev) 333 333 { 334 334 struct dw_pcie *pci = &pcie->pci; 335 - struct pcie_port *pp = &pci->pp; 335 + struct dw_pcie_rp *pp = &pci->pp; 336 336 struct device *dev = &pdev->dev; 337 337 u32 val; 338 338 int ret; 339 339 340 340 pp->ops = &keembay_pcie_host_ops; 341 - pp->msi_irq = -ENODEV; 341 + pp->msi_irq[0] = -ENODEV; 342 342 343 343 ret = keembay_pcie_setup_msi_irq(pcie); 344 344 if (ret)
+1 -1
drivers/pci/controller/dwc/pcie-kirin.c
··· 620 620 return 0; 621 621 } 622 622 623 - static int kirin_pcie_host_init(struct pcie_port *pp) 623 + static int kirin_pcie_host_init(struct dw_pcie_rp *pp) 624 624 { 625 625 pp->bridge->ops = &kirin_pci_ops; 626 626
+278 -167
drivers/pci/controller/dwc/pcie-qcom.c
··· 41 41 #define L23_CLK_RMV_DIS BIT(2) 42 42 #define L1_CLK_RMV_DIS BIT(1) 43 43 44 + #define PCIE20_PARF_PM_CTRL 0x20 45 + #define REQ_NOT_ENTR_L1 BIT(5) 46 + 44 47 #define PCIE20_PARF_PHY_CTRL 0x40 45 48 #define PHY_CTRL_PHY_TX0_TERM_OFFSET_MASK GENMASK(20, 16) 46 49 #define PHY_CTRL_PHY_TX0_TERM_OFFSET(x) ((x) << 16) ··· 55 52 #define PCIE20_PARF_DBI_BASE_ADDR 0x168 56 53 #define PCIE20_PARF_SLV_ADDR_SPACE_SIZE 0x16C 57 54 #define PCIE20_PARF_MHI_CLOCK_RESET_CTRL 0x174 55 + #define AHB_CLK_EN BIT(0) 56 + #define MSTR_AXI_CLK_EN BIT(1) 57 + #define BYPASS BIT(4) 58 + 58 59 #define PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT 0x178 59 60 #define PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2 0x1A8 60 61 #define PCIE20_PARF_LTSSM 0x1B0 ··· 76 69 #define PCIE20_AXI_MSTR_RESP_COMP_CTRL1 0x81c 77 70 #define CFG_BRIDGE_SB_INIT BIT(0) 78 71 79 - #define PCIE_CAP_LINK1_VAL 0x2FD7F 72 + #define PCIE_CAP_SLOT_POWER_LIMIT_VAL FIELD_PREP(PCI_EXP_SLTCAP_SPLV, \ 73 + 250) 74 + #define PCIE_CAP_SLOT_POWER_LIMIT_SCALE FIELD_PREP(PCI_EXP_SLTCAP_SPLS, \ 75 + 1) 76 + #define PCIE_CAP_SLOT_VAL (PCI_EXP_SLTCAP_ABP | \ 77 + PCI_EXP_SLTCAP_PCP | \ 78 + PCI_EXP_SLTCAP_MRLSP | \ 79 + PCI_EXP_SLTCAP_AIP | \ 80 + PCI_EXP_SLTCAP_PIP | \ 81 + PCI_EXP_SLTCAP_HPS | \ 82 + PCI_EXP_SLTCAP_HPC | \ 83 + PCI_EXP_SLTCAP_EIP | \ 84 + PCIE_CAP_SLOT_POWER_LIMIT_VAL | \ 85 + PCIE_CAP_SLOT_POWER_LIMIT_SCALE) 80 86 81 87 #define PCIE20_PARF_Q2A_FLUSH 0x1AC 82 88 ··· 148 128 struct clk *master_clk; 149 129 struct clk *slave_clk; 150 130 struct clk *cfg_clk; 151 - struct clk *pipe_clk; 152 131 struct regulator_bulk_data supplies[QCOM_PCIE_2_3_2_MAX_SUPPLY]; 153 132 }; 154 133 ··· 184 165 int num_clks; 185 166 struct regulator_bulk_data supplies[2]; 186 167 struct reset_control *pci_reset; 187 - struct clk *pipe_clk; 188 - struct clk *pipe_clk_src; 189 - struct clk *phy_pipe_clk; 190 - struct clk *ref_clk_src; 168 + }; 169 + 170 + struct qcom_pcie_resources_2_9_0 { 171 + struct clk_bulk_data clks[5]; 172 + struct reset_control 
*rst; 191 173 }; 192 174 193 175 union qcom_pcie_resources { ··· 198 178 struct qcom_pcie_resources_2_3_3 v2_3_3; 199 179 struct qcom_pcie_resources_2_4_0 v2_4_0; 200 180 struct qcom_pcie_resources_2_7_0 v2_7_0; 181 + struct qcom_pcie_resources_2_9_0 v2_9_0; 201 182 }; 202 183 203 184 struct qcom_pcie; ··· 215 194 216 195 struct qcom_pcie_cfg { 217 196 const struct qcom_pcie_ops *ops; 218 - unsigned int pipe_clk_need_muxing:1; 219 197 unsigned int has_tbu_clk:1; 220 198 unsigned int has_ddrss_sf_tbu_clk:1; 221 199 unsigned int has_aggre0_clk:1; ··· 345 325 struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0; 346 326 struct dw_pcie *pci = pcie->pci; 347 327 struct device *dev = pci->dev; 348 - struct device_node *node = dev->of_node; 349 - u32 val; 350 328 int ret; 351 329 352 330 /* reset the PCIe interface as uboot can leave it undefined state */ ··· 354 336 reset_control_assert(res->por_reset); 355 337 reset_control_assert(res->ext_reset); 356 338 reset_control_assert(res->phy_reset); 357 - 358 - writel(1, pcie->parf + PCIE20_PARF_PHY_CTRL); 359 339 360 340 ret = regulator_bulk_enable(ARRAY_SIZE(res->supplies), res->supplies); 361 341 if (ret < 0) { ··· 397 381 goto err_deassert_axi; 398 382 } 399 383 400 - ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks); 401 - if (ret) 402 - goto err_clks; 384 + return 0; 385 + 386 + err_deassert_axi: 387 + reset_control_assert(res->por_reset); 388 + err_deassert_por: 389 + reset_control_assert(res->pci_reset); 390 + err_deassert_pci: 391 + reset_control_assert(res->phy_reset); 392 + err_deassert_phy: 393 + reset_control_assert(res->ext_reset); 394 + err_deassert_ext: 395 + reset_control_assert(res->ahb_reset); 396 + err_deassert_ahb: 397 + regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); 398 + 399 + return ret; 400 + } 401 + 402 + static int qcom_pcie_post_init_2_1_0(struct qcom_pcie *pcie) 403 + { 404 + struct qcom_pcie_resources_2_1_0 *res = &pcie->res.v2_1_0; 405 + struct dw_pcie *pci = 
pcie->pci; 406 + struct device *dev = pci->dev; 407 + struct device_node *node = dev->of_node; 408 + u32 val; 409 + int ret; 403 410 404 411 /* enable PCIe clocks and resets */ 405 412 val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL); 406 413 val &= ~BIT(0); 407 414 writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL); 415 + 416 + ret = clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks); 417 + if (ret) 418 + return ret; 408 419 409 420 if (of_device_is_compatible(node, "qcom,pcie-ipq8064") || 410 421 of_device_is_compatible(node, "qcom,pcie-ipq8064-v2")) { ··· 471 428 pci->dbi_base + PCIE20_AXI_MSTR_RESP_COMP_CTRL1); 472 429 473 430 return 0; 474 - 475 - err_clks: 476 - reset_control_assert(res->axi_reset); 477 - err_deassert_axi: 478 - reset_control_assert(res->por_reset); 479 - err_deassert_por: 480 - reset_control_assert(res->pci_reset); 481 - err_deassert_pci: 482 - reset_control_assert(res->phy_reset); 483 - err_deassert_phy: 484 - reset_control_assert(res->ext_reset); 485 - err_deassert_ext: 486 - reset_control_assert(res->ahb_reset); 487 - err_deassert_ahb: 488 - regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); 489 - 490 - return ret; 491 431 } 492 432 493 433 static int qcom_pcie_get_resources_1_0_0(struct qcom_pcie *pcie) ··· 558 532 goto err_slave; 559 533 } 560 534 561 - /* change DBI base address */ 562 - writel(0, pcie->parf + PCIE20_PARF_DBI_BASE_ADDR); 563 - 564 - if (IS_ENABLED(CONFIG_PCI_MSI)) { 565 - u32 val = readl(pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT); 566 - 567 - val |= BIT(31); 568 - writel(val, pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT); 569 - } 570 - 571 535 return 0; 572 536 err_slave: 573 537 clk_disable_unprepare(res->slave_bus); ··· 571 555 reset_control_assert(res->core); 572 556 573 557 return ret; 558 + } 559 + 560 + static int qcom_pcie_post_init_1_0_0(struct qcom_pcie *pcie) 561 + { 562 + /* change DBI base address */ 563 + writel(0, pcie->parf + PCIE20_PARF_DBI_BASE_ADDR); 564 + 565 + if 
(IS_ENABLED(CONFIG_PCI_MSI)) { 566 + u32 val = readl(pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT); 567 + 568 + val |= BIT(31); 569 + writel(val, pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT); 570 + } 571 + 572 + return 0; 574 573 } 575 574 576 575 static void qcom_pcie_2_3_2_ltssm_enable(struct qcom_pcie *pcie) ··· 628 597 if (IS_ERR(res->slave_clk)) 629 598 return PTR_ERR(res->slave_clk); 630 599 631 - res->pipe_clk = devm_clk_get(dev, "pipe"); 632 - return PTR_ERR_OR_ZERO(res->pipe_clk); 600 + return 0; 633 601 } 634 602 635 603 static void qcom_pcie_deinit_2_3_2(struct qcom_pcie *pcie) ··· 643 613 regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); 644 614 } 645 615 646 - static void qcom_pcie_post_deinit_2_3_2(struct qcom_pcie *pcie) 647 - { 648 - struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2; 649 - 650 - clk_disable_unprepare(res->pipe_clk); 651 - } 652 - 653 616 static int qcom_pcie_init_2_3_2(struct qcom_pcie *pcie) 654 617 { 655 618 struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2; 656 619 struct dw_pcie *pci = pcie->pci; 657 620 struct device *dev = pci->dev; 658 - u32 val; 659 621 int ret; 660 622 661 623 ret = regulator_bulk_enable(ARRAY_SIZE(res->supplies), res->supplies); ··· 680 658 goto err_slave_clk; 681 659 } 682 660 661 + return 0; 662 + 663 + err_slave_clk: 664 + clk_disable_unprepare(res->master_clk); 665 + err_master_clk: 666 + clk_disable_unprepare(res->cfg_clk); 667 + err_cfg_clk: 668 + clk_disable_unprepare(res->aux_clk); 669 + 670 + err_aux_clk: 671 + regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); 672 + 673 + return ret; 674 + } 675 + 676 + static int qcom_pcie_post_init_2_3_2(struct qcom_pcie *pcie) 677 + { 678 + u32 val; 679 + 683 680 /* enable PCIe clocks and resets */ 684 681 val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL); 685 682 val &= ~BIT(0); ··· 719 678 val = readl(pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2); 720 679 val |= BIT(31); 721 680 writel(val, pcie->parf + 
PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2); 722 - 723 - return 0; 724 - 725 - err_slave_clk: 726 - clk_disable_unprepare(res->master_clk); 727 - err_master_clk: 728 - clk_disable_unprepare(res->cfg_clk); 729 - err_cfg_clk: 730 - clk_disable_unprepare(res->aux_clk); 731 - 732 - err_aux_clk: 733 - regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); 734 - 735 - return ret; 736 - } 737 - 738 - static int qcom_pcie_post_init_2_3_2(struct qcom_pcie *pcie) 739 - { 740 - struct qcom_pcie_resources_2_3_2 *res = &pcie->res.v2_3_2; 741 - struct dw_pcie *pci = pcie->pci; 742 - struct device *dev = pci->dev; 743 - int ret; 744 - 745 - ret = clk_prepare_enable(res->pipe_clk); 746 - if (ret) { 747 - dev_err(dev, "cannot prepare/enable pipe clock\n"); 748 - return ret; 749 - } 750 681 751 682 return 0; 752 683 } ··· 827 814 struct qcom_pcie_resources_2_4_0 *res = &pcie->res.v2_4_0; 828 815 struct dw_pcie *pci = pcie->pci; 829 816 struct device *dev = pci->dev; 830 - u32 val; 831 817 int ret; 832 818 833 819 ret = reset_control_assert(res->axi_m_reset); ··· 951 939 if (ret) 952 940 goto err_clks; 953 941 942 + return 0; 943 + 944 + err_clks: 945 + reset_control_assert(res->ahb_reset); 946 + err_rst_ahb: 947 + reset_control_assert(res->pwr_reset); 948 + err_rst_pwr: 949 + reset_control_assert(res->axi_s_reset); 950 + err_rst_axi_s: 951 + reset_control_assert(res->axi_m_sticky_reset); 952 + err_rst_axi_m_sticky: 953 + reset_control_assert(res->axi_m_reset); 954 + err_rst_axi_m: 955 + reset_control_assert(res->pipe_sticky_reset); 956 + err_rst_pipe_sticky: 957 + reset_control_assert(res->pipe_reset); 958 + err_rst_pipe: 959 + reset_control_assert(res->phy_reset); 960 + err_rst_phy: 961 + reset_control_assert(res->phy_ahb_reset); 962 + return ret; 963 + } 964 + 965 + static int qcom_pcie_post_init_2_4_0(struct qcom_pcie *pcie) 966 + { 967 + u32 val; 968 + 954 969 /* enable PCIe clocks and resets */ 955 970 val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL); 956 971 val &= ~BIT(0); 
··· 1000 961 writel(val, pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT_V2); 1001 962 1002 963 return 0; 1003 - 1004 - err_clks: 1005 - reset_control_assert(res->ahb_reset); 1006 - err_rst_ahb: 1007 - reset_control_assert(res->pwr_reset); 1008 - err_rst_pwr: 1009 - reset_control_assert(res->axi_s_reset); 1010 - err_rst_axi_s: 1011 - reset_control_assert(res->axi_m_sticky_reset); 1012 - err_rst_axi_m_sticky: 1013 - reset_control_assert(res->axi_m_reset); 1014 - err_rst_axi_m: 1015 - reset_control_assert(res->pipe_sticky_reset); 1016 - err_rst_pipe_sticky: 1017 - reset_control_assert(res->pipe_reset); 1018 - err_rst_pipe: 1019 - reset_control_assert(res->phy_reset); 1020 - err_rst_phy: 1021 - reset_control_assert(res->phy_ahb_reset); 1022 - return ret; 1023 964 } 1024 965 1025 966 static int qcom_pcie_get_resources_2_3_3(struct qcom_pcie *pcie) ··· 1057 1038 struct qcom_pcie_resources_2_3_3 *res = &pcie->res.v2_3_3; 1058 1039 struct dw_pcie *pci = pcie->pci; 1059 1040 struct device *dev = pci->dev; 1060 - u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 1061 1041 int i, ret; 1062 - u32 val; 1063 1042 1064 1043 for (i = 0; i < ARRAY_SIZE(res->rst); i++) { 1065 1044 ret = reset_control_assert(res->rst[i]); ··· 1114 1097 goto err_clk_aux; 1115 1098 } 1116 1099 1117 - writel(SLV_ADDR_SPACE_SZ, 1118 - pcie->parf + PCIE20_v3_PARF_SLV_ADDR_SPACE_SIZE); 1119 - 1120 - val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL); 1121 - val &= ~BIT(0); 1122 - writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL); 1123 - 1124 - writel(0, pcie->parf + PCIE20_PARF_DBI_BASE_ADDR); 1125 - 1126 - writel(MST_WAKEUP_EN | SLV_WAKEUP_EN | MSTR_ACLK_CGC_DIS 1127 - | SLV_ACLK_CGC_DIS | CORE_CLK_CGC_DIS | 1128 - AUX_PWR_DET | L23_CLK_RMV_DIS | L1_CLK_RMV_DIS, 1129 - pcie->parf + PCIE20_PARF_SYS_CTRL); 1130 - writel(0, pcie->parf + PCIE20_PARF_Q2A_FLUSH); 1131 - 1132 - writel(PCI_COMMAND_MASTER, pci->dbi_base + PCI_COMMAND); 1133 - writel(DBI_RO_WR_EN, pci->dbi_base + PCIE20_MISC_CONTROL_1_REG); 1134 
- writel(PCIE_CAP_LINK1_VAL, pci->dbi_base + offset + PCI_EXP_SLTCAP); 1135 - 1136 - val = readl(pci->dbi_base + offset + PCI_EXP_LNKCAP); 1137 - val &= ~PCI_EXP_LNKCAP_ASPMS; 1138 - writel(val, pci->dbi_base + offset + PCI_EXP_LNKCAP); 1139 - 1140 - writel(PCI_EXP_DEVCTL2_COMP_TMOUT_DIS, pci->dbi_base + offset + 1141 - PCI_EXP_DEVCTL2); 1142 - 1143 1100 return 0; 1144 1101 1145 1102 err_clk_aux: ··· 1133 1142 reset_control_assert(res->rst[i]); 1134 1143 1135 1144 return ret; 1145 + } 1146 + 1147 + static int qcom_pcie_post_init_2_3_3(struct qcom_pcie *pcie) 1148 + { 1149 + struct dw_pcie *pci = pcie->pci; 1150 + u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 1151 + u32 val; 1152 + 1153 + writel(SLV_ADDR_SPACE_SZ, 1154 + pcie->parf + PCIE20_v3_PARF_SLV_ADDR_SPACE_SIZE); 1155 + 1156 + val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL); 1157 + val &= ~BIT(0); 1158 + writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL); 1159 + 1160 + writel(0, pcie->parf + PCIE20_PARF_DBI_BASE_ADDR); 1161 + 1162 + writel(MST_WAKEUP_EN | SLV_WAKEUP_EN | MSTR_ACLK_CGC_DIS 1163 + | SLV_ACLK_CGC_DIS | CORE_CLK_CGC_DIS | 1164 + AUX_PWR_DET | L23_CLK_RMV_DIS | L1_CLK_RMV_DIS, 1165 + pcie->parf + PCIE20_PARF_SYS_CTRL); 1166 + writel(0, pcie->parf + PCIE20_PARF_Q2A_FLUSH); 1167 + 1168 + writel(PCI_COMMAND_MASTER, pci->dbi_base + PCI_COMMAND); 1169 + writel(DBI_RO_WR_EN, pci->dbi_base + PCIE20_MISC_CONTROL_1_REG); 1170 + writel(PCIE_CAP_SLOT_VAL, pci->dbi_base + offset + PCI_EXP_SLTCAP); 1171 + 1172 + val = readl(pci->dbi_base + offset + PCI_EXP_LNKCAP); 1173 + val &= ~PCI_EXP_LNKCAP_ASPMS; 1174 + writel(val, pci->dbi_base + offset + PCI_EXP_LNKCAP); 1175 + 1176 + writel(PCI_EXP_DEVCTL2_COMP_TMOUT_DIS, pci->dbi_base + offset + 1177 + PCI_EXP_DEVCTL2); 1178 + 1179 + return 0; 1136 1180 } 1137 1181 1138 1182 static int qcom_pcie_get_resources_2_7_0(struct qcom_pcie *pcie) ··· 1210 1184 if (ret < 0) 1211 1185 return ret; 1212 1186 1213 - if (pcie->cfg->pipe_clk_need_muxing) { 1214 - 
res->pipe_clk_src = devm_clk_get(dev, "pipe_mux"); 1215 - if (IS_ERR(res->pipe_clk_src)) 1216 - return PTR_ERR(res->pipe_clk_src); 1217 - 1218 - res->phy_pipe_clk = devm_clk_get(dev, "phy_pipe"); 1219 - if (IS_ERR(res->phy_pipe_clk)) 1220 - return PTR_ERR(res->phy_pipe_clk); 1221 - 1222 - res->ref_clk_src = devm_clk_get(dev, "ref"); 1223 - if (IS_ERR(res->ref_clk_src)) 1224 - return PTR_ERR(res->ref_clk_src); 1225 - } 1226 - 1227 - res->pipe_clk = devm_clk_get(dev, "pipe"); 1228 - return PTR_ERR_OR_ZERO(res->pipe_clk); 1187 + return 0; 1229 1188 } 1230 1189 1231 1190 static int qcom_pcie_init_2_7_0(struct qcom_pcie *pcie) ··· 1226 1215 dev_err(dev, "cannot enable regulators\n"); 1227 1216 return ret; 1228 1217 } 1229 - 1230 - /* Set TCXO as clock source for pcie_pipe_clk_src */ 1231 - if (pcie->cfg->pipe_clk_need_muxing) 1232 - clk_set_parent(res->pipe_clk_src, res->ref_clk_src); 1233 1218 1234 1219 ret = clk_bulk_prepare_enable(res->num_clks, res->clks); 1235 1220 if (ret < 0) ··· 1268 1261 val |= BIT(4); 1269 1262 writel(val, pcie->parf + PCIE20_PARF_MHI_CLOCK_RESET_CTRL); 1270 1263 1264 + /* Enable L1 and L1SS */ 1265 + val = readl(pcie->parf + PCIE20_PARF_PM_CTRL); 1266 + val &= ~REQ_NOT_ENTR_L1; 1267 + writel(val, pcie->parf + PCIE20_PARF_PM_CTRL); 1268 + 1271 1269 if (IS_ENABLED(CONFIG_PCI_MSI)) { 1272 1270 val = readl(pcie->parf + PCIE20_PARF_AXI_MSTR_WR_ADDR_HALT); 1273 1271 val |= BIT(31); ··· 1293 1281 struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0; 1294 1282 1295 1283 clk_bulk_disable_unprepare(res->num_clks, res->clks); 1284 + 1296 1285 regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies); 1297 1286 } 1298 1287 1299 - static int qcom_pcie_post_init_2_7_0(struct qcom_pcie *pcie) 1288 + static int qcom_pcie_get_resources_2_9_0(struct qcom_pcie *pcie) 1300 1289 { 1301 - struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0; 1290 + struct qcom_pcie_resources_2_9_0 *res = &pcie->res.v2_9_0; 1291 + struct dw_pcie *pci = pcie->pci; 
+	struct device *dev = pci->dev;
+	int ret;
 
-	/* Set pipe clock as clock source for pcie_pipe_clk_src */
-	if (pcie->cfg->pipe_clk_need_muxing)
-		clk_set_parent(res->pipe_clk_src, res->phy_pipe_clk);
+	res->clks[0].id = "iface";
+	res->clks[1].id = "axi_m";
+	res->clks[2].id = "axi_s";
+	res->clks[3].id = "axi_bridge";
+	res->clks[4].id = "rchng";
 
-	return clk_prepare_enable(res->pipe_clk);
+	ret = devm_clk_bulk_get(dev, ARRAY_SIZE(res->clks), res->clks);
+	if (ret < 0)
+		return ret;
+
+	res->rst = devm_reset_control_array_get_exclusive(dev);
+	if (IS_ERR(res->rst))
+		return PTR_ERR(res->rst);
+
+	return 0;
 }
 
-static void qcom_pcie_post_deinit_2_7_0(struct qcom_pcie *pcie)
+static void qcom_pcie_deinit_2_9_0(struct qcom_pcie *pcie)
 {
-	struct qcom_pcie_resources_2_7_0 *res = &pcie->res.v2_7_0;
+	struct qcom_pcie_resources_2_9_0 *res = &pcie->res.v2_9_0;
 
-	clk_disable_unprepare(res->pipe_clk);
+	clk_bulk_disable_unprepare(ARRAY_SIZE(res->clks), res->clks);
+}
+
+static int qcom_pcie_init_2_9_0(struct qcom_pcie *pcie)
+{
+	struct qcom_pcie_resources_2_9_0 *res = &pcie->res.v2_9_0;
+	struct device *dev = pcie->pci->dev;
+	int ret;
+
+	ret = reset_control_assert(res->rst);
+	if (ret) {
+		dev_err(dev, "reset assert failed (%d)\n", ret);
+		return ret;
+	}
+
+	/*
+	 * Delay periods before and after reset deassert are working values
+	 * from downstream Codeaurora kernel
+	 */
+	usleep_range(2000, 2500);
+
+	ret = reset_control_deassert(res->rst);
+	if (ret) {
+		dev_err(dev, "reset deassert failed (%d)\n", ret);
+		return ret;
+	}
+
+	usleep_range(2000, 2500);
+
+	return clk_bulk_prepare_enable(ARRAY_SIZE(res->clks), res->clks);
+}
+
+static int qcom_pcie_post_init_2_9_0(struct qcom_pcie *pcie)
+{
+	struct dw_pcie *pci = pcie->pci;
+	u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+	u32 val;
+	int i;
+
+	writel(SLV_ADDR_SPACE_SZ,
+		pcie->parf + PCIE20_v3_PARF_SLV_ADDR_SPACE_SIZE);
+
+	val = readl(pcie->parf + PCIE20_PARF_PHY_CTRL);
+	val &= ~BIT(0);
+	writel(val, pcie->parf + PCIE20_PARF_PHY_CTRL);
+
+	writel(0, pcie->parf + PCIE20_PARF_DBI_BASE_ADDR);
+
+	writel(DEVICE_TYPE_RC, pcie->parf + PCIE20_PARF_DEVICE_TYPE);
+	writel(BYPASS | MSTR_AXI_CLK_EN | AHB_CLK_EN,
+		pcie->parf + PCIE20_PARF_MHI_CLOCK_RESET_CTRL);
+	writel(GEN3_RELATED_OFF_RXEQ_RGRDLESS_RXTS |
+		GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL,
+		pci->dbi_base + GEN3_RELATED_OFF);
+
+	writel(MST_WAKEUP_EN | SLV_WAKEUP_EN | MSTR_ACLK_CGC_DIS |
+		SLV_ACLK_CGC_DIS | CORE_CLK_CGC_DIS |
+		AUX_PWR_DET | L23_CLK_RMV_DIS | L1_CLK_RMV_DIS,
+		pcie->parf + PCIE20_PARF_SYS_CTRL);
+
+	writel(0, pcie->parf + PCIE20_PARF_Q2A_FLUSH);
+
+	dw_pcie_dbi_ro_wr_en(pci);
+	writel(PCIE_CAP_SLOT_VAL, pci->dbi_base + offset + PCI_EXP_SLTCAP);
+
+	val = readl(pci->dbi_base + offset + PCI_EXP_LNKCAP);
+	val &= ~PCI_EXP_LNKCAP_ASPMS;
+	writel(val, pci->dbi_base + offset + PCI_EXP_LNKCAP);
+
+	writel(PCI_EXP_DEVCTL2_COMP_TMOUT_DIS, pci->dbi_base + offset +
+		PCI_EXP_DEVCTL2);
+
+	for (i = 0; i < 256; i++)
+		writel(0, pcie->parf + PCIE20_PARF_BDF_TO_SID_TABLE_N + (4 * i));
+
+	return 0;
 }
 
 static int qcom_pcie_link_up(struct dw_pcie *pci)
···
 	return 0;
 }
 
-static int qcom_pcie_host_init(struct pcie_port *pp)
+static int qcom_pcie_host_init(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct qcom_pcie *pcie = to_qcom_pcie(pci);
···
 static const struct qcom_pcie_ops ops_2_1_0 = {
 	.get_resources = qcom_pcie_get_resources_2_1_0,
 	.init = qcom_pcie_init_2_1_0,
+	.post_init = qcom_pcie_post_init_2_1_0,
 	.deinit = qcom_pcie_deinit_2_1_0,
 	.ltssm_enable = qcom_pcie_2_1_0_ltssm_enable,
 };
···
 static const struct qcom_pcie_ops ops_1_0_0 = {
 	.get_resources = qcom_pcie_get_resources_1_0_0,
 	.init = qcom_pcie_init_1_0_0,
+	.post_init = qcom_pcie_post_init_1_0_0,
 	.deinit = qcom_pcie_deinit_1_0_0,
 	.ltssm_enable = qcom_pcie_2_1_0_ltssm_enable,
 };
···
 	.init = qcom_pcie_init_2_3_2,
 	.post_init = qcom_pcie_post_init_2_3_2,
 	.deinit = qcom_pcie_deinit_2_3_2,
-	.post_deinit = qcom_pcie_post_deinit_2_3_2,
 	.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
 };
···
 static const struct qcom_pcie_ops ops_2_4_0 = {
 	.get_resources = qcom_pcie_get_resources_2_4_0,
 	.init = qcom_pcie_init_2_4_0,
+	.post_init = qcom_pcie_post_init_2_4_0,
 	.deinit = qcom_pcie_deinit_2_4_0,
 	.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
 };
···
 static const struct qcom_pcie_ops ops_2_3_3 = {
 	.get_resources = qcom_pcie_get_resources_2_3_3,
 	.init = qcom_pcie_init_2_3_3,
+	.post_init = qcom_pcie_post_init_2_3_3,
 	.deinit = qcom_pcie_deinit_2_3_3,
 	.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
 };
···
 	.init = qcom_pcie_init_2_7_0,
 	.deinit = qcom_pcie_deinit_2_7_0,
 	.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
-	.post_init = qcom_pcie_post_init_2_7_0,
-	.post_deinit = qcom_pcie_post_deinit_2_7_0,
 };
 
 /* Qcom IP rev.: 1.9.0 */
···
 	.init = qcom_pcie_init_2_7_0,
 	.deinit = qcom_pcie_deinit_2_7_0,
 	.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
-	.post_init = qcom_pcie_post_init_2_7_0,
-	.post_deinit = qcom_pcie_post_deinit_2_7_0,
 	.config_sid = qcom_pcie_config_sid_sm8250,
+};
+
+/* Qcom IP rev.: 2.9.0  Synopsys IP rev.: 5.00a */
+static const struct qcom_pcie_ops ops_2_9_0 = {
+	.get_resources = qcom_pcie_get_resources_2_9_0,
+	.init = qcom_pcie_init_2_9_0,
+	.post_init = qcom_pcie_post_init_2_9_0,
+	.deinit = qcom_pcie_deinit_2_9_0,
+	.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
 };
 
 static const struct qcom_pcie_cfg apq8084_cfg = {
···
 static const struct qcom_pcie_cfg sm8450_pcie0_cfg = {
 	.ops = &ops_1_9_0,
 	.has_ddrss_sf_tbu_clk = true,
-	.pipe_clk_need_muxing = true,
 	.has_aggre0_clk = true,
 	.has_aggre1_clk = true,
 };
···
 static const struct qcom_pcie_cfg sm8450_pcie1_cfg = {
 	.ops = &ops_1_9_0,
 	.has_ddrss_sf_tbu_clk = true,
-	.pipe_clk_need_muxing = true,
 	.has_aggre1_clk = true,
 };
 
 static const struct qcom_pcie_cfg sc7280_cfg = {
 	.ops = &ops_1_9_0,
 	.has_tbu_clk = true,
-	.pipe_clk_need_muxing = true,
 };
 
 static const struct qcom_pcie_cfg sc8180x_cfg = {
 	.ops = &ops_1_9_0,
 	.has_tbu_clk = true,
+};
+
+static const struct qcom_pcie_cfg ipq6018_cfg = {
+	.ops = &ops_2_9_0,
 };
 
 static const struct dw_pcie_ops dw_pcie_ops = {
···
 static int qcom_pcie_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
-	struct pcie_port *pp;
+	struct dw_pcie_rp *pp;
 	struct dw_pcie *pci;
 	struct qcom_pcie *pcie;
 	const struct qcom_pcie_cfg *pcie_cfg;
···
 	{ .compatible = "qcom,pcie-sm8450-pcie0", .data = &sm8450_pcie0_cfg },
 	{ .compatible = "qcom,pcie-sm8450-pcie1", .data = &sm8450_pcie1_cfg },
 	{ .compatible = "qcom,pcie-sc7280", .data = &sc7280_cfg },
 	{ .compatible = "qcom,pcie-ipq6018", .data = &ipq6018_cfg },
 	{ }
 };
drivers/pci/controller/dwc/pcie-spear13xx.c (+5 -5)
···
 	struct spear13xx_pcie *spear13xx_pcie = arg;
 	struct pcie_app_reg __iomem *app_reg = spear13xx_pcie->app_base;
 	struct dw_pcie *pci = spear13xx_pcie->pci;
-	struct pcie_port *pp = &pci->pp;
+	struct dw_pcie_rp *pp = &pci->pp;
 	unsigned int status;
 
 	status = readl(&app_reg->int_sts);
···
 	return 0;
 }
 
-static int spear13xx_pcie_host_init(struct pcie_port *pp)
+static int spear13xx_pcie_host_init(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct spear13xx_pcie *spear13xx_pcie = to_spear13xx_pcie(pci);
···
 				   struct platform_device *pdev)
 {
 	struct dw_pcie *pci = spear13xx_pcie->pci;
-	struct pcie_port *pp = &pci->pp;
+	struct dw_pcie_rp *pp = &pci->pp;
 	struct device *dev = &pdev->dev;
 	int ret;
···
 	}
 
 	pp->ops = &spear13xx_pcie_host_ops;
-	pp->msi_irq = -ENODEV;
+	pp->msi_irq[0] = -ENODEV;
 
 	ret = dw_pcie_host_init(pp);
 	if (ret) {
···
 	.probe = spear13xx_pcie_probe,
 	.driver = {
 		.name = "spear-pcie",
-		.of_match_table = of_match_ptr(spear13xx_pcie_of_match),
+		.of_match_table = spear13xx_pcie_of_match,
 		.suppress_bind_attrs = true,
 	},
 };
drivers/pci/controller/dwc/pcie-tegra194-acpi.c (+4 -3)
···
 static void atu_reg_write(struct tegra194_pcie_ecam *pcie_ecam, int index,
 			  u32 val, u32 reg)
 {
-	u32 offset = PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(index);
+	u32 offset = PCIE_ATU_UNROLL_BASE(PCIE_ATU_REGION_DIR_OB, index) +
+		     PCIE_ATU_VIEWPORT_BASE;
 
 	writel(val, pcie_ecam->iatu_base + offset + reg);
 }
···
 		      PCIE_ATU_LIMIT);
 	atu_reg_write(pcie_ecam, index, upper_32_bits(pci_addr),
 		      PCIE_ATU_UPPER_TARGET);
-	atu_reg_write(pcie_ecam, index, type, PCIE_ATU_CR1);
-	atu_reg_write(pcie_ecam, index, PCIE_ATU_ENABLE, PCIE_ATU_CR2);
+	atu_reg_write(pcie_ecam, index, type, PCIE_ATU_REGION_CTRL1);
+	atu_reg_write(pcie_ecam, index, PCIE_ATU_ENABLE, PCIE_ATU_REGION_CTRL2);
 }
 
 static void __iomem *tegra194_map_bus(struct pci_bus *bus,
drivers/pci/controller/dwc/pcie-tegra194.c (+433 -251)
···
 // SPDX-License-Identifier: GPL-2.0+
 /*
- * PCIe host controller driver for Tegra194 SoC
+ * PCIe host controller driver for the following SoCs
+ * Tegra194
+ * Tegra234
  *
- * Copyright (C) 2019 NVIDIA Corporation.
+ * Copyright (C) 2019-2022 NVIDIA Corporation.
  *
  * Author: Vidya Sagar <vidyas@nvidia.com>
  */
···
 #include <soc/tegra/bpmp-abi.h>
 #include "../../pci.h"
 
+#define TEGRA194_DWC_IP_VER	0x490A
+#define TEGRA234_DWC_IP_VER	0x562A
+
 #define APPL_PINMUX				0x0
 #define APPL_PINMUX_PEX_RST			BIT(0)
 #define APPL_PINMUX_CLKREQ_OVERRIDE_EN		BIT(2)
···
 #define APPL_CTRL_HW_HOT_RST_MODE_MASK		GENMASK(1, 0)
 #define APPL_CTRL_HW_HOT_RST_MODE_SHIFT		22
 #define APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST	0x1
+#define APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST_LTSSM_EN	0x2
 
 #define APPL_INTR_EN_L0_0			0x8
 #define APPL_INTR_EN_L0_0_LINK_STATE_INT_EN	BIT(0)
···
 #define CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF	0x718
 #define CFG_TIMER_CTRL_ACK_NAK_SHIFT	(19)
 
-#define EVENT_COUNTER_ALL_CLEAR		0x3
-#define EVENT_COUNTER_ENABLE_ALL	0x7
-#define EVENT_COUNTER_ENABLE_SHIFT	2
-#define EVENT_COUNTER_EVENT_SEL_MASK	GENMASK(7, 0)
-#define EVENT_COUNTER_EVENT_SEL_SHIFT	16
-#define EVENT_COUNTER_EVENT_Tx_L0S	0x2
-#define EVENT_COUNTER_EVENT_Rx_L0S	0x3
-#define EVENT_COUNTER_EVENT_L1		0x5
-#define EVENT_COUNTER_EVENT_L1_1	0x7
-#define EVENT_COUNTER_EVENT_L1_2	0x8
-#define EVENT_COUNTER_GROUP_SEL_SHIFT	24
-#define EVENT_COUNTER_GROUP_5		0x5
-
 #define N_FTS_VAL	52
 #define FTS_VAL		52
···
 #define GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT	8
 #define GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK	GENMASK(23, 8)
 #define GEN3_EQ_CONTROL_OFF_FB_MODE_MASK	GENMASK(3, 0)
-
-#define GEN3_RELATED_OFF	0x890
-#define GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL	BIT(0)
-#define GEN3_RELATED_OFF_GEN3_EQ_DISABLE	BIT(16)
-#define GEN3_RELATED_OFF_RATE_SHADOW_SEL_SHIFT	24
-#define GEN3_RELATED_OFF_RATE_SHADOW_SEL_MASK	GENMASK(25, 24)
 
 #define PORT_LOGIC_AMBA_ERROR_RESPONSE_DEFAULT	0x8D0
 #define AMBA_ERROR_RESPONSE_CRS_SHIFT		3
···
 	GEN4_CORE_CLK_FREQ
 };
 
-struct tegra194_pcie {
+struct tegra_pcie_dw_of_data {
+	u32 version;
+	enum dw_pcie_device_mode mode;
+	bool has_msix_doorbell_access_fix;
+	bool has_sbr_reset_fix;
+	bool has_l1ss_exit_fix;
+	bool has_ltr_req_fix;
+	u32 cdm_chk_int_en_bit;
+	u32 gen4_preset_vec;
+	u8 n_fts[2];
+};
+
+struct tegra_pcie_dw {
 	struct device *dev;
 	struct resource *appl_res;
 	struct resource *dbi_res;
···
 	struct dw_pcie pci;
 	struct tegra_bpmp *bpmp;
 
-	enum dw_pcie_device_mode mode;
+	struct tegra_pcie_dw_of_data *of_data;
 
 	bool supports_clkreq;
 	bool enable_cdm_check;
+	bool enable_srns;
 	bool link_state;
 	bool update_fc_fixup;
+	bool enable_ext_refclk;
 	u8 init_link_width;
 	u32 msi_ctrl_int;
 	u32 num_lanes;
 	u32 cid;
 	u32 cfg_link_cap_l1sub;
+	u32 ras_des_cap;
 	u32 pcie_cap_base;
 	u32 aspm_cmrt;
 	u32 aspm_pwr_on_t;
···
 	int ep_state;
 };
 
-struct tegra194_pcie_of_data {
-	enum dw_pcie_device_mode mode;
-};
-
-static inline struct tegra194_pcie *to_tegra_pcie(struct dw_pcie *pci)
+static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci)
 {
-	return container_of(pci, struct tegra194_pcie, pci);
+	return container_of(pci, struct tegra_pcie_dw, pci);
 }
 
-static inline void appl_writel(struct tegra194_pcie *pcie, const u32 value,
+static inline void appl_writel(struct tegra_pcie_dw *pcie, const u32 value,
 			       const u32 reg)
 {
 	writel_relaxed(value, pcie->appl_base + reg);
 }
 
-static inline u32 appl_readl(struct tegra194_pcie *pcie, const u32 reg)
+static inline u32 appl_readl(struct tegra_pcie_dw *pcie, const u32 reg)
 {
 	return readl_relaxed(pcie->appl_base + reg);
 }
···
 	enum dw_pcie_device_mode mode;
 };
 
-static void apply_bad_link_workaround(struct pcie_port *pp)
+static void apply_bad_link_workaround(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct tegra194_pcie *pcie = to_tegra_pcie(pci);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
 	u32 current_link_width;
 	u16 val;
 
···
 
 static irqreturn_t tegra_pcie_rp_irq_handler(int irq, void *arg)
 {
-	struct tegra194_pcie *pcie = arg;
+	struct tegra_pcie_dw *pcie = arg;
 	struct dw_pcie *pci = &pcie->pci;
-	struct pcie_port *pp = &pci->pp;
-	u32 val, tmp;
+	struct dw_pcie_rp *pp = &pci->pp;
+	u32 val, status_l0, status_l1;
 	u16 val_w;
 
-	val = appl_readl(pcie, APPL_INTR_STATUS_L0);
-	if (val & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
-		val = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
-		if (val & APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED) {
-			appl_writel(pcie, val, APPL_INTR_STATUS_L1_0_0);
-
+	status_l0 = appl_readl(pcie, APPL_INTR_STATUS_L0);
+	if (status_l0 & APPL_INTR_STATUS_L0_LINK_STATE_INT) {
+		status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_0_0);
+		appl_writel(pcie, status_l1, APPL_INTR_STATUS_L1_0_0);
+		if (!pcie->of_data->has_sbr_reset_fix &&
+		    status_l1 & APPL_INTR_STATUS_L1_0_0_LINK_REQ_RST_NOT_CHGED) {
 			/* SBR & Surprise Link Down WAR */
 			val = appl_readl(pcie, APPL_CAR_RESET_OVRD);
 			val &= ~APPL_CAR_RESET_OVRD_CYA_OVERRIDE_CORE_RST_N;
···
 		}
 	}
 
-	if (val & APPL_INTR_STATUS_L0_INT_INT) {
-		val = appl_readl(pcie, APPL_INTR_STATUS_L1_8_0);
-		if (val & APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS) {
+	if (status_l0 & APPL_INTR_STATUS_L0_INT_INT) {
+		status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_8_0);
+		if (status_l1 & APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS) {
 			appl_writel(pcie,
 				    APPL_INTR_STATUS_L1_8_0_AUTO_BW_INT_STS,
 				    APPL_INTR_STATUS_L1_8_0);
 			apply_bad_link_workaround(pp);
 		}
-		if (val & APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS) {
+		if (status_l1 & APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS) {
+			val_w = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
+						  PCI_EXP_LNKSTA);
+			val_w |= PCI_EXP_LNKSTA_LBMS;
+			dw_pcie_writew_dbi(pci, pcie->pcie_cap_base +
+					   PCI_EXP_LNKSTA, val_w);
+
 			appl_writel(pcie,
 				    APPL_INTR_STATUS_L1_8_0_BW_MGT_INT_STS,
 				    APPL_INTR_STATUS_L1_8_0);
···
 		}
 	}
 
-	val = appl_readl(pcie, APPL_INTR_STATUS_L0);
-	if (val & APPL_INTR_STATUS_L0_CDM_REG_CHK_INT) {
-		val = appl_readl(pcie, APPL_INTR_STATUS_L1_18);
-		tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
-		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT) {
+	if (status_l0 & APPL_INTR_STATUS_L0_CDM_REG_CHK_INT) {
+		status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_18);
+		val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS);
+		if (status_l1 & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMPLT) {
 			dev_info(pci->dev, "CDM check complete\n");
-			tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPLETE;
+			val |= PCIE_PL_CHK_REG_CHK_REG_COMPLETE;
 		}
-		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR) {
+		if (status_l1 & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_CMP_ERR) {
 			dev_err(pci->dev, "CDM comparison mismatch\n");
-			tmp |= PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR;
+			val |= PCIE_PL_CHK_REG_CHK_REG_COMPARISON_ERROR;
 		}
-		if (val & APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR) {
+		if (status_l1 &
+		    APPL_INTR_STATUS_L1_18_CDM_REG_CHK_LOGIC_ERR) {
 			dev_err(pci->dev, "CDM Logic error\n");
-			tmp |= PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR;
+			val |= PCIE_PL_CHK_REG_CHK_REG_LOGIC_ERROR;
 		}
-		dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, tmp);
-		tmp = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_ERR_ADDR);
-		dev_err(pci->dev, "CDM Error Address Offset = 0x%08X\n", tmp);
+		dw_pcie_writel_dbi(pci, PCIE_PL_CHK_REG_CONTROL_STATUS, val);
+		val = dw_pcie_readl_dbi(pci, PCIE_PL_CHK_REG_ERR_ADDR);
+		dev_err(pci->dev, "CDM Error Address Offset = 0x%08X\n", val);
 	}
 
 	return IRQ_HANDLED;
 }
 
-static void pex_ep_event_hot_rst_done(struct tegra194_pcie *pcie)
+static void pex_ep_event_hot_rst_done(struct tegra_pcie_dw *pcie)
 {
 	u32 val;
 
···
 
 static irqreturn_t tegra_pcie_ep_irq_thread(int irq, void *arg)
 {
-	struct tegra194_pcie *pcie = arg;
+	struct tegra_pcie_dw *pcie = arg;
 	struct dw_pcie *pci = &pcie->pci;
 	u32 val, speed;
 
 	speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) &
 		PCI_EXP_LNKSTA_CLS;
 	clk_set_rate(pcie->core_clk, pcie_gen_freq[speed - 1]);
+
+	if (pcie->of_data->has_ltr_req_fix)
+		return IRQ_HANDLED;
 
 	/* If EP doesn't advertise L1SS, just return */
 	val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub);
···
 
 static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg)
 {
-	struct tegra194_pcie *pcie = arg;
+	struct tegra_pcie_dw *pcie = arg;
 	struct dw_pcie_ep *ep = &pcie->pci.ep;
 	int spurious = 1;
 	u32 status_l0, status_l1, link_status;
···
 	return IRQ_HANDLED;
 }
 
-static int tegra194_pcie_rd_own_conf(struct pci_bus *bus, u32 devfn, int where,
+static int tegra_pcie_dw_rd_own_conf(struct pci_bus *bus, u32 devfn, int where,
 				     int size, u32 *val)
 {
+	struct dw_pcie_rp *pp = bus->sysdata;
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
+
 	/*
 	 * This is an endpoint mode specific register happen to appear even
 	 * when controller is operating in root port mode and system hangs
 	 * when it is accessed with link being in ASPM-L1 state.
 	 * So skip accessing it altogether
 	 */
-	if (!PCI_SLOT(devfn) && where == PORT_LOGIC_MSIX_DOORBELL) {
+	if (!pcie->of_data->has_msix_doorbell_access_fix &&
+	    !PCI_SLOT(devfn) && where == PORT_LOGIC_MSIX_DOORBELL) {
 		*val = 0x00000000;
 		return PCIBIOS_SUCCESSFUL;
 	}
···
 	return pci_generic_config_read(bus, devfn, where, size, val);
 }
 
-static int tegra194_pcie_wr_own_conf(struct pci_bus *bus, u32 devfn, int where,
+static int tegra_pcie_dw_wr_own_conf(struct pci_bus *bus, u32 devfn, int where,
 				     int size, u32 val)
 {
+	struct dw_pcie_rp *pp = bus->sysdata;
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
+
 	/*
 	 * This is an endpoint mode specific register happen to appear even
 	 * when controller is operating in root port mode and system hangs
 	 * when it is accessed with link being in ASPM-L1 state.
 	 * So skip accessing it altogether
 	 */
-	if (!PCI_SLOT(devfn) && where == PORT_LOGIC_MSIX_DOORBELL)
+	if (!pcie->of_data->has_msix_doorbell_access_fix &&
+	    !PCI_SLOT(devfn) && where == PORT_LOGIC_MSIX_DOORBELL)
 		return PCIBIOS_SUCCESSFUL;
 
 	return pci_generic_config_write(bus, devfn, where, size, val);
···
 
 static struct pci_ops tegra_pci_ops = {
 	.map_bus = dw_pcie_own_conf_map_bus,
-	.read = tegra194_pcie_rd_own_conf,
-	.write = tegra194_pcie_wr_own_conf,
+	.read = tegra_pcie_dw_rd_own_conf,
+	.write = tegra_pcie_dw_wr_own_conf,
 };
 
 #if defined(CONFIG_PCIEASPM)
-static const u32 event_cntr_ctrl_offset[] = {
-	0x1d8,
-	0x1a8,
-	0x1a8,
-	0x1a8,
-	0x1c4,
-	0x1d8
-};
-
-static const u32 event_cntr_data_offset[] = {
-	0x1dc,
-	0x1ac,
-	0x1ac,
-	0x1ac,
-	0x1c8,
-	0x1dc
-};
-
-static void disable_aspm_l11(struct tegra194_pcie *pcie)
+static void disable_aspm_l11(struct tegra_pcie_dw *pcie)
 {
 	u32 val;
 
···
 	dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
 }
 
-static void disable_aspm_l12(struct tegra194_pcie *pcie)
+static void disable_aspm_l12(struct tegra_pcie_dw *pcie)
 {
 	u32 val;
 
···
 	dw_pcie_writel_dbi(&pcie->pci, pcie->cfg_link_cap_l1sub, val);
 }
 
-static inline u32 event_counter_prog(struct tegra194_pcie *pcie, u32 event)
+static inline u32 event_counter_prog(struct tegra_pcie_dw *pcie, u32 event)
 {
 	u32 val;
 
-	val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid]);
+	val = dw_pcie_readl_dbi(&pcie->pci, pcie->ras_des_cap +
+				PCIE_RAS_DES_EVENT_COUNTER_CONTROL);
 	val &= ~(EVENT_COUNTER_EVENT_SEL_MASK << EVENT_COUNTER_EVENT_SEL_SHIFT);
 	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
 	val |= event << EVENT_COUNTER_EVENT_SEL_SHIFT;
 	val |= EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
-	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
-	val = dw_pcie_readl_dbi(&pcie->pci, event_cntr_data_offset[pcie->cid]);
+	dw_pcie_writel_dbi(&pcie->pci, pcie->ras_des_cap +
+			   PCIE_RAS_DES_EVENT_COUNTER_CONTROL, val);
+	val = dw_pcie_readl_dbi(&pcie->pci, pcie->ras_des_cap +
+				PCIE_RAS_DES_EVENT_COUNTER_DATA);
 
 	return val;
 }
 
 static int aspm_state_cnt(struct seq_file *s, void *data)
 {
-	struct tegra194_pcie *pcie = (struct tegra194_pcie *)
+	struct tegra_pcie_dw *pcie = (struct tegra_pcie_dw *)
 				     dev_get_drvdata(s->private);
 	u32 val;
 
···
 		   event_counter_prog(pcie, EVENT_COUNTER_EVENT_L1_2));
 
 	/* Clear all counters */
-	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid],
+	dw_pcie_writel_dbi(&pcie->pci, pcie->ras_des_cap +
+			   PCIE_RAS_DES_EVENT_COUNTER_CONTROL,
 			   EVENT_COUNTER_ALL_CLEAR);
 
 	/* Re-enable counting */
 	val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
 	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
-	dw_pcie_writel_dbi(&pcie->pci, event_cntr_ctrl_offset[pcie->cid], val);
+	dw_pcie_writel_dbi(&pcie->pci, pcie->ras_des_cap +
+			   PCIE_RAS_DES_EVENT_COUNTER_CONTROL, val);
 
 	return 0;
 }
 
-static void init_host_aspm(struct tegra194_pcie *pcie)
+static void init_host_aspm(struct tegra_pcie_dw *pcie)
 {
 	struct dw_pcie *pci = &pcie->pci;
 	u32 val;
···
 	val = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_L1SS);
 	pcie->cfg_link_cap_l1sub = val + PCI_L1SS_CAP;
 
+	pcie->ras_des_cap = dw_pcie_find_ext_capability(&pcie->pci,
+							PCI_EXT_CAP_ID_VNDR);
+
 	/* Enable ASPM counters */
 	val = EVENT_COUNTER_ENABLE_ALL << EVENT_COUNTER_ENABLE_SHIFT;
 	val |= EVENT_COUNTER_GROUP_5 << EVENT_COUNTER_GROUP_SEL_SHIFT;
-	dw_pcie_writel_dbi(pci, event_cntr_ctrl_offset[pcie->cid], val);
+	dw_pcie_writel_dbi(pci, pcie->ras_des_cap +
+			   PCIE_RAS_DES_EVENT_COUNTER_CONTROL, val);
 
 	/* Program T_cmrt and T_pwr_on values */
 	val = dw_pcie_readl_dbi(pci, pcie->cfg_link_cap_l1sub);
···
 	dw_pcie_writel_dbi(pci, PCIE_PORT_AFR, val);
 }
 
-static void init_debugfs(struct tegra194_pcie *pcie)
+static void init_debugfs(struct tegra_pcie_dw *pcie)
 {
 	debugfs_create_devm_seqfile(pcie->dev, "aspm_state_cnt", pcie->debugfs,
 				    aspm_state_cnt);
 }
 #else
-static inline void disable_aspm_l12(struct tegra194_pcie *pcie) { return; }
-static inline void disable_aspm_l11(struct tegra194_pcie *pcie) { return; }
-static inline void init_host_aspm(struct tegra194_pcie *pcie) { return; }
-static inline void init_debugfs(struct tegra194_pcie *pcie) { return; }
+static inline void disable_aspm_l12(struct tegra_pcie_dw *pcie) { return; }
+static inline void disable_aspm_l11(struct tegra_pcie_dw *pcie) { return; }
+static inline void init_host_aspm(struct tegra_pcie_dw *pcie) { return; }
+static inline void init_debugfs(struct tegra_pcie_dw *pcie) { return; }
 #endif
 
-static void tegra_pcie_enable_system_interrupts(struct pcie_port *pp)
+static void tegra_pcie_enable_system_interrupts(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct tegra194_pcie *pcie = to_tegra_pcie(pci);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
 	u32 val;
 	u16 val_w;
 
···
 	val |= APPL_INTR_EN_L0_0_LINK_STATE_INT_EN;
 	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
 
-	val = appl_readl(pcie, APPL_INTR_EN_L1_0_0);
-	val |= APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN;
-	appl_writel(pcie, val, APPL_INTR_EN_L1_0_0);
+	if (!pcie->of_data->has_sbr_reset_fix) {
+		val = appl_readl(pcie, APPL_INTR_EN_L1_0_0);
+		val |= APPL_INTR_EN_L1_0_0_LINK_REQ_RST_NOT_INT_EN;
+		appl_writel(pcie, val, APPL_INTR_EN_L1_0_0);
+	}
 
 	if (pcie->enable_cdm_check) {
 		val = appl_readl(pcie, APPL_INTR_EN_L0_0);
-		val |= APPL_INTR_EN_L0_0_CDM_REG_CHK_INT_EN;
+		val |= pcie->of_data->cdm_chk_int_en_bit;
 		appl_writel(pcie, val, APPL_INTR_EN_L0_0);
 
 		val = appl_readl(pcie, APPL_INTR_EN_L1_18);
···
 			   val_w);
 }
 
-static void tegra_pcie_enable_legacy_interrupts(struct pcie_port *pp)
+static void tegra_pcie_enable_legacy_interrupts(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct tegra194_pcie *pcie = to_tegra_pcie(pci);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
 	u32 val;
 
 	/* Enable legacy interrupt generation */
···
 	appl_writel(pcie, val, APPL_INTR_EN_L1_8_0);
 }
 
-static void tegra_pcie_enable_msi_interrupts(struct pcie_port *pp)
+static void tegra_pcie_enable_msi_interrupts(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct tegra194_pcie *pcie = to_tegra_pcie(pci);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
 	u32 val;
 
 	/* Enable MSI interrupt generation */
···
 	appl_writel(pcie, val, APPL_INTR_EN_L0_0);
 }
 
-static void tegra_pcie_enable_interrupts(struct pcie_port *pp)
+static void tegra_pcie_enable_interrupts(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct tegra194_pcie *pcie = to_tegra_pcie(pci);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
 
 	/* Clear interrupt statuses before enabling interrupts */
 	appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
···
 	tegra_pcie_enable_msi_interrupts(pp);
 }
 
-static void config_gen3_gen4_eq_presets(struct tegra194_pcie *pcie)
+static void config_gen3_gen4_eq_presets(struct tegra_pcie_dw *pcie)
 {
 	struct dw_pcie *pci = &pcie->pci;
 	u32 val, offset, i;
···
 
 	val = dw_pcie_readl_dbi(pci, GEN3_EQ_CONTROL_OFF);
 	val &= ~GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_MASK;
-	val |= (0x360 << GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
+	val |= (pcie->of_data->gen4_preset_vec <<
+		GEN3_EQ_CONTROL_OFF_PSET_REQ_VEC_SHIFT);
 	val &= ~GEN3_EQ_CONTROL_OFF_FB_MODE_MASK;
 	dw_pcie_writel_dbi(pci, GEN3_EQ_CONTROL_OFF, val);
 
···
 	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
 }
 
-static int tegra194_pcie_host_init(struct pcie_port *pp)
+static int tegra_pcie_dw_host_init(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct tegra194_pcie *pcie = to_tegra_pcie(pci);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
 	u32 val;
+	u16 val_16;
 
 	pp->bridge->ops = &tegra_pci_ops;
 
 	if (!pcie->pcie_cap_base)
 		pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci,
 							      PCI_CAP_ID_EXP);
+
+	val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL);
+	val_16 &= ~PCI_EXP_DEVCTL_PAYLOAD;
+	val_16 |= PCI_EXP_DEVCTL_PAYLOAD_256B;
+	dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL, val_16);
 
 	val = dw_pcie_readl_dbi(pci, PCI_IO_BASE);
 	val &= ~(IO_BASE_IO_DECODE | IO_BASE_IO_DECODE_BIT8);
···
 	val |= (pcie->num_lanes << PCI_EXP_LNKSTA_NLW_SHIFT);
 	dw_pcie_writel_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKCAP, val);
 
+	/* Clear Slot Clock Configuration bit if SRNS configuration */
+	if (pcie->enable_srns) {
+		val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base +
+					   PCI_EXP_LNKSTA);
+		val_16 &= ~PCI_EXP_LNKSTA_SLC;
+		dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA,
+				   val_16);
+	}
+
 	config_gen3_gen4_eq_presets(pcie);
 
 	init_host_aspm(pcie);
···
 		disable_aspm_l12(pcie);
 	}
 
-	val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
-	val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
-	dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
+	if (!pcie->of_data->has_l1ss_exit_fix) {
+		val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF);
+		val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL;
+		dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val);
+	}
 
 	if (pcie->update_fc_fixup) {
 		val = dw_pcie_readl_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF);
···
 	return 0;
 }
 
-static int tegra194_pcie_start_link(struct dw_pcie *pci)
+static int tegra_pcie_dw_start_link(struct dw_pcie *pci)
 {
 	u32 val, offset, speed, tmp;
-	struct tegra194_pcie *pcie = to_tegra_pcie(pci);
-	struct pcie_port *pp = &pci->pp;
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
+	struct dw_pcie_rp *pp = &pci->pp;
 	bool retry = true;
 
-	if (pcie->mode == DW_PCIE_EP_TYPE) {
+	if (pcie->of_data->mode == DW_PCIE_EP_TYPE) {
 		enable_irq(pcie->pex_rst_irq);
 		return 0;
 	}
···
 		offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_DLF);
 		val = dw_pcie_readl_dbi(pci, offset + PCI_DLF_CAP);
 		val &= ~PCI_DLF_EXCHANGE_ENABLE;
-		dw_pcie_writel_dbi(pci, offset, val);
+		dw_pcie_writel_dbi(pci, offset + PCI_DLF_CAP, val);
 
-		tegra194_pcie_host_init(pp);
+		tegra_pcie_dw_host_init(pp);
 		dw_pcie_setup_rc(pp);
 
 		retry = false;
···
 	return 0;
 }
 
-static int tegra194_pcie_link_up(struct dw_pcie *pci)
+static int tegra_pcie_dw_link_up(struct dw_pcie *pci)
 {
-	struct tegra194_pcie *pcie = to_tegra_pcie(pci);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
 	u32 val = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA);
 
 	return !!(val & PCI_EXP_LNKSTA_DLLLA);
 }
 
-static void tegra194_pcie_stop_link(struct dw_pcie *pci)
+static void tegra_pcie_dw_stop_link(struct dw_pcie *pci)
 {
-	struct tegra194_pcie *pcie = to_tegra_pcie(pci);
+	struct tegra_pcie_dw *pcie = to_tegra_pcie(pci);
 
 	disable_irq(pcie->pex_rst_irq);
 }
 
 static const struct dw_pcie_ops tegra_dw_pcie_ops = {
-	.link_up = tegra194_pcie_link_up,
-	.start_link = tegra194_pcie_start_link,
-	.stop_link = tegra194_pcie_stop_link,
+	.link_up = tegra_pcie_dw_link_up,
+	.start_link = tegra_pcie_dw_start_link,
+	.stop_link = tegra_pcie_dw_stop_link,
 };
 
-static const struct dw_pcie_host_ops tegra194_pcie_host_ops = {
-	.host_init = tegra194_pcie_host_init,
+static const struct dw_pcie_host_ops tegra_pcie_dw_host_ops = {
+	.host_init = tegra_pcie_dw_host_init,
 };
 
-static void tegra_pcie_disable_phy(struct tegra194_pcie *pcie)
+static void tegra_pcie_disable_phy(struct tegra_pcie_dw *pcie)
 {
 	unsigned int phy_count = pcie->phy_count;
 
···
 	}
 }
 
-static int tegra_pcie_enable_phy(struct tegra194_pcie *pcie)
+static int tegra_pcie_enable_phy(struct tegra_pcie_dw *pcie)
 {
 	unsigned int i;
 	int ret;
···
 	return ret;
 }
 
-static int tegra194_pcie_parse_dt(struct tegra194_pcie *pcie)
+static int tegra_pcie_dw_parse_dt(struct tegra_pcie_dw *pcie)
 {
 	struct platform_device *pdev = to_platform_device(pcie->dev);
 	struct device_node *np = pcie->dev->of_node;
···
 	if (of_property_read_bool(np, "nvidia,update-fc-fixup"))
 		pcie->update_fc_fixup = true;
 
+	/* RP using an external REFCLK is supported only in Tegra234 */
+	if (pcie->of_data->version == TEGRA194_DWC_IP_VER) {
+		if (pcie->of_data->mode == DW_PCIE_EP_TYPE)
+			pcie->enable_ext_refclk = true;
+	} else {
+		pcie->enable_ext_refclk =
+			of_property_read_bool(pcie->dev->of_node,
+					      "nvidia,enable-ext-refclk");
+	}
+
 	pcie->supports_clkreq =
 		of_property_read_bool(pcie->dev->of_node, "supports-clkreq");
 
 	pcie->enable_cdm_check =
 		of_property_read_bool(np, "snps,enable-cdm-check");
 
-	if (pcie->mode == DW_PCIE_RC_TYPE)
+	if (pcie->of_data->version == TEGRA234_DWC_IP_VER)
+		pcie->enable_srns =
+			of_property_read_bool(np, "nvidia,enable-srns");
+
+	if (pcie->of_data->mode == DW_PCIE_RC_TYPE)
 		return 0;
 
 	/* Endpoint mode specific DT entries */
···
 	return 0;
 }
 
-static int tegra_pcie_bpmp_set_ctrl_state(struct tegra194_pcie *pcie,
+static int tegra_pcie_bpmp_set_ctrl_state(struct tegra_pcie_dw *pcie,
 					  bool enable)
 {
 	struct mrq_uphy_response resp;
 	struct tegra_bpmp_message msg;
 	struct mrq_uphy_request req;
 
-	/* Controller-5 doesn't need to have its state set by BPMP-FW */
-	if (pcie->cid == 5)
+	/*
+	 * Controller-5 doesn't need to have its state set by BPMP-FW in
+	 * Tegra194
+	 */
+	if (pcie->of_data->version == TEGRA194_DWC_IP_VER && pcie->cid == 5)
 		return 0;
 
 	memset(&req, 0, sizeof(req));
···
 	return tegra_bpmp_transfer(pcie->bpmp, &msg);
 }
 
-static int tegra_pcie_bpmp_set_pll_state(struct tegra194_pcie *pcie,
+static int tegra_pcie_bpmp_set_pll_state(struct tegra_pcie_dw *pcie,
 					 bool enable)
 {
 	struct mrq_uphy_response resp;
···
 	return tegra_bpmp_transfer(pcie->bpmp, &msg);
 }
 
-static void tegra_pcie_downstream_dev_to_D0(struct tegra194_pcie *pcie)
+static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie)
 {
-	struct pcie_port *pp = &pcie->pci.pp;
+	struct dw_pcie_rp *pp = &pcie->pci.pp;
 	struct pci_bus *child, *root_bus = NULL;
 	struct pci_dev *pdev;
 
···
 	}
 }
 
-static int tegra_pcie_get_slot_regulators(struct tegra194_pcie *pcie)
+static int tegra_pcie_get_slot_regulators(struct tegra_pcie_dw *pcie)
 {
 	pcie->slot_ctl_3v3 = devm_regulator_get_optional(pcie->dev, "vpcie3v3");
 	if (IS_ERR(pcie->slot_ctl_3v3)) {
···
 	return 0;
 }
 
-static int tegra_pcie_enable_slot_regulators(struct tegra194_pcie *pcie)
+static int tegra_pcie_enable_slot_regulators(struct tegra_pcie_dw *pcie)
 {
 	int ret;
 
···
 	return ret;
 }
 
-static void tegra_pcie_disable_slot_regulators(struct tegra194_pcie *pcie)
+static void tegra_pcie_disable_slot_regulators(struct tegra_pcie_dw *pcie)
 {
 	if (pcie->slot_ctl_12v)
 		regulator_disable(pcie->slot_ctl_12v);
···
 		regulator_disable(pcie->slot_ctl_3v3);
 }
 
-static int tegra_pcie_config_controller(struct tegra194_pcie *pcie,
+static int tegra_pcie_config_controller(struct tegra_pcie_dw *pcie,
 					bool en_hw_hot_rst)
 {
 	int ret;
···
 		dev_err(pcie->dev,
 			"Failed to enable controller %u: %d\n", pcie->cid, ret);
 		return ret;
 	}
 
+	if (pcie->enable_ext_refclk) {
+		ret = tegra_pcie_bpmp_set_pll_state(pcie, true);
+		if (ret) {
+			dev_err(pcie->dev, "Failed to init UPHY: %d\n", ret);
+			goto fail_pll_init;
+		}
+	}
 
 	ret = tegra_pcie_enable_slot_regulators(pcie);
···
 		goto fail_core_apb_rst;
 	}
 
-	if (en_hw_hot_rst) {
+	if (en_hw_hot_rst || pcie->of_data->has_sbr_reset_fix) {
 		/* Enable HW_HOT_RST mode */
 		val = appl_readl(pcie, APPL_CTRL);
 		val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK <<
 			 APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
+		val |= (APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST_LTSSM_EN <<
+			APPL_CTRL_HW_HOT_RST_MODE_SHIFT);
 		val |= APPL_CTRL_HW_HOT_RST_EN;
 		appl_writel(pcie, val, APPL_CTRL);
 	}
···
 	val |= (APPL_CFG_MISC_ARCACHE_VAL << APPL_CFG_MISC_ARCACHE_SHIFT);
 	appl_writel(pcie, val, APPL_CFG_MISC);
 
+	if (pcie->enable_srns || pcie->enable_ext_refclk) {
+		/*
+		 * When Tegra PCIe RP is using external clock, it cannot supply
+		 * same clock to its downstream hierarchy. Hence, gate PCIe RP
+		 * REFCLK out pads when RP & EP are using separate clocks or RP
+		 * is using an external REFCLK.
1391 + */ 1392 + val = appl_readl(pcie, APPL_PINMUX); 1393 + val |= APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN; 1394 + val &= ~APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE; 1395 + appl_writel(pcie, val, APPL_PINMUX); 1396 + } 1397 + 1439 1398 if (!pcie->supports_clkreq) { 1440 1399 val = appl_readl(pcie, APPL_PINMUX); 1441 1400 val |= APPL_PINMUX_CLKREQ_OVERRIDE_EN; ··· 1474 1407 fail_reg_en: 1475 1408 tegra_pcie_disable_slot_regulators(pcie); 1476 1409 fail_slot_reg_en: 1410 + if (pcie->enable_ext_refclk) 1411 + tegra_pcie_bpmp_set_pll_state(pcie, false); 1412 + fail_pll_init: 1477 1413 tegra_pcie_bpmp_set_ctrl_state(pcie, false); 1478 1414 1479 1415 return ret; 1480 1416 } 1481 1417 1482 - static void tegra_pcie_unconfig_controller(struct tegra194_pcie *pcie) 1418 + static void tegra_pcie_unconfig_controller(struct tegra_pcie_dw *pcie) 1483 1419 { 1484 1420 int ret; 1485 1421 ··· 1504 1434 1505 1435 tegra_pcie_disable_slot_regulators(pcie); 1506 1436 1437 + if (pcie->enable_ext_refclk) { 1438 + ret = tegra_pcie_bpmp_set_pll_state(pcie, false); 1439 + if (ret) 1440 + dev_err(pcie->dev, "Failed to deinit UPHY: %d\n", ret); 1441 + } 1442 + 1507 1443 ret = tegra_pcie_bpmp_set_ctrl_state(pcie, false); 1508 1444 if (ret) 1509 1445 dev_err(pcie->dev, "Failed to disable controller %d: %d\n", 1510 1446 pcie->cid, ret); 1511 1447 } 1512 1448 1513 - static int tegra_pcie_init_controller(struct tegra194_pcie *pcie) 1449 + static int tegra_pcie_init_controller(struct tegra_pcie_dw *pcie) 1514 1450 { 1515 1451 struct dw_pcie *pci = &pcie->pci; 1516 - struct pcie_port *pp = &pci->pp; 1452 + struct dw_pcie_rp *pp = &pci->pp; 1517 1453 int ret; 1518 1454 1519 1455 ret = tegra_pcie_config_controller(pcie, false); 1520 1456 if (ret < 0) 1521 1457 return ret; 1522 1458 1523 - pp->ops = &tegra194_pcie_host_ops; 1459 + pp->ops = &tegra_pcie_dw_host_ops; 1524 1460 1525 1461 ret = dw_pcie_host_init(pp); 1526 1462 if (ret < 0) { ··· 1541 1465 return ret; 1542 1466 } 1543 1467 1544 - static int 
tegra_pcie_try_link_l2(struct tegra194_pcie *pcie) 1468 + static int tegra_pcie_try_link_l2(struct tegra_pcie_dw *pcie) 1545 1469 { 1546 1470 u32 val; 1547 1471 1548 - if (!tegra194_pcie_link_up(&pcie->pci)) 1472 + if (!tegra_pcie_dw_link_up(&pcie->pci)) 1549 1473 return 0; 1550 1474 1551 1475 val = appl_readl(pcie, APPL_RADM_STATUS); ··· 1557 1481 1, PME_ACK_TIMEOUT); 1558 1482 } 1559 1483 1560 - static void tegra194_pcie_pme_turnoff(struct tegra194_pcie *pcie) 1484 + static void tegra_pcie_dw_pme_turnoff(struct tegra_pcie_dw *pcie) 1561 1485 { 1562 1486 u32 data; 1563 1487 int err; 1564 1488 1565 - if (!tegra194_pcie_link_up(&pcie->pci)) { 1489 + if (!tegra_pcie_dw_link_up(&pcie->pci)) { 1566 1490 dev_dbg(pcie->dev, "PCIe link is not up...!\n"); 1567 1491 return; 1568 1492 } ··· 1619 1543 appl_writel(pcie, data, APPL_PINMUX); 1620 1544 } 1621 1545 1622 - static void tegra_pcie_deinit_controller(struct tegra194_pcie *pcie) 1546 + static void tegra_pcie_deinit_controller(struct tegra_pcie_dw *pcie) 1623 1547 { 1624 1548 tegra_pcie_downstream_dev_to_D0(pcie); 1625 1549 dw_pcie_host_deinit(&pcie->pci.pp); 1626 - tegra194_pcie_pme_turnoff(pcie); 1550 + tegra_pcie_dw_pme_turnoff(pcie); 1627 1551 tegra_pcie_unconfig_controller(pcie); 1628 1552 } 1629 1553 1630 - static int tegra_pcie_config_rp(struct tegra194_pcie *pcie) 1554 + static int tegra_pcie_config_rp(struct tegra_pcie_dw *pcie) 1631 1555 { 1632 1556 struct device *dev = pcie->dev; 1633 1557 char *name; ··· 1654 1578 goto fail_pm_get_sync; 1655 1579 } 1656 1580 1657 - pcie->link_state = tegra194_pcie_link_up(&pcie->pci); 1581 + pcie->link_state = tegra_pcie_dw_link_up(&pcie->pci); 1658 1582 if (!pcie->link_state) { 1659 1583 ret = -ENOMEDIUM; 1660 1584 goto fail_host_init; ··· 1679 1603 return ret; 1680 1604 } 1681 1605 1682 - static void pex_ep_event_pex_rst_assert(struct tegra194_pcie *pcie) 1606 + static void pex_ep_event_pex_rst_assert(struct tegra_pcie_dw *pcie) 1683 1607 { 1684 1608 u32 val; 1685 1609 int 
ret; ··· 1710 1634 1711 1635 pm_runtime_put_sync(pcie->dev); 1712 1636 1637 + if (pcie->enable_ext_refclk) { 1638 + ret = tegra_pcie_bpmp_set_pll_state(pcie, false); 1639 + if (ret) 1640 + dev_err(pcie->dev, "Failed to turn off UPHY: %d\n", 1641 + ret); 1642 + } 1643 + 1713 1644 ret = tegra_pcie_bpmp_set_pll_state(pcie, false); 1714 1645 if (ret) 1715 1646 dev_err(pcie->dev, "Failed to turn off UPHY: %d\n", ret); ··· 1725 1642 dev_dbg(pcie->dev, "Uninitialization of endpoint is completed\n"); 1726 1643 } 1727 1644 1728 - static void pex_ep_event_pex_rst_deassert(struct tegra194_pcie *pcie) 1645 + static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie) 1729 1646 { 1730 1647 struct dw_pcie *pci = &pcie->pci; 1731 1648 struct dw_pcie_ep *ep = &pci->ep; 1732 1649 struct device *dev = pcie->dev; 1733 1650 u32 val; 1734 1651 int ret; 1652 + u16 val_16; 1735 1653 1736 1654 if (pcie->ep_state == EP_STATE_ENABLED) 1737 1655 return; ··· 1744 1660 return; 1745 1661 } 1746 1662 1747 - ret = tegra_pcie_bpmp_set_pll_state(pcie, true); 1663 + ret = tegra_pcie_bpmp_set_ctrl_state(pcie, true); 1748 1664 if (ret) { 1749 - dev_err(dev, "Failed to init UPHY for PCIe EP: %d\n", ret); 1750 - goto fail_pll_init; 1665 + dev_err(pcie->dev, "Failed to enable controller %u: %d\n", 1666 + pcie->cid, ret); 1667 + goto fail_set_ctrl_state; 1668 + } 1669 + 1670 + if (pcie->enable_ext_refclk) { 1671 + ret = tegra_pcie_bpmp_set_pll_state(pcie, true); 1672 + if (ret) { 1673 + dev_err(dev, "Failed to init UPHY for PCIe EP: %d\n", 1674 + ret); 1675 + goto fail_pll_init; 1676 + } 1751 1677 } 1752 1678 1753 1679 ret = clk_prepare_enable(pcie->core_clk); ··· 1854 1760 disable_aspm_l12(pcie); 1855 1761 } 1856 1762 1857 - val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); 1858 - val &= ~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL; 1859 - dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val); 1763 + if (!pcie->of_data->has_l1ss_exit_fix) { 1764 + val = dw_pcie_readl_dbi(pci, GEN3_RELATED_OFF); 1765 + val &= 
~GEN3_RELATED_OFF_GEN3_ZRXDC_NONCOMPL; 1766 + dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val); 1767 + } 1860 1768 1861 1769 pcie->pcie_cap_base = dw_pcie_find_capability(&pcie->pci, 1862 1770 PCI_CAP_ID_EXP); 1771 + 1772 + val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL); 1773 + val_16 &= ~PCI_EXP_DEVCTL_PAYLOAD; 1774 + val_16 |= PCI_EXP_DEVCTL_PAYLOAD_256B; 1775 + dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + PCI_EXP_DEVCTL, val_16); 1776 + 1777 + /* Clear Slot Clock Configuration bit if SRNS configuration */ 1778 + if (pcie->enable_srns) { 1779 + val_16 = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + 1780 + PCI_EXP_LNKSTA); 1781 + val_16 &= ~PCI_EXP_LNKSTA_SLC; 1782 + dw_pcie_writew_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA, 1783 + val_16); 1784 + } 1785 + 1863 1786 clk_set_rate(pcie->core_clk, GEN4_CORE_CLK_FREQ); 1864 1787 1865 1788 val = (ep->msi_mem_phys & MSIX_ADDR_MATCH_LOW_OFF_MASK); ··· 1892 1781 } 1893 1782 1894 1783 dw_pcie_ep_init_notify(ep); 1784 + 1785 + /* Program the private control to allow sending LTR upstream */ 1786 + if (pcie->of_data->has_ltr_req_fix) { 1787 + val = appl_readl(pcie, APPL_LTR_MSG_2); 1788 + val |= APPL_LTR_MSG_2_LTR_MSG_REQ_STATE; 1789 + appl_writel(pcie, val, APPL_LTR_MSG_2); 1790 + } 1895 1791 1896 1792 /* Enable LTSSM */ 1897 1793 val = appl_readl(pcie, APPL_CTRL); ··· 1920 1802 fail_core_clk_enable: 1921 1803 tegra_pcie_bpmp_set_pll_state(pcie, false); 1922 1804 fail_pll_init: 1805 + tegra_pcie_bpmp_set_ctrl_state(pcie, false); 1806 + fail_set_ctrl_state: 1923 1807 pm_runtime_put_sync(dev); 1924 1808 } 1925 1809 1926 1810 static irqreturn_t tegra_pcie_ep_pex_rst_irq(int irq, void *arg) 1927 1811 { 1928 - struct tegra194_pcie *pcie = arg; 1812 + struct tegra_pcie_dw *pcie = arg; 1929 1813 1930 1814 if (gpiod_get_value(pcie->pex_rst_gpiod)) 1931 1815 pex_ep_event_pex_rst_assert(pcie); ··· 1937 1817 return IRQ_HANDLED; 1938 1818 } 1939 1819 1940 - static int tegra_pcie_ep_raise_legacy_irq(struct 
tegra194_pcie *pcie, u16 irq) 1820 + static int tegra_pcie_ep_raise_legacy_irq(struct tegra_pcie_dw *pcie, u16 irq) 1941 1821 { 1942 1822 /* Tegra194 supports only INTA */ 1943 1823 if (irq > 1) ··· 1949 1829 return 0; 1950 1830 } 1951 1831 1952 - static int tegra_pcie_ep_raise_msi_irq(struct tegra194_pcie *pcie, u16 irq) 1832 + static int tegra_pcie_ep_raise_msi_irq(struct tegra_pcie_dw *pcie, u16 irq) 1953 1833 { 1954 1834 if (unlikely(irq > 31)) 1955 1835 return -EINVAL; ··· 1959 1839 return 0; 1960 1840 } 1961 1841 1962 - static int tegra_pcie_ep_raise_msix_irq(struct tegra194_pcie *pcie, u16 irq) 1842 + static int tegra_pcie_ep_raise_msix_irq(struct tegra_pcie_dw *pcie, u16 irq) 1963 1843 { 1964 1844 struct dw_pcie_ep *ep = &pcie->pci.ep; 1965 1845 ··· 1973 1853 u16 interrupt_num) 1974 1854 { 1975 1855 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 1976 - struct tegra194_pcie *pcie = to_tegra_pcie(pci); 1856 + struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); 1977 1857 1978 1858 switch (type) { 1979 1859 case PCI_EPC_IRQ_LEGACY: ··· 2014 1894 .get_features = tegra_pcie_ep_get_features, 2015 1895 }; 2016 1896 2017 - static int tegra_pcie_config_ep(struct tegra194_pcie *pcie, 1897 + static int tegra_pcie_config_ep(struct tegra_pcie_dw *pcie, 2018 1898 struct platform_device *pdev) 2019 1899 { 2020 1900 struct dw_pcie *pci = &pcie->pci; ··· 2069 1949 if (ret) { 2070 1950 dev_err(dev, "Failed to initialize DWC Endpoint subsystem: %d\n", 2071 1951 ret); 1952 + pm_runtime_disable(dev); 2072 1953 return ret; 2073 1954 } 2074 1955 2075 1956 return 0; 2076 1957 } 2077 1958 2078 - static int tegra194_pcie_probe(struct platform_device *pdev) 1959 + static int tegra_pcie_dw_probe(struct platform_device *pdev) 2079 1960 { 2080 - const struct tegra194_pcie_of_data *data; 1961 + const struct tegra_pcie_dw_of_data *data; 2081 1962 struct device *dev = &pdev->dev; 2082 1963 struct resource *atu_dma_res; 2083 - struct tegra194_pcie *pcie; 2084 - struct pcie_port *pp; 1964 + 
struct tegra_pcie_dw *pcie; 1965 + struct dw_pcie_rp *pp; 2085 1966 struct dw_pcie *pci; 2086 1967 struct phy **phys; 2087 1968 char *name; ··· 2098 1977 pci = &pcie->pci; 2099 1978 pci->dev = &pdev->dev; 2100 1979 pci->ops = &tegra_dw_pcie_ops; 2101 - pci->n_fts[0] = N_FTS_VAL; 2102 - pci->n_fts[1] = FTS_VAL; 2103 - pci->version = 0x490A; 2104 - 1980 + pcie->dev = &pdev->dev; 1981 + pcie->of_data = (struct tegra_pcie_dw_of_data *)data; 1982 + pci->n_fts[0] = pcie->of_data->n_fts[0]; 1983 + pci->n_fts[1] = pcie->of_data->n_fts[1]; 2105 1984 pp = &pci->pp; 2106 1985 pp->num_vectors = MAX_MSI_IRQS; 2107 - pcie->dev = &pdev->dev; 2108 - pcie->mode = (enum dw_pcie_device_mode)data->mode; 2109 1986 2110 - ret = tegra194_pcie_parse_dt(pcie); 1987 + ret = tegra_pcie_dw_parse_dt(pcie); 2111 1988 if (ret < 0) { 2112 1989 const char *level = KERN_ERR; 2113 1990 ··· 2220 2101 2221 2102 platform_set_drvdata(pdev, pcie); 2222 2103 2223 - switch (pcie->mode) { 2104 + switch (pcie->of_data->mode) { 2224 2105 case DW_PCIE_RC_TYPE: 2225 2106 ret = devm_request_irq(dev, pp->irq, tegra_pcie_rp_irq_handler, 2226 2107 IRQF_SHARED, "tegra-pcie-intr", pcie); ··· 2255 2136 break; 2256 2137 2257 2138 default: 2258 - dev_err(dev, "Invalid PCIe device type %d\n", pcie->mode); 2139 + dev_err(dev, "Invalid PCIe device type %d\n", 2140 + pcie->of_data->mode); 2259 2141 } 2260 2142 2261 2143 fail: ··· 2264 2144 return ret; 2265 2145 } 2266 2146 2267 - static int tegra194_pcie_remove(struct platform_device *pdev) 2147 + static int tegra_pcie_dw_remove(struct platform_device *pdev) 2268 2148 { 2269 - struct tegra194_pcie *pcie = platform_get_drvdata(pdev); 2149 + struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev); 2270 2150 2271 - if (!pcie->link_state) 2272 - return 0; 2151 + if (pcie->of_data->mode == DW_PCIE_RC_TYPE) { 2152 + if (!pcie->link_state) 2153 + return 0; 2273 2154 2274 - debugfs_remove_recursive(pcie->debugfs); 2275 - tegra_pcie_deinit_controller(pcie); 2276 - 
pm_runtime_put_sync(pcie->dev); 2155 + debugfs_remove_recursive(pcie->debugfs); 2156 + tegra_pcie_deinit_controller(pcie); 2157 + pm_runtime_put_sync(pcie->dev); 2158 + } else { 2159 + disable_irq(pcie->pex_rst_irq); 2160 + pex_ep_event_pex_rst_assert(pcie); 2161 + } 2162 + 2277 2163 pm_runtime_disable(pcie->dev); 2278 2164 tegra_bpmp_put(pcie->bpmp); 2279 2165 if (pcie->pex_refclk_sel_gpiod) ··· 2288 2162 return 0; 2289 2163 } 2290 2164 2291 - static int tegra194_pcie_suspend_late(struct device *dev) 2165 + static int tegra_pcie_dw_suspend_late(struct device *dev) 2292 2166 { 2293 - struct tegra194_pcie *pcie = dev_get_drvdata(dev); 2167 + struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); 2294 2168 u32 val; 2169 + 2170 + if (pcie->of_data->mode == DW_PCIE_EP_TYPE) { 2171 + dev_err(dev, "Failed to Suspend as Tegra PCIe is in EP mode\n"); 2172 + return -EPERM; 2173 + } 2295 2174 2296 2175 if (!pcie->link_state) 2297 2176 return 0; 2298 2177 2299 2178 /* Enable HW_HOT_RST mode */ 2300 - val = appl_readl(pcie, APPL_CTRL); 2301 - val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK << 2302 - APPL_CTRL_HW_HOT_RST_MODE_SHIFT); 2303 - val |= APPL_CTRL_HW_HOT_RST_EN; 2304 - appl_writel(pcie, val, APPL_CTRL); 2179 + if (!pcie->of_data->has_sbr_reset_fix) { 2180 + val = appl_readl(pcie, APPL_CTRL); 2181 + val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK << 2182 + APPL_CTRL_HW_HOT_RST_MODE_SHIFT); 2183 + val |= APPL_CTRL_HW_HOT_RST_EN; 2184 + appl_writel(pcie, val, APPL_CTRL); 2185 + } 2305 2186 2306 2187 return 0; 2307 2188 } 2308 2189 2309 - static int tegra194_pcie_suspend_noirq(struct device *dev) 2190 + static int tegra_pcie_dw_suspend_noirq(struct device *dev) 2310 2191 { 2311 - struct tegra194_pcie *pcie = dev_get_drvdata(dev); 2192 + struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); 2312 2193 2313 2194 if (!pcie->link_state) 2314 2195 return 0; 2315 2196 2316 2197 tegra_pcie_downstream_dev_to_D0(pcie); 2317 - tegra194_pcie_pme_turnoff(pcie); 2198 + tegra_pcie_dw_pme_turnoff(pcie); 2318 
2199 tegra_pcie_unconfig_controller(pcie); 2319 2200 2320 2201 return 0; 2321 2202 } 2322 2203 2323 - static int tegra194_pcie_resume_noirq(struct device *dev) 2204 + static int tegra_pcie_dw_resume_noirq(struct device *dev) 2324 2205 { 2325 - struct tegra194_pcie *pcie = dev_get_drvdata(dev); 2206 + struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); 2326 2207 int ret; 2327 2208 2328 2209 if (!pcie->link_state) ··· 2339 2206 if (ret < 0) 2340 2207 return ret; 2341 2208 2342 - ret = tegra194_pcie_host_init(&pcie->pci.pp); 2209 + ret = tegra_pcie_dw_host_init(&pcie->pci.pp); 2343 2210 if (ret < 0) { 2344 2211 dev_err(dev, "Failed to init host: %d\n", ret); 2345 2212 goto fail_host_init; ··· 2347 2214 2348 2215 dw_pcie_setup_rc(&pcie->pci.pp); 2349 2216 2350 - ret = tegra194_pcie_start_link(&pcie->pci); 2217 + ret = tegra_pcie_dw_start_link(&pcie->pci); 2351 2218 if (ret < 0) 2352 2219 goto fail_host_init; 2353 2220 ··· 2358 2225 return ret; 2359 2226 } 2360 2227 2361 - static int tegra194_pcie_resume_early(struct device *dev) 2228 + static int tegra_pcie_dw_resume_early(struct device *dev) 2362 2229 { 2363 - struct tegra194_pcie *pcie = dev_get_drvdata(dev); 2230 + struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); 2364 2231 u32 val; 2365 2232 2366 - if (pcie->mode == DW_PCIE_EP_TYPE) { 2233 + if (pcie->of_data->mode == DW_PCIE_EP_TYPE) { 2367 2234 dev_err(dev, "Suspend is not supported in EP mode"); 2368 2235 return -ENOTSUPP; 2369 2236 } ··· 2372 2239 return 0; 2373 2240 2374 2241 /* Disable HW_HOT_RST mode */ 2375 - val = appl_readl(pcie, APPL_CTRL); 2376 - val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK << 2377 - APPL_CTRL_HW_HOT_RST_MODE_SHIFT); 2378 - val |= APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST << 2379 - APPL_CTRL_HW_HOT_RST_MODE_SHIFT; 2380 - val &= ~APPL_CTRL_HW_HOT_RST_EN; 2381 - appl_writel(pcie, val, APPL_CTRL); 2242 + if (!pcie->of_data->has_sbr_reset_fix) { 2243 + val = appl_readl(pcie, APPL_CTRL); 2244 + val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK << 2245 + 
APPL_CTRL_HW_HOT_RST_MODE_SHIFT); 2246 + val |= APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST << 2247 + APPL_CTRL_HW_HOT_RST_MODE_SHIFT; 2248 + val &= ~APPL_CTRL_HW_HOT_RST_EN; 2249 + appl_writel(pcie, val, APPL_CTRL); 2250 + } 2382 2251 2383 2252 return 0; 2384 2253 } 2385 2254 2386 - static void tegra194_pcie_shutdown(struct platform_device *pdev) 2255 + static void tegra_pcie_dw_shutdown(struct platform_device *pdev) 2387 2256 { 2388 - struct tegra194_pcie *pcie = platform_get_drvdata(pdev); 2257 + struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev); 2389 2258 2390 - if (!pcie->link_state) 2391 - return; 2259 + if (pcie->of_data->mode == DW_PCIE_RC_TYPE) { 2260 + if (!pcie->link_state) 2261 + return; 2392 2262 2393 - debugfs_remove_recursive(pcie->debugfs); 2394 - tegra_pcie_downstream_dev_to_D0(pcie); 2263 + debugfs_remove_recursive(pcie->debugfs); 2264 + tegra_pcie_downstream_dev_to_D0(pcie); 2395 2265 2396 - disable_irq(pcie->pci.pp.irq); 2397 - if (IS_ENABLED(CONFIG_PCI_MSI)) 2398 - disable_irq(pcie->pci.pp.msi_irq); 2266 + disable_irq(pcie->pci.pp.irq); 2267 + if (IS_ENABLED(CONFIG_PCI_MSI)) 2268 + disable_irq(pcie->pci.pp.msi_irq[0]); 2399 2269 2400 - tegra194_pcie_pme_turnoff(pcie); 2401 - tegra_pcie_unconfig_controller(pcie); 2270 + tegra_pcie_dw_pme_turnoff(pcie); 2271 + tegra_pcie_unconfig_controller(pcie); 2272 + pm_runtime_put_sync(pcie->dev); 2273 + } else { 2274 + disable_irq(pcie->pex_rst_irq); 2275 + pex_ep_event_pex_rst_assert(pcie); 2276 + } 2402 2277 } 2403 2278 2404 - static const struct tegra194_pcie_of_data tegra194_pcie_rc_of_data = { 2279 + static const struct tegra_pcie_dw_of_data tegra194_pcie_dw_rc_of_data = { 2280 + .version = TEGRA194_DWC_IP_VER, 2405 2281 .mode = DW_PCIE_RC_TYPE, 2282 + .cdm_chk_int_en_bit = BIT(19), 2283 + /* Gen4 - 5, 6, 8 and 9 presets enabled */ 2284 + .gen4_preset_vec = 0x360, 2285 + .n_fts = { 52, 52 }, 2406 2286 }; 2407 2287 2408 - static const struct tegra194_pcie_of_data tegra194_pcie_ep_of_data = { 2288 + static 
const struct tegra_pcie_dw_of_data tegra194_pcie_dw_ep_of_data = { 2289 + .version = TEGRA194_DWC_IP_VER, 2409 2290 .mode = DW_PCIE_EP_TYPE, 2291 + .cdm_chk_int_en_bit = BIT(19), 2292 + /* Gen4 - 5, 6, 8 and 9 presets enabled */ 2293 + .gen4_preset_vec = 0x360, 2294 + .n_fts = { 52, 52 }, 2410 2295 }; 2411 2296 2412 - static const struct of_device_id tegra194_pcie_of_match[] = { 2297 + static const struct tegra_pcie_dw_of_data tegra234_pcie_dw_rc_of_data = { 2298 + .version = TEGRA234_DWC_IP_VER, 2299 + .mode = DW_PCIE_RC_TYPE, 2300 + .has_msix_doorbell_access_fix = true, 2301 + .has_sbr_reset_fix = true, 2302 + .has_l1ss_exit_fix = true, 2303 + .cdm_chk_int_en_bit = BIT(18), 2304 + /* Gen4 - 6, 8 and 9 presets enabled */ 2305 + .gen4_preset_vec = 0x340, 2306 + .n_fts = { 52, 80 }, 2307 + }; 2308 + 2309 + static const struct tegra_pcie_dw_of_data tegra234_pcie_dw_ep_of_data = { 2310 + .version = TEGRA234_DWC_IP_VER, 2311 + .mode = DW_PCIE_EP_TYPE, 2312 + .has_l1ss_exit_fix = true, 2313 + .has_ltr_req_fix = true, 2314 + .cdm_chk_int_en_bit = BIT(18), 2315 + /* Gen4 - 6, 8 and 9 presets enabled */ 2316 + .gen4_preset_vec = 0x340, 2317 + .n_fts = { 52, 80 }, 2318 + }; 2319 + 2320 + static const struct of_device_id tegra_pcie_dw_of_match[] = { 2413 2321 { 2414 2322 .compatible = "nvidia,tegra194-pcie", 2415 - .data = &tegra194_pcie_rc_of_data, 2323 + .data = &tegra194_pcie_dw_rc_of_data, 2416 2324 }, 2417 2325 { 2418 2326 .compatible = "nvidia,tegra194-pcie-ep", 2419 - .data = &tegra194_pcie_ep_of_data, 2327 + .data = &tegra194_pcie_dw_ep_of_data, 2420 2328 }, 2421 - {}, 2329 + { 2330 + .compatible = "nvidia,tegra234-pcie", 2331 + .data = &tegra234_pcie_dw_rc_of_data, 2332 + }, 2333 + { 2334 + .compatible = "nvidia,tegra234-pcie-ep", 2335 + .data = &tegra234_pcie_dw_ep_of_data, 2336 + }, 2337 + {} 2422 2338 }; 2423 2339 2424 - static const struct dev_pm_ops tegra194_pcie_pm_ops = { 2425 - .suspend_late = tegra194_pcie_suspend_late, 2426 - .suspend_noirq = 
tegra194_pcie_suspend_noirq, 2427 - .resume_noirq = tegra194_pcie_resume_noirq, 2428 - .resume_early = tegra194_pcie_resume_early, 2340 + static const struct dev_pm_ops tegra_pcie_dw_pm_ops = { 2341 + .suspend_late = tegra_pcie_dw_suspend_late, 2342 + .suspend_noirq = tegra_pcie_dw_suspend_noirq, 2343 + .resume_noirq = tegra_pcie_dw_resume_noirq, 2344 + .resume_early = tegra_pcie_dw_resume_early, 2429 2345 }; 2430 2346 2431 - static struct platform_driver tegra194_pcie_driver = { 2432 - .probe = tegra194_pcie_probe, 2433 - .remove = tegra194_pcie_remove, 2434 - .shutdown = tegra194_pcie_shutdown, 2347 + static struct platform_driver tegra_pcie_dw_driver = { 2348 + .probe = tegra_pcie_dw_probe, 2349 + .remove = tegra_pcie_dw_remove, 2350 + .shutdown = tegra_pcie_dw_shutdown, 2435 2351 .driver = { 2436 2352 .name = "tegra194-pcie", 2437 - .pm = &tegra194_pcie_pm_ops, 2438 - .of_match_table = tegra194_pcie_of_match, 2353 + .pm = &tegra_pcie_dw_pm_ops, 2354 + .of_match_table = tegra_pcie_dw_of_match, 2439 2355 }, 2440 2356 }; 2441 - module_platform_driver(tegra194_pcie_driver); 2357 + module_platform_driver(tegra_pcie_dw_driver); 2442 2358 2443 - MODULE_DEVICE_TABLE(of, tegra194_pcie_of_match); 2359 + MODULE_DEVICE_TABLE(of, tegra_pcie_dw_of_match); 2444 2360 2445 2361 MODULE_AUTHOR("Vidya Sagar <vidyas@nvidia.com>"); 2446 2362 MODULE_DESCRIPTION("NVIDIA PCIe host controller driver");
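The Tegra rework above replaces hard-coded per-SoC constants (`pci->version = 0x490A`, fixed N_FTS values) with a per-compatible `of_data` structure carrying the IP version, mode, erratum flags, and `n_fts` values. The same OF match-data pattern can be sketched standalone in plain C; the struct fields below are simplified stand-ins for `struct tegra_pcie_dw_of_data`, not the driver's actual definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for struct tegra_pcie_dw_of_data */
struct soc_data {
	bool has_sbr_reset_fix;	/* Tegra234 RC: skip manual HW_HOT_RST setup */
	unsigned int n_fts[2];	/* fast-training-sequence counts per data rate */
};

static const struct soc_data tegra194_data = {
	.has_sbr_reset_fix = false,
	.n_fts = { 52, 52 },
};

static const struct soc_data tegra234_data = {
	.has_sbr_reset_fix = true,
	.n_fts = { 52, 80 },
};

/* Analogue of struct of_device_id: compatible string maps to match data */
static const struct {
	const char *compatible;
	const struct soc_data *data;
} match_table[] = {
	{ "nvidia,tegra194-pcie", &tegra194_data },
	{ "nvidia,tegra234-pcie", &tegra234_data },
	{ NULL, NULL },
};

/* Analogue of of_device_get_match_data(): walk the table at probe time */
static const struct soc_data *get_match_data(const char *compatible)
{
	for (size_t i = 0; match_table[i].compatible; i++)
		if (!strcmp(match_table[i].compatible, compatible))
			return match_table[i].data;
	return NULL;
}
```

Probe then stashes the returned pointer (here, in `pcie->of_data`) and every quirk check becomes a data lookup instead of a compile-time constant, which is what lets one driver serve both Tegra194 and Tegra234.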
drivers/pci/controller/dwc/pcie-uniphier.c | +5 -5
···
 
 static void uniphier_pcie_irq_mask(struct irq_data *d)
 {
-	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
+	struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(d);
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct uniphier_pcie *pcie = to_uniphier_pcie(pci);
 	unsigned long flags;
···
 
 static void uniphier_pcie_irq_unmask(struct irq_data *d)
 {
-	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
+	struct dw_pcie_rp *pp = irq_data_get_irq_chip_data(d);
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct uniphier_pcie *pcie = to_uniphier_pcie(pci);
 	unsigned long flags;
···
 
 static void uniphier_pcie_irq_handler(struct irq_desc *desc)
 {
-	struct pcie_port *pp = irq_desc_get_handler_data(desc);
+	struct dw_pcie_rp *pp = irq_desc_get_handler_data(desc);
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct uniphier_pcie *pcie = to_uniphier_pcie(pci);
 	struct irq_chip *chip = irq_desc_get_chip(desc);
···
 	chained_irq_exit(chip, desc);
 }
 
-static int uniphier_pcie_config_legacy_irq(struct pcie_port *pp)
+static int uniphier_pcie_config_legacy_irq(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct uniphier_pcie *pcie = to_uniphier_pcie(pci);
···
 	return ret;
 }
 
-static int uniphier_pcie_host_init(struct pcie_port *pp)
+static int uniphier_pcie_host_init(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct uniphier_pcie *pcie = to_uniphier_pcie(pci);
drivers/pci/controller/dwc/pcie-visconti.c | +3 -3
···
  */
 static u64 visconti_pcie_cpu_addr_fixup(struct dw_pcie *pci, u64 cpu_addr)
 {
-	struct pcie_port *pp = &pci->pp;
+	struct dw_pcie_rp *pp = &pci->pp;
 
 	return cpu_addr & ~pp->io_base;
 }
···
 	.stop_link = visconti_pcie_stop_link,
 };
 
-static int visconti_pcie_host_init(struct pcie_port *pp)
+static int visconti_pcie_host_init(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
 	struct visconti_pcie *pcie = dev_get_drvdata(pci->dev);
···
 			     struct platform_device *pdev)
 {
 	struct dw_pcie *pci = &pcie->pci;
-	struct pcie_port *pp = &pci->pp;
+	struct dw_pcie_rp *pp = &pci->pp;
 
 	pp->irq = platform_get_irq_byname(pdev, "intr");
 	if (pp->irq < 0)
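`visconti_pcie_cpu_addr_fixup()` above masks off `pp->io_base` so the address translation unit is programmed with the offset inside the window rather than the CPU-side address. A minimal standalone model of that masking (the base constant in the test is invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Mirror of the driver's "cpu_addr & ~pp->io_base": clearing the
 * window-base bits leaves only the offset within the window. This trick
 * only behaves as intended when the base is aligned to the window size,
 * which is the property the driver relies on.
 */
static uint64_t cpu_addr_fixup(uint64_t cpu_addr, uint64_t io_base)
{
	return cpu_addr & ~io_base;
}
```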
drivers/pci/controller/pci-aardvark.c | +103 -9
···
  * Author: Hezi Shahmoon <hezi.shahmoon@marvell.com>
  */
 
+#include <linux/bitfield.h>
 #include <linux/delay.h>
 #include <linux/gpio/consumer.h>
 #include <linux/interrupt.h>
···
 #define PCIE_CORE_CMD_STATUS_REG	0x4
 #define PCIE_CORE_DEV_REV_REG	0x8
 #define PCIE_CORE_PCIEXP_CAP	0xc0
+#define PCIE_CORE_PCIERR_CAP	0x100
 #define PCIE_CORE_ERR_CAPCTL_REG	0x118
 #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX	BIT(5)
 #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN	BIT(6)
···
 
 	switch (reg) {
-	case PCI_EXP_SLTCTL:
-		*value = PCI_EXP_SLTSTA_PDS << 16;
-		return PCI_BRIDGE_EMUL_HANDLED;
-
 	/*
-	 * PCI_EXP_RTCTL and PCI_EXP_RTSTA are also supported, but do not need
-	 * to be handled here, because their values are stored in emulated
-	 * config space buffer, and we read them from there when needed.
+	 * PCI_EXP_SLTCAP, PCI_EXP_SLTCTL, PCI_EXP_RTCTL and PCI_EXP_RTSTA are
+	 * also supported, but do not need to be handled here, because their
+	 * values are stored in emulated config space buffer, and we read them
+	 * from there when needed.
 	 */
 
 	case PCI_EXP_LNKCAP: {
···
 	}
 }
 
+static pci_bridge_emul_read_status_t
+advk_pci_bridge_emul_ext_conf_read(struct pci_bridge_emul *bridge,
+				   int reg, u32 *value)
+{
+	struct advk_pcie *pcie = bridge->data;
+
+	switch (reg) {
+	case 0:
+		*value = advk_readl(pcie, PCIE_CORE_PCIERR_CAP + reg);
+
+		/*
+		 * PCI_EXT_CAP_NEXT bits are set to offset 0x150, but Armada
+		 * 3700 Functional Specification does not document registers
+		 * at those addresses.
+		 *
+		 * Thus we clear PCI_EXT_CAP_NEXT bits to make Advanced Error
+		 * Reporting Capability header the last Extended Capability.
+		 * If we obtain documentation for those registers in the
+		 * future, this can be changed.
+		 */
+		*value &= 0x000fffff;
+		return PCI_BRIDGE_EMUL_HANDLED;
+
+	case PCI_ERR_UNCOR_STATUS:
+	case PCI_ERR_UNCOR_MASK:
+	case PCI_ERR_UNCOR_SEVER:
+	case PCI_ERR_COR_STATUS:
+	case PCI_ERR_COR_MASK:
+	case PCI_ERR_CAP:
+	case PCI_ERR_HEADER_LOG + 0:
+	case PCI_ERR_HEADER_LOG + 4:
+	case PCI_ERR_HEADER_LOG + 8:
+	case PCI_ERR_HEADER_LOG + 12:
+	case PCI_ERR_ROOT_COMMAND:
+	case PCI_ERR_ROOT_STATUS:
+	case PCI_ERR_ROOT_ERR_SRC:
+		*value = advk_readl(pcie, PCIE_CORE_PCIERR_CAP + reg);
+		return PCI_BRIDGE_EMUL_HANDLED;
+
+	default:
+		return PCI_BRIDGE_EMUL_NOT_HANDLED;
+	}
+}
+
+static void
+advk_pci_bridge_emul_ext_conf_write(struct pci_bridge_emul *bridge,
+				    int reg, u32 old, u32 new, u32 mask)
+{
+	struct advk_pcie *pcie = bridge->data;
+
+	switch (reg) {
+	/* These are W1C registers, so clear other bits */
+	case PCI_ERR_UNCOR_STATUS:
+	case PCI_ERR_COR_STATUS:
+	case PCI_ERR_ROOT_STATUS:
+		new &= mask;
+		fallthrough;
+
+	case PCI_ERR_UNCOR_MASK:
+	case PCI_ERR_UNCOR_SEVER:
+	case PCI_ERR_COR_MASK:
+	case PCI_ERR_CAP:
+	case PCI_ERR_HEADER_LOG + 0:
+	case PCI_ERR_HEADER_LOG + 4:
+	case PCI_ERR_HEADER_LOG + 8:
+	case PCI_ERR_HEADER_LOG + 12:
+	case PCI_ERR_ROOT_COMMAND:
+	case PCI_ERR_ROOT_ERR_SRC:
+		advk_writel(pcie, new, PCIE_CORE_PCIERR_CAP + reg);
+		break;
+
+	default:
+		break;
+	}
+}
+
 static const struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = {
 	.read_base = advk_pci_bridge_emul_base_conf_read,
 	.write_base = advk_pci_bridge_emul_base_conf_write,
 	.read_pcie = advk_pci_bridge_emul_pcie_conf_read,
 	.write_pcie = advk_pci_bridge_emul_pcie_conf_write,
+	.read_ext = advk_pci_bridge_emul_ext_conf_read,
+	.write_ext = advk_pci_bridge_emul_ext_conf_write,
 };
 
 /*
···
 	/* Support interrupt A for MSI feature */
 	bridge->conf.intpin = PCI_INTERRUPT_INTA;
 
-	/* Aardvark HW provides PCIe Capability structure in version 2 */
-	bridge->pcie_conf.cap = cpu_to_le16(2);
+	/*
+	 * Aardvark HW provides PCIe Capability structure in version 2 and
+	 * indicate slot support, which is emulated.
+	 */
+	bridge->pcie_conf.cap = cpu_to_le16(2 | PCI_EXP_FLAGS_SLOT);
+
+	/*
+	 * Set Presence Detect State bit permanently since there is no support
+	 * for unplugging the card nor detecting whether it is plugged. (If a
+	 * platform exists in the future that supports it, via a GPIO for
+	 * example, it should be implemented via this bit.)
+	 *
+	 * Set physical slot number to 1 since there is only one port and zero
+	 * value is reserved for ports within the same silicon as Root Port
+	 * which is not our case.
+	 */
+	bridge->pcie_conf.slotcap = cpu_to_le32(FIELD_PREP(PCI_EXP_SLTCAP_PSN,
+							   1));
+	bridge->pcie_conf.slotsta = cpu_to_le16(PCI_EXP_SLTSTA_PDS);
 
 	/* Indicates supports for Completion Retry Status */
 	bridge->pcie_conf.rootcap = cpu_to_le16(PCI_EXP_RTCAP_CRSVIS);
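In the aardvark write handler above, the RW1C status registers (`PCI_ERR_UNCOR_STATUS`, `PCI_ERR_COR_STATUS`, `PCI_ERR_ROOT_STATUS`) get `new &= mask` before the hardware write, so only bits the caller explicitly wrote as 1 are cleared and a read-modify-write of a neighbouring byte cannot wipe unrelated latched errors. The semantics can be modelled standalone; the register values in the test are arbitrary:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of an emulated config-space write to a write-1-to-clear register:
 * 'reg' is the latched hardware value, 'new_val' the merged 32-bit value
 * the emulation core computed from old contents plus the written bytes,
 * and 'mask' covers only the bytes the caller actually wrote.
 */
static uint32_t w1c_apply(uint32_t reg, uint32_t new_val, uint32_t mask)
{
	new_val &= mask;	/* drop bits merged in from the old value */
	return reg & ~new_val;	/* a written 1 clears the latched bit */
}
```

Without the masking step, the old latched bits merged into `new_val` would read back as 1s and immediately clear themselves on every partial write.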
+166 -40
drivers/pci/controller/pci-loongson.c
··· 9 9 #include <linux/of_pci.h> 10 10 #include <linux/pci.h> 11 11 #include <linux/pci_ids.h> 12 + #include <linux/pci-acpi.h> 13 + #include <linux/pci-ecam.h> 12 14 13 15 #include "../pci.h" 14 16 ··· 20 18 #define DEV_PCIE_PORT_2 0x7a29 21 19 22 20 #define DEV_LS2K_APB 0x7a02 23 - #define DEV_LS7A_CONF 0x7a10 21 + #define DEV_LS7A_GMAC 0x7a03 22 + #define DEV_LS7A_DC1 0x7a06 24 23 #define DEV_LS7A_LPC 0x7a0c 24 + #define DEV_LS7A_AHCI 0x7a08 25 + #define DEV_LS7A_CONF 0x7a10 26 + #define DEV_LS7A_GNET 0x7a13 27 + #define DEV_LS7A_EHCI 0x7a14 28 + #define DEV_LS7A_DC2 0x7a36 29 + #define DEV_LS7A_HDMI 0x7a37 25 30 26 31 #define FLAG_CFG0 BIT(0) 27 32 #define FLAG_CFG1 BIT(1) 28 33 #define FLAG_DEV_FIX BIT(2) 34 + #define FLAG_DEV_HIDDEN BIT(3) 35 + 36 + struct loongson_pci_data { 37 + u32 flags; 38 + struct pci_ops *ops; 39 + }; 29 40 30 41 struct loongson_pci { 31 42 void __iomem *cfg0_base; 32 43 void __iomem *cfg1_base; 33 44 struct platform_device *pdev; 34 - u32 flags; 45 + const struct loongson_pci_data *data; 35 46 }; 36 47 37 48 /* Fixup wrong class code in PCIe bridges */ ··· 107 92 } 108 93 DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, loongson_mrrs_quirk); 109 94 110 - static void __iomem *cfg1_map(struct loongson_pci *priv, int bus, 111 - unsigned int devfn, int where) 95 + static void loongson_pci_pin_quirk(struct pci_dev *pdev) 112 96 { 113 - unsigned long addroff = 0x0; 97 + pdev->pin = 1 + (PCI_FUNC(pdev->devfn) & 3); 98 + } 99 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, 100 + DEV_LS7A_DC1, loongson_pci_pin_quirk); 101 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, 102 + DEV_LS7A_DC2, loongson_pci_pin_quirk); 103 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, 104 + DEV_LS7A_GMAC, loongson_pci_pin_quirk); 105 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, 106 + DEV_LS7A_AHCI, loongson_pci_pin_quirk); 107 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, 108 + DEV_LS7A_EHCI, loongson_pci_pin_quirk); 109 + 
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, 110 + DEV_LS7A_GNET, loongson_pci_pin_quirk); 111 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LOONGSON, 112 + DEV_LS7A_HDMI, loongson_pci_pin_quirk); 114 113 115 - if (bus != 0) 116 - addroff |= BIT(28); /* Type 1 Access */ 117 - addroff |= (where & 0xff) | ((where & 0xf00) << 16); 118 - addroff |= (bus << 16) | (devfn << 8); 119 - return priv->cfg1_base + addroff; 114 + static struct loongson_pci *pci_bus_to_loongson_pci(struct pci_bus *bus) 115 + { 116 + struct pci_config_window *cfg; 117 + 118 + if (acpi_disabled) 119 + return (struct loongson_pci *)(bus->sysdata); 120 + 121 + cfg = bus->sysdata; 122 + return (struct loongson_pci *)(cfg->priv); 120 123 } 121 124 122 - static void __iomem *cfg0_map(struct loongson_pci *priv, int bus, 123 - unsigned int devfn, int where) 125 + static void __iomem *cfg0_map(struct loongson_pci *priv, struct pci_bus *bus, 126 + unsigned int devfn, int where) 124 127 { 125 128 unsigned long addroff = 0x0; 129 + unsigned char busnum = bus->number; 126 130 127 - if (bus != 0) 131 + if (!pci_is_root_bus(bus)) { 128 132 addroff |= BIT(24); /* Type 1 Access */ 129 - addroff |= (bus << 16) | (devfn << 8) | where; 133 + addroff |= (busnum << 16); 134 + } 135 + addroff |= (devfn << 8) | where; 130 136 return priv->cfg0_base + addroff; 131 137 } 132 138 133 - static void __iomem *pci_loongson_map_bus(struct pci_bus *bus, unsigned int devfn, 134 - int where) 139 + static void __iomem *cfg1_map(struct loongson_pci *priv, struct pci_bus *bus, 140 + unsigned int devfn, int where) 135 141 { 142 + unsigned long addroff = 0x0; 136 143 unsigned char busnum = bus->number; 137 - struct pci_host_bridge *bridge = pci_find_host_bridge(bus); 138 - struct loongson_pci *priv = pci_host_bridge_priv(bridge); 144 + 145 + if (!pci_is_root_bus(bus)) { 146 + addroff |= BIT(28); /* Type 1 Access */ 147 + addroff |= (busnum << 16); 148 + } 149 + addroff |= (devfn << 8) | (where & 0xff) | ((where & 0xf00) << 16); 150 + 
return priv->cfg1_base + addroff; 151 + } 152 + 153 + static bool pdev_may_exist(struct pci_bus *bus, unsigned int device, 154 + unsigned int function) 155 + { 156 + return !(pci_is_root_bus(bus) && 157 + (device >= 9 && device <= 20) && (function > 0)); 158 + } 159 + 160 + static void __iomem *pci_loongson_map_bus(struct pci_bus *bus, 161 + unsigned int devfn, int where) 162 + { 163 + unsigned int device = PCI_SLOT(devfn); 164 + unsigned int function = PCI_FUNC(devfn); 165 + struct loongson_pci *priv = pci_bus_to_loongson_pci(bus); 139 166 140 167 /* 141 168 * Do not read more than one device on the bus other than 142 - * the host bus. For our hardware the root bus is always bus 0. 169 + * the host bus. 143 170 */ 144 - if (priv->flags & FLAG_DEV_FIX && busnum != 0 && 145 - PCI_SLOT(devfn) > 0) 146 - return NULL; 171 + if ((priv->data->flags & FLAG_DEV_FIX) && bus->self) { 172 + if (!pci_is_root_bus(bus) && (device > 0)) 173 + return NULL; 174 + } 175 + 176 + /* Don't access non-existent devices */ 177 + if (priv->data->flags & FLAG_DEV_HIDDEN) { 178 + if (!pdev_may_exist(bus, device, function)) 179 + return NULL; 180 + } 147 181 148 182 /* CFG0 can only access standard space */ 149 183 if (where < PCI_CFG_SPACE_SIZE && priv->cfg0_base) 150 - return cfg0_map(priv, busnum, devfn, where); 184 + return cfg0_map(priv, bus, devfn, where); 151 185 152 186 /* CFG1 can access extended space */ 153 187 if (where < PCI_CFG_SPACE_EXP_SIZE && priv->cfg1_base) 154 - return cfg1_map(priv, busnum, devfn, where); 188 + return cfg1_map(priv, bus, devfn, where); 155 189 156 190 return NULL; 157 191 } 192 + 193 + #ifdef CONFIG_OF 158 194 159 195 static int loongson_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 160 196 { ··· 225 159 return val; 226 160 } 227 161 228 - /* H/w only accept 32-bit PCI operations */ 162 + /* LS2K/LS7A accept 8/16/32-bit PCI config operations */ 229 163 static struct pci_ops loongson_pci_ops = { 164 + .map_bus = pci_loongson_map_bus, 165 + .read = 
pci_generic_config_read, 166 + .write = pci_generic_config_write, 167 + }; 168 + 169 + /* RS780/SR5690 only accept 32-bit PCI config operations */ 170 + static struct pci_ops loongson_pci_ops32 = { 230 171 .map_bus = pci_loongson_map_bus, 231 172 .read = pci_generic_config_read32, 232 173 .write = pci_generic_config_write32, 233 174 }; 234 175 176 + static const struct loongson_pci_data ls2k_pci_data = { 177 + .flags = FLAG_CFG1 | FLAG_DEV_FIX | FLAG_DEV_HIDDEN, 178 + .ops = &loongson_pci_ops, 179 + }; 180 + 181 + static const struct loongson_pci_data ls7a_pci_data = { 182 + .flags = FLAG_CFG1 | FLAG_DEV_FIX | FLAG_DEV_HIDDEN, 183 + .ops = &loongson_pci_ops, 184 + }; 185 + 186 + static const struct loongson_pci_data rs780e_pci_data = { 187 + .flags = FLAG_CFG0, 188 + .ops = &loongson_pci_ops32, 189 + }; 190 + 235 191 static const struct of_device_id loongson_pci_of_match[] = { 236 192 { .compatible = "loongson,ls2k-pci", 237 - .data = (void *)(FLAG_CFG0 | FLAG_CFG1 | FLAG_DEV_FIX), }, 193 + .data = &ls2k_pci_data, }, 238 194 { .compatible = "loongson,ls7a-pci", 239 - .data = (void *)(FLAG_CFG0 | FLAG_CFG1 | FLAG_DEV_FIX), }, 195 + .data = &ls7a_pci_data, }, 240 196 { .compatible = "loongson,rs780e-pci", 241 - .data = (void *)(FLAG_CFG0), }, 197 + .data = &rs780e_pci_data, }, 242 198 {} 243 199 }; 244 200 ··· 281 193 282 194 priv = pci_host_bridge_priv(bridge); 283 195 priv->pdev = pdev; 284 - priv->flags = (unsigned long)of_device_get_match_data(dev); 196 + priv->data = of_device_get_match_data(dev); 285 197 286 - regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 287 - if (!regs) { 288 - dev_err(dev, "missing mem resources for cfg0\n"); 289 - return -EINVAL; 198 + if (priv->data->flags & FLAG_CFG0) { 199 + regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 200 + if (!regs) 201 + dev_err(dev, "missing mem resources for cfg0\n"); 202 + else { 203 + priv->cfg0_base = devm_pci_remap_cfg_resource(dev, regs); 204 + if (IS_ERR(priv->cfg0_base)) 205 + return 
PTR_ERR(priv->cfg0_base); 206 + } 290 207 } 291 208 292 - priv->cfg0_base = devm_pci_remap_cfg_resource(dev, regs); 293 - if (IS_ERR(priv->cfg0_base)) 294 - return PTR_ERR(priv->cfg0_base); 295 - 296 - /* CFG1 is optional */ 297 - if (priv->flags & FLAG_CFG1) { 209 + if (priv->data->flags & FLAG_CFG1) { 298 210 regs = platform_get_resource(pdev, IORESOURCE_MEM, 1); 299 211 if (!regs) 300 212 dev_info(dev, "missing mem resource for cfg1\n"); ··· 306 218 } 307 219 308 220 bridge->sysdata = priv; 309 - bridge->ops = &loongson_pci_ops; 221 + bridge->ops = priv->data->ops; 310 222 bridge->map_irq = loongson_map_irq; 311 223 312 224 return pci_host_probe(bridge); ··· 320 232 .probe = loongson_pci_probe, 321 233 }; 322 234 builtin_platform_driver(loongson_pci_driver); 235 + 236 + #endif 237 + 238 + #ifdef CONFIG_ACPI 239 + 240 + static int loongson_pci_ecam_init(struct pci_config_window *cfg) 241 + { 242 + struct device *dev = cfg->parent; 243 + struct loongson_pci *priv; 244 + struct loongson_pci_data *data; 245 + 246 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 247 + if (!priv) 248 + return -ENOMEM; 249 + 250 + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 251 + if (!data) 252 + return -ENOMEM; 253 + 254 + cfg->priv = priv; 255 + data->flags = FLAG_CFG1 | FLAG_DEV_HIDDEN; 256 + priv->data = data; 257 + priv->cfg1_base = cfg->win - (cfg->busr.start << 16); 258 + 259 + return 0; 260 + } 261 + 262 + const struct pci_ecam_ops loongson_pci_ecam_ops = { 263 + .bus_shift = 16, 264 + .init = loongson_pci_ecam_init, 265 + .pci_ops = { 266 + .map_bus = pci_loongson_map_bus, 267 + .read = pci_generic_config_read, 268 + .write = pci_generic_config_write, 269 + } 270 + }; 271 + 272 + #endif
+1 -3
drivers/pci/controller/pci-mvebu.c
··· 1216 1216 return -ENOENT; 1217 1217 } 1218 1218 1219 - #ifdef CONFIG_PM_SLEEP 1220 1219 static int mvebu_pcie_suspend(struct device *dev) 1221 1220 { 1222 1221 struct mvebu_pcie *pcie; ··· 1248 1249 1249 1250 return 0; 1250 1251 } 1251 - #endif 1252 1252 1253 1253 static void mvebu_pcie_port_clk_put(void *data) 1254 1254 { ··· 1735 1737 }; 1736 1738 1737 1739 static const struct dev_pm_ops mvebu_pcie_pm_ops = { 1738 - SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mvebu_pcie_suspend, mvebu_pcie_resume) 1740 + NOIRQ_SYSTEM_SLEEP_PM_OPS(mvebu_pcie_suspend, mvebu_pcie_resume) 1739 1741 }; 1740 1742 1741 1743 static struct platform_driver mvebu_pcie_driver = {
+1
drivers/pci/controller/pci-rcar-gen2.c
··· 328 328 { .compatible = "renesas,pci-r8a7791", }, 329 329 { .compatible = "renesas,pci-r8a7794", }, 330 330 { .compatible = "renesas,pci-rcar-gen2", }, 331 + { .compatible = "renesas,pci-rzn1", }, 331 332 { }, 332 333 }; 333 334
+4 -5
drivers/pci/controller/pci-tegra.c
··· 2707 2707 return 0; 2708 2708 } 2709 2709 2710 - static int __maybe_unused tegra_pcie_pm_suspend(struct device *dev) 2710 + static int tegra_pcie_pm_suspend(struct device *dev) 2711 2711 { 2712 2712 struct tegra_pcie *pcie = dev_get_drvdata(dev); 2713 2713 struct tegra_pcie_port *port; ··· 2742 2742 return 0; 2743 2743 } 2744 2744 2745 - static int __maybe_unused tegra_pcie_pm_resume(struct device *dev) 2745 + static int tegra_pcie_pm_resume(struct device *dev) 2746 2746 { 2747 2747 struct tegra_pcie *pcie = dev_get_drvdata(dev); 2748 2748 int err; ··· 2798 2798 } 2799 2799 2800 2800 static const struct dev_pm_ops tegra_pcie_pm_ops = { 2801 - SET_RUNTIME_PM_OPS(tegra_pcie_pm_suspend, tegra_pcie_pm_resume, NULL) 2802 - SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(tegra_pcie_pm_suspend, 2803 - tegra_pcie_pm_resume) 2801 + RUNTIME_PM_OPS(tegra_pcie_pm_suspend, tegra_pcie_pm_resume, NULL) 2802 + NOIRQ_SYSTEM_SLEEP_PM_OPS(tegra_pcie_pm_suspend, tegra_pcie_pm_resume) 2804 2803 }; 2805 2804 2806 2805 static struct platform_driver tegra_pcie_driver = {
+1 -1
drivers/pci/controller/pci-xgene.c
··· 641 641 static struct platform_driver xgene_pcie_driver = { 642 642 .driver = { 643 643 .name = "xgene-pcie", 644 - .of_match_table = of_match_ptr(xgene_pcie_match_table), 644 + .of_match_table = xgene_pcie_match_table, 645 645 .suppress_bind_attrs = true, 646 646 }, 647 647 .probe = xgene_pcie_probe,
+305 -138
drivers/pci/controller/pcie-brcmstb.c
··· 24 24 #include <linux/pci.h> 25 25 #include <linux/pci-ecam.h> 26 26 #include <linux/printk.h> 27 + #include <linux/regulator/consumer.h> 27 28 #include <linux/reset.h> 28 29 #include <linux/sizes.h> 29 30 #include <linux/slab.h> ··· 191 190 192 191 /* Forward declarations */ 193 192 struct brcm_pcie; 194 - static inline void brcm_pcie_bridge_sw_init_set_7278(struct brcm_pcie *pcie, u32 val); 195 - static inline void brcm_pcie_bridge_sw_init_set_generic(struct brcm_pcie *pcie, u32 val); 196 - static inline void brcm_pcie_perst_set_4908(struct brcm_pcie *pcie, u32 val); 197 - static inline void brcm_pcie_perst_set_7278(struct brcm_pcie *pcie, u32 val); 198 - static inline void brcm_pcie_perst_set_generic(struct brcm_pcie *pcie, u32 val); 199 193 200 194 enum { 201 195 RGR1_SW_INIT_1, ··· 219 223 void (*bridge_sw_init_set)(struct brcm_pcie *pcie, u32 val); 220 224 }; 221 225 222 - static const int pcie_offsets[] = { 223 - [RGR1_SW_INIT_1] = 0x9210, 224 - [EXT_CFG_INDEX] = 0x9000, 225 - [EXT_CFG_DATA] = 0x9004, 226 - }; 227 - 228 - static const int pcie_offsets_bmips_7425[] = { 229 - [RGR1_SW_INIT_1] = 0x8010, 230 - [EXT_CFG_INDEX] = 0x8300, 231 - [EXT_CFG_DATA] = 0x8304, 232 - }; 233 - 234 - static const struct pcie_cfg_data generic_cfg = { 235 - .offsets = pcie_offsets, 236 - .type = GENERIC, 237 - .perst_set = brcm_pcie_perst_set_generic, 238 - .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic, 239 - }; 240 - 241 - static const struct pcie_cfg_data bcm7425_cfg = { 242 - .offsets = pcie_offsets_bmips_7425, 243 - .type = BCM7425, 244 - .perst_set = brcm_pcie_perst_set_generic, 245 - .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic, 246 - }; 247 - 248 - static const struct pcie_cfg_data bcm7435_cfg = { 249 - .offsets = pcie_offsets, 250 - .type = BCM7435, 251 - .perst_set = brcm_pcie_perst_set_generic, 252 - .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic, 253 - }; 254 - 255 - static const struct pcie_cfg_data bcm4908_cfg = { 256 - 
.offsets = pcie_offsets, 257 - .type = BCM4908, 258 - .perst_set = brcm_pcie_perst_set_4908, 259 - .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic, 260 - }; 261 - 262 - static const int pcie_offset_bcm7278[] = { 263 - [RGR1_SW_INIT_1] = 0xc010, 264 - [EXT_CFG_INDEX] = 0x9000, 265 - [EXT_CFG_DATA] = 0x9004, 266 - }; 267 - 268 - static const struct pcie_cfg_data bcm7278_cfg = { 269 - .offsets = pcie_offset_bcm7278, 270 - .type = BCM7278, 271 - .perst_set = brcm_pcie_perst_set_7278, 272 - .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_7278, 273 - }; 274 - 275 - static const struct pcie_cfg_data bcm2711_cfg = { 276 - .offsets = pcie_offsets, 277 - .type = BCM2711, 278 - .perst_set = brcm_pcie_perst_set_generic, 279 - .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic, 226 + struct subdev_regulators { 227 + unsigned int num_supplies; 228 + struct regulator_bulk_data supplies[]; 280 229 }; 281 230 282 231 struct brcm_msi { ··· 261 320 u32 hw_rev; 262 321 void (*perst_set)(struct brcm_pcie *pcie, u32 val); 263 322 void (*bridge_sw_init_set)(struct brcm_pcie *pcie, u32 val); 323 + struct subdev_regulators *sr; 324 + bool ep_wakeup_capable; 264 325 }; 265 326 266 327 static inline bool is_bmips(const struct brcm_pcie *pcie) ··· 684 741 return dla && plu; 685 742 } 686 743 687 - static void __iomem *brcm_pcie_map_conf(struct pci_bus *bus, unsigned int devfn, 688 - int where) 744 + static void __iomem *brcm_pcie_map_bus(struct pci_bus *bus, 745 + unsigned int devfn, int where) 689 746 { 690 747 struct brcm_pcie *pcie = bus->sysdata; 691 748 void __iomem *base = pcie->base; 692 749 int idx; 693 750 694 - /* Accesses to the RC go right to the RC registers if slot==0 */ 751 + /* Accesses to the RC go right to the RC registers if !devfn */ 695 752 if (pci_is_root_bus(bus)) 696 - return PCI_SLOT(devfn) ? NULL : base + where; 753 + return devfn ? 
NULL : base + PCIE_ECAM_REG(where); 754 + 755 + /* An access to our HW w/o link-up will cause a CPU Abort */ 756 + if (!brcm_pcie_link_up(pcie)) 757 + return NULL; 697 758 698 759 /* For devices, write to the config space index register */ 699 760 idx = PCIE_ECAM_OFFSET(bus->number, devfn, 0); 700 761 writel(idx, pcie->base + PCIE_EXT_CFG_INDEX); 701 - return base + PCIE_EXT_CFG_DATA + where; 762 + return base + PCIE_EXT_CFG_DATA + PCIE_ECAM_REG(where); 702 763 } 703 764 704 - static void __iomem *brcm_pcie_map_conf32(struct pci_bus *bus, unsigned int devfn, 705 - int where) 765 + static void __iomem *brcm7425_pcie_map_bus(struct pci_bus *bus, 766 + unsigned int devfn, int where) 706 767 { 707 768 struct brcm_pcie *pcie = bus->sysdata; 708 769 void __iomem *base = pcie->base; 709 770 int idx; 710 771 711 - /* Accesses to the RC go right to the RC registers if slot==0 */ 772 + /* Accesses to the RC go right to the RC registers if !devfn */ 712 773 if (pci_is_root_bus(bus)) 713 - return PCI_SLOT(devfn) ? NULL : base + (where & ~0x3); 774 + return devfn ? 
NULL : base + PCIE_ECAM_REG(where); 775 + 776 + /* An access to our HW w/o link-up will cause a CPU Abort */ 777 + if (!brcm_pcie_link_up(pcie)) 778 + return NULL; 714 779 715 780 /* For devices, write to the config space index register */ 716 - idx = PCIE_ECAM_OFFSET(bus->number, devfn, (where & ~3)); 781 + idx = PCIE_ECAM_OFFSET(bus->number, devfn, where); 717 782 writel(idx, base + IDX_ADDR(pcie)); 718 783 return base + DATA_ADDR(pcie); 719 784 } 720 - 721 - static struct pci_ops brcm_pcie_ops = { 722 - .map_bus = brcm_pcie_map_conf, 723 - .read = pci_generic_config_read, 724 - .write = pci_generic_config_write, 725 - }; 726 - 727 - static struct pci_ops brcm_pcie_ops32 = { 728 - .map_bus = brcm_pcie_map_conf32, 729 - .read = pci_generic_config_read32, 730 - .write = pci_generic_config_write32, 731 - }; 732 785 733 786 static inline void brcm_pcie_bridge_sw_init_set_generic(struct brcm_pcie *pcie, u32 val) 734 787 { ··· 865 926 866 927 static int brcm_pcie_setup(struct brcm_pcie *pcie) 867 928 { 868 - struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); 869 929 u64 rc_bar2_offset, rc_bar2_size; 870 930 void __iomem *base = pcie->base; 871 - struct device *dev = pcie->dev; 931 + struct pci_host_bridge *bridge; 872 932 struct resource_entry *entry; 873 - bool ssc_good = false; 874 - struct resource *res; 875 - int num_out_wins = 0; 876 - u16 nlw, cls, lnksta; 877 - int i, ret, memc; 878 933 u32 tmp, burst, aspm_support; 934 + int num_out_wins = 0; 935 + int ret, memc; 879 936 880 937 /* Reset the bridge */ 881 938 pcie->bridge_sw_init_set(pcie, 1); ··· 947 1012 else 948 1013 pcie->msi_target_addr = BRCM_MSI_TARGET_ADDR_GT_4GB; 949 1014 1015 + if (!brcm_pcie_rc_mode(pcie)) { 1016 + dev_err(pcie->dev, "PCIe RC controller misconfigured as Endpoint\n"); 1017 + return -EINVAL; 1018 + } 1019 + 950 1020 /* disable the PCIe->GISB memory window (RC_BAR1) */ 951 1021 tmp = readl(base + PCIE_MISC_RC_BAR1_CONFIG_LO); 952 1022 tmp &= 
~PCIE_MISC_RC_BAR1_CONFIG_LO_SIZE_MASK; ··· 962 1022 tmp &= ~PCIE_MISC_RC_BAR3_CONFIG_LO_SIZE_MASK; 963 1023 writel(tmp, base + PCIE_MISC_RC_BAR3_CONFIG_LO); 964 1024 965 - if (pcie->gen) 966 - brcm_pcie_set_gen(pcie, pcie->gen); 967 - 968 - /* Unassert the fundamental reset */ 969 - pcie->perst_set(pcie, 0); 1025 + /* Don't advertise L0s capability if 'aspm-no-l0s' */ 1026 + aspm_support = PCIE_LINK_STATE_L1; 1027 + if (!of_property_read_bool(pcie->np, "aspm-no-l0s")) 1028 + aspm_support |= PCIE_LINK_STATE_L0S; 1029 + tmp = readl(base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY); 1030 + u32p_replace_bits(&tmp, aspm_support, 1031 + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK); 1032 + writel(tmp, base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY); 970 1033 971 1034 /* 972 - * Give the RC/EP time to wake up, before trying to configure RC. 973 - * Intermittently check status for link-up, up to a total of 100ms. 1035 + * For config space accesses on the RC, show the right class for 1036 + * a PCIe-PCIe bridge (the default setting is to be EP mode). 
974 1037 */ 975 - for (i = 0; i < 100 && !brcm_pcie_link_up(pcie); i += 5) 976 - msleep(5); 1038 + tmp = readl(base + PCIE_RC_CFG_PRIV1_ID_VAL3); 1039 + u32p_replace_bits(&tmp, 0x060400, 1040 + PCIE_RC_CFG_PRIV1_ID_VAL3_CLASS_CODE_MASK); 1041 + writel(tmp, base + PCIE_RC_CFG_PRIV1_ID_VAL3); 977 1042 978 - if (!brcm_pcie_link_up(pcie)) { 979 - dev_err(dev, "link down\n"); 980 - return -ENODEV; 981 - } 982 - 983 - if (!brcm_pcie_rc_mode(pcie)) { 984 - dev_err(dev, "PCIe misconfigured; is in EP mode\n"); 985 - return -EINVAL; 986 - } 987 - 1043 + bridge = pci_host_bridge_from_priv(pcie); 988 1044 resource_list_for_each_entry(entry, &bridge->windows) { 989 - res = entry->res; 1045 + struct resource *res = entry->res; 990 1046 991 1047 if (resource_type(res) != IORESOURCE_MEM) 992 1048 continue; ··· 1011 1075 num_out_wins++; 1012 1076 } 1013 1077 1014 - /* Don't advertise L0s capability if 'aspm-no-l0s' */ 1015 - aspm_support = PCIE_LINK_STATE_L1; 1016 - if (!of_property_read_bool(pcie->np, "aspm-no-l0s")) 1017 - aspm_support |= PCIE_LINK_STATE_L0S; 1018 - tmp = readl(base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY); 1019 - u32p_replace_bits(&tmp, aspm_support, 1020 - PCIE_RC_CFG_PRIV1_LINK_CAPABILITY_ASPM_SUPPORT_MASK); 1021 - writel(tmp, base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY); 1078 + /* PCIe->SCB endian mode for BAR */ 1079 + tmp = readl(base + PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1); 1080 + u32p_replace_bits(&tmp, PCIE_RC_CFG_VENDOR_SPCIFIC_REG1_LITTLE_ENDIAN, 1081 + PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1_ENDIAN_MODE_BAR2_MASK); 1082 + writel(tmp, base + PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1); 1083 + 1084 + return 0; 1085 + } 1086 + 1087 + static int brcm_pcie_start_link(struct brcm_pcie *pcie) 1088 + { 1089 + struct device *dev = pcie->dev; 1090 + void __iomem *base = pcie->base; 1091 + u16 nlw, cls, lnksta; 1092 + bool ssc_good = false; 1093 + u32 tmp; 1094 + int ret, i; 1095 + 1096 + /* Unassert the fundamental reset */ 1097 + pcie->perst_set(pcie, 0); 1022 1098 
1023 1099 /* 1024 - * For config space accesses on the RC, show the right class for 1025 - * a PCIe-PCIe bridge (the default setting is to be EP mode). 1100 + * Give the RC/EP time to wake up, before trying to configure RC. 1101 + * Intermittently check status for link-up, up to a total of 100ms. 1026 1102 */ 1027 - tmp = readl(base + PCIE_RC_CFG_PRIV1_ID_VAL3); 1028 - u32p_replace_bits(&tmp, 0x060400, 1029 - PCIE_RC_CFG_PRIV1_ID_VAL3_CLASS_CODE_MASK); 1030 - writel(tmp, base + PCIE_RC_CFG_PRIV1_ID_VAL3); 1103 + for (i = 0; i < 100 && !brcm_pcie_link_up(pcie); i += 5) 1104 + msleep(5); 1105 + 1106 + if (!brcm_pcie_link_up(pcie)) { 1107 + dev_err(dev, "link down\n"); 1108 + return -ENODEV; 1109 + } 1110 + 1111 + if (pcie->gen) 1112 + brcm_pcie_set_gen(pcie, pcie->gen); 1031 1113 1032 1114 if (pcie->ssc) { 1033 1115 ret = brcm_pcie_set_ssc(pcie); ··· 1062 1108 pci_speed_string(pcie_link_speed[cls]), nlw, 1063 1109 ssc_good ? "(SSC)" : "(!SSC)"); 1064 1110 1065 - /* PCIe->SCB endian mode for BAR */ 1066 - tmp = readl(base + PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1); 1067 - u32p_replace_bits(&tmp, PCIE_RC_CFG_VENDOR_SPCIFIC_REG1_LITTLE_ENDIAN, 1068 - PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1_ENDIAN_MODE_BAR2_MASK); 1069 - writel(tmp, base + PCIE_RC_CFG_VENDOR_VENDOR_SPECIFIC_REG1); 1070 - 1071 1111 /* 1072 1112 * Refclk from RC should be gated with CLKREQ# input when ASPM L0s,L1 1073 1113 * is enabled => setting the CLKREQ_DEBUG_ENABLE field to 1. 
··· 1071 1123 writel(tmp, base + PCIE_MISC_HARD_PCIE_HARD_DEBUG); 1072 1124 1073 1125 return 0; 1126 + } 1127 + 1128 + static const char * const supplies[] = { 1129 + "vpcie3v3", 1130 + "vpcie3v3aux", 1131 + "vpcie12v", 1132 + }; 1133 + 1134 + static void *alloc_subdev_regulators(struct device *dev) 1135 + { 1136 + const size_t size = sizeof(struct subdev_regulators) + 1137 + sizeof(struct regulator_bulk_data) * ARRAY_SIZE(supplies); 1138 + struct subdev_regulators *sr; 1139 + int i; 1140 + 1141 + sr = devm_kzalloc(dev, size, GFP_KERNEL); 1142 + if (sr) { 1143 + sr->num_supplies = ARRAY_SIZE(supplies); 1144 + for (i = 0; i < ARRAY_SIZE(supplies); i++) 1145 + sr->supplies[i].supply = supplies[i]; 1146 + } 1147 + 1148 + return sr; 1149 + } 1150 + 1151 + static int brcm_pcie_add_bus(struct pci_bus *bus) 1152 + { 1153 + struct brcm_pcie *pcie = bus->sysdata; 1154 + struct device *dev = &bus->dev; 1155 + struct subdev_regulators *sr; 1156 + int ret; 1157 + 1158 + if (!bus->parent || !pci_is_root_bus(bus->parent)) 1159 + return 0; 1160 + 1161 + if (dev->of_node) { 1162 + sr = alloc_subdev_regulators(dev); 1163 + if (!sr) { 1164 + dev_info(dev, "Can't allocate regulators for downstream device\n"); 1165 + goto no_regulators; 1166 + } 1167 + 1168 + pcie->sr = sr; 1169 + 1170 + ret = regulator_bulk_get(dev, sr->num_supplies, sr->supplies); 1171 + if (ret) { 1172 + dev_info(dev, "No regulators for downstream device\n"); 1173 + goto no_regulators; 1174 + } 1175 + 1176 + ret = regulator_bulk_enable(sr->num_supplies, sr->supplies); 1177 + if (ret) { 1178 + dev_err(dev, "Can't enable regulators for downstream device\n"); 1179 + regulator_bulk_free(sr->num_supplies, sr->supplies); 1180 + pcie->sr = NULL; 1181 + } 1182 + } 1183 + 1184 + no_regulators: 1185 + brcm_pcie_start_link(pcie); 1186 + return 0; 1187 + } 1188 + 1189 + static void brcm_pcie_remove_bus(struct pci_bus *bus) 1190 + { 1191 + struct brcm_pcie *pcie = bus->sysdata; 1192 + struct subdev_regulators *sr = pcie->sr; 
1193 + struct device *dev = &bus->dev; 1194 + 1195 + if (!sr) 1196 + return; 1197 + 1198 + if (regulator_bulk_disable(sr->num_supplies, sr->supplies)) 1199 + dev_err(dev, "Failed to disable regulators for downstream device\n"); 1200 + regulator_bulk_free(sr->num_supplies, sr->supplies); 1201 + pcie->sr = NULL; 1074 1202 } 1075 1203 1076 1204 /* L23 is a low-power PCIe link state */ ··· 1245 1221 pcie->bridge_sw_init_set(pcie, 1); 1246 1222 } 1247 1223 1248 - static int brcm_pcie_suspend(struct device *dev) 1224 + static int pci_dev_may_wakeup(struct pci_dev *dev, void *data) 1225 + { 1226 + bool *ret = data; 1227 + 1228 + if (device_may_wakeup(&dev->dev)) { 1229 + *ret = true; 1230 + dev_info(&dev->dev, "Possible wake-up device; regulators will not be disabled\n"); 1231 + } 1232 + return (int) *ret; 1233 + } 1234 + 1235 + static int brcm_pcie_suspend_noirq(struct device *dev) 1249 1236 { 1250 1237 struct brcm_pcie *pcie = dev_get_drvdata(dev); 1238 + struct pci_host_bridge *bridge = pci_host_bridge_from_priv(pcie); 1251 1239 int ret; 1252 1240 1253 1241 brcm_pcie_turn_off(pcie); ··· 1277 1241 return ret; 1278 1242 } 1279 1243 1244 + if (pcie->sr) { 1245 + /* 1246 + * Now turn off the regulators, but if at least one 1247 + * downstream device is enabled as a wake-up source, do not 1248 + * turn off regulators. 
1249 + */ 1250 + pcie->ep_wakeup_capable = false; 1251 + pci_walk_bus(bridge->bus, pci_dev_may_wakeup, 1252 + &pcie->ep_wakeup_capable); 1253 + if (!pcie->ep_wakeup_capable) { 1254 + ret = regulator_bulk_disable(pcie->sr->num_supplies, 1255 + pcie->sr->supplies); 1256 + if (ret) { 1257 + dev_err(dev, "Could not turn off regulators\n"); 1258 + reset_control_reset(pcie->rescal); 1259 + return ret; 1260 + } 1261 + } 1262 + } 1280 1263 clk_disable_unprepare(pcie->clk); 1281 1264 1282 1265 return 0; 1283 1266 } 1284 1267 1285 - static int brcm_pcie_resume(struct device *dev) 1268 + static int brcm_pcie_resume_noirq(struct device *dev) 1286 1269 { 1287 1270 struct brcm_pcie *pcie = dev_get_drvdata(dev); 1288 1271 void __iomem *base; ··· 1336 1281 if (ret) 1337 1282 goto err_reset; 1338 1283 1284 + if (pcie->sr) { 1285 + if (pcie->ep_wakeup_capable) { 1286 + /* 1287 + * We are resuming from a suspend. In the suspend we 1288 + * did not disable the power supplies, so there is 1289 + * no need to enable them (and falsely increase their 1290 + * usage count). 
1291 + */ 1292 + pcie->ep_wakeup_capable = false; 1293 + } else { 1294 + ret = regulator_bulk_enable(pcie->sr->num_supplies, 1295 + pcie->sr->supplies); 1296 + if (ret) { 1297 + dev_err(dev, "Could not turn on regulators\n"); 1298 + goto err_reset; 1299 + } 1300 + } 1301 + } 1302 + 1303 + ret = brcm_pcie_start_link(pcie); 1304 + if (ret) 1305 + goto err_regulator; 1306 + 1339 1307 if (pcie->msi) 1340 1308 brcm_msi_set_regs(pcie->msi); 1341 1309 1342 1310 return 0; 1343 1311 1312 + err_regulator: 1313 + if (pcie->sr) 1314 + regulator_bulk_disable(pcie->sr->num_supplies, pcie->sr->supplies); 1344 1315 err_reset: 1345 1316 reset_control_rearm(pcie->rescal); 1346 1317 err_disable_clk: ··· 1397 1316 return 0; 1398 1317 } 1399 1318 1319 + static const int pcie_offsets[] = { 1320 + [RGR1_SW_INIT_1] = 0x9210, 1321 + [EXT_CFG_INDEX] = 0x9000, 1322 + [EXT_CFG_DATA] = 0x9004, 1323 + }; 1324 + 1325 + static const int pcie_offsets_bmips_7425[] = { 1326 + [RGR1_SW_INIT_1] = 0x8010, 1327 + [EXT_CFG_INDEX] = 0x8300, 1328 + [EXT_CFG_DATA] = 0x8304, 1329 + }; 1330 + 1331 + static const struct pcie_cfg_data generic_cfg = { 1332 + .offsets = pcie_offsets, 1333 + .type = GENERIC, 1334 + .perst_set = brcm_pcie_perst_set_generic, 1335 + .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic, 1336 + }; 1337 + 1338 + static const struct pcie_cfg_data bcm7425_cfg = { 1339 + .offsets = pcie_offsets_bmips_7425, 1340 + .type = BCM7425, 1341 + .perst_set = brcm_pcie_perst_set_generic, 1342 + .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic, 1343 + }; 1344 + 1345 + static const struct pcie_cfg_data bcm7435_cfg = { 1346 + .offsets = pcie_offsets, 1347 + .type = BCM7435, 1348 + .perst_set = brcm_pcie_perst_set_generic, 1349 + .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic, 1350 + }; 1351 + 1352 + static const struct pcie_cfg_data bcm4908_cfg = { 1353 + .offsets = pcie_offsets, 1354 + .type = BCM4908, 1355 + .perst_set = brcm_pcie_perst_set_4908, 1356 + .bridge_sw_init_set 
= brcm_pcie_bridge_sw_init_set_generic, 1357 + }; 1358 + 1359 + static const int pcie_offset_bcm7278[] = { 1360 + [RGR1_SW_INIT_1] = 0xc010, 1361 + [EXT_CFG_INDEX] = 0x9000, 1362 + [EXT_CFG_DATA] = 0x9004, 1363 + }; 1364 + 1365 + static const struct pcie_cfg_data bcm7278_cfg = { 1366 + .offsets = pcie_offset_bcm7278, 1367 + .type = BCM7278, 1368 + .perst_set = brcm_pcie_perst_set_7278, 1369 + .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_7278, 1370 + }; 1371 + 1372 + static const struct pcie_cfg_data bcm2711_cfg = { 1373 + .offsets = pcie_offsets, 1374 + .type = BCM2711, 1375 + .perst_set = brcm_pcie_perst_set_generic, 1376 + .bridge_sw_init_set = brcm_pcie_bridge_sw_init_set_generic, 1377 + }; 1378 + 1400 1379 static const struct of_device_id brcm_pcie_match[] = { 1401 1380 { .compatible = "brcm,bcm2711-pcie", .data = &bcm2711_cfg }, 1402 1381 { .compatible = "brcm,bcm4908-pcie", .data = &bcm4908_cfg }, ··· 1467 1326 { .compatible = "brcm,bcm7435-pcie", .data = &bcm7435_cfg }, 1468 1327 { .compatible = "brcm,bcm7425-pcie", .data = &bcm7425_cfg }, 1469 1328 {}, 1329 + }; 1330 + 1331 + static struct pci_ops brcm_pcie_ops = { 1332 + .map_bus = brcm_pcie_map_bus, 1333 + .read = pci_generic_config_read, 1334 + .write = pci_generic_config_write, 1335 + .add_bus = brcm_pcie_add_bus, 1336 + .remove_bus = brcm_pcie_remove_bus, 1337 + }; 1338 + 1339 + static struct pci_ops brcm7425_pcie_ops = { 1340 + .map_bus = brcm7425_pcie_map_bus, 1341 + .read = pci_generic_config_read32, 1342 + .write = pci_generic_config_write32, 1343 + .add_bus = brcm_pcie_add_bus, 1344 + .remove_bus = brcm_pcie_remove_bus, 1470 1345 }; 1471 1346 1472 1347 static int brcm_pcie_probe(struct platform_device *pdev) ··· 1571 1414 } 1572 1415 } 1573 1416 1574 - bridge->ops = pcie->type == BCM7425 ? &brcm_pcie_ops32 : &brcm_pcie_ops; 1417 + bridge->ops = pcie->type == BCM7425 ? 
&brcm7425_pcie_ops : &brcm_pcie_ops; 1575 1418 bridge->sysdata = pcie; 1576 1419 1577 1420 platform_set_drvdata(pdev, pcie); 1578 1421 1579 - return pci_host_probe(bridge); 1422 + ret = pci_host_probe(bridge); 1423 + if (!ret && !brcm_pcie_link_up(pcie)) 1424 + ret = -ENODEV; 1425 + 1426 + if (ret) { 1427 + brcm_pcie_remove(pdev); 1428 + return ret; 1429 + } 1430 + 1431 + return 0; 1432 + 1580 1433 fail: 1581 1434 __brcm_pcie_remove(pcie); 1582 1435 return ret; ··· 1595 1428 MODULE_DEVICE_TABLE(of, brcm_pcie_match); 1596 1429 1597 1430 static const struct dev_pm_ops brcm_pcie_pm_ops = { 1598 - .suspend = brcm_pcie_suspend, 1599 - .resume = brcm_pcie_resume, 1431 + .suspend_noirq = brcm_pcie_suspend_noirq, 1432 + .resume_noirq = brcm_pcie_resume_noirq, 1600 1433 }; 1601 1434 1602 1435 static struct platform_driver brcm_pcie_driver = {
+2 -2
drivers/pci/controller/pcie-iproc-msi.c
···
 	msi->has_inten_reg = true;
 
 	msi->nr_msi_vecs = msi->nr_irqs * EQ_LEN;
-	msi->bitmap = devm_kcalloc(pcie->dev, BITS_TO_LONGS(msi->nr_msi_vecs),
-				   sizeof(*msi->bitmap), GFP_KERNEL);
+	msi->bitmap = devm_bitmap_zalloc(pcie->dev, msi->nr_msi_vecs,
+					 GFP_KERNEL);
 	if (!msi->bitmap)
 		return -ENOMEM;
 
+52 -10
drivers/pci/controller/pcie-mediatek-gen3.c
···
 	DECLARE_BITMAP(msi_irq_in_use, PCIE_MSI_IRQS_NUM);
 };
 
+/* LTSSM state in PCIE_LTSSM_STATUS_REG bit[28:24] */
+static const char *const ltssm_str[] = {
+	"detect.quiet",			/* 0x00 */
+	"detect.active",		/* 0x01 */
+	"polling.active",		/* 0x02 */
+	"polling.compliance",		/* 0x03 */
+	"polling.configuration",	/* 0x04 */
+	"config.linkwidthstart",	/* 0x05 */
+	"config.linkwidthaccept",	/* 0x06 */
+	"config.lanenumwait",		/* 0x07 */
+	"config.lanenumaccept",		/* 0x08 */
+	"config.complete",		/* 0x09 */
+	"config.idle",			/* 0x0A */
+	"recovery.receiverlock",	/* 0x0B */
+	"recovery.equalization",	/* 0x0C */
+	"recovery.speed",		/* 0x0D */
+	"recovery.receiverconfig",	/* 0x0E */
+	"recovery.idle",		/* 0x0F */
+	"L0",				/* 0x10 */
+	"L0s",				/* 0x11 */
+	"L1.entry",			/* 0x12 */
+	"L1.idle",			/* 0x13 */
+	"L2.idle",			/* 0x14 */
+	"L2.transmitwake",		/* 0x15 */
+	"disable",			/* 0x16 */
+	"loopback.entry",		/* 0x17 */
+	"loopback.active",		/* 0x18 */
+	"loopback.exit",		/* 0x19 */
+	"hotreset",			/* 0x1A */
+};
+
 /**
  * mtk_pcie_config_tlp_header() - Configure a configuration TLP header
  * @bus: PCI bus to query
···
 				 !!(val & PCIE_PORT_LINKUP), 20,
 				 PCI_PM_D3COLD_WAIT * USEC_PER_MSEC);
 	if (err) {
+		const char *ltssm_state;
+		int ltssm_index;
+
 		val = readl_relaxed(pcie->base + PCIE_LTSSM_STATUS_REG);
-		dev_err(pcie->dev, "PCIe link down, ltssm reg val: %#x\n", val);
+		ltssm_index = PCIE_LTSSM_STATE(val);
+		ltssm_state = ltssm_index >= ARRAY_SIZE(ltssm_str) ?
+			      "Unknown state" : ltssm_str[ltssm_index];
+		dev_err(pcie->dev,
+			"PCIe link down, current LTSSM state: %s (%#x)\n",
+			ltssm_state, val);
 		return err;
 	}
 
···
 					     &intx_domain_ops, pcie);
 	if (!pcie->intx_domain) {
 		dev_err(dev, "failed to create INTx IRQ domain\n");
-		return -ENODEV;
+		ret = -ENODEV;
+		goto out_put_node;
 	}
 
 	/* Setup MSI */
···
 		goto err_msi_domain;
 	}
 
+	of_node_put(intc_node);
 	return 0;
 
 err_msi_domain:
 	irq_domain_remove(pcie->msi_bottom_domain);
 err_msi_bottom_domain:
 	irq_domain_remove(pcie->intx_domain);
-
+out_put_node:
+	of_node_put(intc_node);
 	return ret;
 }
 
···
 	return 0;
 }
 
-static void __maybe_unused mtk_pcie_irq_save(struct mtk_gen3_pcie *pcie)
+static void mtk_pcie_irq_save(struct mtk_gen3_pcie *pcie)
 {
 	int i;
 
···
 	raw_spin_unlock(&pcie->irq_lock);
 }
 
-static void __maybe_unused mtk_pcie_irq_restore(struct mtk_gen3_pcie *pcie)
+static void mtk_pcie_irq_restore(struct mtk_gen3_pcie *pcie)
 {
 	int i;
 
···
 	raw_spin_unlock(&pcie->irq_lock);
 }
 
-static int __maybe_unused mtk_pcie_turn_off_link(struct mtk_gen3_pcie *pcie)
+static int mtk_pcie_turn_off_link(struct mtk_gen3_pcie *pcie)
 {
 	u32 val;
 
···
 				  50 * USEC_PER_MSEC);
 }
 
-static int __maybe_unused mtk_pcie_suspend_noirq(struct device *dev)
+static int mtk_pcie_suspend_noirq(struct device *dev)
 {
 	struct mtk_gen3_pcie *pcie = dev_get_drvdata(dev);
 	int err;
···
 	return 0;
 }
 
-static int __maybe_unused mtk_pcie_resume_noirq(struct device *dev)
+static int mtk_pcie_resume_noirq(struct device *dev)
 {
 	struct mtk_gen3_pcie *pcie = dev_get_drvdata(dev);
 	int err;
···
 }
 
 static const struct dev_pm_ops mtk_pcie_pm_ops = {
-	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mtk_pcie_suspend_noirq,
-				      mtk_pcie_resume_noirq)
+	NOIRQ_SYSTEM_SLEEP_PM_OPS(mtk_pcie_suspend_noirq,
+				  mtk_pcie_resume_noirq)
 };
 
 static const struct of_device_id mtk_pcie_of_match[] = {
+4 -4
drivers/pci/controller/pcie-mediatek.c
···
 	return 0;
 }
 
-static int __maybe_unused mtk_pcie_suspend_noirq(struct device *dev)
+static int mtk_pcie_suspend_noirq(struct device *dev)
 {
 	struct mtk_pcie *pcie = dev_get_drvdata(dev);
 	struct mtk_pcie_port *port;
···
 	return 0;
 }
 
-static int __maybe_unused mtk_pcie_resume_noirq(struct device *dev)
+static int mtk_pcie_resume_noirq(struct device *dev)
 {
 	struct mtk_pcie *pcie = dev_get_drvdata(dev);
 	struct mtk_pcie_port *port, *tmp;
···
 }
 
 static const struct dev_pm_ops mtk_pcie_pm_ops = {
-	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mtk_pcie_suspend_noirq,
-				      mtk_pcie_resume_noirq)
+	NOIRQ_SYSTEM_SLEEP_PM_OPS(mtk_pcie_suspend_noirq,
+				  mtk_pcie_resume_noirq)
 };
 
 static const struct mtk_pcie_soc mtk_pcie_soc_v1 = {
+2
drivers/pci/controller/pcie-microchip-host.c
···
 						  &event_domain_ops, port);
 	if (!port->event_domain) {
 		dev_err(dev, "failed to get event domain\n");
+		of_node_put(pcie_intc_node);
 		return -ENOMEM;
 	}
 
···
 					  &intx_domain_ops, port);
 	if (!port->intx_domain) {
 		dev_err(dev, "failed to get an INTx IRQ domain\n");
+		of_node_put(pcie_intc_node);
 		return -ENOMEM;
 	}
 
+2 -2
drivers/pci/controller/pcie-rcar-host.c
···
 	return err;
 }
 
-static int __maybe_unused rcar_pcie_resume(struct device *dev)
+static int rcar_pcie_resume(struct device *dev)
 {
 	struct rcar_pcie_host *host = dev_get_drvdata(dev);
 	struct rcar_pcie *pcie = &host->pcie;
···
 }
 
 static const struct dev_pm_ops rcar_pcie_pm_ops = {
-	SET_SYSTEM_SLEEP_PM_OPS(NULL, rcar_pcie_resume)
+	SYSTEM_SLEEP_PM_OPS(NULL, rcar_pcie_resume)
 	.resume_noirq = rcar_pcie_resume_noirq,
 };
 
+4 -4
drivers/pci/controller/pcie-rockchip-host.c
···
 	return 0;
 }
 
-static int __maybe_unused rockchip_pcie_suspend_noirq(struct device *dev)
+static int rockchip_pcie_suspend_noirq(struct device *dev)
 {
 	struct rockchip_pcie *rockchip = dev_get_drvdata(dev);
 	int ret;
···
 	return ret;
 }
 
-static int __maybe_unused rockchip_pcie_resume_noirq(struct device *dev)
+static int rockchip_pcie_resume_noirq(struct device *dev)
 {
 	struct rockchip_pcie *rockchip = dev_get_drvdata(dev);
 	int err;
···
 }
 
 static const struct dev_pm_ops rockchip_pcie_pm_ops = {
-	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(rockchip_pcie_suspend_noirq,
-				      rockchip_pcie_resume_noirq)
+	NOIRQ_SYSTEM_SLEEP_PM_OPS(rockchip_pcie_suspend_noirq,
+				  rockchip_pcie_resume_noirq)
 };
 
 static const struct of_device_id rockchip_pcie_of_match[] = {
+58 -2
drivers/pci/controller/pcie-xilinx-cpm.c
···
 #define XILINX_CPM_PCIE_MISC_IR_ENABLE	0x00000348
 #define XILINX_CPM_PCIE_MISC_IR_LOCAL	BIT(1)
 
+#define XILINX_CPM_PCIE_IR_STATUS	0x000002A0
+#define XILINX_CPM_PCIE_IR_ENABLE	0x000002A8
+#define XILINX_CPM_PCIE_IR_LOCAL	BIT(0)
+
 /* Interrupt registers definitions */
 #define XILINX_CPM_PCIE_INTR_LINK_DOWN	0
 #define XILINX_CPM_PCIE_INTR_HOT_RESET	3
···
 /* Phy Status/Control Register definitions */
 #define XILINX_CPM_PCIE_REG_PSCR_LNKUP	BIT(11)
 
+enum xilinx_cpm_version {
+	CPM,
+	CPM5,
+};
+
+/**
+ * struct xilinx_cpm_variant - CPM variant information
+ * @version: CPM version
+ */
+struct xilinx_cpm_variant {
+	enum xilinx_cpm_version version;
+};
+
 /**
  * struct xilinx_cpm_pcie - PCIe port information
  * @dev: Device pointer
···
  * @intx_irq: legacy interrupt number
  * @irq: Error interrupt number
  * @lock: lock protecting shared register access
+ * @variant: CPM version check pointer
  */
 struct xilinx_cpm_pcie {
 	struct device			*dev;
···
 	int				intx_irq;
 	int				irq;
 	raw_spinlock_t			lock;
+	const struct xilinx_cpm_variant	*variant;
 };
 
 static u32 pcie_read(struct xilinx_cpm_pcie *port, u32 reg)
···
 	for_each_set_bit(i, &val, 32)
 		generic_handle_domain_irq(port->cpm_domain, i);
 	pcie_write(port, val, XILINX_CPM_PCIE_REG_IDR);
+
+	if (port->variant->version == CPM5) {
+		val = readl_relaxed(port->cpm_base + XILINX_CPM_PCIE_IR_STATUS);
+		if (val)
+			writel_relaxed(val, port->cpm_base +
+					    XILINX_CPM_PCIE_IR_STATUS);
+	}
 
 	/*
 	 * XILINX_CPM_PCIE_MISC_IR_STATUS register is mapped to
···
 	 */
 	writel(XILINX_CPM_PCIE_MISC_IR_LOCAL,
 	       port->cpm_base + XILINX_CPM_PCIE_MISC_IR_ENABLE);
+
+	if (port->variant->version == CPM5) {
+		writel(XILINX_CPM_PCIE_IR_LOCAL,
+		       port->cpm_base + XILINX_CPM_PCIE_IR_ENABLE);
+	}
+
 	/* Enable the Bridge enable bit */
 	pcie_write(port, pcie_read(port, XILINX_CPM_PCIE_REG_RPSC) |
 		   XILINX_CPM_PCIE_REG_RPSC_BEN,
···
 	if (IS_ERR(port->cfg))
 		return PTR_ERR(port->cfg);
 
-	port->reg_base = port->cfg->win;
+	if (port->variant->version == CPM5) {
+		port->reg_base = devm_platform_ioremap_resource_byname(pdev,
+								       "cpm_csr");
+		if (IS_ERR(port->reg_base))
+			return PTR_ERR(port->reg_base);
+	} else {
+		port->reg_base = port->cfg->win;
+	}
 
 	return 0;
 }
···
 	if (!bus)
 		return -ENODEV;
 
+	port->variant = of_device_get_match_data(dev);
+
 	err = xilinx_cpm_pcie_parse_dt(port, bus->res);
 	if (err) {
 		dev_err(dev, "Parsing DT failed\n");
···
 	return err;
 }
 
+static const struct xilinx_cpm_variant cpm_host = {
+	.version = CPM,
+};
+
+static const struct xilinx_cpm_variant cpm5_host = {
+	.version = CPM5,
+};
+
 static const struct of_device_id xilinx_cpm_pcie_of_match[] = {
-	{ .compatible = "xlnx,versal-cpm-host-1.00", },
+	{
+		.compatible = "xlnx,versal-cpm-host-1.00",
+		.data = &cpm_host,
+	},
+	{
+		.compatible = "xlnx,versal-cpm5-host",
+		.data = &cpm5_host,
+	},
 	{}
 };
 
+10 -3
drivers/pci/controller/vmd.c
···
 	if (vmd->instance < 0)
 		return vmd->instance;
 
-	vmd->name = kasprintf(GFP_KERNEL, "vmd%d", vmd->instance);
+	vmd->name = devm_kasprintf(&dev->dev, GFP_KERNEL, "vmd%d",
+				   vmd->instance);
 	if (!vmd->name) {
 		err = -ENOMEM;
 		goto out_release_instance;
···
 
 out_release_instance:
 	ida_simple_remove(&vmd_instance_ida, vmd->instance);
-	kfree(vmd->name);
 	return err;
 }
 
···
 	vmd_detach_resources(vmd);
 	vmd_remove_irq_domain(vmd);
 	ida_simple_remove(&vmd_instance_ida, vmd->instance);
-	kfree(vmd->name);
 }
 
 #ifdef CONFIG_PM_SLEEP
···
 		VMD_FEAT_HAS_BUS_RESTRICTIONS |
 		VMD_FEAT_OFFSET_FIRST_VECTOR,},
 	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa77f),
+		.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
+				VMD_FEAT_HAS_BUS_RESTRICTIONS |
+				VMD_FEAT_OFFSET_FIRST_VECTOR,},
+	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7d0b),
+		.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
+				VMD_FEAT_HAS_BUS_RESTRICTIONS |
+				VMD_FEAT_OFFSET_FIRST_VECTOR,},
+	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xad0b),
 		.driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
 				VMD_FEAT_HAS_BUS_RESTRICTIONS |
 				VMD_FEAT_OFFSET_FIRST_VECTOR,},
+106 -11
drivers/pci/endpoint/functions/pci-epf-test.c
···
 	enum pci_barno test_reg_bar;
 	size_t msix_table_offset;
 	struct delayed_work cmd_handler;
-	struct dma_chan *dma_chan;
+	struct dma_chan *dma_chan_tx;
+	struct dma_chan *dma_chan_rx;
 	struct completion transfer_complete;
 	bool dma_supported;
+	bool dma_private;
 	const struct pci_epc_features *epc_features;
 };
 
···
  * @dma_src: The source address of the data transfer. It can be a physical
  *	     address given by pci_epc_mem_alloc_addr or DMA mapping APIs.
  * @len: The size of the data transfer
+ * @dma_remote: remote RC physical address
+ * @dir: DMA transfer direction
  *
  * Function that uses dmaengine API to transfer data between PCIe EP and remote
  * PCIe RC. The source and destination address can be a physical address given
···
  */
 static int pci_epf_test_data_transfer(struct pci_epf_test *epf_test,
 				      dma_addr_t dma_dst, dma_addr_t dma_src,
-				      size_t len)
+				      size_t len, dma_addr_t dma_remote,
+				      enum dma_transfer_direction dir)
 {
+	struct dma_chan *chan = (dir == DMA_DEV_TO_MEM) ?
+				 epf_test->dma_chan_tx : epf_test->dma_chan_rx;
+	dma_addr_t dma_local = (dir == DMA_MEM_TO_DEV) ? dma_src : dma_dst;
 	enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
-	struct dma_chan *chan = epf_test->dma_chan;
 	struct pci_epf *epf = epf_test->epf;
 	struct dma_async_tx_descriptor *tx;
+	struct dma_slave_config sconf = {};
 	struct device *dev = &epf->dev;
 	dma_cookie_t cookie;
 	int ret;
···
 		return -EINVAL;
 	}
 
-	tx = dmaengine_prep_dma_memcpy(chan, dma_dst, dma_src, len, flags);
+	if (epf_test->dma_private) {
+		sconf.direction = dir;
+		if (dir == DMA_MEM_TO_DEV)
+			sconf.dst_addr = dma_remote;
+		else
+			sconf.src_addr = dma_remote;
+
+		if (dmaengine_slave_config(chan, &sconf)) {
+			dev_err(dev, "DMA slave config fail\n");
+			return -EIO;
+		}
+		tx = dmaengine_prep_slave_single(chan, dma_local, len, dir,
+						 flags);
+	} else {
+		tx = dmaengine_prep_dma_memcpy(chan, dma_dst, dma_src, len,
+					       flags);
+	}
+
 	if (!tx) {
 		dev_err(dev, "Failed to prepare DMA memcpy\n");
 		return -EIO;
···
 	return 0;
 }
 
+struct epf_dma_filter {
+	struct device *dev;
+	u32 dma_mask;
+};
+
+static bool epf_dma_filter_fn(struct dma_chan *chan, void *node)
+{
+	struct epf_dma_filter *filter = node;
+	struct dma_slave_caps caps;
+
+	memset(&caps, 0, sizeof(caps));
+	dma_get_slave_caps(chan, &caps);
+
+	return chan->device->dev == filter->dev
+		&& (filter->dma_mask & caps.directions);
+}
+
 /**
  * pci_epf_test_init_dma_chan() - Function to initialize EPF test DMA channel
  * @epf_test: the EPF test device that performs data transfer operation
···
 {
 	struct pci_epf *epf = epf_test->epf;
 	struct device *dev = &epf->dev;
+	struct epf_dma_filter filter;
 	struct dma_chan *dma_chan;
 	dma_cap_mask_t mask;
 	int ret;
 
+	filter.dev = epf->epc->dev.parent;
+	filter.dma_mask = BIT(DMA_DEV_TO_MEM);
+
+	dma_cap_zero(mask);
+	dma_cap_set(DMA_SLAVE, mask);
+	dma_chan = dma_request_channel(mask, epf_dma_filter_fn, &filter);
+	if (!dma_chan) {
+		dev_info(dev, "Failed to get private DMA rx channel. Falling back to generic one\n");
+		goto fail_back_tx;
+	}
+
+	epf_test->dma_chan_rx = dma_chan;
+
+	filter.dma_mask = BIT(DMA_MEM_TO_DEV);
+	dma_chan = dma_request_channel(mask, epf_dma_filter_fn, &filter);
+
+	if (!dma_chan) {
+		dev_info(dev, "Failed to get private DMA tx channel. Falling back to generic one\n");
+		goto fail_back_rx;
+	}
+
+	epf_test->dma_chan_tx = dma_chan;
+	epf_test->dma_private = true;
+
+	init_completion(&epf_test->transfer_complete);
+
+	return 0;
+
+fail_back_rx:
+	dma_release_channel(epf_test->dma_chan_rx);
+	epf_test->dma_chan_tx = NULL;
+
+fail_back_tx:
 	dma_cap_zero(mask);
 	dma_cap_set(DMA_MEMCPY, mask);
 
···
 	}
 	init_completion(&epf_test->transfer_complete);
 
-	epf_test->dma_chan = dma_chan;
+	epf_test->dma_chan_tx = epf_test->dma_chan_rx = dma_chan;
 
 	return 0;
 }
···
 	if (!epf_test->dma_supported)
 		return;
 
-	dma_release_channel(epf_test->dma_chan);
-	epf_test->dma_chan = NULL;
+	dma_release_channel(epf_test->dma_chan_tx);
+	if (epf_test->dma_chan_tx == epf_test->dma_chan_rx) {
+		epf_test->dma_chan_tx = NULL;
+		epf_test->dma_chan_rx = NULL;
+		return;
+	}
+
+	dma_release_channel(epf_test->dma_chan_rx);
+	epf_test->dma_chan_rx = NULL;
+
+	return;
 }
 
 static void pci_epf_test_print_rate(const char *ops, u64 size,
···
 			goto err_map_addr;
 		}
 
+		if (epf_test->dma_private) {
+			dev_err(dev, "Cannot transfer data using DMA\n");
+			ret = -EINVAL;
+			goto err_map_addr;
+		}
+
 		ret = pci_epf_test_data_transfer(epf_test, dst_phys_addr,
-						 src_phys_addr, reg->size);
+						 src_phys_addr, reg->size, 0,
+						 DMA_MEM_TO_MEM);
 		if (ret)
 			dev_err(dev, "Data transfer failed\n");
 	} else {
···
 
 		ktime_get_ts64(&start);
 		ret = pci_epf_test_data_transfer(epf_test, dst_phys_addr,
-						 phys_addr, reg->size);
+						 phys_addr, reg->size,
+						 reg->src_addr, DMA_DEV_TO_MEM);
 		if (ret)
 			dev_err(dev, "Data transfer failed\n");
 		ktime_get_ts64(&end);
···
 	}
 
 	ktime_get_ts64(&start);
+
 	ret = pci_epf_test_data_transfer(epf_test, phys_addr,
-					 src_phys_addr, reg->size);
+					 src_phys_addr, reg->size,
+					 reg->dst_addr,
+					 DMA_MEM_TO_DEV);
 	if (ret)
 		dev_err(dev, "Data transfer failed\n");
 	ktime_get_ts64(&end);
···
 
 	cancel_delayed_work(&epf_test->cmd_handler);
 	pci_epf_test_clean_dma_chan(epf_test);
-	pci_epc_stop(epc);
 	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
 		epf_bar = &epf->bar[bar];
 
-44
drivers/pci/mmap.c
···
 
 #ifdef ARCH_GENERIC_PCI_MMAP_RESOURCE
 
-/*
- * Modern setup: generic pci_mmap_resource_range(), and implement the legacy
- * pci_mmap_page_range() (if needed) as a wrapper round it.
- */
-
-#ifdef HAVE_PCI_MMAP
-int pci_mmap_page_range(struct pci_dev *pdev, int bar,
-			struct vm_area_struct *vma,
-			enum pci_mmap_state mmap_state, int write_combine)
-{
-	resource_size_t start, end;
-
-	pci_resource_to_user(pdev, bar, &pdev->resource[bar], &start, &end);
-
-	/* Adjust vm_pgoff to be the offset within the resource */
-	vma->vm_pgoff -= start >> PAGE_SHIFT;
-	return pci_mmap_resource_range(pdev, bar, vma, mmap_state,
-				       write_combine);
-}
-#endif
-
 static const struct vm_operations_struct pci_phys_vm_ops = {
 #ifdef CONFIG_HAVE_IOREMAP_PROT
 	.access = generic_access_phys,
···
 				  vma->vm_page_prot);
 }
 
-#elif defined(HAVE_PCI_MMAP) /* && !ARCH_GENERIC_PCI_MMAP_RESOURCE */
-
-/*
- * Legacy setup: Implement pci_mmap_resource_range() as a wrapper around
- * the architecture's pci_mmap_page_range(), converting to "user visible"
- * addresses as necessary.
- */
-
-int pci_mmap_resource_range(struct pci_dev *pdev, int bar,
-			    struct vm_area_struct *vma,
-			    enum pci_mmap_state mmap_state, int write_combine)
-{
-	resource_size_t start, end;
-
-	/*
-	 * pci_mmap_page_range() expects the same kind of entry as coming
-	 * from /proc/bus/pci/ which is a "user visible" value. If this is
-	 * different from the resource itself, arch will do necessary fixup.
-	 */
-	pci_resource_to_user(pdev, bar, &pdev->resource[bar], &start, &end);
-	vma->vm_pgoff += start >> PAGE_SHIFT;
-	return pci_mmap_page_range(pdev, bar, vma, mmap_state, write_combine);
-}
 #endif
+3 -2
drivers/pci/pci-acpi.c
···
 #include "pci.h"
 
 /*
- * The GUID is defined in the PCI Firmware Specification available here:
- * https://www.pcisig.com/members/downloads/pcifw_r3_1_13Dec10.pdf
+ * The GUID is defined in the PCI Firmware Specification available
+ * here to PCI-SIG members:
+ * https://members.pcisig.com/wg/PCI-SIG/document/15350
  */
 const guid_t pci_acpi_dsm_guid =
 	GUID_INIT(0xe5c937d0, 0x3553, 0x4d7a,
+2 -6
drivers/pci/pci.c
···
 };
 EXPORT_SYMBOL_GPL(pci_power_names);
 
+#ifdef CONFIG_X86_32
 int isa_dma_bridge_buggy;
 EXPORT_SYMBOL(isa_dma_bridge_buggy);
+#endif
 
 int pci_pci_problems;
 EXPORT_SYMBOL(pci_pci_problems);
···
 		pci_restore_bars(dev);
 	}
 
-	if (dev->bus->self)
-		pcie_aspm_pm_state_change(dev->bus->self);
-
 	return 0;
 }
···
 		pci_info_ratelimited(dev, "Refused to change power state from %s to %s\n",
 				     pci_power_name(dev->current_state),
 				     pci_power_name(state));
-
-	if (dev->bus->self)
-		pcie_aspm_pm_state_change(dev->bus->self);
 
 	return 0;
 }
-2
drivers/pci/pci.h
···
 #ifdef CONFIG_PCIEASPM
 void pcie_aspm_init_link_state(struct pci_dev *pdev);
 void pcie_aspm_exit_link_state(struct pci_dev *pdev);
-void pcie_aspm_pm_state_change(struct pci_dev *pdev);
 void pcie_aspm_powersave_config_link(struct pci_dev *pdev);
 #else
 static inline void pcie_aspm_init_link_state(struct pci_dev *pdev) { }
 static inline void pcie_aspm_exit_link_state(struct pci_dev *pdev) { }
-static inline void pcie_aspm_pm_state_change(struct pci_dev *pdev) { }
 static inline void pcie_aspm_powersave_config_link(struct pci_dev *pdev) { }
 #endif
 
+11 -4
drivers/pci/pcie/aer.c
···
 	pci_add_ext_cap_save_buffer(dev, PCI_EXT_CAP_ID_ERR, sizeof(u32) * n);
 
 	pci_aer_clear_status(dev);
+
+	if (pci_aer_available())
+		pci_enable_pcie_error_reporting(dev);
+
+	pcie_set_ecrc_checking(dev);
 }
 
 void pci_aer_exit(struct pci_dev *dev)
···
 	u64 *stats = pdev->aer_stats->stats_array;			\
 	size_t len = 0;							\
 									\
-	for (i = 0; i < ARRAY_SIZE(strings_array); i++) {		\
+	for (i = 0; i < ARRAY_SIZE(pdev->aer_stats->stats_array); i++) {\
 		if (strings_array[i])					\
 			len += sysfs_emit_at(buf, len, "%s %llu\n",	\
 					     strings_array[i],		\
···
 		pci_disable_pcie_error_reporting(dev);
 	}
 
-	if (enable)
-		pcie_set_ecrc_checking(dev);
-
 	return 0;
 }
···
 	struct aer_rpc *rpc;
 	struct device *device = &dev->device;
 	struct pci_dev *port = dev->port;
+
+	BUILD_BUG_ON(ARRAY_SIZE(aer_correctable_error_string) <
+		     AER_MAX_TYPEOF_COR_ERRS);
+	BUILD_BUG_ON(ARRAY_SIZE(aer_uncorrectable_error_string) <
+		     AER_MAX_TYPEOF_UNCOR_ERRS);
 
 	/* Limit to Root Ports or Root Complex Event Collectors */
 	if ((pci_pcie_type(port) != PCI_EXP_TYPE_RC_EC) &&
-20
drivers/pci/pcie/aspm.c
···
 	up_read(&pci_bus_sem);
 }
 
-/* @pdev: the root port or switch downstream port */
-void pcie_aspm_pm_state_change(struct pci_dev *pdev)
-{
-	struct pcie_link_state *link = pdev->link_state;
-
-	if (aspm_disabled || !link)
-		return;
-	/*
-	 * Devices changed PM state, we should recheck if latency
-	 * meets all functions' requirement
-	 */
-	down_read(&pci_bus_sem);
-	mutex_lock(&aspm_lock);
-	pcie_update_aspm_capable(link->root);
-	pcie_config_aspm_path(link);
-	mutex_unlock(&aspm_lock);
-	up_read(&pci_bus_sem);
-}
-
 void pcie_aspm_powersave_config_link(struct pci_dev *pdev)
 {
 	struct pcie_link_state *link = pdev->link_state;
···
 {
 	return aspm_support_enabled;
 }
-EXPORT_SYMBOL(pcie_aspm_support_enabled);
+8 -4
drivers/pci/pcie/err.c
···
 
 	device_lock(&dev->dev);
 	pdrv = dev->driver;
-	if (!pci_dev_set_io_state(dev, state) ||
-	    !pdrv ||
-	    !pdrv->err_handler ||
-	    !pdrv->err_handler->error_detected) {
+	if (pci_dev_is_disconnected(dev)) {
+		vote = PCI_ERS_RESULT_DISCONNECT;
+	} else if (!pci_dev_set_io_state(dev, state)) {
+		pci_info(dev, "can't recover (state transition %u -> %u invalid)\n",
+			 dev->error_state, state);
+		vote = PCI_ERS_RESULT_NONE;
+	} else if (!pdrv || !pdrv->err_handler ||
+		   !pdrv->err_handler->error_detected) {
 		/*
 		 * If any device in the subtree does not have an error_detected
 		 * callback, PCI_ERS_RESULT_NO_AER_DRIVER prevents subsequent
+1 -8
drivers/pci/pcie/portdrv_core.c
···
 
 #ifdef CONFIG_PCIEAER
 	if (dev->aer_cap && pci_aer_available() &&
-	    (pcie_ports_native || host->native_aer)) {
+	    (pcie_ports_native || host->native_aer))
 		services |= PCIE_PORT_SERVICE_AER;
-
-		/*
-		 * Disable AER on this port in case it's been enabled by the
-		 * BIOS (the AER service driver will enable it when necessary).
-		 */
-		pci_disable_pcie_error_reporting(dev);
-	}
 #endif
 
 	/* Root Ports and Root Complex Event Collectors may generate PMEs */
+44 -46
drivers/pci/probe.c
···
 
 	dev->broken_intx_masking = pci_intx_mask_broken(dev);
 
+	/* Clear errors left from system firmware */
+	pci_write_config_word(dev, PCI_STATUS, 0xffff);
+
 	switch (dev->hdr_type) {		    /* header type */
 	case PCI_HEADER_TYPE_NORMAL:		    /* standard header */
 		if (class == PCI_CLASS_BRIDGE_PCI)
···
 }
 EXPORT_SYMBOL(pci_scan_single_device);
 
-static unsigned int next_fn(struct pci_bus *bus, struct pci_dev *dev,
-			    unsigned int fn)
+static int next_ari_fn(struct pci_bus *bus, struct pci_dev *dev, int fn)
 {
 	int pos;
 	u16 cap = 0;
 	unsigned int next_fn;
 
-	if (pci_ari_enabled(bus)) {
-		if (!dev)
-			return 0;
-		pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ARI);
-		if (!pos)
-			return 0;
+	if (!dev)
+		return -ENODEV;
 
-		pci_read_config_word(dev, pos + PCI_ARI_CAP, &cap);
-		next_fn = PCI_ARI_CAP_NFN(cap);
-		if (next_fn <= fn)
-			return 0;	/* protect against malformed list */
+	pos = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ARI);
+	if (!pos)
+		return -ENODEV;
 
-		return next_fn;
-	}
+	pci_read_config_word(dev, pos + PCI_ARI_CAP, &cap);
+	next_fn = PCI_ARI_CAP_NFN(cap);
+	if (next_fn <= fn)
+		return -ENODEV;	/* protect against malformed list */
 
-	/* dev may be NULL for non-contiguous multifunction devices */
-	if (!dev || dev->multifunction)
-		return (fn + 1) % 8;
+	return next_fn;
+}
 
-	return 0;
+static int next_fn(struct pci_bus *bus, struct pci_dev *dev, int fn)
+{
+	if (pci_ari_enabled(bus))
+		return next_ari_fn(bus, dev, fn);
+
+	if (fn >= 7)
+		return -ENODEV;
+	/* only multifunction devices may have more functions */
+	if (dev && !dev->multifunction)
+		return -ENODEV;
+
+	return fn + 1;
 }
 
 static int only_one_child(struct pci_bus *bus)
···
  */
 int pci_scan_slot(struct pci_bus *bus, int devfn)
 {
-	unsigned int fn, nr = 0;
 	struct pci_dev *dev;
+	int fn = 0, nr = 0;
 
 	if (only_one_child(bus) && (devfn > 0))
 		return 0; /* Already scanned the entire slot */
 
-	dev = pci_scan_single_device(bus, devfn);
-	if (!dev)
-		return 0;
-	if (!pci_dev_is_added(dev))
-		nr++;
-
-	for (fn = next_fn(bus, dev, 0); fn > 0; fn = next_fn(bus, dev, fn)) {
+	do {
 		dev = pci_scan_single_device(bus, devfn + fn);
 		if (dev) {
 			if (!pci_dev_is_added(dev))
 				nr++;
-			dev->multifunction = 1;
+			if (fn > 0)
+				dev->multifunction = 1;
+		} else if (fn == 0) {
+			/*
+			 * Function 0 is required unless we are running on
+			 * a hypervisor that passes through individual PCI
+			 * functions.
+			 */
+			if (!hypervisor_isolated_pci_functions())
+				break;
 		}
-	}
+		fn = next_fn(bus, dev, fn);
+	} while (fn >= 0);
 
 	/* Only one slot has PCIe device */
 	if (bus->self && nr)
···
 {
 	unsigned int used_buses, normal_bridges = 0, hotplug_bridges = 0;
 	unsigned int start = bus->busn_res.start;
-	unsigned int devfn, fn, cmax, max = start;
+	unsigned int devfn, cmax, max = start;
 	struct pci_dev *dev;
-	int nr_devs;
 
 	dev_dbg(&bus->dev, "scanning bus\n");
 
 	/* Go find them, Rover! */
-	for (devfn = 0; devfn < 256; devfn += 8) {
-		nr_devs = pci_scan_slot(bus, devfn);
-
-		/*
-		 * The Jailhouse hypervisor may pass individual functions of a
-		 * multi-function device to a guest without passing function 0.
-		 * Look for them as well.
-		 */
-		if (jailhouse_paravirt() && nr_devs == 0) {
-			for (fn = 1; fn < 8; fn++) {
-				dev = pci_scan_single_device(bus, devfn + fn);
-				if (dev)
-					dev->multifunction = 1;
-			}
-		}
-	}
+	for (devfn = 0; devfn < 256; devfn += 8)
+		pci_scan_slot(bus, devfn);
 
 	/* Reserve buses for SR-IOV capability */
 	used_buses = pci_iov_bus_range(bus);
+6 -1
drivers/pci/proc.c
···
 {
 	struct pci_dev *dev = pde_data(file_inode(file));
 	struct pci_filp_private *fpriv = file->private_data;
+	resource_size_t start, end;
 	int i, ret, write_combine = 0, res_bit = IORESOURCE_MEM;
 
 	if (!capable(CAP_SYS_RAWIO) ||
···
 	    iomem_is_exclusive(dev->resource[i].start))
 		return -EINVAL;
 
-	ret = pci_mmap_page_range(dev, i, vma,
+	pci_resource_to_user(dev, i, &dev->resource[i], &start, &end);
+
+	/* Adjust vm_pgoff to be the offset within the resource */
+	vma->vm_pgoff -= start >> PAGE_SHIFT;
+	ret = pci_mmap_resource_range(dev, i, vma,
 				  fpriv->mmap_state, write_combine);
 	if (ret < 0)
 		return ret;
+19 -5
drivers/pci/quirks.c
···
 #include <linux/kernel.h>
 #include <linux/export.h>
 #include <linux/pci.h>
+#include <linux/isa-dma.h> /* isa_dma_bridge_buggy */
 #include <linux/init.h>
 #include <linux/delay.h>
 #include <linux/acpi.h>
···
 #include <linux/pm_runtime.h>
 #include <linux/suspend.h>
 #include <linux/switchtec.h>
-#include <asm/dma.h>	/* isa_dma_bridge_buggy */
 #include "pci.h"
 
 static ktime_t fixup_debug_start(struct pci_dev *dev,
···
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_passive_release);
 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82441, quirk_passive_release);
 
+#ifdef CONFIG_X86_32
 /*
  * The VIA VP2/VP3/MVP3 seem to have some 'features'. There may be a
  * workaround but VIA don't answer queries. If you happen to have good
···
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_1, quirk_isa_dma_hangs);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_2, quirk_isa_dma_hangs);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NEC, PCI_DEVICE_ID_NEC_CBUS_3, quirk_isa_dma_hangs);
+#endif
 
 /*
  * Intel NM10 "TigerPoint" LPC PM1a_STS.BM_STS must be clear
···
 			nvenet_msi_disable);
 
 /*
- * PCIe spec r4.0 sec 7.7.1.2 and sec 7.7.2.2 say that if MSI/MSI-X is enabled,
- * then the device can't use INTx interrupts. Tegra's PCIe root ports don't
- * generate MSI interrupts for PME and AER events instead only INTx interrupts
- * are generated. Though Tegra's PCIe root ports can generate MSI interrupts
+ * PCIe spec r6.0 sec 6.1.4.3 says that if MSI/MSI-X is enabled, the device
+ * can't use INTx interrupts. Tegra's PCIe Root Ports don't generate MSI
+ * interrupts for PME and AER events; instead only INTx interrupts are
+ * generated. Though Tegra's PCIe Root Ports can generate MSI interrupts
  * for other events, since PCIe specification doesn't support using a mix of
  * INTx and MSI/MSI-X, it is required to disable MSI interrupts to avoid port
  * service drivers registering their respective ISRs for MSIs.
···
 			      PCI_CLASS_BRIDGE_PCI, 8,
 			      pci_quirk_nvidia_tegra_disable_rp_msi);
 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x10e6,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x229a,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x229c,
+			      PCI_CLASS_BRIDGE_PCI, 8,
+			      pci_quirk_nvidia_tegra_disable_rp_msi);
+DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_NVIDIA, 0x229e,
 			      PCI_CLASS_BRIDGE_PCI, 8,
 			      pci_quirk_nvidia_tegra_disable_rp_msi);
 
···
 	{ PCI_VENDOR_ID_AMPERE, 0xE00C, pci_quirk_xgene_acs },
 	/* Broadcom multi-function device */
 	{ PCI_VENDOR_ID_BROADCOM, 0x16D7, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_BROADCOM, 0x1750, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_BROADCOM, 0x1751, pci_quirk_mf_endpoint_acs },
+	{ PCI_VENDOR_ID_BROADCOM, 0x1752, pci_quirk_mf_endpoint_acs },
 	{ PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs },
 	/* Amazon Annapurna Labs */
 	{ PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, pci_quirk_al_acs },
+3 -4
drivers/pci/switch/switchtec.c
···
 	dev->groups = switchtec_device_groups;
 	dev->release = stdev_release;
 
-	minor = ida_simple_get(&switchtec_minor_ida, 0, 0,
-			       GFP_KERNEL);
+	minor = ida_alloc(&switchtec_minor_ida, GFP_KERNEL);
 	if (minor < 0) {
 		rc = minor;
 		goto err_put;
···
 err_devadd:
 	stdev_kill(stdev);
 err_put:
-	ida_simple_remove(&switchtec_minor_ida, MINOR(stdev->dev.devt));
+	ida_free(&switchtec_minor_ida, MINOR(stdev->dev.devt));
 	put_device(&stdev->dev);
 	return rc;
 }
···
 	pci_set_drvdata(pdev, NULL);
 
 	cdev_device_del(&stdev->cdev, &stdev->dev);
-	ida_simple_remove(&switchtec_minor_ida, MINOR(stdev->dev.devt));
+	ida_free(&switchtec_minor_ida, MINOR(stdev->dev.devt));
 	dev_info(&stdev->dev, "unregistered.\n");
 	stdev_kill(stdev);
 	put_device(&stdev->dev);
+9 -16
drivers/phy/samsung/phy-exynos-pcie.c
···
 {
 	struct exynos_pcie_phy *ep = phy_get_drvdata(phy);
 
+	regmap_update_bits(ep->pmureg, EXYNOS5433_PMU_PCIE_PHY_OFFSET,
+			   BIT(0), 1);
+	regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_GLOBAL_RESET,
+			   PCIE_APP_REQ_EXIT_L1_MODE, 0);
+	regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_L1SUB_CM_CON,
+			   PCIE_REFCLK_GATING_EN, 0);
+
 	regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_COMMON_RESET,
 			   PCIE_PHY_RESET, 1);
 	regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_MAC_RESET,
···
 	return 0;
 }
 
-static int exynos5433_pcie_phy_power_on(struct phy *phy)
-{
-	struct exynos_pcie_phy *ep = phy_get_drvdata(phy);
-
-	regmap_update_bits(ep->pmureg, EXYNOS5433_PMU_PCIE_PHY_OFFSET,
-			   BIT(0), 1);
-	regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_GLOBAL_RESET,
-			   PCIE_APP_REQ_EXIT_L1_MODE, 0);
-	regmap_update_bits(ep->fsysreg, PCIE_EXYNOS5433_PHY_L1SUB_CM_CON,
-			   PCIE_REFCLK_GATING_EN, 0);
-	return 0;
-}
-
-static int exynos5433_pcie_phy_power_off(struct phy *phy)
+static int exynos5433_pcie_phy_exit(struct phy *phy)
 {
 	struct exynos_pcie_phy *ep = phy_get_drvdata(phy);
 
···
 
 static const struct phy_ops exynos5433_phy_ops = {
 	.init = exynos5433_pcie_phy_init,
-	.power_on = exynos5433_pcie_phy_power_on,
-	.power_off = exynos5433_pcie_phy_power_off,
+	.exit = exynos5433_pcie_phy_exit,
 	.owner = THIS_MODULE,
 };
+3 -2
drivers/pnp/resource.c
···
 #include <asm/dma.h>
 #include <asm/irq.h>
 #include <linux/pci.h>
+#include <linux/libata.h>
 #include <linux/ioport.h>
 #include <linux/init.h>
 
···
 	 * treat the compatibility IRQs as busy.
 	 */
 	if ((progif & 0x5) != 0x5)
-		if (pci_get_legacy_ide_irq(pci, 0) == irq ||
-		    pci_get_legacy_ide_irq(pci, 1) == irq) {
+		if (ATA_PRIMARY_IRQ(pci) == irq ||
+		    ATA_SECONDARY_IRQ(pci) == irq) {
 			pnp_dbg(&pnp->dev, "  legacy IDE device %s "
 				"using irq %d\n", pci_name(pci), irq);
 			return 1;
+26 -13
include/asm-generic/pci.h
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * linux/include/asm-generic/pci.h
- *
- * Copyright (C) 2003 Russell King
- */
-#ifndef _ASM_GENERIC_PCI_H
-#define _ASM_GENERIC_PCI_H
+/* SPDX-License-Identifier: GPL-2.0-only */
 
-#ifndef HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ
-static inline int pci_get_legacy_ide_irq(struct pci_dev *dev, int channel)
+#ifndef __ASM_GENERIC_PCI_H
+#define __ASM_GENERIC_PCI_H
+
+#ifndef PCIBIOS_MIN_IO
+#define PCIBIOS_MIN_IO	0
+#endif
+
+#ifndef PCIBIOS_MIN_MEM
+#define PCIBIOS_MIN_MEM	0
+#endif
+
+#ifndef pcibios_assign_all_busses
+/* For bootloaders that do not initialize the PCI bus */
+#define pcibios_assign_all_busses()	1
+#endif
+
+/* Enable generic resource mapping code in drivers/pci/ */
+#define ARCH_GENERIC_PCI_MMAP_RESOURCE
+
+#ifdef CONFIG_PCI_DOMAINS
+static inline int pci_proc_domain(struct pci_bus *bus)
 {
-	return channel ? 15 : 14;
+	/* always show the domain in /proc */
+	return 1;
 }
-#endif /* HAVE_ARCH_PCI_GET_LEGACY_IDE_IRQ */
+#endif /* CONFIG_PCI_DOMAINS */
 
-#endif /* _ASM_GENERIC_PCI_H */
+#endif /* __ASM_GENERIC_PCI_H */
+2
include/asm-generic/pci_iomap.h
···
 #ifdef CONFIG_NO_GENERIC_PCI_IOPORT_MAP
 extern void __iomem *__pci_ioport_map(struct pci_dev *dev, unsigned long port,
 				      unsigned int nr);
+#elif !defined(CONFIG_HAS_IOPORT_MAP)
+#define __pci_ioport_map(dev, port, nr) NULL
 #else
 #define __pci_ioport_map(dev, port, nr) ioport_map((port), (nr))
 #endif
+58 -3
include/linux/dma/edma.h
···
 #include <linux/device.h>
 #include <linux/dmaengine.h>
 
+#define EDMA_MAX_WR_CH		8
+#define EDMA_MAX_RD_CH		8
+
 struct dw_edma;
+
+struct dw_edma_region {
+	phys_addr_t	paddr;
+	void __iomem	*vaddr;
+	size_t		sz;
+};
+
+struct dw_edma_core_ops {
+	int (*irq_vector)(struct device *dev, unsigned int nr);
+};
+
+enum dw_edma_map_format {
+	EDMA_MF_EDMA_LEGACY = 0x0,
+	EDMA_MF_EDMA_UNROLL = 0x1,
+	EDMA_MF_HDMA_COMPAT = 0x5
+};
+
+/**
+ * enum dw_edma_chip_flags - Flags specific to an eDMA chip
+ * @DW_EDMA_CHIP_LOCAL:		eDMA is used locally by an endpoint
+ */
+enum dw_edma_chip_flags {
+	DW_EDMA_CHIP_LOCAL	= BIT(0),
+};
 
 /**
  * struct dw_edma_chip - representation of DesignWare eDMA controller hardware
  * @dev:		 struct device of the eDMA controller
  * @id:			 instance ID
- * @irq:		 irq line
- * @dw:			 struct dw_edma that is filed by dw_edma_probe()
+ * @nr_irqs:		 total number of DMA IRQs
+ * @ops			 DMA channel to IRQ number mapping
+ * @flags		 dw_edma_chip_flags
+ * @reg_base		 DMA register base address
+ * @ll_wr_cnt		 DMA write link list count
+ * @ll_rd_cnt		 DMA read link list count
+ * @rg_region		 DMA register region
+ * @ll_region_wr	 DMA descriptor link list memory for write channel
+ * @ll_region_rd	 DMA descriptor link list memory for read channel
+ * @dt_region_wr	 DMA data memory for write channel
+ * @dt_region_rd	 DMA data memory for read channel
+ * @mf			 DMA register map format
+ * @dw:			 struct dw_edma that is filled by dw_edma_probe()
  */
 struct dw_edma_chip {
 	struct device		*dev;
 	int			id;
-	int			irq;
+	int			nr_irqs;
+	const struct dw_edma_core_ops	*ops;
+	u32			flags;
+
+	void __iomem		*reg_base;
+
+	u16			ll_wr_cnt;
+	u16			ll_rd_cnt;
+	/* link list address */
+	struct dw_edma_region	ll_region_wr[EDMA_MAX_WR_CH];
+	struct dw_edma_region	ll_region_rd[EDMA_MAX_RD_CH];
+
+	/* data region */
+	struct dw_edma_region	dt_region_wr[EDMA_MAX_WR_CH];
+	struct dw_edma_region	dt_region_rd[EDMA_MAX_RD_CH];
+
+	enum dw_edma_map_format	mf;
+
 	struct dw_edma		*dw;
 };
+8
include/linux/hypervisor.h
···
 
 #endif /* !CONFIG_X86 */
 
+static inline bool hypervisor_isolated_pci_functions(void)
+{
+	if (IS_ENABLED(CONFIG_S390))
+		return true;
+
+	return jailhouse_paravirt();
+}
+
 #endif /* __LINUX_HYPEVISOR_H */
+14
include/linux/isa-dma.h
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __LINUX_ISA_DMA_H
+#define __LINUX_ISA_DMA_H
+
+#include <asm/dma.h>
+
+#if defined(CONFIG_PCI) && defined(CONFIG_X86_32)
+extern int isa_dma_bridge_buggy;
+#else
+#define isa_dma_bridge_buggy	(0)
+#endif
+
+#endif /* __LINUX_ISA_DMA_H */
+1
include/linux/pci-ecam.h
···
 extern const struct pci_ecam_ops xgene_v2_pcie_ecam_ops; /* APM X-Gene PCIe v2.x */
 extern const struct pci_ecam_ops al_pcie_ops;	/* Amazon Annapurna Labs PCIe */
 extern const struct pci_ecam_ops tegra194_pcie_ops; /* Tegra194 PCIe */
+extern const struct pci_ecam_ops loongson_pci_ecam_ops; /* Loongson PCIe */
 #endif
 
 #if IS_ENABLED(CONFIG_PCI_HOST_COMMON)
+1 -11
include/linux/pci.h
···
 
 #include <asm/pci.h>
 
-/* These two functions provide almost identical functionality. Depending
- * on the architecture, one will be implemented as a wrapper around the
- * other (in drivers/pci/mmap.c).
- *
+/*
  * pci_mmap_resource_range() maps a specific BAR, and vm->vm_pgoff
  * is expected to be an offset within that region.
- *
- * pci_mmap_page_range() is the legacy architecture-specific interface,
- * which accepts a "user visible" resource address converted by
- * pci_resource_to_user(), as used in the legacy mmap() interface in
- * /proc/bus/pci/.
  */
 int pci_mmap_resource_range(struct pci_dev *dev, int bar,
 			    struct vm_area_struct *vma,
 			    enum pci_mmap_state mmap_state, int write_combine);
-int pci_mmap_page_range(struct pci_dev *pdev, int bar,
-			struct vm_area_struct *vma,
-			enum pci_mmap_state mmap_state, int write_combine);
 
 #ifndef arch_can_pci_mmap_wc
 #define arch_can_pci_mmap_wc()	0
+1 -1
sound/core/isadma.c
···
 #undef HAVE_REALLY_SLOW_DMA_CONTROLLER
 
 #include <linux/export.h>
+#include <linux/isa-dma.h>
 #include <sound/core.h>
-#include <asm/dma.h>
 
 /**
  * snd_dma_program - program an ISA DMA transfer