
Merge tag 'pci-v7.0-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:

- Don't try to enable Extended Tags on VFs since that bit is Reserved
and causes misleading log messages (Håkon Bugge)

- Initialize Endpoint Read Completion Boundary to match Root Port,
regardless of ACPI _HPX (Håkon Bugge)

- Apply _HPX PCIe Setting Record only to AER configuration, and only
when OS owns PCIe hotplug but not AER, to avoid clobbering Extended
Tag and Relaxed Ordering settings (Håkon Bugge)

Resource management:

- Move CardBus code to setup-cardbus.c and only build it when
CONFIG_CARDBUS is set (Ilpo Järvinen)

- Fix bridge window alignment with optional resources, where
additional alignment requirement was previously lost (Ilpo
Järvinen)

- Stop over-estimating bridge window sizes since windows are now
assigned without any gaps between them (Ilpo Järvinen)

- Increase resource MAX_IORES_LEVEL to avoid /proc/iomem flattening
for nested bridges and endpoints (Ilpo Järvinen)

- Add pbus_mem_size_optional() to handle sizes of optional resources
(SR-IOV VF BARs, expansion ROMs, bridge windows) (Ilpo Järvinen)

- Don't claim disabled bridge windows to avoid spurious claim
failures (Ilpo Järvinen)

Driver binding:

- Fix device reference leak in pcie_port_remove_service() (Uwe
Kleine-König)

- Move pcie_port_bus_match() and pcie_port_bus_type to PCIe-specific
portdrv.c (Uwe Kleine-König)

- Convert portdrv to use pcie_port_bus_type.probe() and .remove()
callbacks so .probe() and .remove() can eventually be removed from
struct device_driver (Uwe Kleine-König)
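The bus_type side of that conversion follows a common driver-core pattern: struct bus_type carries its own .probe()/.remove() callbacks that dispatch to a bus-specific driver structure. A minimal sketch with hypothetical "example" names (not the actual portdrv code):

```c
/* Hedged sketch: bus-level .probe()/.remove() dispatching to a
 * bus-specific driver type, so drivers on this bus no longer need
 * struct device_driver::probe/remove.  All names are hypothetical.
 */
#include <linux/device.h>

struct example_driver {
	struct device_driver driver;
	int (*probe)(struct device *dev);
	void (*remove)(struct device *dev);
};

#define to_example_driver(d) container_of(d, struct example_driver, driver)

static int example_bus_probe(struct device *dev)
{
	return to_example_driver(dev->driver)->probe(dev);
}

static void example_bus_remove(struct device *dev)
{
	to_example_driver(dev->driver)->remove(dev);
}

static const struct bus_type example_bus_type = {
	.name   = "example",
	.probe  = example_bus_probe,
	.remove = example_bus_remove,
};
```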

Error handling:

- Clear stale errors on reporting agents upon probe so they don't
look like recent errors (Lukas Wunner)

- Add generic RAS tracepoint for hotplug events (Shuai Xue)

- Add RAS tracepoint for link speed changes (Shuai Xue)

Power management:

- Avoid redundant delay on transition from D3hot to D3cold if the
device was already in D3hot (Brian Norris)

- Prevent runtime suspend until devices are fully initialized to
avoid saving incompletely configured device state (Brian Norris)

Power control:

- Add power_on/off callbacks with generic signature to pwrseq,
tc9563, and slot drivers so they can be used by pwrctrl core
(Manivannan Sadhasivam)

- Add PCIe M.2 connector support to the slot pwrctrl driver
(Manivannan Sadhasivam)

- Switch to pwrctrl interfaces to create, destroy, and power on/off
devices, calling them from host controller drivers instead of the
PCI core (Manivannan Sadhasivam)

- Drop qcom .assert_perst() callbacks since this is now done by the
controller driver instead of the pwrctrl driver (Manivannan
Sadhasivam)

Virtualization:

- Remove an incorrect unlock in pci_slot_trylock() error handling
(Jinhui Guo)

- Lock the bridge device for slot reset (Keith Busch)

- Enable ACS after IOMMU configuration on OF platforms so ACS is
enabled on all devices; previously the first device enumerated
(typically a Root Port) didn't have ACS enabled (Manivannan
Sadhasivam)

- Disable ACS Source Validation for IDT 0x80b5 and 0x8090 switches to
work around hardware erratum; previously ACS SV was only
temporarily disabled, which worked for enumeration but not after
reset (Manivannan Sadhasivam)

Peer-to-peer DMA:

- Release per-CPU pgmap ref when vm_insert_page() fails to avoid hang
when removing the PCI device (Hou Tao)

- Remove incorrect p2pmem_alloc_mmap() warning about page refcount
(Hou Tao)

Endpoint framework:

- Add configfs sub-groups synchronously to avoid NULL pointer
dereference when racing with removal (Liu Song)

- Fix swapped parameters in pci_{primary/secondary}_epc_epf_unlink()
functions (Manikanta Maddireddy)

ASPEED PCIe controller driver:

- Add ASPEED Root Complex DT binding and driver (Jacky Chou)

Freescale i.MX6 PCIe controller driver:

- Add DT binding and driver support for an optional external refclock
in addition to the refclock from the internal PLL (Richard Zhu)

- Fix CLKREQ# control so host asserts it during enumeration and
Endpoints can use it afterwards to exit the L1.2 link state
(Richard Zhu)

NVIDIA Tegra PCIe controller driver:

- Export irq_domain_free_irqs() to allow PCI/MSI drivers that tear
down MSI domains to be built as modules (Aaron Kling)

- Allow pci-tegra to be built as a module (Aaron Kling)

NVIDIA Tegra194 PCIe controller driver:

- Relax Kconfig so tegra194 can be built for platforms beyond
Tegra194 (Vidya Sagar)

Qualcomm PCIe controller driver:

- Merge SC8180x DT binding into SM8150 (Krzysztof Kozlowski)

- Move SDX55, SDM845, QCS404, IPQ5018, IPQ6018, IPQ8074 Gen3,
IPQ8074, IPQ4019, IPQ9574, APQ8064, MSM8996, APQ8084 to dedicated
schema (Krzysztof Kozlowski)

- Add DT binding and driver support for SA8255p Endpoint being
configured by firmware (Mrinmay Sarkar)

- Parse PERST# from all PCIe bridge nodes for future platforms that
will have PERST# in Switch Downstream Ports as well as in Root
Ports (Manivannan Sadhasivam)

Renesas RZ/G3S PCIe controller driver:

- Use pci_generic_config_write() since the writability provided by
the custom wrapper is unnecessary (Claudiu Beznea)

SOPHGO PCIe controller driver:

- Disable ASPM L0s and L1 on Sophgo 2044 PCIe Root Ports (Inochi
Amaoto)

Synopsys DesignWare PCIe controller driver:

- Extend PCI_FIND_NEXT_CAP() and PCI_FIND_NEXT_EXT_CAP() to return a
pointer to the preceding Capability, to allow removal of
Capabilities that are advertised but not fully implemented (Qiang
Yu)

- Remove MSI and MSI-X Capabilities in platforms that can't support
them, so the PCI core automatically falls back to INTx (Qiang Yu)

- Add ASPM L1.1 and L1.2 Substates context to debugfs ltssm_status
for drivers that support this (Shawn Lin)

- Skip PME_Turn_Off broadcast and L2/L3 transition during suspend if
link is not up to avoid an unnecessary timeout (Manivannan
Sadhasivam)

- Revert dw-rockchip, qcom, and DWC core changes that used link-up
IRQs to trigger enumeration instead of waiting for link to be up
because the PCI core doesn't allocate bus number space for
hierarchies that might be attached (Niklas Cassel)

- Make endpoint iATU entry for MSI permanent instead of programming
it dynamically, which is slow and racy with respect to other
concurrent traffic, e.g., eDMA (Koichiro Den)

- Use iMSI-RX MSI target address when possible to fix endpoints using
32-bit MSI (Shawn Lin)

- Allow DWC host controller driver probe to continue if device is not
found or found but inactive; only fail when there's an error with
the link (Manivannan Sadhasivam)

- For controllers like NXP i.MX6QP and i.MX7D, where LTSSM registers
are not accessible after PME_Turn_Off, simply wait 10ms instead of
polling for L2/L3 Ready (Richard Zhu)

- Use multiple iATU entries to map large bridge windows and DMA
ranges when necessary instead of failing (Samuel Holland)

- Add EPC dynamic_inbound_mapping feature bit for Endpoint
Controllers that can update BAR inbound address translation without
requiring EPF driver to clear/reset the BAR first, and advertise it
for DWC-based Endpoints (Koichiro Den)

- Add EPC subrange_mapping feature bit for Endpoint Controllers that
can map multiple independent inbound regions in a single BAR,
implement subrange mapping, advertise it for DWC-based Endpoints,
and add Endpoint selftests for it (Koichiro Den)
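The two-stage flow behind these feature bits can be sketched in EPF-driver style C (a hedged illustration based on the description above; the helper name, the subrange count, and the surrounding error handling are hypothetical, and real drivers split the two stages across bind and link-up notification):

```c
/* Hedged sketch of dynamic_inbound_mapping + subrange_mapping usage.
 * Stage 1 advertises the BAR size before link-up; stage 2 re-calls
 * pci_epc_set_bar() for the same BAR, after the host has assigned the
 * BAR address, to install the subranges.  No pci_epc_clear_bar() may
 * be called in between.  Names other than pci_epc_set_bar() and the
 * pci_epf_bar fields are hypothetical.
 */
static int epf_demo_map_bar(struct pci_epf *epf, struct pci_epf_bar *bar)
{
	int ret;

	bar->num_submap = 0;	/* stage 1: size only, no subranges yet */
	ret = pci_epc_set_bar(epf->epc, epf->func_no, epf->vfunc_no, bar);
	if (ret)
		return ret;

	/* ... PCIe link comes up; host enumerates and programs the BAR ... */

	bar->num_submap = 2;	/* stage 2: bar->submap[] filled by caller */
	return pci_epc_set_bar(epf->epc, epf->func_no, epf->vfunc_no, bar);
}
```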

- Make resizable BARs work for Endpoint multi-PF configurations;
previously it only worked for PF 0 (Aksh Garg)

- Fix Endpoint non-PF 0 support for BAR configuration, ATU mappings,
and Address Match Mode (Aksh Garg)

- Set up iATU when ECAM is enabled; previously IO and MEM outbound
windows weren't programmed, and ECAM-related iATU entries weren't
restored after suspend/resume, so config accesses failed (Krishna
Chaitanya Chundru)

Miscellaneous:

- Use system_percpu_wq and WQ_PERCPU to explicitly request per-CPU
work so WQ_UNBOUND can eventually be removed (Marco Crivellari)"

* tag 'pci-v7.0-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (176 commits)
PCI/bwctrl: Disable BW controller on Intel P45 using a quirk
PCI: Disable ACS SV for IDT 0x8090 switch
PCI: Disable ACS SV for IDT 0x80b5 switch
PCI: Cache ACS Capabilities register
PCI: Enable ACS after configuring IOMMU for OF platforms
PCI: Add ACS quirk for Pericom PI7C9X2G404 switches [12d8:b404]
PCI: Add ACS quirk for Qualcomm Hamoa & Glymur
PCI: Use device_lock_assert() to verify device lock is held
PCI: Use lockdep_assert_held(pci_bus_sem) to verify lock is held
PCI: Fix pci_slot_lock() device locking
PCI: Fix pci_slot_trylock() error handling
PCI: Mark Nvidia GB10 to avoid bus reset
PCI: Mark ASM1164 SATA controller to avoid bus reset
PCI: host-generic: Avoid reporting incorrect 'missing reg property' error
PCI/PME: Replace RMW of Root Status register with direct write
PCI/AER: Clear stale errors on reporting agents upon probe
PCI: Don't claim disabled bridge windows
PCI: rzg3s-host: Fix device node reference leak in rzg3s_pcie_host_parse_port()
PCI: dwc: Fix missing iATU setup when ECAM is enabled
PCI: dwc: Clean up iATU index usage in dw_pcie_iatu_setup()
...

6650 insertions(+), 2513 deletions(-)
Documentation/PCI/endpoint/pci-endpoint.rst (+24):

@@ -95,6 +95,30 @@
 Register space of the function driver is usually configured
 using this API.
 
+Some endpoint controllers also support calling pci_epc_set_bar() again
+for the same BAR (without calling pci_epc_clear_bar()) to update inbound
+address translations after the host has programmed the BAR base address.
+Endpoint function drivers can check this capability via the
+dynamic_inbound_mapping EPC feature bit.
+
+When pci_epf_bar.num_submap is non-zero, the endpoint function driver is
+requesting BAR subrange mapping using pci_epf_bar.submap. This requires
+the EPC to advertise support via the subrange_mapping EPC feature bit.
+
+When an EPF driver wants to make use of the inbound subrange mapping
+feature, it requires that the BAR base address has been programmed by
+the host during enumeration. Thus, it needs to call pci_epc_set_bar()
+twice for the same BAR (requires dynamic_inbound_mapping): first with
+num_submap set to zero and configuring the BAR size, then after the PCIe
+link is up and the host enumerates the endpoint and programs the BAR
+base address, again with num_submap set to non-zero value.
+
+Note that when making use of the inbound subrange mapping feature, the
+EPF driver must not call pci_epc_clear_bar() between the two
+pci_epc_set_bar() calls, because clearing the BAR can clear/disable the
+BAR register or BAR decode on the endpoint while the host still expects
+the assigned BAR address to remain valid.
+
 * pci_epc_clear_bar()
 
 The PCI endpoint function driver should use pci_epc_clear_bar() to reset
Documentation/PCI/endpoint/pci-test-howto.rst (+19):

@@ -84,6 +84,25 @@
     # echo 32 > functions/pci_epf_test/func1/msi_interrupts
     # echo 2048 > functions/pci_epf_test/func1/msix_interrupts
 
+By default, pci-epf-test uses the following BAR sizes::
+
+    # grep . functions/pci_epf_test/func1/pci_epf_test.0/bar?_size
+    functions/pci_epf_test/func1/pci_epf_test.0/bar0_size:131072
+    functions/pci_epf_test/func1/pci_epf_test.0/bar1_size:131072
+    functions/pci_epf_test/func1/pci_epf_test.0/bar2_size:131072
+    functions/pci_epf_test/func1/pci_epf_test.0/bar3_size:131072
+    functions/pci_epf_test/func1/pci_epf_test.0/bar4_size:131072
+    functions/pci_epf_test/func1/pci_epf_test.0/bar5_size:1048576
+
+The user can override a default value using e.g.::
+    # echo 1048576 > functions/pci_epf_test/func1/pci_epf_test.0/bar1_size
+
+Overriding the default BAR sizes can only be done before binding the
+pci-epf-test device to a PCI endpoint controller driver.
+
+Note: Some endpoint controllers might have fixed-size BARs or reserved BARs;
+for such controllers, the corresponding BAR size in configfs will be ignored.
+
 
 Binding pci-epf-test Device to EP Controller
 --------------------------------------------
Documentation/PCI/endpoint/pci-vntb-howto.rst (+7 -7):

@@ -52,14 +52,14 @@
     # cd /sys/kernel/config/pci_ep/
     # mkdir functions/pci_epf_vntb/func1
 
-The "mkdir func1" above creates the pci-epf-ntb function device that will
+The "mkdir func1" above creates the pci-epf-vntb function device that will
 be probed by pci_epf_vntb driver.
 
 The PCI endpoint framework populates the directory with the following
 configurable fields::
 
-    # ls functions/pci_epf_ntb/func1
-    baseclass_code deviceid msi_interrupts pci-epf-ntb.0
+    # ls functions/pci_epf_vntb/func1
+    baseclass_code deviceid msi_interrupts pci-epf-vntb.0
     progif_code secondary subsys_id vendorid
     cache_line_size interrupt_pin msix_interrupts primary
     revid subclass_code subsys_vendor_id
@@ -111,13 +111,13 @@
     # echo 0x080A > functions/pci_epf_vntb/func1/pci_epf_vntb.0/vntb_pid
     # echo 0x10 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/vbus_number
 
-Binding pci-epf-ntb Device to EP Controller
+Binding pci-epf-vntb Device to EP Controller
 --------------------------------------------
 
 NTB function device should be attached to PCI endpoint controllers
 connected to the host.
 
-    # ln -s controllers/5f010000.pcie_ep functions/pci-epf-ntb/func1/primary
+    # ln -s controllers/5f010000.pcie_ep functions/pci_epf_vntb/func1/primary
 
 Once the above step is completed, the PCI endpoint controllers are ready to
 establish a link with the host.
@@ -139,7 +139,7 @@
 -------------------------
 
 Note that the devices listed here correspond to the values populated in
-"Creating pci-epf-ntb Device" section above::
+"Creating pci-epf-vntb Device" section above::
 
     # lspci
     00:00.0 PCI bridge: Freescale Semiconductor Inc Device 0000 (rev 01)
@@ -152,7 +152,7 @@
 -----------------------------------------
 
 Note that the devices listed here correspond to the values populated in
-"Creating pci-epf-ntb Device" section above::
+"Creating pci-epf-vntb Device" section above::
 
     # lspci
     10:00.0 Unassigned class [ffff]: Dawicontrol Computersysteme GmbH Device 1234 (rev ff)
Documentation/PCI/msi-howto.rst (+3 -3):

@@ -98,7 +98,7 @@
 
 which allocates up to max_vecs interrupt vectors for a PCI device. It
 returns the number of vectors allocated or a negative error. If the device
-has a requirements for a minimum number of vectors the driver can pass a
+has a requirement for a minimum number of vectors the driver can pass a
 min_vecs argument set to this limit, and the PCI core will return -ENOSPC
 if it can't meet the minimum number of vectors.
 
@@ -127,7 +127,7 @@
 some platforms, MSI interrupts must all be targeted at the same set of CPUs
 whereas MSI-X interrupts can all be targeted at different CPUs.
 
-If a device supports neither MSI-X or MSI it will fall back to a single
+If a device supports neither MSI-X nor MSI it will fall back to a single
 legacy IRQ vector.
 
 The typical usage of MSI or MSI-X interrupts is to allocate as many vectors
@@ -203,7 +203,7 @@
 ----------------------------------------------------
 
 Using 'lspci -v' (as root) may show some devices with "MSI", "Message
-Signalled Interrupts" or "MSI-X" capabilities. Each of these capabilities
+Signaled Interrupts" or "MSI-X" capabilities. Each of these capabilities
 has an 'Enable' flag which is followed with either "+" (enabled)
 or "-" (disabled).
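The pci_alloc_irq_vectors() contract described in that howto (min_vecs/max_vecs, -ENOSPC fallback) looks roughly like this in a driver. A hedged sketch: demo_handler, the "demo" name, and the vector counts are purely illustrative.

```c
/* Hedged sketch: ask for 1..32 vectors, preferring MSI-X over MSI,
 * with automatic fallback to a single INTx vector.  If even min_vecs
 * (1 here) cannot be satisfied, -ENOSPC is returned.
 */
static irqreturn_t demo_handler(int irq, void *data)
{
	return IRQ_HANDLED;	/* illustrative no-op handler */
}

static int demo_setup_irqs(struct pci_dev *pdev)
{
	int nvecs;

	nvecs = pci_alloc_irq_vectors(pdev, 1, 32,
				      PCI_IRQ_MSIX | PCI_IRQ_MSI | PCI_IRQ_INTX);
	if (nvecs < 0)
		return nvecs;

	/* pci_irq_vector() maps a vector index to a Linux IRQ number. */
	return request_irq(pci_irq_vector(pdev, 0), demo_handler, 0,
			   "demo", pdev);
}
```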
Documentation/devicetree/bindings/pci/aspeed,ast2600-pcie.yaml (+182, new file):
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/aspeed,ast2600-pcie.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: ASPEED PCIe Root Complex Controller 8 + 9 + maintainers: 10 + - Jacky Chou <jacky_chou@aspeedtech.com> 11 + 12 + description: 13 + The ASPEED PCIe Root Complex controller provides PCI Express Root Complex 14 + functionality for ASPEED SoCs, such as the AST2600 and AST2700. 15 + This controller enables connectivity to PCIe endpoint devices, supporting 16 + memory and I/O windows, MSI and INTx interrupts, and integration with 17 + the SoC's clock, reset, and pinctrl subsystems. On AST2600, the PCIe Root 18 + Port device number is always 8. 19 + 20 + properties: 21 + compatible: 22 + enum: 23 + - aspeed,ast2600-pcie 24 + - aspeed,ast2700-pcie 25 + 26 + reg: 27 + maxItems: 1 28 + 29 + ranges: 30 + minItems: 2 31 + maxItems: 2 32 + 33 + interrupts: 34 + maxItems: 1 35 + description: INTx and MSI interrupt 36 + 37 + resets: 38 + items: 39 + - description: PCIe controller reset 40 + 41 + reset-names: 42 + items: 43 + - const: h2x 44 + 45 + aspeed,ahbc: 46 + $ref: /schemas/types.yaml#/definitions/phandle 47 + description: 48 + Phandle to the ASPEED AHB Controller (AHBC) syscon node. 49 + This reference is used by the PCIe controller to access 50 + system-level configuration registers related to the AHB bus. 51 + To enable AHB access for the PCIe controller. 52 + 53 + aspeed,pciecfg: 54 + $ref: /schemas/types.yaml#/definitions/phandle 55 + description: 56 + Phandle to the ASPEED PCIe configuration syscon node. 57 + This reference allows the PCIe controller to access 58 + SoC-specific PCIe configuration registers. There are the others 59 + functions such PCIe RC and PCIe EP will use this common register 60 + to configure the SoC interfaces. 
61 + 62 + interrupt-controller: true 63 + 64 + patternProperties: 65 + "^pcie@[0-9a-f]+,0$": 66 + type: object 67 + $ref: /schemas/pci/pci-pci-bridge.yaml# 68 + 69 + properties: 70 + reg: 71 + maxItems: 1 72 + 73 + resets: 74 + items: 75 + - description: PERST# signal 76 + 77 + reset-names: 78 + items: 79 + - const: perst 80 + 81 + clocks: 82 + maxItems: 1 83 + 84 + phys: 85 + maxItems: 1 86 + 87 + required: 88 + - resets 89 + - reset-names 90 + - clocks 91 + - phys 92 + - ranges 93 + 94 + unevaluatedProperties: false 95 + 96 + allOf: 97 + - $ref: /schemas/pci/pci-host-bridge.yaml# 98 + - $ref: /schemas/interrupt-controller/msi-controller.yaml# 99 + - if: 100 + properties: 101 + compatible: 102 + contains: 103 + const: aspeed,ast2600-pcie 104 + then: 105 + required: 106 + - aspeed,ahbc 107 + else: 108 + properties: 109 + aspeed,ahbc: false 110 + - if: 111 + properties: 112 + compatible: 113 + contains: 114 + const: aspeed,ast2700-pcie 115 + then: 116 + required: 117 + - aspeed,pciecfg 118 + else: 119 + properties: 120 + aspeed,pciecfg: false 121 + 122 + required: 123 + - reg 124 + - interrupts 125 + - bus-range 126 + - ranges 127 + - resets 128 + - reset-names 129 + - msi-controller 130 + - interrupt-controller 131 + - interrupt-map-mask 132 + - interrupt-map 133 + 134 + unevaluatedProperties: false 135 + 136 + examples: 137 + - | 138 + #include <dt-bindings/interrupt-controller/arm-gic.h> 139 + #include <dt-bindings/clock/ast2600-clock.h> 140 + 141 + pcie0: pcie@1e770000 { 142 + compatible = "aspeed,ast2600-pcie"; 143 + device_type = "pci"; 144 + reg = <0x1e770000 0x100>; 145 + #address-cells = <3>; 146 + #size-cells = <2>; 147 + interrupts = <GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>; 148 + bus-range = <0x00 0xff>; 149 + 150 + ranges = <0x01000000 0x0 0x00018000 0x00018000 0x0 0x00008000 151 + 0x02000000 0x0 0x60000000 0x60000000 0x0 0x20000000>; 152 + 153 + resets = <&syscon ASPEED_RESET_H2X>; 154 + reset-names = "h2x"; 155 + 156 + #interrupt-cells = <1>; 157 + 
msi-controller; 158 + 159 + aspeed,ahbc = <&ahbc>; 160 + 161 + interrupt-controller; 162 + interrupt-map-mask = <0 0 0 7>; 163 + interrupt-map = <0 0 0 1 &pcie0 0>, 164 + <0 0 0 2 &pcie0 1>, 165 + <0 0 0 3 &pcie0 2>, 166 + <0 0 0 4 &pcie0 3>; 167 + 168 + pcie@8,0 { 169 + compatible = "pciclass,0604"; 170 + reg = <0x00004000 0 0 0 0>; 171 + #address-cells = <3>; 172 + #size-cells = <2>; 173 + device_type = "pci"; 174 + resets = <&syscon ASPEED_RESET_PCIE_RC_O>; 175 + reset-names = "perst"; 176 + clocks = <&syscon ASPEED_CLK_GATE_BCLK>; 177 + pinctrl-names = "default"; 178 + pinctrl-0 = <&pinctrl_pcierc1_default>; 179 + phys = <&pcie_phy1>; 180 + ranges; 181 + }; 182 + };
Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.yaml (+5 -2):

@@ -44,7 +44,7 @@
 
   clock-names:
     minItems: 3
-    maxItems: 5
+    maxItems: 6
 
   interrupts:
     minItems: 1
@@ -212,14 +212,17 @@
     then:
       properties:
         clocks:
-          maxItems: 5
+          minItems: 5
+          maxItems: 6
         clock-names:
+          minItems: 5
           items:
             - const: pcie
             - const: pcie_bus
             - const: pcie_phy
             - const: pcie_aux
             - const: ref
+            - const: extref # Optional
 
 unevaluatedProperties: false
 
Documentation/devicetree/bindings/pci/loongson.yaml (+2):

@@ -32,6 +32,8 @@
     minItems: 1
     maxItems: 3
 
+  msi-parent: true
+
 required:
   - compatible
   - reg
Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml (+1):

@@ -48,6 +48,7 @@
     oneOf:
       - items:
           - enum:
+              - mediatek,mt7981-pcie
               - mediatek,mt7986-pcie
               - mediatek,mt8188-pcie
               - mediatek,mt8195-pcie
Documentation/devicetree/bindings/pci/qcom,pcie-apq8064.yaml (+170, new file):
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/qcom,pcie-apq8064.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Qualcomm APQ8064/IPQ8064 PCI Express Root Complex 8 + 9 + maintainers: 10 + - Bjorn Andersson <andersson@kernel.org> 11 + - Manivannan Sadhasivam <mani@kernel.org> 12 + 13 + properties: 14 + compatible: 15 + enum: 16 + - qcom,pcie-apq8064 17 + - qcom,pcie-ipq8064 18 + - qcom,pcie-ipq8064-v2 19 + 20 + reg: 21 + maxItems: 4 22 + 23 + reg-names: 24 + items: 25 + - const: dbi 26 + - const: elbi 27 + - const: parf 28 + - const: config 29 + 30 + clocks: 31 + minItems: 3 32 + maxItems: 5 33 + 34 + clock-names: 35 + minItems: 3 36 + items: 37 + - const: core # Clocks the pcie hw block 38 + - const: iface # Configuration AHB clock 39 + - const: phy 40 + - const: aux 41 + - const: ref 42 + 43 + interrupts: 44 + maxItems: 1 45 + 46 + interrupt-names: 47 + items: 48 + - const: msi 49 + 50 + resets: 51 + minItems: 5 52 + maxItems: 6 53 + 54 + reset-names: 55 + minItems: 5 56 + items: 57 + - const: axi 58 + - const: ahb 59 + - const: por 60 + - const: pci 61 + - const: phy 62 + - const: ext 63 + 64 + vdda-supply: 65 + description: A phandle to the core analog power supply 66 + 67 + vdda_phy-supply: 68 + description: A phandle to the core analog power supply for PHY 69 + 70 + vdda_refclk-supply: 71 + description: A phandle to the core analog power supply for IC which generates reference clock 72 + 73 + required: 74 + - resets 75 + - reset-names 76 + - vdda-supply 77 + - vdda_phy-supply 78 + - vdda_refclk-supply 79 + 80 + allOf: 81 + - $ref: qcom,pcie-common.yaml# 82 + - if: 83 + properties: 84 + compatible: 85 + contains: 86 + enum: 87 + - qcom,pcie-apq8064 88 + then: 89 + properties: 90 + clocks: 91 + maxItems: 3 92 + clock-names: 93 + maxItems: 3 94 + resets: 95 + maxItems: 5 96 + reset-names: 97 + maxItems: 5 98 + else: 99 + properties: 100 + clocks: 
101 + minItems: 5 102 + clock-names: 103 + minItems: 5 104 + resets: 105 + minItems: 6 106 + reset-names: 107 + minItems: 6 108 + 109 + unevaluatedProperties: false 110 + 111 + examples: 112 + - | 113 + #include <dt-bindings/clock/qcom,gcc-msm8960.h> 114 + #include <dt-bindings/gpio/gpio.h> 115 + #include <dt-bindings/interrupt-controller/arm-gic.h> 116 + #include <dt-bindings/reset/qcom,gcc-msm8960.h> 117 + 118 + pcie@1b500000 { 119 + compatible = "qcom,pcie-apq8064"; 120 + reg = <0x1b500000 0x1000>, 121 + <0x1b502000 0x80>, 122 + <0x1b600000 0x100>, 123 + <0x0ff00000 0x100000>; 124 + reg-names = "dbi", "elbi", "parf", "config"; 125 + ranges = <0x81000000 0x0 0x00000000 0x0fe00000 0x0 0x00100000>, /* I/O */ 126 + <0x82000000 0x0 0x08000000 0x08000000 0x0 0x07e00000>; /* mem */ 127 + 128 + device_type = "pci"; 129 + linux,pci-domain = <0>; 130 + bus-range = <0x00 0xff>; 131 + num-lanes = <1>; 132 + #address-cells = <3>; 133 + #size-cells = <2>; 134 + 135 + clocks = <&gcc PCIE_A_CLK>, 136 + <&gcc PCIE_H_CLK>, 137 + <&gcc PCIE_PHY_REF_CLK>; 138 + clock-names = "core", "iface", "phy"; 139 + 140 + interrupts = <GIC_SPI 238 IRQ_TYPE_LEVEL_HIGH>; 141 + interrupt-names = "msi"; 142 + #interrupt-cells = <1>; 143 + interrupt-map-mask = <0 0 0 0x7>; 144 + interrupt-map = <0 0 0 1 &intc GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>, /* int_a */ 145 + <0 0 0 2 &intc GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>, /* int_b */ 146 + <0 0 0 3 &intc GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>, /* int_c */ 147 + <0 0 0 4 &intc GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>; /* int_d */ 148 + 149 + resets = <&gcc PCIE_ACLK_RESET>, 150 + <&gcc PCIE_HCLK_RESET>, 151 + <&gcc PCIE_POR_RESET>, 152 + <&gcc PCIE_PCI_RESET>, 153 + <&gcc PCIE_PHY_RESET>; 154 + reset-names = "axi", "ahb", "por", "pci", "phy"; 155 + 156 + perst-gpios = <&tlmm_pinmux 27 GPIO_ACTIVE_LOW>; 157 + vdda-supply = <&pm8921_s3>; 158 + vdda_phy-supply = <&pm8921_lvs6>; 159 + vdda_refclk-supply = <&v3p3_fixed>; 160 + 161 + pcie@0 { 162 + device_type = "pci"; 163 + reg = <0x0 
0x0 0x0 0x0 0x0>; 164 + bus-range = <0x01 0xff>; 165 + 166 + #address-cells = <3>; 167 + #size-cells = <2>; 168 + ranges; 169 + }; 170 + };
Documentation/devicetree/bindings/pci/qcom,pcie-apq8084.yaml (+109, new file):
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/qcom,pcie-apq8084.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Qualcomm APQ8084 PCI Express Root Complex 8 + 9 + maintainers: 10 + - Bjorn Andersson <andersson@kernel.org> 11 + - Manivannan Sadhasivam <mani@kernel.org> 12 + 13 + properties: 14 + compatible: 15 + enum: 16 + - qcom,pcie-apq8084 17 + 18 + reg: 19 + minItems: 4 20 + maxItems: 5 21 + 22 + reg-names: 23 + minItems: 4 24 + items: 25 + - const: parf 26 + - const: dbi 27 + - const: elbi 28 + - const: config 29 + - const: mhi 30 + 31 + clocks: 32 + maxItems: 4 33 + 34 + clock-names: 35 + items: 36 + - const: iface # Configuration AHB clock 37 + - const: master_bus # Master AXI clock 38 + - const: slave_bus # Slave AXI clock 39 + - const: aux 40 + 41 + interrupts: 42 + maxItems: 1 43 + 44 + interrupt-names: 45 + items: 46 + - const: msi 47 + 48 + resets: 49 + maxItems: 1 50 + 51 + reset-names: 52 + items: 53 + - const: core 54 + 55 + vdda-supply: 56 + description: A phandle to the core analog power supply 57 + 58 + required: 59 + - power-domains 60 + - resets 61 + - reset-names 62 + 63 + allOf: 64 + - $ref: qcom,pcie-common.yaml# 65 + 66 + unevaluatedProperties: false 67 + 68 + examples: 69 + - | 70 + #include <dt-bindings/interrupt-controller/arm-gic.h> 71 + #include <dt-bindings/gpio/gpio.h> 72 + pcie@fc520000 { 73 + compatible = "qcom,pcie-apq8084"; 74 + reg = <0xfc520000 0x2000>, 75 + <0xff000000 0x1000>, 76 + <0xff001000 0x1000>, 77 + <0xff002000 0x2000>; 78 + reg-names = "parf", "dbi", "elbi", "config"; 79 + device_type = "pci"; 80 + linux,pci-domain = <0>; 81 + bus-range = <0x00 0xff>; 82 + num-lanes = <1>; 83 + #address-cells = <3>; 84 + #size-cells = <2>; 85 + ranges = <0x81000000 0 0 0xff200000 0 0x00100000>, 86 + <0x82000000 0 0x00300000 0xff300000 0 0x00d00000>; 87 + interrupts = <GIC_SPI 243 IRQ_TYPE_LEVEL_HIGH>; 88 + interrupt-names 
= "msi"; 89 + #interrupt-cells = <1>; 90 + interrupt-map-mask = <0 0 0 0x7>; 91 + interrupt-map = <0 0 0 1 &intc 0 244 IRQ_TYPE_LEVEL_HIGH>, 92 + <0 0 0 2 &intc 0 245 IRQ_TYPE_LEVEL_HIGH>, 93 + <0 0 0 3 &intc 0 247 IRQ_TYPE_LEVEL_HIGH>, 94 + <0 0 0 4 &intc 0 248 IRQ_TYPE_LEVEL_HIGH>; 95 + clocks = <&gcc 324>, 96 + <&gcc 325>, 97 + <&gcc 327>, 98 + <&gcc 323>; 99 + clock-names = "iface", "master_bus", "slave_bus", "aux"; 100 + resets = <&gcc 81>; 101 + reset-names = "core"; 102 + power-domains = <&gcc 1>; 103 + vdda-supply = <&pma8084_l3>; 104 + phys = <&pciephy0>; 105 + phy-names = "pciephy"; 106 + perst-gpios = <&tlmm 70 GPIO_ACTIVE_LOW>; 107 + pinctrl-0 = <&pcie0_pins_default>; 108 + pinctrl-names = "default"; 109 + };
Documentation/devicetree/bindings/pci/qcom,pcie-ipq4019.yaml (+146, new file):
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/pci/qcom,pcie-ipq4019.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Qualcomm IPQ4019 PCI Express Root Complex 8 + 9 + maintainers: 10 + - Bjorn Andersson <andersson@kernel.org> 11 + - Manivannan Sadhasivam <mani@kernel.org> 12 + 13 + properties: 14 + compatible: 15 + enum: 16 + - qcom,pcie-ipq4019 17 + 18 + reg: 19 + maxItems: 4 20 + 21 + reg-names: 22 + items: 23 + - const: dbi 24 + - const: elbi 25 + - const: parf 26 + - const: config 27 + 28 + clocks: 29 + maxItems: 3 30 + 31 + clock-names: 32 + items: 33 + - const: aux 34 + - const: master_bus # Master AXI clock 35 + - const: slave_bus # Slave AXI clock 36 + 37 + interrupts: 38 + maxItems: 1 39 + 40 + interrupt-names: 41 + items: 42 + - const: msi 43 + 44 + resets: 45 + maxItems: 12 46 + 47 + reset-names: 48 + items: 49 + - const: axi_m # AXI master reset 50 + - const: axi_s # AXI slave reset 51 + - const: pipe 52 + - const: axi_m_vmid 53 + - const: axi_s_xpu 54 + - const: parf 55 + - const: phy 56 + - const: axi_m_sticky # AXI master sticky reset 57 + - const: pipe_sticky 58 + - const: pwr 59 + - const: ahb 60 + - const: phy_ahb 61 + 62 + required: 63 + - resets 64 + - reset-names 65 + 66 + allOf: 67 + - $ref: qcom,pcie-common.yaml# 68 + 69 + unevaluatedProperties: false 70 + 71 + examples: 72 + - | 73 + #include <dt-bindings/clock/qcom,gcc-ipq4019.h> 74 + #include <dt-bindings/gpio/gpio.h> 75 + #include <dt-bindings/interrupt-controller/arm-gic.h> 76 + 77 + pcie@40000000 { 78 + compatible = "qcom,pcie-ipq4019"; 79 + reg = <0x40000000 0xf1d>, 80 + <0x40000f20 0xa8>, 81 + <0x80000 0x2000>, 82 + <0x40100000 0x1000>; 83 + reg-names = "dbi", "elbi", "parf", "config"; 84 + ranges = <0x81000000 0x0 0x00000000 0x40200000 0x0 0x00100000>, 85 + <0x82000000 0x0 0x40300000 0x40300000 0x0 0x00d00000>; 86 + 87 + device_type = "pci"; 88 + linux,pci-domain = <0>; 89 
+ bus-range = <0x00 0xff>; 90 + num-lanes = <1>; 91 + #address-cells = <3>; 92 + #size-cells = <2>; 93 + 94 + clocks = <&gcc GCC_PCIE_AHB_CLK>, 95 + <&gcc GCC_PCIE_AXI_M_CLK>, 96 + <&gcc GCC_PCIE_AXI_S_CLK>; 97 + clock-names = "aux", 98 + "master_bus", 99 + "slave_bus"; 100 + 101 + interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>; 102 + interrupt-names = "msi"; 103 + #interrupt-cells = <1>; 104 + interrupt-map-mask = <0 0 0 0x7>; 105 + interrupt-map = <0 0 0 1 &intc GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>, /* int_a */ 106 + <0 0 0 2 &intc GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>, /* int_b */ 107 + <0 0 0 3 &intc GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>, /* int_c */ 108 + <0 0 0 4 &intc GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>; /* int_d */ 109 + 110 + resets = <&gcc PCIE_AXI_M_ARES>, 111 + <&gcc PCIE_AXI_S_ARES>, 112 + <&gcc PCIE_PIPE_ARES>, 113 + <&gcc PCIE_AXI_M_VMIDMT_ARES>, 114 + <&gcc PCIE_AXI_S_XPU_ARES>, 115 + <&gcc PCIE_PARF_XPU_ARES>, 116 + <&gcc PCIE_PHY_ARES>, 117 + <&gcc PCIE_AXI_M_STICKY_ARES>, 118 + <&gcc PCIE_PIPE_STICKY_ARES>, 119 + <&gcc PCIE_PWR_ARES>, 120 + <&gcc PCIE_AHB_ARES>, 121 + <&gcc PCIE_PHY_AHB_ARES>; 122 + reset-names = "axi_m", 123 + "axi_s", 124 + "pipe", 125 + "axi_m_vmid", 126 + "axi_s_xpu", 127 + "parf", 128 + "phy", 129 + "axi_m_sticky", 130 + "pipe_sticky", 131 + "pwr", 132 + "ahb", 133 + "phy_ahb"; 134 + 135 + perst-gpios = <&tlmm 38 GPIO_ACTIVE_LOW>; 136 + 137 + pcie@0 { 138 + device_type = "pci"; 139 + reg = <0x0 0x0 0x0 0x0 0x0>; 140 + bus-range = <0x01 0xff>; 141 + 142 + #address-cells = <3>; 143 + #size-cells = <2>; 144 + ranges; 145 + }; 146 + };
+189
Documentation/devicetree/bindings/pci/qcom,pcie-ipq5018.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/qcom,pcie-ipq5018.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm IPQ5018 PCI Express Root Complex
+
+maintainers:
+  - Bjorn Andersson <andersson@kernel.org>
+  - Manivannan Sadhasivam <mani@kernel.org>
+
+properties:
+  compatible:
+    enum:
+      - qcom,pcie-ipq5018
+
+  reg:
+    minItems: 5
+    maxItems: 6
+
+  reg-names:
+    minItems: 5
+    items:
+      - const: dbi
+      - const: elbi
+      - const: atu
+      - const: parf
+      - const: config
+      - const: mhi
+
+  clocks:
+    maxItems: 6
+
+  clock-names:
+    items:
+      - const: iface # PCIe to SysNOC BIU clock
+      - const: axi_m # AXI Master clock
+      - const: axi_s # AXI Slave clock
+      - const: ahb
+      - const: aux
+      - const: axi_bridge
+
+  interrupts:
+    maxItems: 9
+
+  interrupt-names:
+    items:
+      - const: msi0
+      - const: msi1
+      - const: msi2
+      - const: msi3
+      - const: msi4
+      - const: msi5
+      - const: msi6
+      - const: msi7
+      - const: global
+
+  resets:
+    maxItems: 8
+
+  reset-names:
+    items:
+      - const: pipe
+      - const: sleep
+      - const: sticky # Core sticky reset
+      - const: axi_m # AXI master reset
+      - const: axi_s # AXI slave reset
+      - const: ahb
+      - const: axi_m_sticky # AXI master sticky reset
+      - const: axi_s_sticky # AXI slave sticky reset
+
+required:
+  - resets
+  - reset-names
+
+allOf:
+  - $ref: qcom,pcie-common.yaml#
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/qcom,gcc-ipq5018.h>
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/reset/qcom,gcc-ipq5018.h>
+
+    pcie@a0000000 {
+        compatible = "qcom,pcie-ipq5018";
+        reg = <0xa0000000 0xf1d>,
+              <0xa0000f20 0xa8>,
+              <0xa0001000 0x1000>,
+              <0x00080000 0x3000>,
+              <0xa0100000 0x1000>,
+              <0x00083000 0x1000>;
+        reg-names = "dbi",
+                    "elbi",
+                    "atu",
+                    "parf",
+                    "config",
+                    "mhi";
+        ranges = <0x01000000 0 0x00000000 0xa0200000 0 0x00100000>,
+                 <0x02000000 0 0xa0300000 0xa0300000 0 0x10000000>;
+
+        device_type = "pci";
+        linux,pci-domain = <0>;
+        bus-range = <0x00 0xff>;
+        num-lanes = <2>;
+        #address-cells = <3>;
+        #size-cells = <2>;
+
+        /* The controller supports Gen3, but the connected PHY is Gen2-capable */
+        max-link-speed = <2>;
+
+        clocks = <&gcc GCC_SYS_NOC_PCIE0_AXI_CLK>,
+                 <&gcc GCC_PCIE0_AXI_M_CLK>,
+                 <&gcc GCC_PCIE0_AXI_S_CLK>,
+                 <&gcc GCC_PCIE0_AHB_CLK>,
+                 <&gcc GCC_PCIE0_AUX_CLK>,
+                 <&gcc GCC_PCIE0_AXI_S_BRIDGE_CLK>;
+        clock-names = "iface",
+                      "axi_m",
+                      "axi_s",
+                      "ahb",
+                      "aux",
+                      "axi_bridge";
+
+        msi-map = <0x0 &v2m0 0x0 0xff8>;
+
+        interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 59 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 51 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "msi0",
+                          "msi1",
+                          "msi2",
+                          "msi3",
+                          "msi4",
+                          "msi5",
+                          "msi6",
+                          "msi7",
+                          "global";
+
+        #interrupt-cells = <1>;
+        interrupt-map-mask = <0 0 0 0x7>;
+        interrupt-map = <0 0 0 1 &intc 0 GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>,
+                        <0 0 0 2 &intc 0 GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>,
+                        <0 0 0 3 &intc 0 GIC_SPI 79 IRQ_TYPE_LEVEL_HIGH>,
+                        <0 0 0 4 &intc 0 GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;
+
+        phys = <&pcie0_phy>;
+        phy-names = "pciephy";
+
+        resets = <&gcc GCC_PCIE0_PIPE_ARES>,
+                 <&gcc GCC_PCIE0_SLEEP_ARES>,
+                 <&gcc GCC_PCIE0_CORE_STICKY_ARES>,
+                 <&gcc GCC_PCIE0_AXI_MASTER_ARES>,
+                 <&gcc GCC_PCIE0_AXI_SLAVE_ARES>,
+                 <&gcc GCC_PCIE0_AHB_ARES>,
+                 <&gcc GCC_PCIE0_AXI_MASTER_STICKY_ARES>,
+                 <&gcc GCC_PCIE0_AXI_SLAVE_STICKY_ARES>;
+        reset-names = "pipe",
+                      "sleep",
+                      "sticky",
+                      "axi_m",
+                      "axi_s",
+                      "ahb",
+                      "axi_m_sticky",
+                      "axi_s_sticky";
+
+        perst-gpios = <&tlmm 15 GPIO_ACTIVE_LOW>;
+        wake-gpios = <&tlmm 16 GPIO_ACTIVE_LOW>;
+
+        pcie@0 {
+            device_type = "pci";
+            reg = <0x0 0x0 0x0 0x0 0x0>;
+            bus-range = <0x01 0xff>;
+
+            #address-cells = <3>;
+            #size-cells = <2>;
+            ranges;
+        };
+    };
+179
Documentation/devicetree/bindings/pci/qcom,pcie-ipq6018.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/qcom,pcie-ipq6018.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm IPQ6018 PCI Express Root Complex
+
+maintainers:
+  - Bjorn Andersson <andersson@kernel.org>
+  - Manivannan Sadhasivam <mani@kernel.org>
+
+properties:
+  compatible:
+    enum:
+      - qcom,pcie-ipq6018
+      - qcom,pcie-ipq8074-gen3
+
+  reg:
+    minItems: 5
+    maxItems: 6
+
+  reg-names:
+    minItems: 5
+    items:
+      - const: dbi
+      - const: elbi
+      - const: atu
+      - const: parf
+      - const: config
+      - const: mhi
+
+  clocks:
+    maxItems: 5
+
+  clock-names:
+    items:
+      - const: iface # PCIe to SysNOC BIU clock
+      - const: axi_m # AXI Master clock
+      - const: axi_s # AXI Slave clock
+      - const: axi_bridge
+      - const: rchng
+
+  interrupts:
+    maxItems: 9
+
+  interrupt-names:
+    items:
+      - const: msi0
+      - const: msi1
+      - const: msi2
+      - const: msi3
+      - const: msi4
+      - const: msi5
+      - const: msi6
+      - const: msi7
+      - const: global
+
+  resets:
+    maxItems: 8
+
+  reset-names:
+    items:
+      - const: pipe
+      - const: sleep
+      - const: sticky # Core sticky reset
+      - const: axi_m # AXI master reset
+      - const: axi_s # AXI slave reset
+      - const: ahb
+      - const: axi_m_sticky # AXI master sticky reset
+      - const: axi_s_sticky # AXI slave sticky reset
+
+required:
+  - resets
+  - reset-names
+
+allOf:
+  - $ref: qcom,pcie-common.yaml#
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/qcom,gcc-ipq6018.h>
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/reset/qcom,gcc-ipq6018.h>
+
+    soc {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        pcie@20000000 {
+            compatible = "qcom,pcie-ipq6018";
+            reg = <0x0 0x20000000 0x0 0xf1d>,
+                  <0x0 0x20000f20 0x0 0xa8>,
+                  <0x0 0x20001000 0x0 0x1000>,
+                  <0x0 0x80000 0x0 0x4000>,
+                  <0x0 0x20100000 0x0 0x1000>;
+            reg-names = "dbi", "elbi", "atu", "parf", "config";
+            ranges = <0x81000000 0x0 0x00000000 0x0 0x20200000 0x0 0x10000>,
+                     <0x82000000 0x0 0x20220000 0x0 0x20220000 0x0 0xfde0000>;
+
+            device_type = "pci";
+            linux,pci-domain = <0>;
+            bus-range = <0x00 0xff>;
+            num-lanes = <1>;
+            max-link-speed = <3>;
+            #address-cells = <3>;
+            #size-cells = <2>;
+
+            clocks = <&gcc GCC_SYS_NOC_PCIE0_AXI_CLK>,
+                     <&gcc GCC_PCIE0_AXI_M_CLK>,
+                     <&gcc GCC_PCIE0_AXI_S_CLK>,
+                     <&gcc GCC_PCIE0_AXI_S_BRIDGE_CLK>,
+                     <&gcc PCIE0_RCHNG_CLK>;
+            clock-names = "iface",
+                          "axi_m",
+                          "axi_s",
+                          "axi_bridge",
+                          "rchng";
+
+            interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 59 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 51 IRQ_TYPE_LEVEL_HIGH>;
+            interrupt-names = "msi0",
+                              "msi1",
+                              "msi2",
+                              "msi3",
+                              "msi4",
+                              "msi5",
+                              "msi6",
+                              "msi7",
+                              "global";
+
+            #interrupt-cells = <1>;
+            interrupt-map-mask = <0 0 0 0x7>;
+            interrupt-map = <0 0 0 1 &intc 0 0 GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+                            <0 0 0 2 &intc 0 0 GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+                            <0 0 0 3 &intc 0 0 GIC_SPI 79 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+                            <0 0 0 4 &intc 0 0 GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+
+            phys = <&pcie_phy>;
+            phy-names = "pciephy";
+
+            resets = <&gcc GCC_PCIE0_PIPE_ARES>,
+                     <&gcc GCC_PCIE0_SLEEP_ARES>,
+                     <&gcc GCC_PCIE0_CORE_STICKY_ARES>,
+                     <&gcc GCC_PCIE0_AXI_MASTER_ARES>,
+                     <&gcc GCC_PCIE0_AXI_SLAVE_ARES>,
+                     <&gcc GCC_PCIE0_AHB_ARES>,
+                     <&gcc GCC_PCIE0_AXI_MASTER_STICKY_ARES>,
+                     <&gcc GCC_PCIE0_AXI_SLAVE_STICKY_ARES>;
+            reset-names = "pipe",
+                          "sleep",
+                          "sticky",
+                          "axi_m",
+                          "axi_s",
+                          "ahb",
+                          "axi_m_sticky",
+                          "axi_s_sticky";
+
+            pcie@0 {
+                device_type = "pci";
+                reg = <0x0 0x0 0x0 0x0 0x0>;
+                bus-range = <0x01 0xff>;
+
+                #address-cells = <3>;
+                #size-cells = <2>;
+                ranges;
+            };
+        };
+    };
+165
Documentation/devicetree/bindings/pci/qcom,pcie-ipq8074.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/qcom,pcie-ipq8074.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm IPQ8074 PCI Express Root Complex
+
+maintainers:
+  - Bjorn Andersson <andersson@kernel.org>
+  - Manivannan Sadhasivam <mani@kernel.org>
+
+properties:
+  compatible:
+    enum:
+      - qcom,pcie-ipq8074
+
+  reg:
+    maxItems: 4
+
+  reg-names:
+    items:
+      - const: dbi
+      - const: elbi
+      - const: parf
+      - const: config
+
+  clocks:
+    maxItems: 5
+
+  clock-names:
+    items:
+      - const: iface # PCIe to SysNOC BIU clock
+      - const: axi_m # AXI Master clock
+      - const: axi_s # AXI Slave clock
+      - const: ahb
+      - const: aux
+
+  interrupts:
+    maxItems: 9
+
+  interrupt-names:
+    items:
+      - const: msi0
+      - const: msi1
+      - const: msi2
+      - const: msi3
+      - const: msi4
+      - const: msi5
+      - const: msi6
+      - const: msi7
+      - const: global
+
+  resets:
+    maxItems: 7
+
+  reset-names:
+    items:
+      - const: pipe
+      - const: sleep
+      - const: sticky # Core sticky reset
+      - const: axi_m # AXI master reset
+      - const: axi_s # AXI slave reset
+      - const: ahb
+      - const: axi_m_sticky # AXI master sticky reset
+
+required:
+  - resets
+  - reset-names
+
+allOf:
+  - $ref: qcom,pcie-common.yaml#
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/qcom,gcc-ipq8074.h>
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    pcie@10000000 {
+        compatible = "qcom,pcie-ipq8074";
+        reg = <0x10000000 0xf1d>,
+              <0x10000f20 0xa8>,
+              <0x00088000 0x2000>,
+              <0x10100000 0x1000>;
+        reg-names = "dbi", "elbi", "parf", "config";
+        ranges = <0x81000000 0x0 0x00000000 0x10200000 0x0 0x10000>, /* I/O */
+                 <0x82000000 0x0 0x10220000 0x10220000 0x0 0xfde0000>; /* MEM */
+
+        device_type = "pci";
+        linux,pci-domain = <1>;
+        bus-range = <0x00 0xff>;
+        num-lanes = <1>;
+        max-link-speed = <2>;
+        #address-cells = <3>;
+        #size-cells = <2>;
+
+        clocks = <&gcc GCC_SYS_NOC_PCIE1_AXI_CLK>,
+                 <&gcc GCC_PCIE1_AXI_M_CLK>,
+                 <&gcc GCC_PCIE1_AXI_S_CLK>,
+                 <&gcc GCC_PCIE1_AHB_CLK>,
+                 <&gcc GCC_PCIE1_AUX_CLK>;
+        clock-names = "iface",
+                      "axi_m",
+                      "axi_s",
+                      "ahb",
+                      "aux";
+
+        interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 137 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "msi0",
+                          "msi1",
+                          "msi2",
+                          "msi3",
+                          "msi4",
+                          "msi5",
+                          "msi6",
+                          "msi7",
+                          "global";
+        #interrupt-cells = <1>;
+        interrupt-map-mask = <0 0 0 0x7>;
+        interrupt-map = <0 0 0 1 &intc 0 GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+                        <0 0 0 2 &intc 0 GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+                        <0 0 0 3 &intc 0 GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+                        <0 0 0 4 &intc 0 GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+
+        phys = <&pcie_qmp1>;
+        phy-names = "pciephy";
+
+        resets = <&gcc GCC_PCIE1_PIPE_ARES>,
+                 <&gcc GCC_PCIE1_SLEEP_ARES>,
+                 <&gcc GCC_PCIE1_CORE_STICKY_ARES>,
+                 <&gcc GCC_PCIE1_AXI_MASTER_ARES>,
+                 <&gcc GCC_PCIE1_AXI_SLAVE_ARES>,
+                 <&gcc GCC_PCIE1_AHB_ARES>,
+                 <&gcc GCC_PCIE1_AXI_MASTER_STICKY_ARES>;
+        reset-names = "pipe",
+                      "sleep",
+                      "sticky",
+                      "axi_m",
+                      "axi_s",
+                      "ahb",
+                      "axi_m_sticky";
+
+        perst-gpios = <&tlmm 58 GPIO_ACTIVE_LOW>;
+
+        pcie@0 {
+            device_type = "pci";
+            reg = <0x0 0x0 0x0 0x0 0x0>;
+            bus-range = <0x01 0xff>;
+
+            #address-cells = <3>;
+            #size-cells = <2>;
+            ranges;
+        };
+    };
+183
Documentation/devicetree/bindings/pci/qcom,pcie-ipq9574.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/qcom,pcie-ipq9574.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm IPQ9574 PCI Express Root Complex
+
+maintainers:
+  - Bjorn Andersson <andersson@kernel.org>
+  - Manivannan Sadhasivam <mani@kernel.org>
+
+properties:
+  compatible:
+    oneOf:
+      - enum:
+          - qcom,pcie-ipq9574
+      - items:
+          - enum:
+              - qcom,pcie-ipq5332
+              - qcom,pcie-ipq5424
+          - const: qcom,pcie-ipq9574
+
+  reg:
+    maxItems: 6
+
+  reg-names:
+    items:
+      - const: dbi
+      - const: elbi
+      - const: atu
+      - const: parf
+      - const: config
+      - const: mhi
+
+  clocks:
+    maxItems: 6
+
+  clock-names:
+    items:
+      - const: axi_m # AXI Master clock
+      - const: axi_s # AXI Slave clock
+      - const: axi_bridge
+      - const: rchng
+      - const: ahb
+      - const: aux
+
+  interrupts:
+    minItems: 8
+    maxItems: 9
+
+  interrupt-names:
+    minItems: 8
+    items:
+      - const: msi0
+      - const: msi1
+      - const: msi2
+      - const: msi3
+      - const: msi4
+      - const: msi5
+      - const: msi6
+      - const: msi7
+      - const: global
+
+  resets:
+    maxItems: 8
+
+  reset-names:
+    items:
+      - const: pipe
+      - const: sticky # Core sticky reset
+      - const: axi_s_sticky # AXI Slave Sticky reset
+      - const: axi_s # AXI slave reset
+      - const: axi_m_sticky # AXI Master Sticky reset
+      - const: axi_m # AXI master reset
+      - const: aux
+      - const: ahb
+
+required:
+  - resets
+  - reset-names
+
+allOf:
+  - $ref: qcom,pcie-common.yaml#
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/qcom,ipq9574-gcc.h>
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/interconnect/qcom,ipq9574.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/reset/qcom,ipq9574-gcc.h>
+
+    pcie@10000000 {
+        compatible = "qcom,pcie-ipq9574";
+        reg = <0x10000000 0xf1d>,
+              <0x10000f20 0xa8>,
+              <0x10001000 0x1000>,
+              <0x000f8000 0x4000>,
+              <0x10100000 0x1000>,
+              <0x000fe000 0x1000>;
+        reg-names = "dbi",
+                    "elbi",
+                    "atu",
+                    "parf",
+                    "config",
+                    "mhi";
+        ranges = <0x01000000 0x0 0x00000000 0x10200000 0x0 0x100000>,
+                 <0x02000000 0x0 0x10300000 0x10300000 0x0 0x7d00000>;
+
+        device_type = "pci";
+        linux,pci-domain = <1>;
+        bus-range = <0x00 0xff>;
+        num-lanes = <1>;
+        #address-cells = <3>;
+        #size-cells = <2>;
+
+        clocks = <&gcc GCC_PCIE1_AXI_M_CLK>,
+                 <&gcc GCC_PCIE1_AXI_S_CLK>,
+                 <&gcc GCC_PCIE1_AXI_S_BRIDGE_CLK>,
+                 <&gcc GCC_PCIE1_RCHNG_CLK>,
+                 <&gcc GCC_PCIE1_AHB_CLK>,
+                 <&gcc GCC_PCIE1_AUX_CLK>;
+        clock-names = "axi_m",
+                      "axi_s",
+                      "axi_bridge",
+                      "rchng",
+                      "ahb",
+                      "aux";
+
+        interconnects = <&gcc MASTER_ANOC_PCIE1 &gcc SLAVE_ANOC_PCIE1>,
+                        <&gcc MASTER_SNOC_PCIE1 &gcc SLAVE_SNOC_PCIE1>;
+        interconnect-names = "pcie-mem", "cpu-pcie";
+
+        interrupts = <GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "msi0",
+                          "msi1",
+                          "msi2",
+                          "msi3",
+                          "msi4",
+                          "msi5",
+                          "msi6",
+                          "msi7";
+
+        #interrupt-cells = <1>;
+        interrupt-map-mask = <0 0 0 0x7>;
+        interrupt-map = <0 0 0 1 &intc 0 GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
+                        <0 0 0 2 &intc 0 GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>,
+                        <0 0 0 3 &intc 0 GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>,
+                        <0 0 0 4 &intc 0 GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
+
+        resets = <&gcc GCC_PCIE1_PIPE_ARES>,
+                 <&gcc GCC_PCIE1_CORE_STICKY_ARES>,
+                 <&gcc GCC_PCIE1_AXI_S_STICKY_ARES>,
+                 <&gcc GCC_PCIE1_AXI_S_ARES>,
+                 <&gcc GCC_PCIE1_AXI_M_STICKY_ARES>,
+                 <&gcc GCC_PCIE1_AXI_M_ARES>,
+                 <&gcc GCC_PCIE1_AUX_ARES>,
+                 <&gcc GCC_PCIE1_AHB_ARES>;
+        reset-names = "pipe",
+                      "sticky",
+                      "axi_s_sticky",
+                      "axi_s",
+                      "axi_m_sticky",
+                      "axi_m",
+                      "aux",
+                      "ahb";
+
+        phys = <&pcie1_phy>;
+        phy-names = "pciephy";
+
+        perst-gpios = <&tlmm 26 GPIO_ACTIVE_LOW>;
+        wake-gpios = <&tlmm 27 GPIO_ACTIVE_LOW>;
+    };
+156
Documentation/devicetree/bindings/pci/qcom,pcie-msm8996.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/qcom,pcie-msm8996.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm MSM8996 PCI Express Root Complex
+
+maintainers:
+  - Bjorn Andersson <andersson@kernel.org>
+  - Manivannan Sadhasivam <mani@kernel.org>
+
+properties:
+  compatible:
+    oneOf:
+      - enum:
+          - qcom,pcie-msm8996
+      - items:
+          - const: qcom,pcie-msm8998
+          - const: qcom,pcie-msm8996
+
+  reg:
+    minItems: 4
+    maxItems: 5
+
+  reg-names:
+    minItems: 4
+    items:
+      - const: parf
+      - const: dbi
+      - const: elbi
+      - const: config
+      - const: mhi
+
+  clocks:
+    maxItems: 5
+
+  clock-names:
+    items:
+      - const: pipe # Pipe Clock driving internal logic
+      - const: aux
+      - const: cfg
+      - const: bus_master # Master AXI clock
+      - const: bus_slave # Slave AXI clock
+
+  interrupts:
+    minItems: 8
+    maxItems: 9
+
+  interrupt-names:
+    minItems: 8
+    items:
+      - const: msi0
+      - const: msi1
+      - const: msi2
+      - const: msi3
+      - const: msi4
+      - const: msi5
+      - const: msi6
+      - const: msi7
+      - const: global
+
+  vdda-supply:
+    description: A phandle to the core analog power supply
+
+  vddpe-3v3-supply:
+    description: A phandle to the PCIe endpoint power supply
+
+required:
+  - power-domains
+
+allOf:
+  - $ref: qcom,pcie-common.yaml#
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/qcom,gcc-msm8996.h>
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    pcie@600000 {
+        compatible = "qcom,pcie-msm8996";
+        reg = <0x00600000 0x2000>,
+              <0x0c000000 0xf1d>,
+              <0x0c000f20 0xa8>,
+              <0x0c100000 0x100000>;
+        reg-names = "parf", "dbi", "elbi", "config";
+        ranges = <0x01000000 0x0 0x00000000 0x0c200000 0x0 0x100000>,
+                 <0x02000000 0x0 0x0c300000 0x0c300000 0x0 0xd00000>;
+
+        device_type = "pci";
+        bus-range = <0x00 0xff>;
+        num-lanes = <1>;
+        #address-cells = <3>;
+        #size-cells = <2>;
+        linux,pci-domain = <0>;
+
+        clocks = <&gcc GCC_PCIE_0_PIPE_CLK>,
+                 <&gcc GCC_PCIE_0_AUX_CLK>,
+                 <&gcc GCC_PCIE_0_CFG_AHB_CLK>,
+                 <&gcc GCC_PCIE_0_MSTR_AXI_CLK>,
+                 <&gcc GCC_PCIE_0_SLV_AXI_CLK>;
+        clock-names = "pipe",
+                      "aux",
+                      "cfg",
+                      "bus_master",
+                      "bus_slave";
+
+        interrupts = <GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 406 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 407 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 408 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 409 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 410 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 411 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 412 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "msi0",
+                          "msi1",
+                          "msi2",
+                          "msi3",
+                          "msi4",
+                          "msi5",
+                          "msi6",
+                          "msi7";
+        #interrupt-cells = <1>;
+        interrupt-map-mask = <0 0 0 0x7>;
+        interrupt-map = <0 0 0 1 &intc GIC_SPI 244 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+                        <0 0 0 2 &intc GIC_SPI 245 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+                        <0 0 0 3 &intc GIC_SPI 247 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+                        <0 0 0 4 &intc GIC_SPI 248 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+
+        pinctrl-names = "default", "sleep";
+        pinctrl-0 = <&pcie0_state_on>;
+        pinctrl-1 = <&pcie0_state_off>;
+
+        phys = <&pciephy_0>;
+        phy-names = "pciephy";
+
+        power-domains = <&gcc PCIE0_GDSC>;
+
+        perst-gpios = <&tlmm 35 GPIO_ACTIVE_LOW>;
+        vddpe-3v3-supply = <&wlan_en>;
+        vdda-supply = <&vreg_l28a_0p925>;
+
+        pcie@0 {
+            device_type = "pci";
+            reg = <0x0 0x0 0x0 0x0 0x0>;
+            bus-range = <0x01 0xff>;
+
+            #address-cells = <3>;
+            #size-cells = <2>;
+            ranges;
+        };
+    };
+131
Documentation/devicetree/bindings/pci/qcom,pcie-qcs404.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/qcom,pcie-qcs404.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm QCS404 PCI Express Root Complex
+
+maintainers:
+  - Bjorn Andersson <andersson@kernel.org>
+  - Manivannan Sadhasivam <mani@kernel.org>
+
+properties:
+  compatible:
+    enum:
+      - qcom,pcie-qcs404
+
+  reg:
+    maxItems: 4
+
+  reg-names:
+    items:
+      - const: dbi
+      - const: elbi
+      - const: parf
+      - const: config
+
+  clocks:
+    maxItems: 4
+
+  clock-names:
+    items:
+      - const: iface # AHB clock
+      - const: aux
+      - const: master_bus # AXI Master clock
+      - const: slave_bus # AXI Slave clock
+
+  interrupts:
+    maxItems: 1
+
+  interrupt-names:
+    items:
+      - const: msi
+
+  resets:
+    maxItems: 6
+
+  reset-names:
+    items:
+      - const: axi_m # AXI Master reset
+      - const: axi_s # AXI Slave reset
+      - const: axi_m_sticky # AXI Master Sticky reset
+      - const: pipe_sticky
+      - const: pwr
+      - const: ahb
+
+required:
+  - resets
+  - reset-names
+
+allOf:
+  - $ref: qcom,pcie-common.yaml#
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/qcom,gcc-qcs404.h>
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    pcie@10000000 {
+        compatible = "qcom,pcie-qcs404";
+        reg = <0x10000000 0xf1d>,
+              <0x10000f20 0xa8>,
+              <0x07780000 0x2000>,
+              <0x10001000 0x2000>;
+        reg-names = "dbi", "elbi", "parf", "config";
+        ranges = <0x81000000 0x0 0x00000000 0x10003000 0x0 0x00010000>, /* I/O */
+                 <0x82000000 0x0 0x10013000 0x10013000 0x0 0x007ed000>; /* memory */
+
+        device_type = "pci";
+        linux,pci-domain = <0>;
+        bus-range = <0x00 0xff>;
+        num-lanes = <1>;
+        #address-cells = <3>;
+        #size-cells = <2>;
+
+        clocks = <&gcc GCC_PCIE_0_CFG_AHB_CLK>,
+                 <&gcc GCC_PCIE_0_AUX_CLK>,
+                 <&gcc GCC_PCIE_0_MSTR_AXI_CLK>,
+                 <&gcc GCC_PCIE_0_SLV_AXI_CLK>;
+        clock-names = "iface", "aux", "master_bus", "slave_bus";
+
+        interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "msi";
+        #interrupt-cells = <1>;
+        interrupt-map-mask = <0 0 0 0x7>;
+        interrupt-map = <0 0 0 1 &intc GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+                        <0 0 0 2 &intc GIC_SPI 224 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+                        <0 0 0 3 &intc GIC_SPI 267 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+                        <0 0 0 4 &intc GIC_SPI 268 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+
+        phys = <&pcie_phy>;
+        phy-names = "pciephy";
+
+        perst-gpios = <&tlmm 43 GPIO_ACTIVE_LOW>;
+
+        resets = <&gcc GCC_PCIE_0_AXI_MASTER_ARES>,
+                 <&gcc GCC_PCIE_0_AXI_SLAVE_ARES>,
+                 <&gcc GCC_PCIE_0_AXI_MASTER_STICKY_ARES>,
+                 <&gcc GCC_PCIE_0_CORE_STICKY_ARES>,
+                 <&gcc GCC_PCIE_0_BCR>,
+                 <&gcc GCC_PCIE_0_AHB_ARES>;
+        reset-names = "axi_m",
+                      "axi_s",
+                      "axi_m_sticky",
+                      "pipe_sticky",
+                      "pwr",
+                      "ahb";
+
+        pcie@0 {
+            device_type = "pci";
+            reg = <0x0 0x0 0x0 0x0 0x0>;
+            bus-range = <0x01 0xff>;
+
+            #address-cells = <3>;
+            #size-cells = <2>;
+            ranges;
+        };
+    };
-168
Documentation/devicetree/bindings/pci/qcom,pcie-sc8180x.yaml
-# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
-%YAML 1.2
----
-$id: http://devicetree.org/schemas/pci/qcom,pcie-sc8180x.yaml#
-$schema: http://devicetree.org/meta-schemas/core.yaml#
-
-title: Qualcomm SC8180x PCI Express Root Complex
-
-maintainers:
-  - Bjorn Andersson <andersson@kernel.org>
-  - Manivannan Sadhasivam <mani@kernel.org>
-
-description:
-  Qualcomm SC8180x SoC PCIe root complex controller is based on the Synopsys
-  DesignWare PCIe IP.
-
-properties:
-  compatible:
-    const: qcom,pcie-sc8180x
-
-  reg:
-    minItems: 5
-    maxItems: 6
-
-  reg-names:
-    minItems: 5
-    items:
-      - const: parf # Qualcomm specific registers
-      - const: dbi # DesignWare PCIe registers
-      - const: elbi # External local bus interface registers
-      - const: atu # ATU address space
-      - const: config # PCIe configuration space
-      - const: mhi # MHI registers
-
-  clocks:
-    minItems: 6
-    maxItems: 6
-
-  clock-names:
-    items:
-      - const: pipe # PIPE clock
-      - const: aux # Auxiliary clock
-      - const: cfg # Configuration clock
-      - const: bus_master # Master AXI clock
-      - const: bus_slave # Slave AXI clock
-      - const: slave_q2a # Slave Q2A clock
-
-  interrupts:
-    minItems: 8
-    maxItems: 9
-
-  interrupt-names:
-    minItems: 8
-    items:
-      - const: msi0
-      - const: msi1
-      - const: msi2
-      - const: msi3
-      - const: msi4
-      - const: msi5
-      - const: msi6
-      - const: msi7
-      - const: global
-
-  resets:
-    maxItems: 1
-
-  reset-names:
-    items:
-      - const: pci
-
-allOf:
-  - $ref: qcom,pcie-common.yaml#
-
-unevaluatedProperties: false
-
-examples:
-  - |
-    #include <dt-bindings/clock/qcom,gcc-sc8180x.h>
-    #include <dt-bindings/interconnect/qcom,sc8180x.h>
-    #include <dt-bindings/interrupt-controller/arm-gic.h>
-
-    soc {
-        #address-cells = <2>;
-        #size-cells = <2>;
-
-        pcie@1c00000 {
-            compatible = "qcom,pcie-sc8180x";
-            reg = <0 0x01c00000 0 0x3000>,
-                  <0 0x60000000 0 0xf1d>,
-                  <0 0x60000f20 0 0xa8>,
-                  <0 0x60001000 0 0x1000>,
-                  <0 0x60100000 0 0x100000>;
-            reg-names = "parf",
-                        "dbi",
-                        "elbi",
-                        "atu",
-                        "config";
-            ranges = <0x01000000 0x0 0x60200000 0x0 0x60200000 0x0 0x100000>,
-                     <0x02000000 0x0 0x60300000 0x0 0x60300000 0x0 0x3d00000>;
-
-            bus-range = <0x00 0xff>;
-            device_type = "pci";
-            linux,pci-domain = <0>;
-            num-lanes = <2>;
-
-            #address-cells = <3>;
-            #size-cells = <2>;
-
-            assigned-clocks = <&gcc GCC_PCIE_0_AUX_CLK>;
-            assigned-clock-rates = <19200000>;
-
-            clocks = <&gcc GCC_PCIE_0_PIPE_CLK>,
-                     <&gcc GCC_PCIE_0_AUX_CLK>,
-                     <&gcc GCC_PCIE_0_CFG_AHB_CLK>,
-                     <&gcc GCC_PCIE_0_MSTR_AXI_CLK>,
-                     <&gcc GCC_PCIE_0_SLV_AXI_CLK>,
-                     <&gcc GCC_PCIE_0_SLV_Q2A_AXI_CLK>;
-            clock-names = "pipe",
-                          "aux",
-                          "cfg",
-                          "bus_master",
-                          "bus_slave",
-                          "slave_q2a";
-
-            dma-coherent;
-
-            interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
-                         <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>,
-                         <GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>,
-                         <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>,
-                         <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
-                         <GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
-                         <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>,
-                         <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
-                         <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
-            interrupt-names = "msi0",
-                              "msi1",
-                              "msi2",
-                              "msi3",
-                              "msi4",
-                              "msi5",
-                              "msi6",
-                              "msi7",
-                              "global";
-            #interrupt-cells = <1>;
-            interrupt-map-mask = <0 0 0 0x7>;
-            interrupt-map = <0 0 0 1 &intc 0 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
-                            <0 0 0 2 &intc 0 150 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
-                            <0 0 0 3 &intc 0 151 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
-                            <0 0 0 4 &intc 0 152 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
-
-            interconnects = <&aggre2_noc MASTER_PCIE 0 &mc_virt SLAVE_EBI_CH0 0>,
-                            <&gem_noc MASTER_AMPSS_M0 0 &config_noc SLAVE_PCIE_0 0>;
-            interconnect-names = "pcie-mem", "cpu-pcie";
-
-            iommu-map = <0x0 &apps_smmu 0x1d80 0x1>,
-                        <0x100 &apps_smmu 0x1d81 0x1>;
-
-            phys = <&pcie0_phy>;
-            phy-names = "pciephy";
-
-            power-domains = <&gcc PCIE_0_GDSC>;
-
-            resets = <&gcc GCC_PCIE_0_BCR>;
-            reset-names = "pci";
-        };
-    };
+190
Documentation/devicetree/bindings/pci/qcom,pcie-sdm845.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/qcom,pcie-sdm845.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm SDM845 PCI Express Root Complex
+
+maintainers:
+  - Bjorn Andersson <andersson@kernel.org>
+  - Manivannan Sadhasivam <mani@kernel.org>
+
+properties:
+  compatible:
+    enum:
+      - qcom,pcie-sdm845
+
+  reg:
+    minItems: 4
+    maxItems: 5
+
+  reg-names:
+    minItems: 4
+    items:
+      - const: parf
+      - const: dbi
+      - const: elbi
+      - const: config
+      - const: mhi
+
+  clocks:
+    minItems: 7
+    maxItems: 8
+
+  clock-names:
+    minItems: 7
+    items:
+      - const: pipe
+      - const: aux
+      - const: cfg
+      - const: bus_master # Master AXI clock
+      - const: bus_slave # Slave AXI clock
+      - const: slave_q2a
+      - enum: [ ref, tbu ]
+      - const: tbu
+
+  interrupts:
+    minItems: 8
+    maxItems: 9
+
+  interrupt-names:
+    minItems: 8
+    items:
+      - const: msi0
+      - const: msi1
+      - const: msi2
+      - const: msi3
+      - const: msi4
+      - const: msi5
+      - const: msi6
+      - const: msi7
+      - const: global
+
+  resets:
+    maxItems: 1
+
+  reset-names:
+    items:
+      - const: pci
+
+required:
+  - power-domains
+  - resets
+  - reset-names
+
+allOf:
+  - $ref: qcom,pcie-common.yaml#
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/qcom,gcc-sdm845.h>
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    soc {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        pcie@1c00000 {
+            compatible = "qcom,pcie-sdm845";
+            reg = <0x0 0x01c00000 0x0 0x2000>,
+                  <0x0 0x60000000 0x0 0xf1d>,
+                  <0x0 0x60000f20 0x0 0xa8>,
+                  <0x0 0x60100000 0x0 0x100000>,
+                  <0x0 0x01c07000 0x0 0x1000>;
+            reg-names = "parf", "dbi", "elbi", "config", "mhi";
+            ranges = <0x01000000 0x0 0x00000000 0x0 0x60200000 0x0 0x100000>,
+                     <0x02000000 0x0 0x60300000 0x0 0x60300000 0x0 0xd00000>;
+
+            device_type = "pci";
+            linux,pci-domain = <0>;
+            bus-range = <0x00 0xff>;
+            num-lanes = <1>;
+
+            #address-cells = <3>;
+            #size-cells = <2>;
+
+            clocks = <&gcc GCC_PCIE_0_PIPE_CLK>,
+                     <&gcc GCC_PCIE_0_AUX_CLK>,
+                     <&gcc GCC_PCIE_0_CFG_AHB_CLK>,
+                     <&gcc GCC_PCIE_0_MSTR_AXI_CLK>,
+                     <&gcc GCC_PCIE_0_SLV_AXI_CLK>,
+                     <&gcc GCC_PCIE_0_SLV_Q2A_AXI_CLK>,
+                     <&gcc GCC_AGGRE_NOC_PCIE_TBU_CLK>;
+            clock-names = "pipe",
+                          "aux",
+                          "cfg",
+                          "bus_master",
+                          "bus_slave",
+                          "slave_q2a",
+                          "tbu";
+
+            interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
+            interrupt-names = "msi0",
+                              "msi1",
+                              "msi2",
+                              "msi3",
+                              "msi4",
+                              "msi5",
+                              "msi6",
+                              "msi7",
+                              "global";
+            #interrupt-cells = <1>;
+            interrupt-map-mask = <0 0 0 0x7>;
+            interrupt-map = <0 0 0 1 &intc 0 0 GIC_SPI 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+                            <0 0 0 2 &intc 0 0 GIC_SPI 150 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+                            <0 0 0 3 &intc 0 0 GIC_SPI 151 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+                            <0 0 0 4 &intc 0 0 GIC_SPI 152 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+
+            iommu-map = <0x0 &apps_smmu 0x1c10 0x1>,
+                        <0x100 &apps_smmu 0x1c11 0x1>,
+                        <0x200 &apps_smmu 0x1c12 0x1>,
+                        <0x300 &apps_smmu 0x1c13 0x1>,
+                        <0x400 &apps_smmu 0x1c14 0x1>,
+                        <0x500 &apps_smmu 0x1c15 0x1>,
+                        <0x600 &apps_smmu 0x1c16 0x1>,
+                        <0x700 &apps_smmu 0x1c17 0x1>,
+                        <0x800 &apps_smmu 0x1c18 0x1>,
+                        <0x900 &apps_smmu 0x1c19 0x1>,
+                        <0xa00 &apps_smmu 0x1c1a 0x1>,
+                        <0xb00 &apps_smmu 0x1c1b 0x1>,
+                        <0xc00 &apps_smmu 0x1c1c 0x1>,
+                        <0xd00 &apps_smmu 0x1c1d 0x1>,
+                        <0xe00 &apps_smmu 0x1c1e 0x1>,
+                        <0xf00 &apps_smmu 0x1c1f 0x1>;
+
+            power-domains = <&gcc PCIE_0_GDSC>;
+
+            phys = <&pcie0_phy>;
+            phy-names = "pciephy";
+
+            resets = <&gcc GCC_PCIE_0_BCR>;
+            reset-names = "pci";
+
+            perst-gpios = <&tlmm 35 GPIO_ACTIVE_LOW>;
+            wake-gpios = <&tlmm 134 GPIO_ACTIVE_HIGH>;
+
+            vddpe-3v3-supply = <&pcie0_3p3v_dual>;
+
+            pcie@0 {
+                device_type = "pci";
+                reg = <0x0 0x0 0x0 0x0 0x0>;
+                bus-range = <0x01 0xff>;
+
+                #address-cells = <3>;
+                #size-cells = <2>;
+                ranges;
+            };
+        };
+    };
+172
Documentation/devicetree/bindings/pci/qcom,pcie-sdx55.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/qcom,pcie-sdx55.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm SDX55 PCI Express Root Complex
+
+maintainers:
+  - Bjorn Andersson <andersson@kernel.org>
+  - Manivannan Sadhasivam <mani@kernel.org>
+
+properties:
+  compatible:
+    enum:
+      - qcom,pcie-sdx55
+
+  reg:
+    minItems: 5
+    maxItems: 6
+
+  reg-names:
+    minItems: 5
+    items:
+      - const: parf
+      - const: dbi
+      - const: elbi
+      - const: atu
+      - const: config
+      - const: mhi
+
+  clocks:
+    maxItems: 7
+
+  clock-names:
+    items:
+      - const: pipe
+      - const: aux
+      - const: cfg
+      - const: bus_master # Master AXI clock
+      - const: bus_slave # Slave AXI clock
+      - const: slave_q2a
+      - const: sleep
+
+  interrupts:
+    maxItems: 8
+
+  interrupt-names:
+    items:
+      - const: msi
+      - const: msi2
+      - const: msi3
+      - const: msi4
+      - const: msi5
+      - const: msi6
+      - const: msi7
+      - const: msi8
+
+  resets:
+    maxItems: 1
+
+  reset-names:
+    items:
+      - const: pci
+
+required:
+  - power-domains
+  - resets
+  - reset-names
+
+allOf:
+  - $ref: qcom,pcie-common.yaml#
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/qcom,gcc-sdx55.h>
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    pcie@1c00000 {
+        compatible = "qcom,pcie-sdx55";
+        reg = <0x01c00000 0x3000>,
+              <0x40000000 0xf1d>,
+              <0x40000f20 0xc8>,
+              <0x40001000 0x1000>,
+              <0x40100000 0x100000>;
+        reg-names = "parf",
+                    "dbi",
+                    "elbi",
+                    "atu",
+                    "config";
+        ranges = <0x01000000 0x0 0x00000000 0x40200000 0x0 0x100000>,
+                 <0x02000000 0x0 0x40300000 0x40300000 0x0 0x3fd00000>;
+
+        device_type = "pci";
+        linux,pci-domain = <0>;
+        bus-range = <0x00 0xff>;
+        num-lanes = <1>;
+
+        #address-cells = <3>;
+        #size-cells = <2>;
+
+        interrupts = <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>,
+                     <GIC_SPI 126 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-names = "msi",
+                          "msi2",
+                          "msi3",
+                          "msi4",
+                          "msi5",
+                          "msi6",
+                          "msi7",
+                          "msi8";
+        #interrupt-cells = <1>;
+        interrupt-map-mask = <0 0 0 0x7>;
+        interrupt-map = <0 0 0 1 &intc GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+                        <0 0 0 2 &intc GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>, /* int_b */
+                        <0 0 0 3 &intc GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>, /* int_c */
+                        <0 0 0 4 &intc GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>; /* int_d */
+
+        clocks = <&gcc GCC_PCIE_PIPE_CLK>,
+                 <&gcc GCC_PCIE_AUX_CLK>,
+                 <&gcc GCC_PCIE_CFG_AHB_CLK>,
+                 <&gcc GCC_PCIE_MSTR_AXI_CLK>,
+                 <&gcc GCC_PCIE_SLV_AXI_CLK>,
+                 <&gcc GCC_PCIE_SLV_Q2A_AXI_CLK>,
+                 <&gcc GCC_PCIE_SLEEP_CLK>;
+        clock-names = "pipe",
+                      "aux",
+                      "cfg",
+                      "bus_master",
+                      "bus_slave",
+                      "slave_q2a",
+                      "sleep";
+
+        assigned-clocks = <&gcc GCC_PCIE_AUX_CLK>;
+        assigned-clock-rates = <19200000>;
+
+        iommu-map = <0x0 &apps_smmu 0x0200 0x1>,
+                    <0x100 &apps_smmu 0x0201 0x1>,
+                    <0x200 &apps_smmu 0x0202 0x1>,
+                    <0x300 &apps_smmu 0x0203 0x1>,
+                    <0x400 &apps_smmu 0x0204 0x1>;
+
+        power-domains = <&gcc PCIE_GDSC>;
+
+        phys = <&pcie_phy>;
+        phy-names = "pciephy";
+
+        resets = <&gcc GCC_PCIE_BCR>;
+        reset-names = "pci";
+
+        perst-gpios = <&tlmm 57 GPIO_ACTIVE_LOW>;
+        wake-gpios = <&tlmm 53 GPIO_ACTIVE_HIGH>;
+
+        pcie@0 {
+            device_type = "pci";
+            reg = <0x0 0x0 0x0 0x0 0x0>;
+            bus-range = <0x01 0xff>;
+
+            #address-cells = <3>;
+            #size-cells = <2>;
+            ranges;
+        };
+    };
+1
Documentation/devicetree/bindings/pci/qcom,pcie-sm8150.yaml
···
 properties:
   compatible:
     oneOf:
+      - const: qcom,pcie-sc8180x
       - const: qcom,pcie-sm8150
       - items:
           - enum:
+6 -1
Documentation/devicetree/bindings/pci/qcom,pcie-x1e80100.yaml
···
 
 properties:
   compatible:
-    const: qcom,pcie-x1e80100
+    oneOf:
+      - const: qcom,pcie-x1e80100
+      - items:
+          - enum:
+              - qcom,glymur-pcie
+          - const: qcom,pcie-x1e80100
 
   reg:
     minItems: 6
-782
Documentation/devicetree/bindings/pci/qcom,pcie.yaml
···
-# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
-%YAML 1.2
----
-$id: http://devicetree.org/schemas/pci/qcom,pcie.yaml#
-$schema: http://devicetree.org/meta-schemas/core.yaml#
-
-title: Qualcomm PCI express root complex
-
-maintainers:
-  - Bjorn Andersson <bjorn.andersson@linaro.org>
-  - Manivannan Sadhasivam <mani@kernel.org>
-
-description: |
-  Qualcomm PCIe root complex controller is based on the Synopsys DesignWare
-  PCIe IP.
-
-properties:
-  compatible:
-    oneOf:
-      - enum:
-          - qcom,pcie-apq8064
-          - qcom,pcie-apq8084
-          - qcom,pcie-ipq4019
-          - qcom,pcie-ipq5018
-          - qcom,pcie-ipq6018
-          - qcom,pcie-ipq8064
-          - qcom,pcie-ipq8064-v2
-          - qcom,pcie-ipq8074
-          - qcom,pcie-ipq8074-gen3
-          - qcom,pcie-ipq9574
-          - qcom,pcie-msm8996
-          - qcom,pcie-qcs404
-          - qcom,pcie-sdm845
-          - qcom,pcie-sdx55
-      - items:
-          - enum:
-              - qcom,pcie-ipq5332
-              - qcom,pcie-ipq5424
-          - const: qcom,pcie-ipq9574
-      - items:
-          - const: qcom,pcie-msm8998
-          - const: qcom,pcie-msm8996
-
-  reg:
-    minItems: 4
-    maxItems: 6
-
-  reg-names:
-    minItems: 4
-    maxItems: 6
-
-  interrupts:
-    minItems: 1
-    maxItems: 9
-
-  interrupt-names:
-    minItems: 1
-    maxItems: 9
-
-  iommu-map:
-    minItems: 1
-    maxItems: 16
-
-  # Common definitions for clocks, clock-names and reset.
-  # Platform constraints are described later.
-  clocks:
-    minItems: 3
-    maxItems: 13
-
-  clock-names:
-    minItems: 3
-    maxItems: 13
-
-  dma-coherent: true
-
-  interconnects:
-    maxItems: 2
-
-  interconnect-names:
-    items:
-      - const: pcie-mem
-      - const: cpu-pcie
-
-  resets:
-    minItems: 1
-    maxItems: 12
-
-  reset-names:
-    minItems: 1
-    maxItems: 12
-
-  vdda-supply:
-    description: A phandle to the core analog power supply
-
-  vdda_phy-supply:
-    description: A phandle to the core analog power supply for PHY
-
-  vdda_refclk-supply:
-    description: A phandle to the core analog power supply for IC which generates reference clock
-
-  vddpe-3v3-supply:
-    description: A phandle to the PCIe endpoint power supply
-
-  phys:
-    maxItems: 1
-
-  phy-names:
-    items:
-      - const: pciephy
-
-  power-domains:
-    maxItems: 1
-
-  perst-gpios:
-    description: GPIO controlled connection to PERST# signal
-    maxItems: 1
-
-  required-opps:
-    maxItems: 1
-
-  wake-gpios:
-    description: GPIO controlled connection to WAKE# signal
-    maxItems: 1
-
-required:
-  - compatible
-  - reg
-  - reg-names
-  - interrupt-map-mask
-  - interrupt-map
-  - clocks
-  - clock-names
-
-anyOf:
-  - required:
-      - interrupts
-      - interrupt-names
-      - "#interrupt-cells"
-  - required:
-      - msi-map
-
-allOf:
-  - $ref: /schemas/pci/pci-host-bridge.yaml#
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-apq8064
-              - qcom,pcie-ipq4019
-              - qcom,pcie-ipq8064
-              - qcom,pcie-ipq8064v2
-              - qcom,pcie-ipq8074
-              - qcom,pcie-qcs404
-    then:
-      properties:
-        reg:
-          minItems: 4
-          maxItems: 4
-        reg-names:
-          items:
-            - const: dbi # DesignWare PCIe registers
-            - const: elbi # External local bus interface registers
-            - const: parf # Qualcomm specific registers
-            - const: config # PCIe configuration space
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-ipq5018
-              - qcom,pcie-ipq6018
-              - qcom,pcie-ipq8074-gen3
-              - qcom,pcie-ipq9574
-    then:
-      properties:
-        reg:
-          minItems: 5
-          maxItems: 6
-        reg-names:
-          minItems: 5
-          items:
-            - const: dbi # DesignWare PCIe registers
-            - const: elbi # External local bus interface registers
-            - const: atu # ATU address space
-            - const: parf # Qualcomm specific registers
-            - const: config # PCIe configuration space
-            - const: mhi # MHI registers
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-apq8084
-              - qcom,pcie-msm8996
-              - qcom,pcie-sdm845
-    then:
-      properties:
-        reg:
-          minItems: 4
-          maxItems: 5
-        reg-names:
-          minItems: 4
-          items:
-            - const: parf # Qualcomm specific registers
-            - const: dbi # DesignWare PCIe registers
-            - const: elbi # External local bus interface registers
-            - const: config # PCIe configuration space
-            - const: mhi # MHI registers
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-sdx55
-    then:
-      properties:
-        reg:
-          minItems: 5
-          maxItems: 6
-        reg-names:
-          minItems: 5
-          items:
-            - const: parf # Qualcomm specific registers
-            - const: dbi # DesignWare PCIe registers
-            - const: elbi # External local bus interface registers
-            - const: atu # ATU address space
-            - const: config # PCIe configuration space
-            - const: mhi # MHI registers
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-apq8064
-              - qcom,pcie-ipq8064
-              - qcom,pcie-ipq8064v2
-    then:
-      properties:
-        clocks:
-          minItems: 3
-          maxItems: 5
-        clock-names:
-          minItems: 3
-          items:
-            - const: core # Clocks the pcie hw block
-            - const: iface # Configuration AHB clock
-            - const: phy # Clocks the pcie PHY block
-            - const: aux # Clocks the pcie AUX block, not on apq8064
-            - const: ref # Clocks the pcie ref block, not on apq8064
-        resets:
-          minItems: 5
-          maxItems: 6
-        reset-names:
-          minItems: 5
-          items:
-            - const: axi # AXI reset
-            - const: ahb # AHB reset
-            - const: por # POR reset
-            - const: pci # PCI reset
-            - const: phy # PHY reset
-            - const: ext # EXT reset, not on apq8064
-      required:
-        - vdda-supply
-        - vdda_phy-supply
-        - vdda_refclk-supply
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-apq8084
-    then:
-      properties:
-        clocks:
-          minItems: 4
-          maxItems: 4
-        clock-names:
-          items:
-            - const: iface # Configuration AHB clock
-            - const: master_bus # Master AXI clock
-            - const: slave_bus # Slave AXI clock
-            - const: aux # Auxiliary (AUX) clock
-        resets:
-          maxItems: 1
-        reset-names:
-          items:
-            - const: core # Core reset
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-ipq4019
-    then:
-      properties:
-        clocks:
-          minItems: 3
-          maxItems: 3
-        clock-names:
-          items:
-            - const: aux # Auxiliary (AUX) clock
-            - const: master_bus # Master AXI clock
-            - const: slave_bus # Slave AXI clock
-        resets:
-          minItems: 12
-          maxItems: 12
-        reset-names:
-          items:
-            - const: axi_m # AXI master reset
-            - const: axi_s # AXI slave reset
-            - const: pipe # PIPE reset
-            - const: axi_m_vmid # VMID reset
-            - const: axi_s_xpu # XPU reset
-            - const: parf # PARF reset
-            - const: phy # PHY reset
-            - const: axi_m_sticky # AXI sticky reset
-            - const: pipe_sticky # PIPE sticky reset
-            - const: pwr # PWR reset
-            - const: ahb # AHB reset
-            - const: phy_ahb # PHY AHB reset
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-ipq5018
-    then:
-      properties:
-        clocks:
-          minItems: 6
-          maxItems: 6
-        clock-names:
-          items:
-            - const: iface # PCIe to SysNOC BIU clock
-            - const: axi_m # AXI Master clock
-            - const: axi_s # AXI Slave clock
-            - const: ahb # AHB clock
-            - const: aux # Auxiliary clock
-            - const: axi_bridge # AXI bridge clock
-        resets:
-          minItems: 8
-          maxItems: 8
-        reset-names:
-          items:
-            - const: pipe # PIPE reset
-            - const: sleep # Sleep reset
-            - const: sticky # Core sticky reset
-            - const: axi_m # AXI master reset
-            - const: axi_s # AXI slave reset
-            - const: ahb # AHB reset
-            - const: axi_m_sticky # AXI master sticky reset
-            - const: axi_s_sticky # AXI slave sticky reset
-        interrupts:
-          minItems: 9
-          maxItems: 9
-        interrupt-names:
-          items:
-            - const: msi0
-            - const: msi1
-            - const: msi2
-            - const: msi3
-            - const: msi4
-            - const: msi5
-            - const: msi6
-            - const: msi7
-            - const: global
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-msm8996
-    then:
-      properties:
-        clocks:
-          minItems: 5
-          maxItems: 5
-        clock-names:
-          items:
-            - const: pipe # Pipe Clock driving internal logic
-            - const: aux # Auxiliary (AUX) clock
-            - const: cfg # Configuration clock
-            - const: bus_master # Master AXI clock
-            - const: bus_slave # Slave AXI clock
-        resets: false
-        reset-names: false
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-ipq8074
-    then:
-      properties:
-        clocks:
-          minItems: 5
-          maxItems: 5
-        clock-names:
-          items:
-            - const: iface # PCIe to SysNOC BIU clock
-            - const: axi_m # AXI Master clock
-            - const: axi_s # AXI Slave clock
-            - const: ahb # AHB clock
-            - const: aux # Auxiliary clock
-        resets:
-          minItems: 7
-          maxItems: 7
-        reset-names:
-          items:
-            - const: pipe # PIPE reset
-            - const: sleep # Sleep reset
-            - const: sticky # Core Sticky reset
-            - const: axi_m # AXI Master reset
-            - const: axi_s # AXI Slave reset
-            - const: ahb # AHB Reset
-            - const: axi_m_sticky # AXI Master Sticky reset
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-ipq6018
-              - qcom,pcie-ipq8074-gen3
-    then:
-      properties:
-        clocks:
-          minItems: 5
-          maxItems: 5
-        clock-names:
-          items:
-            - const: iface # PCIe to SysNOC BIU clock
-            - const: axi_m # AXI Master clock
-            - const: axi_s # AXI Slave clock
-            - const: axi_bridge # AXI bridge clock
-            - const: rchng
-        resets:
-          minItems: 8
-          maxItems: 8
-        reset-names:
-          items:
-            - const: pipe # PIPE reset
-            - const: sleep # Sleep reset
-            - const: sticky # Core Sticky reset
-            - const: axi_m # AXI Master reset
-            - const: axi_s # AXI Slave reset
-            - const: ahb # AHB Reset
-            - const: axi_m_sticky # AXI Master Sticky reset
-            - const: axi_s_sticky # AXI Slave Sticky reset
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-ipq9574
-    then:
-      properties:
-        clocks:
-          minItems: 6
-          maxItems: 6
-        clock-names:
-          items:
-            - const: axi_m # AXI Master clock
-            - const: axi_s # AXI Slave clock
-            - const: axi_bridge
-            - const: rchng
-            - const: ahb
-            - const: aux
-
-        resets:
-          minItems: 8
-          maxItems: 8
-        reset-names:
-          items:
-            - const: pipe # PIPE reset
-            - const: sticky # Core Sticky reset
-            - const: axi_s_sticky # AXI Slave Sticky reset
-            - const: axi_s # AXI Slave reset
-            - const: axi_m_sticky # AXI Master Sticky reset
-            - const: axi_m # AXI Master reset
-            - const: aux # AUX Reset
-            - const: ahb # AHB Reset
-
-        interrupts:
-          minItems: 8
-        interrupt-names:
-          minItems: 8
-          items:
-            - const: msi0
-            - const: msi1
-            - const: msi2
-            - const: msi3
-            - const: msi4
-            - const: msi5
-            - const: msi6
-            - const: msi7
-            - const: global
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-qcs404
-    then:
-      properties:
-        clocks:
-          minItems: 4
-          maxItems: 4
-        clock-names:
-          items:
-            - const: iface # AHB clock
-            - const: aux # Auxiliary clock
-            - const: master_bus # AXI Master clock
-            - const: slave_bus # AXI Slave clock
-        resets:
-          minItems: 6
-          maxItems: 6
-        reset-names:
-          items:
-            - const: axi_m # AXI Master reset
-            - const: axi_s # AXI Slave reset
-            - const: axi_m_sticky # AXI Master Sticky reset
-            - const: pipe_sticky # PIPE sticky reset
-            - const: pwr # PWR reset
-            - const: ahb # AHB reset
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-sdm845
-    then:
-      oneOf:
-        # Unfortunately the "optional" ref clock is used in the middle of the list
-        - properties:
-            clocks:
-              minItems: 8
-              maxItems: 8
-            clock-names:
-              items:
-                - const: pipe # PIPE clock
-                - const: aux # Auxiliary clock
-                - const: cfg # Configuration clock
-                - const: bus_master # Master AXI clock
-                - const: bus_slave # Slave AXI clock
-                - const: slave_q2a # Slave Q2A clock
-                - const: ref # REFERENCE clock
-                - const: tbu # PCIe TBU clock
-        - properties:
-            clocks:
-              minItems: 7
-              maxItems: 7
-            clock-names:
-              items:
-                - const: pipe # PIPE clock
-                - const: aux # Auxiliary clock
-                - const: cfg # Configuration clock
-                - const: bus_master # Master AXI clock
-                - const: bus_slave # Slave AXI clock
-                - const: slave_q2a # Slave Q2A clock
-                - const: tbu # PCIe TBU clock
-      properties:
-        resets:
-          maxItems: 1
-        reset-names:
-          items:
-            - const: pci # PCIe core reset
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-sdx55
-    then:
-      properties:
-        clocks:
-          minItems: 7
-          maxItems: 7
-        clock-names:
-          items:
-            - const: pipe # PIPE clock
-            - const: aux # Auxiliary clock
-            - const: cfg # Configuration clock
-            - const: bus_master # Master AXI clock
-            - const: bus_slave # Slave AXI clock
-            - const: slave_q2a # Slave Q2A clock
-            - const: sleep # PCIe Sleep clock
-        resets:
-          maxItems: 1
-        reset-names:
-          items:
-            - const: pci # PCIe core reset
-
-  - if:
-      not:
-        properties:
-          compatible:
-            contains:
-              enum:
-                - qcom,pcie-apq8064
-                - qcom,pcie-ipq4019
-                - qcom,pcie-ipq5018
-                - qcom,pcie-ipq8064
-                - qcom,pcie-ipq8064v2
-                - qcom,pcie-ipq8074
-                - qcom,pcie-ipq8074-gen3
-                - qcom,pcie-ipq9574
-                - qcom,pcie-qcs404
-    then:
-      required:
-        - power-domains
-
-  - if:
-      not:
-        properties:
-          compatible:
-            contains:
-              enum:
-                - qcom,pcie-msm8996
-    then:
-      required:
-        - resets
-        - reset-names
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-ipq6018
-              - qcom,pcie-ipq8074
-              - qcom,pcie-ipq8074-gen3
-              - qcom,pcie-msm8996
-              - qcom,pcie-msm8998
-              - qcom,pcie-sdm845
-    then:
-      oneOf:
-        - properties:
-            interrupts:
-              maxItems: 1
-            interrupt-names:
-              items:
-                - const: msi
-        - properties:
-            interrupts:
-              minItems: 8
-              maxItems: 9
-            interrupt-names:
-              minItems: 8
-              items:
-                - const: msi0
-                - const: msi1
-                - const: msi2
-                - const: msi3
-                - const: msi4
-                - const: msi5
-                - const: msi6
-                - const: msi7
-                - const: global
-
-  - if:
-      properties:
-        compatible:
-          contains:
-            enum:
-              - qcom,pcie-apq8064
-              - qcom,pcie-apq8084
-              - qcom,pcie-ipq4019
-              - qcom,pcie-ipq8064
-              - qcom,pcie-ipq8064-v2
-              - qcom,pcie-qcs404
-    then:
-      properties:
-        interrupts:
-          maxItems: 1
-        interrupt-names:
-          items:
-            - const: msi
-
-unevaluatedProperties: false
-
-examples:
-  - |
-    #include <dt-bindings/interrupt-controller/arm-gic.h>
-    pcie@1b500000 {
-        compatible = "qcom,pcie-ipq8064";
-        reg = <0x1b500000 0x1000>,
-              <0x1b502000 0x80>,
-              <0x1b600000 0x100>,
-              <0x0ff00000 0x100000>;
-        reg-names = "dbi", "elbi", "parf", "config";
-        device_type = "pci";
-        linux,pci-domain = <0>;
-        bus-range = <0x00 0xff>;
-        num-lanes = <1>;
-        #address-cells = <3>;
-        #size-cells = <2>;
-        ranges = <0x81000000 0 0 0x0fe00000 0 0x00100000>,
-                 <0x82000000 0 0 0x08000000 0 0x07e00000>;
-        interrupts = <GIC_SPI 238 IRQ_TYPE_LEVEL_HIGH>;
-        interrupt-names = "msi";
-        #interrupt-cells = <1>;
-        interrupt-map-mask = <0 0 0 0x7>;
-        interrupt-map = <0 0 0 1 &intc 0 36 IRQ_TYPE_LEVEL_HIGH>,
-                        <0 0 0 2 &intc 0 37 IRQ_TYPE_LEVEL_HIGH>,
-                        <0 0 0 3 &intc 0 38 IRQ_TYPE_LEVEL_HIGH>,
-                        <0 0 0 4 &intc 0 39 IRQ_TYPE_LEVEL_HIGH>;
-        clocks = <&gcc 41>,
-                 <&gcc 43>,
-                 <&gcc 44>,
-                 <&gcc 42>,
-                 <&gcc 248>;
-        clock-names = "core", "iface", "phy", "aux", "ref";
-        resets = <&gcc 27>,
-                 <&gcc 26>,
-                 <&gcc 25>,
-                 <&gcc 24>,
-                 <&gcc 23>,
-                 <&gcc 22>;
-        reset-names = "axi", "ahb", "por", "pci", "phy", "ext";
-        pinctrl-0 = <&pcie_pins_default>;
-        pinctrl-names = "default";
-        vdda-supply = <&pm8921_s3>;
-        vdda_phy-supply = <&pm8921_lvs6>;
-        vdda_refclk-supply = <&ext_3p3v>;
-    };
-  - |
-    #include <dt-bindings/interrupt-controller/arm-gic.h>
-    #include <dt-bindings/gpio/gpio.h>
-    pcie@fc520000 {
-        compatible = "qcom,pcie-apq8084";
-        reg = <0xfc520000 0x2000>,
-              <0xff000000 0x1000>,
-              <0xff001000 0x1000>,
-              <0xff002000 0x2000>;
-        reg-names = "parf", "dbi", "elbi", "config";
-        device_type = "pci";
-        linux,pci-domain = <0>;
-        bus-range = <0x00 0xff>;
-        num-lanes = <1>;
-        #address-cells = <3>;
-        #size-cells = <2>;
-        ranges = <0x81000000 0 0 0xff200000 0 0x00100000>,
-                 <0x82000000 0 0x00300000 0xff300000 0 0x00d00000>;
-        interrupts = <GIC_SPI 243 IRQ_TYPE_LEVEL_HIGH>;
-        interrupt-names = "msi";
-        #interrupt-cells = <1>;
-        interrupt-map-mask = <0 0 0 0x7>;
-        interrupt-map = <0 0 0 1 &intc 0 244 IRQ_TYPE_LEVEL_HIGH>,
-                        <0 0 0 2 &intc 0 245 IRQ_TYPE_LEVEL_HIGH>,
-                        <0 0 0 3 &intc 0 247 IRQ_TYPE_LEVEL_HIGH>,
-                        <0 0 0 4 &intc 0 248 IRQ_TYPE_LEVEL_HIGH>;
-        clocks = <&gcc 324>,
-                 <&gcc 325>,
-                 <&gcc 327>,
-                 <&gcc 323>;
-        clock-names = "iface", "master_bus", "slave_bus", "aux";
-        resets = <&gcc 81>;
-        reset-names = "core";
-        power-domains = <&gcc 1>;
-        vdda-supply = <&pma8084_l3>;
-        phys = <&pciephy0>;
-        phy-names = "pciephy";
-        perst-gpios = <&tlmm 70 GPIO_ACTIVE_LOW>;
-        pinctrl-0 = <&pcie0_pins_default>;
-        pinctrl-names = "default";
-    };
-...
+110
Documentation/devicetree/bindings/pci/qcom,sa8255p-pcie-ep.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/qcom,sa8255p-pcie-ep.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm firmware managed PCIe Endpoint Controller
+
+description:
+  Qualcomm SA8255p SoC PCIe endpoint controller is based on the Synopsys
+  DesignWare PCIe IP which is managed by firmware.
+
+maintainers:
+  - Manivannan Sadhasivam <mani@kernel.org>
+
+properties:
+  compatible:
+    const: qcom,sa8255p-pcie-ep
+
+  reg:
+    items:
+      - description: Qualcomm-specific PARF configuration registers
+      - description: DesignWare PCIe registers
+      - description: External local bus interface registers
+      - description: Address Translation Unit (ATU) registers
+      - description: Memory region used to map remote RC address space
+      - description: BAR memory region
+      - description: DMA register space
+
+  reg-names:
+    items:
+      - const: parf
+      - const: dbi
+      - const: elbi
+      - const: atu
+      - const: addr_space
+      - const: mmio
+      - const: dma
+
+  interrupts:
+    items:
+      - description: PCIe Global interrupt
+      - description: PCIe Doorbell interrupt
+      - description: DMA interrupt
+
+  interrupt-names:
+    items:
+      - const: global
+      - const: doorbell
+      - const: dma
+
+  iommus:
+    maxItems: 1
+
+  reset-gpios:
+    description: GPIO used as PERST# input signal
+    maxItems: 1
+
+  wake-gpios:
+    description: GPIO used as WAKE# output signal
+    maxItems: 1
+
+  power-domains:
+    maxItems: 1
+
+  dma-coherent: true
+
+  num-lanes:
+    default: 2
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - interrupts
+  - interrupt-names
+  - reset-gpios
+  - power-domains
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/gpio/gpio.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    soc {
+        #address-cells = <2>;
+        #size-cells = <2>;
+        pcie1_ep: pcie-ep@1c10000 {
+            compatible = "qcom,sa8255p-pcie-ep";
+            reg = <0x0 0x01c10000 0x0 0x3000>,
+                  <0x0 0x60000000 0x0 0xf20>,
+                  <0x0 0x60000f20 0x0 0xa8>,
+                  <0x0 0x60001000 0x0 0x4000>,
+                  <0x0 0x60200000 0x0 0x100000>,
+                  <0x0 0x01c13000 0x0 0x1000>,
+                  <0x0 0x60005000 0x0 0x2000>;
+            reg-names = "parf", "dbi", "elbi", "atu", "addr_space", "mmio", "dma";
+            interrupts = <GIC_SPI 518 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 152 IRQ_TYPE_LEVEL_HIGH>,
+                         <GIC_SPI 474 IRQ_TYPE_LEVEL_HIGH>;
+            interrupt-names = "global", "doorbell", "dma";
+            reset-gpios = <&tlmm 4 GPIO_ACTIVE_LOW>;
+            wake-gpios = <&tlmm 5 GPIO_ACTIVE_LOW>;
+            dma-coherent;
+            iommus = <&pcie_smmu 0x80 0x7f>;
+            power-domains = <&scmi6_pd 1>;
+            num-lanes = <4>;
+        };
+    };
+6
Documentation/devicetree/bindings/pci/snps,dw-pcie-common.yaml
···
           be connected to a single source of the periodic signal).
         const: ref
       - description:
+          Some dwc wrappers (like i.MX95 PCIes) have two reference clock
+          inputs, one from an internal PLL, the other from an off-chip crystal
+          oscillator. If present, 'extref' refers to a reference clock from
+          an external oscillator.
+        const: extref
+      - description:
           Clock for the PHY registers interface. Originally this is
           a PHY-viewport-based interface, but some platform may have
           specifically designed one.
+2 -2
Documentation/devicetree/bindings/pci/socionext,uniphier-pcie.yaml
···
   phy-names:
     const: pcie-phy
 
-  interrupt-controller:
+  legacy-interrupt-controller:
     type: object
     additionalProperties: false
 
···
                         <0 0 0 3 &pcie_intc 2>,
                         <0 0 0 4 &pcie_intc 3>;
 
-        pcie_intc: interrupt-controller {
+        pcie_intc: legacy-interrupt-controller {
             #address-cells = <0>;
             interrupt-controller;
             #interrupt-cells = <1>;
+74
Documentation/trace/events-pci.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + =========================== 4 + Subsystem Trace Points: PCI 5 + =========================== 6 + 7 + Overview 8 + ======== 9 + The PCI tracing system provides tracepoints to monitor critical hardware events 10 + that can impact system performance and reliability. These events normally show 11 + up here: 12 + 13 + /sys/kernel/tracing/events/pci 14 + 15 + Cf. include/trace/events/pci.h for the events definitions. 16 + 17 + Available Tracepoints 18 + ===================== 19 + 20 + pci_hp_event 21 + ------------ 22 + 23 + Monitors PCI hotplug events including card insertion/removal and link 24 + state changes. 25 + :: 26 + 27 + pci_hp_event "%s slot:%s, event:%s\n" 28 + 29 + **Event Types**: 30 + 31 + * ``LINK_UP`` - PCIe link established 32 + * ``LINK_DOWN`` - PCIe link lost 33 + * ``CARD_PRESENT`` - Card detected in slot 34 + * ``CARD_NOT_PRESENT`` - Card removed from slot 35 + 36 + **Example Usage**:: 37 + 38 + # Enable the tracepoint 39 + echo 1 > /sys/kernel/debug/tracing/events/pci/pci_hp_event/enable 40 + 41 + # Monitor events (the following output is generated when a device is hotplugged) 42 + cat /sys/kernel/debug/tracing/trace_pipe 43 + irq/51-pciehp-88 [001] ..... 1311.177459: pci_hp_event: 0000:00:02.0 slot:10, event:CARD_PRESENT 44 + 45 + irq/51-pciehp-88 [001] ..... 1311.177566: pci_hp_event: 0000:00:02.0 slot:10, event:LINK_UP 46 + 47 + pcie_link_event 48 + --------------- 49 + 50 + Monitors PCIe link speed changes and provides detailed link status information. 51 + :: 52 + 53 + pcie_link_event "%s type:%d, reason:%d, cur_bus_speed:%d, max_bus_speed:%d, width:%u, flit_mode:%u, status:%s\n" 54 + 55 + **Parameters**: 56 + 57 + * ``type`` - PCIe device type (4=Root Port, etc.) 
58 + * ``reason`` - Reason for link change: 59 + 60 + - ``0`` - Link retrain 61 + - ``1`` - Bus enumeration 62 + - ``2`` - Bandwidth notification enable 63 + - ``3`` - Bandwidth notification IRQ 64 + - ``4`` - Hotplug event 65 + 66 + 67 + **Example Usage**:: 68 + 69 + # Enable the tracepoint 70 + echo 1 > /sys/kernel/debug/tracing/events/pci/pcie_link_event/enable 71 + 72 + # Monitor events (the following output is generated when a device is hotplugged) 73 + cat /sys/kernel/debug/tracing/trace_pipe 74 + irq/51-pciehp-88 [001] ..... 381.545386: pcie_link_event: 0000:00:02.0 type:4, reason:4, cur_bus_speed:20, max_bus_speed:23, width:1, flit_mode:0, status:DLLLA
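Both trace formats above are plain printk-style strings, so a `trace_pipe` consumer can recover the fields with ordinary `sscanf()` parsing. A minimal userspace sketch; the helper name `parse_pci_hp_event` and the buffer sizes are illustrative, not part of the patch:

```c
#include <stdio.h>
#include <string.h>

/*
 * Recover the fields of a pci_hp_event payload such as
 * "0000:00:02.0 slot:10, event:CARD_PRESENT".
 * Returns 1 on success, 0 on a malformed line.
 */
int parse_pci_hp_event(const char *payload, char bdf[16],
		       char slot[16], char event[32])
{
	return sscanf(payload, "%15s slot:%15[^,], event:%31s",
		      bdf, slot, event) == 3;
}
```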
+1
Documentation/trace/index.rst
··· 54 54 events-power 55 55 events-nmi 56 56 events-msr 57 + events-pci 57 58 boottime-trace 58 59 histogram 59 60 histogram-design
+9
MAINTAINERS
··· 3922 3922 F: Documentation/devicetree/bindings/media/aspeed,video-engine.yaml 3923 3923 F: drivers/media/platform/aspeed/ 3924 3924 3925 + ASPEED PCIE CONTROLLER DRIVER 3926 + M: Jacky Chou <jacky_chou@aspeedtech.com> 3927 + L: linux-aspeed@lists.ozlabs.org (moderated for non-subscribers) 3928 + L: linux-pci@vger.kernel.org 3929 + S: Maintained 3930 + F: Documentation/devicetree/bindings/pci/aspeed,ast2600-pcie.yaml 3931 + F: drivers/pci/controller/pcie-aspeed.c 3932 + 3925 3933 ASUS EC HARDWARE MONITOR DRIVER 3926 3934 M: Eugene Shalygin <eugene.shalygin@gmail.com> 3927 3935 L: linux-hwmon@vger.kernel.org ··· 20511 20503 L: linux-arm-msm@vger.kernel.org 20512 20504 S: Maintained 20513 20505 F: Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml 20506 + F: Documentation/devicetree/bindings/pci/qcom,sa8255p-pcie-ep.yaml 20514 20507 F: drivers/pci/controller/dwc/pcie-qcom-common.c 20515 20508 F: drivers/pci/controller/dwc/pcie-qcom-ep.c 20516 20509
+1
drivers/cpuidle/cpuidle-tegra.c
··· 336 336 pr_info("disabling CC6 state, since PCIe IRQs are in use\n"); 337 337 tegra_cpuidle_disable_state(TEGRA_CC6); 338 338 } 339 + EXPORT_SYMBOL_GPL(tegra_cpuidle_pcie_irqs_in_use); 339 340 340 341 static void tegra_cpuidle_setup_tegra114_c7_state(void) 341 342 {
+202 -1
drivers/misc/pci_endpoint_test.c
··· 39 39 #define COMMAND_COPY BIT(5) 40 40 #define COMMAND_ENABLE_DOORBELL BIT(6) 41 41 #define COMMAND_DISABLE_DOORBELL BIT(7) 42 + #define COMMAND_BAR_SUBRANGE_SETUP BIT(8) 43 + #define COMMAND_BAR_SUBRANGE_CLEAR BIT(9) 42 44 43 45 #define PCI_ENDPOINT_TEST_STATUS 0x8 44 46 #define STATUS_READ_SUCCESS BIT(0) ··· 57 55 #define STATUS_DOORBELL_ENABLE_FAIL BIT(11) 58 56 #define STATUS_DOORBELL_DISABLE_SUCCESS BIT(12) 59 57 #define STATUS_DOORBELL_DISABLE_FAIL BIT(13) 58 + #define STATUS_BAR_SUBRANGE_SETUP_SUCCESS BIT(14) 59 + #define STATUS_BAR_SUBRANGE_SETUP_FAIL BIT(15) 60 + #define STATUS_BAR_SUBRANGE_CLEAR_SUCCESS BIT(16) 61 + #define STATUS_BAR_SUBRANGE_CLEAR_FAIL BIT(17) 60 62 61 63 #define PCI_ENDPOINT_TEST_LOWER_SRC_ADDR 0x0c 62 64 #define PCI_ENDPOINT_TEST_UPPER_SRC_ADDR 0x10 ··· 83 77 #define CAP_MSI BIT(1) 84 78 #define CAP_MSIX BIT(2) 85 79 #define CAP_INTX BIT(3) 80 + #define CAP_SUBRANGE_MAPPING BIT(4) 86 81 87 82 #define PCI_ENDPOINT_TEST_DB_BAR 0x34 88 83 #define PCI_ENDPOINT_TEST_DB_OFFSET 0x38 ··· 106 99 #define PCI_DEVICE_ID_RENESAS_R8A779F0 0x0031 107 100 108 101 #define PCI_DEVICE_ID_ROCKCHIP_RK3588 0x3588 102 + 103 + #define PCI_ENDPOINT_TEST_BAR_SUBRANGE_NSUB 2 109 104 110 105 static DEFINE_IDA(pci_endpoint_test_ida); 111 106 ··· 421 412 } 422 413 423 414 return 0; 415 + } 416 + 417 + static u8 pci_endpoint_test_subrange_sig_byte(enum pci_barno barno, 418 + unsigned int subno) 419 + { 420 + return 0x50 + (barno * 8) + subno; 421 + } 422 + 423 + static u8 pci_endpoint_test_subrange_test_byte(enum pci_barno barno, 424 + unsigned int subno) 425 + { 426 + return 0xa0 + (barno * 8) + subno; 427 + } 428 + 429 + static int pci_endpoint_test_bar_subrange_cmd(struct pci_endpoint_test *test, 430 + enum pci_barno barno, u32 command, 431 + u32 ok_bit, u32 fail_bit) 432 + { 433 + struct pci_dev *pdev = test->pdev; 434 + struct device *dev = &pdev->dev; 435 + int irq_type = test->irq_type; 436 + u32 status; 437 + 438 + if (irq_type < PCITEST_IRQ_TYPE_INTX 
|| 439 + irq_type > PCITEST_IRQ_TYPE_MSIX) { 440 + dev_err(dev, "Invalid IRQ type\n"); 441 + return -EINVAL; 442 + } 443 + 444 + reinit_completion(&test->irq_raised); 445 + 446 + pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_STATUS, 0); 447 + pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, irq_type); 448 + pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, 1); 449 + /* Reuse SIZE as a command parameter: bar number. */ 450 + pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_SIZE, barno); 451 + pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND, command); 452 + 453 + if (!wait_for_completion_timeout(&test->irq_raised, 454 + msecs_to_jiffies(1000))) 455 + return -ETIMEDOUT; 456 + 457 + status = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_STATUS); 458 + if (status & fail_bit) 459 + return -EIO; 460 + 461 + if (!(status & ok_bit)) 462 + return -EIO; 463 + 464 + return 0; 465 + } 466 + 467 + static int pci_endpoint_test_bar_subrange_setup(struct pci_endpoint_test *test, 468 + enum pci_barno barno) 469 + { 470 + return pci_endpoint_test_bar_subrange_cmd(test, barno, 471 + COMMAND_BAR_SUBRANGE_SETUP, 472 + STATUS_BAR_SUBRANGE_SETUP_SUCCESS, 473 + STATUS_BAR_SUBRANGE_SETUP_FAIL); 474 + } 475 + 476 + static int pci_endpoint_test_bar_subrange_clear(struct pci_endpoint_test *test, 477 + enum pci_barno barno) 478 + { 479 + return pci_endpoint_test_bar_subrange_cmd(test, barno, 480 + COMMAND_BAR_SUBRANGE_CLEAR, 481 + STATUS_BAR_SUBRANGE_CLEAR_SUCCESS, 482 + STATUS_BAR_SUBRANGE_CLEAR_FAIL); 483 + } 484 + 485 + static int pci_endpoint_test_bar_subrange(struct pci_endpoint_test *test, 486 + enum pci_barno barno) 487 + { 488 + u32 nsub = PCI_ENDPOINT_TEST_BAR_SUBRANGE_NSUB; 489 + struct device *dev = &test->pdev->dev; 490 + size_t sub_size, buf_size; 491 + resource_size_t bar_size; 492 + void __iomem *bar_addr; 493 + void *read_buf = NULL; 494 + int ret, clear_ret; 495 + size_t off, chunk; 496 + u32 i, exp, val; 497 + u8 pattern; 498 + 499 + if 
(!(test->ep_caps & CAP_SUBRANGE_MAPPING)) 500 + return -EOPNOTSUPP; 501 + 502 + /* 503 + * The test register BAR is not safe to reprogram and write/read 504 + * over its full size. BAR_TEST already special-cases it to a tiny 505 + * range. For subrange mapping tests, let's simply skip it. 506 + */ 507 + if (barno == test->test_reg_bar) 508 + return -EBUSY; 509 + 510 + bar_size = pci_resource_len(test->pdev, barno); 511 + if (!bar_size) 512 + return -ENODATA; 513 + 514 + bar_addr = test->bar[barno]; 515 + if (!bar_addr) 516 + return -ENOMEM; 517 + 518 + ret = pci_endpoint_test_bar_subrange_setup(test, barno); 519 + if (ret) 520 + return ret; 521 + 522 + if (bar_size % nsub || bar_size / nsub > SIZE_MAX) { 523 + ret = -EINVAL; 524 + goto out_clear; 525 + } 526 + 527 + sub_size = bar_size / nsub; 528 + if (sub_size < sizeof(u32)) { 529 + ret = -ENOSPC; 530 + goto out_clear; 531 + } 532 + 533 + /* Limit the temporary buffer size */ 534 + buf_size = min_t(size_t, sub_size, SZ_1M); 535 + 536 + read_buf = kmalloc(buf_size, GFP_KERNEL); 537 + if (!read_buf) { 538 + ret = -ENOMEM; 539 + goto out_clear; 540 + } 541 + 542 + /* 543 + * Step 1: verify EP-provided signature per subrange. This detects 544 + * whether the EP actually applied the submap order. 
545 + */ 546 + for (i = 0; i < nsub; i++) { 547 + exp = (u32)pci_endpoint_test_subrange_sig_byte(barno, i) * 548 + 0x01010101U; 549 + val = ioread32(bar_addr + (i * sub_size)); 550 + if (val != exp) { 551 + dev_err(dev, 552 + "BAR%d subrange%u signature mismatch @%#zx: exp %#08x got %#08x\n", 553 + barno, i, (size_t)i * sub_size, exp, val); 554 + ret = -EIO; 555 + goto out_clear; 556 + } 557 + val = ioread32(bar_addr + (i * sub_size) + sub_size - sizeof(u32)); 558 + if (val != exp) { 559 + dev_err(dev, 560 + "BAR%d subrange%u signature mismatch @%#zx: exp %#08x got %#08x\n", 561 + barno, i, 562 + ((size_t)i * sub_size) + sub_size - sizeof(u32), 563 + exp, val); 564 + ret = -EIO; 565 + goto out_clear; 566 + } 567 + } 568 + 569 + /* Step 2: write unique pattern per subrange (write all first). */ 570 + for (i = 0; i < nsub; i++) { 571 + pattern = pci_endpoint_test_subrange_test_byte(barno, i); 572 + memset_io(bar_addr + (i * sub_size), pattern, sub_size); 573 + } 574 + 575 + /* Step 3: read back and verify (read all after all writes). 
*/ 576 + for (i = 0; i < nsub; i++) { 577 + pattern = pci_endpoint_test_subrange_test_byte(barno, i); 578 + for (off = 0; off < sub_size; off += chunk) { 579 + void *bad; 580 + 581 + chunk = min_t(size_t, buf_size, sub_size - off); 582 + memcpy_fromio(read_buf, bar_addr + (i * sub_size) + off, 583 + chunk); 584 + bad = memchr_inv(read_buf, pattern, chunk); 585 + if (bad) { 586 + size_t bad_off = (u8 *)bad - (u8 *)read_buf; 587 + 588 + dev_err(dev, 589 + "BAR%d subrange%u data mismatch @%#zx (pattern %#02x)\n", 590 + barno, i, (size_t)i * sub_size + off + bad_off, 591 + pattern); 592 + ret = -EIO; 593 + goto out_clear; 594 + } 595 + } 596 + } 597 + 598 + out_clear: 599 + kfree(read_buf); 600 + clear_ret = pci_endpoint_test_bar_subrange_clear(test, barno); 601 + return ret ?: clear_ret; 424 602 } 425 603 426 604 static int pci_endpoint_test_intx_irq(struct pci_endpoint_test *test) ··· 1132 936 1133 937 switch (cmd) { 1134 938 case PCITEST_BAR: 939 + case PCITEST_BAR_SUBRANGE: 1135 940 bar = arg; 1136 941 if (bar <= NO_BAR || bar > BAR_5) 1137 942 goto ret; 1138 943 if (is_am654_pci_dev(pdev) && bar == BAR_0) 1139 944 goto ret; 1140 - ret = pci_endpoint_test_bar(test, bar); 945 + 946 + if (cmd == PCITEST_BAR) 947 + ret = pci_endpoint_test_bar(test, bar); 948 + else 949 + ret = pci_endpoint_test_bar_subrange(test, bar); 1141 950 break; 1142 951 case PCITEST_BARS: 1143 952 ret = pci_endpoint_test_bars(test);
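The subrange test's write-all-then-verify-all ordering is what catches overlapping or mis-ordered mappings: if two subranges alias the same backing memory, the later fill clobbers the earlier one and the read-back step fails. A userspace sketch of that check, with `memchr_inv_user` standing in for the kernel's `memchr_inv()` (function names and pattern values here are illustrative):

```c
#include <stddef.h>
#include <string.h>

/*
 * Userspace stand-in for the kernel's memchr_inv(): return a pointer to
 * the first of 'n' bytes that does NOT equal 'c', or NULL if all match.
 */
void *memchr_inv_user(const void *p, int c, size_t n)
{
	const unsigned char *s = p;
	size_t i;

	for (i = 0; i < n; i++)
		if (s[i] != (unsigned char)c)
			return (void *)(s + i);
	return NULL;
}

/*
 * Mirror the subrange test's ordering: fill every subrange with its own
 * pattern first, then verify them all afterwards. With aliased mappings
 * the later fills overwrite the earlier ones, so verification fails.
 * Returns 0 on success, -1 on the first mismatching subrange.
 */
int verify_subranges(unsigned char *bar, size_t bar_size, unsigned int nsub)
{
	size_t sub_size = bar_size / nsub;
	unsigned int i;

	for (i = 0; i < nsub; i++)
		memset(bar + (size_t)i * sub_size, 0xa0 + i, sub_size);
	for (i = 0; i < nsub; i++)
		if (memchr_inv_user(bar + (size_t)i * sub_size, 0xa0 + i,
				    sub_size))
			return -1;
	return 0;
}
```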
+4
drivers/pci/Makefile
··· 39 39 obj-$(CONFIG_PCI_DYNAMIC_OF_NODES) += of_property.o 40 40 obj-$(CONFIG_PCI_NPEM) += npem.o 41 41 obj-$(CONFIG_PCIE_TPH) += tph.o 42 + obj-$(CONFIG_CARDBUS) += setup-cardbus.o 42 43 43 44 # Endpoint library must be initialized before its users 44 45 obj-$(CONFIG_PCI_ENDPOINT) += endpoint/ ··· 48 47 obj-y += switch/ 49 48 50 49 subdir-ccflags-$(CONFIG_PCI_DEBUG) := -DDEBUG 50 + 51 + CFLAGS_trace.o := -I$(src) 52 + obj-$(CONFIG_TRACING) += trace.o
+5 -16
drivers/pci/bus.c
··· 15 15 #include <linux/of.h> 16 16 #include <linux/of_platform.h> 17 17 #include <linux/platform_device.h> 18 + #include <linux/pm_runtime.h> 18 19 #include <linux/proc_fs.h> 19 20 #include <linux/slab.h> 20 21 ··· 345 344 void pci_bus_add_device(struct pci_dev *dev) 346 345 { 347 346 struct device_node *dn = dev->dev.of_node; 348 - struct platform_device *pdev; 349 347 350 348 /* 351 349 * Can not put in pci_device_add yet because resources ··· 362 362 pci_save_state(dev); 363 363 364 364 /* 365 - * If the PCI device is associated with a pwrctrl device with a 366 - * power supply, create a device link between the PCI device and 367 - * pwrctrl device. This ensures that pwrctrl drivers are probed 368 - * before PCI client drivers. 365 + * Enable runtime PM, which potentially allows the device to 366 + * suspend immediately, only after the PCI state has been 367 + * configured completely. 369 368 */ 370 - pdev = of_find_device_by_node(dn); 371 - if (pdev) { 372 - if (of_pci_supply_present(dn)) { 373 - if (!device_link_add(&dev->dev, &pdev->dev, 374 - DL_FLAG_AUTOREMOVE_CONSUMER)) { 375 - pci_err(dev, "failed to add device link to power control device %s\n", 376 - pdev->name); 377 - } 378 - } 379 - put_device(&pdev->dev); 380 - } 369 + pm_runtime_enable(&dev->dev); 381 370 382 371 if (!dn || of_device_is_available(dn)) 383 372 pci_dev_allow_binding(dev);
+18 -1
drivers/pci/controller/Kconfig
··· 58 58 bool "ARM Versatile PB PCI controller" 59 59 depends on ARCH_VERSATILE || COMPILE_TEST 60 60 61 + config PCIE_ASPEED 62 + bool "ASPEED PCIe controller" 63 + depends on ARCH_ASPEED || COMPILE_TEST 64 + depends on OF 65 + depends on PCI_MSI 66 + select IRQ_MSI_LIB 67 + help 68 + Enable this option to support the PCIe controller found on ASPEED 69 + SoCs. 70 + 71 + This driver provides initialization and management for PCIe 72 + Root Complex functionality, including INTx and MSI support. 73 + 74 + Select Y if your platform uses an ASPEED SoC and requires PCIe 75 + connectivity. 76 + 61 77 config PCIE_BRCMSTB 62 78 tristate "Broadcom Brcmstb PCIe controller" 63 79 depends on ARCH_BRCMSTB || ARCH_BCM2835 || ARCH_BCMBCA || \ ··· 248 232 driver. 249 233 250 234 config PCI_TEGRA 251 - bool "NVIDIA Tegra PCIe controller" 235 + tristate "NVIDIA Tegra PCIe controller" 252 236 depends on ARCH_TEGRA || COMPILE_TEST 253 237 depends on PCI_MSI 254 238 select IRQ_MSI_LIB ··· 259 243 config PCIE_RCAR_HOST 260 244 bool "Renesas R-Car PCIe controller (host mode)" 261 245 depends on ARCH_RENESAS || COMPILE_TEST 246 + depends on OF 262 247 depends on PCI_MSI 263 248 select IRQ_MSI_LIB 264 249 help
+1
drivers/pci/controller/Makefile
··· 40 40 obj-$(CONFIG_PCIE_HISI_ERR) += pcie-hisi-error.o 41 41 obj-$(CONFIG_PCIE_APPLE) += pcie-apple.o 42 42 obj-$(CONFIG_PCIE_MT7621) += pcie-mt7621.o 43 + obj-$(CONFIG_PCIE_ASPEED) += pcie-aspeed.o 43 44 44 45 # pcie-hisi.o quirks are needed even without CONFIG_PCIE_DW 45 46 obj-y += dwc/
+25 -16
drivers/pci/controller/cadence/pci-j721e.c
··· 620 620 gpiod_set_value_cansleep(pcie->reset_gpio, 1); 621 621 } 622 622 623 - ret = cdns_pcie_host_setup(rc); 624 - if (ret < 0) 625 - goto err_pcie_setup; 623 + if (IS_ENABLED(CONFIG_PCI_J721E_HOST)) { 624 + ret = cdns_pcie_host_setup(rc); 625 + if (ret < 0) 626 + goto err_pcie_setup; 627 + } 626 628 627 629 break; 628 630 case PCI_MODE_EP: ··· 634 632 goto err_get_sync; 635 633 } 636 634 637 - ret = cdns_pcie_ep_setup(ep); 638 - if (ret < 0) 639 - goto err_pcie_setup; 635 + if (IS_ENABLED(CONFIG_PCI_J721E_EP)) { 636 + ret = cdns_pcie_ep_setup(ep); 637 + if (ret < 0) 638 + goto err_pcie_setup; 639 + } 640 640 641 641 break; 642 642 } ··· 663 659 struct cdns_pcie_ep *ep; 664 660 struct cdns_pcie_rc *rc; 665 661 666 - if (pcie->mode == PCI_MODE_RC) { 662 + if (IS_ENABLED(CONFIG_PCI_J721E_HOST) && 663 + pcie->mode == PCI_MODE_RC) { 667 664 rc = container_of(cdns_pcie, struct cdns_pcie_rc, pcie); 668 665 cdns_pcie_host_disable(rc); 669 - } else { 666 + } else if (IS_ENABLED(CONFIG_PCI_J721E_EP)) { 670 667 ep = container_of(cdns_pcie, struct cdns_pcie_ep, pcie); 671 668 cdns_pcie_ep_disable(ep); 672 669 } ··· 733 728 gpiod_set_value_cansleep(pcie->reset_gpio, 1); 734 729 } 735 730 736 - ret = cdns_pcie_host_link_setup(rc); 737 - if (ret < 0) { 738 - clk_disable_unprepare(pcie->refclk); 739 - return ret; 731 + if (IS_ENABLED(CONFIG_PCI_J721E_HOST)) { 732 + ret = cdns_pcie_host_link_setup(rc); 733 + if (ret < 0) { 734 + clk_disable_unprepare(pcie->refclk); 735 + return ret; 736 + } 740 737 } 741 738 742 739 /* ··· 748 741 for (enum cdns_pcie_rp_bar bar = RP_BAR0; bar <= RP_NO_BAR; bar++) 749 742 rc->avail_ib_bar[bar] = true; 750 743 751 - ret = cdns_pcie_host_init(rc); 752 - if (ret) { 753 - clk_disable_unprepare(pcie->refclk); 754 - return ret; 744 + if (IS_ENABLED(CONFIG_PCI_J721E_HOST)) { 745 + ret = cdns_pcie_host_init(rc); 746 + if (ret) { 747 + clk_disable_unprepare(pcie->refclk); 748 + return ret; 749 + } 755 750 } 756 751 } 757 752
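The `IS_ENABLED()` guards above let the compiler discard the host-only or endpoint-only paths while still type-checking them, which `#ifdef` would not. The preprocessor trick behind it can be re-created outside the kernel; this is a simplified sketch of the machinery from include/linux/kconfig.h (built-in case only, no `_MODULE` handling), reusing the CONFIG names purely for illustration:

```c
/*
 * An enabled option is defined as 1; token pasting then produces
 * __ARG_PLACEHOLDER_1, which expands to "0," and shifts the variadic
 * arguments so that 1 is selected. An undefined option leaves a junk
 * token in place and 0 is selected instead.
 */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)
#define IS_ENABLED(option) __is_defined(option)

/* Example configuration: host support on, endpoint support off. */
#define CONFIG_PCI_J721E_HOST 1
/* CONFIG_PCI_J721E_EP deliberately not defined */
```

Because `IS_ENABLED()` evaluates to a constant 0 or 1, `if (IS_ENABLED(...))` branches are folded away at compile time while their bodies remain visible to the compiler.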
+11 -1
drivers/pci/controller/cadence/pcie-cadence-host-common.c
··· 173 173 const struct list_head *b) 174 174 { 175 175 struct resource_entry *entry1, *entry2; 176 + u64 size1, size2; 176 177 177 178 entry1 = container_of(a, struct resource_entry, node); 178 179 entry2 = container_of(b, struct resource_entry, node); 179 180 180 - return resource_size(entry2->res) - resource_size(entry1->res); 181 + size1 = resource_size(entry1->res); 182 + size2 = resource_size(entry2->res); 183 + 184 + if (size1 > size2) 185 + return -1; 186 + 187 + if (size1 < size2) 188 + return 1; 189 + 190 + return 0; 181 191 } 182 192 EXPORT_SYMBOL_GPL(cdns_pcie_host_dma_ranges_cmp); 183 193
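The rewritten comparator avoids the classic bug in the removed line: a `u64` difference truncated to the comparator's `int` return value loses the high bits, so the sign can flip or the result can collapse to zero for large resources. A standalone illustration; the function names are mine, assuming the descending order intended by `cdns_pcie_host_dma_ranges_cmp()`:

```c
#include <stdint.h>

/*
 * Buggy pattern removed by the patch: only the low 32 bits of the u64
 * difference survive the conversion to int.
 */
int cmp_by_subtraction(uint64_t size1, uint64_t size2)
{
	return (int)(size2 - size1); /* descending order intended */
}

/* Fixed pattern: an explicit three-way comparison never overflows. */
int cmp_three_way(uint64_t size1, uint64_t size2)
{
	if (size1 > size2)
		return -1;
	if (size1 < size2)
		return 1;
	return 0;
}
```

For example, comparing a 4 GiB resource against a 4 KiB one: the difference is `4096 - 2^32`, whose low 32 bits are `+4096`, so the subtraction version wrongly reports the larger resource as sorting later; and two sizes differing by exactly `2^32` wrongly compare equal.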
+2 -2
drivers/pci/controller/cadence/pcie-cadence.c
··· 13 13 u8 cdns_pcie_find_capability(struct cdns_pcie *pcie, u8 cap) 14 14 { 15 15 return PCI_FIND_NEXT_CAP(cdns_pcie_read_cfg, PCI_CAPABILITY_LIST, 16 - cap, pcie); 16 + cap, NULL, pcie); 17 17 } 18 18 EXPORT_SYMBOL_GPL(cdns_pcie_find_capability); 19 19 20 20 u16 cdns_pcie_find_ext_capability(struct cdns_pcie *pcie, u8 cap) 21 21 { 22 - return PCI_FIND_NEXT_EXT_CAP(cdns_pcie_read_cfg, 0, cap, pcie); 22 + return PCI_FIND_NEXT_EXT_CAP(cdns_pcie_read_cfg, 0, cap, NULL, pcie); 23 23 } 24 24 EXPORT_SYMBOL_GPL(cdns_pcie_find_ext_capability); 25 25
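`PCI_FIND_NEXT_CAP()` abstracts the standard capability-list walk over a controller-supplied config read accessor (the added `NULL` argument corresponds to a new parameter of the macro whose role is not shown in this hunk). The walk itself just follows 8-bit next pointers starting from `PCI_CAPABILITY_LIST`; a simplified userspace sketch over a config-space snapshot (the `find_capability` helper and the TTL bound are illustrative):

```c
#include <stdint.h>

#define PCI_CAPABILITY_LIST 0x34
#define PCI_CAP_ID_MSI      0x05
#define PCI_CAP_ID_EXP      0x10

/*
 * Walk the classic capability linked list in a 256-byte config space
 * snapshot: byte 0x34 holds the first capability offset, and each
 * capability starts with [cap ID][next pointer]. A TTL bounds the walk
 * so a malformed, circular list cannot loop forever.
 * Returns the capability offset, or 0 if not found.
 */
uint8_t find_capability(const uint8_t cfg[256], uint8_t cap)
{
	uint8_t pos = cfg[PCI_CAPABILITY_LIST] & ~3u;
	int ttl = 48; /* 256 bytes / 4-byte min alignment = 64 caps max */

	while (pos && ttl--) {
		if (cfg[pos] == cap)
			return pos;
		pos = cfg[pos + 1] & ~3u;
	}
	return 0;
}
```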
+2 -2
drivers/pci/controller/dwc/Kconfig
··· 228 228 229 229 config PCIE_TEGRA194_HOST 230 230 tristate "NVIDIA Tegra194 (and later) PCIe controller (host mode)" 231 - depends on ARCH_TEGRA_194_SOC || COMPILE_TEST 231 + depends on (ARCH_TEGRA && ARM64) || COMPILE_TEST 232 232 depends on PCI_MSI 233 233 select PCIE_DW_HOST 234 234 select PHY_TEGRA194_P2U ··· 243 243 244 244 config PCIE_TEGRA194_EP 245 245 tristate "NVIDIA Tegra194 (and later) PCIe controller (endpoint mode)" 246 - depends on ARCH_TEGRA_194_SOC || COMPILE_TEST 246 + depends on (ARCH_TEGRA && ARM64) || COMPILE_TEST 247 247 depends on PCI_ENDPOINT 248 248 select PCIE_DW_EP 249 249 select PHY_TEGRA194_P2U
+1
drivers/pci/controller/dwc/pci-dra7xx.c
··· 424 424 } 425 425 426 426 static const struct pci_epc_features dra7xx_pcie_epc_features = { 427 + DWC_EPC_COMMON_FEATURES, 427 428 .linkup_notifier = true, 428 429 .msi_capable = true, 429 430 };
+69 -8
drivers/pci/controller/dwc/pci-imx6.c
··· 52 52 #define IMX95_PCIE_REF_CLKEN BIT(23) 53 53 #define IMX95_PCIE_PHY_CR_PARA_SEL BIT(9) 54 54 #define IMX95_PCIE_SS_RW_REG_1 0xf4 55 + #define IMX95_PCIE_CLKREQ_OVERRIDE_EN BIT(8) 56 + #define IMX95_PCIE_CLKREQ_OVERRIDE_VAL BIT(9) 55 57 #define IMX95_PCIE_SYS_AUX_PWR_DET BIT(31) 56 58 57 59 #define IMX95_PE0_GEN_CTRL_1 0x1050 ··· 116 114 #define IMX_PCIE_FLAG_BROKEN_SUSPEND BIT(9) 117 115 #define IMX_PCIE_FLAG_HAS_LUT BIT(10) 118 116 #define IMX_PCIE_FLAG_8GT_ECN_ERR051586 BIT(11) 117 + #define IMX_PCIE_FLAG_SKIP_L23_READY BIT(12) 119 118 120 119 #define imx_check_flag(pci, val) (pci->drvdata->flags & val) 121 120 ··· 139 136 int (*enable_ref_clk)(struct imx_pcie *pcie, bool enable); 140 137 int (*core_reset)(struct imx_pcie *pcie, bool assert); 141 138 int (*wait_pll_lock)(struct imx_pcie *pcie); 139 + void (*clr_clkreq_override)(struct imx_pcie *pcie); 142 140 const struct dw_pcie_host_ops *ops; 143 141 }; 144 142 ··· 153 149 struct gpio_desc *reset_gpiod; 154 150 struct clk_bulk_data *clks; 155 151 int num_clks; 152 + bool supports_clkreq; 153 + bool enable_ext_refclk; 156 154 struct regmap *iomuxc_gpr; 157 155 u16 msi_ctrl; 158 156 u32 controller_id; ··· 247 241 248 242 static int imx95_pcie_init_phy(struct imx_pcie *imx_pcie) 249 243 { 244 + bool ext = imx_pcie->enable_ext_refclk; 245 + 250 246 /* 251 247 * ERR051624: The Controller Without Vaux Cannot Exit L23 Ready 252 248 * Through Beacon or PERST# De-assertion ··· 267 259 IMX95_PCIE_PHY_CR_PARA_SEL, 268 260 IMX95_PCIE_PHY_CR_PARA_SEL); 269 261 270 - regmap_update_bits(imx_pcie->iomuxc_gpr, 271 - IMX95_PCIE_PHY_GEN_CTRL, 272 - IMX95_PCIE_REF_USE_PAD, 0); 273 - regmap_update_bits(imx_pcie->iomuxc_gpr, 274 - IMX95_PCIE_SS_RW_REG_0, 262 + regmap_update_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_PHY_GEN_CTRL, 263 + ext ? 
IMX95_PCIE_REF_USE_PAD : 0, 264 + IMX95_PCIE_REF_USE_PAD); 265 + regmap_update_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_SS_RW_REG_0, 275 266 IMX95_PCIE_REF_CLKEN, 276 - IMX95_PCIE_REF_CLKEN); 267 + ext ? 0 : IMX95_PCIE_REF_CLKEN); 277 268 278 269 return 0; 279 270 } ··· 692 685 return 0; 693 686 } 694 687 695 - static int imx8mm_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable) 688 + static void imx8mm_pcie_clkreq_override(struct imx_pcie *imx_pcie, bool enable) 696 689 { 697 690 int offset = imx_pcie_grp_offset(imx_pcie); 698 691 ··· 702 695 regmap_update_bits(imx_pcie->iomuxc_gpr, offset, 703 696 IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN, 704 697 enable ? IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN : 0); 698 + } 699 + 700 + static int imx8mm_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable) 701 + { 702 + imx8mm_pcie_clkreq_override(imx_pcie, enable); 705 703 return 0; 706 704 } 707 705 ··· 716 704 IMX7D_GPR12_PCIE_PHY_REFCLK_SEL, 717 705 enable ? 0 : IMX7D_GPR12_PCIE_PHY_REFCLK_SEL); 718 706 return 0; 707 + } 708 + 709 + static void imx95_pcie_clkreq_override(struct imx_pcie *imx_pcie, bool enable) 710 + { 711 + regmap_update_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_SS_RW_REG_1, 712 + IMX95_PCIE_CLKREQ_OVERRIDE_EN, 713 + enable ? IMX95_PCIE_CLKREQ_OVERRIDE_EN : 0); 714 + regmap_update_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_SS_RW_REG_1, 715 + IMX95_PCIE_CLKREQ_OVERRIDE_VAL, 716 + enable ? 
IMX95_PCIE_CLKREQ_OVERRIDE_VAL : 0); 717 + } 718 + 719 + static int imx95_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable) 720 + { 721 + imx95_pcie_clkreq_override(imx_pcie, enable); 722 + return 0; 723 + } 724 + 725 + static void imx8mm_pcie_clr_clkreq_override(struct imx_pcie *imx_pcie) 726 + { 727 + imx8mm_pcie_clkreq_override(imx_pcie, false); 728 + } 729 + 730 + static void imx95_pcie_clr_clkreq_override(struct imx_pcie *imx_pcie) 731 + { 732 + imx95_pcie_clkreq_override(imx_pcie, false); 719 733 } 720 734 721 735 static int imx_pcie_clk_enable(struct imx_pcie *imx_pcie) ··· 1360 1322 dw_pcie_writel_dbi(pci, GEN3_RELATED_OFF, val); 1361 1323 dw_pcie_dbi_ro_wr_dis(pci); 1362 1324 } 1325 + 1326 + /* Clear CLKREQ# override if supports_clkreq is true and link is up */ 1327 + if (dw_pcie_link_up(pci) && imx_pcie->supports_clkreq) { 1328 + if (imx_pcie->drvdata->clr_clkreq_override) 1329 + imx_pcie->drvdata->clr_clkreq_override(imx_pcie); 1330 + } 1363 1331 } 1364 1332 1365 1333 /* ··· 1431 1387 } 1432 1388 1433 1389 static const struct pci_epc_features imx8m_pcie_epc_features = { 1390 + DWC_EPC_COMMON_FEATURES, 1434 1391 .msi_capable = true, 1435 1392 .bar[BAR_1] = { .type = BAR_RESERVED, }, 1436 1393 .bar[BAR_3] = { .type = BAR_RESERVED, }, ··· 1441 1396 }; 1442 1397 1443 1398 static const struct pci_epc_features imx8q_pcie_epc_features = { 1399 + DWC_EPC_COMMON_FEATURES, 1444 1400 .msi_capable = true, 1445 1401 .bar[BAR_1] = { .type = BAR_RESERVED, }, 1446 1402 .bar[BAR_3] = { .type = BAR_RESERVED, }, ··· 1462 1416 * BAR5 | Enable | 32-bit | 64 KB | Programmable Size 1463 1417 */ 1464 1418 static const struct pci_epc_features imx95_pcie_epc_features = { 1419 + DWC_EPC_COMMON_FEATURES, 1465 1420 .msi_capable = true, 1466 1421 .bar[BAR_1] = { .type = BAR_FIXED, .fixed_size = SZ_64K, }, 1467 1422 .align = SZ_4K, ··· 1649 1602 struct imx_pcie *imx_pcie; 1650 1603 struct device_node *np; 1651 1604 struct device_node *node = dev->of_node; 1652 - int ret, 
domain; 1605 + int i, ret, domain; 1653 1606 u16 val; 1654 1607 1655 1608 imx_pcie = devm_kzalloc(dev, sizeof(*imx_pcie), GFP_KERNEL); ··· 1700 1653 if (imx_pcie->num_clks < 0) 1701 1654 return dev_err_probe(dev, imx_pcie->num_clks, 1702 1655 "failed to get clocks\n"); 1656 + for (i = 0; i < imx_pcie->num_clks; i++) 1657 + if (strncmp(imx_pcie->clks[i].id, "extref", 6) == 0) 1658 + imx_pcie->enable_ext_refclk = true; 1703 1659 1704 1660 if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_HAS_PHYDRV)) { 1705 1661 imx_pcie->phy = devm_phy_get(dev, "pcie-phy"); ··· 1790 1740 /* Limit link speed */ 1791 1741 pci->max_link_speed = 1; 1792 1742 of_property_read_u32(node, "fsl,max-link-speed", &pci->max_link_speed); 1743 + imx_pcie->supports_clkreq = of_property_read_bool(node, "supports-clkreq"); 1793 1744 1794 1745 ret = devm_regulator_get_enable_optional(&pdev->dev, "vpcie3v3aux"); 1795 1746 if (ret < 0 && ret != -ENODEV) ··· 1828 1777 */ 1829 1778 imx_pcie_add_lut_by_rid(imx_pcie, 0); 1830 1779 } else { 1780 + if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_SKIP_L23_READY)) 1781 + pci->pp.skip_l23_ready = true; 1831 1782 pci->pp.use_atu_msg = true; 1832 1783 ret = dw_pcie_host_init(&pci->pp); 1833 1784 if (ret < 0) ··· 1891 1838 .variant = IMX6QP, 1892 1839 .flags = IMX_PCIE_FLAG_IMX_PHY | 1893 1840 IMX_PCIE_FLAG_SPEED_CHANGE_WORKAROUND | 1841 + IMX_PCIE_FLAG_SKIP_L23_READY | 1894 1842 IMX_PCIE_FLAG_SUPPORTS_SUSPEND, 1895 1843 .dbi_length = 0x200, 1896 1844 .gpr = "fsl,imx6q-iomuxc-gpr", ··· 1908 1854 .variant = IMX7D, 1909 1855 .flags = IMX_PCIE_FLAG_SUPPORTS_SUSPEND | 1910 1856 IMX_PCIE_FLAG_HAS_APP_RESET | 1857 + IMX_PCIE_FLAG_SKIP_L23_READY | 1911 1858 IMX_PCIE_FLAG_HAS_PHY_RESET, 1912 1859 .gpr = "fsl,imx7d-iomuxc-gpr", 1913 1860 .mode_off[0] = IOMUXC_GPR12, ··· 1928 1873 .mode_mask[1] = IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE, 1929 1874 .init_phy = imx8mq_pcie_init_phy, 1930 1875 .enable_ref_clk = imx8mm_pcie_enable_ref_clk, 1876 + .clr_clkreq_override = 
imx8mm_pcie_clr_clkreq_override, 1931 1877 }, 1932 1878 [IMX8MM] = { 1933 1879 .variant = IMX8MM, ··· 1939 1883 .mode_off[0] = IOMUXC_GPR12, 1940 1884 .mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE, 1941 1885 .enable_ref_clk = imx8mm_pcie_enable_ref_clk, 1886 + .clr_clkreq_override = imx8mm_pcie_clr_clkreq_override, 1942 1887 }, 1943 1888 [IMX8MP] = { 1944 1889 .variant = IMX8MP, ··· 1950 1893 .mode_off[0] = IOMUXC_GPR12, 1951 1894 .mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE, 1952 1895 .enable_ref_clk = imx8mm_pcie_enable_ref_clk, 1896 + .clr_clkreq_override = imx8mm_pcie_clr_clkreq_override, 1953 1897 }, 1954 1898 [IMX8Q] = { 1955 1899 .variant = IMX8Q, ··· 1971 1913 .core_reset = imx95_pcie_core_reset, 1972 1914 .init_phy = imx95_pcie_init_phy, 1973 1915 .wait_pll_lock = imx95_pcie_wait_for_phy_pll_lock, 1916 + .enable_ref_clk = imx95_pcie_enable_ref_clk, 1917 + .clr_clkreq_override = imx95_pcie_clr_clkreq_override, 1974 1918 }, 1975 1919 [IMX8MQ_EP] = { 1976 1920 .variant = IMX8MQ_EP, ··· 2029 1969 .core_reset = imx95_pcie_core_reset, 2030 1970 .wait_pll_lock = imx95_pcie_wait_for_phy_pll_lock, 2031 1971 .epc_features = &imx95_pcie_epc_features, 1972 + .enable_ref_clk = imx95_pcie_enable_ref_clk, 2032 1973 .mode = DW_PCIE_EP_TYPE, 2033 1974 }, 2034 1975 };
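The CLKREQ# override helpers above are thin wrappers around `regmap_update_bits()`, whose contract is a masked read-modify-write: only the bits in the mask change, everything else in the register is preserved. A self-contained model of that semantic (the register value, the bit names, and the `update_bits` helper are illustrative):

```c
#include <stdint.h>

#define CLKREQ_OVERRIDE_EN  (1u << 8)
#define CLKREQ_OVERRIDE_VAL (1u << 9)

/*
 * Model of regmap_update_bits() semantics: replace only the bits in
 * 'mask' with the corresponding bits of 'val', preserving the rest.
 */
uint32_t update_bits(uint32_t reg, uint32_t mask, uint32_t val)
{
	return (reg & ~mask) | (val & mask);
}
```

With this shape, asserting the override is `update_bits(reg, EN, EN)` and clearing it again, as `clr_clkreq_override()` does once the link is up, is `update_bits(reg, EN, 0)`; unrelated register bits survive both calls.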
+1
drivers/pci/controller/dwc/pci-keystone.c
··· 930 930 } 931 931 932 932 static const struct pci_epc_features ks_pcie_am654_epc_features = { 933 + DWC_EPC_COMMON_FEATURES, 933 934 .msi_capable = true, 934 935 .msix_capable = true, 935 936 .bar[BAR_0] = { .type = BAR_RESERVED, },
+1
drivers/pci/controller/dwc/pcie-artpec6.c
··· 370 370 } 371 371 372 372 static const struct pci_epc_features artpec6_pcie_epc_features = { 373 + DWC_EPC_COMMON_FEATURES, 373 374 .msi_capable = true, 374 375 }; 375 376
+1 -51
drivers/pci/controller/dwc/pcie-designware-debugfs.c
··· 443 443 return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos); 444 444 } 445 445 446 - static const char *ltssm_status_string(enum dw_pcie_ltssm ltssm) 447 - { 448 - const char *str; 449 - 450 - switch (ltssm) { 451 - #define DW_PCIE_LTSSM_NAME(n) case n: str = #n; break 452 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_QUIET); 453 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_ACT); 454 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_ACTIVE); 455 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_COMPLIANCE); 456 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_CONFIG); 457 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_PRE_DETECT_QUIET); 458 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_WAIT); 459 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_START); 460 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_ACEPT); 461 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_WAI); 462 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_ACEPT); 463 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_COMPLETE); 464 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_IDLE); 465 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_LOCK); 466 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_SPEED); 467 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_RCVRCFG); 468 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_IDLE); 469 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0); 470 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0S); 471 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L123_SEND_EIDLE); 472 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_IDLE); 473 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_IDLE); 474 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_WAKE); 475 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_ENTRY); 476 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_IDLE); 477 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED); 478 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ENTRY); 479 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ACTIVE); 480 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT); 481 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT_TIMEOUT); 482 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET_ENTRY); 483 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET); 484 - 
DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ0); 485 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ1); 486 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ2); 487 - DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ3); 488 - default: 489 - str = "DW_PCIE_LTSSM_UNKNOWN"; 490 - break; 491 - } 492 - 493 - return str + strlen("DW_PCIE_LTSSM_"); 494 - } 495 - 496 446 static int ltssm_status_show(struct seq_file *s, void *v) 497 447 { 498 448 struct dw_pcie *pci = s->private; 499 449 enum dw_pcie_ltssm val; 500 450 501 451 val = dw_pcie_get_ltssm(pci); 502 - seq_printf(s, "%s (0x%02x)\n", ltssm_status_string(val), val); 452 + seq_printf(s, "%s (0x%02x)\n", dw_pcie_ltssm_status_string(val), val); 503 453 504 454 return 0; 505 455 }
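The removed switch is the kernel's stringify-and-strip idiom, now provided by the shared `dw_pcie_ltssm_status_string()` helper instead. The idiom itself, re-created on a toy enum (all names here are hypothetical): a local macro turns each enumerator into its own name with `#n`, and the common prefix is skipped with pointer arithmetic so callers see just the state suffix.

```c
#include <string.h>

enum demo_ltssm { DEMO_LTSSM_DETECT = 0, DEMO_LTSSM_L0 = 1 };

/* Map an enum value to its suffix string, e.g. DEMO_LTSSM_L0 -> "L0". */
const char *demo_ltssm_string(enum demo_ltssm state)
{
	const char *str;

	switch (state) {
#define DEMO_LTSSM_NAME(n) case n: str = #n; break
	DEMO_LTSSM_NAME(DEMO_LTSSM_DETECT);
	DEMO_LTSSM_NAME(DEMO_LTSSM_L0);
#undef DEMO_LTSSM_NAME
	default:
		str = "DEMO_LTSSM_UNKNOWN";
		break;
	}

	/* Skip the shared "DEMO_LTSSM_" prefix. */
	return str + strlen("DEMO_LTSSM_");
}
```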
+317 -82
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 72 72 static u8 dw_pcie_ep_find_capability(struct dw_pcie_ep *ep, u8 func_no, u8 cap) 73 73 { 74 74 return PCI_FIND_NEXT_CAP(dw_pcie_ep_read_cfg, PCI_CAPABILITY_LIST, 75 - cap, ep, func_no); 75 + cap, NULL, ep, func_no); 76 76 } 77 77 78 - /** 79 - * dw_pcie_ep_hide_ext_capability - Hide a capability from the linked list 80 - * @pci: DWC PCI device 81 - * @prev_cap: Capability preceding the capability that should be hidden 82 - * @cap: Capability that should be hidden 83 - * 84 - * Return: 0 if success, errno otherwise. 85 - */ 86 - int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, u8 prev_cap, u8 cap) 78 + static u16 dw_pcie_ep_find_ext_capability(struct dw_pcie_ep *ep, 79 + u8 func_no, u8 cap) 87 80 { 88 - u16 prev_cap_offset, cap_offset; 89 - u32 prev_cap_header, cap_header; 90 - 91 - prev_cap_offset = dw_pcie_find_ext_capability(pci, prev_cap); 92 - if (!prev_cap_offset) 93 - return -EINVAL; 94 - 95 - prev_cap_header = dw_pcie_readl_dbi(pci, prev_cap_offset); 96 - cap_offset = PCI_EXT_CAP_NEXT(prev_cap_header); 97 - cap_header = dw_pcie_readl_dbi(pci, cap_offset); 98 - 99 - /* cap must immediately follow prev_cap. */ 100 - if (PCI_EXT_CAP_ID(cap_header) != cap) 101 - return -EINVAL; 102 - 103 - /* Clear next ptr. */ 104 - prev_cap_header &= ~GENMASK(31, 20); 105 - 106 - /* Set next ptr to next ptr of cap. 
*/ 107 - prev_cap_header |= cap_header & GENMASK(31, 20); 108 - 109 - dw_pcie_dbi_ro_wr_en(pci); 110 - dw_pcie_writel_dbi(pci, prev_cap_offset, prev_cap_header); 111 - dw_pcie_dbi_ro_wr_dis(pci); 112 - 113 - return 0; 81 + return PCI_FIND_NEXT_EXT_CAP(dw_pcie_ep_read_cfg, 0, 82 + cap, NULL, ep, func_no); 114 83 } 115 - EXPORT_SYMBOL_GPL(dw_pcie_ep_hide_ext_capability); 116 84 117 85 static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 118 86 struct pci_epf_header *hdr) ··· 107 139 return 0; 108 140 } 109 141 110 - static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type, 111 - dma_addr_t parent_bus_addr, enum pci_barno bar, 112 - size_t size) 142 + /* BAR Match Mode inbound iATU mapping */ 143 + static int dw_pcie_ep_ib_atu_bar(struct dw_pcie_ep *ep, u8 func_no, int type, 144 + dma_addr_t parent_bus_addr, enum pci_barno bar, 145 + size_t size) 113 146 { 114 147 int ret; 115 148 u32 free_win; 116 149 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 150 + struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no); 117 151 118 - if (!ep->bar_to_atu[bar]) 152 + if (!ep_func) 153 + return -EINVAL; 154 + 155 + if (!ep_func->bar_to_atu[bar]) 119 156 free_win = find_first_zero_bit(ep->ib_window_map, pci->num_ib_windows); 120 157 else 121 - free_win = ep->bar_to_atu[bar] - 1; 158 + free_win = ep_func->bar_to_atu[bar] - 1; 122 159 123 160 if (free_win >= pci->num_ib_windows) { 124 161 dev_err(pci->dev, "No free inbound window\n"); ··· 141 168 * Always increment free_win before assignment, since value 0 is used to identify 142 169 * unallocated mapping. 
143 170 */ 144 - ep->bar_to_atu[bar] = free_win + 1; 171 + ep_func->bar_to_atu[bar] = free_win + 1; 145 172 set_bit(free_win, ep->ib_window_map); 146 173 147 174 return 0; 175 + } 176 + 177 + static void dw_pcie_ep_clear_ib_maps(struct dw_pcie_ep *ep, u8 func_no, enum pci_barno bar) 178 + { 179 + struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no); 180 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 181 + struct device *dev = pci->dev; 182 + unsigned int i, num; 183 + u32 atu_index; 184 + u32 *indexes; 185 + 186 + if (!ep_func) 187 + return; 188 + 189 + /* Tear down the BAR Match Mode mapping, if any. */ 190 + if (ep_func->bar_to_atu[bar]) { 191 + atu_index = ep_func->bar_to_atu[bar] - 1; 192 + dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, atu_index); 193 + clear_bit(atu_index, ep->ib_window_map); 194 + ep_func->bar_to_atu[bar] = 0; 195 + } 196 + 197 + /* Tear down all Address Match Mode mappings, if any. */ 198 + indexes = ep_func->ib_atu_indexes[bar]; 199 + num = ep_func->num_ib_atu_indexes[bar]; 200 + ep_func->ib_atu_indexes[bar] = NULL; 201 + ep_func->num_ib_atu_indexes[bar] = 0; 202 + if (!indexes) 203 + return; 204 + for (i = 0; i < num; i++) { 205 + dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, indexes[i]); 206 + clear_bit(indexes[i], ep->ib_window_map); 207 + } 208 + devm_kfree(dev, indexes); 209 + } 210 + 211 + static u64 dw_pcie_ep_read_bar_assigned(struct dw_pcie_ep *ep, u8 func_no, 212 + enum pci_barno bar, int flags) 213 + { 214 + u32 reg = PCI_BASE_ADDRESS_0 + (4 * bar); 215 + u32 lo, hi; 216 + u64 addr; 217 + 218 + lo = dw_pcie_ep_readl_dbi(ep, func_no, reg); 219 + 220 + if (flags & PCI_BASE_ADDRESS_SPACE) 221 + return lo & PCI_BASE_ADDRESS_IO_MASK; 222 + 223 + addr = lo & PCI_BASE_ADDRESS_MEM_MASK; 224 + if (!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64)) 225 + return addr; 226 + 227 + hi = dw_pcie_ep_readl_dbi(ep, func_no, reg + 4); 228 + return addr | ((u64)hi << 32); 229 + } 230 + 231 + static int 
dw_pcie_ep_validate_submap(struct dw_pcie_ep *ep, 232 + const struct pci_epf_bar_submap *submap, 233 + unsigned int num_submap, size_t bar_size) 234 + { 235 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 236 + u32 align = pci->region_align; 237 + size_t off = 0; 238 + unsigned int i; 239 + size_t size; 240 + 241 + if (!align || !IS_ALIGNED(bar_size, align)) 242 + return -EINVAL; 243 + 244 + /* 245 + * The submap array order defines the BAR layout (submap[0] starts 246 + * at offset 0 and each entry immediately follows the previous 247 + * one). Here, validate that it forms a strict, gapless 248 + * decomposition of the BAR: 249 + * - each entry has a non-zero size 250 + * - sizes, implicit offsets and phys_addr are aligned to 251 + * pci->region_align 252 + * - each entry lies within the BAR range 253 + * - the entries exactly cover the whole BAR 254 + * 255 + * Note: dw_pcie_prog_inbound_atu() also checks alignment for the 256 + * PCI address and the target phys_addr, but validating up-front 257 + * avoids partially programming iATU windows in vain. 
258 + */ 259 + for (i = 0; i < num_submap; i++) { 260 + size = submap[i].size; 261 + 262 + if (!size) 263 + return -EINVAL; 264 + 265 + if (!IS_ALIGNED(size, align) || !IS_ALIGNED(off, align)) 266 + return -EINVAL; 267 + 268 + if (!IS_ALIGNED(submap[i].phys_addr, align)) 269 + return -EINVAL; 270 + 271 + if (off > bar_size || size > bar_size - off) 272 + return -EINVAL; 273 + 274 + off += size; 275 + } 276 + if (off != bar_size) 277 + return -EINVAL; 278 + 279 + return 0; 280 + } 281 + 282 + /* Address Match Mode inbound iATU mapping */ 283 + static int dw_pcie_ep_ib_atu_addr(struct dw_pcie_ep *ep, u8 func_no, int type, 284 + const struct pci_epf_bar *epf_bar) 285 + { 286 + struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no); 287 + const struct pci_epf_bar_submap *submap = epf_bar->submap; 288 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 289 + enum pci_barno bar = epf_bar->barno; 290 + struct device *dev = pci->dev; 291 + u64 pci_addr, parent_bus_addr; 292 + u64 size, base, off = 0; 293 + int free_win, ret; 294 + unsigned int i; 295 + u32 *indexes; 296 + 297 + if (!ep_func || !epf_bar->num_submap || !submap || !epf_bar->size) 298 + return -EINVAL; 299 + 300 + ret = dw_pcie_ep_validate_submap(ep, submap, epf_bar->num_submap, 301 + epf_bar->size); 302 + if (ret) 303 + return ret; 304 + 305 + base = dw_pcie_ep_read_bar_assigned(ep, func_no, bar, epf_bar->flags); 306 + if (!base) { 307 + dev_err(dev, 308 + "BAR%u not assigned, cannot set up sub-range mappings\n", 309 + bar); 310 + return -EINVAL; 311 + } 312 + 313 + indexes = devm_kcalloc(dev, epf_bar->num_submap, sizeof(*indexes), 314 + GFP_KERNEL); 315 + if (!indexes) 316 + return -ENOMEM; 317 + 318 + ep_func->ib_atu_indexes[bar] = indexes; 319 + ep_func->num_ib_atu_indexes[bar] = 0; 320 + 321 + for (i = 0; i < epf_bar->num_submap; i++) { 322 + size = submap[i].size; 323 + parent_bus_addr = submap[i].phys_addr; 324 + 325 + if (off > (~0ULL) - base) { 326 + ret = -EINVAL; 327 + goto err; 328 
+ } 329 + 330 + pci_addr = base + off; 331 + off += size; 332 + 333 + free_win = find_first_zero_bit(ep->ib_window_map, 334 + pci->num_ib_windows); 335 + if (free_win >= pci->num_ib_windows) { 336 + ret = -ENOSPC; 337 + goto err; 338 + } 339 + 340 + ret = dw_pcie_prog_inbound_atu(pci, free_win, type, 341 + parent_bus_addr, pci_addr, size); 342 + if (ret) 343 + goto err; 344 + 345 + set_bit(free_win, ep->ib_window_map); 346 + indexes[i] = free_win; 347 + ep_func->num_ib_atu_indexes[bar] = i + 1; 348 + } 349 + return 0; 350 + err: 351 + dw_pcie_ep_clear_ib_maps(ep, func_no, bar); 352 + return ret; 148 353 } 149 354 150 355 static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep, ··· 355 204 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 356 205 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 357 206 enum pci_barno bar = epf_bar->barno; 358 - u32 atu_index = ep->bar_to_atu[bar] - 1; 207 + struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no); 359 208 360 - if (!ep->bar_to_atu[bar]) 209 + if (!ep_func || !ep_func->epf_bar[bar]) 361 210 return; 362 211 363 212 __dw_pcie_ep_reset_bar(pci, func_no, bar, epf_bar->flags); 364 213 365 - dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, atu_index); 366 - clear_bit(atu_index, ep->ib_window_map); 367 - ep->epf_bar[bar] = NULL; 368 - ep->bar_to_atu[bar] = 0; 214 + dw_pcie_ep_clear_ib_maps(ep, func_no, bar); 215 + 216 + ep_func->epf_bar[bar] = NULL; 369 217 } 370 218 371 - static unsigned int dw_pcie_ep_get_rebar_offset(struct dw_pcie *pci, 219 + static unsigned int dw_pcie_ep_get_rebar_offset(struct dw_pcie_ep *ep, u8 func_no, 372 220 enum pci_barno bar) 373 221 { 374 222 u32 reg, bar_index; 375 223 unsigned int offset, nbars; 376 224 int i; 377 225 378 - offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR); 226 + offset = dw_pcie_ep_find_ext_capability(ep, func_no, PCI_EXT_CAP_ID_REBAR); 379 227 if (!offset) 380 228 return offset; 381 229 382 - reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 
230 + reg = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL); 383 231 nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, reg); 384 232 385 233 for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL) { 386 - reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 234 + reg = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL); 387 235 bar_index = FIELD_GET(PCI_REBAR_CTRL_BAR_IDX, reg); 388 236 if (bar_index == bar) 389 237 return offset; ··· 403 253 u32 rebar_cap, rebar_ctrl; 404 254 int ret; 405 255 406 - rebar_offset = dw_pcie_ep_get_rebar_offset(pci, bar); 256 + rebar_offset = dw_pcie_ep_get_rebar_offset(ep, func_no, bar); 407 257 if (!rebar_offset) 408 258 return -EINVAL; 409 259 ··· 433 283 * 1 MB to 128 TB. Bits 31:16 in PCI_REBAR_CTRL define "supported sizes" 434 284 * bits for sizes 256 TB to 8 EB. Disallow sizes 256 TB to 8 EB. 435 285 */ 436 - rebar_ctrl = dw_pcie_readl_dbi(pci, rebar_offset + PCI_REBAR_CTRL); 286 + rebar_ctrl = dw_pcie_ep_readl_dbi(ep, func_no, rebar_offset + PCI_REBAR_CTRL); 437 287 rebar_ctrl &= ~GENMASK(31, 16); 438 - dw_pcie_writel_dbi(pci, rebar_offset + PCI_REBAR_CTRL, rebar_ctrl); 288 + dw_pcie_ep_writel_dbi(ep, func_no, rebar_offset + PCI_REBAR_CTRL, rebar_ctrl); 439 289 440 290 /* 441 291 * The "selected size" (bits 13:8) in PCI_REBAR_CTRL are automatically 442 292 * updated when writing PCI_REBAR_CAP, see "Figure 3-26 Resizable BAR 443 293 * Example for 32-bit Memory BAR0" in DWC EP databook 5.96a. 
444 294 */ 445 - dw_pcie_writel_dbi(pci, rebar_offset + PCI_REBAR_CAP, rebar_cap); 295 + dw_pcie_ep_writel_dbi(ep, func_no, rebar_offset + PCI_REBAR_CAP, rebar_cap); 446 296 447 297 dw_pcie_dbi_ro_wr_dis(pci); 448 298 ··· 491 341 { 492 342 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 493 343 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 344 + struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no); 494 345 enum pci_barno bar = epf_bar->barno; 495 346 size_t size = epf_bar->size; 496 347 enum pci_epc_bar_type bar_type; 497 348 int flags = epf_bar->flags; 498 349 int ret, type; 350 + 351 + if (!ep_func) 352 + return -EINVAL; 499 353 500 354 /* 501 355 * DWC does not allow BAR pairs to overlap, e.g. you cannot combine BARs ··· 514 360 * calling clear_bar() would clear the BAR's PCI address assigned by the 515 361 * host). 516 362 */ 517 - if (ep->epf_bar[bar]) { 363 + if (ep_func->epf_bar[bar]) { 518 364 /* 519 365 * We can only dynamically change a BAR if the new BAR size and 520 366 * BAR flags do not differ from the existing configuration. 521 367 */ 522 - if (ep->epf_bar[bar]->barno != bar || 523 - ep->epf_bar[bar]->size != size || 524 - ep->epf_bar[bar]->flags != flags) 368 + if (ep_func->epf_bar[bar]->barno != bar || 369 + ep_func->epf_bar[bar]->size != size || 370 + ep_func->epf_bar[bar]->flags != flags) 525 371 return -EINVAL; 372 + 373 + /* 374 + * When dynamically changing a BAR, tear down any existing 375 + * mappings before re-programming. 376 + */ 377 + if (ep_func->epf_bar[bar]->num_submap || epf_bar->num_submap) 378 + dw_pcie_ep_clear_ib_maps(ep, func_no, bar); 526 379 527 380 /* 528 381 * When dynamically changing a BAR, skip writing the BAR reg, as 529 382 * that would clear the BAR's PCI address assigned by the host. 530 383 */ 531 384 goto config_atu; 385 + } else { 386 + /* 387 + * Subrange mapping is an update-only operation. 
The BAR 388 + * must have been configured once without submaps so that 389 + * subsequent set_bar() calls can update inbound mappings 390 + * without touching the BAR register (and clobbering the 391 + * host-assigned address). 392 + */ 393 + if (epf_bar->num_submap) 394 + return -EINVAL; 532 395 } 533 396 534 397 bar_type = dw_pcie_ep_get_bar_type(ep, bar); ··· 579 408 else 580 409 type = PCIE_ATU_TYPE_IO; 581 410 582 - ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar, 583 - size); 411 + if (epf_bar->num_submap) 412 + ret = dw_pcie_ep_ib_atu_addr(ep, func_no, type, epf_bar); 413 + else 414 + ret = dw_pcie_ep_ib_atu_bar(ep, func_no, type, 415 + epf_bar->phys_addr, bar, size); 416 + 584 417 if (ret) 585 418 return ret; 586 419 587 - ep->epf_bar[bar] = epf_bar; 420 + ep_func->epf_bar[bar] = epf_bar; 588 421 589 422 return 0; 590 423 } ··· 776 601 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 777 602 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 778 603 604 + /* 605 + * Tear down the dedicated outbound window used for MSI 606 + * generation. This avoids leaking an iATU window across 607 + * endpoint stop/start cycles. 608 + */ 609 + if (ep->msi_iatu_mapped) { 610 + dw_pcie_ep_unmap_addr(epc, 0, 0, ep->msi_mem_phys); 611 + ep->msi_iatu_mapped = false; 612 + } 613 + 779 614 dw_pcie_stop_link(pci); 780 615 } 781 616 ··· 887 702 msg_addr = ((u64)msg_addr_upper) << 32 | msg_addr_lower; 888 703 889 704 msg_addr = dw_pcie_ep_align_addr(epc, msg_addr, &map_size, &offset); 890 - ret = dw_pcie_ep_map_addr(epc, func_no, 0, ep->msi_mem_phys, msg_addr, 891 - map_size); 892 - if (ret) 893 - return ret; 705 + 706 + /* 707 + * Program the outbound iATU once and keep it enabled. 708 + * 709 + * The spec warns that updating iATU registers while there are 710 + * operations in flight on the AXI bridge interface is not 711 + * supported, so we avoid reprogramming the region on every MSI, 712 + * specifically unmapping immediately after writel(). 
713 + */ 714 + if (!ep->msi_iatu_mapped) { 715 + ret = dw_pcie_ep_map_addr(epc, func_no, 0, 716 + ep->msi_mem_phys, msg_addr, 717 + map_size); 718 + if (ret) 719 + return ret; 720 + 721 + ep->msi_iatu_mapped = true; 722 + ep->msi_msg_addr = msg_addr; 723 + ep->msi_map_size = map_size; 724 + } else if (WARN_ON_ONCE(ep->msi_msg_addr != msg_addr || 725 + ep->msi_map_size != map_size)) { 726 + /* 727 + * The host changed the MSI target address or the required 728 + * mapping size changed. Reprogramming the iATU at runtime is 729 + * unsafe on this controller, so bail out instead of trying to 730 + * update the existing region. 731 + */ 732 + return -EINVAL; 733 + } 894 734 895 735 writel(msg_data | (interrupt_num - 1), ep->msi_mem + offset); 896 - 897 - dw_pcie_ep_unmap_addr(epc, func_no, 0, ep->msi_mem_phys); 898 736 899 737 return 0; 900 738 } ··· 983 775 bir = FIELD_GET(PCI_MSIX_TABLE_BIR, tbl_offset); 984 776 tbl_offset &= PCI_MSIX_TABLE_OFFSET; 985 777 986 - msix_tbl = ep->epf_bar[bir]->addr + tbl_offset; 778 + msix_tbl = ep_func->epf_bar[bir]->addr + tbl_offset; 987 779 msg_addr = msix_tbl[(interrupt_num - 1)].msg_addr; 988 780 msg_data = msix_tbl[(interrupt_num - 1)].msg_data; 989 781 vec_ctrl = msix_tbl[(interrupt_num - 1)].vector_ctrl; ··· 1044 836 } 1045 837 EXPORT_SYMBOL_GPL(dw_pcie_ep_deinit); 1046 838 1047 - static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci) 839 + static void dw_pcie_ep_init_rebar_registers(struct dw_pcie_ep *ep, u8 func_no) 1048 840 { 1049 - struct dw_pcie_ep *ep = &pci->ep; 1050 - unsigned int offset; 1051 - unsigned int nbars; 841 + struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no); 842 + unsigned int offset, nbars; 1052 843 enum pci_barno bar; 1053 844 u32 reg, i, val; 1054 845 1055 - offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR); 846 + if (!ep_func) 847 + return; 1056 848 1057 - dw_pcie_dbi_ro_wr_en(pci); 849 + offset = dw_pcie_ep_find_ext_capability(ep, func_no, 
PCI_EXT_CAP_ID_REBAR); 1058 850 1059 851 if (offset) { 1060 - reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 852 + reg = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL); 1061 853 nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, reg); 1062 854 1063 855 /* ··· 1078 870 * the controller when RESBAR_CAP_REG is written, which 1079 871 * is why RESBAR_CAP_REG is written here. 1080 872 */ 1081 - val = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 873 + val = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL); 1082 874 bar = FIELD_GET(PCI_REBAR_CTRL_BAR_IDX, val); 1083 - if (ep->epf_bar[bar]) 1084 - pci_epc_bar_size_to_rebar_cap(ep->epf_bar[bar]->size, &val); 875 + if (ep_func->epf_bar[bar]) 876 + pci_epc_bar_size_to_rebar_cap(ep_func->epf_bar[bar]->size, &val); 1085 877 else 1086 878 val = BIT(4); 1087 879 1088 - dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, val); 880 + dw_pcie_ep_writel_dbi(ep, func_no, offset + PCI_REBAR_CAP, val); 1089 881 } 1090 882 } 883 + } 884 + 885 + static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci) 886 + { 887 + struct dw_pcie_ep *ep = &pci->ep; 888 + u8 funcs = ep->epc->max_functions; 889 + u8 func_no; 890 + 891 + dw_pcie_dbi_ro_wr_en(pci); 892 + 893 + for (func_no = 0; func_no < funcs; func_no++) 894 + dw_pcie_ep_init_rebar_registers(ep, func_no); 1091 895 1092 896 dw_pcie_setup(pci); 1093 897 dw_pcie_dbi_ro_wr_dis(pci); ··· 1187 967 if (ep->ops->init) 1188 968 ep->ops->init(ep); 1189 969 970 + /* 971 + * PCIe r6.0, section 7.9.15 states that for endpoints that support 972 + * PTM, this capability structure is required in exactly one 973 + * function, which controls the PTM behavior of all PTM capable 974 + * functions. This indicates the PTM capability structure 975 + * represents controller-level registers rather than per-function 976 + * registers. 
977 + * 978 + * Therefore, PTM capability registers are configured using the 979 + * standard DBI accessors, instead of func_no indexed per-function 980 + * accessors. 981 + */ 1190 982 ptm_cap_base = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_PTM); 1191 983 1192 984 /* ··· 1319 1087 struct device *dev = pci->dev; 1320 1088 1321 1089 INIT_LIST_HEAD(&ep->func_list); 1090 + ep->msi_iatu_mapped = false; 1091 + ep->msi_msg_addr = 0; 1092 + ep->msi_map_size = 0; 1322 1093 1323 1094 epc = devm_pci_epc_create(dev, &epc_ops); 1324 1095 if (IS_ERR(epc)) {
+159 -70
drivers/pci/controller/dwc/pcie-designware-host.c
··· 244 244 u64 msi_target = (u64)pp->msi_data; 245 245 u32 ctrl, num_ctrls; 246 246 247 - if (!pci_msi_enabled() || !pp->has_msi_ctrl) 247 + if (!pci_msi_enabled() || !pp->use_imsi_rx) 248 248 return; 249 249 250 250 num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; ··· 356 356 * order not to miss MSI TLPs from those devices the MSI target 357 357 * address has to be within the lowest 4GB. 358 358 * 359 - * Note until there is a better alternative found the reservation is 360 - * done by allocating from the artificially limited DMA-coherent 361 - * memory. 359 + * Per DWC databook r6.21a, section 3.10.2.3, the incoming MWr TLP 360 + * targeting the MSI_CTRL_ADDR is terminated by the iMSI-RX and never 361 + * appears on the AXI bus. So the MSI_CTRL_ADDR doesn't need to be 362 + * mapped and can be any memory that isn't allocated for BAR 363 + * memory. Since most platforms provide a 32-bit address for the 364 + * 'config' region, try cfg0_base as the first option for the MSI target 365 + * address if it's a 32-bit address. Otherwise, try 32-bit and 64-bit 366 + * coherent memory allocation one by one. 362 367 */ 368 + if (!(pp->cfg0_base & GENMASK_ULL(63, 32))) { 369 + pp->msi_data = pp->cfg0_base; 370 + return 0; 371 + } 372 + 363 373 ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32)); 364 374 if (!ret) 365 375 msi_vaddr = dmam_alloc_coherent(dev, sizeof(u64), &pp->msi_data, ··· 592 582 } 593 583 594 584 if (pci_msi_enabled()) { 595 - pp->has_msi_ctrl = !(pp->ops->msi_init || 585 + pp->use_imsi_rx = !(pp->ops->msi_init || 596 586 of_property_present(np, "msi-parent") || 597 587 of_property_present(np, "msi-map")); 598 588 599 589 /* 600 - * For the has_msi_ctrl case the default assignment is handled 590 + * For the use_imsi_rx case the default assignment is handled 601 591 * in dw_pcie_msi_host_init().
602 592 */ 603 - if (!pp->has_msi_ctrl && !pp->num_vectors) { 593 + if (!pp->use_imsi_rx && !pp->num_vectors) { 604 594 pp->num_vectors = MSI_DEF_NUM_VECTORS; 605 595 } else if (pp->num_vectors > MAX_MSI_IRQS) { 606 596 dev_err(dev, "Invalid number of vectors\n"); ··· 612 602 ret = pp->ops->msi_init(pp); 613 603 if (ret < 0) 614 604 goto err_deinit_host; 615 - } else if (pp->has_msi_ctrl) { 605 + } else if (pp->use_imsi_rx) { 616 606 ret = dw_pcie_msi_host_init(pp); 617 607 if (ret < 0) 618 608 goto err_deinit_host; ··· 629 619 ret = of_pci_get_equalization_presets(dev, &pp->presets, pci->num_lanes); 630 620 if (ret) 631 621 goto err_free_msi; 632 - 633 - if (pp->ecam_enabled) { 634 - ret = dw_pcie_config_ecam_iatu(pp); 635 - if (ret) { 636 - dev_err(dev, "Failed to configure iATU in ECAM mode\n"); 637 - goto err_free_msi; 638 - } 639 - } 640 622 641 623 /* 642 624 * Allocate the resource for MSG TLP before programming the iATU ··· 657 655 } 658 656 659 657 /* 660 - * Note: Skip the link up delay only when a Link Up IRQ is present. 661 - * If there is no Link Up IRQ, we should not bypass the delay 662 - * because that would require users to manually rescan for devices. 658 + * Only fail on timeout error. Other errors indicate the device may 659 + * become available later, so continue without failing. 
663 660 */ 664 - if (!pp->use_linkup_irq) 665 - /* Ignore errors, the link may come up later */ 666 - dw_pcie_wait_for_link(pci); 661 + ret = dw_pcie_wait_for_link(pci); 662 + if (ret == -ETIMEDOUT) 663 + goto err_stop_link; 667 664 668 665 ret = pci_host_probe(bridge); 669 666 if (ret) ··· 682 681 dw_pcie_edma_remove(pci); 683 682 684 683 err_free_msi: 685 - if (pp->has_msi_ctrl) 684 + if (pp->use_imsi_rx) 686 685 dw_pcie_free_msi(pp); 687 686 688 687 err_deinit_host: ··· 710 709 711 710 dw_pcie_edma_remove(pci); 712 711 713 - if (pp->has_msi_ctrl) 712 + if (pp->use_imsi_rx) 714 713 dw_pcie_free_msi(pp); 715 714 716 715 if (pp->ops->deinit) ··· 847 846 return pci->dbi_base + where; 848 847 } 849 848 850 - static int dw_pcie_op_assert_perst(struct pci_bus *bus, bool assert) 851 - { 852 - struct dw_pcie_rp *pp = bus->sysdata; 853 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 854 - 855 - return dw_pcie_assert_perst(pci, assert); 856 - } 857 - 858 849 static struct pci_ops dw_pcie_ops = { 859 850 .map_bus = dw_pcie_own_conf_map_bus, 860 851 .read = pci_generic_config_read, 861 852 .write = pci_generic_config_write, 862 - .assert_perst = dw_pcie_op_assert_perst, 863 853 }; 864 854 865 855 static struct pci_ops dw_pcie_ecam_ops = { ··· 864 872 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 865 873 struct dw_pcie_ob_atu_cfg atu = { 0 }; 866 874 struct resource_entry *entry; 875 + int ob_iatu_index; 876 + int ib_iatu_index; 867 877 int i, ret; 868 878 869 - /* Note the very first outbound ATU is used for CFG IOs */ 870 879 if (!pci->num_ob_windows) { 871 880 dev_err(pci->dev, "No outbound iATU found\n"); 872 881 return -EINVAL; ··· 883 890 for (i = 0; i < pci->num_ib_windows; i++) 884 891 dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, i); 885 892 886 - i = 0; 893 + /* 894 + * NOTE: For outbound address translation, outbound iATU at index 0 is 895 + * reserved for CFG IOs (dw_pcie_other_conf_map_bus()), thus start at 896 + * index 1. 
897 + * 898 + * If using ECAM, outbound iATU at index 0 and index 1 is reserved for 899 + * CFG IOs. 900 + */ 901 + if (pp->ecam_enabled) { 902 + ob_iatu_index = 2; 903 + ret = dw_pcie_config_ecam_iatu(pp); 904 + if (ret) { 905 + dev_err(pci->dev, "Failed to configure iATU in ECAM mode\n"); 906 + return ret; 907 + } 908 + } else { 909 + ob_iatu_index = 1; 910 + } 911 + 887 912 resource_list_for_each_entry(entry, &pp->bridge->windows) { 913 + resource_size_t res_size; 914 + 888 915 if (resource_type(entry->res) != IORESOURCE_MEM) 889 916 continue; 890 917 891 - if (pci->num_ob_windows <= ++i) 892 - break; 893 - 894 - atu.index = i; 895 918 atu.type = PCIE_ATU_TYPE_MEM; 896 919 atu.parent_bus_addr = entry->res->start - pci->parent_bus_offset; 897 920 atu.pci_addr = entry->res->start - entry->offset; 898 921 899 922 /* Adjust iATU size if MSG TLP region was allocated before */ 900 923 if (pp->msg_res && pp->msg_res->parent == entry->res) 901 - atu.size = resource_size(entry->res) - 924 + res_size = resource_size(entry->res) - 902 925 resource_size(pp->msg_res); 903 926 else 904 - atu.size = resource_size(entry->res); 927 + res_size = resource_size(entry->res); 905 928 906 - ret = dw_pcie_prog_outbound_atu(pci, &atu); 907 - if (ret) { 908 - dev_err(pci->dev, "Failed to set MEM range %pr\n", 909 - entry->res); 910 - return ret; 929 + while (res_size > 0) { 930 + /* 931 + * Return failure if we run out of windows in the 932 + * middle. Otherwise, we would end up only partially 933 + * mapping a single resource. 
934 + */ 935 + if (ob_iatu_index >= pci->num_ob_windows) { 936 + dev_err(pci->dev, "Cannot add outbound window for region: %pr\n", 937 + entry->res); 938 + return -ENOMEM; 939 + } 940 + 941 + atu.index = ob_iatu_index; 942 + atu.size = MIN(pci->region_limit + 1, res_size); 943 + 944 + ret = dw_pcie_prog_outbound_atu(pci, &atu); 945 + if (ret) { 946 + dev_err(pci->dev, "Failed to set MEM range %pr\n", 947 + entry->res); 948 + return ret; 949 + } 950 + 951 + ob_iatu_index++; 952 + atu.parent_bus_addr += atu.size; 953 + atu.pci_addr += atu.size; 954 + res_size -= atu.size; 911 955 } 912 956 } 913 957 914 958 if (pp->io_size) { 915 - if (pci->num_ob_windows > ++i) { 916 - atu.index = i; 959 + if (ob_iatu_index < pci->num_ob_windows) { 960 + atu.index = ob_iatu_index; 917 961 atu.type = PCIE_ATU_TYPE_IO; 918 962 atu.parent_bus_addr = pp->io_base - pci->parent_bus_offset; 919 963 atu.pci_addr = pp->io_bus_addr; ··· 962 932 entry->res); 963 933 return ret; 964 934 } 935 + ob_iatu_index++; 965 936 } else { 937 + /* 938 + * If there are not enough outbound windows to give I/O 939 + * space its own iATU, the outbound iATU at index 0 will 940 + * be shared between I/O space and CFG IOs, by 941 + * temporarily reconfiguring the iATU to CFG space, in 942 + * order to do a CFG IO, and then immediately restoring 943 + * it to I/O space. This is only implemented when using 944 + * dw_pcie_other_conf_map_bus(), which is not the case 945 + * when using ECAM. 
946 + */ 947 + if (pp->ecam_enabled) { 948 + dev_err(pci->dev, "Cannot add outbound window for I/O\n"); 949 + return -ENOMEM; 950 + } 966 951 pp->cfg0_io_shared = true; 967 952 } 968 953 } 969 954 970 - if (pci->num_ob_windows <= i) 971 - dev_warn(pci->dev, "Ranges exceed outbound iATU size (%d)\n", 972 - pci->num_ob_windows); 955 + if (pp->use_atu_msg) { 956 + if (ob_iatu_index >= pci->num_ob_windows) { 957 + dev_err(pci->dev, "Cannot add outbound window for MSG TLP\n"); 958 + return -ENOMEM; 959 + } 960 + pp->msg_atu_index = ob_iatu_index++; 961 + } 973 962 974 - pp->msg_atu_index = i; 975 - 976 - i = 0; 963 + ib_iatu_index = 0; 977 964 resource_list_for_each_entry(entry, &pp->bridge->dma_ranges) { 965 + resource_size_t res_start, res_size, window_size; 966 + 978 967 if (resource_type(entry->res) != IORESOURCE_MEM) 979 968 continue; 980 969 981 - if (pci->num_ib_windows <= i) 982 - break; 970 + res_size = resource_size(entry->res); 971 + res_start = entry->res->start; 972 + while (res_size > 0) { 973 + /* 974 + * Return failure if we run out of windows in the 975 + * middle. Otherwise, we would end up only partially 976 + * mapping a single resource. 
977 + */ 978 + if (ib_iatu_index >= pci->num_ib_windows) { 979 + dev_err(pci->dev, "Cannot add inbound window for region: %pr\n", 980 + entry->res); 981 + return -ENOMEM; 982 + } 983 983 984 - ret = dw_pcie_prog_inbound_atu(pci, i++, PCIE_ATU_TYPE_MEM, 985 - entry->res->start, 986 - entry->res->start - entry->offset, 987 - resource_size(entry->res)); 988 - if (ret) { 989 - dev_err(pci->dev, "Failed to set DMA range %pr\n", 990 - entry->res); 991 - return ret; 984 + window_size = MIN(pci->region_limit + 1, res_size); 985 + ret = dw_pcie_prog_inbound_atu(pci, ib_iatu_index, 986 + PCIE_ATU_TYPE_MEM, res_start, 987 + res_start - entry->offset, window_size); 988 + if (ret) { 989 + dev_err(pci->dev, "Failed to set DMA range %pr\n", 990 + entry->res); 991 + return ret; 992 + } 993 + 994 + ib_iatu_index++; 995 + res_start += window_size; 996 + res_size -= window_size; 992 997 } 993 998 } 994 - 995 - if (pci->num_ib_windows <= i) 996 - dev_warn(pci->dev, "Dma-ranges exceed inbound iATU size (%u)\n", 997 - pci->num_ib_windows); 998 999 999 1000 return 0; 1000 1001 } ··· 1148 1087 * the platform uses its own address translation component rather than 1149 1088 * ATU, so we should not program the ATU here. 1150 1089 */ 1151 - if (pp->bridge->child_ops == &dw_child_pcie_ops) { 1090 + if (pp->bridge->child_ops == &dw_child_pcie_ops || pp->ecam_enabled) { 1152 1091 ret = dw_pcie_iatu_setup(pp); 1153 1092 if (ret) 1154 1093 return ret; ··· 1164 1103 dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val); 1165 1104 1166 1105 dw_pcie_dbi_ro_wr_dis(pci); 1106 + 1107 + /* 1108 + * The iMSI-RX module does not support receiving MSI or MSI-X generated 1109 + * by the Root Port. If iMSI-RX is used as the MSI controller, remove 1110 + * the MSI and MSI-X capabilities of the Root Port to allow the drivers 1111 + * to fall back to INTx instead. 
1112 + */ 1113 + if (pp->use_imsi_rx) { 1114 + dw_pcie_remove_capability(pci, PCI_CAP_ID_MSI); 1115 + dw_pcie_remove_capability(pci, PCI_CAP_ID_MSIX); 1116 + } 1167 1117 1168 1118 return 0; 1169 1119 } ··· 1219 1147 int dw_pcie_suspend_noirq(struct dw_pcie *pci) 1220 1148 { 1221 1149 u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 1150 + int ret = 0; 1222 1151 u32 val; 1223 - int ret; 1152 + 1153 + if (!dw_pcie_link_up(pci)) 1154 + goto stop_link; 1224 1155 1225 1156 /* 1226 1157 * If L1SS is supported, then do not put the link into L2 as some ··· 1238 1163 ret = dw_pcie_pme_turn_off(pci); 1239 1164 if (ret) 1240 1165 return ret; 1166 + } 1167 + 1168 + /* 1169 + * Some SoCs do not support reading the LTSSM register after 1170 + * PME_Turn_Off broadcast. For those SoCs, skip waiting for L2/L3 Ready 1171 + * state and wait 10ms as recommended in PCIe spec r6.0, sec 5.3.3.2.1. 1172 + */ 1173 + if (pci->pp.skip_l23_ready) { 1174 + mdelay(PCIE_PME_TO_L2_TIMEOUT_US/1000); 1175 + goto stop_link; 1241 1176 } 1242 1177 1243 1178 ret = read_poll_timeout(dw_pcie_get_ltssm, val, ··· 1268 1183 */ 1269 1184 udelay(1); 1270 1185 1186 + stop_link: 1271 1187 dw_pcie_stop_link(pci); 1272 1188 if (pci->pp.ops->deinit) 1273 1189 pci->pp.ops->deinit(&pci->pp); ··· 1305 1219 ret = dw_pcie_wait_for_link(pci); 1306 1220 if (ret) 1307 1221 return ret; 1222 + 1223 + if (pci->pp.ops->post_init) 1224 + pci->pp.ops->post_init(&pci->pp); 1308 1225 1309 1226 return ret; 1310 1227 }
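The reworked dw_pcie_iatu_setup() above no longer truncates a `ranges`/`dma-ranges` entry that exceeds one iATU window: it carves the resource into consecutive windows of at most region_limit + 1 bytes each, and fails outright if the windows run out mid-resource. A sketch of just that splitting arithmetic (hypothetical helper name, sizes instead of register programming):

```c
#include <assert.h>
#include <stdint.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Cover a resource of res_size bytes with windows of at most
 * max_window bytes each. Returns the number of windows used, or -1 if
 * free_windows is insufficient — failing rather than leaving the
 * resource partially mapped, as the loops above do. */
static int split_windows(uint64_t res_size, uint64_t max_window,
			 int free_windows, uint64_t *sizes)
{
	int n = 0;

	while (res_size > 0) {
		if (n >= free_windows)
			return -1;
		sizes[n] = MIN(max_window, res_size);
		res_size -= sizes[n];
		n++;
	}
	return n;
}
```

For example, a 10 GiB range with 4 GiB windows consumes three windows (4 + 4 + 2 GiB); with only two windows free, the whole setup is rejected instead of mapping 8 of the 10 GiB.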
+1
drivers/pci/controller/dwc/pcie-designware-plat.c
··· 61 61 } 62 62 63 63 static const struct pci_epc_features dw_plat_pcie_epc_features = { 64 + DWC_EPC_COMMON_FEATURES, 64 65 .msi_capable = true, 65 66 .msix_capable = true, 66 67 };
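The one-line plat.c change drops DWC_EPC_COMMON_FEATURES into the designated-initializer list, a macro that expands to initializers shared by the DWC glue drivers. A hedged sketch of the pattern — the struct fields and macro body here are illustrative stand-ins, not the actual DWC definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for struct pci_epc_features with a few demo fields. */
struct demo_epc_features {
	bool linkup_notifier;
	bool msi_capable;
	bool msix_capable;
	unsigned int align;
};

/* Hypothetical common-feature macro: expands to designated
 * initializers, so each driver lists only what differs. */
#define DEMO_EPC_COMMON_FEATURES \
	.linkup_notifier = false, \
	.align = 4096

static const struct demo_epc_features demo_features = {
	DEMO_EPC_COMMON_FEATURES,
	.msi_capable = true,
	.msix_capable = true,
};
```

Because C designated initializers may repeat (the last one wins) and unnamed fields default to zero, a driver can override any common value simply by listing it after the macro.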
+147 -5
drivers/pci/controller/dwc/pcie-designware.c
··· 226 226 u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap) 227 227 { 228 228 return PCI_FIND_NEXT_CAP(dw_pcie_read_cfg, PCI_CAPABILITY_LIST, cap, 229 - pci); 229 + NULL, pci); 230 230 } 231 231 EXPORT_SYMBOL_GPL(dw_pcie_find_capability); 232 232 233 233 u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap) 234 234 { 235 - return PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, pci); 235 + return PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, NULL, pci); 236 236 } 237 237 EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability); 238 + 239 + void dw_pcie_remove_capability(struct dw_pcie *pci, u8 cap) 240 + { 241 + u8 cap_pos, pre_pos, next_pos; 242 + u16 reg; 243 + 244 + cap_pos = PCI_FIND_NEXT_CAP(dw_pcie_read_cfg, PCI_CAPABILITY_LIST, cap, 245 + &pre_pos, pci); 246 + if (!cap_pos) 247 + return; 248 + 249 + reg = dw_pcie_readw_dbi(pci, cap_pos); 250 + next_pos = (reg & 0xff00) >> 8; 251 + 252 + dw_pcie_dbi_ro_wr_en(pci); 253 + if (pre_pos == PCI_CAPABILITY_LIST) 254 + dw_pcie_writeb_dbi(pci, PCI_CAPABILITY_LIST, next_pos); 255 + else 256 + dw_pcie_writeb_dbi(pci, pre_pos + 1, next_pos); 257 + dw_pcie_dbi_ro_wr_dis(pci); 258 + } 259 + EXPORT_SYMBOL_GPL(dw_pcie_remove_capability); 260 + 261 + void dw_pcie_remove_ext_capability(struct dw_pcie *pci, u8 cap) 262 + { 263 + int cap_pos, next_pos, pre_pos; 264 + u32 pre_header, header; 265 + 266 + cap_pos = PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, &pre_pos, pci); 267 + if (!cap_pos) 268 + return; 269 + 270 + header = dw_pcie_readl_dbi(pci, cap_pos); 271 + 272 + /* 273 + * If the first cap at offset PCI_CFG_SPACE_SIZE is removed, 274 + * only set its capid to zero as it cannot be skipped. 
275 + */ 276 + if (cap_pos == PCI_CFG_SPACE_SIZE) { 277 + dw_pcie_dbi_ro_wr_en(pci); 278 + dw_pcie_writel_dbi(pci, cap_pos, header & 0xffff0000); 279 + dw_pcie_dbi_ro_wr_dis(pci); 280 + return; 281 + } 282 + 283 + pre_header = dw_pcie_readl_dbi(pci, pre_pos); 284 + next_pos = PCI_EXT_CAP_NEXT(header); 285 + 286 + dw_pcie_dbi_ro_wr_en(pci); 287 + dw_pcie_writel_dbi(pci, pre_pos, 288 + (pre_header & 0xfffff) | (next_pos << 20)); 289 + dw_pcie_dbi_ro_wr_dis(pci); 290 + } 291 + EXPORT_SYMBOL_GPL(dw_pcie_remove_ext_capability); 238 292 239 293 static u16 __dw_pcie_find_vsec_capability(struct dw_pcie *pci, u16 vendor_id, 240 294 u16 vsec_id) ··· 300 246 return 0; 301 247 302 248 while ((vsec = PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, vsec, 303 - PCI_EXT_CAP_ID_VNDR, pci))) { 249 + PCI_EXT_CAP_ID_VNDR, NULL, pci))) { 304 250 header = dw_pcie_readl_dbi(pci, vsec + PCI_VNDR_HEADER); 305 251 if (PCI_VNDR_HEADER_ID(header) == vsec_id) 306 252 return vsec; ··· 532 478 u32 retries, val; 533 479 u64 limit_addr; 534 480 481 + if (atu->index >= pci->num_ob_windows) 482 + return -ENOSPC; 483 + 535 484 limit_addr = parent_bus_addr + atu->size - 1; 536 485 537 486 if ((limit_addr & ~pci->region_limit) != (parent_bus_addr & ~pci->region_limit) || ··· 607 550 { 608 551 u64 limit_addr = pci_addr + size - 1; 609 552 u32 retries, val; 553 + 554 + if (index >= pci->num_ib_windows) 555 + return -ENOSPC; 610 556 611 557 if ((limit_addr & ~pci->region_limit) != (pci_addr & ~pci->region_limit) || 612 558 !IS_ALIGNED(parent_bus_addr, pci->region_align) || ··· 699 639 dw_pcie_writel_atu(pci, dir, index, PCIE_ATU_REGION_CTRL2, 0); 700 640 } 701 641 642 + const char *dw_pcie_ltssm_status_string(enum dw_pcie_ltssm ltssm) 643 + { 644 + const char *str; 645 + 646 + switch (ltssm) { 647 + #define DW_PCIE_LTSSM_NAME(n) case n: str = #n; break 648 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_QUIET); 649 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_ACT); 650 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_ACTIVE); 651 
+ DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_COMPLIANCE); 652 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_CONFIG); 653 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_PRE_DETECT_QUIET); 654 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_WAIT); 655 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_START); 656 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_ACEPT); 657 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_WAI); 658 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_ACEPT); 659 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_COMPLETE); 660 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_IDLE); 661 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_LOCK); 662 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_SPEED); 663 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_RCVRCFG); 664 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_IDLE); 665 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0); 666 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0S); 667 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L123_SEND_EIDLE); 668 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_IDLE); 669 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_IDLE); 670 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_WAKE); 671 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_ENTRY); 672 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_IDLE); 673 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED); 674 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ENTRY); 675 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ACTIVE); 676 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT); 677 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT_TIMEOUT); 678 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET_ENTRY); 679 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET); 680 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ0); 681 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ1); 682 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ2); 683 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ3); 684 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_1); 685 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_2); 686 + default: 687 + str = "DW_PCIE_LTSSM_UNKNOWN"; 688 + break; 689 + } 690 + 691 + return str + strlen("DW_PCIE_LTSSM_"); 692 + } 693 + 694 + /** 695 + * 
dw_pcie_wait_for_link - Wait for the PCIe link to be up 696 + * @pci: DWC instance 697 + * 698 + * Returns: 0 if link is up, -ENODEV if device is not found, -EIO if the device 699 + * is found but not active, and -ETIMEDOUT if the link fails to come up for other 700 + * reasons. 701 + */ 702 702 int dw_pcie_wait_for_link(struct dw_pcie *pci) 703 703 { 704 - u32 offset, val; 704 + u32 offset, val, ltssm; 705 705 int retries; 706 706 707 707 /* Check if the link is up or not */ ··· 773 653 } 774 654 775 655 if (retries >= PCIE_LINK_WAIT_MAX_RETRIES) { 776 - dev_info(pci->dev, "Phy link never came up\n"); 656 + /* 657 + * If the link is in Detect.Quiet or Detect.Active state, it 658 + * indicates that no device is detected. 659 + */ 660 + ltssm = dw_pcie_get_ltssm(pci); 661 + if (ltssm == DW_PCIE_LTSSM_DETECT_QUIET || 662 + ltssm == DW_PCIE_LTSSM_DETECT_ACT) { 663 + dev_info(pci->dev, "Device not found\n"); 664 + return -ENODEV; 665 + 666 + /* 667 + * If the link is in POLL.{Active/Compliance} state, then the 668 + * device is found to be connected to the bus, but it is not 669 + * active, i.e., the device firmware might not yet be initialized. 670 + */ 671 + } else if (ltssm == DW_PCIE_LTSSM_POLL_ACTIVE || 672 + ltssm == DW_PCIE_LTSSM_POLL_COMPLIANCE) { 673 + dev_info(pci->dev, "Device found, but not active\n"); 674 + return -EIO; 675 + } 676 + 677 + dev_err(pci->dev, "Link failed to come up. LTSSM: %s\n", 678 + dw_pcie_ltssm_status_string(ltssm)); 777 679 return -ETIMEDOUT; 778 680 } 779 681
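The stringify-and-strip trick in the new dw_pcie_ltssm_status_string() is worth seeing in isolation: every case stringifies its enumerator with `#n`, and the common prefix is skipped once on return. A minimal standalone sketch (the `DEMO_*` names are illustrative, not the kernel's enum):

```c
#include <assert.h>
#include <string.h>

enum demo_ltssm {
	DEMO_LTSSM_DETECT_QUIET = 0x00,
	DEMO_LTSSM_L0 = 0x11,
};

/* Map an enum value to its name with the common prefix stripped off. */
static const char *demo_ltssm_string(enum demo_ltssm s)
{
	const char *str;

	switch (s) {
#define DEMO_LTSSM_NAME(n) case n: str = #n; break
	DEMO_LTSSM_NAME(DEMO_LTSSM_DETECT_QUIET);
	DEMO_LTSSM_NAME(DEMO_LTSSM_L0);
#undef DEMO_LTSSM_NAME
	default:
		str = "DEMO_LTSSM_UNKNOWN";
		break;
	}

	/* All strings, including the default, share the same prefix. */
	return str + strlen("DEMO_LTSSM_");
}
```

Because the default string also carries the prefix, the pointer arithmetic on the last line is safe for unknown values as well.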
+25 -20
drivers/pci/controller/dwc/pcie-designware.h
··· 305 305 /* Default eDMA LLP memory size */ 306 306 #define DMA_LLP_MEM_SIZE PAGE_SIZE 307 307 308 + /* Common struct pci_epc_feature bits among DWC EP glue drivers */ 309 + #define DWC_EPC_COMMON_FEATURES .dynamic_inbound_mapping = true, \ 310 + .subrange_mapping = true 311 + 308 312 struct dw_pcie; 309 313 struct dw_pcie_rp; 310 314 struct dw_pcie_ep; ··· 392 388 DW_PCIE_LTSSM_RCVRY_EQ2 = 0x22, 393 389 DW_PCIE_LTSSM_RCVRY_EQ3 = 0x23, 394 390 391 + /* Vendor glue drivers provide pseudo L1 substates from get_ltssm() */ 392 + DW_PCIE_LTSSM_L1_1 = 0x141, 393 + DW_PCIE_LTSSM_L1_2 = 0x142, 394 + 395 395 DW_PCIE_LTSSM_UNKNOWN = 0xFFFFFFFF, 396 396 }; 397 397 ··· 420 412 }; 421 413 422 414 struct dw_pcie_rp { 423 - bool has_msi_ctrl:1; 415 + bool use_imsi_rx:1; 424 416 bool cfg0_io_shared:1; 425 417 u64 cfg0_base; 426 418 void __iomem *va_cfg0_base; ··· 442 434 bool use_atu_msg; 443 435 int msg_atu_index; 444 436 struct resource *msg_res; 445 - bool use_linkup_irq; 446 437 struct pci_eq_presets presets; 447 438 struct pci_config_window *cfg; 448 439 bool ecam_enabled; 449 440 bool native_ecam; 441 + bool skip_l23_ready; 450 442 }; 451 443 452 444 struct dw_pcie_ep_ops { ··· 471 463 u8 func_no; 472 464 u8 msi_cap; /* MSI capability offset */ 473 465 u8 msix_cap; /* MSI-X capability offset */ 466 + u8 bar_to_atu[PCI_STD_NUM_BARS]; 467 + struct pci_epf_bar *epf_bar[PCI_STD_NUM_BARS]; 468 + 469 + /* Only for Address Match Mode inbound iATU */ 470 + u32 *ib_atu_indexes[PCI_STD_NUM_BARS]; 471 + unsigned int num_ib_atu_indexes[PCI_STD_NUM_BARS]; 474 472 }; 475 473 476 474 struct dw_pcie_ep { ··· 486 472 phys_addr_t phys_base; 487 473 size_t addr_size; 488 474 size_t page_size; 489 - u8 bar_to_atu[PCI_STD_NUM_BARS]; 490 475 phys_addr_t *outbound_addr; 491 476 unsigned long *ib_window_map; 492 477 unsigned long *ob_window_map; 493 478 void __iomem *msi_mem; 494 479 phys_addr_t msi_mem_phys; 495 - struct pci_epf_bar *epf_bar[PCI_STD_NUM_BARS]; 480 + 481 + /* MSI outbound iATU 
state */ 482 + bool msi_iatu_mapped; 483 + u64 msi_msg_addr; 484 + size_t msi_map_size; 496 485 }; 497 486 498 487 struct dw_pcie_ops { ··· 510 493 enum dw_pcie_ltssm (*get_ltssm)(struct dw_pcie *pcie); 511 494 int (*start_link)(struct dw_pcie *pcie); 512 495 void (*stop_link)(struct dw_pcie *pcie); 513 - int (*assert_perst)(struct dw_pcie *pcie, bool assert); 514 496 }; 515 497 516 498 struct debugfs_info { ··· 578 562 579 563 u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap); 580 564 u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap); 565 + void dw_pcie_remove_capability(struct dw_pcie *pci, u8 cap); 566 + void dw_pcie_remove_ext_capability(struct dw_pcie *pci, u8 cap); 581 567 u16 dw_pcie_find_rasdes_capability(struct dw_pcie *pci); 582 568 u16 dw_pcie_find_ptm_capability(struct dw_pcie *pci); 583 569 ··· 816 798 pci->ops->stop_link(pci); 817 799 } 818 800 819 - static inline int dw_pcie_assert_perst(struct dw_pcie *pci, bool assert) 820 - { 821 - if (pci->ops && pci->ops->assert_perst) 822 - return pci->ops->assert_perst(pci, assert); 823 - 824 - return 0; 825 - } 826 - 827 801 static inline enum dw_pcie_ltssm dw_pcie_get_ltssm(struct dw_pcie *pci) 828 802 { 829 803 u32 val; ··· 827 817 828 818 return (enum dw_pcie_ltssm)FIELD_GET(PORT_LOGIC_LTSSM_STATE_MASK, val); 829 819 } 820 + 821 + const char *dw_pcie_ltssm_status_string(enum dw_pcie_ltssm ltssm); 830 822 831 823 #ifdef CONFIG_PCIE_DW_HOST 832 824 int dw_pcie_suspend_noirq(struct dw_pcie *pci); ··· 908 896 int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no, 909 897 u16 interrupt_num); 910 898 void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar); 911 - int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, u8 prev_cap, u8 cap); 912 899 struct dw_pcie_ep_func * 913 900 dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no); 914 901 #else ··· 963 952 964 953 static inline void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar) 965 954 { 
966 - } 967 - 968 - static inline int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, 969 - u8 prev_cap, u8 cap) 970 - { 971 - return 0; 972 955 } 973 956 974 957 static inline struct dw_pcie_ep_func *
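The unlink write in dw_pcie_remove_ext_capability() above, `(pre_header & 0xfffff) | (next_pos << 20)`, follows directly from the PCIe Extended Capability header layout: ID in bits 15:0, version in 19:16, next-capability offset in 31:20. A small sketch of the field accessors (illustrative helpers, not the kernel macros):

```c
#include <assert.h>
#include <stdint.h>

/* PCIe Extended Capability header fields (PCIe Base Spec). */
static uint16_t ext_cap_id(uint32_t header)      { return header & 0xffff; }
static uint8_t  ext_cap_version(uint32_t header) { return (header >> 16) & 0xf; }
static uint16_t ext_cap_next(uint32_t header)    { return (header >> 20) & 0xffc; }

/*
 * Unlink a capability from the list: keep the predecessor's ID and
 * version (low 20 bits), splice in the removed entry's next pointer.
 */
static uint32_t ext_cap_unlink(uint32_t pre_header, uint16_t next_pos)
{
	return (pre_header & 0xfffff) | ((uint32_t)next_pos << 20);
}
```

The next-offset mask `0xffc` reflects that capability offsets are DWORD-aligned, matching the kernel's PCI_EXT_CAP_NEXT().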
+38 -63
drivers/pci/controller/dwc/pcie-dw-rockchip.c
··· 68 68 #define PCIE_CLKREQ_NOT_READY FIELD_PREP_WM16(BIT(0), 0) 69 69 #define PCIE_CLKREQ_PULL_DOWN FIELD_PREP_WM16(GENMASK(13, 12), 1) 70 70 71 + /* RASDES TBA information */ 72 + #define PCIE_CLIENT_CDM_RASDES_TBA_INFO_CMN 0x154 73 + #define PCIE_CLIENT_CDM_RASDES_TBA_L1_1 BIT(4) 74 + #define PCIE_CLIENT_CDM_RASDES_TBA_L1_2 BIT(5) 75 + 71 76 /* Hot Reset Control Register */ 72 77 #define PCIE_CLIENT_HOT_RESET_CTRL 0x180 73 78 #define PCIE_LTSSM_APP_DLY2_EN BIT(1) ··· 84 79 #define PCIE_LINKUP 0x3 85 80 #define PCIE_LINKUP_MASK GENMASK(17, 16) 86 81 #define PCIE_LTSSM_STATUS_MASK GENMASK(5, 0) 82 + 83 + #define PCIE_TYPE0_HDR_DBI2_OFFSET 0x100000 87 84 88 85 struct rockchip_pcie { 89 86 struct dw_pcie pci; ··· 188 181 return 0; 189 182 } 190 183 191 - static u32 rockchip_pcie_get_ltssm(struct rockchip_pcie *rockchip) 184 + static u32 rockchip_pcie_get_ltssm_reg(struct rockchip_pcie *rockchip) 192 185 { 193 186 return rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_LTSSM_STATUS); 187 + } 188 + 189 + static enum dw_pcie_ltssm rockchip_pcie_get_ltssm(struct dw_pcie *pci) 190 + { 191 + struct rockchip_pcie *rockchip = to_rockchip_pcie(pci); 192 + u32 val = rockchip_pcie_readl_apb(rockchip, 193 + PCIE_CLIENT_CDM_RASDES_TBA_INFO_CMN); 194 + 195 + if (val & PCIE_CLIENT_CDM_RASDES_TBA_L1_1) 196 + return DW_PCIE_LTSSM_L1_1; 197 + 198 + if (val & PCIE_CLIENT_CDM_RASDES_TBA_L1_2) 199 + return DW_PCIE_LTSSM_L1_2; 200 + 201 + return rockchip_pcie_get_ltssm_reg(rockchip) & PCIE_LTSSM_STATUS_MASK; 194 202 } 195 203 196 204 static void rockchip_pcie_enable_ltssm(struct rockchip_pcie *rockchip) ··· 223 201 static bool rockchip_pcie_link_up(struct dw_pcie *pci) 224 202 { 225 203 struct rockchip_pcie *rockchip = to_rockchip_pcie(pci); 226 - u32 val = rockchip_pcie_get_ltssm(rockchip); 204 + u32 val = rockchip_pcie_get_ltssm_reg(rockchip); 227 205 228 206 return FIELD_GET(PCIE_LINKUP_MASK, val) == PCIE_LINKUP; 229 207 } ··· 314 292 if (irq < 0) 315 293 return irq; 316 294 295 + 
pci->dbi_base2 = pci->dbi_base + PCIE_TYPE0_HDR_DBI2_OFFSET; 296 + 317 297 ret = rockchip_pcie_init_irq_domain(rockchip); 318 298 if (ret < 0) 319 299 dev_err(dev, "failed to init irq domain\n"); ··· 325 301 326 302 rockchip_pcie_configure_l1ss(pci); 327 303 rockchip_pcie_enable_l0s(pci); 304 + 305 + /* Disable Root Ports BAR0 and BAR1 as they report bogus size */ 306 + dw_pcie_writel_dbi2(pci, PCI_BASE_ADDRESS_0, 0x0); 307 + dw_pcie_writel_dbi2(pci, PCI_BASE_ADDRESS_1, 0x0); 328 308 329 309 return 0; 330 310 } ··· 355 327 if (!of_device_is_compatible(dev->of_node, "rockchip,rk3588-pcie-ep")) 356 328 return; 357 329 358 - if (dw_pcie_ep_hide_ext_capability(pci, PCI_EXT_CAP_ID_SECPCI, 359 - PCI_EXT_CAP_ID_ATS)) 360 - dev_err(dev, "failed to hide ATS capability\n"); 330 + dw_pcie_remove_ext_capability(pci, PCI_EXT_CAP_ID_ATS); 361 331 } 362 332 363 333 static void rockchip_pcie_ep_init(struct dw_pcie_ep *ep) ··· 390 364 } 391 365 392 366 static const struct pci_epc_features rockchip_pcie_epc_features_rk3568 = { 367 + DWC_EPC_COMMON_FEATURES, 393 368 .linkup_notifier = true, 394 369 .msi_capable = true, 395 370 .msix_capable = true, ··· 411 384 * BARs) would be overwritten, resulting in (all other BARs) no longer working. 
412 385 */ 413 386 static const struct pci_epc_features rockchip_pcie_epc_features_rk3588 = { 387 + DWC_EPC_COMMON_FEATURES, 414 388 .linkup_notifier = true, 415 389 .msi_capable = true, 416 390 .msix_capable = true, ··· 513 485 .link_up = rockchip_pcie_link_up, 514 486 .start_link = rockchip_pcie_start_link, 515 487 .stop_link = rockchip_pcie_stop_link, 488 + .get_ltssm = rockchip_pcie_get_ltssm, 516 489 }; 517 - 518 - static irqreturn_t rockchip_pcie_rc_sys_irq_thread(int irq, void *arg) 519 - { 520 - struct rockchip_pcie *rockchip = arg; 521 - struct dw_pcie *pci = &rockchip->pci; 522 - struct dw_pcie_rp *pp = &pci->pp; 523 - struct device *dev = pci->dev; 524 - u32 reg; 525 - 526 - reg = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_INTR_STATUS_MISC); 527 - rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC); 528 - 529 - dev_dbg(dev, "PCIE_CLIENT_INTR_STATUS_MISC: %#x\n", reg); 530 - dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm(rockchip)); 531 - 532 - if (reg & PCIE_RDLH_LINK_UP_CHGED) { 533 - if (rockchip_pcie_link_up(pci)) { 534 - msleep(PCIE_RESET_CONFIG_WAIT_MS); 535 - dev_dbg(dev, "Received Link up event. 
Starting enumeration!\n"); 536 - /* Rescan the bus to enumerate endpoint devices */ 537 - pci_lock_rescan_remove(); 538 - pci_rescan_bus(pp->bridge->bus); 539 - pci_unlock_rescan_remove(); 540 - } 541 - } 542 - 543 - return IRQ_HANDLED; 544 - } 545 490 546 491 static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg) 547 492 { ··· 527 526 rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC); 528 527 529 528 dev_dbg(dev, "PCIE_CLIENT_INTR_STATUS_MISC: %#x\n", reg); 530 - dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm(rockchip)); 529 + dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm_reg(rockchip)); 531 530 532 531 if (reg & PCIE_LINK_REQ_RST_NOT_INT) { 533 532 dev_dbg(dev, "hot reset or link-down reset\n"); ··· 548 547 return IRQ_HANDLED; 549 548 } 550 549 551 - static int rockchip_pcie_configure_rc(struct platform_device *pdev, 552 - struct rockchip_pcie *rockchip) 550 + static int rockchip_pcie_configure_rc(struct rockchip_pcie *rockchip) 553 551 { 554 - struct device *dev = &pdev->dev; 555 552 struct dw_pcie_rp *pp; 556 - int irq, ret; 557 553 u32 val; 558 554 559 555 if (!IS_ENABLED(CONFIG_PCIE_ROCKCHIP_DW_HOST)) 560 556 return -ENODEV; 561 - 562 - irq = platform_get_irq_byname(pdev, "sys"); 563 - if (irq < 0) 564 - return irq; 565 - 566 - ret = devm_request_threaded_irq(dev, irq, NULL, 567 - rockchip_pcie_rc_sys_irq_thread, 568 - IRQF_ONESHOT, "pcie-sys-rc", rockchip); 569 - if (ret) { 570 - dev_err(dev, "failed to request PCIe sys IRQ\n"); 571 - return ret; 572 - } 573 557 574 558 /* LTSSM enable control mode */ 575 559 val = FIELD_PREP_WM16(PCIE_LTSSM_ENABLE_ENHANCE, 1); ··· 566 580 567 581 pp = &rockchip->pci.pp; 568 582 pp->ops = &rockchip_pcie_host_ops; 569 - pp->use_linkup_irq = true; 570 583 571 - ret = dw_pcie_host_init(pp); 572 - if (ret) { 573 - dev_err(dev, "failed to initialize host\n"); 574 - return ret; 575 - } 576 - 577 - /* unmask DLL up/down indicator */ 578 - val = 
FIELD_PREP_WM16(PCIE_RDLH_LINK_UP_CHGED, 0); 579 - rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_INTR_MASK_MISC); 580 - 581 - return ret; 584 + return dw_pcie_host_init(pp); 582 585 } 583 586 584 587 static int rockchip_pcie_configure_ep(struct platform_device *pdev, ··· 686 711 687 712 switch (data->mode) { 688 713 case DW_PCIE_RC_TYPE: 689 - ret = rockchip_pcie_configure_rc(pdev, rockchip); 714 + ret = rockchip_pcie_configure_rc(rockchip); 690 715 if (ret) 691 716 goto deinit_clk; 692 717 break;
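The Rockchip glue's new get_ltssm() callback layers pseudo L1 substates (DW_PCIE_LTSSM_L1_1/L1_2, 0x141/0x142, deliberately above the 6-bit hardware range) on top of the raw LTSSM field, preferring the RASDES TBA bits when set. The selection logic, as a pure function with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

#define TBA_L1_1 (1u << 4)		/* mirrors PCIE_CLIENT_CDM_RASDES_TBA_L1_1 */
#define TBA_L1_2 (1u << 5)		/* mirrors PCIE_CLIENT_CDM_RASDES_TBA_L1_2 */
#define LTSSM_STATUS_MASK 0x3fu

/* Pseudo substates outside the 6-bit hardware LTSSM encoding. */
enum { DEMO_LTSSM_L1_1 = 0x141, DEMO_LTSSM_L1_2 = 0x142 };

/* Prefer the TBA substate bits; otherwise fall back to the raw field. */
static uint32_t demo_get_ltssm(uint32_t tba_info, uint32_t ltssm_reg)
{
	if (tba_info & TBA_L1_1)
		return DEMO_LTSSM_L1_1;

	if (tba_info & TBA_L1_2)
		return DEMO_LTSSM_L1_2;

	return ltssm_reg & LTSSM_STATUS_MASK;
}
```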
+1
drivers/pci/controller/dwc/pcie-keembay.c
··· 309 309 } 310 310 311 311 static const struct pci_epc_features keembay_pcie_epc_features = { 312 + DWC_EPC_COMMON_FEATURES, 312 313 .msi_capable = true, 313 314 .msix_capable = true, 314 315 .bar[BAR_0] = { .only_64bit = true, },
+4 -4
drivers/pci/controller/dwc/pcie-nxp-s32g.c
··· 282 282 283 283 ret = s32g_pcie_parse_port(s32g_pp, of_port); 284 284 if (ret) 285 - goto err_port; 285 + break; 286 286 } 287 287 288 - err_port: 289 - list_for_each_entry_safe(port, tmp, &s32g_pp->ports, list) 290 - list_del(&port->list); 288 + if (ret) 289 + list_for_each_entry_safe(port, tmp, &s32g_pp->ports, list) 290 + list_del(&port->list); 291 291 292 292 return ret; 293 293 }
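The pcie-nxp-s32g.c hunk fixes an error-path bug: the old code tore the port list down unconditionally after the loop, discarding successfully parsed ports. The corrected shape, reduced to a pure function (error value and teardown are stand-ins, not the driver's actual helpers):

```c
#include <assert.h>

/*
 * Parse n ports; fail_at is the index of a simulated parse error
 * (-1 for none). Returns the retained port count on success, a
 * negative error on failure.
 */
static int demo_parse_ports(int n, int fail_at)
{
	int nports = 0, i, ret = 0;

	for (i = 0; i < n; i++) {
		if (i == fail_at) {
			ret = -22;	/* stand-in for -EINVAL */
			break;
		}
		nports++;
	}

	/* The fix: tear down gathered ports only when parsing failed. */
	if (ret)
		nports = 0;

	return ret ? ret : nports;
}
```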
+57 -11
drivers/pci/controller/dwc/pcie-qcom-ep.c
··· 168 168 * @hdma_support: HDMA support on this SoC 169 169 * @override_no_snoop: Override NO_SNOOP attribute in TLP to enable cache snooping 170 170 * @disable_mhi_ram_parity_check: Disable MHI RAM data parity error check 171 + * @firmware_managed: Set if the controller is firmware managed 171 172 */ 172 173 struct qcom_pcie_ep_cfg { 173 174 bool hdma_support; 174 175 bool override_no_snoop; 175 176 bool disable_mhi_ram_parity_check; 177 + bool firmware_managed; 176 178 }; 177 179 178 180 /** ··· 379 377 380 378 static void qcom_pcie_disable_resources(struct qcom_pcie_ep *pcie_ep) 381 379 { 380 + struct device *dev = pcie_ep->pci.dev; 381 + 382 + pm_runtime_put(dev); 383 + 384 + /* Skip resource disablement if controller is firmware-managed */ 385 + if (pcie_ep->cfg && pcie_ep->cfg->firmware_managed) 386 + return; 387 + 382 388 icc_set_bw(pcie_ep->icc_mem, 0, 0); 383 389 phy_power_off(pcie_ep->phy); 384 390 phy_exit(pcie_ep->phy); ··· 400 390 u32 val, offset; 401 391 int ret; 402 392 403 - ret = qcom_pcie_enable_resources(pcie_ep); 404 - if (ret) { 405 - dev_err(dev, "Failed to enable resources: %d\n", ret); 393 + ret = pm_runtime_resume_and_get(dev); 394 + if (ret < 0) { 395 + dev_err(dev, "Failed to enable device: %d\n", ret); 406 396 return ret; 407 397 } 408 398 399 + /* Skip resource enablement if controller is firmware-managed */ 400 + if (pcie_ep->cfg && pcie_ep->cfg->firmware_managed) 401 + goto skip_resources_enable; 402 + 403 + ret = qcom_pcie_enable_resources(pcie_ep); 404 + if (ret) { 405 + dev_err(dev, "Failed to enable resources: %d\n", ret); 406 + pm_runtime_put(dev); 407 + return ret; 408 + } 409 + 410 + skip_resources_enable: 409 411 /* Perform cleanup that requires refclk */ 410 412 pci_epc_deinit_notify(pci->ep.epc); 411 413 dw_pcie_ep_cleanup(&pci->ep); ··· 652 630 return ret; 653 631 } 654 632 633 + pcie_ep->reset = devm_gpiod_get(dev, "reset", GPIOD_IN); 634 + if (IS_ERR(pcie_ep->reset)) 635 + return PTR_ERR(pcie_ep->reset); 636 + 637 + 
pcie_ep->wake = devm_gpiod_get_optional(dev, "wake", GPIOD_OUT_LOW); 638 + if (IS_ERR(pcie_ep->wake)) 639 + return PTR_ERR(pcie_ep->wake); 640 + 641 + if (pcie_ep->cfg && pcie_ep->cfg->firmware_managed) 642 + return 0; 643 + 655 644 pcie_ep->num_clks = devm_clk_bulk_get_all(dev, &pcie_ep->clks); 656 645 if (pcie_ep->num_clks < 0) { 657 646 dev_err(dev, "Failed to get clocks\n"); ··· 672 639 pcie_ep->core_reset = devm_reset_control_get_exclusive(dev, "core"); 673 640 if (IS_ERR(pcie_ep->core_reset)) 674 641 return PTR_ERR(pcie_ep->core_reset); 675 - 676 - pcie_ep->reset = devm_gpiod_get(dev, "reset", GPIOD_IN); 677 - if (IS_ERR(pcie_ep->reset)) 678 - return PTR_ERR(pcie_ep->reset); 679 - 680 - pcie_ep->wake = devm_gpiod_get_optional(dev, "wake", GPIOD_OUT_LOW); 681 - if (IS_ERR(pcie_ep->wake)) 682 - return PTR_ERR(pcie_ep->wake); 683 642 684 643 pcie_ep->phy = devm_phy_optional_get(dev, "pciephy"); 685 644 if (IS_ERR(pcie_ep->phy)) ··· 845 820 } 846 821 847 822 static const struct pci_epc_features qcom_pcie_epc_features = { 823 + DWC_EPC_COMMON_FEATURES, 848 824 .linkup_notifier = true, 849 825 .msi_capable = true, 850 826 .align = SZ_4K, ··· 900 874 901 875 platform_set_drvdata(pdev, pcie_ep); 902 876 877 + pm_runtime_get_noresume(dev); 878 + pm_runtime_set_active(dev); 879 + ret = devm_pm_runtime_enable(dev); 880 + if (ret) 881 + return ret; 882 + 903 883 ret = qcom_pcie_ep_get_resources(pdev, pcie_ep); 904 884 if (ret) 905 885 return ret; ··· 923 891 name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node); 924 892 if (!name) { 925 893 ret = -ENOMEM; 894 + goto err_disable_irqs; 895 + } 896 + 897 + ret = pm_runtime_put_sync(dev); 898 + if (ret < 0) { 899 + dev_err(dev, "Failed to suspend device: %d\n", ret); 926 900 goto err_disable_irqs; 927 901 } 928 902 ··· 968 930 .disable_mhi_ram_parity_check = true, 969 931 }; 970 932 933 + static const struct qcom_pcie_ep_cfg cfg_1_34_0_fw_managed = { 934 + .hdma_support = true, 935 + .override_no_snoop = true, 936 + 
.disable_mhi_ram_parity_check = true, 937 + .firmware_managed = true, 938 + }; 939 + 971 940 static const struct of_device_id qcom_pcie_ep_match[] = { 941 + { .compatible = "qcom,sa8255p-pcie-ep", .data = &cfg_1_34_0_fw_managed}, 972 942 { .compatible = "qcom,sa8775p-pcie-ep", .data = &cfg_1_34_0}, 973 943 { .compatible = "qcom,sdx55-pcie-ep", }, 974 944 { .compatible = "qcom,sm8450-pcie-ep", },
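The qcom-ep probe changes lean on runtime-PM usage counting: pm_runtime_get_noresume() raises the count without resuming, pm_runtime_set_active() marks the device active, and pm_runtime_put_sync() later drops the count. A toy counter model of that pairing (not the kernel API, just its bookkeeping):

```c
#include <assert.h>

/* Toy runtime-PM state: a usage count plus an active flag. */
struct demo_rpm {
	int usage;
	int active;
};

static void demo_rpm_get_noresume(struct demo_rpm *d) { d->usage++; }
static void demo_rpm_set_active(struct demo_rpm *d)   { d->active = 1; }

static void demo_rpm_put_sync(struct demo_rpm *d)
{
	/* Suspend synchronously once the last user drops its reference. */
	if (--d->usage == 0)
		d->active = 0;
}

/* Mirrors the probe flow added above: get_noresume/set_active, then
 * a final put_sync once setup is done, leaving the device suspended. */
static int demo_probe_flow(void)
{
	struct demo_rpm d = { 0, 0 };

	demo_rpm_get_noresume(&d);
	demo_rpm_set_active(&d);
	demo_rpm_put_sync(&d);

	return d.usage == 0 && d.active == 0;
}
```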
+122 -106
drivers/pci/controller/dwc/pcie-qcom.c
··· 24 24 #include <linux/of_pci.h> 25 25 #include <linux/pci.h> 26 26 #include <linux/pci-ecam.h> 27 + #include <linux/pci-pwrctrl.h> 27 28 #include <linux/pm_opp.h> 28 29 #include <linux/pm_runtime.h> 29 30 #include <linux/platform_device.h> ··· 56 55 #define PARF_AXI_MSTR_WR_ADDR_HALT_V2 0x1a8 57 56 #define PARF_Q2A_FLUSH 0x1ac 58 57 #define PARF_LTSSM 0x1b0 59 - #define PARF_INT_ALL_STATUS 0x224 60 - #define PARF_INT_ALL_CLEAR 0x228 61 - #define PARF_INT_ALL_MASK 0x22c 62 58 #define PARF_SID_OFFSET 0x234 63 59 #define PARF_BDF_TRANSLATE_CFG 0x24c 64 60 #define PARF_DBI_BASE_ADDR_V2 0x350 ··· 131 133 132 134 /* PARF_LTSSM register fields */ 133 135 #define LTSSM_EN BIT(8) 134 - 135 - /* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */ 136 - #define PARF_INT_ALL_LINK_UP BIT(13) 137 - #define PARF_INT_MSI_DEV_0_7 GENMASK(30, 23) 138 136 139 137 /* PARF_NO_SNOOP_OVERRIDE register fields */ 140 138 #define WR_NO_SNOOP_OVERRIDE_EN BIT(1) ··· 261 267 bool no_l0s; 262 268 }; 263 269 270 + struct qcom_pcie_perst { 271 + struct list_head list; 272 + struct gpio_desc *desc; 273 + }; 274 + 264 275 struct qcom_pcie_port { 265 276 struct list_head list; 266 - struct gpio_desc *reset; 267 277 struct phy *phy; 278 + struct list_head perst; 268 279 }; 269 280 270 281 struct qcom_pcie { ··· 288 289 289 290 #define to_qcom_pcie(x) dev_get_drvdata((x)->dev) 290 291 291 - static void qcom_perst_assert(struct qcom_pcie *pcie, bool assert) 292 + static void __qcom_pcie_perst_assert(struct qcom_pcie *pcie, bool assert) 292 293 { 294 + struct qcom_pcie_perst *perst; 293 295 struct qcom_pcie_port *port; 294 296 int val = assert ? 
1 : 0; 295 297 296 - list_for_each_entry(port, &pcie->ports, list) 297 - gpiod_set_value_cansleep(port->reset, val); 298 + list_for_each_entry(port, &pcie->ports, list) { 299 + list_for_each_entry(perst, &port->perst, list) 300 + gpiod_set_value_cansleep(perst->desc, val); 301 + } 298 302 299 303 usleep_range(PERST_DELAY_US, PERST_DELAY_US + 500); 300 304 } 301 305 302 - static void qcom_ep_reset_assert(struct qcom_pcie *pcie) 306 + static void qcom_pcie_perst_assert(struct qcom_pcie *pcie) 303 307 { 304 - qcom_perst_assert(pcie, true); 308 + __qcom_pcie_perst_assert(pcie, true); 305 309 } 306 310 307 - static void qcom_ep_reset_deassert(struct qcom_pcie *pcie) 311 + static void qcom_pcie_perst_deassert(struct qcom_pcie *pcie) 308 312 { 309 - /* Ensure that PERST has been asserted for at least 100 ms */ 313 + /* Ensure that PERST# has been asserted for at least 100 ms */ 310 314 msleep(PCIE_T_PVPERL_MS); 311 - qcom_perst_assert(pcie, false); 315 + __qcom_pcie_perst_assert(pcie, false); 312 316 } 313 317 314 318 static int qcom_pcie_start_link(struct dw_pcie *pci) ··· 639 637 } 640 638 641 639 qcom_pcie_clear_hpc(pcie->pci); 642 - 643 - return 0; 644 - } 645 - 646 - static int qcom_pcie_assert_perst(struct dw_pcie *pci, bool assert) 647 - { 648 - struct qcom_pcie *pcie = to_qcom_pcie(pci); 649 - 650 - if (assert) 651 - qcom_ep_reset_assert(pcie); 652 - else 653 - qcom_ep_reset_deassert(pcie); 654 640 655 641 return 0; 656 642 } ··· 1289 1299 struct qcom_pcie *pcie = to_qcom_pcie(pci); 1290 1300 int ret; 1291 1301 1292 - qcom_ep_reset_assert(pcie); 1302 + qcom_pcie_perst_assert(pcie); 1293 1303 1294 1304 ret = pcie->cfg->ops->init(pcie); 1295 1305 if (ret) ··· 1299 1309 if (ret) 1300 1310 goto err_deinit; 1301 1311 1312 + ret = pci_pwrctrl_create_devices(pci->dev); 1313 + if (ret) 1314 + goto err_disable_phy; 1315 + 1316 + ret = pci_pwrctrl_power_on_devices(pci->dev); 1317 + if (ret) 1318 + goto err_pwrctrl_destroy; 1319 + 1302 1320 if (pcie->cfg->ops->post_init) { 
1303 1321 ret = pcie->cfg->ops->post_init(pcie); 1304 1322 if (ret) 1305 - goto err_disable_phy; 1323 + goto err_pwrctrl_power_off; 1306 1324 } 1307 1325 1308 1326 qcom_pcie_clear_aspm_l0s(pcie->pci); 1327 + dw_pcie_remove_capability(pcie->pci, PCI_CAP_ID_MSIX); 1328 + dw_pcie_remove_ext_capability(pcie->pci, PCI_EXT_CAP_ID_DPC); 1309 1329 1310 - qcom_ep_reset_deassert(pcie); 1330 + qcom_pcie_perst_deassert(pcie); 1311 1331 1312 1332 if (pcie->cfg->ops->config_sid) { 1313 1333 ret = pcie->cfg->ops->config_sid(pcie); ··· 1328 1328 return 0; 1329 1329 1330 1330 err_assert_reset: 1331 - qcom_ep_reset_assert(pcie); 1331 + qcom_pcie_perst_assert(pcie); 1332 + err_pwrctrl_power_off: 1333 + pci_pwrctrl_power_off_devices(pci->dev); 1334 + err_pwrctrl_destroy: 1335 + if (ret != -EPROBE_DEFER) 1336 + pci_pwrctrl_destroy_devices(pci->dev); 1332 1337 err_disable_phy: 1333 1338 qcom_pcie_phy_power_off(pcie); 1334 1339 err_deinit: ··· 1347 1342 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 1348 1343 struct qcom_pcie *pcie = to_qcom_pcie(pci); 1349 1344 1350 - qcom_ep_reset_assert(pcie); 1345 + qcom_pcie_perst_assert(pcie); 1346 + 1347 + /* 1348 + * No need to destroy pwrctrl devices as this function only gets called 1349 + * during system suspend as of now. 
1350 + */ 1351 + pci_pwrctrl_power_off_devices(pci->dev); 1351 1352 qcom_pcie_phy_power_off(pcie); 1352 1353 pcie->cfg->ops->deinit(pcie); 1353 1354 } ··· 1507 1496 static const struct dw_pcie_ops dw_pcie_ops = { 1508 1497 .link_up = qcom_pcie_link_up, 1509 1498 .start_link = qcom_pcie_start_link, 1510 - .assert_perst = qcom_pcie_assert_perst, 1511 1499 }; 1512 1500 1513 1501 static int qcom_pcie_icc_init(struct qcom_pcie *pcie) ··· 1645 1635 qcom_pcie_link_transition_count); 1646 1636 } 1647 1637 1648 - static irqreturn_t qcom_pcie_global_irq_thread(int irq, void *data) 1649 - { 1650 - struct qcom_pcie *pcie = data; 1651 - struct dw_pcie_rp *pp = &pcie->pci->pp; 1652 - struct device *dev = pcie->pci->dev; 1653 - u32 status = readl_relaxed(pcie->parf + PARF_INT_ALL_STATUS); 1654 - 1655 - writel_relaxed(status, pcie->parf + PARF_INT_ALL_CLEAR); 1656 - 1657 - if (FIELD_GET(PARF_INT_ALL_LINK_UP, status)) { 1658 - msleep(PCIE_RESET_CONFIG_WAIT_MS); 1659 - dev_dbg(dev, "Received Link up event. Starting enumeration!\n"); 1660 - /* Rescan the bus to enumerate endpoint devices */ 1661 - pci_lock_rescan_remove(); 1662 - pci_rescan_bus(pp->bridge->bus); 1663 - pci_unlock_rescan_remove(); 1664 - 1665 - qcom_pcie_icc_opp_update(pcie); 1666 - } else { 1667 - dev_WARN_ONCE(dev, 1, "Received unknown event. 
INT_STATUS: 0x%08x\n", 1668 - status); 1669 - } 1670 - 1671 - return IRQ_HANDLED; 1672 - } 1673 - 1674 1638 static void qcom_pci_free_msi(void *ptr) 1675 1639 { 1676 1640 struct dw_pcie_rp *pp = (struct dw_pcie_rp *)ptr; 1677 1641 1678 - if (pp && pp->has_msi_ctrl) 1642 + if (pp && pp->use_imsi_rx) 1679 1643 dw_pcie_free_msi(pp); 1680 1644 } 1681 1645 ··· 1673 1689 if (ret) 1674 1690 return ret; 1675 1691 1676 - pp->has_msi_ctrl = true; 1692 + pp->use_imsi_rx = true; 1677 1693 dw_pcie_msi_init(pp); 1678 1694 1679 1695 return devm_add_action_or_reset(dev, qcom_pci_free_msi, pp); ··· 1688 1704 } 1689 1705 }; 1690 1706 1707 + /* Parse PERST# from all nodes in depth first manner starting from @np */ 1708 + static int qcom_pcie_parse_perst(struct qcom_pcie *pcie, 1709 + struct qcom_pcie_port *port, 1710 + struct device_node *np) 1711 + { 1712 + struct device *dev = pcie->pci->dev; 1713 + struct qcom_pcie_perst *perst; 1714 + struct gpio_desc *reset; 1715 + int ret; 1716 + 1717 + if (!of_find_property(np, "reset-gpios", NULL)) 1718 + goto parse_child_node; 1719 + 1720 + reset = devm_fwnode_gpiod_get(dev, of_fwnode_handle(np), "reset", 1721 + GPIOD_OUT_HIGH, "PERST#"); 1722 + if (IS_ERR(reset)) { 1723 + /* 1724 + * FIXME: GPIOLIB currently supports exclusive GPIO access only. 1725 + * Non exclusive access is broken. But shared PERST# requires 1726 + * non-exclusive access. So once GPIOLIB properly supports it, 1727 + * implement it here. 
1728 + */ 1729 + if (PTR_ERR(reset) == -EBUSY) 1730 + dev_err(dev, "Shared PERST# is not supported\n"); 1731 + 1732 + return PTR_ERR(reset); 1733 + } 1734 + 1735 + perst = devm_kzalloc(dev, sizeof(*perst), GFP_KERNEL); 1736 + if (!perst) 1737 + return -ENOMEM; 1738 + 1739 + INIT_LIST_HEAD(&perst->list); 1740 + perst->desc = reset; 1741 + list_add_tail(&perst->list, &port->perst); 1742 + 1743 + parse_child_node: 1744 + for_each_available_child_of_node_scoped(np, child) { 1745 + ret = qcom_pcie_parse_perst(pcie, port, child); 1746 + if (ret) 1747 + return ret; 1748 + } 1749 + 1750 + return 0; 1751 + } 1752 + 1691 1753 static int qcom_pcie_parse_port(struct qcom_pcie *pcie, struct device_node *node) 1692 1754 { 1693 1755 struct device *dev = pcie->pci->dev; 1694 1756 struct qcom_pcie_port *port; 1695 - struct gpio_desc *reset; 1696 1757 struct phy *phy; 1697 1758 int ret; 1698 - 1699 - reset = devm_fwnode_gpiod_get(dev, of_fwnode_handle(node), 1700 - "reset", GPIOD_OUT_HIGH, "PERST#"); 1701 - if (IS_ERR(reset)) 1702 - return PTR_ERR(reset); 1703 1759 1704 1760 phy = devm_of_phy_get(dev, node, NULL); 1705 1761 if (IS_ERR(phy)) ··· 1753 1729 if (ret) 1754 1730 return ret; 1755 1731 1756 - port->reset = reset; 1732 + INIT_LIST_HEAD(&port->perst); 1733 + 1734 + ret = qcom_pcie_parse_perst(pcie, port, node); 1735 + if (ret) 1736 + return ret; 1737 + 1757 1738 port->phy = phy; 1758 1739 INIT_LIST_HEAD(&port->list); 1759 1740 list_add_tail(&port->list, &pcie->ports); ··· 1768 1739 1769 1740 static int qcom_pcie_parse_ports(struct qcom_pcie *pcie) 1770 1741 { 1742 + struct qcom_pcie_perst *perst, *tmp_perst; 1743 + struct qcom_pcie_port *port, *tmp_port; 1771 1744 struct device *dev = pcie->pci->dev; 1772 - struct qcom_pcie_port *port, *tmp; 1773 - int ret = -ENOENT; 1745 + int ret = -ENODEV; 1774 1746 1775 1747 for_each_available_child_of_node_scoped(dev->of_node, of_port) { 1776 1748 if (!of_node_is_type(of_port, "pci")) ··· 1784 1754 return ret; 1785 1755 1786 1756 
err_port_del: 1787 - list_for_each_entry_safe(port, tmp, &pcie->ports, list) { 1757 + list_for_each_entry_safe(port, tmp_port, &pcie->ports, list) { 1758 + list_for_each_entry_safe(perst, tmp_perst, &port->perst, list) 1759 + list_del(&perst->list); 1788 1760 phy_exit(port->phy); 1789 1761 list_del(&port->list); 1790 1762 } ··· 1797 1765 static int qcom_pcie_parse_legacy_binding(struct qcom_pcie *pcie) 1798 1766 { 1799 1767 struct device *dev = pcie->pci->dev; 1768 + struct qcom_pcie_perst *perst; 1800 1769 struct qcom_pcie_port *port; 1801 1770 struct gpio_desc *reset; 1802 1771 struct phy *phy; ··· 1819 1786 if (!port) 1820 1787 return -ENOMEM; 1821 1788 1822 - port->reset = reset; 1789 + perst = devm_kzalloc(dev, sizeof(*perst), GFP_KERNEL); 1790 + if (!perst) 1791 + return -ENOMEM; 1792 + 1823 1793 port->phy = phy; 1824 1794 INIT_LIST_HEAD(&port->list); 1825 1795 list_add_tail(&port->list, &pcie->ports); 1796 + 1797 + perst->desc = reset; 1798 + INIT_LIST_HEAD(&port->perst); 1799 + INIT_LIST_HEAD(&perst->list); 1800 + list_add_tail(&perst->list, &port->perst); 1826 1801 1827 1802 return 0; 1828 1803 } 1829 1804 1830 1805 static int qcom_pcie_probe(struct platform_device *pdev) 1831 1806 { 1807 + struct qcom_pcie_perst *perst, *tmp_perst; 1808 + struct qcom_pcie_port *port, *tmp_port; 1832 1809 const struct qcom_pcie_cfg *pcie_cfg; 1833 1810 unsigned long max_freq = ULONG_MAX; 1834 - struct qcom_pcie_port *port, *tmp; 1835 1811 struct device *dev = &pdev->dev; 1836 1812 struct dev_pm_opp *opp; 1837 1813 struct qcom_pcie *pcie; 1838 1814 struct dw_pcie_rp *pp; 1839 1815 struct resource *res; 1840 1816 struct dw_pcie *pci; 1841 - int ret, irq; 1842 - char *name; 1817 + int ret; 1843 1818 1844 1819 pcie_cfg = of_device_get_match_data(dev); 1845 1820 if (!pcie_cfg) { ··· 1980 1939 1981 1940 ret = qcom_pcie_parse_ports(pcie); 1982 1941 if (ret) { 1983 - if (ret != -ENOENT) { 1942 + if (ret != -ENODEV) { 1984 1943 dev_err_probe(pci->dev, ret, 1985 1944 "Failed to 
parse Root Port: %d\n", ret); 1986 1945 goto err_pm_runtime_put; ··· 1998 1957 1999 1958 platform_set_drvdata(pdev, pcie); 2000 1959 2001 - irq = platform_get_irq_byname_optional(pdev, "global"); 2002 - if (irq > 0) 2003 - pp->use_linkup_irq = true; 2004 - 2005 1960 ret = dw_pcie_host_init(pp); 2006 1961 if (ret) { 2007 - dev_err(dev, "cannot initialize host\n"); 1962 + dev_err_probe(dev, ret, "cannot initialize host\n"); 2008 1963 goto err_phy_exit; 2009 - } 2010 - 2011 - name = devm_kasprintf(dev, GFP_KERNEL, "qcom_pcie_global_irq%d", 2012 - pci_domain_nr(pp->bridge->bus)); 2013 - if (!name) { 2014 - ret = -ENOMEM; 2015 - goto err_host_deinit; 2016 - } 2017 - 2018 - if (irq > 0) { 2019 - ret = devm_request_threaded_irq(&pdev->dev, irq, NULL, 2020 - qcom_pcie_global_irq_thread, 2021 - IRQF_ONESHOT, name, pcie); 2022 - if (ret) { 2023 - dev_err_probe(&pdev->dev, ret, 2024 - "Failed to request Global IRQ\n"); 2025 - goto err_host_deinit; 2026 - } 2027 - 2028 - writel_relaxed(PARF_INT_ALL_LINK_UP | PARF_INT_MSI_DEV_0_7, 2029 - pcie->parf + PARF_INT_ALL_MASK); 2030 1964 } 2031 1965 2032 1966 qcom_pcie_icc_opp_update(pcie); ··· 2011 1995 2012 1996 return 0; 2013 1997 2014 - err_host_deinit: 2015 - dw_pcie_host_deinit(pp); 2016 1998 err_phy_exit: 2017 - list_for_each_entry_safe(port, tmp, &pcie->ports, list) { 1999 + list_for_each_entry_safe(port, tmp_port, &pcie->ports, list) { 2000 + list_for_each_entry_safe(perst, tmp_perst, &port->perst, list) 2001 + list_del(&perst->list); 2018 2002 phy_exit(port->phy); 2019 2003 list_del(&port->list); 2020 2004 }
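The qcom.c rework replaces the single per-port `reset` GPIO with a list of PERST# descriptors per port, so asserting reset becomes a nested walk: for each port, for each PERST# line. A flattened sketch of that traversal with fixed-size arrays standing in for the kernel's linked lists:

```c
#include <assert.h>

#define MAX_PERST 3

/* Each port may carry several PERST# lines; 1 = asserted. */
struct demo_port {
	int nperst;
	int perst[MAX_PERST];
};

static void demo_perst_set(struct demo_port *ports, int nports, int asserted)
{
	int i, j;

	for (i = 0; i < nports; i++)
		for (j = 0; j < ports[i].nperst; j++)
			ports[i].perst[j] = asserted;
}

/* Assert PERST# on two ports (3 lines total) and count the results. */
static int demo_perst_roundtrip(void)
{
	struct demo_port ports[2] = {
		{ .nperst = 2, .perst = { 0, 0 } },
		{ .nperst = 1, .perst = { 0 } },
	};
	int i, j, total = 0;

	demo_perst_set(ports, 2, 1);

	for (i = 0; i < 2; i++)
		for (j = 0; j < ports[i].nperst; j++)
			total += ports[i].perst[j];

	return total;
}
```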
+1
drivers/pci/controller/dwc/pcie-rcar-gen4.c
···
 }
 
 static const struct pci_epc_features rcar_gen4_pcie_epc_features = {
+	DWC_EPC_COMMON_FEATURES,
 	.msi_capable = true,
 	.bar[BAR_1] = { .type = BAR_RESERVED, },
 	.bar[BAR_3] = { .type = BAR_RESERVED, },
+18
drivers/pci/controller/dwc/pcie-sophgo.c
···
 	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }
 
+static void sophgo_pcie_disable_l0s_l1(struct dw_pcie_rp *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	u32 offset, val;
+
+	offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+
+	dw_pcie_dbi_ro_wr_en(pci);
+
+	val = dw_pcie_readl_dbi(pci, PCI_EXP_LNKCAP + offset);
+	val &= ~(PCI_EXP_LNKCAP_ASPM_L0S | PCI_EXP_LNKCAP_ASPM_L1);
+	dw_pcie_writel_dbi(pci, PCI_EXP_LNKCAP + offset, val);
+
+	dw_pcie_dbi_ro_wr_dis(pci);
+}
+
 static int sophgo_pcie_host_init(struct dw_pcie_rp *pp)
 {
 	int irq;
···
 		return irq;
 
 	irq_set_chained_handler_and_data(irq, sophgo_pcie_intx_handler, pp);
+
+	sophgo_pcie_disable_l0s_l1(pp);
 
 	sophgo_pcie_msi_enable(pp);
 
+1
drivers/pci/controller/dwc/pcie-stm32-ep.c
···
 }
 
 static const struct pci_epc_features stm32_pcie_epc_features = {
+	DWC_EPC_COMMON_FEATURES,
 	.msi_capable = true,
 	.align = SZ_64K,
 };
+1
drivers/pci/controller/dwc/pcie-tegra194.c
···
 }
 
 static const struct pci_epc_features tegra_pcie_epc_features = {
+	DWC_EPC_COMMON_FEATURES,
 	.linkup_notifier = true,
 	.msi_capable = true,
 	.bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M,
+2
drivers/pci/controller/dwc/pcie-uniphier-ep.c
···
 	.init = uniphier_pcie_pro5_init_ep,
 	.wait = NULL,
 	.features = {
+		DWC_EPC_COMMON_FEATURES,
 		.linkup_notifier = false,
 		.msi_capable = true,
 		.msix_capable = false,
···
 	.init = uniphier_pcie_nx1_init_ep,
 	.wait = uniphier_pcie_nx1_wait_ep,
 	.features = {
+		DWC_EPC_COMMON_FEATURES,
 		.linkup_notifier = false,
 		.msi_capable = true,
 		.msix_capable = false,
+1 -1
drivers/pci/controller/pci-host-common.c
···
 
 	err = of_address_to_resource(dev->of_node, 0, &cfgres);
 	if (err) {
-		dev_err(dev, "missing \"reg\" property\n");
+		dev_err(dev, "missing or malformed \"reg\" property\n");
 		return ERR_PTR(err);
 	}
 
+4 -31
drivers/pci/controller/pci-tegra.c
···
 
 DEFINE_SEQ_ATTRIBUTE(tegra_pcie_ports);
 
-static void tegra_pcie_debugfs_exit(struct tegra_pcie *pcie)
-{
-	debugfs_remove_recursive(pcie->debugfs);
-	pcie->debugfs = NULL;
-}
-
 static void tegra_pcie_debugfs_init(struct tegra_pcie *pcie)
 {
 	pcie->debugfs = debugfs_create_dir("pcie", NULL);
···
 put_resources:
 	tegra_pcie_put_resources(pcie);
 	return err;
-}
-
-static void tegra_pcie_remove(struct platform_device *pdev)
-{
-	struct tegra_pcie *pcie = platform_get_drvdata(pdev);
-	struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
-	struct tegra_pcie_port *port, *tmp;
-
-	if (IS_ENABLED(CONFIG_DEBUG_FS))
-		tegra_pcie_debugfs_exit(pcie);
-
-	pci_stop_root_bus(host->bus);
-	pci_remove_root_bus(host->bus);
-	pm_runtime_put_sync(pcie->dev);
-	pm_runtime_disable(pcie->dev);
-
-	if (IS_ENABLED(CONFIG_PCI_MSI))
-		tegra_pcie_msi_teardown(pcie);
-
-	tegra_pcie_put_resources(pcie);
-
-	list_for_each_entry_safe(port, tmp, &pcie->ports, list)
-		tegra_pcie_port_free(port);
 }
 
 static int tegra_pcie_pm_suspend(struct device *dev)
···
 		.pm = &tegra_pcie_pm_ops,
 	},
 	.probe = tegra_pcie_probe,
-	.remove = tegra_pcie_remove,
 };
-module_platform_driver(tegra_pcie_driver);
+builtin_platform_driver(tegra_pcie_driver);
+MODULE_AUTHOR("Thierry Reding <treding@nvidia.com>");
+MODULE_DESCRIPTION("NVIDIA PCI host controller driver");
+MODULE_LICENSE("GPL");
+1111
drivers/pci/controller/pcie-aspeed.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * Copyright 2025 Aspeed Technology Inc. 4 + */ 5 + #include <linux/bitfield.h> 6 + #include <linux/clk.h> 7 + #include <linux/interrupt.h> 8 + #include <linux/irq.h> 9 + #include <linux/irqdomain.h> 10 + #include <linux/irqchip/chained_irq.h> 11 + #include <linux/irqchip/irq-msi-lib.h> 12 + #include <linux/kernel.h> 13 + #include <linux/mfd/syscon.h> 14 + #include <linux/module.h> 15 + #include <linux/msi.h> 16 + #include <linux/mutex.h> 17 + #include <linux/of.h> 18 + #include <linux/of_address.h> 19 + #include <linux/of_pci.h> 20 + #include <linux/pci.h> 21 + #include <linux/platform_device.h> 22 + #include <linux/phy/pcie.h> 23 + #include <linux/phy/phy.h> 24 + #include <linux/regmap.h> 25 + #include <linux/reset.h> 26 + 27 + #include "../pci.h" 28 + 29 + #define MAX_MSI_HOST_IRQS 64 30 + #define ASPEED_RESET_RC_WAIT_MS 10 31 + 32 + /* AST2600 AHBC Registers */ 33 + #define ASPEED_AHBC_KEY 0x00 34 + #define ASPEED_AHBC_UNLOCK_KEY 0xaeed1a03 35 + #define ASPEED_AHBC_UNLOCK 0x01 36 + #define ASPEED_AHBC_ADDR_MAPPING 0x8c 37 + #define ASPEED_PCIE_RC_MEMORY_EN BIT(5) 38 + 39 + /* AST2600 H2X Controller Registers */ 40 + #define ASPEED_H2X_INT_STS 0x08 41 + #define ASPEED_PCIE_TX_IDLE_CLEAR BIT(0) 42 + #define ASPEED_PCIE_INTX_STS GENMASK(3, 0) 43 + #define ASPEED_H2X_HOST_RX_DESC_DATA 0x0c 44 + #define ASPEED_H2X_TX_DESC0 0x10 45 + #define ASPEED_H2X_TX_DESC1 0x14 46 + #define ASPEED_H2X_TX_DESC2 0x18 47 + #define ASPEED_H2X_TX_DESC3 0x1c 48 + #define ASPEED_H2X_TX_DESC_DATA 0x20 49 + #define ASPEED_H2X_STS 0x24 50 + #define ASPEED_PCIE_TX_IDLE BIT(31) 51 + #define ASPEED_PCIE_STATUS_OF_TX GENMASK(25, 24) 52 + #define ASPEED_PCIE_RC_H_TX_COMPLETE BIT(25) 53 + #define ASPEED_PCIE_TRIGGER_TX BIT(0) 54 + #define ASPEED_H2X_AHB_ADDR_CONFIG0 0x60 55 + #define ASPEED_AHB_REMAP_LO_ADDR(x) (x & GENMASK(15, 4)) 56 + #define ASPEED_AHB_MASK_LO_ADDR(x) FIELD_PREP(GENMASK(31, 20), x) 57 + #define 
ASPEED_H2X_AHB_ADDR_CONFIG1 0x64 58 + #define ASPEED_AHB_REMAP_HI_ADDR(x) (x) 59 + #define ASPEED_H2X_AHB_ADDR_CONFIG2 0x68 60 + #define ASPEED_AHB_MASK_HI_ADDR(x) (x) 61 + #define ASPEED_H2X_DEV_CTRL 0xc0 62 + #define ASPEED_PCIE_RX_DMA_EN BIT(9) 63 + #define ASPEED_PCIE_RX_LINEAR BIT(8) 64 + #define ASPEED_PCIE_RX_MSI_SEL BIT(7) 65 + #define ASPEED_PCIE_RX_MSI_EN BIT(6) 66 + #define ASPEED_PCIE_UNLOCK_RX_BUFF BIT(4) 67 + #define ASPEED_PCIE_WAIT_RX_TLP_CLR BIT(2) 68 + #define ASPEED_PCIE_RC_RX_ENABLE BIT(1) 69 + #define ASPEED_PCIE_RC_ENABLE BIT(0) 70 + #define ASPEED_H2X_DEV_STS 0xc8 71 + #define ASPEED_PCIE_RC_RX_DONE_ISR BIT(4) 72 + #define ASPEED_H2X_DEV_RX_DESC_DATA 0xcc 73 + #define ASPEED_H2X_DEV_RX_DESC1 0xd4 74 + #define ASPEED_H2X_DEV_TX_TAG 0xfc 75 + #define ASPEED_RC_TLP_TX_TAG_NUM 0x28 76 + 77 + /* AST2700 H2X */ 78 + #define ASPEED_H2X_CTRL 0x00 79 + #define ASPEED_H2X_BRIDGE_EN BIT(0) 80 + #define ASPEED_H2X_BRIDGE_DIRECT_EN BIT(1) 81 + #define ASPEED_H2X_CFGE_INT_STS 0x08 82 + #define ASPEED_CFGE_TX_IDLE BIT(0) 83 + #define ASPEED_CFGE_RX_BUSY BIT(1) 84 + #define ASPEED_H2X_CFGI_TLP 0x20 85 + #define ASPEED_CFGI_BYTE_EN_MASK GENMASK(19, 16) 86 + #define ASPEED_CFGI_BYTE_EN(x) \ 87 + FIELD_PREP(ASPEED_CFGI_BYTE_EN_MASK, (x)) 88 + #define ASPEED_H2X_CFGI_WR_DATA 0x24 89 + #define ASPEED_CFGI_WRITE BIT(20) 90 + #define ASPEED_H2X_CFGI_CTRL 0x28 91 + #define ASPEED_CFGI_TLP_FIRE BIT(0) 92 + #define ASPEED_H2X_CFGI_RET_DATA 0x2c 93 + #define ASPEED_H2X_CFGE_TLP_1ST 0x30 94 + #define ASPEED_H2X_CFGE_TLP_NEXT 0x34 95 + #define ASPEED_H2X_CFGE_CTRL 0x38 96 + #define ASPEED_CFGE_TLP_FIRE BIT(0) 97 + #define ASPEED_H2X_CFGE_RET_DATA 0x3c 98 + #define ASPEED_H2X_REMAP_PREF_ADDR 0x70 99 + #define ASPEED_REMAP_PREF_ADDR_63_32(x) (x) 100 + #define ASPEED_H2X_REMAP_PCI_ADDR_HI 0x74 101 + #define ASPEED_REMAP_PCI_ADDR_63_32(x) (((x) >> 32) & GENMASK(31, 0)) 102 + #define ASPEED_H2X_REMAP_PCI_ADDR_LO 0x78 103 + #define ASPEED_REMAP_PCI_ADDR_31_12(x) ((x) & 
GENMASK(31, 12)) 104 + 105 + /* AST2700 SCU */ 106 + #define ASPEED_SCU_60 0x60 107 + #define ASPEED_RC_E2M_PATH_EN BIT(0) 108 + #define ASPEED_RC_H2XS_PATH_EN BIT(16) 109 + #define ASPEED_RC_H2XD_PATH_EN BIT(17) 110 + #define ASPEED_RC_H2XX_PATH_EN BIT(18) 111 + #define ASPEED_RC_UPSTREAM_MEM_EN BIT(19) 112 + #define ASPEED_SCU_64 0x64 113 + #define ASPEED_RC0_DECODE_DMA_BASE(x) FIELD_PREP(GENMASK(7, 0), x) 114 + #define ASPEED_RC0_DECODE_DMA_LIMIT(x) FIELD_PREP(GENMASK(15, 8), x) 115 + #define ASPEED_RC1_DECODE_DMA_BASE(x) FIELD_PREP(GENMASK(23, 16), x) 116 + #define ASPEED_RC1_DECODE_DMA_LIMIT(x) FIELD_PREP(GENMASK(31, 24), x) 117 + #define ASPEED_SCU_70 0x70 118 + #define ASPEED_DISABLE_EP_FUNC 0 119 + 120 + /* Macro to combine Fmt and Type into the 8-bit field */ 121 + #define ASPEED_TLP_FMT_TYPE(fmt, type) ((((fmt) & 0x7) << 5) | ((type) & 0x1f)) 122 + #define ASPEED_TLP_COMMON_FIELDS GENMASK(31, 24) 123 + 124 + /* Completion status */ 125 + #define CPL_STS(x) FIELD_GET(GENMASK(15, 13), (x)) 126 + /* TLP configuration type 0 and type 1 */ 127 + #define CFG0_READ_FMTTYPE \ 128 + FIELD_PREP(ASPEED_TLP_COMMON_FIELDS, \ 129 + ASPEED_TLP_FMT_TYPE(PCIE_TLP_FMT_3DW_NO_DATA, \ 130 + PCIE_TLP_TYPE_CFG0_RD)) 131 + #define CFG0_WRITE_FMTTYPE \ 132 + FIELD_PREP(ASPEED_TLP_COMMON_FIELDS, \ 133 + ASPEED_TLP_FMT_TYPE(PCIE_TLP_FMT_3DW_DATA, \ 134 + PCIE_TLP_TYPE_CFG0_WR)) 135 + #define CFG1_READ_FMTTYPE \ 136 + FIELD_PREP(ASPEED_TLP_COMMON_FIELDS, \ 137 + ASPEED_TLP_FMT_TYPE(PCIE_TLP_FMT_3DW_NO_DATA, \ 138 + PCIE_TLP_TYPE_CFG1_RD)) 139 + #define CFG1_WRITE_FMTTYPE \ 140 + FIELD_PREP(ASPEED_TLP_COMMON_FIELDS, \ 141 + ASPEED_TLP_FMT_TYPE(PCIE_TLP_FMT_3DW_DATA, \ 142 + PCIE_TLP_TYPE_CFG1_WR)) 143 + #define CFG_PAYLOAD_SIZE 0x01 /* 1 DWORD */ 144 + #define TLP_HEADER_BYTE_EN(x, y) ((GENMASK((x) - 1, 0) << ((y) % 4))) 145 + #define TLP_GET_VALUE(x, y, z) \ 146 + (((x) >> ((((z) % 4)) * 8)) & GENMASK((8 * (y)) - 1, 0)) 147 + #define TLP_SET_VALUE(x, y, z) \ 148 + ((((x) & 
GENMASK((8 * (y)) - 1, 0)) << ((((z) % 4)) * 8))) 149 + #define AST2600_TX_DESC1_VALUE 0x00002000 150 + #define AST2700_TX_DESC1_VALUE 0x00401000 151 + 152 + /** 153 + * struct aspeed_pcie_port - PCIe port information 154 + * @list: port list 155 + * @pcie: pointer to PCIe host info 156 + * @clk: pointer to the port clock gate 157 + * @phy: pointer to PCIe PHY 158 + * @perst: pointer to port reset control 159 + * @slot: port slot 160 + */ 161 + struct aspeed_pcie_port { 162 + struct list_head list; 163 + struct aspeed_pcie *pcie; 164 + struct clk *clk; 165 + struct phy *phy; 166 + struct reset_control *perst; 167 + u32 slot; 168 + }; 169 + 170 + /** 171 + * struct aspeed_pcie - PCIe RC information 172 + * @host: pointer to PCIe host bridge 173 + * @dev: pointer to device structure 174 + * @reg: PCIe host register base address 175 + * @ahbc: pointer to AHHC register map 176 + * @cfg: pointer to Aspeed PCIe configuration register map 177 + * @platform: platform specific information 178 + * @ports: list of PCIe ports 179 + * @tx_tag: current TX tag for the port 180 + * @root_bus_nr: bus number of the host bridge 181 + * @h2xrst: pointer to H2X reset control 182 + * @intx_domain: IRQ domain for INTx interrupts 183 + * @msi_domain: IRQ domain for MSI interrupts 184 + * @lock: mutex to protect MSI bitmap variable 185 + * @msi_irq_in_use: bitmap to track used MSI host IRQs 186 + * @clear_msi_twice: AST2700 workaround to clear MSI status twice 187 + */ 188 + struct aspeed_pcie { 189 + struct pci_host_bridge *host; 190 + struct device *dev; 191 + void __iomem *reg; 192 + struct regmap *ahbc; 193 + struct regmap *cfg; 194 + const struct aspeed_pcie_rc_platform *platform; 195 + struct list_head ports; 196 + 197 + u8 tx_tag; 198 + u8 root_bus_nr; 199 + 200 + struct reset_control *h2xrst; 201 + 202 + struct irq_domain *intx_domain; 203 + struct irq_domain *msi_domain; 204 + struct mutex lock; 205 + DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_HOST_IRQS); 206 + 207 + bool 
clear_msi_twice; /* AST2700 workaround */ 208 + }; 209 + 210 + /** 211 + * struct aspeed_pcie_rc_platform - Platform information 212 + * @setup: initialization function 213 + * @pcie_map_ranges: function to map PCIe address ranges 214 + * @reg_intx_en: INTx enable register offset 215 + * @reg_intx_sts: INTx status register offset 216 + * @reg_msi_en: MSI enable register offset 217 + * @reg_msi_sts: MSI enable register offset 218 + * @msi_address: HW fixed MSI address 219 + */ 220 + struct aspeed_pcie_rc_platform { 221 + int (*setup)(struct platform_device *pdev); 222 + void (*pcie_map_ranges)(struct aspeed_pcie *pcie, u64 pci_addr); 223 + int reg_intx_en; 224 + int reg_intx_sts; 225 + int reg_msi_en; 226 + int reg_msi_sts; 227 + u32 msi_address; 228 + }; 229 + 230 + static void aspeed_pcie_intx_irq_ack(struct irq_data *d) 231 + { 232 + struct aspeed_pcie *pcie = irq_data_get_irq_chip_data(d); 233 + int intx_en = pcie->platform->reg_intx_en; 234 + u32 en; 235 + 236 + en = readl(pcie->reg + intx_en); 237 + en |= BIT(d->hwirq); 238 + writel(en, pcie->reg + intx_en); 239 + } 240 + 241 + static void aspeed_pcie_intx_irq_mask(struct irq_data *d) 242 + { 243 + struct aspeed_pcie *pcie = irq_data_get_irq_chip_data(d); 244 + int intx_en = pcie->platform->reg_intx_en; 245 + u32 en; 246 + 247 + en = readl(pcie->reg + intx_en); 248 + en &= ~BIT(d->hwirq); 249 + writel(en, pcie->reg + intx_en); 250 + } 251 + 252 + static void aspeed_pcie_intx_irq_unmask(struct irq_data *d) 253 + { 254 + struct aspeed_pcie *pcie = irq_data_get_irq_chip_data(d); 255 + int intx_en = pcie->platform->reg_intx_en; 256 + u32 en; 257 + 258 + en = readl(pcie->reg + intx_en); 259 + en |= BIT(d->hwirq); 260 + writel(en, pcie->reg + intx_en); 261 + } 262 + 263 + static struct irq_chip aspeed_intx_irq_chip = { 264 + .name = "INTx", 265 + .irq_ack = aspeed_pcie_intx_irq_ack, 266 + .irq_mask = aspeed_pcie_intx_irq_mask, 267 + .irq_unmask = aspeed_pcie_intx_irq_unmask, 268 + }; 269 + 270 + static int 
aspeed_pcie_intx_map(struct irq_domain *domain, unsigned int irq, 271 + irq_hw_number_t hwirq) 272 + { 273 + irq_set_chip_and_handler(irq, &aspeed_intx_irq_chip, handle_level_irq); 274 + irq_set_chip_data(irq, domain->host_data); 275 + irq_set_status_flags(irq, IRQ_LEVEL); 276 + 277 + return 0; 278 + } 279 + 280 + static const struct irq_domain_ops aspeed_intx_domain_ops = { 281 + .map = aspeed_pcie_intx_map, 282 + }; 283 + 284 + static irqreturn_t aspeed_pcie_intr_handler(int irq, void *dev_id) 285 + { 286 + struct aspeed_pcie *pcie = dev_id; 287 + const struct aspeed_pcie_rc_platform *platform = pcie->platform; 288 + unsigned long status; 289 + unsigned long intx; 290 + u32 bit; 291 + int i; 292 + 293 + intx = FIELD_GET(ASPEED_PCIE_INTX_STS, 294 + readl(pcie->reg + platform->reg_intx_sts)); 295 + for_each_set_bit(bit, &intx, PCI_NUM_INTX) 296 + generic_handle_domain_irq(pcie->intx_domain, bit); 297 + 298 + for (i = 0; i < 2; i++) { 299 + int msi_sts_reg = platform->reg_msi_sts + (i * 4); 300 + 301 + status = readl(pcie->reg + msi_sts_reg); 302 + writel(status, pcie->reg + msi_sts_reg); 303 + 304 + /* 305 + * AST2700 workaround: 306 + * The MSI status needs to clear one more time. 
307 + */ 308 + if (pcie->clear_msi_twice) 309 + writel(status, pcie->reg + msi_sts_reg); 310 + 311 + for_each_set_bit(bit, &status, 32) { 312 + bit += (i * 32); 313 + generic_handle_domain_irq(pcie->msi_domain, bit); 314 + } 315 + } 316 + 317 + return IRQ_HANDLED; 318 + } 319 + 320 + static u32 aspeed_pcie_get_bdf_offset(struct pci_bus *bus, unsigned int devfn, 321 + int where) 322 + { 323 + return ((bus->number) << 24) | (PCI_SLOT(devfn) << 19) | 324 + (PCI_FUNC(devfn) << 16) | (where & ~3); 325 + } 326 + 327 + static int aspeed_ast2600_conf(struct pci_bus *bus, unsigned int devfn, 328 + int where, int size, u32 *val, u32 fmt_type, 329 + bool write) 330 + { 331 + struct aspeed_pcie *pcie = bus->sysdata; 332 + u32 bdf_offset, cfg_val, isr; 333 + int ret; 334 + 335 + bdf_offset = aspeed_pcie_get_bdf_offset(bus, devfn, where); 336 + 337 + /* Driver may set unlock RX buffer before triggering next TX config */ 338 + cfg_val = readl(pcie->reg + ASPEED_H2X_DEV_CTRL); 339 + writel(ASPEED_PCIE_UNLOCK_RX_BUFF | cfg_val, 340 + pcie->reg + ASPEED_H2X_DEV_CTRL); 341 + 342 + cfg_val = fmt_type | CFG_PAYLOAD_SIZE; 343 + writel(cfg_val, pcie->reg + ASPEED_H2X_TX_DESC0); 344 + 345 + cfg_val = AST2600_TX_DESC1_VALUE | 346 + FIELD_PREP(GENMASK(11, 8), pcie->tx_tag) | 347 + TLP_HEADER_BYTE_EN(size, where); 348 + writel(cfg_val, pcie->reg + ASPEED_H2X_TX_DESC1); 349 + 350 + writel(bdf_offset, pcie->reg + ASPEED_H2X_TX_DESC2); 351 + writel(0, pcie->reg + ASPEED_H2X_TX_DESC3); 352 + if (write) 353 + writel(TLP_SET_VALUE(*val, size, where), 354 + pcie->reg + ASPEED_H2X_TX_DESC_DATA); 355 + 356 + cfg_val = readl(pcie->reg + ASPEED_H2X_STS); 357 + cfg_val |= ASPEED_PCIE_TRIGGER_TX; 358 + writel(cfg_val, pcie->reg + ASPEED_H2X_STS); 359 + 360 + ret = readl_poll_timeout(pcie->reg + ASPEED_H2X_STS, cfg_val, 361 + (cfg_val & ASPEED_PCIE_TX_IDLE), 0, 50); 362 + if (ret) { 363 + dev_err(pcie->dev, 364 + "%02x:%02x.%d CR tx timeout sts: 0x%08x\n", 365 + bus->number, PCI_SLOT(devfn), 
PCI_FUNC(devfn), cfg_val); 366 + ret = PCIBIOS_SET_FAILED; 367 + PCI_SET_ERROR_RESPONSE(val); 368 + goto out; 369 + } 370 + 371 + cfg_val = readl(pcie->reg + ASPEED_H2X_INT_STS); 372 + cfg_val |= ASPEED_PCIE_TX_IDLE_CLEAR; 373 + writel(cfg_val, pcie->reg + ASPEED_H2X_INT_STS); 374 + 375 + cfg_val = readl(pcie->reg + ASPEED_H2X_STS); 376 + switch (cfg_val & ASPEED_PCIE_STATUS_OF_TX) { 377 + case ASPEED_PCIE_RC_H_TX_COMPLETE: 378 + ret = readl_poll_timeout(pcie->reg + ASPEED_H2X_DEV_STS, isr, 379 + (isr & ASPEED_PCIE_RC_RX_DONE_ISR), 0, 380 + 50); 381 + if (ret) { 382 + dev_err(pcie->dev, 383 + "%02x:%02x.%d CR rx timeout sts: 0x%08x\n", 384 + bus->number, PCI_SLOT(devfn), 385 + PCI_FUNC(devfn), isr); 386 + ret = PCIBIOS_SET_FAILED; 387 + PCI_SET_ERROR_RESPONSE(val); 388 + goto out; 389 + } 390 + if (!write) { 391 + cfg_val = readl(pcie->reg + ASPEED_H2X_DEV_RX_DESC1); 392 + if (CPL_STS(cfg_val) != PCIE_CPL_STS_SUCCESS) { 393 + ret = PCIBIOS_SET_FAILED; 394 + PCI_SET_ERROR_RESPONSE(val); 395 + goto out; 396 + } else { 397 + *val = readl(pcie->reg + 398 + ASPEED_H2X_DEV_RX_DESC_DATA); 399 + } 400 + } 401 + break; 402 + case ASPEED_PCIE_STATUS_OF_TX: 403 + ret = PCIBIOS_SET_FAILED; 404 + PCI_SET_ERROR_RESPONSE(val); 405 + goto out; 406 + default: 407 + *val = readl(pcie->reg + ASPEED_H2X_HOST_RX_DESC_DATA); 408 + break; 409 + } 410 + 411 + cfg_val = readl(pcie->reg + ASPEED_H2X_DEV_CTRL); 412 + cfg_val |= ASPEED_PCIE_UNLOCK_RX_BUFF; 413 + writel(cfg_val, pcie->reg + ASPEED_H2X_DEV_CTRL); 414 + 415 + *val = TLP_GET_VALUE(*val, size, where); 416 + 417 + ret = PCIBIOS_SUCCESSFUL; 418 + out: 419 + cfg_val = readl(pcie->reg + ASPEED_H2X_DEV_STS); 420 + writel(cfg_val, pcie->reg + ASPEED_H2X_DEV_STS); 421 + pcie->tx_tag = (pcie->tx_tag + 1) % 0x8; 422 + return ret; 423 + } 424 + 425 + static int aspeed_ast2600_rd_conf(struct pci_bus *bus, unsigned int devfn, 426 + int where, int size, u32 *val) 427 + { 428 + /* 429 + * AST2600 has only one Root Port on the root bus. 
430 + */ 431 + if (PCI_SLOT(devfn) != 8) 432 + return PCIBIOS_DEVICE_NOT_FOUND; 433 + 434 + return aspeed_ast2600_conf(bus, devfn, where, size, val, 435 + CFG0_READ_FMTTYPE, false); 436 + } 437 + 438 + static int aspeed_ast2600_child_rd_conf(struct pci_bus *bus, unsigned int devfn, 439 + int where, int size, u32 *val) 440 + { 441 + return aspeed_ast2600_conf(bus, devfn, where, size, val, 442 + CFG1_READ_FMTTYPE, false); 443 + } 444 + 445 + static int aspeed_ast2600_wr_conf(struct pci_bus *bus, unsigned int devfn, 446 + int where, int size, u32 val) 447 + { 448 + /* 449 + * AST2600 has only one Root Port on the root bus. 450 + */ 451 + if (PCI_SLOT(devfn) != 8) 452 + return PCIBIOS_DEVICE_NOT_FOUND; 453 + 454 + return aspeed_ast2600_conf(bus, devfn, where, size, &val, 455 + CFG0_WRITE_FMTTYPE, true); 456 + } 457 + 458 + static int aspeed_ast2600_child_wr_conf(struct pci_bus *bus, unsigned int devfn, 459 + int where, int size, u32 val) 460 + { 461 + return aspeed_ast2600_conf(bus, devfn, where, size, &val, 462 + CFG1_WRITE_FMTTYPE, true); 463 + } 464 + 465 + static int aspeed_ast2700_config(struct pci_bus *bus, unsigned int devfn, 466 + int where, int size, u32 *val, bool write) 467 + { 468 + struct aspeed_pcie *pcie = bus->sysdata; 469 + u32 cfg_val; 470 + 471 + cfg_val = ASPEED_CFGI_BYTE_EN(TLP_HEADER_BYTE_EN(size, where)) | 472 + (where & ~3); 473 + if (write) 474 + cfg_val |= ASPEED_CFGI_WRITE; 475 + writel(cfg_val, pcie->reg + ASPEED_H2X_CFGI_TLP); 476 + 477 + writel(TLP_SET_VALUE(*val, size, where), 478 + pcie->reg + ASPEED_H2X_CFGI_WR_DATA); 479 + writel(ASPEED_CFGI_TLP_FIRE, pcie->reg + ASPEED_H2X_CFGI_CTRL); 480 + *val = readl(pcie->reg + ASPEED_H2X_CFGI_RET_DATA); 481 + *val = TLP_GET_VALUE(*val, size, where); 482 + 483 + return PCIBIOS_SUCCESSFUL; 484 + } 485 + 486 + static int aspeed_ast2700_child_config(struct pci_bus *bus, unsigned int devfn, 487 + int where, int size, u32 *val, 488 + bool write) 489 + { 490 + struct aspeed_pcie *pcie = bus->sysdata; 
491 + u32 bdf_offset, status, cfg_val; 492 + int ret; 493 + 494 + bdf_offset = aspeed_pcie_get_bdf_offset(bus, devfn, where); 495 + 496 + cfg_val = CFG_PAYLOAD_SIZE; 497 + if (write) 498 + cfg_val |= (bus->number == (pcie->root_bus_nr + 1)) ? 499 + CFG0_WRITE_FMTTYPE : 500 + CFG1_WRITE_FMTTYPE; 501 + else 502 + cfg_val |= (bus->number == (pcie->root_bus_nr + 1)) ? 503 + CFG0_READ_FMTTYPE : 504 + CFG1_READ_FMTTYPE; 505 + writel(cfg_val, pcie->reg + ASPEED_H2X_CFGE_TLP_1ST); 506 + 507 + cfg_val = AST2700_TX_DESC1_VALUE | 508 + FIELD_PREP(GENMASK(11, 8), pcie->tx_tag) | 509 + TLP_HEADER_BYTE_EN(size, where); 510 + writel(cfg_val, pcie->reg + ASPEED_H2X_CFGE_TLP_NEXT); 511 + 512 + writel(bdf_offset, pcie->reg + ASPEED_H2X_CFGE_TLP_NEXT); 513 + if (write) 514 + writel(TLP_SET_VALUE(*val, size, where), 515 + pcie->reg + ASPEED_H2X_CFGE_TLP_NEXT); 516 + writel(ASPEED_CFGE_TX_IDLE | ASPEED_CFGE_RX_BUSY, 517 + pcie->reg + ASPEED_H2X_CFGE_INT_STS); 518 + writel(ASPEED_CFGE_TLP_FIRE, pcie->reg + ASPEED_H2X_CFGE_CTRL); 519 + 520 + ret = readl_poll_timeout(pcie->reg + ASPEED_H2X_CFGE_INT_STS, status, 521 + (status & ASPEED_CFGE_TX_IDLE), 0, 50); 522 + if (ret) { 523 + dev_err(pcie->dev, 524 + "%02x:%02x.%d CR tx timeout sts: 0x%08x\n", 525 + bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn), status); 526 + ret = PCIBIOS_SET_FAILED; 527 + PCI_SET_ERROR_RESPONSE(val); 528 + goto out; 529 + } 530 + 531 + ret = readl_poll_timeout(pcie->reg + ASPEED_H2X_CFGE_INT_STS, status, 532 + (status & ASPEED_CFGE_RX_BUSY), 0, 50); 533 + if (ret) { 534 + dev_err(pcie->dev, 535 + "%02x:%02x.%d CR rx timeout sts: 0x%08x\n", 536 + bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn), status); 537 + ret = PCIBIOS_SET_FAILED; 538 + PCI_SET_ERROR_RESPONSE(val); 539 + goto out; 540 + } 541 + *val = readl(pcie->reg + ASPEED_H2X_CFGE_RET_DATA); 542 + *val = TLP_GET_VALUE(*val, size, where); 543 + 544 + ret = PCIBIOS_SUCCESSFUL; 545 + out: 546 + writel(status, pcie->reg + ASPEED_H2X_CFGE_INT_STS); 547 + 
pcie->tx_tag = (pcie->tx_tag + 1) % 0xf; 548 + return ret; 549 + } 550 + 551 + static int aspeed_ast2700_rd_conf(struct pci_bus *bus, unsigned int devfn, 552 + int where, int size, u32 *val) 553 + { 554 + /* 555 + * AST2700 has only one Root Port on the root bus. 556 + */ 557 + if (devfn != 0) 558 + return PCIBIOS_DEVICE_NOT_FOUND; 559 + 560 + return aspeed_ast2700_config(bus, devfn, where, size, val, false); 561 + } 562 + 563 + static int aspeed_ast2700_child_rd_conf(struct pci_bus *bus, unsigned int devfn, 564 + int where, int size, u32 *val) 565 + { 566 + return aspeed_ast2700_child_config(bus, devfn, where, size, val, false); 567 + } 568 + 569 + static int aspeed_ast2700_wr_conf(struct pci_bus *bus, unsigned int devfn, 570 + int where, int size, u32 val) 571 + { 572 + /* 573 + * AST2700 has only one Root Port on the root bus. 574 + */ 575 + if (devfn != 0) 576 + return PCIBIOS_DEVICE_NOT_FOUND; 577 + 578 + return aspeed_ast2700_config(bus, devfn, where, size, &val, true); 579 + } 580 + 581 + static int aspeed_ast2700_child_wr_conf(struct pci_bus *bus, unsigned int devfn, 582 + int where, int size, u32 val) 583 + { 584 + return aspeed_ast2700_child_config(bus, devfn, where, size, &val, true); 585 + } 586 + 587 + static struct pci_ops aspeed_ast2600_pcie_ops = { 588 + .read = aspeed_ast2600_rd_conf, 589 + .write = aspeed_ast2600_wr_conf, 590 + }; 591 + 592 + static struct pci_ops aspeed_ast2600_pcie_child_ops = { 593 + .read = aspeed_ast2600_child_rd_conf, 594 + .write = aspeed_ast2600_child_wr_conf, 595 + }; 596 + 597 + static struct pci_ops aspeed_ast2700_pcie_ops = { 598 + .read = aspeed_ast2700_rd_conf, 599 + .write = aspeed_ast2700_wr_conf, 600 + }; 601 + 602 + static struct pci_ops aspeed_ast2700_pcie_child_ops = { 603 + .read = aspeed_ast2700_child_rd_conf, 604 + .write = aspeed_ast2700_child_wr_conf, 605 + }; 606 + 607 + static void aspeed_irq_compose_msi_msg(struct irq_data *data, 608 + struct msi_msg *msg) 609 + { 610 + struct aspeed_pcie *pcie = 
irq_data_get_irq_chip_data(data); 611 + 612 + msg->address_hi = 0; 613 + msg->address_lo = pcie->platform->msi_address; 614 + msg->data = data->hwirq; 615 + } 616 + 617 + static struct irq_chip aspeed_msi_bottom_irq_chip = { 618 + .name = "ASPEED MSI", 619 + .irq_compose_msi_msg = aspeed_irq_compose_msi_msg, 620 + }; 621 + 622 + static int aspeed_irq_msi_domain_alloc(struct irq_domain *domain, 623 + unsigned int virq, unsigned int nr_irqs, 624 + void *args) 625 + { 626 + struct aspeed_pcie *pcie = domain->host_data; 627 + int bit; 628 + int i; 629 + 630 + guard(mutex)(&pcie->lock); 631 + 632 + bit = bitmap_find_free_region(pcie->msi_irq_in_use, MAX_MSI_HOST_IRQS, 633 + get_count_order(nr_irqs)); 634 + 635 + if (bit < 0) 636 + return -ENOSPC; 637 + 638 + for (i = 0; i < nr_irqs; i++) { 639 + irq_domain_set_info(domain, virq + i, bit + i, 640 + &aspeed_msi_bottom_irq_chip, 641 + domain->host_data, handle_simple_irq, NULL, 642 + NULL); 643 + } 644 + 645 + return 0; 646 + } 647 + 648 + static void aspeed_irq_msi_domain_free(struct irq_domain *domain, 649 + unsigned int virq, unsigned int nr_irqs) 650 + { 651 + struct irq_data *data = irq_domain_get_irq_data(domain, virq); 652 + struct aspeed_pcie *pcie = irq_data_get_irq_chip_data(data); 653 + 654 + guard(mutex)(&pcie->lock); 655 + 656 + bitmap_release_region(pcie->msi_irq_in_use, data->hwirq, 657 + get_count_order(nr_irqs)); 658 + } 659 + 660 + static const struct irq_domain_ops aspeed_msi_domain_ops = { 661 + .alloc = aspeed_irq_msi_domain_alloc, 662 + .free = aspeed_irq_msi_domain_free, 663 + }; 664 + 665 + #define ASPEED_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \ 666 + MSI_FLAG_USE_DEF_CHIP_OPS | \ 667 + MSI_FLAG_NO_AFFINITY) 668 + 669 + #define ASPEED_MSI_FLAGS_SUPPORTED (MSI_GENERIC_FLAGS_MASK | \ 670 + MSI_FLAG_MULTI_PCI_MSI | \ 671 + MSI_FLAG_PCI_MSIX) 672 + 673 + static const struct msi_parent_ops aspeed_msi_parent_ops = { 674 + .required_flags = ASPEED_MSI_FLAGS_REQUIRED, 675 + .supported_flags = 
ASPEED_MSI_FLAGS_SUPPORTED, 676 + .bus_select_token = DOMAIN_BUS_PCI_MSI, 677 + .chip_flags = MSI_CHIP_FLAG_SET_ACK, 678 + .prefix = "ASPEED-", 679 + .init_dev_msi_info = msi_lib_init_dev_msi_info, 680 + }; 681 + 682 + static int aspeed_pcie_msi_init(struct aspeed_pcie *pcie) 683 + { 684 + writel(~0, pcie->reg + pcie->platform->reg_msi_en); 685 + writel(~0, pcie->reg + pcie->platform->reg_msi_en + 0x04); 686 + writel(~0, pcie->reg + pcie->platform->reg_msi_sts); 687 + writel(~0, pcie->reg + pcie->platform->reg_msi_sts + 0x04); 688 + 689 + struct irq_domain_info info = { 690 + .fwnode = dev_fwnode(pcie->dev), 691 + .ops = &aspeed_msi_domain_ops, 692 + .host_data = pcie, 693 + .size = MAX_MSI_HOST_IRQS, 694 + }; 695 + 696 + pcie->msi_domain = msi_create_parent_irq_domain(&info, 697 + &aspeed_msi_parent_ops); 698 + if (!pcie->msi_domain) 699 + return dev_err_probe(pcie->dev, -ENOMEM, 700 + "failed to create MSI domain\n"); 701 + 702 + return 0; 703 + } 704 + 705 + static void aspeed_pcie_msi_free(struct aspeed_pcie *pcie) 706 + { 707 + if (pcie->msi_domain) { 708 + irq_domain_remove(pcie->msi_domain); 709 + pcie->msi_domain = NULL; 710 + } 711 + } 712 + 713 + static void aspeed_pcie_irq_domain_free(void *d) 714 + { 715 + struct aspeed_pcie *pcie = d; 716 + 717 + if (pcie->intx_domain) { 718 + irq_domain_remove(pcie->intx_domain); 719 + pcie->intx_domain = NULL; 720 + } 721 + aspeed_pcie_msi_free(pcie); 722 + } 723 + 724 + static int aspeed_pcie_init_irq_domain(struct aspeed_pcie *pcie) 725 + { 726 + int ret; 727 + 728 + pcie->intx_domain = irq_domain_add_linear(pcie->dev->of_node, 729 + PCI_NUM_INTX, 730 + &aspeed_intx_domain_ops, 731 + pcie); 732 + if (!pcie->intx_domain) { 733 + ret = dev_err_probe(pcie->dev, -ENOMEM, 734 + "failed to get INTx IRQ domain\n"); 735 + goto err; 736 + } 737 + 738 + writel(0, pcie->reg + pcie->platform->reg_intx_en); 739 + writel(~0, pcie->reg + pcie->platform->reg_intx_sts); 740 + 741 + ret = aspeed_pcie_msi_init(pcie); 742 + if (ret) 
743 + goto err; 744 + 745 + return 0; 746 + err: 747 + aspeed_pcie_irq_domain_free(pcie); 748 + return ret; 749 + } 750 + 751 + static int aspeed_pcie_port_init(struct aspeed_pcie_port *port) 752 + { 753 + struct aspeed_pcie *pcie = port->pcie; 754 + struct device *dev = pcie->dev; 755 + int ret; 756 + 757 + ret = clk_prepare_enable(port->clk); 758 + if (ret) 759 + return dev_err_probe(dev, ret, 760 + "failed to set clock for slot (%d)\n", 761 + port->slot); 762 + 763 + ret = phy_init(port->phy); 764 + if (ret) 765 + return dev_err_probe(dev, ret, 766 + "failed to init phy pcie for slot (%d)\n", 767 + port->slot); 768 + 769 + ret = phy_set_mode_ext(port->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC); 770 + if (ret) 771 + return dev_err_probe(dev, ret, 772 + "failed to set phy mode for slot (%d)\n", 773 + port->slot); 774 + 775 + reset_control_deassert(port->perst); 776 + msleep(PCIE_RESET_CONFIG_WAIT_MS); 777 + 778 + return 0; 779 + } 780 + 781 + static void aspeed_host_reset(struct aspeed_pcie *pcie) 782 + { 783 + reset_control_assert(pcie->h2xrst); 784 + mdelay(ASPEED_RESET_RC_WAIT_MS); 785 + reset_control_deassert(pcie->h2xrst); 786 + } 787 + 788 + static void aspeed_pcie_map_ranges(struct aspeed_pcie *pcie) 789 + { 790 + struct pci_host_bridge *bridge = pcie->host; 791 + struct resource_entry *window; 792 + 793 + resource_list_for_each_entry(window, &bridge->windows) { 794 + u64 pci_addr; 795 + 796 + if (resource_type(window->res) != IORESOURCE_MEM) 797 + continue; 798 + 799 + pci_addr = window->res->start - window->offset; 800 + pcie->platform->pcie_map_ranges(pcie, pci_addr); 801 + break; 802 + } 803 + } 804 + 805 + static void aspeed_ast2600_pcie_map_ranges(struct aspeed_pcie *pcie, 806 + u64 pci_addr) 807 + { 808 + u32 pci_addr_lo = pci_addr & GENMASK(31, 0); 809 + u32 pci_addr_hi = (pci_addr >> 32) & GENMASK(31, 0); 810 + 811 + pci_addr_lo >>= 16; 812 + writel(ASPEED_AHB_REMAP_LO_ADDR(pci_addr_lo) | 813 + ASPEED_AHB_MASK_LO_ADDR(0xe00), 814 + pcie->reg + 
ASPEED_H2X_AHB_ADDR_CONFIG0); 815 + writel(ASPEED_AHB_REMAP_HI_ADDR(pci_addr_hi), 816 + pcie->reg + ASPEED_H2X_AHB_ADDR_CONFIG1); 817 + writel(ASPEED_AHB_MASK_HI_ADDR(~0), 818 + pcie->reg + ASPEED_H2X_AHB_ADDR_CONFIG2); 819 + } 820 + 821 + static int aspeed_ast2600_setup(struct platform_device *pdev) 822 + { 823 + struct aspeed_pcie *pcie = platform_get_drvdata(pdev); 824 + struct device *dev = pcie->dev; 825 + 826 + pcie->ahbc = syscon_regmap_lookup_by_phandle(dev->of_node, 827 + "aspeed,ahbc"); 828 + if (IS_ERR(pcie->ahbc)) 829 + return dev_err_probe(dev, PTR_ERR(pcie->ahbc), 830 + "failed to map ahbc base\n"); 831 + 832 + aspeed_host_reset(pcie); 833 + 834 + regmap_write(pcie->ahbc, ASPEED_AHBC_KEY, ASPEED_AHBC_UNLOCK_KEY); 835 + regmap_update_bits(pcie->ahbc, ASPEED_AHBC_ADDR_MAPPING, 836 + ASPEED_PCIE_RC_MEMORY_EN, ASPEED_PCIE_RC_MEMORY_EN); 837 + regmap_write(pcie->ahbc, ASPEED_AHBC_KEY, ASPEED_AHBC_UNLOCK); 838 + 839 + writel(ASPEED_H2X_BRIDGE_EN, pcie->reg + ASPEED_H2X_CTRL); 840 + 841 + writel(ASPEED_PCIE_RX_DMA_EN | ASPEED_PCIE_RX_LINEAR | 842 + ASPEED_PCIE_RX_MSI_SEL | ASPEED_PCIE_RX_MSI_EN | 843 + ASPEED_PCIE_WAIT_RX_TLP_CLR | ASPEED_PCIE_RC_RX_ENABLE | 844 + ASPEED_PCIE_RC_ENABLE, 845 + pcie->reg + ASPEED_H2X_DEV_CTRL); 846 + 847 + writel(ASPEED_RC_TLP_TX_TAG_NUM, pcie->reg + ASPEED_H2X_DEV_TX_TAG); 848 + 849 + pcie->host->ops = &aspeed_ast2600_pcie_ops; 850 + pcie->host->child_ops = &aspeed_ast2600_pcie_child_ops; 851 + 852 + return 0; 853 + } 854 + 855 + static void aspeed_ast2700_pcie_map_ranges(struct aspeed_pcie *pcie, 856 + u64 pci_addr) 857 + { 858 + writel(ASPEED_REMAP_PCI_ADDR_31_12(pci_addr), 859 + pcie->reg + ASPEED_H2X_REMAP_PCI_ADDR_LO); 860 + writel(ASPEED_REMAP_PCI_ADDR_63_32(pci_addr), 861 + pcie->reg + ASPEED_H2X_REMAP_PCI_ADDR_HI); 862 + } 863 + 864 + static int aspeed_ast2700_setup(struct platform_device *pdev) 865 + { 866 + struct aspeed_pcie *pcie = platform_get_drvdata(pdev); 867 + struct device *dev = pcie->dev; 868 + 869 + 
pcie->cfg = syscon_regmap_lookup_by_phandle(dev->of_node, 870 + "aspeed,pciecfg"); 871 + if (IS_ERR(pcie->cfg)) 872 + return dev_err_probe(dev, PTR_ERR(pcie->cfg), 873 + "failed to map pciecfg base\n"); 874 + 875 + regmap_update_bits(pcie->cfg, ASPEED_SCU_60, 876 + ASPEED_RC_E2M_PATH_EN | ASPEED_RC_H2XS_PATH_EN | 877 + ASPEED_RC_H2XD_PATH_EN | ASPEED_RC_H2XX_PATH_EN | 878 + ASPEED_RC_UPSTREAM_MEM_EN, 879 + ASPEED_RC_E2M_PATH_EN | ASPEED_RC_H2XS_PATH_EN | 880 + ASPEED_RC_H2XD_PATH_EN | ASPEED_RC_H2XX_PATH_EN | 881 + ASPEED_RC_UPSTREAM_MEM_EN); 882 + regmap_write(pcie->cfg, ASPEED_SCU_64, 883 + ASPEED_RC0_DECODE_DMA_BASE(0) | 884 + ASPEED_RC0_DECODE_DMA_LIMIT(0xff) | 885 + ASPEED_RC1_DECODE_DMA_BASE(0) | 886 + ASPEED_RC1_DECODE_DMA_LIMIT(0xff)); 887 + regmap_write(pcie->cfg, ASPEED_SCU_70, ASPEED_DISABLE_EP_FUNC); 888 + 889 + aspeed_host_reset(pcie); 890 + 891 + writel(0, pcie->reg + ASPEED_H2X_CTRL); 892 + writel(ASPEED_H2X_BRIDGE_EN | ASPEED_H2X_BRIDGE_DIRECT_EN, 893 + pcie->reg + ASPEED_H2X_CTRL); 894 + 895 + /* Prepare for 64-bit BAR pref */ 896 + writel(ASPEED_REMAP_PREF_ADDR_63_32(0x3), 897 + pcie->reg + ASPEED_H2X_REMAP_PREF_ADDR); 898 + 899 + pcie->host->ops = &aspeed_ast2700_pcie_ops; 900 + pcie->host->child_ops = &aspeed_ast2700_pcie_child_ops; 901 + pcie->clear_msi_twice = true; 902 + 903 + return 0; 904 + } 905 + 906 + static void aspeed_pcie_reset_release(void *d) 907 + { 908 + struct reset_control *perst = d; 909 + 910 + if (!perst) 911 + return; 912 + 913 + reset_control_put(perst); 914 + } 915 + 916 + static int aspeed_pcie_parse_port(struct aspeed_pcie *pcie, 917 + struct device_node *node, 918 + int slot) 919 + { 920 + struct aspeed_pcie_port *port; 921 + struct device *dev = pcie->dev; 922 + int ret; 923 + 924 + port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL); 925 + if (!port) 926 + return -ENOMEM; 927 + 928 + port->clk = devm_get_clk_from_child(dev, node, NULL); 929 + if (IS_ERR(port->clk)) 930 + return dev_err_probe(dev, PTR_ERR(port->clk), 
931 + "failed to get pcie%d clock\n", slot); 932 + 933 + port->phy = devm_of_phy_get(dev, node, NULL); 934 + if (IS_ERR(port->phy)) 935 + return dev_err_probe(dev, PTR_ERR(port->phy), 936 + "failed to get phy pcie%d\n", slot); 937 + 938 + port->perst = of_reset_control_get_exclusive(node, "perst"); 939 + if (IS_ERR(port->perst)) 940 + return dev_err_probe(dev, PTR_ERR(port->perst), 941 + "failed to get pcie%d reset control\n", 942 + slot); 943 + ret = devm_add_action_or_reset(dev, aspeed_pcie_reset_release, 944 + port->perst); 945 + if (ret) 946 + return ret; 947 + reset_control_assert(port->perst); 948 + 949 + port->slot = slot; 950 + port->pcie = pcie; 951 + 952 + INIT_LIST_HEAD(&port->list); 953 + list_add_tail(&port->list, &pcie->ports); 954 + 955 + ret = aspeed_pcie_port_init(port); 956 + if (ret) 957 + return ret; 958 + 959 + return 0; 960 + } 961 + 962 + static int aspeed_pcie_parse_dt(struct aspeed_pcie *pcie) 963 + { 964 + struct device *dev = pcie->dev; 965 + struct device_node *node = dev->of_node; 966 + int ret; 967 + 968 + for_each_available_child_of_node_scoped(node, child) { 969 + int slot; 970 + const char *type; 971 + 972 + ret = of_property_read_string(child, "device_type", &type); 973 + if (ret || strcmp(type, "pci")) 974 + continue; 975 + 976 + ret = of_pci_get_devfn(child); 977 + if (ret < 0) 978 + return dev_err_probe(dev, ret, 979 + "failed to parse devfn\n"); 980 + 981 + slot = PCI_SLOT(ret); 982 + 983 + ret = aspeed_pcie_parse_port(pcie, child, slot); 984 + if (ret) 985 + return ret; 986 + } 987 + 988 + if (list_empty(&pcie->ports)) 989 + return dev_err_probe(dev, -ENODEV, 990 + "No PCIe port found in DT\n"); 991 + 992 + return 0; 993 + } 994 + 995 + static int aspeed_pcie_probe(struct platform_device *pdev) 996 + { 997 + struct device *dev = &pdev->dev; 998 + struct pci_host_bridge *host; 999 + struct aspeed_pcie *pcie; 1000 + struct resource_entry *entry; 1001 + const struct aspeed_pcie_rc_platform *md; 1002 + int irq, ret; 1003 + 1004 + 
md = of_device_get_match_data(dev); 1005 + if (!md) 1006 + return -ENODEV; 1007 + 1008 + host = devm_pci_alloc_host_bridge(dev, sizeof(*pcie)); 1009 + if (!host) 1010 + return -ENOMEM; 1011 + 1012 + pcie = pci_host_bridge_priv(host); 1013 + pcie->dev = dev; 1014 + pcie->tx_tag = 0; 1015 + platform_set_drvdata(pdev, pcie); 1016 + 1017 + pcie->platform = md; 1018 + pcie->host = host; 1019 + INIT_LIST_HEAD(&pcie->ports); 1020 + 1021 + /* Get root bus num for cfg command to decide tlp type 0 or type 1 */ 1022 + entry = resource_list_first_type(&host->windows, IORESOURCE_BUS); 1023 + if (entry) 1024 + pcie->root_bus_nr = entry->res->start; 1025 + 1026 + pcie->reg = devm_platform_ioremap_resource(pdev, 0); 1027 + if (IS_ERR(pcie->reg)) 1028 + return PTR_ERR(pcie->reg); 1029 + 1030 + pcie->h2xrst = devm_reset_control_get_exclusive(dev, "h2x"); 1031 + if (IS_ERR(pcie->h2xrst)) 1032 + return dev_err_probe(dev, PTR_ERR(pcie->h2xrst), 1033 + "failed to get h2x reset\n"); 1034 + 1035 + ret = devm_mutex_init(dev, &pcie->lock); 1036 + if (ret) 1037 + return dev_err_probe(dev, ret, "failed to init mutex\n"); 1038 + 1039 + ret = pcie->platform->setup(pdev); 1040 + if (ret) 1041 + return dev_err_probe(dev, ret, "failed to setup PCIe RC\n"); 1042 + 1043 + aspeed_pcie_map_ranges(pcie); 1044 + 1045 + ret = aspeed_pcie_parse_dt(pcie); 1046 + if (ret) 1047 + return ret; 1048 + 1049 + host->sysdata = pcie; 1050 + 1051 + ret = aspeed_pcie_init_irq_domain(pcie); 1052 + if (ret) 1053 + return ret; 1054 + 1055 + irq = platform_get_irq(pdev, 0); 1056 + if (irq < 0) 1057 + return irq; 1058 + 1059 + ret = devm_add_action_or_reset(dev, aspeed_pcie_irq_domain_free, pcie); 1060 + if (ret) 1061 + return ret; 1062 + 1063 + ret = devm_request_irq(dev, irq, aspeed_pcie_intr_handler, IRQF_SHARED, 1064 + dev_name(dev), pcie); 1065 + if (ret) 1066 + return ret; 1067 + 1068 + return pci_host_probe(host); 1069 + } 1070 + 1071 + static const struct aspeed_pcie_rc_platform pcie_rc_ast2600 = { 1072 + .setup = 
aspeed_ast2600_setup, 1073 + .pcie_map_ranges = aspeed_ast2600_pcie_map_ranges, 1074 + .reg_intx_en = 0xc4, 1075 + .reg_intx_sts = 0xc8, 1076 + .reg_msi_en = 0xe0, 1077 + .reg_msi_sts = 0xe8, 1078 + .msi_address = 0x1e77005c, 1079 + }; 1080 + 1081 + static const struct aspeed_pcie_rc_platform pcie_rc_ast2700 = { 1082 + .setup = aspeed_ast2700_setup, 1083 + .pcie_map_ranges = aspeed_ast2700_pcie_map_ranges, 1084 + .reg_intx_en = 0x40, 1085 + .reg_intx_sts = 0x48, 1086 + .reg_msi_en = 0x50, 1087 + .reg_msi_sts = 0x58, 1088 + .msi_address = 0x000000f0, 1089 + }; 1090 + 1091 + static const struct of_device_id aspeed_pcie_of_match[] = { 1092 + { .compatible = "aspeed,ast2600-pcie", .data = &pcie_rc_ast2600 }, 1093 + { .compatible = "aspeed,ast2700-pcie", .data = &pcie_rc_ast2700 }, 1094 + {} 1095 + }; 1096 + 1097 + static struct platform_driver aspeed_pcie_driver = { 1098 + .driver = { 1099 + .name = "aspeed-pcie", 1100 + .of_match_table = aspeed_pcie_of_match, 1101 + .suppress_bind_attrs = true, 1102 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 1103 + }, 1104 + .probe = aspeed_pcie_probe, 1105 + }; 1106 + 1107 + builtin_platform_driver(aspeed_pcie_driver); 1108 + 1109 + MODULE_AUTHOR("Jacky Chou <jacky_chou@aspeedtech.com>"); 1110 + MODULE_DESCRIPTION("ASPEED PCIe Root Complex"); 1111 + MODULE_LICENSE("GPL");
+3 -1
drivers/pci/controller/pcie-mediatek.c
··· 585 585 586 586 if (IS_ENABLED(CONFIG_PCI_MSI)) { 587 587 ret = mtk_pcie_allocate_msi_domains(port); 588 - if (ret) 588 + if (ret) { 589 + irq_domain_remove(port->irq_domain); 589 590 return ret; 591 + } 590 592 } 591 593 592 594 return 0;
+9 -28
drivers/pci/controller/pcie-rzg3s-host.c
··· 73 73 #define RZG3S_PCI_PINTRCVIE_INTX(i) BIT(i) 74 74 #define RZG3S_PCI_PINTRCVIE_MSI BIT(4) 75 75 76 + /* Register is R/W1C, it doesn't require locking. */ 76 77 #define RZG3S_PCI_PINTRCVIS 0x114 77 78 #define RZG3S_PCI_PINTRCVIS_INTX(i) BIT(i) 78 79 #define RZG3S_PCI_PINTRCVIS_MSI BIT(4) ··· 115 114 #define RZG3S_PCI_MSIRE_ENA BIT(0) 116 115 117 116 #define RZG3S_PCI_MSIRM(id) (0x608 + (id) * 0x10) 117 + 118 + /* Register is R/W1C, it doesn't require locking. */ 118 119 #define RZG3S_PCI_MSIRS(id) (0x60c + (id) * 0x10) 119 120 120 121 #define RZG3S_PCI_AWBASEL(id) (0x1000 + (id) * 0x20) ··· 442 439 return host->pcie + where; 443 440 } 444 441 445 - /* Serialized by 'pci_lock' */ 446 - static int rzg3s_pcie_root_write(struct pci_bus *bus, unsigned int devfn, 447 - int where, int size, u32 val) 448 - { 449 - struct rzg3s_pcie_host *host = bus->sysdata; 450 - int ret; 451 - 452 - /* Enable access control to the CFGU */ 453 - writel_relaxed(RZG3S_PCI_PERM_CFG_HWINIT_EN, 454 - host->axi + RZG3S_PCI_PERM); 455 - 456 - ret = pci_generic_config_write(bus, devfn, where, size, val); 457 - 458 - /* Disable access control to the CFGU */ 459 - writel_relaxed(0, host->axi + RZG3S_PCI_PERM); 460 - 461 - return ret; 462 - } 463 - 464 442 static struct pci_ops rzg3s_pcie_root_ops = { 465 443 .read = pci_generic_config_read, 466 - .write = rzg3s_pcie_root_write, 444 + .write = pci_generic_config_write, 467 445 .map_bus = rzg3s_pcie_root_map_bus, 468 446 }; 469 447 ··· 509 525 struct rzg3s_pcie_host *host = rzg3s_msi_to_host(msi); 510 526 u8 reg_bit = d->hwirq % RZG3S_PCI_MSI_INT_PER_REG; 511 527 u8 reg_id = d->hwirq / RZG3S_PCI_MSI_INT_PER_REG; 512 - 513 - guard(raw_spinlock_irqsave)(&host->hw_lock); 514 528 515 529 writel_relaxed(BIT(reg_bit), host->axi + RZG3S_PCI_MSIRS(reg_id)); 516 530 } ··· 841 859 { 842 860 struct rzg3s_pcie_host *host = irq_data_get_irq_chip_data(d); 843 861 844 - guard(raw_spinlock_irqsave)(&host->hw_lock); 845 - 846 862 
rzg3s_pcie_update_bits(host->axi, RZG3S_PCI_PINTRCVIS, 847 863 RZG3S_PCI_PINTRCVIS_INTX(d->hwirq), 848 864 RZG3S_PCI_PINTRCVIS_INTX(d->hwirq)); ··· 1045 1065 writel_relaxed(0xffffffff, host->pcie + RZG3S_PCI_CFG_BARMSK00L); 1046 1066 writel_relaxed(0xffffffff, host->pcie + RZG3S_PCI_CFG_BARMSK00U); 1047 1067 1068 + /* Disable access control to the CFGU */ 1069 + writel_relaxed(0, host->axi + RZG3S_PCI_PERM); 1070 + 1048 1071 /* Update bus info */ 1049 1072 writeb_relaxed(primary_bus, host->pcie + PCI_PRIMARY_BUS); 1050 1073 writeb_relaxed(secondary_bus, host->pcie + PCI_SECONDARY_BUS); 1051 1074 writeb_relaxed(subordinate_bus, host->pcie + PCI_SUBORDINATE_BUS); 1052 - 1053 - /* Disable access control to the CFGU */ 1054 - writel_relaxed(0, host->axi + RZG3S_PCI_PERM); 1055 1075 1056 1076 return 0; 1057 1077 } ··· 1142 1162 1143 1163 static int rzg3s_pcie_host_parse_port(struct rzg3s_pcie_host *host) 1144 1164 { 1145 - struct device_node *of_port = of_get_next_child(host->dev->of_node, NULL); 1165 + struct device_node *of_port __free(device_node) = 1166 + of_get_next_child(host->dev->of_node, NULL); 1146 1167 struct rzg3s_pcie_port *port = &host->port; 1147 1168 int ret; 1148 1169
+6 -3
drivers/pci/controller/pcie-xilinx.c
··· 302 302 return 0; 303 303 } 304 304 305 - static void xilinx_free_msi_domains(struct xilinx_pcie *pcie) 305 + static void xilinx_free_irq_domains(struct xilinx_pcie *pcie) 306 306 { 307 307 irq_domain_remove(pcie->msi_domain); 308 + irq_domain_remove(pcie->leg_domain); 308 309 } 309 310 310 311 /* INTx Functions */ ··· 481 480 phys_addr_t pa = ALIGN_DOWN(virt_to_phys(pcie), SZ_4K); 482 481 483 482 ret = xilinx_allocate_msi_domains(pcie); 484 - if (ret) 483 + if (ret) { 484 + irq_domain_remove(pcie->leg_domain); 485 485 return ret; 486 + } 486 487 487 488 pcie_write(pcie, upper_32_bits(pa), XILINX_PCIE_REG_MSIBASE1); 488 489 pcie_write(pcie, lower_32_bits(pa), XILINX_PCIE_REG_MSIBASE2); ··· 603 600 604 601 err = pci_host_probe(bridge); 605 602 if (err) 606 - xilinx_free_msi_domains(pcie); 603 + xilinx_free_irq_domains(pcie); 607 604 608 605 return err; 609 606 }
+15 -10
drivers/pci/controller/plda/pcie-starfive.c
··· 55 55 struct reset_control *resets; 56 56 struct clk_bulk_data *clks; 57 57 struct regmap *reg_syscon; 58 - struct gpio_desc *power_gpio; 58 + struct regulator *vpcie3v3; 59 59 struct gpio_desc *reset_gpio; 60 60 struct phy *phy; 61 61 ··· 153 153 return dev_err_probe(dev, PTR_ERR(pcie->reset_gpio), 154 154 "failed to get perst-gpio\n"); 155 155 156 - pcie->power_gpio = devm_gpiod_get_optional(dev, "enable", 157 - GPIOD_OUT_LOW); 158 - if (IS_ERR(pcie->power_gpio)) 159 - return dev_err_probe(dev, PTR_ERR(pcie->power_gpio), 160 - "failed to get power-gpio\n"); 156 + pcie->vpcie3v3 = devm_regulator_get_optional(dev, "vpcie3v3"); 157 + if (IS_ERR(pcie->vpcie3v3)) { 158 + if (PTR_ERR(pcie->vpcie3v3) != -ENODEV) 159 + return dev_err_probe(dev, PTR_ERR(pcie->vpcie3v3), 160 + "failed to get vpcie3v3 regulator\n"); 161 + pcie->vpcie3v3 = NULL; 162 + } 161 163 162 164 return 0; 163 165 } ··· 272 270 container_of(plda, struct starfive_jh7110_pcie, plda); 273 271 274 272 starfive_pcie_clk_rst_deinit(pcie); 275 - if (pcie->power_gpio) 276 - gpiod_set_value_cansleep(pcie->power_gpio, 0); 273 + if (pcie->vpcie3v3) 274 + regulator_disable(pcie->vpcie3v3); 277 275 starfive_pcie_disable_phy(pcie); 278 276 } 279 277 ··· 306 304 if (ret) 307 305 return ret; 308 306 309 - if (pcie->power_gpio) 310 - gpiod_set_value_cansleep(pcie->power_gpio, 1); 307 + if (pcie->vpcie3v3) { 308 + ret = regulator_enable(pcie->vpcie3v3); 309 + if (ret) 310 + dev_err_probe(dev, ret, "failed to enable vpcie3v3 regulator\n"); 311 + } 311 312 312 313 if (pcie->reset_gpio) 313 314 gpiod_set_value_cansleep(pcie->reset_gpio, 1);
-3
drivers/pci/devres.c
··· 469 469 if (!legacy_iomap_table) 470 470 return -ENOMEM; 471 471 472 - /* The legacy mechanism doesn't allow for duplicate mappings. */ 473 - WARN_ON(legacy_iomap_table[bar]); 474 - 475 472 legacy_iomap_table[bar] = mapping; 476 473 477 474 return 0;
+1 -1
drivers/pci/endpoint/functions/pci-epf-mhi.c
··· 686 686 goto err_release_tx; 687 687 } 688 688 689 - epf_mhi->dma_wq = alloc_workqueue("pci_epf_mhi_dma_wq", 0, 0); 689 + epf_mhi->dma_wq = alloc_workqueue("pci_epf_mhi_dma_wq", WQ_PERCPU, 0); 690 690 if (!epf_mhi->dma_wq) { 691 691 ret = -ENOMEM; 692 692 goto err_release_rx;
+7 -2
drivers/pci/endpoint/functions/pci-epf-ntb.c
··· 2124 2124 { 2125 2125 int ret; 2126 2126 2127 - kpcintb_workqueue = alloc_workqueue("kpcintb", WQ_MEM_RECLAIM | 2128 - WQ_HIGHPRI, 0); 2127 + kpcintb_workqueue = alloc_workqueue("kpcintb", 2128 + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, 0); 2129 + if (!kpcintb_workqueue) { 2130 + pr_err("Failed to allocate kpcintb workqueue\n"); 2131 + return -ENOMEM; 2132 + } 2133 + 2129 2134 ret = pci_epf_register_driver(&epf_ntb_driver); 2130 2135 if (ret) { 2131 2136 destroy_workqueue(kpcintb_workqueue);
+268 -3
drivers/pci/endpoint/functions/pci-epf-test.c
··· 33 33 #define COMMAND_COPY BIT(5) 34 34 #define COMMAND_ENABLE_DOORBELL BIT(6) 35 35 #define COMMAND_DISABLE_DOORBELL BIT(7) 36 + #define COMMAND_BAR_SUBRANGE_SETUP BIT(8) 37 + #define COMMAND_BAR_SUBRANGE_CLEAR BIT(9) 36 38 37 39 #define STATUS_READ_SUCCESS BIT(0) 38 40 #define STATUS_READ_FAIL BIT(1) ··· 50 48 #define STATUS_DOORBELL_ENABLE_FAIL BIT(11) 51 49 #define STATUS_DOORBELL_DISABLE_SUCCESS BIT(12) 52 50 #define STATUS_DOORBELL_DISABLE_FAIL BIT(13) 51 + #define STATUS_BAR_SUBRANGE_SETUP_SUCCESS BIT(14) 52 + #define STATUS_BAR_SUBRANGE_SETUP_FAIL BIT(15) 53 + #define STATUS_BAR_SUBRANGE_CLEAR_SUCCESS BIT(16) 54 + #define STATUS_BAR_SUBRANGE_CLEAR_FAIL BIT(17) 53 55 54 56 #define FLAG_USE_DMA BIT(0) 55 57 ··· 63 57 #define CAP_MSI BIT(1) 64 58 #define CAP_MSIX BIT(2) 65 59 #define CAP_INTX BIT(3) 60 + #define CAP_SUBRANGE_MAPPING BIT(4) 61 + 62 + #define PCI_EPF_TEST_BAR_SUBRANGE_NSUB 2 66 63 67 64 static struct workqueue_struct *kpcitest_workqueue; 68 65 69 66 struct pci_epf_test { 70 67 void *reg[PCI_STD_NUM_BARS]; 71 68 struct pci_epf *epf; 69 + struct config_group group; 72 70 enum pci_barno test_reg_bar; 73 71 size_t msix_table_offset; 74 72 struct delayed_work cmd_handler; ··· 86 76 bool dma_private; 87 77 const struct pci_epc_features *epc_features; 88 78 struct pci_epf_bar db_bar; 79 + size_t bar_size[PCI_STD_NUM_BARS]; 89 80 }; 90 81 91 82 struct pci_epf_test_reg { ··· 113 102 .interrupt_pin = PCI_INTERRUPT_INTA, 114 103 }; 115 104 116 - static size_t bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 }; 105 + /* default BAR sizes, can be overridden by the user using configfs */ 106 + static size_t default_bar_size[] = { 131072, 131072, 131072, 131072, 131072, 1048576 }; 117 107 118 108 static void pci_epf_test_dma_callback(void *param) 119 109 { ··· 818 806 reg->status = cpu_to_le32(status); 819 807 } 820 808 809 + static u8 pci_epf_test_subrange_sig_byte(enum pci_barno barno, 810 + unsigned int subno) 811 + { 812 + return 0x50 + (barno * 
8) + subno; 813 + } 814 + 815 + static void pci_epf_test_bar_subrange_setup(struct pci_epf_test *epf_test, 816 + struct pci_epf_test_reg *reg) 817 + { 818 + struct pci_epf_bar_submap *submap, *old_submap; 819 + struct pci_epf *epf = epf_test->epf; 820 + struct pci_epc *epc = epf->epc; 821 + struct pci_epf_bar *bar; 822 + unsigned int nsub = PCI_EPF_TEST_BAR_SUBRANGE_NSUB, old_nsub; 823 + /* reg->size carries BAR number for BAR_SUBRANGE_* commands. */ 824 + enum pci_barno barno = le32_to_cpu(reg->size); 825 + u32 status = le32_to_cpu(reg->status); 826 + unsigned int i, phys_idx; 827 + size_t sub_size; 828 + u8 *addr; 829 + int ret; 830 + 831 + if (barno >= PCI_STD_NUM_BARS) { 832 + dev_err(&epf->dev, "Invalid barno: %d\n", barno); 833 + goto err; 834 + } 835 + 836 + /* Host side should've avoided test_reg_bar, this is a safeguard. */ 837 + if (barno == epf_test->test_reg_bar) { 838 + dev_err(&epf->dev, "test_reg_bar cannot be used for subrange test\n"); 839 + goto err; 840 + } 841 + 842 + if (!epf_test->epc_features->dynamic_inbound_mapping || 843 + !epf_test->epc_features->subrange_mapping) { 844 + dev_err(&epf->dev, "epc driver does not support subrange mapping\n"); 845 + goto err; 846 + } 847 + 848 + bar = &epf->bar[barno]; 849 + if (!bar->size || !bar->addr) { 850 + dev_err(&epf->dev, "bar size/addr (%zu/%p) is invalid\n", 851 + bar->size, bar->addr); 852 + goto err; 853 + } 854 + 855 + if (bar->size % nsub) { 856 + dev_err(&epf->dev, "BAR size %zu is not divisible by %u\n", 857 + bar->size, nsub); 858 + goto err; 859 + } 860 + 861 + sub_size = bar->size / nsub; 862 + 863 + submap = kcalloc(nsub, sizeof(*submap), GFP_KERNEL); 864 + if (!submap) 865 + goto err; 866 + 867 + for (i = 0; i < nsub; i++) { 868 + /* Swap the two halves so RC can verify ordering. 
*/ 869 + phys_idx = i ^ 1; 870 + submap[i].phys_addr = bar->phys_addr + (phys_idx * sub_size); 871 + submap[i].size = sub_size; 872 + } 873 + 874 + old_submap = bar->submap; 875 + old_nsub = bar->num_submap; 876 + 877 + bar->submap = submap; 878 + bar->num_submap = nsub; 879 + 880 + ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar); 881 + if (ret) { 882 + dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret); 883 + bar->submap = old_submap; 884 + bar->num_submap = old_nsub; 885 + kfree(submap); 886 + goto err; 887 + } 888 + kfree(old_submap); 889 + 890 + /* 891 + * Fill deterministic signatures into the physical regions that 892 + * each BAR subrange maps to. RC verifies these to ensure the 893 + * submap order is really applied. 894 + */ 895 + addr = (u8 *)bar->addr; 896 + for (i = 0; i < nsub; i++) { 897 + phys_idx = i ^ 1; 898 + memset(addr + (phys_idx * sub_size), 899 + pci_epf_test_subrange_sig_byte(barno, i), 900 + sub_size); 901 + } 902 + 903 + status |= STATUS_BAR_SUBRANGE_SETUP_SUCCESS; 904 + reg->status = cpu_to_le32(status); 905 + return; 906 + 907 + err: 908 + status |= STATUS_BAR_SUBRANGE_SETUP_FAIL; 909 + reg->status = cpu_to_le32(status); 910 + } 911 + 912 + static void pci_epf_test_bar_subrange_clear(struct pci_epf_test *epf_test, 913 + struct pci_epf_test_reg *reg) 914 + { 915 + struct pci_epf *epf = epf_test->epf; 916 + struct pci_epf_bar_submap *submap; 917 + struct pci_epc *epc = epf->epc; 918 + /* reg->size carries BAR number for BAR_SUBRANGE_* commands. 
*/ 919 + enum pci_barno barno = le32_to_cpu(reg->size); 920 + u32 status = le32_to_cpu(reg->status); 921 + struct pci_epf_bar *bar; 922 + unsigned int nsub; 923 + int ret; 924 + 925 + if (barno >= PCI_STD_NUM_BARS) { 926 + dev_err(&epf->dev, "Invalid barno: %d\n", barno); 927 + goto err; 928 + } 929 + 930 + bar = &epf->bar[barno]; 931 + submap = bar->submap; 932 + nsub = bar->num_submap; 933 + 934 + if (!submap || !nsub) 935 + goto err; 936 + 937 + bar->submap = NULL; 938 + bar->num_submap = 0; 939 + 940 + ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar); 941 + if (ret) { 942 + bar->submap = submap; 943 + bar->num_submap = nsub; 944 + dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret); 945 + goto err; 946 + } 947 + kfree(submap); 948 + 949 + status |= STATUS_BAR_SUBRANGE_CLEAR_SUCCESS; 950 + reg->status = cpu_to_le32(status); 951 + return; 952 + 953 + err: 954 + status |= STATUS_BAR_SUBRANGE_CLEAR_FAIL; 955 + reg->status = cpu_to_le32(status); 956 + } 957 + 821 958 static void pci_epf_test_cmd_handler(struct work_struct *work) 822 959 { 823 960 u32 command; ··· 1020 859 break; 1021 860 case COMMAND_DISABLE_DOORBELL: 1022 861 pci_epf_test_disable_doorbell(epf_test, reg); 862 + pci_epf_test_raise_irq(epf_test, reg); 863 + break; 864 + case COMMAND_BAR_SUBRANGE_SETUP: 865 + pci_epf_test_bar_subrange_setup(epf_test, reg); 866 + pci_epf_test_raise_irq(epf_test, reg); 867 + break; 868 + case COMMAND_BAR_SUBRANGE_CLEAR: 869 + pci_epf_test_bar_subrange_clear(epf_test, reg); 1023 870 pci_epf_test_raise_irq(epf_test, reg); 1024 871 break; 1025 872 default: ··· 1101 932 1102 933 if (epf_test->epc_features->intx_capable) 1103 934 caps |= CAP_INTX; 935 + 936 + if (epf_test->epc_features->dynamic_inbound_mapping && 937 + epf_test->epc_features->subrange_mapping) 938 + caps |= CAP_SUBRANGE_MAPPING; 1104 939 1105 940 reg->caps = cpu_to_le32(caps); 1106 941 } ··· 1243 1070 if (epc_features->bar[bar].type == BAR_FIXED) 1244 1071 test_reg_size = 
epc_features->bar[bar].fixed_size; 1245 1072 else 1246 - test_reg_size = bar_size[bar]; 1073 + test_reg_size = epf_test->bar_size[bar]; 1247 1074 1248 1075 base = pci_epf_alloc_space(epf, test_reg_size, bar, 1249 1076 epc_features, PRIMARY_INTERFACE); ··· 1315 1142 pci_epf_test_free_space(epf); 1316 1143 } 1317 1144 1145 + #define PCI_EPF_TEST_BAR_SIZE_R(_name, _id) \ 1146 + static ssize_t pci_epf_test_##_name##_show(struct config_item *item, \ 1147 + char *page) \ 1148 + { \ 1149 + struct config_group *group = to_config_group(item); \ 1150 + struct pci_epf_test *epf_test = \ 1151 + container_of(group, struct pci_epf_test, group); \ 1152 + \ 1153 + return sysfs_emit(page, "%zu\n", epf_test->bar_size[_id]); \ 1154 + } 1155 + 1156 + #define PCI_EPF_TEST_BAR_SIZE_W(_name, _id) \ 1157 + static ssize_t pci_epf_test_##_name##_store(struct config_item *item, \ 1158 + const char *page, \ 1159 + size_t len) \ 1160 + { \ 1161 + struct config_group *group = to_config_group(item); \ 1162 + struct pci_epf_test *epf_test = \ 1163 + container_of(group, struct pci_epf_test, group); \ 1164 + int val, ret; \ 1165 + \ 1166 + /* \ 1167 + * BAR sizes can only be modified before binding to an EPC, \ 1168 + * because pci_epf_test_alloc_space() is called in .bind(). 
\ 1169 + */ \ 1170 + if (epf_test->epf->epc) \ 1171 + return -EOPNOTSUPP; \ 1172 + \ 1173 + ret = kstrtouint(page, 0, &val); \ 1174 + if (ret) \ 1175 + return ret; \ 1176 + \ 1177 + if (!is_power_of_2(val)) \ 1178 + return -EINVAL; \ 1179 + \ 1180 + epf_test->bar_size[_id] = val; \ 1181 + \ 1182 + return len; \ 1183 + } 1184 + 1185 + PCI_EPF_TEST_BAR_SIZE_R(bar0_size, BAR_0) 1186 + PCI_EPF_TEST_BAR_SIZE_W(bar0_size, BAR_0) 1187 + PCI_EPF_TEST_BAR_SIZE_R(bar1_size, BAR_1) 1188 + PCI_EPF_TEST_BAR_SIZE_W(bar1_size, BAR_1) 1189 + PCI_EPF_TEST_BAR_SIZE_R(bar2_size, BAR_2) 1190 + PCI_EPF_TEST_BAR_SIZE_W(bar2_size, BAR_2) 1191 + PCI_EPF_TEST_BAR_SIZE_R(bar3_size, BAR_3) 1192 + PCI_EPF_TEST_BAR_SIZE_W(bar3_size, BAR_3) 1193 + PCI_EPF_TEST_BAR_SIZE_R(bar4_size, BAR_4) 1194 + PCI_EPF_TEST_BAR_SIZE_W(bar4_size, BAR_4) 1195 + PCI_EPF_TEST_BAR_SIZE_R(bar5_size, BAR_5) 1196 + PCI_EPF_TEST_BAR_SIZE_W(bar5_size, BAR_5) 1197 + 1198 + CONFIGFS_ATTR(pci_epf_test_, bar0_size); 1199 + CONFIGFS_ATTR(pci_epf_test_, bar1_size); 1200 + CONFIGFS_ATTR(pci_epf_test_, bar2_size); 1201 + CONFIGFS_ATTR(pci_epf_test_, bar3_size); 1202 + CONFIGFS_ATTR(pci_epf_test_, bar4_size); 1203 + CONFIGFS_ATTR(pci_epf_test_, bar5_size); 1204 + 1205 + static struct configfs_attribute *pci_epf_test_attrs[] = { 1206 + &pci_epf_test_attr_bar0_size, 1207 + &pci_epf_test_attr_bar1_size, 1208 + &pci_epf_test_attr_bar2_size, 1209 + &pci_epf_test_attr_bar3_size, 1210 + &pci_epf_test_attr_bar4_size, 1211 + &pci_epf_test_attr_bar5_size, 1212 + NULL, 1213 + }; 1214 + 1215 + static const struct config_item_type pci_epf_test_group_type = { 1216 + .ct_attrs = pci_epf_test_attrs, 1217 + .ct_owner = THIS_MODULE, 1218 + }; 1219 + 1220 + static struct config_group *pci_epf_test_add_cfs(struct pci_epf *epf, 1221 + struct config_group *group) 1222 + { 1223 + struct pci_epf_test *epf_test = epf_get_drvdata(epf); 1224 + struct config_group *epf_group = &epf_test->group; 1225 + struct device *dev = &epf->dev; 1226 + 1227 + 
config_group_init_type_name(epf_group, dev_name(dev), 1228 + &pci_epf_test_group_type); 1229 + 1230 + return epf_group; 1231 + } 1232 + 1318 1233 static const struct pci_epf_device_id pci_epf_test_ids[] = { 1319 1234 { 1320 1235 .name = "pci_epf_test", ··· 1415 1154 { 1416 1155 struct pci_epf_test *epf_test; 1417 1156 struct device *dev = &epf->dev; 1157 + enum pci_barno bar; 1418 1158 1419 1159 epf_test = devm_kzalloc(dev, sizeof(*epf_test), GFP_KERNEL); 1420 1160 if (!epf_test) ··· 1423 1161 1424 1162 epf->header = &test_header; 1425 1163 epf_test->epf = epf; 1164 + for (bar = BAR_0; bar < PCI_STD_NUM_BARS; bar++) 1165 + epf_test->bar_size[bar] = default_bar_size[bar]; 1426 1166 1427 1167 INIT_DELAYED_WORK(&epf_test->cmd_handler, pci_epf_test_cmd_handler); 1428 1168 ··· 1437 1173 static const struct pci_epf_ops ops = { 1438 1174 .unbind = pci_epf_test_unbind, 1439 1175 .bind = pci_epf_test_bind, 1176 + .add_cfs = pci_epf_test_add_cfs, 1440 1177 }; 1441 1178 1442 1179 static struct pci_epf_driver test_driver = { ··· 1453 1188 int ret; 1454 1189 1455 1190 kpcitest_workqueue = alloc_workqueue("kpcitest", 1456 - WQ_MEM_RECLAIM | WQ_HIGHPRI, 0); 1191 + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, 0); 1457 1192 if (!kpcitest_workqueue) { 1458 1193 pr_err("Failed to allocate the kpcitest work queue\n"); 1459 1194 return -ENOMEM;
+7 -2
drivers/pci/endpoint/functions/pci-epf-vntb.c
··· 1651 1651 { 1652 1652 int ret; 1653 1653 1654 - kpcintb_workqueue = alloc_workqueue("kpcintb", WQ_MEM_RECLAIM | 1655 - WQ_HIGHPRI, 0); 1654 + kpcintb_workqueue = alloc_workqueue("kpcintb", 1655 + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, 0); 1656 + if (!kpcintb_workqueue) { 1657 + pr_err("Failed to allocate kpcintb workqueue\n"); 1658 + return -ENOMEM; 1659 + } 1660 + 1656 1661 ret = pci_epf_register_driver(&epf_ntb_driver); 1657 1662 if (ret) { 1658 1663 destroy_workqueue(kpcintb_workqueue);
+9 -14
drivers/pci/endpoint/pci-ep-cfs.c
··· 23 23 struct config_group group; 24 24 struct config_group primary_epc_group; 25 25 struct config_group secondary_epc_group; 26 - struct delayed_work cfs_work; 27 26 struct pci_epf *epf; 28 27 int index; 29 28 }; ··· 68 69 return 0; 69 70 } 70 71 71 - static void pci_secondary_epc_epf_unlink(struct config_item *epc_item, 72 - struct config_item *epf_item) 72 + static void pci_secondary_epc_epf_unlink(struct config_item *epf_item, 73 + struct config_item *epc_item) 73 74 { 74 75 struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent); 75 76 struct pci_epc_group *epc_group = to_pci_epc_group(epc_item); ··· 102 103 secondary_epc_group = &epf_group->secondary_epc_group; 103 104 config_group_init_type_name(secondary_epc_group, "secondary", 104 105 &pci_secondary_epc_type); 105 - configfs_register_group(&epf_group->group, secondary_epc_group); 106 + configfs_add_default_group(secondary_epc_group, &epf_group->group); 106 107 107 108 return secondary_epc_group; 108 109 } ··· 132 133 return 0; 133 134 } 134 135 135 - static void pci_primary_epc_epf_unlink(struct config_item *epc_item, 136 - struct config_item *epf_item) 136 + static void pci_primary_epc_epf_unlink(struct config_item *epf_item, 137 + struct config_item *epc_item) 137 138 { 138 139 struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent); 139 140 struct pci_epc_group *epc_group = to_pci_epc_group(epc_item); ··· 165 166 166 167 config_group_init_type_name(primary_epc_group, "primary", 167 168 &pci_primary_epc_type); 168 - configfs_register_group(&epf_group->group, primary_epc_group); 169 + configfs_add_default_group(primary_epc_group, &epf_group->group); 169 170 170 171 return primary_epc_group; 171 172 } ··· 569 570 return; 570 571 } 571 572 572 - configfs_register_group(&epf_group->group, group); 573 + configfs_add_default_group(group, &epf_group->group); 573 574 } 574 575 575 - static void pci_epf_cfs_work(struct work_struct *work) 576 + static void 
pci_epf_cfs_add_sub_groups(struct pci_epf_group *epf_group) 576 577 { 577 - struct pci_epf_group *epf_group; 578 578 struct config_group *group; 579 579 580 - epf_group = container_of(work, struct pci_epf_group, cfs_work.work); 581 580 group = pci_ep_cfs_add_primary_group(epf_group); 582 581 if (IS_ERR(group)) { 583 582 pr_err("failed to create 'primary' EPC interface\n"); ··· 634 637 635 638 kfree(epf_name); 636 639 637 - INIT_DELAYED_WORK(&epf_group->cfs_work, pci_epf_cfs_work); 638 - queue_delayed_work(system_wq, &epf_group->cfs_work, 639 - msecs_to_jiffies(1)); 640 + pci_epf_cfs_add_sub_groups(epf_group); 640 641 641 642 return &epf_group->group; 642 643
+8
drivers/pci/endpoint/pci-epc-core.c
··· 596 596 if (!epc_features) 597 597 return -EINVAL; 598 598 599 + if (epf_bar->num_submap && !epf_bar->submap) 600 + return -EINVAL; 601 + 602 + if (epf_bar->num_submap && 603 + !(epc_features->dynamic_inbound_mapping && 604 + epc_features->subrange_mapping)) 605 + return -EINVAL; 606 + 599 607 if (epc_features->bar[bar].type == BAR_RESIZABLE && 600 608 (epf_bar->size < SZ_1M || (u64)epf_bar->size > (SZ_128G * 1024))) 601 609 return -EINVAL;
+25 -6
drivers/pci/hotplug/pciehp_ctrl.c
··· 19 19 #include <linux/types.h> 20 20 #include <linux/pm_runtime.h> 21 21 #include <linux/pci.h> 22 + #include <trace/events/pci.h> 22 23 23 24 #include "../pci.h" 24 25 #include "pciehp.h" ··· 245 244 case ON_STATE: 246 245 ctrl->state = POWEROFF_STATE; 247 246 mutex_unlock(&ctrl->state_lock); 248 - if (events & PCI_EXP_SLTSTA_DLLSC) 247 + if (events & PCI_EXP_SLTSTA_DLLSC) { 249 248 ctrl_info(ctrl, "Slot(%s): Link Down\n", 250 249 slot_name(ctrl)); 251 - if (events & PCI_EXP_SLTSTA_PDC) 250 + trace_pci_hp_event(pci_name(ctrl->pcie->port), 251 + slot_name(ctrl), 252 + PCI_HOTPLUG_LINK_DOWN); 253 + } 254 + if (events & PCI_EXP_SLTSTA_PDC) { 252 255 ctrl_info(ctrl, "Slot(%s): Card not present\n", 253 256 slot_name(ctrl)); 257 + trace_pci_hp_event(pci_name(ctrl->pcie->port), 258 + slot_name(ctrl), 259 + PCI_HOTPLUG_CARD_NOT_PRESENT); 260 + } 254 261 pciehp_disable_slot(ctrl, SURPRISE_REMOVAL); 255 262 break; 256 263 default: ··· 278 269 INDICATOR_NOOP); 279 270 ctrl_info(ctrl, "Slot(%s): Card not present\n", 280 271 slot_name(ctrl)); 272 + trace_pci_hp_event(pci_name(ctrl->pcie->port), 273 + slot_name(ctrl), 274 + PCI_HOTPLUG_CARD_NOT_PRESENT); 281 275 } 282 276 mutex_unlock(&ctrl->state_lock); 283 277 return; ··· 293 281 case OFF_STATE: 294 282 ctrl->state = POWERON_STATE; 295 283 mutex_unlock(&ctrl->state_lock); 296 - if (present) 284 + if (present) { 297 285 ctrl_info(ctrl, "Slot(%s): Card present\n", 298 286 slot_name(ctrl)); 299 - if (link_active) 300 - ctrl_info(ctrl, "Slot(%s): Link Up\n", 301 - slot_name(ctrl)); 287 + trace_pci_hp_event(pci_name(ctrl->pcie->port), 288 + slot_name(ctrl), 289 + PCI_HOTPLUG_CARD_PRESENT); 290 + } 291 + if (link_active) { 292 + ctrl_info(ctrl, "Slot(%s): Link Up\n", slot_name(ctrl)); 293 + trace_pci_hp_event(pci_name(ctrl->pcie->port), 294 + slot_name(ctrl), 295 + PCI_HOTPLUG_LINK_UP); 296 + } 302 297 ctrl->request_result = pciehp_enable_slot(ctrl); 303 298 break; 304 299 default:
+2 -1
drivers/pci/hotplug/pciehp_hpc.c
··· 320 320 } 321 321 322 322 pcie_capability_read_word(pdev, PCI_EXP_LNKSTA2, &linksta2); 323 - __pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status, linksta2); 323 + __pcie_update_link_speed(ctrl->pcie->port->subordinate, PCIE_HOTPLUG, 324 + lnk_status, linksta2); 324 325 325 326 if (!found) { 326 327 ctrl_info(ctrl, "Slot(%s): No device found\n",
+1 -1
drivers/pci/hotplug/pnv_php.c
··· 802 802 } 803 803 804 804 /* Allocate workqueue for this slot's interrupt handling */ 805 - php_slot->wq = alloc_workqueue("pciehp-%s", 0, 0, php_slot->name); 805 + php_slot->wq = alloc_workqueue("pciehp-%s", WQ_PERCPU, 0, php_slot->name); 806 806 if (!php_slot->wq) { 807 807 SLOT_WARN(php_slot, "Cannot alloc workqueue\n"); 808 808 kfree(php_slot->name);
+2 -1
drivers/pci/hotplug/shpchp_core.c
··· 80 80 slot->device = ctrl->slot_device_offset + i; 81 81 slot->number = ctrl->first_slot + (ctrl->slot_num_inc * i); 82 82 83 - slot->wq = alloc_workqueue("shpchp-%d", 0, 0, slot->number); 83 + slot->wq = alloc_workqueue("shpchp-%d", WQ_PERCPU, 0, 84 + slot->number); 84 85 if (!slot->wq) { 85 86 retval = -ENOMEM; 86 87 goto error_slot;
+4 -5
drivers/pci/iov.c
··· 495 495 496 496 if (num_vfs == 0) { 497 497 /* disable VFs */ 498 + pci_lock_rescan_remove(); 498 499 ret = pdev->driver->sriov_configure(pdev, 0); 500 + pci_unlock_rescan_remove(); 499 501 goto exit; 500 502 } 501 503 ··· 509 507 goto exit; 510 508 } 511 509 510 + pci_lock_rescan_remove(); 512 511 ret = pdev->driver->sriov_configure(pdev, num_vfs); 512 + pci_unlock_rescan_remove(); 513 513 if (ret < 0) 514 514 goto exit; 515 515 ··· 633 629 if (dev->no_vf_scan) 634 630 return 0; 635 631 636 - pci_lock_rescan_remove(); 637 632 for (i = 0; i < num_vfs; i++) { 638 633 rc = pci_iov_add_virtfn(dev, i); 639 634 if (rc) 640 635 goto failed; 641 636 } 642 - pci_unlock_rescan_remove(); 643 637 return 0; 644 638 failed: 645 639 while (i--) 646 640 pci_iov_remove_virtfn(dev, i); 647 - pci_unlock_rescan_remove(); 648 641 649 642 return rc; 650 643 } ··· 766 765 struct pci_sriov *iov = dev->sriov; 767 766 int i; 768 767 769 - pci_lock_rescan_remove(); 770 768 for (i = 0; i < iov->num_VFs; i++) 771 769 pci_iov_remove_virtfn(dev, i); 772 - pci_unlock_rescan_remove(); 773 770 } 774 771 775 772 static void sriov_disable(struct pci_dev *dev)
+1
drivers/pci/of.c
··· 867 867 868 868 return false; 869 869 } 870 + EXPORT_SYMBOL_GPL(of_pci_supply_present); 870 871 871 872 #endif /* CONFIG_PCI */ 872 873
+9 -1
drivers/pci/p2pdma.c
··· 147 147 * we have just allocated the page no one else should be 148 148 * using it. 149 149 */ 150 - VM_WARN_ON_ONCE_PAGE(!page_ref_count(page), page); 150 + VM_WARN_ON_ONCE_PAGE(page_ref_count(page), page); 151 151 set_page_count(page, 1); 152 152 ret = vm_insert_page(vma, vaddr, page); 153 153 if (ret) { 154 154 gen_pool_free(p2pdma->pool, (uintptr_t)kaddr, len); 155 + 156 + /* 157 + * Reset the page count. We don't use put_page() 158 + * because we don't want to trigger the 159 + * p2pdma_folio_free() path. 160 + */ 161 + set_page_count(page, 0); 162 + percpu_ref_put(ref); 155 163 return ret; 156 164 } 157 165 percpu_ref_get(ref);
+24 -35
drivers/pci/pci-acpi.c
··· 272 272 return AE_OK; 273 273 } 274 274 275 - static bool pcie_root_rcb_set(struct pci_dev *dev) 276 - { 277 - struct pci_dev *rp = pcie_find_root_port(dev); 278 - u16 lnkctl; 279 - 280 - if (!rp) 281 - return false; 282 - 283 - pcie_capability_read_word(rp, PCI_EXP_LNKCTL, &lnkctl); 284 - if (lnkctl & PCI_EXP_LNKCTL_RCB) 285 - return true; 286 - 287 - return false; 288 - } 289 - 290 275 /* _HPX PCI Express Setting Record (Type 2) */ 291 276 struct hpx_type2 { 292 277 u32 revision; ··· 297 312 { 298 313 int pos; 299 314 u32 reg32; 315 + const struct pci_host_bridge *host; 300 316 301 317 if (!hpx) 302 318 return; 303 319 304 320 if (!pci_is_pcie(dev)) 321 + return; 322 + 323 + host = pci_find_host_bridge(dev->bus); 324 + 325 + /* 326 + * Only do the _HPX Type 2 programming if OS owns PCIe native 327 + * hotplug but not AER. 328 + */ 329 + if (!host->native_pcie_hotplug || host->native_aer) 305 330 return; 306 331 307 332 if (hpx->revision > 1) { ··· 321 326 } 322 327 323 328 /* 324 - * Don't allow _HPX to change MPS or MRRS settings. We manage 325 - * those to make sure they're consistent with the rest of the 326 - * platform. 329 + * We only allow _HPX to program DEVCTL bits related to AER, namely 330 + * PCI_EXP_DEVCTL_CERE, PCI_EXP_DEVCTL_NFERE, PCI_EXP_DEVCTL_FERE, 331 + * and PCI_EXP_DEVCTL_URRE. 332 + * 333 + * The rest of DEVCTL is managed by the OS to make sure it's 334 + * consistent with the rest of the platform.
327 335 */ 328 - hpx->pci_exp_devctl_and |= PCI_EXP_DEVCTL_PAYLOAD | 329 - PCI_EXP_DEVCTL_READRQ; 330 - hpx->pci_exp_devctl_or &= ~(PCI_EXP_DEVCTL_PAYLOAD | 331 - PCI_EXP_DEVCTL_READRQ); 336 + hpx->pci_exp_devctl_and |= ~PCI_EXP_AER_FLAGS; 337 + hpx->pci_exp_devctl_or &= PCI_EXP_AER_FLAGS; 332 338 333 339 /* Initialize Device Control Register */ 334 340 pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL, 335 341 ~hpx->pci_exp_devctl_and, hpx->pci_exp_devctl_or); 336 342 337 - /* Initialize Link Control Register */ 343 + /* Log if _HPX attempts to modify Link Control Register */ 338 344 if (pcie_cap_has_lnkctl(dev)) { 339 - 340 - /* 341 - * If the Root Port supports Read Completion Boundary of 342 - * 128, set RCB to 128. Otherwise, clear it. 343 - */ 344 - hpx->pci_exp_lnkctl_and |= PCI_EXP_LNKCTL_RCB; 345 - hpx->pci_exp_lnkctl_or &= ~PCI_EXP_LNKCTL_RCB; 346 - if (pcie_root_rcb_set(dev)) 347 - hpx->pci_exp_lnkctl_or |= PCI_EXP_LNKCTL_RCB; 348 - 349 - pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL, 350 - ~hpx->pci_exp_lnkctl_and, hpx->pci_exp_lnkctl_or); 345 + if (hpx->pci_exp_lnkctl_and != 0xffff || 346 + hpx->pci_exp_lnkctl_or != 0) 347 + pci_info(dev, "_HPX attempts Link Control setting (AND %#06x OR %#06x)\n", 348 + hpx->pci_exp_lnkctl_and, 349 + hpx->pci_exp_lnkctl_or); 351 350 } 352 351 353 352 /* Find Advanced Error Reporting Enhanced Capability */
+8 -28
drivers/pci/pci-driver.c
··· 1679 1679 ret = acpi_dma_configure(dev, acpi_get_dma_attr(adev)); 1680 1680 } 1681 1681 1682 + /* 1683 + * Attempt to enable ACS regardless of capability because some Root 1684 + * Ports (e.g. those quirked with *_intel_pch_acs_*) do not have 1685 + * the standard ACS capability but still support ACS via those 1686 + * quirks. 1687 + */ 1688 + pci_enable_acs(to_pci_dev(dev)); 1689 + 1682 1690 pci_put_host_bridge_device(bridge); 1683 1691 1684 1692 /* @drv may not be valid when we're called from the IOMMU layer */ ··· 1737 1729 .dma_cleanup = pci_dma_cleanup, 1738 1730 }; 1739 1731 EXPORT_SYMBOL(pci_bus_type); 1740 - 1741 - #ifdef CONFIG_PCIEPORTBUS 1742 - static int pcie_port_bus_match(struct device *dev, const struct device_driver *drv) 1743 - { 1744 - struct pcie_device *pciedev; 1745 - const struct pcie_port_service_driver *driver; 1746 - 1747 - if (drv->bus != &pcie_port_bus_type || dev->bus != &pcie_port_bus_type) 1748 - return 0; 1749 - 1750 - pciedev = to_pcie_device(dev); 1751 - driver = to_service_driver(drv); 1752 - 1753 - if (driver->service != pciedev->service) 1754 - return 0; 1755 - 1756 - if (driver->port_type != PCIE_ANY_PORT && 1757 - driver->port_type != pci_pcie_type(pciedev->port)) 1758 - return 0; 1759 - 1760 - return 1; 1761 - } 1762 - 1763 - const struct bus_type pcie_port_bus_type = { 1764 - .name = "pci_express", 1765 - .match = pcie_port_bus_match, 1766 - }; 1767 - #endif 1768 1732 1769 1733 static int __init pci_driver_init(void) 1770 1734 {
+1 -1
drivers/pci/pci-sysfs.c
··· 181 181 struct resource zerores = {}; 182 182 183 183 /* For backwards compatibility */ 184 - if (i >= PCI_BRIDGE_RESOURCES && i <= PCI_BRIDGE_RESOURCE_END && 184 + if (pci_resource_is_bridge_win(i) && 185 185 res->flags & (IORESOURCE_UNSET | IORESOURCE_DISABLED)) 186 186 res = &zerores; 187 187
+68 -50
drivers/pci/pci.c
··· 14 14 #include <linux/dmi.h> 15 15 #include <linux/init.h> 16 16 #include <linux/iommu.h> 17 + #include <linux/lockdep.h> 17 18 #include <linux/msi.h> 18 19 #include <linux/of.h> 19 20 #include <linux/pci.h> ··· 101 100 #ifdef CONFIG_PCI_DOMAINS 102 101 int pci_domains_supported = 1; 103 102 #endif 104 - 105 - #define DEFAULT_CARDBUS_IO_SIZE (256) 106 - #define DEFAULT_CARDBUS_MEM_SIZE (64*1024*1024) 107 - /* pci=cbmemsize=nnM,cbiosize=nn can override this */ 108 - unsigned long pci_cardbus_io_size = DEFAULT_CARDBUS_IO_SIZE; 109 - unsigned long pci_cardbus_mem_size = DEFAULT_CARDBUS_MEM_SIZE; 110 103 111 104 #define DEFAULT_HOTPLUG_IO_SIZE (256) 112 105 #define DEFAULT_HOTPLUG_MMIO_SIZE (2*1024*1024) ··· 423 428 static u8 __pci_find_next_cap(struct pci_bus *bus, unsigned int devfn, 424 429 u8 pos, int cap) 425 430 { 426 - return PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, cap, bus, devfn); 431 + return PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, cap, NULL, bus, devfn); 427 432 } 428 433 429 434 u8 pci_find_next_capability(struct pci_dev *dev, u8 pos, int cap) ··· 528 533 return 0; 529 534 530 535 return PCI_FIND_NEXT_EXT_CAP(pci_bus_read_config, start, cap, 531 - dev->bus, dev->devfn); 536 + NULL, dev->bus, dev->devfn); 532 537 } 533 538 EXPORT_SYMBOL_GPL(pci_find_next_ext_capability); 534 539 ··· 597 602 mask = HT_5BIT_CAP_MASK; 598 603 599 604 pos = PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, 600 605 PCI_CAP_ID_HT, NULL, dev->bus, dev->devfn); 601 606 while (pos) { 602 607 rc = pci_read_config_byte(dev, pos + 3, &cap); 603 608 if (rc != PCIBIOS_SUCCESSFUL) ··· 608 613 609 614 pos = PCI_FIND_NEXT_CAP(pci_bus_read_config, 610 615 pos + PCI_CAP_LIST_NEXT, 611 616 PCI_CAP_ID_HT, NULL, dev->bus, 612 617 dev->devfn); 613 618 } 614 619 ··· 889 894 static const char *config_acs_param; 890 895 891 896 struct pci_acs { 892 - u16 cap; 893 897 u16 ctrl; 894 898 u16 fw_ctrl; 895 899 }; ··· 991 997
static void pci_std_enable_acs(struct pci_dev *dev, struct pci_acs *caps) 992 998 { 993 999 /* Source Validation */ 994 - caps->ctrl |= (caps->cap & PCI_ACS_SV); 1000 + caps->ctrl |= (dev->acs_capabilities & PCI_ACS_SV); 995 1001 996 1002 /* P2P Request Redirect */ 997 - caps->ctrl |= (caps->cap & PCI_ACS_RR); 1003 + caps->ctrl |= (dev->acs_capabilities & PCI_ACS_RR); 998 1004 999 1005 /* P2P Completion Redirect */ 1000 - caps->ctrl |= (caps->cap & PCI_ACS_CR); 1006 + caps->ctrl |= (dev->acs_capabilities & PCI_ACS_CR); 1001 1007 1002 1008 /* Upstream Forwarding */ 1003 - caps->ctrl |= (caps->cap & PCI_ACS_UF); 1009 + caps->ctrl |= (dev->acs_capabilities & PCI_ACS_UF); 1004 1010 1005 1011 /* Enable Translation Blocking for external devices and noats */ 1006 1012 if (pci_ats_disabled() || dev->external_facing || dev->untrusted) 1007 - caps->ctrl |= (caps->cap & PCI_ACS_TB); 1013 + caps->ctrl |= (dev->acs_capabilities & PCI_ACS_TB); 1008 1014 } 1009 1015 1010 1016 /** 1011 1017 * pci_enable_acs - enable ACS if hardware support it 1012 1018 * @dev: the PCI device 1013 1019 */ 1014 - static void pci_enable_acs(struct pci_dev *dev) 1020 + void pci_enable_acs(struct pci_dev *dev) 1015 1021 { 1016 1022 struct pci_acs caps; 1017 1023 bool enable_acs = false; ··· 1027 1033 if (!pos) 1028 1034 return; 1029 1035 1030 - pci_read_config_word(dev, pos + PCI_ACS_CAP, &caps.cap); 1031 1036 pci_read_config_word(dev, pos + PCI_ACS_CTRL, &caps.ctrl); 1032 1037 caps.fw_ctrl = caps.ctrl; ··· 1482 1489 if ((state == PCI_D1 && !dev->d1_support) 1483 1490 || (state == PCI_D2 && !dev->d2_support)) 1484 1491 return -EIO; 1492 + 1493 + if (dev->current_state == state) 1494 + return 0; 1485 1495 1486 1496 pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); 1487 1497 if (PCI_POSSIBLE_ERROR(pmcsr)) { ··· 2254 2258 */ 2255 2259 void pcie_clear_root_pme_status(struct pci_dev *dev) 2256 2260 { 2257 - pcie_capability_set_dword(dev, PCI_EXP_RTSTA, PCI_EXP_RTSTA_PME); 2261 +
pcie_capability_write_dword(dev, PCI_EXP_RTSTA, PCI_EXP_RTSTA_PME); 2258 2262 } 2259 2263 2260 2264 /** ··· 3194 3198 poweron: 3195 3199 pci_pm_power_up_and_verify_state(dev); 3196 3200 pm_runtime_forbid(&dev->dev); 3201 + 3202 + /* 3203 + * Runtime PM will be enabled for the device when it has been fully 3204 + * configured, but since its parent and suppliers may suspend in 3205 + * the meantime, prevent them from doing so by changing the 3206 + * device's runtime PM status to "active". 3207 + */ 3197 3208 pm_runtime_set_active(&dev->dev); 3198 - pm_runtime_enable(&dev->dev); 3199 3209 } 3200 3210 3201 3211 static unsigned long pci_ea_flags(struct pci_dev *dev, u8 prop) ··· 3518 3516 static bool pci_acs_flags_enabled(struct pci_dev *pdev, u16 acs_flags) 3519 3517 { 3520 3518 int pos; 3521 - u16 cap, ctrl; 3519 + u16 ctrl; 3522 3520 3523 3521 pos = pdev->acs_cap; 3524 3522 if (!pos) ··· 3529 3527 * or only required if controllable. Features missing from the 3530 3528 * capability field can therefore be assumed as hard-wired enabled. 3531 3529 */ 3532 - pci_read_config_word(pdev, pos + PCI_ACS_CAP, &cap); 3533 - acs_flags &= (cap | PCI_ACS_EC); 3530 + acs_flags &= (pdev->acs_capabilities | PCI_ACS_EC); 3534 3531 3535 3532 pci_read_config_word(pdev, pos + PCI_ACS_CTRL, &ctrl); 3536 3533 return (ctrl & acs_flags) == acs_flags; ··· 3650 3649 */ 3651 3650 void pci_acs_init(struct pci_dev *dev) 3652 3651 { 3653 - dev->acs_cap = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS); 3652 + int pos; 3654 3653 3655 - /* 3656 - * Attempt to enable ACS regardless of capability because some Root 3657 - * Ports (e.g. those quirked with *_intel_pch_acs_*) do not have 3658 - * the standard ACS capability but still support ACS via those 3659 - * quirks.
3660 - */ 3661 - pci_enable_acs(dev); 3654 + dev->acs_cap = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS); 3655 + pos = dev->acs_cap; 3656 + if (!pos) 3657 + return; 3658 + 3659 + pci_read_config_word(dev, pos + PCI_ACS_CAP, &dev->acs_capabilities); 3660 + pci_disable_broken_acs_cap(dev); 3662 3661 } 3663 3662 3664 3663 /** ··· 4585 4584 * Link Speed. 4586 4585 */ 4587 4586 if (pdev->subordinate) 4588 - pcie_update_link_speed(pdev->subordinate); 4587 + pcie_update_link_speed(pdev->subordinate, PCIE_LINK_RETRAIN); 4589 4588 4590 4589 return rc; 4591 4590 } ··· 4657 4656 * spec says 100 ms, but firmware can lower it and we allow drivers to 4658 4657 * increase it as well. 4659 4658 * 4660 - * Called with @pci_bus_sem locked for reading. 4659 + * Context: Called with @pci_bus_sem locked for reading. 4661 4660 */ 4662 4661 static int pci_bus_max_d3cold_delay(const struct pci_bus *bus) 4663 4662 { 4664 4663 const struct pci_dev *pdev; 4665 4664 int min_delay = 100; 4666 4665 int max_delay = 0; 4666 + 4667 + lockdep_assert_held(&pci_bus_sem); 4667 4668 4668 4669 list_for_each_entry(pdev, &bus->devices, bus_list) { 4669 4670 if (pdev->d3cold_delay < min_delay) ··· 5021 5018 * races with ->remove() by the device lock, which must be held by 5022 5019 * the caller. 5023 5020 */ 5021 + device_lock_assert(&dev->dev); 5024 5022 if (err_handler && err_handler->reset_prepare) 5025 5023 err_handler->reset_prepare(dev); 5026 5024 else if (dev->driver) ··· 5092 5088 * device including MSI, bus mastering, BARs, decoding IO and memory spaces, 5093 5089 * etc. 5094 5090 * 5095 - * Returns 0 if the device function was successfully reset or negative if the 5091 + * Context: The caller must hold the device lock. 5092 + * 5093 + * Return: 0 if the device function was successfully reset or negative if the 5096 5094 device doesn't support resetting a single function.
5097 5095 int __pci_reset_function_locked(struct pci_dev *dev) ··· 5103 5097 const struct pci_reset_fn_method *method; 5104 5098 5105 5099 might_sleep(); 5100 + device_lock_assert(&dev->dev); 5106 5101 5107 5102 /* 5108 5103 * A reset method returns -ENOTTY if it doesn't support this device and ··· 5226 5219 * over the reset. It also differs from pci_reset_function() in that it 5227 5220 * requires the PCI device lock to be held. 5228 5221 * 5229 - * Returns 0 if the device function was successfully reset or negative if the 5222 + * Context: The caller must hold the device lock. 5223 + * 5224 + * Return: 0 if the device function was successfully reset or negative if the 5230 5225 * device doesn't support resetting a single function. 5231 5226 */ 5232 5227 int pci_reset_function_locked(struct pci_dev *dev) 5233 5228 { 5234 5229 int rc; 5230 + 5231 + device_lock_assert(&dev->dev); 5235 5232 5236 5233 if (!pci_reset_supported(dev)) 5237 5234 return -ENOTTY; ··· 5352 5341 /* Do any devices on or below this slot prevent a bus reset?
*/ 5353 5342 static bool pci_slot_resettable(struct pci_slot *slot) 5354 5343 { 5355 - struct pci_dev *dev; 5344 + struct pci_dev *dev, *bridge = slot->bus->self; 5356 5345 5357 - if (slot->bus->self && 5358 - (slot->bus->self->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET)) 5346 + if (bridge && (bridge->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET)) 5359 5347 return false; 5360 5348 5361 5349 list_for_each_entry(dev, &slot->bus->devices, bus_list) { ··· 5371 5361 /* Lock devices from the top of the tree down */ 5372 5362 static void pci_slot_lock(struct pci_slot *slot) 5373 5363 { 5374 - struct pci_dev *dev; 5364 + struct pci_dev *dev, *bridge = slot->bus->self; 5365 + 5366 + if (bridge) 5367 + pci_dev_lock(bridge); 5375 5368 5376 5369 list_for_each_entry(dev, &slot->bus->devices, bus_list) { 5377 5370 if (!dev->slot || dev->slot != slot) ··· 5389 5376 /* Unlock devices from the bottom of the tree up */ 5390 5377 static void pci_slot_unlock(struct pci_slot *slot) 5391 5378 { 5392 - struct pci_dev *dev; 5379 + struct pci_dev *dev, *bridge = slot->bus->self; 5393 5380 5394 5381 list_for_each_entry(dev, &slot->bus->devices, bus_list) { 5395 5382 if (!dev->slot || dev->slot != slot) ··· 5399 5386 else 5400 5387 pci_dev_unlock(dev); 5401 5388 } 5389 + 5390 + if (bridge) 5391 + pci_dev_unlock(bridge); 5402 5392 } 5403 5393 5404 5394 /* Return 1 on successful lock, 0 on contention */ 5405 5395 static int pci_slot_trylock(struct pci_slot *slot) 5406 5396 { 5407 - struct pci_dev *dev; 5397 + struct pci_dev *dev, *bridge = slot->bus->self; 5398 + 5399 + if (bridge && !pci_dev_trylock(bridge)) 5400 + return 0; 5408 5401 5409 5402 list_for_each_entry(dev, &slot->bus->devices, bus_list) { 5410 5403 if (!dev->slot || dev->slot != slot) 5411 5404 continue; 5412 5405 if (dev->subordinate) { 5413 5406 if (!pci_bus_trylock(dev->subordinate)) { 5414 - pci_dev_unlock(dev); 5406 + if (!pci_bus_trylock(dev->subordinate)) 5415 5407 goto unlock; 5416 - } 5417 5408 } else if (!pci_dev_trylock(dev)) 5418
5409 goto unlock; 5419 5410 } ··· 5433 5416 else 5434 5417 pci_dev_unlock(dev); 5435 5418 } 5419 + 5420 + if (bridge) 5421 + pci_dev_unlock(bridge); 5436 5422 return 0; 5437 5423 } 5438 5424 ··· 6662 6642 return; 6663 6643 6664 6644 /* Release domain from IDA where it was allocated. */ 6665 - if (of_get_pci_domain_nr(parent->of_node) == domain_nr) 6645 + if (parent && of_get_pci_domain_nr(parent->of_node) == domain_nr) 6666 6646 ida_free(&pci_domain_nr_static_ida, domain_nr); 6667 6647 else 6668 6648 ida_free(&pci_domain_nr_dynamic_ida, domain_nr); ··· 6701 6681 if (k) 6702 6682 *k++ = 0; 6703 6683 if (*str && (str = pcibios_setup(str)) && *str) { 6704 - if (!strcmp(str, "nomsi")) { 6684 + if (!pci_setup_cardbus(str)) { 6685 + /* Function handled the parameters */ 6686 + } else if (!strcmp(str, "nomsi")) { 6705 6687 pci_no_msi(); 6706 6688 } else if (!strncmp(str, "noats", 5)) { 6707 6689 pr_info("PCIe: ATS is disabled\n"); ··· 6722 6700 pcie_ari_disabled = true; 6723 6701 } else if (!strncmp(str, "notph", 5)) { 6724 6702 pci_no_tph(); 6725 - } else if (!strncmp(str, "cbiosize=", 9)) { 6726 - pci_cardbus_io_size = memparse(str + 9, &str); 6727 - } else if (!strncmp(str, "cbmemsize=", 10)) { 6728 - pci_cardbus_mem_size = memparse(str + 10, &str); 6729 6703 } else if (!strncmp(str, "resource_alignment=", 19)) { 6730 6704 resource_alignment_param = str + 19; 6731 6705 } else if (!strncmp(str, "ecrc=", 5)) {
+104 -11
drivers/pci/pci.h
··· 5 5 #include <linux/align.h> 6 6 #include <linux/bitfield.h> 7 7 #include <linux/pci.h> 8 + #include <trace/events/pci.h> 8 9 9 10 struct pcie_tlp_log; 10 11 ··· 64 63 #define PCIE_LINK_WAIT_MAX_RETRIES 10 65 64 #define PCIE_LINK_WAIT_SLEEP_MS 90 66 65 66 + /* Format of TLP; PCIe r7.0, sec 2.2.1 */ 67 + #define PCIE_TLP_FMT_3DW_NO_DATA 0x00 /* 3DW header, no data */ 68 + #define PCIE_TLP_FMT_4DW_NO_DATA 0x01 /* 4DW header, no data */ 69 + #define PCIE_TLP_FMT_3DW_DATA 0x02 /* 3DW header, with data */ 70 + #define PCIE_TLP_FMT_4DW_DATA 0x03 /* 4DW header, with data */ 71 + 72 + /* Type of TLP; PCIe r7.0, sec 2.2.1 */ 73 + #define PCIE_TLP_TYPE_CFG0_RD 0x04 /* Config Type 0 Read Request */ 74 + #define PCIE_TLP_TYPE_CFG0_WR 0x04 /* Config Type 0 Write Request */ 75 + #define PCIE_TLP_TYPE_CFG1_RD 0x05 /* Config Type 1 Read Request */ 76 + #define PCIE_TLP_TYPE_CFG1_WR 0x05 /* Config Type 1 Write Request */ 77 + 67 78 /* Message Routing (r[2:0]); PCIe r6.0, sec 2.2.8 */ 68 79 #define PCIE_MSG_TYPE_R_RC 0 69 80 #define PCIE_MSG_TYPE_R_ADDR 1 ··· 97 84 #define PCIE_MSG_CODE_DEASSERT_INTC 0x26 98 85 #define PCIE_MSG_CODE_DEASSERT_INTD 0x27 99 86 87 + /* Cpl.
status of Complete; PCIe r7.0, sec 2.2.9.1 */ 88 + #define PCIE_CPL_STS_SUCCESS 0x00 /* Successful Completion */ 89 + 100 90 #define PCI_BUS_BRIDGE_IO_WINDOW 0 101 91 #define PCI_BUS_BRIDGE_MEM_WINDOW 1 102 92 #define PCI_BUS_BRIDGE_PREF_MEM_WINDOW 2 93 + 94 + #define PCI_EXP_AER_FLAGS (PCI_EXP_DEVCTL_CERE | PCI_EXP_DEVCTL_NFERE | \ 95 + PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE) 103 96 104 97 extern const unsigned char pcie_link_speed[]; 105 98 extern bool pci_early_dump; ··· 122 103 * @read_cfg: Function pointer for reading PCI config space 123 104 * @start: Starting position to begin search 124 105 * @cap: Capability ID to find 106 + * @prev_ptr: Pointer to store position of preceding capability (optional) 125 107 * @args: Arguments to pass to read_cfg function 126 108 * 127 - * Search the capability list in PCI config space to find @cap. 109 + * Search the capability list in PCI config space to find @cap. If 110 + * found, update *prev_ptr with the position of the preceding capability 111 + * (if prev_ptr != NULL) 128 112 * Implements TTL (time-to-live) protection against infinite loops. 129 113 * 130 114 * Return: Position of the capability if found, 0 otherwise. 131 115 */ 132 - #define PCI_FIND_NEXT_CAP(read_cfg, start, cap, args...) \ 116 + #define PCI_FIND_NEXT_CAP(read_cfg, start, cap, prev_ptr, args...)
\ 133 117 ({ \ 134 118 int __ttl = PCI_FIND_CAP_TTL; \ 135 - u8 __id, __found_pos = 0; \ 119 + u8 __id, __found_pos = 0; \ 120 + u8 __prev_pos = (start); \ 136 121 u8 __pos = (start); \ 137 122 u16 __ent; \ 138 123 \ ··· 155 132 \ 156 133 if (__id == (cap)) { \ 157 134 __found_pos = __pos; \ 135 + if (prev_ptr != NULL) \ 136 + *(u8 *)prev_ptr = __prev_pos; \ 158 137 break; \ 159 138 } \ 160 139 \ 140 + __prev_pos = __pos; \ 161 141 __pos = FIELD_GET(PCI_CAP_LIST_NEXT_MASK, __ent); \ 162 142 } \ 163 143 __found_pos; \ ··· 172 146 * @read_cfg: Function pointer for reading PCI config space 173 147 * @start: Starting position to begin search (0 for initial search) 174 148 * @cap: Extended capability ID to find 149 + * @prev_ptr: Pointer to store position of preceding capability (optional) 175 150 * @args: Arguments to pass to read_cfg function 176 151 * 177 152 * Search the extended capability list in PCI config space to find @cap. 153 + * If found, update *prev_ptr with the position of the preceding capability 154 + * (if prev_ptr != NULL) 178 155 * Implements TTL protection against infinite loops using a calculated 179 156 * maximum search count. 180 157 * 181 158 * Return: Position of the capability if found, 0 otherwise. 182 159 */ 183 - #define PCI_FIND_NEXT_EXT_CAP(read_cfg, start, cap, args...) \ 160 + #define PCI_FIND_NEXT_EXT_CAP(read_cfg, start, cap, prev_ptr, args...)
\ 184 161 ({ \ 185 162 u16 __pos = (start) ?: PCI_CFG_SPACE_SIZE; \ 186 163 u16 __found_pos = 0; \ 164 + u16 __prev_pos; \ 187 165 int __ttl, __ret; \ 188 166 u32 __header; \ 189 167 \ 168 + __prev_pos = __pos; \ 190 169 __ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8; \ 191 170 while (__ttl-- > 0 && __pos >= PCI_CFG_SPACE_SIZE) { \ 192 171 __ret = read_cfg##_dword(args, __pos, &__header); \ ··· 203 172 \ 204 173 if (PCI_EXT_CAP_ID(__header) == (cap) && __pos != start) {\ 205 174 __found_pos = __pos; \ 175 + if (prev_ptr != NULL) \ 176 + *(u16 *)prev_ptr = __prev_pos; \ 206 177 break; \ 207 178 } \ 208 179 \ 180 + __prev_pos = __pos; \ 209 181 __pos = PCI_EXT_CAP_NEXT(__header); \ 210 182 } \ 211 183 __found_pos; \ ··· 276 242 void pci_pm_power_up_and_verify_state(struct pci_dev *pci_dev); 277 243 void pci_pm_init(struct pci_dev *dev); 278 244 void pci_ea_init(struct pci_dev *dev); 245 + bool pci_ea_fixed_busnrs(struct pci_dev *dev, u8 *sec, u8 *sub); 279 246 void pci_msi_init(struct pci_dev *dev); 280 247 void pci_msix_init(struct pci_dev *dev); 281 248 bool pci_bridge_d3_possible(struct pci_dev *dev); ··· 411 376 extern unsigned long pci_hotplug_mmio_size; 412 377 extern unsigned long pci_hotplug_mmio_pref_size; 413 378 extern unsigned long pci_hotplug_bus_size; 414 - extern unsigned long pci_cardbus_io_size; 415 - extern unsigned long pci_cardbus_mem_size; 379 + 380 + static inline bool pci_is_cardbus_bridge(struct pci_dev *dev) 381 + { 382 + return dev->hdr_type == PCI_HEADER_TYPE_CARDBUS; 383 + } 384 + #ifdef CONFIG_CARDBUS 385 + unsigned long pci_cardbus_resource_alignment(struct resource *res); 386 + int pci_bus_size_cardbus_bridge(struct pci_bus *bus, 387 + struct list_head *realloc_head); 388 + int pci_cardbus_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev, 389 + u32 buses, int max, 390 + unsigned int available_buses, int pass); 391 + int pci_setup_cardbus(char *str); 392 + 393 + #else 394 + static inline unsigned long
pci_cardbus_resource_alignment(struct resource *res) 395 + { 396 + return 0; 397 + } 398 + static inline int pci_bus_size_cardbus_bridge(struct pci_bus *bus, 399 + struct list_head *realloc_head) 400 + { 401 + return -EOPNOTSUPP; 402 + } 403 + static inline int pci_cardbus_scan_bridge_extend(struct pci_bus *bus, 404 + struct pci_dev *dev, 405 + u32 buses, int max, 406 + unsigned int available_buses, 407 + int pass) 408 + { 409 + return max; 410 + } 411 + static inline int pci_setup_cardbus(char *str) { return -ENOENT; } 412 + #endif /* CONFIG_CARDBUS */ 416 413 417 414 /** 418 415 * pci_match_one_device - Tell if a PCI device structure has a matching ··· 499 432 int rrs_timeout); 500 433 bool pci_bus_generic_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *pl, 501 434 int rrs_timeout); 502 - int pci_idt_bus_quirk(struct pci_bus *bus, int devfn, u32 *pl, int rrs_timeout); 503 435 504 436 int pci_setup_device(struct pci_dev *dev); 505 437 void __pci_size_stdbars(struct pci_dev *dev, int count, ··· 506 440 int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type, 507 441 struct resource *res, unsigned int reg, u32 *sizes); 508 442 void pci_configure_ari(struct pci_dev *dev); 443 + 444 + int pci_dev_res_add_to_list(struct list_head *head, struct pci_dev *dev, 445 + struct resource *res, resource_size_t add_size, 446 + resource_size_t min_align); 509 447 void __pci_bus_size_bridges(struct pci_bus *bus, 510 448 struct list_head *realloc_head); 511 449 void __pci_bus_assign_resources(const struct pci_bus *bus, ··· 522 452 523 453 const char *pci_resource_name(struct pci_dev *dev, unsigned int i); 524 454 bool pci_resource_is_optional(const struct pci_dev *dev, int resno); 455 + static inline bool pci_resource_is_bridge_win(int resno) 456 + { 457 + return resno >= PCI_BRIDGE_RESOURCES && 458 + resno <= PCI_BRIDGE_RESOURCE_END; 459 + } 525 460 526 461 /** 527 462 * pci_resource_num - Reverse lookup resource number from device resources ··· 550 475 return
resno; 551 476 } 552 477 478 + void pbus_validate_busn(struct pci_bus *bus); 553 479 struct resource *pbus_select_window(struct pci_bus *bus, 554 480 const struct resource *res); 555 481 void pci_reassigndev_resource_alignment(struct pci_dev *dev); ··· 631 555 void __pcie_print_link_status(struct pci_dev *dev, bool verbose); 632 556 void pcie_report_downtraining(struct pci_dev *dev); 633 557 634 - static inline void __pcie_update_link_speed(struct pci_bus *bus, u16 linksta, u16 linksta2) 558 + enum pcie_link_change_reason { 559 + PCIE_LINK_RETRAIN, 560 + PCIE_ADD_BUS, 561 + PCIE_BWCTRL_ENABLE, 562 + PCIE_BWCTRL_IRQ, 563 + PCIE_HOTPLUG, 564 + }; 565 + 566 + static inline void __pcie_update_link_speed(struct pci_bus *bus, 567 + enum pcie_link_change_reason reason, 568 + u16 linksta, u16 linksta2) 635 569 { 636 570 bus->cur_bus_speed = pcie_link_speed[linksta & PCI_EXP_LNKSTA_CLS]; 637 571 bus->flit_mode = (linksta2 & PCI_EXP_LNKSTA2_FLIT) ? 1 : 0; 572 + 573 + trace_pcie_link_event(bus, 574 + reason, 575 + FIELD_GET(PCI_EXP_LNKSTA_NLW, linksta), 576 + linksta & PCI_EXP_LNKSTA_LINK_STATUS_MASK); 638 577 } 639 - void pcie_update_link_speed(struct pci_bus *bus); 578 + 579 + void pcie_update_link_speed(struct pci_bus *bus, enum pcie_link_change_reason reason); 640 580 641 581 /* Single Root I/O Virtualization */ 642 582 struct pci_sriov { ··· 1016 924 static inline void pci_resume_ptm(struct pci_dev *dev) { } 1017 925 #endif 1018 926 1019 - unsigned long pci_cardbus_resource_alignment(struct resource *); 1020 - 1021 927 static inline resource_size_t pci_resource_alignment(struct pci_dev *dev, 1022 928 struct resource *res) 1023 929 { ··· 1029 939 } 1030 940 1031 941 void pci_acs_init(struct pci_dev *dev); 942 + void pci_enable_acs(struct pci_dev *dev); 1032 943 #ifdef CONFIG_PCI_QUIRKS 1033 944 int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags); 1034 945 int pci_dev_specific_enable_acs(struct pci_dev *dev); 1035 946 int
pci_dev_specific_disable_acs_redir(struct pci_dev *dev); 947 + void pci_disable_broken_acs_cap(struct pci_dev *pdev); 1036 948 int pcie_failed_link_retrain(struct pci_dev *dev); 1037 949 #else 1038 950 static inline int pci_dev_specific_acs_enabled(struct pci_dev *dev, ··· 1050 958 { 1051 959 return -ENOTTY; 1052 960 } 961 + static inline void pci_disable_broken_acs_cap(struct pci_dev *dev) { } 1053 962 static inline int pcie_failed_link_retrain(struct pci_dev *dev) 1054 963 { 1055 964 return -ENOTTY;
+25 -4
drivers/pci/pcie/aer.c
··· 239 239 } 240 240 #endif /* CONFIG_PCIE_ECRC */ 241 241 242 - #define PCI_EXP_AER_FLAGS (PCI_EXP_DEVCTL_CERE | PCI_EXP_DEVCTL_NFERE | \ 243 - PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE) 244 - 245 242 int pcie_aer_is_native(struct pci_dev *dev) 246 243 { 247 244 struct pci_host_bridge *host = pci_find_host_bridge(dev->bus); ··· 1605 1608 pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32); 1606 1609 } 1607 1610 1611 + static int clear_status_iter(struct pci_dev *dev, void *data) 1612 + { 1613 + u16 devctl; 1614 + 1615 + /* Skip if pci_enable_pcie_error_reporting() hasn't been called yet */ 1616 + pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &devctl); 1617 + if (!(devctl & PCI_EXP_AER_FLAGS)) 1618 + return 0; 1619 + 1620 + pci_aer_clear_status(dev); 1621 + pcie_clear_device_status(dev); 1622 + return 0; 1623 + } 1624 + 1608 1625 /** 1609 1626 * aer_enable_rootport - enable Root Port's interrupts when receiving messages 1610 1627 * @rpc: pointer to a Root Port data structure ··· 1640 1629 pcie_capability_clear_word(pdev, PCI_EXP_RTCTL, 1641 1630 SYSTEM_ERROR_INTR_ON_MESG_MASK); 1642 1631 1643 - /* Clear error status */ 1632 + /* Clear error status of this Root Port or RCEC */ 1644 1633 pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, &reg32); 1645 1634 pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, reg32); 1635 + 1636 + /* Clear error status of agents reporting to this Root Port or RCEC */ 1637 + if (reg32 & AER_ERR_STATUS_MASK) { 1638 + if (pci_pcie_type(pdev) == PCI_EXP_TYPE_RC_EC) 1639 + pcie_walk_rcec(pdev, clear_status_iter, NULL); 1640 + else if (pdev->subordinate) 1641 + pci_walk_bus(pdev->subordinate, clear_status_iter, 1642 + NULL); 1643 + } 1644 + 1646 1645 pci_read_config_dword(pdev, aer + PCI_ERR_COR_STATUS, &reg32); 1647 1646 pci_write_config_dword(pdev, aer + PCI_ERR_COR_STATUS, reg32); 1648 1647 pci_read_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, &reg32);
+5 -2
drivers/pci/pcie/bwctrl.c
···
   * Update after enabling notifications & clearing status bits ensures
   * link speed is up to date.
   */
-         pcie_update_link_speed(port->subordinate);
+         pcie_update_link_speed(port->subordinate, PCIE_BWCTRL_ENABLE);
  }

  static void pcie_bwnotif_disable(struct pci_dev *port)
···
   * speed (inside pcie_update_link_speed()) after LBMS has been
   * cleared to avoid missing link speed changes.
   */
-         pcie_update_link_speed(port->subordinate);
+         pcie_update_link_speed(port->subordinate, PCIE_BWCTRL_IRQ);

          return IRQ_HANDLED;
  }
···
  {
          struct pci_dev *port = srv->port;
          int ret;
+
+         if (port->no_bw_notif)
+                 return -ENODEV;

          /* Can happen if we run out of bus numbers during enumeration. */
          if (!port->subordinate)
+28 -27
drivers/pci/pcie/portdrv.c
···
          pci_free_irq_vectors(dev);
  }

+ static int pcie_port_bus_match(struct device *dev, const struct device_driver *drv)
+ {
+         struct pcie_device *pciedev = to_pcie_device(dev);
+         const struct pcie_port_service_driver *driver = to_service_driver(drv);
+
+         if (driver->service != pciedev->service)
+                 return 0;
+
+         if (driver->port_type != PCIE_ANY_PORT &&
+             driver->port_type != pci_pcie_type(pciedev->port))
+                 return 0;
+
+         return 1;
+ }
+
  /**
- * pcie_port_probe_service - probe driver for given PCI Express port service
+ * pcie_port_bus_probe - probe driver for given PCI Express port service
   * @dev: PCI Express port service device to probe against
   *
   * If PCI Express port service driver is registered with
   * pcie_port_service_register(), this function will be called by the driver core
   * whenever match is found between the driver and a port service device.
   */
- static int pcie_port_probe_service(struct device *dev)
+ static int pcie_port_bus_probe(struct device *dev)
  {
          struct pcie_device *pciedev;
          struct pcie_port_service_driver *driver;
          int status;
-
-         if (!dev || !dev->driver)
-                 return -ENODEV;

          driver = to_service_driver(dev->driver);
          if (!driver || !driver->probe)
···
  }

  /**
- * pcie_port_remove_service - detach driver from given PCI Express port service
+ * pcie_port_bus_remove - detach driver from given PCI Express port service
   * @dev: PCI Express port service device to handle
   *
   * If PCI Express port service driver is registered with
···
   * when device_unregister() is called for the port service device associated
   * with the driver.
   */
- static int pcie_port_remove_service(struct device *dev)
+ static void pcie_port_bus_remove(struct device *dev)
  {
          struct pcie_device *pciedev;
          struct pcie_port_service_driver *driver;
-
-         if (!dev || !dev->driver)
-                 return 0;

          pciedev = to_pcie_device(dev);
          driver = to_service_driver(dev->driver);
-         if (driver && driver->remove) {
+         if (driver && driver->remove)
                  driver->remove(pciedev);
-                 put_device(dev);
-         }
-         return 0;
+
+         put_device(dev);
  }

- /**
- * pcie_port_shutdown_service - shut down given PCI Express port service
- * @dev: PCI Express port service device to handle
- *
- * If PCI Express port service driver is registered with
- * pcie_port_service_register(), this function will be called by the driver core
- * when device_shutdown() is called for the port service device associated
- * with the driver.
- */
- static void pcie_port_shutdown_service(struct device *dev) {}
+ const struct bus_type pcie_port_bus_type = {
+         .name = "pci_express",
+         .match = pcie_port_bus_match,
+         .probe = pcie_port_bus_probe,
+         .remove = pcie_port_bus_remove,
+ };

  /**
   * pcie_port_service_register - register PCI Express port service driver
···
          new->driver.name = new->name;
          new->driver.bus = &pcie_port_bus_type;
-         new->driver.probe = pcie_port_probe_service;
-         new->driver.remove = pcie_port_remove_service;
-         new->driver.shutdown = pcie_port_shutdown_service;

          return driver_register(&new->driver);
  }
+4 -1
drivers/pci/pcie/ptm.c
···
                  return NULL;

          dirname = devm_kasprintf(dev, GFP_KERNEL, "pcie_ptm_%s", dev_name(dev));
-         if (!dirname)
+         if (!dirname) {
+                 kfree(ptm_debugfs);
                  return NULL;
+         }

          ptm_debugfs->debugfs = debugfs_create_dir(dirname, NULL);
          ptm_debugfs->pdata = pdata;
···
          mutex_destroy(&ptm_debugfs->lock);
          debugfs_remove_recursive(ptm_debugfs->debugfs);
+         kfree(ptm_debugfs);
  }
  EXPORT_SYMBOL_GPL(pcie_ptm_destroy_debugfs);
  #endif
+90 -159
drivers/pci/probe.c
···
  #include <linux/platform_device.h>
  #include <linux/pci_hotplug.h>
  #include <linux/slab.h>
+ #include <linux/sprintf.h>
  #include <linux/module.h>
  #include <linux/cpumask.h>
  #include <linux/aer.h>
···
  #include <linux/irqdomain.h>
  #include <linux/pm_runtime.h>
  #include <linux/bitfield.h>
+ #include <trace/events/pci.h>
  #include "pci.h"
-
- #define CARDBUS_LATENCY_TIMER 176     /* secondary latency timer */
- #define CARDBUS_RESERVE_BUSNR 3

  static struct resource busn_resource = {
          .name = "PCI busn",
···
          if ((sizeof(pci_bus_addr_t) < 8 || sizeof(resource_size_t) < 8)
              && sz64 > 0x100000000ULL) {
                  res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED;
-                 res->start = 0;
-                 res->end = 0;
+                 resource_set_range(res, 0, 0);
                  pci_err(dev, "%s: can't handle BAR larger than 4GB (size %#010llx)\n",
                          res_name, (unsigned long long)sz64);
                  goto out;
···
          if ((sizeof(pci_bus_addr_t) < 8) && l) {
                  /* Above 32-bit boundary; try to reallocate */
                  res->flags |= IORESOURCE_UNSET;
-                 res->start = 0;
-                 res->end = sz64 - 1;
+                 resource_set_range(res, 0, sz64);
                  pci_info(dev, "%s: can't handle BAR above 4GB (bus address %#010llx)\n",
                           res_name, (unsigned long long)l64);
                  goto out;
···
          pci_read_config_dword(bridge, PCI_PRIMARY_BUS, &buses);
          res.flags = IORESOURCE_BUS;
-         res.start = (buses >> 8) & 0xff;
-         res.end = (buses >> 16) & 0xff;
+         res.start = FIELD_GET(PCI_SECONDARY_BUS_MASK, buses);
+         res.end = FIELD_GET(PCI_SUBORDINATE_BUS_MASK, buses);
          pci_info(bridge, "PCI bridge to %pR%s\n", &res,
                   bridge->transparent ? " (subtractive decode)" : "");
···
  }
  EXPORT_SYMBOL_GPL(pci_speed_string);

- void pcie_update_link_speed(struct pci_bus *bus)
+ void pcie_update_link_speed(struct pci_bus *bus,
+                             enum pcie_link_change_reason reason)
  {
          struct pci_dev *bridge = bus->self;
          u16 linksta, linksta2;

          pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &linksta);
          pcie_capability_read_word(bridge, PCI_EXP_LNKSTA2, &linksta2);
-         __pcie_update_link_speed(bus, linksta, linksta2);
+
+         __pcie_update_link_speed(bus, reason, linksta, linksta2);
  }
  EXPORT_SYMBOL_GPL(pcie_update_link_speed);
···
                  pcie_capability_read_dword(bridge, PCI_EXP_LNKCAP, &linkcap);
                  bus->max_bus_speed = pcie_link_speed[linkcap & PCI_EXP_LNKCAP_SLS];

-                 pcie_update_link_speed(bus);
+                 pcie_update_link_speed(bus, PCIE_ADD_BUS);
          }
  }
···
  static unsigned int pci_scan_child_bus_extend(struct pci_bus *bus,
                                                unsigned int available_buses);

+ void pbus_validate_busn(struct pci_bus *bus)
+ {
+         struct pci_bus *upstream = bus->parent;
+         struct pci_dev *bridge = bus->self;
+
+         /* Check that all devices are accessible */
+         while (upstream->parent) {
+                 if ((bus->busn_res.end > upstream->busn_res.end) ||
+                     (bus->number > upstream->busn_res.end) ||
+                     (bus->number < upstream->number) ||
+                     (bus->busn_res.end < upstream->number)) {
+                         pci_info(bridge, "devices behind bridge are unusable because %pR cannot be assigned for them\n",
+                                  &bus->busn_res);
+                         break;
+                 }
+                 upstream = upstream->parent;
+         }
+ }
+
  /**
   * pci_ea_fixed_busnrs() - Read fixed Secondary and Subordinate bus
   * numbers from EA capability.
···
   * and subordinate bus numbers, return true with the bus numbers in @sec
   * and @sub.  Otherwise return false.
   */
- static bool pci_ea_fixed_busnrs(struct pci_dev *dev, u8 *sec, u8 *sub)
+ bool pci_ea_fixed_busnrs(struct pci_dev *dev, u8 *sec, u8 *sub)
  {
          int ea, offset;
          u32 dw;
···
                            int pass)
  {
          struct pci_bus *child;
-         int is_cardbus = (dev->hdr_type == PCI_HEADER_TYPE_CARDBUS);
-         u32 buses, i, j = 0;
+         u32 buses;
          u16 bctl;
          u8 primary, secondary, subordinate;
          int broken = 0;
···
          pm_runtime_get_sync(&dev->dev);

          pci_read_config_dword(dev, PCI_PRIMARY_BUS, &buses);
-         primary = buses & 0xFF;
-         secondary = (buses >> 8) & 0xFF;
-         subordinate = (buses >> 16) & 0xFF;
+         primary = FIELD_GET(PCI_PRIMARY_BUS_MASK, buses);
+         secondary = FIELD_GET(PCI_SECONDARY_BUS_MASK, buses);
+         subordinate = FIELD_GET(PCI_SUBORDINATE_BUS_MASK, buses);

          pci_dbg(dev, "scanning [bus %02x-%02x] behind bridge, pass %d\n",
                  secondary, subordinate, pass);
···
          pci_write_config_word(dev, PCI_BRIDGE_CONTROL,
                                bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT);

-         if ((secondary || subordinate) && !pcibios_assign_all_busses() &&
-             !is_cardbus && !broken) {
+         if (pci_is_cardbus_bridge(dev)) {
+                 max = pci_cardbus_scan_bridge_extend(bus, dev, buses, max,
+                                                      available_buses,
+                                                      pass);
+                 goto out;
+         }
+
+         if ((secondary || subordinate) &&
+             !pcibios_assign_all_busses() && !broken) {
                  unsigned int cmax, buses;

                  /*
···
           * do in the second pass.
           */
          if (!pass) {
-                 if (pcibios_assign_all_busses() || broken || is_cardbus)
+                 if (pcibios_assign_all_busses() || broken)

                          /*
                           * Temporarily disable forwarding of the
···
                           * ranges.
                           */
                          pci_write_config_dword(dev, PCI_PRIMARY_BUS,
-                                                buses & ~0xffffff);
+                                                buses & PCI_SEC_LATENCY_TIMER_MASK);
                  goto out;
          }
···
          if (available_buses)
                  available_buses--;

-         buses = (buses & 0xff000000)
-               | ((unsigned int)(child->primary) << 0)
-               | ((unsigned int)(child->busn_res.start) << 8)
-               | ((unsigned int)(child->busn_res.end) << 16);
-
-         /*
-          * yenta.c forces a secondary latency timer of 176.
-          * Copy that behaviour here.
-          */
-         if (is_cardbus) {
-                 buses &= ~0xff000000;
-                 buses |= CARDBUS_LATENCY_TIMER << 24;
-         }
+         buses = (buses & PCI_SEC_LATENCY_TIMER_MASK) |
+                 FIELD_PREP(PCI_PRIMARY_BUS_MASK, child->primary) |
+                 FIELD_PREP(PCI_SECONDARY_BUS_MASK, child->busn_res.start) |
+                 FIELD_PREP(PCI_SUBORDINATE_BUS_MASK, child->busn_res.end);

          /* We need to blast all three values with a single write */
          pci_write_config_dword(dev, PCI_PRIMARY_BUS, buses);

-         if (!is_cardbus) {
-                 child->bridge_ctl = bctl;
-                 max = pci_scan_child_bus_extend(child, available_buses);
-         } else {
-
-                 /*
-                  * For CardBus bridges, we leave 4 bus numbers as
-                  * cards with a PCI-to-PCI bridge can be inserted
-                  * later.
-                  */
-                 for (i = 0; i < CARDBUS_RESERVE_BUSNR; i++) {
-                         struct pci_bus *parent = bus;
-                         if (pci_find_bus(pci_domain_nr(bus),
-                                          max+i+1))
-                                 break;
-                         while (parent->parent) {
-                                 if ((!pcibios_assign_all_busses()) &&
-                                     (parent->busn_res.end > max) &&
-                                     (parent->busn_res.end <= max+i)) {
-                                         j = 1;
-                                 }
-                                 parent = parent->parent;
-                         }
-                         if (j) {
-
-                                 /*
-                                  * Often, there are two CardBus
-                                  * bridges -- try to leave one
-                                  * valid bus number for each one.
-                                  */
-                                 i /= 2;
-                                 break;
-                         }
-                 }
-                 max += i;
-         }
+         child->bridge_ctl = bctl;
+         max = pci_scan_child_bus_extend(child, available_buses);

          /*
           * Set subordinate bus number to its real value.
···
                  pci_bus_update_busn_res_end(child, max);
                  pci_write_config_byte(dev, PCI_SUBORDINATE_BUS, max);
          }
+         scnprintf(child->name, sizeof(child->name), "PCI Bus %04x:%02x",
+                   pci_domain_nr(bus), child->number);

-         sprintf(child->name,
-                 (is_cardbus ? "PCI CardBus %04x:%02x" : "PCI Bus %04x:%02x"),
-                 pci_domain_nr(bus), child->number);
-
-         /* Check that all devices are accessible */
-         while (bus->parent) {
-                 if ((child->busn_res.end > bus->busn_res.end) ||
-                     (child->number > bus->busn_res.end) ||
-                     (child->number < bus->number) ||
-                     (child->busn_res.end < bus->number)) {
-                         dev_info(&dev->dev, "devices behind bridge are unusable because %pR cannot be assigned for them\n",
-                                  &child->busn_res);
-                         break;
-                 }
-                 bus = bus->parent;
-         }
+         pbus_validate_busn(child);

  out:
          /* Clear errors in the Secondary Status Register */
···
          u16 ctl;
          int ret;

-         if (!pci_is_pcie(dev))
+         /* PCI_EXP_DEVCTL_EXT_TAG is RsvdP in VFs */
+         if (!pci_is_pcie(dev) || dev->is_virtfn)
                  return 0;

          ret = pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &cap);
···
          }
  }

+ static void pci_configure_rcb(struct pci_dev *dev)
+ {
+         struct pci_dev *rp;
+         u16 rp_lnkctl;
+
+         /*
+          * Per PCIe r7.0, sec 7.5.3.7, RCB is only meaningful in Root Ports
+          * (where it is read-only), Endpoints, and Bridges. It may only be
+          * set for Endpoints and Bridges if it is set in the Root Port. For
+          * Endpoints, it is 'RsvdP' for Virtual Functions.
+          */
+         if (!pci_is_pcie(dev) ||
+             pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
+             pci_pcie_type(dev) == PCI_EXP_TYPE_UPSTREAM ||
+             pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
+             pci_pcie_type(dev) == PCI_EXP_TYPE_RC_EC ||
+             dev->is_virtfn)
+                 return;
+
+         /* Root Port often not visible to virtualized guests */
+         rp = pcie_find_root_port(dev);
+         if (!rp)
+                 return;
+
+         pcie_capability_read_word(rp, PCI_EXP_LNKCTL, &rp_lnkctl);
+         pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL,
+                                            PCI_EXP_LNKCTL_RCB,
+                                            (rp_lnkctl & PCI_EXP_LNKCTL_RCB) ?
+                                            PCI_EXP_LNKCTL_RCB : 0);
+ }
+
  static void pci_configure_device(struct pci_dev *dev)
  {
          pci_configure_mps(dev);
···
          pci_configure_aspm_l1ss(dev);
          pci_configure_eetlp_prefix(dev);
          pci_configure_serr(dev);
+         pci_configure_rcb(dev);

          pci_acpi_program_hp_params(dev);
  }
···
  bool pci_bus_read_dev_vendor_id(struct pci_bus *bus, int devfn, u32 *l,
                                  int timeout)
  {
- #ifdef CONFIG_PCI_QUIRKS
-         struct pci_dev *bridge = bus->self;
-
-         /*
-          * Certain IDT switches have an issue where they improperly trigger
-          * ACS Source Validation errors on completions for config reads.
-          */
-         if (bridge && bridge->vendor == PCI_VENDOR_ID_IDT &&
-             bridge->device == 0x80b5)
-                 return pci_idt_bus_quirk(bus, devfn, l, timeout);
- #endif
-
          return pci_bus_generic_read_dev_vendor_id(bus, devfn, l, timeout);
  }
  EXPORT_SYMBOL(pci_bus_read_dev_vendor_id);
-
- #if IS_ENABLED(CONFIG_PCI_PWRCTRL)
- static struct platform_device *pci_pwrctrl_create_device(struct pci_bus *bus, int devfn)
- {
-         struct pci_host_bridge *host = pci_find_host_bridge(bus);
-         struct platform_device *pdev;
-         struct device_node *np;
-
-         np = of_pci_find_child_device(dev_of_node(&bus->dev), devfn);
-         if (!np)
-                 return NULL;
-
-         pdev = of_find_device_by_node(np);
-         if (pdev) {
-                 put_device(&pdev->dev);
-                 goto err_put_of_node;
-         }
-
-         /*
-          * First check whether the pwrctrl device really needs to be created or
-          * not. This is decided based on at least one of the power supplies
-          * being defined in the devicetree node of the device.
-          */
-         if (!of_pci_supply_present(np)) {
-                 pr_debug("PCI/pwrctrl: Skipping OF node: %s\n", np->name);
-                 goto err_put_of_node;
-         }
-
-         /* Now create the pwrctrl device */
-         pdev = of_platform_device_create(np, NULL, &host->dev);
-         if (!pdev) {
-                 pr_err("PCI/pwrctrl: Failed to create pwrctrl device for node: %s\n", np->name);
-                 goto err_put_of_node;
-         }
-
-         of_node_put(np);
-
-         return pdev;
-
- err_put_of_node:
-         of_node_put(np);
-
-         return NULL;
- }
- #else
- static struct platform_device *pci_pwrctrl_create_device(struct pci_bus *bus, int devfn)
- {
-         return NULL;
- }
- #endif

  /*
   * Read the config data for a PCI device, sanity-check it,
···
  {
          struct pci_dev *dev;
          u32 l;
-
-         /*
-          * Create pwrctrl device (if required) for the PCI device to handle the
-          * power state. If the pwrctrl device is created, then skip scanning
-          * further as the pwrctrl core will rescan the bus after powering on
-          * the device.
-          */
-         if (pci_pwrctrl_create_device(bus, devfn))
-                 return NULL;

          if (!pci_bus_read_dev_vendor_id(bus, devfn, &l, 60*1000))
                  return NULL;
+1
drivers/pci/pwrctrl/Kconfig
···
  config PCI_PWRCTRL_SLOT
          tristate "PCI Power Control driver for PCI slots"
+         select POWER_SEQUENCING
          select PCI_PWRCTRL
          help
            Say Y here to enable the PCI Power Control driver to control the power
+245 -15
drivers/pci/pwrctrl/core.c
···
   * Copyright (C) 2024 Linaro Ltd.
   */

+ #define dev_fmt(fmt) "pwrctrl: " fmt
+
  #include <linux/device.h>
  #include <linux/export.h>
  #include <linux/kernel.h>
+ #include <linux/of.h>
+ #include <linux/of_graph.h>
+ #include <linux/of_platform.h>
  #include <linux/pci.h>
  #include <linux/pci-pwrctrl.h>
+ #include <linux/platform_device.h>
  #include <linux/property.h>
  #include <linux/slab.h>
+
+ #include "../pci.h"

  static int pci_pwrctrl_notify(struct notifier_block *nb, unsigned long action,
                                void *data)
···
          return NOTIFY_DONE;
  }

- static void rescan_work_func(struct work_struct *work)
- {
-         struct pci_pwrctrl *pwrctrl = container_of(work,
-                                                    struct pci_pwrctrl, work);
-
-         pci_lock_rescan_remove();
-         pci_rescan_bus(to_pci_host_bridge(pwrctrl->dev->parent)->bus);
-         pci_unlock_rescan_remove();
- }
-
  /**
   * pci_pwrctrl_init() - Initialize the PCI power control context struct
   *
···
  void pci_pwrctrl_init(struct pci_pwrctrl *pwrctrl, struct device *dev)
  {
          pwrctrl->dev = dev;
-         INIT_WORK(&pwrctrl->work, rescan_work_func);
+         dev_set_drvdata(dev, pwrctrl);
  }
  EXPORT_SYMBOL_GPL(pci_pwrctrl_init);
···
          if (ret)
                  return ret;

-         schedule_work(&pwrctrl->work);
-
          return 0;
  }
  EXPORT_SYMBOL_GPL(pci_pwrctrl_device_set_ready);
···
   */
  void pci_pwrctrl_device_unset_ready(struct pci_pwrctrl *pwrctrl)
  {
-         cancel_work_sync(&pwrctrl->work);
-
          /*
           * We don't have to delete the link here. Typically, this function
           * is only called when the power control device is being detached. If
···
                                          pwrctrl);
  }
  EXPORT_SYMBOL_GPL(devm_pci_pwrctrl_device_set_ready);
+
+ static int __pci_pwrctrl_power_off_device(struct device *dev)
+ {
+         struct pci_pwrctrl *pwrctrl = dev_get_drvdata(dev);
+
+         if (!pwrctrl)
+                 return 0;
+
+         return pwrctrl->power_off(pwrctrl);
+ }
+
+ static void pci_pwrctrl_power_off_device(struct device_node *np)
+ {
+         struct platform_device *pdev;
+         int ret;
+
+         for_each_available_child_of_node_scoped(np, child)
+                 pci_pwrctrl_power_off_device(child);
+
+         pdev = of_find_device_by_node(np);
+         if (!pdev)
+                 return;
+
+         if (device_is_bound(&pdev->dev)) {
+                 ret = __pci_pwrctrl_power_off_device(&pdev->dev);
+                 if (ret)
+                         dev_err(&pdev->dev, "Failed to power off device: %d", ret);
+         }
+
+         platform_device_put(pdev);
+ }
+
+ /**
+  * pci_pwrctrl_power_off_devices - Power off pwrctrl devices
+  *
+  * @parent: PCI host controller device
+  *
+  * Recursively traverse all pwrctrl devices for the devicetree hierarchy
+  * below the specified PCI host controller and power them off in a depth
+  * first manner.
+  */
+ void pci_pwrctrl_power_off_devices(struct device *parent)
+ {
+         struct device_node *np = parent->of_node;
+
+         for_each_available_child_of_node_scoped(np, child)
+                 pci_pwrctrl_power_off_device(child);
+ }
+ EXPORT_SYMBOL_GPL(pci_pwrctrl_power_off_devices);
+
+ static int __pci_pwrctrl_power_on_device(struct device *dev)
+ {
+         struct pci_pwrctrl *pwrctrl = dev_get_drvdata(dev);
+
+         if (!pwrctrl)
+                 return 0;
+
+         return pwrctrl->power_on(pwrctrl);
+ }
+
+ /*
+  * Power on the devices in a depth first manner. Before powering on the device,
+  * make sure its driver is bound.
+  */
+ static int pci_pwrctrl_power_on_device(struct device_node *np)
+ {
+         struct platform_device *pdev;
+         int ret;
+
+         for_each_available_child_of_node_scoped(np, child) {
+                 ret = pci_pwrctrl_power_on_device(child);
+                 if (ret)
+                         return ret;
+         }
+
+         pdev = of_find_device_by_node(np);
+         if (!pdev)
+                 return 0;
+
+         if (device_is_bound(&pdev->dev)) {
+                 ret = __pci_pwrctrl_power_on_device(&pdev->dev);
+         } else {
+                 /* FIXME: Use blocking wait instead of probe deferral */
+                 dev_dbg(&pdev->dev, "driver is not bound\n");
+                 ret = -EPROBE_DEFER;
+         }
+
+         platform_device_put(pdev);
+
+         return ret;
+ }
+
+ /**
+  * pci_pwrctrl_power_on_devices - Power on pwrctrl devices
+  *
+  * @parent: PCI host controller device
+  *
+  * Recursively traverse all pwrctrl devices for the devicetree hierarchy
+  * below the specified PCI host controller and power them on in a depth
+  * first manner. On error, all powered on devices will be powered off.
+  *
+  * Return: 0 on success, -EPROBE_DEFER if any pwrctrl driver is not bound, an
+  * appropriate error code otherwise.
+  */
+ int pci_pwrctrl_power_on_devices(struct device *parent)
+ {
+         struct device_node *np = parent->of_node;
+         struct device_node *child = NULL;
+         int ret;
+
+         for_each_available_child_of_node(np, child) {
+                 ret = pci_pwrctrl_power_on_device(child);
+                 if (ret)
+                         goto err_power_off;
+         }
+
+         return 0;
+
+ err_power_off:
+         for_each_available_child_of_node_scoped(np, tmp) {
+                 if (tmp == child)
+                         break;
+                 pci_pwrctrl_power_off_device(tmp);
+         }
+         of_node_put(child);
+
+         return ret;
+ }
+ EXPORT_SYMBOL_GPL(pci_pwrctrl_power_on_devices);
+
+ static int pci_pwrctrl_create_device(struct device_node *np,
+                                      struct device *parent)
+ {
+         struct platform_device *pdev;
+         int ret;
+
+         for_each_available_child_of_node_scoped(np, child) {
+                 ret = pci_pwrctrl_create_device(child, parent);
+                 if (ret)
+                         return ret;
+         }
+
+         /* Bail out if the platform device is already available for the node */
+         pdev = of_find_device_by_node(np);
+         if (pdev) {
+                 platform_device_put(pdev);
+                 return 0;
+         }
+
+         /*
+          * Sanity check to make sure that the node has the compatible property
+          * to allow driver binding.
+          */
+         if (!of_property_present(np, "compatible"))
+                 return 0;
+
+         /*
+          * Check whether the pwrctrl device really needs to be created or not.
+          * This is decided based on at least one of the power supplies defined
+          * in the devicetree node of the device or the graph property.
+          */
+         if (!of_pci_supply_present(np) && !of_graph_is_present(np)) {
+                 dev_dbg(parent, "Skipping OF node: %s\n", np->name);
+                 return 0;
+         }
+
+         /* Now create the pwrctrl device */
+         pdev = of_platform_device_create(np, NULL, parent);
+         if (!pdev) {
+                 dev_err(parent, "Failed to create pwrctrl device for node: %s\n", np->name);
+                 return -EINVAL;
+         }
+
+         return 0;
+ }
+
+ /**
+  * pci_pwrctrl_create_devices - Create pwrctrl devices
+  *
+  * @parent: PCI host controller device
+  *
+  * Recursively create pwrctrl devices for the devicetree hierarchy below
+  * the specified PCI host controller in a depth first manner. On error, all
+  * created devices will be destroyed.
+  *
+  * Return: 0 on success, negative error number on error.
+  */
+ int pci_pwrctrl_create_devices(struct device *parent)
+ {
+         int ret;
+
+         for_each_available_child_of_node_scoped(parent->of_node, child) {
+                 ret = pci_pwrctrl_create_device(child, parent);
+                 if (ret) {
+                         pci_pwrctrl_destroy_devices(parent);
+                         return ret;
+                 }
+         }
+
+         return 0;
+ }
+ EXPORT_SYMBOL_GPL(pci_pwrctrl_create_devices);
+
+ static void pci_pwrctrl_destroy_device(struct device_node *np)
+ {
+         struct platform_device *pdev;
+
+         for_each_available_child_of_node_scoped(np, child)
+                 pci_pwrctrl_destroy_device(child);
+
+         pdev = of_find_device_by_node(np);
+         if (!pdev)
+                 return;
+
+         of_device_unregister(pdev);
+         platform_device_put(pdev);
+
+         of_node_clear_flag(np, OF_POPULATED);
+ }
+
+ /**
+  * pci_pwrctrl_destroy_devices - Destroy pwrctrl devices
+  *
+  * @parent: PCI host controller device
+  *
+  * Recursively destroy pwrctrl devices for the devicetree hierarchy below
+  * the specified PCI host controller in a depth first manner.
+  */
+ void pci_pwrctrl_destroy_devices(struct device *parent)
+ {
+         struct device_node *np = parent->of_node;
+
+         for_each_available_child_of_node_scoped(np, child)
+                 pci_pwrctrl_destroy_device(child);
+ }
+ EXPORT_SYMBOL_GPL(pci_pwrctrl_destroy_devices);

  MODULE_AUTHOR("Bartosz Golaszewski <bartosz.golaszewski@linaro.org>");
  MODULE_DESCRIPTION("PCI Device Power Control core driver");
+49 -35
drivers/pci/pwrctrl/pci-pwrctrl-pwrseq.c
···
  #include <linux/slab.h>
  #include <linux/types.h>

- struct pci_pwrctrl_pwrseq_data {
-         struct pci_pwrctrl ctx;
+ struct pwrseq_pwrctrl {
+         struct pci_pwrctrl pwrctrl;
          struct pwrseq_desc *pwrseq;
  };

- struct pci_pwrctrl_pwrseq_pdata {
+ struct pwrseq_pwrctrl_pdata {
          const char *target;
          /*
           * Called before doing anything else to perform device-specific
···
          int (*validate_device)(struct device *dev);
  };

- static int pci_pwrctrl_pwrseq_qcm_wcn_validate_device(struct device *dev)
+ static int pwrseq_pwrctrl_qcm_wcn_validate_device(struct device *dev)
  {
          /*
           * Old device trees for some platforms already define wifi nodes for
···
          return 0;
  }

- static const struct pci_pwrctrl_pwrseq_pdata pci_pwrctrl_pwrseq_qcom_wcn_pdata = {
+ static const struct pwrseq_pwrctrl_pdata pwrseq_pwrctrl_qcom_wcn_pdata = {
          .target = "wlan",
-         .validate_device = pci_pwrctrl_pwrseq_qcm_wcn_validate_device,
+         .validate_device = pwrseq_pwrctrl_qcm_wcn_validate_device,
  };

- static void devm_pci_pwrctrl_pwrseq_power_off(void *data)
+ static int pwrseq_pwrctrl_power_on(struct pci_pwrctrl *pwrctrl)
  {
-         struct pwrseq_desc *pwrseq = data;
+         struct pwrseq_pwrctrl *pwrseq = container_of(pwrctrl,
+                                         struct pwrseq_pwrctrl, pwrctrl);

-         pwrseq_power_off(pwrseq);
+         return pwrseq_power_on(pwrseq->pwrseq);
  }

- static int pci_pwrctrl_pwrseq_probe(struct platform_device *pdev)
+ static int pwrseq_pwrctrl_power_off(struct pci_pwrctrl *pwrctrl)
  {
-         const struct pci_pwrctrl_pwrseq_pdata *pdata;
-         struct pci_pwrctrl_pwrseq_data *data;
+         struct pwrseq_pwrctrl *pwrseq = container_of(pwrctrl,
+                                         struct pwrseq_pwrctrl, pwrctrl);
+
+         return pwrseq_power_off(pwrseq->pwrseq);
+ }
+
+ static void devm_pwrseq_pwrctrl_power_off(void *data)
+ {
+         struct pwrseq_pwrctrl *pwrseq = data;
+
+         pwrseq_pwrctrl_power_off(&pwrseq->pwrctrl);
+ }
+
+ static int pwrseq_pwrctrl_probe(struct platform_device *pdev)
+ {
+         const struct pwrseq_pwrctrl_pdata *pdata;
+         struct pwrseq_pwrctrl *pwrseq;
          struct device *dev = &pdev->dev;
          int ret;
···
                  return ret;
          }

-         data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
-         if (!data)
+         pwrseq = devm_kzalloc(dev, sizeof(*pwrseq), GFP_KERNEL);
+         if (!pwrseq)
                  return -ENOMEM;

-         data->pwrseq = devm_pwrseq_get(dev, pdata->target);
-         if (IS_ERR(data->pwrseq))
-                 return dev_err_probe(dev, PTR_ERR(data->pwrseq),
+         pwrseq->pwrseq = devm_pwrseq_get(dev, pdata->target);
+         if (IS_ERR(pwrseq->pwrseq))
+                 return dev_err_probe(dev, PTR_ERR(pwrseq->pwrseq),
                                       "Failed to get the power sequencer\n");

-         ret = pwrseq_power_on(data->pwrseq);
-         if (ret)
-                 return dev_err_probe(dev, ret,
-                                      "Failed to power-on the device\n");
-
-         ret = devm_add_action_or_reset(dev, devm_pci_pwrctrl_pwrseq_power_off,
-                                        data->pwrseq);
+         ret = devm_add_action_or_reset(dev, devm_pwrseq_pwrctrl_power_off,
+                                        pwrseq);
          if (ret)
                  return ret;

-         pci_pwrctrl_init(&data->ctx, dev);
+         pwrseq->pwrctrl.power_on = pwrseq_pwrctrl_power_on;
+         pwrseq->pwrctrl.power_off = pwrseq_pwrctrl_power_off;

-         ret = devm_pci_pwrctrl_device_set_ready(dev, &data->ctx);
+         pci_pwrctrl_init(&pwrseq->pwrctrl, dev);
+
+         ret = devm_pci_pwrctrl_device_set_ready(dev, &pwrseq->pwrctrl);
          if (ret)
                  return dev_err_probe(dev, ret,
                                       "Failed to register the pwrctrl wrapper\n");
···
          return 0;
  }

- static const struct of_device_id pci_pwrctrl_pwrseq_of_match[] = {
+ static const struct of_device_id pwrseq_pwrctrl_of_match[] = {
          {
                  /* ATH11K in QCA6390 package. */
                  .compatible = "pci17cb,1101",
-                 .data = &pci_pwrctrl_pwrseq_qcom_wcn_pdata,
+                 .data = &pwrseq_pwrctrl_qcom_wcn_pdata,
          },
          {
                  /* ATH11K in WCN6855 package. */
                  .compatible = "pci17cb,1103",
-                 .data = &pci_pwrctrl_pwrseq_qcom_wcn_pdata,
+                 .data = &pwrseq_pwrctrl_qcom_wcn_pdata,
          },
          {
                  /* ATH12K in WCN7850 package. */
                  .compatible = "pci17cb,1107",
-                 .data = &pci_pwrctrl_pwrseq_qcom_wcn_pdata,
+                 .data = &pwrseq_pwrctrl_qcom_wcn_pdata,
          },
          { }
  };
- MODULE_DEVICE_TABLE(of, pci_pwrctrl_pwrseq_of_match);
+ MODULE_DEVICE_TABLE(of, pwrseq_pwrctrl_of_match);

- static struct platform_driver pci_pwrctrl_pwrseq_driver = {
+ static struct platform_driver pwrseq_pwrctrl_driver = {
          .driver = {
                  .name = "pci-pwrctrl-pwrseq",
-                 .of_match_table = pci_pwrctrl_pwrseq_of_match,
+                 .of_match_table = pwrseq_pwrctrl_of_match,
          },
-         .probe = pci_pwrctrl_pwrseq_probe,
+         .probe = pwrseq_pwrctrl_probe,
  };
- module_platform_driver(pci_pwrctrl_pwrseq_driver);
+ module_platform_driver(pwrseq_pwrctrl_driver);

  MODULE_AUTHOR("Bartosz Golaszewski <bartosz.golaszewski@linaro.org>")
  MODULE_DESCRIPTION("Generic PCI Power Control module for power sequenced devices");
+120 -106
drivers/pci/pwrctrl/pci-pwrctrl-tc9563.c
··· 59 59 #define TC9563_POWER_CONTROL_OVREN 0x82b2c8 60 60 61 61 #define TC9563_GPIO_MASK 0xfffffff3 62 - #define TC9563_GPIO_DEASSERT_BITS 0xc /* Bits to clear for GPIO deassert */ 62 + #define TC9563_GPIO_DEASSERT_BITS 0xc /* Clear to deassert GPIO */ 63 63 64 64 #define TC9563_TX_MARGIN_MIN_UA 400000 65 65 ··· 69 69 */ 70 70 #define TC9563_OSC_STAB_DELAY_US (10 * USEC_PER_MSEC) 71 71 72 - #define TC9563_L0S_L1_DELAY_UNIT_NS 256 /* Each unit represents 256 nanoseconds */ 72 + #define TC9563_L0S_L1_DELAY_UNIT_NS 256 /* Each unit represents 256 ns */ 73 73 74 74 struct tc9563_pwrctrl_reg_setting { 75 75 unsigned int offset; ··· 105 105 "vddio18", 106 106 }; 107 107 108 - struct tc9563_pwrctrl_ctx { 108 + struct tc9563_pwrctrl { 109 + struct pci_pwrctrl pwrctrl; 109 110 struct regulator_bulk_data supplies[TC9563_PWRCTL_MAX_SUPPLY]; 110 111 struct tc9563_pwrctrl_cfg cfg[TC9563_MAX]; 111 112 struct gpio_desc *reset_gpio; 112 113 struct i2c_adapter *adapter; 113 114 struct i2c_client *client; 114 - struct pci_pwrctrl pwrctrl; 115 115 }; 116 116 117 117 /* ··· 217 217 } 218 218 219 219 static int tc9563_pwrctrl_i2c_bulk_write(struct i2c_client *client, 220 - const struct tc9563_pwrctrl_reg_setting *seq, int len) 220 + const struct tc9563_pwrctrl_reg_setting *seq, 221 + int len) 221 222 { 222 223 int ret, i; 223 224 ··· 231 230 return 0; 232 231 } 233 232 234 - static int tc9563_pwrctrl_disable_port(struct tc9563_pwrctrl_ctx *ctx, 233 + static int tc9563_pwrctrl_disable_port(struct tc9563_pwrctrl *tc9563, 235 234 enum tc9563_pwrctrl_ports port) 236 235 { 237 - struct tc9563_pwrctrl_cfg *cfg = &ctx->cfg[port]; 236 + struct tc9563_pwrctrl_cfg *cfg = &tc9563->cfg[port]; 238 237 const struct tc9563_pwrctrl_reg_setting *seq; 239 238 int ret, len; 240 239 ··· 249 248 len = ARRAY_SIZE(dsp2_pwroff_seq); 250 249 } 251 250 252 - ret = tc9563_pwrctrl_i2c_bulk_write(ctx->client, seq, len); 251 + ret = tc9563_pwrctrl_i2c_bulk_write(tc9563->client, seq, len); 253 252 if (ret) 254 253 
return ret; 255 254 256 - return tc9563_pwrctrl_i2c_bulk_write(ctx->client, 257 - common_pwroff_seq, ARRAY_SIZE(common_pwroff_seq)); 255 + return tc9563_pwrctrl_i2c_bulk_write(tc9563->client, common_pwroff_seq, 256 + ARRAY_SIZE(common_pwroff_seq)); 258 257 } 259 258 260 - static int tc9563_pwrctrl_set_l0s_l1_entry_delay(struct tc9563_pwrctrl_ctx *ctx, 261 - enum tc9563_pwrctrl_ports port, bool is_l1, u32 ns) 259 + static int tc9563_pwrctrl_set_l0s_l1_entry_delay(struct tc9563_pwrctrl *tc9563, 260 + enum tc9563_pwrctrl_ports port, 261 + bool is_l1, u32 ns) 262 262 { 263 263 u32 rd_val, units; 264 264 int ret; ··· 271 269 units = ns / TC9563_L0S_L1_DELAY_UNIT_NS; 272 270 273 271 if (port == TC9563_ETHERNET) { 274 - ret = tc9563_pwrctrl_i2c_read(ctx->client, TC9563_EMBEDDED_ETH_DELAY, &rd_val); 272 + ret = tc9563_pwrctrl_i2c_read(tc9563->client, 273 + TC9563_EMBEDDED_ETH_DELAY, 274 + &rd_val); 275 275 if (ret) 276 276 return ret; 277 277 278 278 if (is_l1) 279 - rd_val = u32_replace_bits(rd_val, units, TC9563_ETH_L1_DELAY_MASK); 279 + rd_val = u32_replace_bits(rd_val, units, 280 + TC9563_ETH_L1_DELAY_MASK); 280 281 else 281 - rd_val = u32_replace_bits(rd_val, units, TC9563_ETH_L0S_DELAY_MASK); 282 + rd_val = u32_replace_bits(rd_val, units, 283 + TC9563_ETH_L0S_DELAY_MASK); 282 284 283 - return tc9563_pwrctrl_i2c_write(ctx->client, TC9563_EMBEDDED_ETH_DELAY, rd_val); 285 + return tc9563_pwrctrl_i2c_write(tc9563->client, 286 + TC9563_EMBEDDED_ETH_DELAY, 287 + rd_val); 284 288 } 285 289 286 - ret = tc9563_pwrctrl_i2c_write(ctx->client, TC9563_PORT_SELECT, BIT(port)); 290 + ret = tc9563_pwrctrl_i2c_write(tc9563->client, TC9563_PORT_SELECT, 291 + BIT(port)); 287 292 if (ret) 288 293 return ret; 289 294 290 - return tc9563_pwrctrl_i2c_write(ctx->client, 291 - is_l1 ? TC9563_PORT_L1_DELAY : TC9563_PORT_L0S_DELAY, units); 295 + return tc9563_pwrctrl_i2c_write(tc9563->client, 296 + is_l1 ? 
TC9563_PORT_L1_DELAY : TC9563_PORT_L0S_DELAY, 297 + units); 292 298 } 293 299 294 - static int tc9563_pwrctrl_set_tx_amplitude(struct tc9563_pwrctrl_ctx *ctx, 300 + static int tc9563_pwrctrl_set_tx_amplitude(struct tc9563_pwrctrl *tc9563, 295 301 enum tc9563_pwrctrl_ports port) 296 302 { 297 - u32 amp = ctx->cfg[port].tx_amp; 303 + u32 amp = tc9563->cfg[port].tx_amp; 298 304 int port_access; 299 305 300 306 if (amp < TC9563_TX_MARGIN_MIN_UA) ··· 331 321 {TC9563_TX_MARGIN, amp}, 332 322 }; 333 323 334 - return tc9563_pwrctrl_i2c_bulk_write(ctx->client, tx_amp_seq, ARRAY_SIZE(tx_amp_seq)); 324 + return tc9563_pwrctrl_i2c_bulk_write(tc9563->client, tx_amp_seq, 325 + ARRAY_SIZE(tx_amp_seq)); 335 326 } 336 327 337 - static int tc9563_pwrctrl_disable_dfe(struct tc9563_pwrctrl_ctx *ctx, 328 + static int tc9563_pwrctrl_disable_dfe(struct tc9563_pwrctrl *tc9563, 338 329 enum tc9563_pwrctrl_ports port) 339 330 { 340 - struct tc9563_pwrctrl_cfg *cfg = &ctx->cfg[port]; 331 + struct tc9563_pwrctrl_cfg *cfg = &tc9563->cfg[port]; 341 332 int port_access, lane_access = 0x3; 342 333 u32 phy_rate = 0x21; 343 334 ··· 375 364 {TC9563_PHY_RATE_CHANGE_OVERRIDE, 0x0}, 376 365 }; 377 366 378 - return tc9563_pwrctrl_i2c_bulk_write(ctx->client, 379 - disable_dfe_seq, ARRAY_SIZE(disable_dfe_seq)); 367 + return tc9563_pwrctrl_i2c_bulk_write(tc9563->client, disable_dfe_seq, 368 + ARRAY_SIZE(disable_dfe_seq)); 380 369 } 381 370 382 - static int tc9563_pwrctrl_set_nfts(struct tc9563_pwrctrl_ctx *ctx, 371 + static int tc9563_pwrctrl_set_nfts(struct tc9563_pwrctrl *tc9563, 383 372 enum tc9563_pwrctrl_ports port) 384 373 { 385 - u8 *nfts = ctx->cfg[port].nfts; 374 + u8 *nfts = tc9563->cfg[port].nfts; 386 375 struct tc9563_pwrctrl_reg_setting nfts_seq[] = { 387 376 {TC9563_NFTS_2_5_GT, nfts[0]}, 388 377 {TC9563_NFTS_5_GT, nfts[1]}, ··· 392 381 if (!nfts[0]) 393 382 return 0; 394 383 395 - ret = tc9563_pwrctrl_i2c_write(ctx->client, TC9563_PORT_SELECT, BIT(port)); 384 + ret = 
tc9563_pwrctrl_i2c_write(tc9563->client, TC9563_PORT_SELECT, 385 + BIT(port)); 396 386 if (ret) 397 387 return ret; 398 388 399 - return tc9563_pwrctrl_i2c_bulk_write(ctx->client, nfts_seq, ARRAY_SIZE(nfts_seq)); 389 + return tc9563_pwrctrl_i2c_bulk_write(tc9563->client, nfts_seq, 390 + ARRAY_SIZE(nfts_seq)); 400 391 } 401 392 402 - static int tc9563_pwrctrl_assert_deassert_reset(struct tc9563_pwrctrl_ctx *ctx, bool deassert) 393 + static int tc9563_pwrctrl_assert_deassert_reset(struct tc9563_pwrctrl *tc9563, 394 + bool deassert) 403 395 { 404 396 int ret, val; 405 397 406 - ret = tc9563_pwrctrl_i2c_write(ctx->client, TC9563_GPIO_CONFIG, TC9563_GPIO_MASK); 398 + ret = tc9563_pwrctrl_i2c_write(tc9563->client, TC9563_GPIO_CONFIG, 399 + TC9563_GPIO_MASK); 407 400 if (ret) 408 401 return ret; 409 402 410 403 val = deassert ? TC9563_GPIO_DEASSERT_BITS : 0; 411 404 412 - return tc9563_pwrctrl_i2c_write(ctx->client, TC9563_RESET_GPIO, val); 405 + return tc9563_pwrctrl_i2c_write(tc9563->client, TC9563_RESET_GPIO, val); 413 406 } 414 407 415 - static int tc9563_pwrctrl_parse_device_dt(struct tc9563_pwrctrl_ctx *ctx, struct device_node *node, 408 + static int tc9563_pwrctrl_parse_device_dt(struct tc9563_pwrctrl *tc9563, 409 + struct device_node *node, 416 410 enum tc9563_pwrctrl_ports port) 417 411 { 418 - struct tc9563_pwrctrl_cfg *cfg = &ctx->cfg[port]; 412 + struct tc9563_pwrctrl_cfg *cfg = &tc9563->cfg[port]; 419 413 int ret; 420 414 421 415 /* Disable port if the status of the port is disabled. 
*/ ··· 450 434 return 0; 451 435 } 452 436 453 - static void tc9563_pwrctrl_power_off(struct tc9563_pwrctrl_ctx *ctx) 437 + static int tc9563_pwrctrl_power_off(struct pci_pwrctrl *pwrctrl) 454 438 { 455 - gpiod_set_value(ctx->reset_gpio, 1); 439 + struct tc9563_pwrctrl *tc9563 = container_of(pwrctrl, 440 + struct tc9563_pwrctrl, pwrctrl); 456 441 457 - regulator_bulk_disable(ARRAY_SIZE(ctx->supplies), ctx->supplies); 442 + gpiod_set_value(tc9563->reset_gpio, 1); 443 + 444 + regulator_bulk_disable(ARRAY_SIZE(tc9563->supplies), tc9563->supplies); 445 + 446 + return 0; 458 447 } 459 448 460 - static int tc9563_pwrctrl_bring_up(struct tc9563_pwrctrl_ctx *ctx) 449 + static int tc9563_pwrctrl_power_on(struct pci_pwrctrl *pwrctrl) 461 450 { 451 + struct tc9563_pwrctrl *tc9563 = container_of(pwrctrl, 452 + struct tc9563_pwrctrl, pwrctrl); 453 + struct device *dev = tc9563->pwrctrl.dev; 462 454 struct tc9563_pwrctrl_cfg *cfg; 463 455 int ret, i; 464 456 465 - ret = regulator_bulk_enable(ARRAY_SIZE(ctx->supplies), ctx->supplies); 457 + ret = regulator_bulk_enable(ARRAY_SIZE(tc9563->supplies), 458 + tc9563->supplies); 466 459 if (ret < 0) 467 - return dev_err_probe(ctx->pwrctrl.dev, ret, "cannot enable regulators\n"); 460 + return dev_err_probe(dev, ret, "cannot enable regulators\n"); 468 461 469 - gpiod_set_value(ctx->reset_gpio, 0); 462 + gpiod_set_value(tc9563->reset_gpio, 0); 470 463 471 464 fsleep(TC9563_OSC_STAB_DELAY_US); 472 465 473 - ret = tc9563_pwrctrl_assert_deassert_reset(ctx, false); 466 + ret = tc9563_pwrctrl_assert_deassert_reset(tc9563, false); 474 467 if (ret) 475 468 goto power_off; 476 469 477 470 for (i = 0; i < TC9563_MAX; i++) { 478 - cfg = &ctx->cfg[i]; 479 - ret = tc9563_pwrctrl_disable_port(ctx, i); 471 + cfg = &tc9563->cfg[i]; 472 + ret = tc9563_pwrctrl_disable_port(tc9563, i); 480 473 if (ret) { 481 - dev_err(ctx->pwrctrl.dev, "Disabling port failed\n"); 474 + dev_err(dev, "Disabling port failed\n"); 482 475 goto power_off; 483 476 } 484 477 485 - 
ret = tc9563_pwrctrl_set_l0s_l1_entry_delay(ctx, i, false, cfg->l0s_delay); 478 + ret = tc9563_pwrctrl_set_l0s_l1_entry_delay(tc9563, i, false, cfg->l0s_delay); 486 479 if (ret) { 487 - dev_err(ctx->pwrctrl.dev, "Setting L0s entry delay failed\n"); 480 + dev_err(dev, "Setting L0s entry delay failed\n"); 488 481 goto power_off; 489 482 } 490 483 491 - ret = tc9563_pwrctrl_set_l0s_l1_entry_delay(ctx, i, true, cfg->l1_delay); 484 + ret = tc9563_pwrctrl_set_l0s_l1_entry_delay(tc9563, i, true, cfg->l1_delay); 492 485 if (ret) { 493 - dev_err(ctx->pwrctrl.dev, "Setting L1 entry delay failed\n"); 486 + dev_err(dev, "Setting L1 entry delay failed\n"); 494 487 goto power_off; 495 488 } 496 489 497 - ret = tc9563_pwrctrl_set_tx_amplitude(ctx, i); 490 + ret = tc9563_pwrctrl_set_tx_amplitude(tc9563, i); 498 491 if (ret) { 499 - dev_err(ctx->pwrctrl.dev, "Setting Tx amplitude failed\n"); 492 + dev_err(dev, "Setting Tx amplitude failed\n"); 500 493 goto power_off; 501 494 } 502 495 503 - ret = tc9563_pwrctrl_set_nfts(ctx, i); 496 + ret = tc9563_pwrctrl_set_nfts(tc9563, i); 504 497 if (ret) { 505 - dev_err(ctx->pwrctrl.dev, "Setting N_FTS failed\n"); 498 + dev_err(dev, "Setting N_FTS failed\n"); 506 499 goto power_off; 507 500 } 508 501 509 - ret = tc9563_pwrctrl_disable_dfe(ctx, i); 502 + ret = tc9563_pwrctrl_disable_dfe(tc9563, i); 510 503 if (ret) { 511 - dev_err(ctx->pwrctrl.dev, "Disabling DFE failed\n"); 504 + dev_err(dev, "Disabling DFE failed\n"); 512 505 goto power_off; 513 506 } 514 507 } 515 508 516 - ret = tc9563_pwrctrl_assert_deassert_reset(ctx, true); 509 + ret = tc9563_pwrctrl_assert_deassert_reset(tc9563, true); 517 510 if (!ret) 518 511 return 0; 519 512 520 513 power_off: 521 - tc9563_pwrctrl_power_off(ctx); 514 + tc9563_pwrctrl_power_off(&tc9563->pwrctrl); 522 515 return ret; 523 516 } 524 517 525 518 static int tc9563_pwrctrl_probe(struct platform_device *pdev) 526 519 { 527 - struct pci_host_bridge *bridge = to_pci_host_bridge(pdev->dev.parent); 528 - struct 
pci_bus *bus = bridge->bus; 520 + struct device_node *node = pdev->dev.of_node; 529 521 struct device *dev = &pdev->dev; 530 522 enum tc9563_pwrctrl_ports port; 531 - struct tc9563_pwrctrl_ctx *ctx; 523 + struct tc9563_pwrctrl *tc9563; 532 524 struct device_node *i2c_node; 533 525 int ret, addr; 534 526 535 - ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL); 536 - if (!ctx) 527 + tc9563 = devm_kzalloc(dev, sizeof(*tc9563), GFP_KERNEL); 528 + if (!tc9563) 537 529 return -ENOMEM; 538 530 539 - ret = of_property_read_u32_index(pdev->dev.of_node, "i2c-parent", 1, &addr); 531 + ret = of_property_read_u32_index(node, "i2c-parent", 1, &addr); 540 532 if (ret) 541 533 return dev_err_probe(dev, ret, "Failed to read i2c-parent property\n"); 542 534 543 535 i2c_node = of_parse_phandle(dev->of_node, "i2c-parent", 0); 544 - ctx->adapter = of_find_i2c_adapter_by_node(i2c_node); 536 + tc9563->adapter = of_find_i2c_adapter_by_node(i2c_node); 545 537 of_node_put(i2c_node); 546 - if (!ctx->adapter) 538 + if (!tc9563->adapter) 547 539 return dev_err_probe(dev, -EPROBE_DEFER, "Failed to find I2C adapter\n"); 548 540 549 - ctx->client = i2c_new_dummy_device(ctx->adapter, addr); 550 - if (IS_ERR(ctx->client)) { 541 + tc9563->client = i2c_new_dummy_device(tc9563->adapter, addr); 542 + if (IS_ERR(tc9563->client)) { 551 543 dev_err(dev, "Failed to create I2C client\n"); 552 - i2c_put_adapter(ctx->adapter); 553 - return PTR_ERR(ctx->client); 544 + put_device(&tc9563->adapter->dev); 545 + return PTR_ERR(tc9563->client); 554 546 } 555 547 556 548 for (int i = 0; i < ARRAY_SIZE(tc9563_supply_names); i++) 557 - ctx->supplies[i].supply = tc9563_supply_names[i]; 549 + tc9563->supplies[i].supply = tc9563_supply_names[i]; 558 550 559 - ret = devm_regulator_bulk_get(dev, TC9563_PWRCTL_MAX_SUPPLY, ctx->supplies); 551 + ret = devm_regulator_bulk_get(dev, TC9563_PWRCTL_MAX_SUPPLY, 552 + tc9563->supplies); 560 553 if (ret) { 561 554 dev_err_probe(dev, ret, "failed to get supply regulator\n"); 562 555 
goto remove_i2c; 563 556 } 564 557 565 - ctx->reset_gpio = devm_gpiod_get(dev, "resx", GPIOD_OUT_HIGH); 566 - if (IS_ERR(ctx->reset_gpio)) { 567 - ret = dev_err_probe(dev, PTR_ERR(ctx->reset_gpio), "failed to get resx GPIO\n"); 558 + tc9563->reset_gpio = devm_gpiod_get(dev, "resx", GPIOD_OUT_HIGH); 559 + if (IS_ERR(tc9563->reset_gpio)) { 560 + ret = dev_err_probe(dev, PTR_ERR(tc9563->reset_gpio), "failed to get resx GPIO\n"); 568 561 goto remove_i2c; 569 562 } 570 563 571 - pci_pwrctrl_init(&ctx->pwrctrl, dev); 564 + pci_pwrctrl_init(&tc9563->pwrctrl, dev); 572 565 573 566 port = TC9563_USP; 574 - ret = tc9563_pwrctrl_parse_device_dt(ctx, pdev->dev.of_node, port); 567 + ret = tc9563_pwrctrl_parse_device_dt(tc9563, node, port); 575 568 if (ret) { 576 569 dev_err(dev, "failed to parse device tree properties: %d\n", ret); 577 570 goto remove_i2c; ··· 588 563 589 564 /* 590 565 * Downstream ports are always children of the upstream port. 591 - * The first node represents DSP1, the second node represents DSP2, and so on. 566 + * The first node represents DSP1, the second node represents DSP2, 567 + * and so on. 
592 568 */ 593 - for_each_child_of_node_scoped(pdev->dev.of_node, child) { 569 + for_each_child_of_node_scoped(node, child) { 594 570 port++; 595 - ret = tc9563_pwrctrl_parse_device_dt(ctx, child, port); 571 + ret = tc9563_pwrctrl_parse_device_dt(tc9563, child, port); 596 572 if (ret) 597 573 break; 598 574 /* Embedded ethernet device are under DSP3 */ 599 575 if (port == TC9563_DSP3) { 600 576 for_each_child_of_node_scoped(child, child1) { 601 577 port++; 602 - ret = tc9563_pwrctrl_parse_device_dt(ctx, child1, port); 578 + ret = tc9563_pwrctrl_parse_device_dt(tc9563, 579 + child1, port); 603 580 if (ret) 604 581 break; 605 582 } ··· 612 585 goto remove_i2c; 613 586 } 614 587 615 - if (bridge->ops->assert_perst) { 616 - ret = bridge->ops->assert_perst(bus, true); 617 - if (ret) 618 - goto remove_i2c; 619 - } 588 + tc9563->pwrctrl.power_on = tc9563_pwrctrl_power_on; 589 + tc9563->pwrctrl.power_off = tc9563_pwrctrl_power_off; 620 590 621 - ret = tc9563_pwrctrl_bring_up(ctx); 622 - if (ret) 623 - goto remove_i2c; 624 - 625 - if (bridge->ops->assert_perst) { 626 - ret = bridge->ops->assert_perst(bus, false); 627 - if (ret) 628 - goto power_off; 629 - } 630 - 631 - ret = devm_pci_pwrctrl_device_set_ready(dev, &ctx->pwrctrl); 591 + ret = devm_pci_pwrctrl_device_set_ready(dev, &tc9563->pwrctrl); 632 592 if (ret) 633 593 goto power_off; 634 - 635 - platform_set_drvdata(pdev, ctx); 636 594 637 595 return 0; 638 596 639 597 power_off: 640 - tc9563_pwrctrl_power_off(ctx); 598 + tc9563_pwrctrl_power_off(&tc9563->pwrctrl); 641 599 remove_i2c: 642 - i2c_unregister_device(ctx->client); 643 - i2c_put_adapter(ctx->adapter); 600 + i2c_unregister_device(tc9563->client); 601 + put_device(&tc9563->adapter->dev); 644 602 return ret; 645 603 } 646 604 647 605 static void tc9563_pwrctrl_remove(struct platform_device *pdev) 648 606 { 649 - struct tc9563_pwrctrl_ctx *ctx = platform_get_drvdata(pdev); 607 + struct pci_pwrctrl *pwrctrl = dev_get_drvdata(&pdev->dev); 608 + struct 
tc9563_pwrctrl *tc9563 = container_of(pwrctrl, 609 + struct tc9563_pwrctrl, pwrctrl); 650 610 651 - tc9563_pwrctrl_power_off(ctx); 652 - i2c_unregister_device(ctx->client); 653 - i2c_put_adapter(ctx->adapter); 611 + tc9563_pwrctrl_power_off(&tc9563->pwrctrl); 612 + i2c_unregister_device(tc9563->client); 613 + put_device(&tc9563->adapter->dev); 654 614 } 655 615 656 616 static const struct of_device_id tc9563_pwrctrl_of_match[] = {
+74 -29
drivers/pci/pwrctrl/slot.c
··· 8 8 #include <linux/device.h> 9 9 #include <linux/mod_devicetable.h> 10 10 #include <linux/module.h> 11 + #include <linux/of_graph.h> 11 12 #include <linux/pci-pwrctrl.h> 12 13 #include <linux/platform_device.h> 14 + #include <linux/pwrseq/consumer.h> 13 15 #include <linux/regulator/consumer.h> 14 16 #include <linux/slab.h> 15 17 16 - struct pci_pwrctrl_slot_data { 17 - struct pci_pwrctrl ctx; 18 + struct slot_pwrctrl { 19 + struct pci_pwrctrl pwrctrl; 18 20 struct regulator_bulk_data *supplies; 19 21 int num_supplies; 22 + struct clk *clk; 23 + struct pwrseq_desc *pwrseq; 20 24 }; 21 25 22 - static void devm_pci_pwrctrl_slot_power_off(void *data) 26 + static int slot_pwrctrl_power_on(struct pci_pwrctrl *pwrctrl) 23 27 { 24 - struct pci_pwrctrl_slot_data *slot = data; 28 + struct slot_pwrctrl *slot = container_of(pwrctrl, 29 + struct slot_pwrctrl, pwrctrl); 30 + int ret; 31 + 32 + if (slot->pwrseq) { 33 + pwrseq_power_on(slot->pwrseq); 34 + return 0; 35 + } 36 + 37 + ret = regulator_bulk_enable(slot->num_supplies, slot->supplies); 38 + if (ret < 0) { 39 + dev_err(slot->pwrctrl.dev, "Failed to enable slot regulators\n"); 40 + return ret; 41 + } 42 + 43 + return clk_prepare_enable(slot->clk); 44 + } 45 + 46 + static int slot_pwrctrl_power_off(struct pci_pwrctrl *pwrctrl) 47 + { 48 + struct slot_pwrctrl *slot = container_of(pwrctrl, 49 + struct slot_pwrctrl, pwrctrl); 50 + 51 + if (slot->pwrseq) { 52 + pwrseq_power_off(slot->pwrseq); 53 + return 0; 54 + } 25 55 26 56 regulator_bulk_disable(slot->num_supplies, slot->supplies); 57 + clk_disable_unprepare(slot->clk); 58 + 59 + return 0; 60 + } 61 + 62 + static void devm_slot_pwrctrl_release(void *data) 63 + { 64 + struct slot_pwrctrl *slot = data; 65 + 66 + slot_pwrctrl_power_off(&slot->pwrctrl); 27 67 regulator_bulk_free(slot->num_supplies, slot->supplies); 28 68 } 29 69 30 - static int pci_pwrctrl_slot_probe(struct platform_device *pdev) 70 + static int slot_pwrctrl_probe(struct platform_device *pdev) 31 71 { 32 - 
struct pci_pwrctrl_slot_data *slot; 72 + struct slot_pwrctrl *slot; 33 73 struct device *dev = &pdev->dev; 34 - struct clk *clk; 35 74 int ret; 36 75 37 76 slot = devm_kzalloc(dev, sizeof(*slot), GFP_KERNEL); 38 77 if (!slot) 39 78 return -ENOMEM; 79 + 80 + if (of_graph_is_present(dev_of_node(dev))) { 81 + slot->pwrseq = devm_pwrseq_get(dev, "pcie"); 82 + if (IS_ERR(slot->pwrseq)) 83 + return dev_err_probe(dev, PTR_ERR(slot->pwrseq), 84 + "Failed to get the power sequencer\n"); 85 + 86 + goto skip_resources; 87 + } 40 88 41 89 ret = of_regulator_bulk_get_all(dev, dev_of_node(dev), 42 90 &slot->supplies); ··· 94 46 } 95 47 96 48 slot->num_supplies = ret; 97 - ret = regulator_bulk_enable(slot->num_supplies, slot->supplies); 98 - if (ret < 0) { 99 - dev_err_probe(dev, ret, "Failed to enable slot regulators\n"); 100 - regulator_bulk_free(slot->num_supplies, slot->supplies); 101 - return ret; 102 - } 103 49 104 - ret = devm_add_action_or_reset(dev, devm_pci_pwrctrl_slot_power_off, 105 - slot); 106 - if (ret) 107 - return ret; 108 - 109 - clk = devm_clk_get_optional_enabled(dev, NULL); 110 - if (IS_ERR(clk)) { 111 - return dev_err_probe(dev, PTR_ERR(clk), 50 + slot->clk = devm_clk_get_optional(dev, NULL); 51 + if (IS_ERR(slot->clk)) { 52 + return dev_err_probe(dev, PTR_ERR(slot->clk), 112 53 "Failed to enable slot clock\n"); 113 54 } 114 55 115 - pci_pwrctrl_init(&slot->ctx, dev); 56 + skip_resources: 57 + slot->pwrctrl.power_on = slot_pwrctrl_power_on; 58 + slot->pwrctrl.power_off = slot_pwrctrl_power_off; 116 59 117 - ret = devm_pci_pwrctrl_device_set_ready(dev, &slot->ctx); 60 + ret = devm_add_action_or_reset(dev, devm_slot_pwrctrl_release, slot); 61 + if (ret) 62 + return ret; 63 + 64 + pci_pwrctrl_init(&slot->pwrctrl, dev); 65 + 66 + ret = devm_pci_pwrctrl_device_set_ready(dev, &slot->pwrctrl); 118 67 if (ret) 119 68 return dev_err_probe(dev, ret, "Failed to register pwrctrl driver\n"); 120 69 121 70 return 0; 122 71 } 123 72 124 - static const struct of_device_id 
pci_pwrctrl_slot_of_match[] = { 73 + static const struct of_device_id slot_pwrctrl_of_match[] = { 125 74 { 126 75 .compatible = "pciclass,0604", 127 76 }, 128 77 { } 129 78 }; 130 - MODULE_DEVICE_TABLE(of, pci_pwrctrl_slot_of_match); 79 + MODULE_DEVICE_TABLE(of, slot_pwrctrl_of_match); 131 80 132 - static struct platform_driver pci_pwrctrl_slot_driver = { 81 + static struct platform_driver slot_pwrctrl_driver = { 133 82 .driver = { 134 83 .name = "pci-pwrctrl-slot", 135 - .of_match_table = pci_pwrctrl_slot_of_match, 84 + .of_match_table = slot_pwrctrl_of_match, 136 85 }, 137 - .probe = pci_pwrctrl_slot_probe, 86 + .probe = slot_pwrctrl_probe, 138 87 }; 139 - module_platform_driver(pci_pwrctrl_slot_driver); 88 + module_platform_driver(slot_pwrctrl_driver); 140 89 141 90 MODULE_AUTHOR("Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>"); 142 91 MODULE_DESCRIPTION("Generic PCI Power Control driver for PCI Slots");
+57 -34
drivers/pci/quirks.c
··· 1361 1361 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TOSHIBA, 0x605, quirk_transparent_bridge); 1362 1362 1363 1363 /* 1364 + * Enabling Link Bandwidth Management Interrupts (BW notifications) can cause 1365 + * boot hangs on P45. 1366 + */ 1367 + static void quirk_p45_bw_notifications(struct pci_dev *dev) 1368 + { 1369 + dev->no_bw_notif = 1; 1370 + } 1371 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e21, quirk_p45_bw_notifications); 1372 + 1373 + /* 1364 1374 * Common misconfiguration of the MediaGX/Geode PCI master that will reduce 1365 1375 * PCI bandwidth from 70MB/s to 25MB/s. See the GXM/GXLV/GX1 datasheets 1366 1376 * found at http://www.national.com/analog for info on what these bits do. ··· 3760 3750 } 3761 3751 3762 3752 /* 3753 + * After asserting Secondary Bus Reset to downstream devices via a GB10 3754 + * Root Port, the link may not retrain correctly. 3755 + * https://lore.kernel.org/r/20251113084441.2124737-1-Johnny-CC.Chang@mediatek.com 3756 + */ 3757 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NVIDIA, 0x22CE, quirk_no_bus_reset); 3758 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NVIDIA, 0x22D0, quirk_no_bus_reset); 3759 + 3760 + /* 3763 3761 * Some NVIDIA GPU devices do not work with bus reset, SBR needs to be 3764 3762 * prevented for those affected devices. 3765 3763 */ ··· 3809 3791 * https://e2e.ti.com/support/processors/f/791/t/954382 3810 3792 */ 3811 3793 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TI, 0xb005, quirk_no_bus_reset); 3794 + 3795 + /* 3796 + * Reports from users making use of PCI device assignment with ASM1164 3797 + * controllers indicate an issue with bus reset where the device fails to 3798 + * retrain. The issue appears more common in configurations with multiple 3799 + * controllers. The device does indicate PM reset support (NoSoftRst-), 3800 + * therefore this still leaves a viable reset method. 
3801 + * https://forum.proxmox.com/threads/problems-with-pcie-passthrough-with-two-identical-devices.149003/ 3802 + */ 3803 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ASMEDIA, 0x1164, quirk_no_bus_reset); 3812 3804 3813 3805 static void quirk_no_pm_reset(struct pci_dev *dev) 3814 3806 { ··· 4499 4471 quirk_bridge_cavm_thrx2_pcie_root); 4500 4472 4501 4473 /* 4474 + * AST1150 doesn't use a real PCI bus and always forwards the requester ID 4475 + * from downstream devices. 4476 + */ 4477 + static void quirk_aspeed_pci_bridge_no_alias(struct pci_dev *pdev) 4478 + { 4479 + pdev->dev_flags |= PCI_DEV_FLAGS_PCI_BRIDGE_NO_ALIAS; 4480 + } 4481 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ASPEED, 0x1150, quirk_aspeed_pci_bridge_no_alias); 4482 + 4483 + /* 4502 4484 * Intersil/Techwell TW686[4589]-based video capture cards have an empty (zero) 4503 4485 * class code. Fix it. 4504 4486 */ ··· 5162 5124 { PCI_VENDOR_ID_QCOM, 0x0401, pci_quirk_qcom_rp_acs }, 5163 5125 /* QCOM SA8775P root port */ 5164 5126 { PCI_VENDOR_ID_QCOM, 0x0115, pci_quirk_qcom_rp_acs }, 5127 + /* QCOM Hamoa root port */ 5128 + { PCI_VENDOR_ID_QCOM, 0x0111, pci_quirk_qcom_rp_acs }, 5129 + /* QCOM Glymur root port */ 5130 + { PCI_VENDOR_ID_QCOM, 0x0120, pci_quirk_qcom_rp_acs }, 5165 5131 /* HXT SD4800 root ports. 
The ACS design is same as QCOM QDF2xxx */ 5166 5132 { PCI_VENDOR_ID_HXT, 0x0401, pci_quirk_qcom_rp_acs }, 5167 5133 /* Intel PCH root ports */ ··· 5640 5598 pci_walk_bus(bridge->bus, pci_configure_extended_tags, NULL); 5641 5599 } 5642 5600 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_3WARE, 0x1004, quirk_no_ext_tags); 5601 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_3WARE, 0x1005, quirk_no_ext_tags); 5643 5602 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0132, quirk_no_ext_tags); 5644 5603 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0140, quirk_no_ext_tags); 5645 5604 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0141, quirk_no_ext_tags); ··· 5839 5796 5840 5797 /* 5841 5798 * Some IDT switches incorrectly flag an ACS Source Validation error on 5842 - * completions for config read requests even though PCIe r4.0, sec 5799 + * completions for config read requests even though PCIe r7.0, sec 5843 5800 * 6.12.1.1, says that completions are never affected by ACS Source 5844 5801 * Validation. Here's the text of IDT 89H32H8G3-YC, erratum #36: 5845 5802 * ··· 5852 5809 * 5853 5810 * The workaround suggested by IDT is to issue a config write to the 5854 5811 * downstream device before issuing the first config read. This allows the 5855 - * downstream device to capture its bus and device numbers (see PCIe r4.0, 5856 - * sec 2.2.9), thus avoiding the ACS error on the completion. 5812 + * downstream device to capture its bus and device numbers (see PCIe r7.0, 5813 + * sec 2.2.9.1), thus avoiding the ACS error on the completion. 5857 5814 * 5858 5815 * However, we don't know when the device is ready to accept the config 5859 - * write, so we do config reads until we receive a non-Config Request Retry 5860 - * Status, then do the config write. 5861 - * 5862 - * To avoid hitting the erratum when doing the config reads, we disable ACS 5863 - * SV around this process. 
5816 + * write, and the issue affects resets of the switch as well as enumeration, 5817 + * so disable use of ACS SV for these devices altogether. 5864 5818 */ 5865 - int pci_idt_bus_quirk(struct pci_bus *bus, int devfn, u32 *l, int timeout) 5819 + void pci_disable_broken_acs_cap(struct pci_dev *pdev) 5866 5820 { 5867 - int pos; 5868 - u16 ctrl = 0; 5869 - bool found; 5870 - struct pci_dev *bridge = bus->self; 5871 - 5872 - pos = bridge->acs_cap; 5873 - 5874 - /* Disable ACS SV before initial config reads */ 5875 - if (pos) { 5876 - pci_read_config_word(bridge, pos + PCI_ACS_CTRL, &ctrl); 5877 - if (ctrl & PCI_ACS_SV) 5878 - pci_write_config_word(bridge, pos + PCI_ACS_CTRL, 5879 - ctrl & ~PCI_ACS_SV); 5821 + if (pdev->vendor == PCI_VENDOR_ID_IDT && 5822 + (pdev->device == 0x80b5 || pdev->device == 0x8090)) { 5823 + pci_info(pdev, "Disabling broken ACS SV; downstream device isolation reduced\n"); 5824 + pdev->acs_capabilities &= ~PCI_ACS_SV; 5880 5825 } 5881 - 5882 - found = pci_bus_generic_read_dev_vendor_id(bus, devfn, l, timeout); 5883 - 5884 - /* Write Vendor ID (read-only) so the endpoint latches its bus/dev */ 5885 - if (found) 5886 - pci_bus_write_config_word(bus, devfn, PCI_VENDOR_ID, 0); 5887 - 5888 - /* Re-enable ACS_SV if it was previously enabled */ 5889 - if (ctrl & PCI_ACS_SV) 5890 - pci_write_config_word(bridge, pos + PCI_ACS_CTRL, ctrl); 5891 - 5892 - return found; 5893 5826 } 5894 5827 5895 5828 /* ··· 6223 6204 DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_PERICOM, 0x2303, 6224 6205 pci_fixup_pericom_acs_store_forward); 6225 6206 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_PERICOM, 0x2303, 6207 + pci_fixup_pericom_acs_store_forward); 6208 + DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_PERICOM, 0xb404, 6209 + pci_fixup_pericom_acs_store_forward); 6210 + DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_PERICOM, 0xb404, 6226 6211 pci_fixup_pericom_acs_store_forward); 6227 6212 6228 6213 static void nvidia_ion_ahci_fixup(struct pci_dev *pdev)
-20
drivers/pci/remove.c
··· 17 17 } 18 18 } 19 19 20 - static void pci_pwrctrl_unregister(struct device *dev) 21 - { 22 - struct device_node *np; 23 - struct platform_device *pdev; 24 - 25 - np = dev_of_node(dev); 26 - if (!np) 27 - return; 28 - 29 - pdev = of_find_device_by_node(np); 30 - if (!pdev) 31 - return; 32 - 33 - of_device_unregister(pdev); 34 - put_device(&pdev->dev); 35 - 36 - of_node_clear_flag(np, OF_POPULATED); 37 - } 38 - 39 20 static void pci_stop_dev(struct pci_dev *dev) 40 21 { 41 22 pci_pme_active(dev, false); ··· 54 73 pci_ide_destroy(dev); 55 74 pcie_aspm_exit_link_state(dev); 56 75 pci_bridge_d3_update(dev); 57 - pci_pwrctrl_unregister(&dev->dev); 58 76 pci_free_resources(dev); 59 77 put_device(&dev->dev); 60 78 }
+2
drivers/pci/search.c
··· 86 86 case PCI_EXP_TYPE_DOWNSTREAM: 87 87 continue; 88 88 case PCI_EXP_TYPE_PCI_BRIDGE: 89 + if (tmp->dev_flags & PCI_DEV_FLAGS_PCI_BRIDGE_NO_ALIAS) 90 + continue; 89 91 ret = fn(tmp, 90 92 PCI_DEVID(tmp->subordinate->number, 91 93 PCI_DEVFN(0, 0)), data);
+228 -410
drivers/pci/setup-bus.c
···
14 14   * tighter packing. Prefetchable range support.
15 15   */
16 16
17 +    #include <linux/align.h>
17 18   #include <linux/bitops.h>
18 19   #include <linux/bug.h>
19 20   #include <linux/init.h>
20 21   #include <linux/kernel.h>
22 +    #include <linux/minmax.h>
21 23   #include <linux/module.h>
22 24   #include <linux/pci.h>
23 25   #include <linux/errno.h>
···
49 47       unsigned long flags;
50 48   };
51 49
52 -    static void free_list(struct list_head *head)
50 +    static void pci_dev_res_free_list(struct list_head *head)
53 51   {
54 52       struct pci_dev_resource *dev_res, *tmp;
55 53
···
60 58   }
61 59
62 60   /**
63 -     * add_to_list() - Add a new resource tracker to the list
61 +     * pci_dev_res_add_to_list() - Add a new resource tracker to the list
64 62    * @head: Head of the list
65 63    * @dev: Device to which the resource belongs
66 64    * @res: Resource to be tracked
67 65    * @add_size: Additional size to be optionally added to the resource
68 66    * @min_align: Minimum memory window alignment
69 67    */
70 -    static int add_to_list(struct list_head *head, struct pci_dev *dev,
71 -                           struct resource *res, resource_size_t add_size,
72 -                           resource_size_t min_align)
68 +    int pci_dev_res_add_to_list(struct list_head *head, struct pci_dev *dev,
69 +                                struct resource *res, resource_size_t add_size,
70 +                                resource_size_t min_align)
73 71   {
74 72       struct pci_dev_resource *tmp;
75 73
···
90 88       return 0;
91 89   }
92 90
93 -    static void remove_from_list(struct list_head *head, struct resource *res)
91 +    static void pci_dev_res_remove_from_list(struct list_head *head,
92 +                                             struct resource *res)
94 93   {
95 94       struct pci_dev_resource *dev_res, *tmp;
96 95
···
126 123      return dev_res ? dev_res->add_size : 0;
127 124  }
128 125
129 -    static resource_size_t get_res_add_align(struct list_head *head,
130 -                                             struct resource *res)
131 -    {
132 -        struct pci_dev_resource *dev_res;
133 -
134 -        dev_res = res_to_dev_res(head, res);
135 -        return dev_res ? dev_res->min_align : 0;
136 -    }
137 -
138 -    static void restore_dev_resource(struct pci_dev_resource *dev_res)
126 +    static void pci_dev_res_restore(struct pci_dev_resource *dev_res)
139 127  {
140 128      struct resource *res = dev_res->res;
129 +        struct pci_dev *dev = dev_res->dev;
130 +        int idx = pci_resource_num(dev, res);
131 +        const char *res_name = pci_resource_name(dev, idx);
141 132
142 -        if (WARN_ON_ONCE(res->parent))
133 +        if (WARN_ON_ONCE(resource_assigned(res)))
143 134          return;
144 135
145 136      res->start = dev_res->start;
146 137      res->end = dev_res->end;
147 138      res->flags = dev_res->flags;
139 +
140 +        pci_dbg(dev, "%s %pR: resource restored\n", res_name, res);
148 141  }
149 142
150 143  /*
···
167 168          if ((r->flags & type_mask) != type)
168 169              continue;
169 170
170 -            if (!r->parent)
171 +            if (!resource_assigned(r))
171 172              return r;
172 173          if (!r_assigned)
173 174              r_assigned = r;
···
270 271  struct resource *pbus_select_window(struct pci_bus *bus,
271 272                                      const struct resource *res)
272 273  {
273 -        if (res->parent)
274 +        if (resource_assigned(res))
274 275          return res->parent;
275 276
276 277      return pbus_select_window_for_type(bus, res->flags);
···
301 302      if (!res->flags)
302 303          return false;
303 304
304 -        if (idx >= PCI_BRIDGE_RESOURCES && idx <= PCI_BRIDGE_RESOURCE_END &&
305 -            res->flags & IORESOURCE_DISABLED)
305 +        if (pci_resource_is_bridge_win(idx) && res->flags & IORESOURCE_DISABLED)
306 306          return false;
307 307
308 308      return true;
···
309 311
310 312  static bool pdev_resource_should_fit(struct pci_dev *dev, struct resource *res)
311 313  {
312 -        if (res->parent)
314 +        if (resource_assigned(res))
313 315          return false;
314 316
315 317      if (res->flags & IORESOURCE_PCI_FIXED)
···
378 380          return true;
379 381      if (resno == PCI_ROM_RESOURCE && !(res->flags & IORESOURCE_ROM_ENABLE))
380 382          return true;
383 +        if (pci_resource_is_bridge_win(resno) && !resource_size(res))
384 +            return true;
381 385
382 386      return false;
383 387  }
384 388
385 -    static inline void reset_resource(struct pci_dev *dev, struct resource *res)
389 +    static void reset_resource(struct pci_dev *dev, struct resource *res)
386 390  {
387 391      int idx = pci_resource_num(dev, res);
392 +        const char *res_name = pci_resource_name(dev, idx);
388 393
389 -        if (idx >= PCI_BRIDGE_RESOURCES && idx <= PCI_BRIDGE_RESOURCE_END) {
394 +        if (pci_resource_is_bridge_win(idx)) {
390 395          res->flags |= IORESOURCE_UNSET;
391 396          return;
392 397      }
398 +
399 +        pci_dbg(dev, "%s %pR: resetting resource\n", res_name, res);
393 400
394 401      res->start = 0;
395 402      res->end = 0;
···
416 413                                      struct list_head *head)
417 414  {
418 415      struct pci_dev_resource *add_res, *tmp;
419 -        struct pci_dev_resource *dev_res;
420 416      struct pci_dev *dev;
421 417      struct resource *res;
422 418      const char *res_name;
···
423 421      int idx;
424 422
425 423      list_for_each_entry_safe(add_res, tmp, realloc_head, list) {
426 -            bool found_match = false;
427 -
428 424          res = add_res->res;
429 425          dev = add_res->dev;
430 426          idx = pci_resource_num(dev, res);
···
431 431           * Skip resource that failed the earlier assignment and is
432 432           * not optional as it would just fail again.
433 433           */
434 -            if (!res->parent && resource_size(res) &&
434 +            if (!resource_assigned(res) && resource_size(res) &&
435 435              !pci_resource_is_optional(dev, idx))
436 436              goto out;
437 437
438 438          /* Skip this resource if not found in head list */
439 -            list_for_each_entry(dev_res, head, list) {
440 -                if (dev_res->res == res) {
441 -                    found_match = true;
442 -                    break;
443 -                }
444 -            }
445 -            if (!found_match) /* Just skip */
439 +            if (!res_to_dev_res(head, res))
446 440              continue;
447 441
448 442          res_name = pci_resource_name(dev, idx);
449 443          add_size = add_res->add_size;
450 444          align = add_res->min_align;
451 -            if (!res->parent) {
445 +            if (!resource_assigned(res)) {
452 446              resource_set_range(res, align,
453 447                                 resource_size(res) + add_size);
454 448              if (pci_assign_resource(dev, idx)) {
···
450 456                       "%s %pR: ignoring failure in optional allocation\n",
451 457                       res_name, res);
452 458              }
453 -            } else if (add_size > 0) {
459 +            } else if (add_size > 0 || !IS_ALIGNED(res->start, align)) {
454 460              res->flags |= add_res->flags &
455 461                  (IORESOURCE_STARTALIGN|IORESOURCE_SIZEALIGN);
456 462              if (pci_reassign_resource(dev, idx, add_size, align))
···
499 505
500 506      if (pci_assign_resource(dev, idx)) {
501 507          if (fail_head) {
502 -                add_to_list(fail_head, dev, res,
503 -                            0 /* don't care */,
504 -                            0 /* don't care */);
508 +                pci_dev_res_add_to_list(fail_head, dev, res,
509 +                                        0 /* don't care */,
510 +                                        0 /* don't care */);
505 511          }
506 512      }
507 513  }
···
598 604      LIST_HEAD(local_fail_head);
599 605      LIST_HEAD(dummy_head);
600 606      struct pci_dev_resource *save_res;
601 -        struct pci_dev_resource *dev_res, *tmp_res, *dev_res2;
607 +        struct pci_dev_resource *dev_res, *tmp_res, *dev_res2, *addsize_res;
602 608      struct resource *res;
603 609      struct pci_dev *dev;
604 610      unsigned long fail_type;
605 -        resource_size_t add_align, align;
611 +        resource_size_t align;
606 612
607 613      if (!realloc_head)
608 614          realloc_head = &dummy_head;
···
613 619
614 620      /* Save original start, end, flags etc at first */
615 621      list_for_each_entry(dev_res, head, list) {
616 -            if (add_to_list(&save_head, dev_res->dev, dev_res->res, 0, 0)) {
617 -                free_list(&save_head);
622 +            if (pci_dev_res_add_to_list(&save_head, dev_res->dev,
623 +                                        dev_res->res, 0, 0)) {
624 +                pci_dev_res_free_list(&save_head);
618 625              goto assign;
619 626          }
620 627      }
···
624 629      list_for_each_entry_safe(dev_res, tmp_res, head, list) {
625 630          res = dev_res->res;
626 631
627 -            res->end += get_res_add_size(realloc_head, res);
632 +            addsize_res = res_to_dev_res(realloc_head, res);
633 +            if (!addsize_res)
634 +                continue;
628 635
636 +            res->end += addsize_res->add_size;
629 637          /*
630 638           * There are two kinds of additional resources in the list:
631 639           * 1. bridge resource -- IORESOURCE_STARTALIGN
···
638 640          if (!(res->flags & IORESOURCE_STARTALIGN))
639 641              continue;
640 642
641 -            add_align = get_res_add_align(realloc_head, res);
642 -
643 +            if (addsize_res->min_align <= res->start)
644 +                continue;
643 645          /*
644 646           * The "head" list is sorted by alignment so resources with
645 647           * bigger alignment will be assigned first. After we
···
647 649           * need to reorder the list by alignment to make it
648 650           * consistent.
649 651           */
650 -            if (add_align > res->start) {
651 -                resource_set_range(res, add_align, resource_size(res));
652 +            resource_set_range(res, addsize_res->min_align,
653 +                               resource_size(res));
652 654
653 -                list_for_each_entry(dev_res2, head, list) {
654 -                    align = pci_resource_alignment(dev_res2->dev,
655 -                                                   dev_res2->res);
656 -                    if (add_align > align) {
657 -                        list_move_tail(&dev_res->list,
658 -                                       &dev_res2->list);
659 -                        break;
660 -                    }
655 +            list_for_each_entry(dev_res2, head, list) {
656 +                align = pci_resource_alignment(dev_res2->dev,
657 +                                               dev_res2->res);
658 +                if (addsize_res->min_align > align) {
659 +                    list_move_tail(&dev_res->list, &dev_res2->list);
660 +                    break;
661 661              }
662 662          }
663 663
···
668 672      if (list_empty(&local_fail_head)) {
669 673          /* Remove head list from realloc_head list */
670 674          list_for_each_entry(dev_res, head, list)
671 -                remove_from_list(realloc_head, dev_res->res);
672 -            free_list(&save_head);
675 +                pci_dev_res_remove_from_list(realloc_head,
676 +                                             dev_res->res);
677 +            pci_dev_res_free_list(&save_head);
673 678          goto out;
674 679      }
675 680
···
680 683      list_for_each_entry(save_res, &save_head, list) {
681 684          struct resource *res = save_res->res;
682 685
683 -            if (res->parent)
686 +            if (resource_assigned(res))
684 687              continue;
685 688
686 -            restore_dev_resource(save_res);
689 +            pci_dev_res_restore(save_res);
687 690      }
688 -        free_list(&local_fail_head);
689 -        free_list(&save_head);
691 +        pci_dev_res_free_list(&local_fail_head);
692 +        pci_dev_res_free_list(&save_head);
690 693      goto out;
691 694  }
692 695
···
696 699      list_for_each_entry_safe(dev_res, tmp_res, head, list) {
697 700          res = dev_res->res;
698 701
699 -            if (res->parent && !pci_need_to_release(fail_type, res)) {
702 +            if (resource_assigned(res) &&
703 +                !pci_need_to_release(fail_type, res)) {
700 704              /* Remove it from realloc_head list */
701 -                remove_from_list(realloc_head, res);
702 -                remove_from_list(&save_head, res);
705 +                pci_dev_res_remove_from_list(realloc_head, res);
706 +                pci_dev_res_remove_from_list(&save_head, res);
703 707              list_del(&dev_res->list);
704 708              kfree(dev_res);
705 709          }
706 710      }
707 711
708 -        free_list(&local_fail_head);
712 +        pci_dev_res_free_list(&local_fail_head);
709 713      /* Release assigned resource */
710 714      list_for_each_entry(dev_res, head, list) {
711 715          res = dev_res->res;
712 716          dev = dev_res->dev;
713 717
714 718          pci_release_resource(dev, pci_resource_num(dev, res));
715 -            restore_dev_resource(dev_res);
719 +            pci_dev_res_restore(dev_res);
716 720      }
717 721      /* Restore start/end/flags from saved list */
718 722      list_for_each_entry(save_res, &save_head, list)
719 -            restore_dev_resource(save_res);
720 -        free_list(&save_head);
723 +            pci_dev_res_restore(save_res);
724 +        pci_dev_res_free_list(&save_head);
721 725
722 726      /* Satisfy the must-have resource requests */
723 727      assign_requested_resources_sorted(head, NULL, false);
···
733 735          res = dev_res->res;
734 736          dev = dev_res->dev;
735 737
736 -            if (res->parent)
738 +            if (resource_assigned(res))
737 739              continue;
738 740
739 741          if (fail_head) {
740 -                add_to_list(fail_head, dev, res,
741 -                            0 /* don't care */,
742 -                            0 /* don't care */);
742 +                pci_dev_res_add_to_list(fail_head, dev, res,
743 +                                        0 /* don't care */,
744 +                                        0 /* don't care */);
743 745          }
744 746
745 747          reset_resource(dev, res);
746 748      }
747 749
748 -        free_list(head);
750 +        pci_dev_res_free_list(head);
749 751  }
750 752
751 753  static void pdev_assign_resources_sorted(struct pci_dev *dev,
···
771 773
772 774      __assign_resources_sorted(&head, realloc_head, fail_head);
773 775  }
774 -
775 -    void pci_setup_cardbus(struct pci_bus *bus)
776 -    {
777 -        struct pci_dev *bridge = bus->self;
778 -        struct resource *res;
779 -        struct pci_bus_region region;
780 -
781 -        pci_info(bridge, "CardBus bridge to %pR\n",
782 -                 &bus->busn_res);
783 -
784 -        res = bus->resource[0];
785 -        pcibios_resource_to_bus(bridge->bus, &region, res);
786 -        if (res->parent && res->flags & IORESOURCE_IO) {
787 -            /*
788 -             * The IO resource is allocated a range twice as large as it
789 -             * would normally need. This allows us to set both IO regs.
790 -             */
791 -            pci_info(bridge, " bridge window %pR\n", res);
792 -            pci_write_config_dword(bridge, PCI_CB_IO_BASE_0,
793 -                                   region.start);
794 -            pci_write_config_dword(bridge, PCI_CB_IO_LIMIT_0,
795 -                                   region.end);
796 -        }
797 -
798 -        res = bus->resource[1];
799 -        pcibios_resource_to_bus(bridge->bus, &region, res);
800 -        if (res->parent && res->flags & IORESOURCE_IO) {
801 -            pci_info(bridge, " bridge window %pR\n", res);
802 -            pci_write_config_dword(bridge, PCI_CB_IO_BASE_1,
803 -                                   region.start);
804 -            pci_write_config_dword(bridge, PCI_CB_IO_LIMIT_1,
805 -                                   region.end);
806 -        }
807 -
808 -        res = bus->resource[2];
809 -        pcibios_resource_to_bus(bridge->bus, &region, res);
810 -        if (res->parent && res->flags & IORESOURCE_MEM) {
811 -            pci_info(bridge, " bridge window %pR\n", res);
812 -            pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_0,
813 -                                   region.start);
814 -            pci_write_config_dword(bridge, PCI_CB_MEMORY_LIMIT_0,
815 -                                   region.end);
816 -        }
817 -
818 -        res = bus->resource[3];
819 -        pcibios_resource_to_bus(bridge->bus, &region, res);
820 -        if (res->parent && res->flags & IORESOURCE_MEM) {
821 -            pci_info(bridge, " bridge window %pR\n", res);
822 -            pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_1,
823 -                                   region.start);
824 -            pci_write_config_dword(bridge, PCI_CB_MEMORY_LIMIT_1,
825 -                                   region.end);
826 -        }
827 -    }
828 -    EXPORT_SYMBOL(pci_setup_cardbus);
829 776
830 777  /*
831 778   * Initialize bridges with base/limit values we have collected. PCI-to-PCI
···
803 860      res = &bridge->resource[PCI_BRIDGE_IO_WINDOW];
804 861      res_name = pci_resource_name(bridge, PCI_BRIDGE_IO_WINDOW);
805 862      pcibios_resource_to_bus(bridge->bus, &region, res);
806 -        if (res->parent && res->flags & IORESOURCE_IO) {
863 +        if (resource_assigned(res) && res->flags & IORESOURCE_IO) {
807 864          pci_read_config_word(bridge, PCI_IO_BASE, &l);
808 865          io_base_lo = (region.start >> 8) & io_mask;
809 866          io_limit_lo = (region.end >> 8) & io_mask;
···
835 892      res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW];
836 893      res_name = pci_resource_name(bridge, PCI_BRIDGE_MEM_WINDOW);
837 894      pcibios_resource_to_bus(bridge->bus, &region, res);
838 -        if (res->parent && res->flags & IORESOURCE_MEM) {
895 +        if (resource_assigned(res) && res->flags & IORESOURCE_MEM) {
839 896          l = (region.start >> 16) & 0xfff0;
840 897          l |= region.end & 0xfff00000;
841 898          pci_info(bridge, " %s %pR\n", res_name, res);
···
864 921      res = &bridge->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
865 922      res_name = pci_resource_name(bridge, PCI_BRIDGE_PREF_MEM_WINDOW);
866 923      pcibios_resource_to_bus(bridge->bus, &region, res);
867 -        if (res->parent && res->flags & IORESOURCE_PREFETCH) {
924 +        if (resource_assigned(res) && res->flags & IORESOURCE_PREFETCH) {
868 925          l = (region.start >> 16) & 0xfff0;
869 926          l |= region.end & 0xfff00000;
870 927          if (res->flags & IORESOURCE_MEM_64) {
···
935 992  {
936 993      int ret = -EINVAL;
937 994
938 -        if (i < PCI_BRIDGE_RESOURCES || i > PCI_BRIDGE_RESOURCE_END)
995 +        if (!pci_resource_is_bridge_win(i))
939 996          return 0;
940 997
941 998      if (pci_claim_resource(bridge, i) == 0)
···
1011 1068
1012 1069  static resource_size_t calculate_memsize(resource_size_t size,
1013 1070                                           resource_size_t min_size,
1014 -                                           resource_size_t add_size,
1015 1071                                           resource_size_t children_add_size,
1016 -                                           resource_size_t old_size,
1017 1072                                           resource_size_t align)
1018 1073  {
1019 -        if (size < min_size)
1020 -            size = min_size;
1021 -        if (old_size == 1)
1022 -            old_size = 0;
1023 -
1024 -        size = max(size, add_size) + children_add_size;
1025 -        return ALIGN(max(size, old_size), align);
1074 +        size = max(size, min_size) + children_add_size;
1075 +        return ALIGN(size, align);
1026 1076  }
1027 1077
1028 1078  resource_size_t __weak pcibios_window_alignment(struct pci_bus *bus,
···
1053 1117   * pbus_size_io() - Size the I/O window of a given bus
1054 1118   *
1055 1119   * @bus: The bus
1056 -     * @min_size: The minimum I/O window that must be allocated
1057 -     * @add_size: Additional optional I/O window
1120 +     * @add_size: Additional I/O window
1058 1121   * @realloc_head: Track the additional I/O window on this list
1059 1122   *
1060 1123   * Sizing the I/O windows of the PCI-PCI bridge is trivial, since these
···
1061 1126   * devices are limited to 256 bytes. We must be careful with the ISA
1062 1127   * aliasing though.
1063 1128   */
1064 -    static void pbus_size_io(struct pci_bus *bus, resource_size_t min_size,
1065 -                             resource_size_t add_size,
1129 +    static void pbus_size_io(struct pci_bus *bus, resource_size_t add_size,
1066 1130                           struct list_head *realloc_head)
1067 1131  {
1068 1132      struct pci_dev *dev;
···
1074 1140          return;
1075 1141
1076 1142      /* If resource is already assigned, nothing more to do */
1077 -        if (b_res->parent)
1143 +        if (resource_assigned(b_res))
1078 1144          return;
1079 1145
1080 1146      min_align = window_alignment(bus, IORESOURCE_IO);
···
1084 1150      pci_dev_for_each_resource(dev, r) {
1085 1151          unsigned long r_size;
1086 1152
1087 -            if (r->parent || !(r->flags & IORESOURCE_IO))
1153 +            if (resource_assigned(r) || !(r->flags & IORESOURCE_IO))
1088 1154              continue;
1089 1155
1090 1156          if (!pdev_resource_assignable(dev, r))
···
1106 1172          }
1107 1173      }
1108 1174
1109 -        size0 = calculate_iosize(size, min_size, size1, 0, 0,
1175 +        size0 = calculate_iosize(size, realloc_head ? 0 : add_size, size1, 0, 0,
1110 1176                               resource_size(b_res), min_align);
1111 1177
1112 1178      if (size0)
···
1114 1180
1115 1181      size1 = size0;
1116 1182      if (realloc_head && (add_size > 0 || children_add_size > 0)) {
1117 -            size1 = calculate_iosize(size, min_size, size1, add_size,
1183 +            size1 = calculate_iosize(size, 0, size1, add_size,
1118 1184                                   children_add_size, resource_size(b_res),
1119 1185                                   min_align);
1120 1186      }
···
1131 1197      b_res->flags |= IORESOURCE_STARTALIGN;
1132 1198      if (bus->self && size1 > size0 && realloc_head) {
1133 1199          b_res->flags &= ~IORESOURCE_DISABLED;
1134 -            add_to_list(realloc_head, bus->self, b_res, size1-size0,
1135 -                        min_align);
1200 +            pci_dev_res_add_to_list(realloc_head, bus->self, b_res,
1201 +                                    size1 - size0, min_align);
1136 1202          pci_info(bus->self, "bridge window %pR to %pR add_size %llx\n",
1137 1203               b_res, &bus->busn_res,
1138 1204               (unsigned long long) size1 - size0);
···
1161 1227      return min_align;
1162 1228  }
1163 1229
1164 -    /**
1165 -     * pbus_upstream_space_available - Check no upstream resource limits allocation
1166 -     * @bus: The bus
1167 -     * @res: The resource to help select the correct bridge window
1168 -     * @size: The size required from the bridge window
1169 -     * @align: Required alignment for the resource
1170 -     *
1171 -     * Check that @size can fit inside the upstream bridge resources that are
1172 -     * already assigned. Select the upstream bridge window based on the type of
1173 -     * @res.
1174 -     *
1175 -     * Return: %true if enough space is available on all assigned upstream
1230 +    /*
1231 +     * Calculate bridge window head alignment that leaves no gaps in between
1176 1232   * resources.
1177 1233   */
1178 -    static bool pbus_upstream_space_available(struct pci_bus *bus,
1179 -                                              struct resource *res,
1180 -                                              resource_size_t size,
1181 -                                              resource_size_t align)
1234 +    static resource_size_t calculate_head_align(resource_size_t *aligns,
1235 +                                                int max_order)
1182 1236  {
1183 -        struct resource_constraint constraint = {
1184 -            .max = RESOURCE_SIZE_MAX,
1185 -            .align = align,
1186 -        };
1187 -        struct pci_bus *downstream = bus;
1237 +        resource_size_t head_align = 1;
1238 +        resource_size_t remainder = 0;
1239 +        int order;
1188 1240
1189 -        while ((bus = bus->parent)) {
1190 -            if (pci_is_root_bus(bus))
1191 -                break;
1241 +        /* Take the largest alignment as the starting point. */
1242 +        head_align <<= max_order + __ffs(SZ_1M);
1192 1243
1193 -            res = pbus_select_window(bus, res);
1194 -            if (!res)
1195 -                return false;
1196 -            if (!res->parent)
1197 -                continue;
1244 +        for (order = max_order - 1; order >= 0; order--) {
1245 +            resource_size_t align1 = 1;
1198 1246
1199 -            if (resource_size(res) >= size) {
1200 -                struct resource gap = {};
1247 +            align1 <<= order + __ffs(SZ_1M);
1201 1248
1202 -                if (find_resource_space(res, &gap, size, &constraint) == 0) {
1203 -                    gap.flags = res->flags;
1204 -                    pci_dbg(bus->self,
1205 -                            "Assigned bridge window %pR to %pR free space at %pR\n",
1206 -                            res, &bus->busn_res, &gap);
1207 -                    return true;
1208 -                }
1249 +            /*
1250 +             * Account smaller resources with alignment < max_order that
1251 +             * could be used to fill head room if alignment less than
1252 +             * max_order is used.
1253 +             */
1254 +            remainder += aligns[order];
1255 +
1256 +            /*
1257 +             * Test if head fill is enough to satisfy the alignment of
1258 +             * the larger resources after reducing the alignment.
1259 +             */
1260 +            while ((head_align > align1) && (remainder >= head_align / 2)) {
1261 +                head_align /= 2;
1262 +                remainder -= head_align;
1209 1263          }
1264 +        }
1210 1265
1211 -            if (bus->self) {
1212 -                pci_info(bus->self,
1213 -                         "Assigned bridge window %pR to %pR cannot fit 0x%llx required for %s bridging to %pR\n",
1214 -                         res, &bus->busn_res,
1215 -                         (unsigned long long)size,
1216 -                         pci_name(downstream->self),
1217 -                         &downstream->busn_res);
1266 +        return head_align;
1267 +    }
1268 +
1269 +    /*
1270 +     * pbus_size_mem_optional - Account optional resources in bridge window
1271 +     *
1272 +     * Account an optional resource or the optional part of the resource in bridge
1273 +     * window size.
1274 +     *
1275 +     * Return: %true if the resource is entirely optional.
1276 +     */
1277 +    static bool pbus_size_mem_optional(struct pci_dev *dev, int resno,
1278 +                                       resource_size_t align,
1279 +                                       struct list_head *realloc_head,
1280 +                                       resource_size_t *add_align,
1281 +                                       resource_size_t *children_add_size)
1282 +    {
1283 +        struct resource *res = pci_resource_n(dev, resno);
1284 +        bool optional = pci_resource_is_optional(dev, resno);
1285 +        resource_size_t r_size = resource_size(res);
1286 +        struct pci_dev_resource *dev_res;
1287 +
1288 +        if (!realloc_head)
1289 +            return false;
1290 +
1291 +        if (!optional) {
1292 +            /*
1293 +             * Only bridges have optional sizes in realloc_head at this
1294 +             * point. As res_to_dev_res() walks the entire realloc_head
1295 +             * list, skip calling it when known unnecessary.
1296 +             */
1297 +            if (!pci_resource_is_bridge_win(resno))
1298 +                return false;
1299 +
1300 +            dev_res = res_to_dev_res(realloc_head, res);
1301 +            if (dev_res) {
1302 +                *children_add_size += dev_res->add_size;
1303 +                *add_align = max(*add_align, dev_res->min_align);
1218 1304          }
1219 1305
1220 1306          return false;
1221 1307      }
1308 +
1309 +        /* Put SRIOV requested res to the optional list */
1310 +        pci_dev_res_add_to_list(realloc_head, dev, res, 0, align);
1311 +        *children_add_size += r_size;
1312 +        *add_align = max(align, *add_align);
1222 1313
1223 1314      return true;
1224 1315  }
···
1252 1293   * pbus_size_mem() - Size the memory window of a given bus
1253 1294   *
1254 1295   * @bus: The bus
1255 -     * @type: The type of bridge resource
1256 -     * @min_size: The minimum memory window that must be allocated
1257 -     * @add_size: Additional optional memory window
1296 +     * @b_res: The bridge window resource
1297 +     * @add_size: Additional memory window
1258 1298   * @realloc_head: Track the additional memory window on this list
1259 1299   *
1260 -     * Calculate the size of the bus resource for @type and minimal alignment
1300 +     * Calculate the size of the bridge window @b_res and minimal alignment
1261 1301   * which guarantees that all child resources fit in this size.
1262 1302   *
1263 1303   * Set the bus resource start/end to indicate the required size if there an
···
1265 1307   * Add optional resource requests to the @realloc_head list if it is
1266 1308   * supplied.
1267 1309   */
1268 -    static void pbus_size_mem(struct pci_bus *bus, unsigned long type,
1269 -                              resource_size_t min_size,
1270 -                              resource_size_t add_size,
1271 -                              struct list_head *realloc_head)
1310 +    static void pbus_size_mem(struct pci_bus *bus, struct resource *b_res,
1311 +                              resource_size_t add_size,
1312 +                              struct list_head *realloc_head)
1272 1313  {
1273 1314      struct pci_dev *dev;
1274 1315      resource_size_t min_align, win_align, align, size, size0, size1 = 0;
1275 -        resource_size_t aligns[28]; /* Alignments from 1MB to 128TB */
1316 +        resource_size_t aligns[28] = {}; /* Alignments from 1MB to 128TB */
1276 1317      int order, max_order;
1277 -        struct resource *b_res = pbus_select_window_for_type(bus, type);
1278 1318      resource_size_t children_add_size = 0;
1279 -        resource_size_t children_add_align = 0;
1280 1319      resource_size_t add_align = 0;
1281 -        resource_size_t relaxed_align;
1282 -        resource_size_t old_size;
1283 1320
1284 1321      if (!b_res)
1285 1322          return;
1286 1323
1287 1324      /* If resource is already assigned, nothing more to do */
1288 -        if (b_res->parent)
1325 +        if (resource_assigned(b_res))
1289 1326          return;
1290 1327
1291 -        memset(aligns, 0, sizeof(aligns));
1292 1328      max_order = 0;
1293 1329      size = 0;
1294 1330
···
1300 1348          if (b_res != pbus_select_window(bus, r))
1301 1349              continue;
1302 1350
1303 -            r_size = resource_size(r);
1304 -
1305 -            /* Put SRIOV requested res to the optional list */
1306 -            if (realloc_head && pci_resource_is_optional(dev, i)) {
1307 -                add_align = max(pci_resource_alignment(dev, r), add_align);
1308 -                add_to_list(realloc_head, dev, r, 0, 0 /* Don't care */);
1309 -                children_add_size += r_size;
1310 -                continue;
1311 -            }
1312 -
1351 +            align = pci_resource_alignment(dev, r);
1313 1352          /*
1314 1353           * aligns[0] is for 1MB (since bridge memory
1315 1354           * windows are always at least 1MB aligned), so
1316 1355           * keep "order" from being negative for smaller
1317 1356           * resources.
1318 1357           */
1319 -            align = pci_resource_alignment(dev, r);
1320 -            order = __ffs(align) - __ffs(SZ_1M);
1321 -            if (order < 0)
1322 -                order = 0;
1358 +            order = max_t(int, __ffs(align) - __ffs(SZ_1M), 0);
1323 1359          if (order >= ARRAY_SIZE(aligns)) {
1324 1360              pci_warn(dev, "%s %pR: disabling; bad alignment %#llx\n",
1325 1361                       r_name, r, (unsigned long long) align);
1326 1362              r->flags = 0;
1327 1363              continue;
1328 1364          }
1365 +
1366 +            if (pbus_size_mem_optional(dev, i, align,
1367 +                                       realloc_head, &add_align,
1368 +                                       &children_add_size))
1369 +                continue;
1370 +
1371 +            r_size = resource_size(r);
1329 1372          size += max(r_size, align);
1330 -            /*
1331 -             * Exclude ranges with size > align from calculation of
1332 -             * the alignment.
1333 -             */
1334 -            if (r_size <= align)
1335 -                aligns[order] += align;
1373 +
1374 +            aligns[order] += align;
1336 1375          if (order > max_order)
1337 1376              max_order = order;
1338 -
1339 -            if (realloc_head) {
1340 -                children_add_size += get_res_add_size(realloc_head, r);
1341 -                children_add_align = get_res_add_align(realloc_head, r);
1342 -                add_align = max(add_align, children_add_align);
1343 -            }
1344 1377          }
1345 1378      }
1346 1379
1347 -        old_size = resource_size(b_res);
1348 1380      win_align = window_alignment(bus, b_res->flags);
1349 -        min_align = calculate_mem_align(aligns, max_order);
1381 +        min_align = calculate_head_align(aligns, max_order);
1350 1382      min_align = max(min_align, win_align);
1351 -        size0 = calculate_memsize(size, min_size, 0, 0, old_size, min_align);
1383 +        size0 = calculate_memsize(size, realloc_head ? 0 : add_size,
1384 +                                  0, win_align);
1352 1385
1353 1386      if (size0) {
1354 1387          resource_set_range(b_res, min_align, size0);
1355 1388          b_res->flags &= ~IORESOURCE_DISABLED;
1356 1389      }
1357 1390
1358 -        if (bus->self && size0 &&
1359 -            !pbus_upstream_space_available(bus, b_res, size0, min_align)) {
1360 -            relaxed_align = 1ULL << (max_order + __ffs(SZ_1M));
1361 -            relaxed_align = max(relaxed_align, win_align);
1362 -            min_align = min(min_align, relaxed_align);
1363 -            size0 = calculate_memsize(size, min_size, 0, 0, old_size, win_align);
1364 -            resource_set_range(b_res, min_align, size0);
1365 -            pci_info(bus->self, "bridge window %pR to %pR requires relaxed alignment rules\n",
1366 -                     b_res, &bus->busn_res);
1367 -        }
1368 -
1369 1391      if (realloc_head && (add_size > 0 || children_add_size > 0)) {
1370 1392          add_align = max(min_align, add_align);
1371 -            size1 = calculate_memsize(size, min_size, add_size, children_add_size,
1372 -                                      old_size, add_align);
1373 -
1374 -            if (bus->self && size1 &&
1375 -                !pbus_upstream_space_available(bus, b_res, size1, add_align)) {
1376 -                relaxed_align = 1ULL << (max_order + __ffs(SZ_1M));
1377 -                relaxed_align = max(relaxed_align, win_align);
1378 -                min_align = min(min_align, relaxed_align);
1379 -                size1 = calculate_memsize(size, min_size, add_size, children_add_size,
1380 -                                          old_size, win_align);
1381 -                pci_info(bus->self,
1382 -                         "bridge window %pR to %pR requires relaxed alignment rules\n",
1383 -                         b_res, &bus->busn_res);
1384 -            }
1393 +            size1 = calculate_memsize(size, add_size, children_add_size,
1394 +                                      win_align);
1385 1395      }
1386 1396
1387 1397      if (!size0 && !size1) {
···
1356 1442
1357 1443      resource_set_range(b_res, min_align, size0);
1358 1444      b_res->flags |= IORESOURCE_STARTALIGN;
1359 -        if (bus->self && size1 > size0 && realloc_head) {
1445 +        if (bus->self && realloc_head && (size1 > size0 || add_align > min_align)) {
1360 1446          b_res->flags &= ~IORESOURCE_DISABLED;
1361 -            add_to_list(realloc_head, bus->self, b_res, size1-size0, add_align);
1447 +            add_size = size1 > size0 ? size1 - size0 : 0;
1448 +            pci_dev_res_add_to_list(realloc_head, bus->self, b_res,
1449 +                                    add_size, add_align);
1362 1450          pci_info(bus->self, "bridge window %pR to %pR add_size %llx add_align %llx\n",
1363 1451               b_res, &bus->busn_res,
1364 -                 (unsigned long long) (size1 - size0),
1452 +                 (unsigned long long) add_size,
1365 1453               (unsigned long long) add_align);
1366 1454      }
1367 -    }
1368 -
1369 -    unsigned long pci_cardbus_resource_alignment(struct resource *res)
1370 -    {
1371 -        if (res->flags & IORESOURCE_IO)
1372 -            return pci_cardbus_io_size;
1373 -        if (res->flags & IORESOURCE_MEM)
1374 -            return pci_cardbus_mem_size;
1375 -        return 0;
1376 -    }
1377 -
1378 -    static void pci_bus_size_cardbus(struct pci_bus *bus,
1379 -                                     struct list_head *realloc_head)
1380 -    {
1381 -        struct pci_dev *bridge = bus->self;
1382 -        struct resource *b_res;
1383 -        resource_size_t b_res_3_size = pci_cardbus_mem_size * 2;
1384 -        u16 ctrl;
1385 -
1386 -        b_res = &bridge->resource[PCI_CB_BRIDGE_IO_0_WINDOW];
1387 -        if (b_res->parent)
1388 -            goto handle_b_res_1;
1389 -        /*
1390 -         * Reserve some resources for CardBus. We reserve a fixed amount
1391 -         * of bus space for CardBus bridges.
1392 -         */
1393 -        resource_set_range(b_res, pci_cardbus_io_size, pci_cardbus_io_size);
1394 -        b_res->flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN;
1395 -        if (realloc_head) {
1396 -            b_res->end -= pci_cardbus_io_size;
1397 -            add_to_list(realloc_head, bridge, b_res, pci_cardbus_io_size,
1398 -                        pci_cardbus_io_size);
1399 -        }
1400 -
1401 -    handle_b_res_1:
1402 -        b_res = &bridge->resource[PCI_CB_BRIDGE_IO_1_WINDOW];
1403 -        if (b_res->parent)
1404 -            goto handle_b_res_2;
1405 -        resource_set_range(b_res, pci_cardbus_io_size, pci_cardbus_io_size);
1406 -        b_res->flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN;
1407 -        if (realloc_head) {
1408 -            b_res->end -= pci_cardbus_io_size;
1409 -            add_to_list(realloc_head, bridge, b_res, pci_cardbus_io_size,
1410 -                        pci_cardbus_io_size);
1411 -        }
1412 -
1413 -    handle_b_res_2:
1414 -        /* MEM1 must not be pref MMIO */
1415 -        pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl);
1416 -        if (ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM1) {
1417 -            ctrl &= ~PCI_CB_BRIDGE_CTL_PREFETCH_MEM1;
1418 -            pci_write_config_word(bridge, PCI_CB_BRIDGE_CONTROL, ctrl);
1419 -            pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl);
1420 -        }
1421 -
1422 -        /* Check whether prefetchable memory is supported by this bridge. */
1423 -        pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl);
1424 -        if (!(ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM0)) {
1425 -            ctrl |= PCI_CB_BRIDGE_CTL_PREFETCH_MEM0;
1426 -            pci_write_config_word(bridge, PCI_CB_BRIDGE_CONTROL, ctrl);
1427 -            pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl);
1428 -        }
1429 -
1430 -        b_res = &bridge->resource[PCI_CB_BRIDGE_MEM_0_WINDOW];
1431 -        if (b_res->parent)
1432 -            goto handle_b_res_3;
1433 -        /*
1434 -         * If we have prefetchable memory support, allocate two regions.
1435 -         * Otherwise, allocate one region of twice the size.
1436 -         */
1437 -        if (ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM0) {
1438 -            resource_set_range(b_res, pci_cardbus_mem_size,
1439 -                               pci_cardbus_mem_size);
1440 -            b_res->flags |= IORESOURCE_MEM | IORESOURCE_PREFETCH |
1441 -                            IORESOURCE_STARTALIGN;
1442 -            if (realloc_head) {
1443 -                b_res->end -= pci_cardbus_mem_size;
1444 -                add_to_list(realloc_head, bridge, b_res,
1445 -                            pci_cardbus_mem_size, pci_cardbus_mem_size);
1446 -            }
1447 -
1448 -            /* Reduce that to half */
1449 -            b_res_3_size = pci_cardbus_mem_size;
1450 -        }
1451 -
1452 -    handle_b_res_3:
1453 -        b_res = &bridge->resource[PCI_CB_BRIDGE_MEM_1_WINDOW];
1454 -        if (b_res->parent)
1455 -            goto handle_done;
1456 -        resource_set_range(b_res, pci_cardbus_mem_size, b_res_3_size);
1457 -        b_res->flags |= IORESOURCE_MEM | IORESOURCE_STARTALIGN;
1458 -        if (realloc_head) {
1459 -            b_res->end -= b_res_3_size;
1460 -            add_to_list(realloc_head, bridge, b_res, b_res_3_size,
1461 -                        pci_cardbus_mem_size);
1462 -        }
1463 -
1464 -    handle_done:
1465 -        ;
1466 1455  }
1467 1456
1468 1457  void __pci_bus_size_bridges(struct pci_bus *bus, struct list_head *realloc_head)
···
1373 1556      struct pci_dev *dev;
1374 1557      resource_size_t additional_io_size = 0, additional_mmio_size = 0,
1375 1558                      additional_mmio_pref_size = 0;
1376 -        struct resource *pref;
1559 +        struct resource *b_res;
1377 1560      struct pci_host_bridge *host;
1378 1561      int hdr_type;
1379 1562
···
1384 1567
1385 1568          switch (dev->hdr_type) {
1386 1569          case PCI_HEADER_TYPE_CARDBUS:
1387 -                pci_bus_size_cardbus(b, realloc_head);
1570 +                if (pci_bus_size_cardbus_bridge(b, realloc_head))
1571 +                    continue;
1388 1572              break;
1389 1573
1390 1574          case PCI_HEADER_TYPE_BRIDGE:
···
1400 1582          host = to_pci_host_bridge(bus->bridge);
1401 1583          if (!host->size_windows)
1402 1584              return;
1403 -            pci_bus_for_each_resource(bus, pref)
1404 -                if (pref && (pref->flags & IORESOURCE_PREFETCH))
1405 -                    break;
1406 1585          hdr_type = -1;  /* Intentionally invalid - not a PCI device. */
1407 1586      } else {
1408 -            pref = &bus->self->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
1409 1587          hdr_type = bus->self->hdr_type;
1410 1588      }
1411 1589
···
1419 1605      }
1420 1606      fallthrough;
1421 1607      default:
1422 -            pbus_size_io(bus, realloc_head ? 0 : additional_io_size,
1423 -                         additional_io_size, realloc_head);
1608 +            pbus_size_io(bus, additional_io_size, realloc_head);
1424 1609
1425 -            if (pref && (pref->flags & IORESOURCE_PREFETCH)) {
1426 -                pbus_size_mem(bus,
1427 -                              IORESOURCE_MEM | IORESOURCE_PREFETCH |
1428 -                              (pref->flags & IORESOURCE_MEM_64),
1429 -                              realloc_head ? 0 : additional_mmio_pref_size,
1430 -                              additional_mmio_pref_size, realloc_head);
1610 +            b_res = pbus_select_window_for_type(bus, IORESOURCE_MEM |
1611 +                                                     IORESOURCE_PREFETCH |
1612 +                                                     IORESOURCE_MEM_64);
1613 +            if (b_res && (b_res->flags & IORESOURCE_PREFETCH)) {
1614 +                pbus_size_mem(bus, b_res, additional_mmio_pref_size,
1615 +                              realloc_head);
1431 1616          }
1432 1617
1433 -            pbus_size_mem(bus, IORESOURCE_MEM,
1434 -                          realloc_head ? 0 : additional_mmio_size,
1435 -                          additional_mmio_size, realloc_head);
1618 +            b_res = pbus_select_window_for_type(bus, IORESOURCE_MEM);
1619 +            if (b_res) {
1620 +                pbus_size_mem(bus, b_res, additional_mmio_size,
1621 +                              realloc_head);
1622 +            }
1436 1623          break;
1437 1624      }
1438 1625  }
···
1471 1656      pci_dev_for_each_resource(dev, r) {
1472 1657          struct pci_bus *b;
1473 1658
1474 -            if (r->parent || !(r->flags & IORESOURCE_PCI_FIXED) ||
1659 +            if (resource_assigned(r) ||
1660 +                !(r->flags & IORESOURCE_PCI_FIXED) ||
1475 1661              !(r->flags & (IORESOURCE_IO | IORESOURCE_MEM)))
1476 1662              continue;
1477 1663
1478 1664          b = dev->bus;
1479 -            while (b && !r->parent) {
1665 +            while (b && !resource_assigned(r)) {
1480 1666              assign_fixed_resource_on_bus(b, r);
1481 1667              b = b->parent;
1482 1668          }
···
1509 1693          break;
1510 1694
1511 1695      case PCI_HEADER_TYPE_CARDBUS:
1512 -            pci_setup_cardbus(b);
1696 +            pci_setup_cardbus_bridge(b);
1513 1697          break;
1514 1698
1515 1699      default:
···
1533 1717      for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) {
1534 1718          struct resource *r = &dev->resource[i];
1535 1719
1536 -            if (!r->flags || r->parent)
1720 +            if (!r->flags || resource_assigned(r))
1537 1721              continue;
1538 1722
1539 1723          pci_claim_resource(dev, i);
···
1547 1731      for (i = PCI_BRIDGE_RESOURCES; i < PCI_NUM_RESOURCES; i++) {
1548 1732          struct resource *r = &dev->resource[i];
1549 1733
1550 -            if (!r->flags || r->parent)
1734 +            if (!r->flags || resource_assigned(r))
1735 +                continue;
1736 +            if (r->flags & IORESOURCE_DISABLED)
1551 1737              continue;
1552 1738
1553 1739          pci_claim_bridge_resource(dev, i);
···
1616 1798          break;
1617 1799
1618 1800      case PCI_CLASS_BRIDGE_CARDBUS:
1619 -            pci_setup_cardbus(b);
1801 +            pci_setup_cardbus_bridge(b);
1620 1802          break;
1621 1803
1622 1804      default:
···
1632 1814      struct pci_dev *dev = bus->self;
1633 1815      int idx, ret;
1634 1816
1635 -        if (!b_win->parent)
1817 +        if (!resource_assigned(b_win))
1636 1818          return;
1637 1819
1638 1820      idx = pci_resource_num(dev, b_win);
···
1828
2010 { 1829 2011 resource_size_t add_size, size = resource_size(res); 1830 2012 1831 - if (res->parent) 2013 + if (resource_assigned(res)) 1832 2014 return; 1833 2015 1834 2016 if (!new_size) ··· 1850 2032 1851 2033 /* If the resource is part of the add_list, remove it now */ 1852 2034 if (add_list) 1853 - remove_from_list(add_list, res); 2035 + pci_dev_res_remove_from_list(add_list, res); 1854 2036 } 1855 2037 1856 2038 static void remove_dev_resource(struct resource *avail, struct pci_dev *dev, ··· 1918 2100 * window. 1919 2101 */ 1920 2102 align = pci_resource_alignment(bridge, res); 1921 - if (!res->parent && align) 2103 + if (!resource_assigned(res) && align) 1922 2104 available[i].start = min(ALIGN(available[i].start, align), 1923 2105 available[i].end + 1); 1924 2106 ··· 2102 2284 2103 2285 /* Restore size and flags */ 2104 2286 list_for_each_entry(fail_res, fail_head, list) 2105 - restore_dev_resource(fail_res); 2287 + pci_dev_res_restore(fail_res); 2106 2288 2107 - free_list(fail_head); 2289 + pci_dev_res_free_list(fail_head); 2108 2290 } 2109 2291 2110 2292 /* ··· 2151 2333 /* Depth last, allocate resources and update the hardware. */ 2152 2334 __pci_bus_assign_resources(bus, add_list, &fail_head); 2153 2335 if (WARN_ON_ONCE(add_list && !list_empty(add_list))) 2154 - free_list(add_list); 2336 + pci_dev_res_free_list(add_list); 2155 2337 tried_times++; 2156 2338 2157 2339 /* Any device complain? 
*/ ··· 2166 2348 dev_info(&bus->dev, 2167 2349 "Automatically enabled pci realloc, if you have problem, try booting with pci=realloc=off\n"); 2168 2350 } 2169 - free_list(&fail_head); 2351 + pci_dev_res_free_list(&fail_head); 2170 2352 break; 2171 2353 } 2172 2354 ··· 2214 2396 2215 2397 __pci_bridge_assign_resources(bridge, &add_list, &fail_head); 2216 2398 if (WARN_ON_ONCE(!list_empty(&add_list))) 2217 - free_list(&add_list); 2399 + pci_dev_res_free_list(&add_list); 2218 2400 tried_times++; 2219 2401 2220 2402 if (list_empty(&fail_head)) ··· 2222 2404 2223 2405 if (tried_times >= 2) { 2224 2406 /* Still fail, don't need to try more */ 2225 - free_list(&fail_head); 2407 + pci_dev_res_free_list(&fail_head); 2226 2408 break; 2227 2409 } 2228 2410 ··· 2263 2445 2264 2446 /* Ignore BARs which are still in use */ 2265 2447 if (!res->child) { 2266 - ret = add_to_list(saved, bridge, res, 0, 0); 2448 + ret = pci_dev_res_add_to_list(saved, bridge, res, 0, 0); 2267 2449 if (ret) 2268 2450 return ret; 2269 2451 ··· 2285 2467 __pci_bus_size_bridges(bridge->subordinate, &added); 2286 2468 __pci_bridge_assign_resources(bridge, &added, &failed); 2287 2469 if (WARN_ON_ONCE(!list_empty(&added))) 2288 - free_list(&added); 2470 + pci_dev_res_free_list(&added); 2289 2471 2290 2472 if (!list_empty(&failed)) { 2291 2473 if (pci_required_resource_failed(&failed, type)) 2292 2474 ret = -ENOSPC; 2293 - free_list(&failed); 2475 + pci_dev_res_free_list(&failed); 2294 2476 if (ret) 2295 2477 return ret; 2296 2478 ··· 2346 2528 if (b_win != pbus_select_window(bus, r)) 2347 2529 continue; 2348 2530 2349 - ret = add_to_list(&saved, pdev, r, 0, 0); 2531 + ret = pci_dev_res_add_to_list(&saved, pdev, r, 0, 0); 2350 2532 if (ret) 2351 2533 goto restore; 2352 2534 pci_release_resource(pdev, i); ··· 2364 2546 2365 2547 out: 2366 2548 up_read(&pci_bus_sem); 2367 - free_list(&saved); 2549 + pci_dev_res_free_list(&saved); 2368 2550 return ret; 2369 2551 2370 2552 restore: ··· 2383 2565 2384 2566 i = 
pci_resource_num(dev, res); 2385 2567 2386 - if (res->parent) { 2568 + if (resource_assigned(res)) { 2387 2569 release_child_resources(res); 2388 2570 pci_release_resource(dev, i); 2389 2571 } 2390 2572 2391 - restore_dev_resource(dev_res); 2573 + pci_dev_res_restore(dev_res); 2392 2574 2393 2575 if (pci_claim_resource(dev, i)) 2394 2576 continue; ··· 2419 2601 up_read(&pci_bus_sem); 2420 2602 __pci_bus_assign_resources(bus, &add_list, NULL); 2421 2603 if (WARN_ON_ONCE(!list_empty(&add_list))) 2422 - free_list(&add_list); 2604 + pci_dev_res_free_list(&add_list); 2423 2605 } 2424 2606 EXPORT_SYMBOL_GPL(pci_assign_unassigned_bus_resources);
+306
drivers/pci/setup-cardbus.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Cardbus bridge setup routines. 4 + */ 5 + 6 + #include <linux/bitfield.h> 7 + #include <linux/errno.h> 8 + #include <linux/ioport.h> 9 + #include <linux/pci.h> 10 + #include <linux/sizes.h> 11 + #include <linux/sprintf.h> 12 + #include <linux/types.h> 13 + 14 + #include "pci.h" 15 + 16 + #define CARDBUS_LATENCY_TIMER 176 /* secondary latency timer */ 17 + #define CARDBUS_RESERVE_BUSNR 3 18 + 19 + #define DEFAULT_CARDBUS_IO_SIZE SZ_256 20 + #define DEFAULT_CARDBUS_MEM_SIZE SZ_64M 21 + /* pci=cbmemsize=nnM,cbiosize=nn can override this */ 22 + static unsigned long pci_cardbus_io_size = DEFAULT_CARDBUS_IO_SIZE; 23 + static unsigned long pci_cardbus_mem_size = DEFAULT_CARDBUS_MEM_SIZE; 24 + 25 + unsigned long pci_cardbus_resource_alignment(struct resource *res) 26 + { 27 + if (res->flags & IORESOURCE_IO) 28 + return pci_cardbus_io_size; 29 + if (res->flags & IORESOURCE_MEM) 30 + return pci_cardbus_mem_size; 31 + return 0; 32 + } 33 + 34 + int pci_bus_size_cardbus_bridge(struct pci_bus *bus, 35 + struct list_head *realloc_head) 36 + { 37 + struct pci_dev *bridge = bus->self; 38 + struct resource *b_res; 39 + resource_size_t b_res_3_size = pci_cardbus_mem_size * 2; 40 + u16 ctrl; 41 + 42 + b_res = &bridge->resource[PCI_CB_BRIDGE_IO_0_WINDOW]; 43 + if (resource_assigned(b_res)) 44 + goto handle_b_res_1; 45 + /* 46 + * Reserve some resources for CardBus. We reserve a fixed amount 47 + * of bus space for CardBus bridges. 
48 + */ 49 + resource_set_range(b_res, pci_cardbus_io_size, pci_cardbus_io_size); 50 + b_res->flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN; 51 + if (realloc_head) { 52 + b_res->end -= pci_cardbus_io_size; 53 + pci_dev_res_add_to_list(realloc_head, bridge, b_res, 54 + pci_cardbus_io_size, 55 + pci_cardbus_io_size); 56 + } 57 + 58 + handle_b_res_1: 59 + b_res = &bridge->resource[PCI_CB_BRIDGE_IO_1_WINDOW]; 60 + if (resource_assigned(b_res)) 61 + goto handle_b_res_2; 62 + resource_set_range(b_res, pci_cardbus_io_size, pci_cardbus_io_size); 63 + b_res->flags |= IORESOURCE_IO | IORESOURCE_STARTALIGN; 64 + if (realloc_head) { 65 + b_res->end -= pci_cardbus_io_size; 66 + pci_dev_res_add_to_list(realloc_head, bridge, b_res, 67 + pci_cardbus_io_size, 68 + pci_cardbus_io_size); 69 + } 70 + 71 + handle_b_res_2: 72 + /* MEM1 must not be pref MMIO */ 73 + pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl); 74 + if (ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM1) { 75 + ctrl &= ~PCI_CB_BRIDGE_CTL_PREFETCH_MEM1; 76 + pci_write_config_word(bridge, PCI_CB_BRIDGE_CONTROL, ctrl); 77 + pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl); 78 + } 79 + 80 + /* Check whether prefetchable memory is supported by this bridge. */ 81 + pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl); 82 + if (!(ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM0)) { 83 + ctrl |= PCI_CB_BRIDGE_CTL_PREFETCH_MEM0; 84 + pci_write_config_word(bridge, PCI_CB_BRIDGE_CONTROL, ctrl); 85 + pci_read_config_word(bridge, PCI_CB_BRIDGE_CONTROL, &ctrl); 86 + } 87 + 88 + b_res = &bridge->resource[PCI_CB_BRIDGE_MEM_0_WINDOW]; 89 + if (resource_assigned(b_res)) 90 + goto handle_b_res_3; 91 + /* 92 + * If we have prefetchable memory support, allocate two regions. 93 + * Otherwise, allocate one region of twice the size. 
94 + */ 95 + if (ctrl & PCI_CB_BRIDGE_CTL_PREFETCH_MEM0) { 96 + resource_set_range(b_res, pci_cardbus_mem_size, 97 + pci_cardbus_mem_size); 98 + b_res->flags |= IORESOURCE_MEM | IORESOURCE_PREFETCH | 99 + IORESOURCE_STARTALIGN; 100 + if (realloc_head) { 101 + b_res->end -= pci_cardbus_mem_size; 102 + pci_dev_res_add_to_list(realloc_head, bridge, b_res, 103 + pci_cardbus_mem_size, 104 + pci_cardbus_mem_size); 105 + } 106 + 107 + /* Reduce that to half */ 108 + b_res_3_size = pci_cardbus_mem_size; 109 + } 110 + 111 + handle_b_res_3: 112 + b_res = &bridge->resource[PCI_CB_BRIDGE_MEM_1_WINDOW]; 113 + if (resource_assigned(b_res)) 114 + goto handle_done; 115 + resource_set_range(b_res, pci_cardbus_mem_size, b_res_3_size); 116 + b_res->flags |= IORESOURCE_MEM | IORESOURCE_STARTALIGN; 117 + if (realloc_head) { 118 + b_res->end -= b_res_3_size; 119 + pci_dev_res_add_to_list(realloc_head, bridge, b_res, 120 + b_res_3_size, pci_cardbus_mem_size); 121 + } 122 + 123 + handle_done: 124 + return 0; 125 + } 126 + 127 + void pci_setup_cardbus_bridge(struct pci_bus *bus) 128 + { 129 + struct pci_dev *bridge = bus->self; 130 + struct resource *res; 131 + struct pci_bus_region region; 132 + 133 + pci_info(bridge, "CardBus bridge to %pR\n", 134 + &bus->busn_res); 135 + 136 + res = bus->resource[0]; 137 + pcibios_resource_to_bus(bridge->bus, &region, res); 138 + if (resource_assigned(res) && res->flags & IORESOURCE_IO) { 139 + /* 140 + * The IO resource is allocated a range twice as large as it 141 + * would normally need. This allows us to set both IO regs. 
142 + */ 143 + pci_info(bridge, " bridge window %pR\n", res); 144 + pci_write_config_dword(bridge, PCI_CB_IO_BASE_0, 145 + region.start); 146 + pci_write_config_dword(bridge, PCI_CB_IO_LIMIT_0, 147 + region.end); 148 + } 149 + 150 + res = bus->resource[1]; 151 + pcibios_resource_to_bus(bridge->bus, &region, res); 152 + if (resource_assigned(res) && res->flags & IORESOURCE_IO) { 153 + pci_info(bridge, " bridge window %pR\n", res); 154 + pci_write_config_dword(bridge, PCI_CB_IO_BASE_1, 155 + region.start); 156 + pci_write_config_dword(bridge, PCI_CB_IO_LIMIT_1, 157 + region.end); 158 + } 159 + 160 + res = bus->resource[2]; 161 + pcibios_resource_to_bus(bridge->bus, &region, res); 162 + if (resource_assigned(res) && res->flags & IORESOURCE_MEM) { 163 + pci_info(bridge, " bridge window %pR\n", res); 164 + pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_0, 165 + region.start); 166 + pci_write_config_dword(bridge, PCI_CB_MEMORY_LIMIT_0, 167 + region.end); 168 + } 169 + 170 + res = bus->resource[3]; 171 + pcibios_resource_to_bus(bridge->bus, &region, res); 172 + if (resource_assigned(res) && res->flags & IORESOURCE_MEM) { 173 + pci_info(bridge, " bridge window %pR\n", res); 174 + pci_write_config_dword(bridge, PCI_CB_MEMORY_BASE_1, 175 + region.start); 176 + pci_write_config_dword(bridge, PCI_CB_MEMORY_LIMIT_1, 177 + region.end); 178 + } 179 + } 180 + EXPORT_SYMBOL(pci_setup_cardbus_bridge); 181 + 182 + int pci_setup_cardbus(char *str) 183 + { 184 + if (!strncmp(str, "cbiosize=", 9)) { 185 + pci_cardbus_io_size = memparse(str + 9, &str); 186 + return 0; 187 + } else if (!strncmp(str, "cbmemsize=", 10)) { 188 + pci_cardbus_mem_size = memparse(str + 10, &str); 189 + return 0; 190 + } 191 + 192 + return -ENOENT; 193 + } 194 + 195 + int pci_cardbus_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev, 196 + u32 buses, int max, 197 + unsigned int available_buses, int pass) 198 + { 199 + struct pci_bus *child; 200 + bool fixed_buses; 201 + u8 fixed_sec, fixed_sub; 202 
+ int next_busnr; 203 + u32 i, j = 0; 204 + 205 + /* 206 + * We need to assign a number to this bus which we always do in the 207 + * second pass. 208 + */ 209 + if (!pass) { 210 + /* 211 + * Temporarily disable forwarding of the configuration 212 + * cycles on all bridges in this bus segment to avoid 213 + * possible conflicts in the second pass between two bridges 214 + * programmed with overlapping bus ranges. 215 + */ 216 + pci_write_config_dword(dev, PCI_PRIMARY_BUS, 217 + buses & PCI_SEC_LATENCY_TIMER_MASK); 218 + return max; 219 + } 220 + 221 + /* Clear errors */ 222 + pci_write_config_word(dev, PCI_STATUS, 0xffff); 223 + 224 + /* Read bus numbers from EA Capability (if present) */ 225 + fixed_buses = pci_ea_fixed_busnrs(dev, &fixed_sec, &fixed_sub); 226 + if (fixed_buses) 227 + next_busnr = fixed_sec; 228 + else 229 + next_busnr = max + 1; 230 + 231 + /* 232 + * Prevent assigning a bus number that already exists. This can 233 + * happen when a bridge is hot-plugged, so in this case we only 234 + * re-scan this bus. 235 + */ 236 + child = pci_find_bus(pci_domain_nr(bus), next_busnr); 237 + if (!child) { 238 + child = pci_add_new_bus(bus, dev, next_busnr); 239 + if (!child) 240 + return max; 241 + pci_bus_insert_busn_res(child, next_busnr, bus->busn_res.end); 242 + } 243 + max++; 244 + if (available_buses) 245 + available_buses--; 246 + 247 + buses = (buses & PCI_SEC_LATENCY_TIMER_MASK) | 248 + FIELD_PREP(PCI_PRIMARY_BUS_MASK, child->primary) | 249 + FIELD_PREP(PCI_SECONDARY_BUS_MASK, child->busn_res.start) | 250 + FIELD_PREP(PCI_SUBORDINATE_BUS_MASK, child->busn_res.end); 251 + 252 + /* 253 + * yenta.c forces a secondary latency timer of 176. 254 + * Copy that behaviour here. 
255 + */ 256 + buses &= ~PCI_SEC_LATENCY_TIMER_MASK; 257 + buses |= FIELD_PREP(PCI_SEC_LATENCY_TIMER_MASK, CARDBUS_LATENCY_TIMER); 258 + 259 + /* We need to blast all three values with a single write */ 260 + pci_write_config_dword(dev, PCI_PRIMARY_BUS, buses); 261 + 262 + /* 263 + * For CardBus bridges, we leave 4 bus numbers as cards with a 264 + * PCI-to-PCI bridge can be inserted later. 265 + */ 266 + for (i = 0; i < CARDBUS_RESERVE_BUSNR; i++) { 267 + struct pci_bus *parent = bus; 268 + 269 + if (pci_find_bus(pci_domain_nr(bus), max + i + 1)) 270 + break; 271 + 272 + while (parent->parent) { 273 + if (!pcibios_assign_all_busses() && 274 + (parent->busn_res.end > max) && 275 + (parent->busn_res.end <= max + i)) { 276 + j = 1; 277 + } 278 + parent = parent->parent; 279 + } 280 + if (j) { 281 + /* 282 + * Often, there are two CardBus bridges -- try to 283 + * leave one valid bus number for each one. 284 + */ 285 + i /= 2; 286 + break; 287 + } 288 + } 289 + max += i; 290 + 291 + /* 292 + * Set subordinate bus number to its real value. If fixed 293 + * subordinate bus number exists from EA capability then use it. 294 + */ 295 + if (fixed_buses) 296 + max = fixed_sub; 297 + pci_bus_update_busn_res_end(child, max); 298 + pci_write_config_byte(dev, PCI_SUBORDINATE_BUS, max); 299 + 300 + scnprintf(child->name, sizeof(child->name), "PCI CardBus %04x:%02x", 301 + pci_domain_nr(bus), child->number); 302 + 303 + pbus_validate_busn(child); 304 + 305 + return max; 306 + }
+1 -1
drivers/pci/setup-res.c
··· 359 359 360 360 res->flags &= ~IORESOURCE_UNSET; 361 361 res->flags &= ~IORESOURCE_STARTALIGN; 362 - if (resno >= PCI_BRIDGE_RESOURCES && resno <= PCI_BRIDGE_RESOURCE_END) 362 + if (pci_resource_is_bridge_win(resno)) 363 363 res->flags &= ~IORESOURCE_DISABLED; 364 364 365 365 pci_info(dev, "%s %pR: assigned\n", res_name, res);
+11
drivers/pci/trace.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Tracepoints for PCI system 4 + * 5 + * Copyright (C) 2025 Alibaba Corporation 6 + */ 7 + 8 + #include <linux/pci.h> 9 + 10 + #define CREATE_TRACE_POINTS 11 + #include <trace/events/pci.h>
+1 -1
drivers/pcmcia/yenta_socket.c
··· 779 779 IORESOURCE_MEM, 780 780 PCI_CB_MEMORY_BASE_1, PCI_CB_MEMORY_LIMIT_1); 781 781 if (program) 782 - pci_setup_cardbus(socket->dev->subordinate); 782 + pci_setup_cardbus_bridge(socket->dev->subordinate); 783 783 } 784 784 785 785
-1
drivers/usb/host/ehci-pci.c
··· 21 21 /* defined here to avoid adding to pci_ids.h for single instance use */ 22 22 #define PCI_DEVICE_ID_INTEL_CE4100_USB 0x2e70 23 23 24 - #define PCI_VENDOR_ID_ASPEED 0x1a03 25 24 #define PCI_DEVICE_ID_ASPEED_EHCI 0x2603 26 25 27 26 /*-------------------------------------------------------------------------*/
+1 -1
include/linux/ioport.h
··· 338 338 * Check if this resource is added to a resource tree or detached. Caller is 339 339 * responsible for not racing assignment. 340 340 */ 341 - static inline bool resource_assigned(struct resource *res) 341 + static inline bool resource_assigned(const struct resource *res) 342 342 { 343 343 return res->parent; 344 344 }
+9
include/linux/pci-epc.h
··· 223 223 /** 224 224 * struct pci_epc_features - features supported by a EPC device per function 225 225 * @linkup_notifier: indicate if the EPC device can notify EPF driver on link up 226 + * @dynamic_inbound_mapping: indicate if the EPC device supports updating 227 + * inbound mappings for an already configured BAR 228 + * (i.e. allow calling pci_epc_set_bar() again 229 + * without first calling pci_epc_clear_bar()) 230 + * @subrange_mapping: indicate if the EPC device can map inbound subranges for a 231 + * BAR. This feature depends on @dynamic_inbound_mapping 232 + * feature. 226 233 * @msi_capable: indicate if the endpoint function has MSI capability 227 234 * @msix_capable: indicate if the endpoint function has MSI-X capability 228 235 * @intx_capable: indicate if the endpoint can raise INTx interrupts ··· 238 231 */ 239 232 struct pci_epc_features { 240 233 unsigned int linkup_notifier : 1; 234 + unsigned int dynamic_inbound_mapping : 1; 235 + unsigned int subrange_mapping : 1; 241 236 unsigned int msi_capable : 1; 242 237 unsigned int msix_capable : 1; 243 238 unsigned int intx_capable : 1;
+23
include/linux/pci-epf.h
··· 111 111 #define to_pci_epf_driver(drv) container_of_const((drv), struct pci_epf_driver, driver) 112 112 113 113 /** 114 + * struct pci_epf_bar_submap - BAR subrange for inbound mapping 115 + * @phys_addr: target physical/DMA address for this subrange 116 + * @size: the size of the subrange to be mapped 117 + * 118 + * When pci_epf_bar.num_submap is >0, pci_epf_bar.submap describes the 119 + * complete BAR layout. This allows an EPC driver to program multiple 120 + * inbound translation windows for a single BAR when supported by the 121 + * controller. The array order defines the BAR layout (submap[0] at offset 122 + * 0, and each immediately follows the previous one). 123 + */ 124 + struct pci_epf_bar_submap { 125 + dma_addr_t phys_addr; 126 + size_t size; 127 + }; 128 + 129 + /** 114 130 * struct pci_epf_bar - represents the BAR of EPF device 115 131 * @phys_addr: physical address that should be mapped to the BAR 116 132 * @addr: virtual address corresponding to the @phys_addr ··· 135 119 * requirement 136 120 * @barno: BAR number 137 121 * @flags: flags that are set for the BAR 122 + * @num_submap: number of entries in @submap 123 + * @submap: array of subrange descriptors allocated by the caller. See 124 + * struct pci_epf_bar_submap for the semantics in detail. 138 125 */ 139 126 struct pci_epf_bar { 140 127 dma_addr_t phys_addr; ··· 146 127 size_t mem_size; 147 128 enum pci_barno barno; 148 129 int flags; 130 + 131 + /* Optional sub-range mapping */ 132 + unsigned int num_submap; 133 + struct pci_epf_bar_submap *submap; 149 134 }; 150 135 151 136 /**
+2
include/linux/pci-p2pdma.h
··· 20 20 * struct p2pdma_provider 21 21 * 22 22 * A p2pdma provider is a range of MMIO address space available to the CPU. 23 + * @owner: Device to which this provider belongs. 24 + * @bus_offset: Bus offset for p2p communication. 23 25 */ 24 26 struct p2pdma_provider { 25 27 struct device *owner;
+15 -1
include/linux/pci-pwrctrl.h
··· 31 31 /** 32 32 * struct pci_pwrctrl - PCI device power control context. 33 33 * @dev: Address of the power controlling device. 34 + * @power_on: Callback to power on the power controlling device. 35 + * @power_off: Callback to power off the power controlling device. 34 36 * 35 37 * An object of this type must be allocated by the PCI power control device and 36 38 * passed to the pwrctrl subsystem to trigger a bus rescan and setup a device ··· 40 38 */ 41 39 struct pci_pwrctrl { 42 40 struct device *dev; 41 + int (*power_on)(struct pci_pwrctrl *pwrctrl); 42 + int (*power_off)(struct pci_pwrctrl *pwrctrl); 43 43 44 44 /* private: internal use only */ 45 45 struct notifier_block nb; ··· 54 50 void pci_pwrctrl_device_unset_ready(struct pci_pwrctrl *pwrctrl); 55 51 int devm_pci_pwrctrl_device_set_ready(struct device *dev, 56 52 struct pci_pwrctrl *pwrctrl); 57 - 53 + #if IS_ENABLED(CONFIG_PCI_PWRCTRL) 54 + int pci_pwrctrl_create_devices(struct device *parent); 55 + void pci_pwrctrl_destroy_devices(struct device *parent); 56 + int pci_pwrctrl_power_on_devices(struct device *parent); 57 + void pci_pwrctrl_power_off_devices(struct device *parent); 58 + #else 59 + static inline int pci_pwrctrl_create_devices(struct device *parent) { return 0; } 60 + static inline void pci_pwrctrl_destroy_devices(struct device *parent) { } 61 + static inline int pci_pwrctrl_power_on_devices(struct device *parent) { return 0; } 62 + static inline void pci_pwrctrl_power_off_devices(struct device *parent) { } 63 + #endif 58 64 #endif /* __PCI_PWRCTRL_H__ */
+12 -2
include/linux/pci.h
··· 248 248 PCI_DEV_FLAGS_HAS_MSI_MASKING = (__force pci_dev_flags_t) (1 << 12), 249 249 /* Device requires write to PCI_MSIX_ENTRY_DATA before any MSIX reads */ 250 250 PCI_DEV_FLAGS_MSIX_TOUCH_ENTRY_DATA_FIRST = (__force pci_dev_flags_t) (1 << 13), 251 + /* 252 + * PCIe to PCI bridge does not create RID aliases because the bridge is 253 + * integrated with the downstream devices and doesn't use real PCI. 254 + */ 255 + PCI_DEV_FLAGS_PCI_BRIDGE_NO_ALIAS = (__force pci_dev_flags_t) (1 << 14), 251 256 }; 252 257 253 258 enum pci_irq_reroute_variant { ··· 418 413 user sysfs */ 419 414 unsigned int clear_retrain_link:1; /* Need to clear Retrain Link 420 415 bit manually */ 416 + unsigned int no_bw_notif:1; /* BW notifications may cause issues */ 421 417 unsigned int d3hot_delay; /* D3hot->D0 transition time in ms */ 422 418 unsigned int d3cold_delay; /* D3cold->D0 transition time in ms */ 423 419 ··· 570 564 struct pci_tsm *tsm; /* TSM operation state */ 571 565 #endif 572 566 u16 acs_cap; /* ACS Capability offset */ 567 + u16 acs_capabilities; /* ACS Capabilities */ 573 568 u8 supported_speeds; /* Supported Link Speeds Vector */ 574 569 phys_addr_t rom; /* Physical address if not from BAR */ 575 570 size_t romlen; /* Length if not from BAR */ ··· 867 860 void __iomem *(*map_bus)(struct pci_bus *bus, unsigned int devfn, int where); 868 861 int (*read)(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 *val); 869 862 int (*write)(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 val); 870 - int (*assert_perst)(struct pci_bus *bus, bool assert); 871 863 }; 872 864 873 865 /* ··· 1256 1250 void pci_stop_and_remove_bus_device_locked(struct pci_dev *dev); 1257 1251 void pci_stop_root_bus(struct pci_bus *bus); 1258 1252 void pci_remove_root_bus(struct pci_bus *bus); 1259 - void pci_setup_cardbus(struct pci_bus *bus); 1253 + #ifdef CONFIG_CARDBUS 1254 + void pci_setup_cardbus_bridge(struct pci_bus *bus); 1255 + #else 1256 + static inline void 
void pci_setup_cardbus_bridge(struct pci_bus *bus) { } 1257 + #endif 1260 1258 void pcibios_setup_bridge(struct pci_bus *bus, unsigned long type); 1261 1259 void pci_sort_breadthfirst(void); 1262 1260 #define dev_is_pci(d) ((d)->bus == &pci_bus_type)
+2
include/linux/pci_ids.h
··· 2583 2583 #define PCI_DEVICE_ID_NETRONOME_NFP3800_VF 0x3803 2584 2584 #define PCI_DEVICE_ID_NETRONOME_NFP6000_VF 0x6003 2585 2585 2586 + #define PCI_VENDOR_ID_ASPEED 0x1a03 2587 + 2586 2588 #define PCI_VENDOR_ID_QMI 0x1a32 2587 2589 2588 2590 #define PCI_VENDOR_ID_AZWAVE 0x1a3b
+129
include/trace/events/pci.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #undef TRACE_SYSTEM 3 + #define TRACE_SYSTEM pci 4 + 5 + #if !defined(_TRACE_HW_EVENT_PCI_H) || defined(TRACE_HEADER_MULTI_READ) 6 + #define _TRACE_HW_EVENT_PCI_H 7 + 8 + #include <uapi/linux/pci_regs.h> 9 + #include <linux/tracepoint.h> 10 + 11 + #define PCI_HOTPLUG_EVENT \ 12 + EM(PCI_HOTPLUG_LINK_UP, "LINK_UP") \ 13 + EM(PCI_HOTPLUG_LINK_DOWN, "LINK_DOWN") \ 14 + EM(PCI_HOTPLUG_CARD_PRESENT, "CARD_PRESENT") \ 15 + EMe(PCI_HOTPLUG_CARD_NOT_PRESENT, "CARD_NOT_PRESENT") 16 + 17 + /* Enums require being exported to userspace, for user tool parsing */ 18 + #undef EM 19 + #undef EMe 20 + #define EM(a, b) TRACE_DEFINE_ENUM(a); 21 + #define EMe(a, b) TRACE_DEFINE_ENUM(a); 22 + 23 + PCI_HOTPLUG_EVENT 24 + 25 + /* 26 + * Now redefine the EM() and EMe() macros to map the enums to the strings 27 + * that will be printed in the output. 28 + */ 29 + #undef EM 30 + #undef EMe 31 + #define EM(a, b) {a, b}, 32 + #define EMe(a, b) {a, b} 33 + 34 + /* 35 + * Note: For generic PCI hotplug events, we pass already-resolved strings 36 + * (port_name, slot) instead of driver-specific structures like 'struct 37 + * controller'. This is because different PCI hotplug drivers (pciehp, cpqphp, 38 + * ibmphp, shpchp) define their own versions of 'struct controller' with 39 + * different fields and helper functions. Using driver-specific structures would 40 + * make the tracepoint interface non-generic and cause compatibility issues 41 + * across different drivers. 
42 + */ 43 + TRACE_EVENT(pci_hp_event, 44 + 45 + TP_PROTO(const char *port_name, 46 + const char *slot, 47 + const int event), 48 + 49 + TP_ARGS(port_name, slot, event), 50 + 51 + TP_STRUCT__entry( 52 + __string( port_name, port_name ) 53 + __string( slot, slot ) 54 + __field( int, event ) 55 + ), 56 + 57 + TP_fast_assign( 58 + __assign_str(port_name); 59 + __assign_str(slot); 60 + __entry->event = event; 61 + ), 62 + 63 + TP_printk("%s slot:%s, event:%s\n", 64 + __get_str(port_name), 65 + __get_str(slot), 66 + __print_symbolic(__entry->event, PCI_HOTPLUG_EVENT) 67 + ) 68 + ); 69 + 70 + #define PCI_EXP_LNKSTA_LINK_STATUS_MASK (PCI_EXP_LNKSTA_LBMS | \ 71 + PCI_EXP_LNKSTA_LABS | \ 72 + PCI_EXP_LNKSTA_LT | \ 73 + PCI_EXP_LNKSTA_DLLLA) 74 + 75 + #define LNKSTA_FLAGS \ 76 + { PCI_EXP_LNKSTA_LT, "LT"}, \ 77 + { PCI_EXP_LNKSTA_DLLLA, "DLLLA"}, \ 78 + { PCI_EXP_LNKSTA_LBMS, "LBMS"}, \ 79 + { PCI_EXP_LNKSTA_LABS, "LABS"} 80 + 81 + TRACE_EVENT(pcie_link_event, 82 + 83 + TP_PROTO(struct pci_bus *bus, 84 + unsigned int reason, 85 + unsigned int width, 86 + unsigned int status 87 + ), 88 + 89 + TP_ARGS(bus, reason, width, status), 90 + 91 + TP_STRUCT__entry( 92 + __string( port_name, pci_name(bus->self)) 93 + __field( unsigned int, type ) 94 + __field( unsigned int, reason ) 95 + __field( unsigned int, cur_bus_speed ) 96 + __field( unsigned int, max_bus_speed ) 97 + __field( unsigned int, width ) 98 + __field( unsigned int, flit_mode ) 99 + __field( unsigned int, link_status ) 100 + ), 101 + 102 + TP_fast_assign( 103 + __assign_str(port_name); 104 + __entry->type = pci_pcie_type(bus->self); 105 + __entry->reason = reason; 106 + __entry->cur_bus_speed = bus->cur_bus_speed; 107 + __entry->max_bus_speed = bus->max_bus_speed; 108 + __entry->width = width; 109 + __entry->flit_mode = bus->flit_mode; 110 + __entry->link_status = status; 111 + ), 112 + 113 + TP_printk("%s type:%d, reason:%d, cur_bus_speed:%d, max_bus_speed:%d, width:%u, flit_mode:%u, status:%s\n", 114 + 
__get_str(port_name), 115 + __entry->type, 116 + __entry->reason, 117 + __entry->cur_bus_speed, 118 + __entry->max_bus_speed, 119 + __entry->width, 120 + __entry->flit_mode, 121 + __print_flags((unsigned long)__entry->link_status, "|", 122 + LNKSTA_FLAGS) 123 + ) 124 + ); 125 + 126 + #endif /* _TRACE_HW_EVENT_PCI_H */ 127 + 128 + /* This part must be outside protection */ 129 + #include <trace/define_trace.h>
+7
include/uapi/linux/pci.h
··· 39 39 #define PCIIOC_MMAP_IS_MEM (PCIIOC_BASE | 0x02) /* Set mmap state to MEM space. */ 40 40 #define PCIIOC_WRITE_COMBINE (PCIIOC_BASE | 0x03) /* Enable/disable write-combining. */ 41 41 42 + enum pci_hotplug_event { 43 + PCI_HOTPLUG_LINK_UP, 44 + PCI_HOTPLUG_LINK_DOWN, 45 + PCI_HOTPLUG_CARD_PRESENT, 46 + PCI_HOTPLUG_CARD_NOT_PRESENT, 47 + }; 48 + 42 49 #endif /* _UAPILINUX_PCI_H */
+5
include/uapi/linux/pci_regs.h
··· 132 132 #define PCI_SECONDARY_BUS 0x19 /* Secondary bus number */ 133 133 #define PCI_SUBORDINATE_BUS 0x1a /* Highest bus number behind the bridge */ 134 134 #define PCI_SEC_LATENCY_TIMER 0x1b /* Latency timer for secondary interface */ 135 + /* Masks for dword-sized processing of Bus Number and Sec Latency Timer fields */ 136 + #define PCI_PRIMARY_BUS_MASK 0x000000ff 137 + #define PCI_SECONDARY_BUS_MASK 0x0000ff00 138 + #define PCI_SUBORDINATE_BUS_MASK 0x00ff0000 139 + #define PCI_SEC_LATENCY_TIMER_MASK 0xff000000 135 140 #define PCI_IO_BASE 0x1c /* I/O range behind the bridge */ 136 141 #define PCI_IO_LIMIT 0x1d 137 142 #define PCI_IO_RANGE_TYPE_MASK 0x0fUL /* I/O bridging type */
+1
include/uapi/linux/pcitest.h
··· 22 22 #define PCITEST_GET_IRQTYPE _IO('P', 0x9) 23 23 #define PCITEST_BARS _IO('P', 0xa) 24 24 #define PCITEST_DOORBELL _IO('P', 0xb) 25 + #define PCITEST_BAR_SUBRANGE _IO('P', 0xc) 25 26 #define PCITEST_CLEAR_IRQ _IO('P', 0x10) 26 27 27 28 #define PCITEST_IRQ_TYPE_UNDEFINED -1
+1
kernel/irq/irqdomain.c
··· 1913 1913 irq_domain_free_irq_data(virq, nr_irqs); 1914 1914 irq_free_descs(virq, nr_irqs); 1915 1915 } 1916 + EXPORT_SYMBOL_GPL(irq_domain_free_irqs); 1916 1917 1917 1918 static void irq_domain_free_one_irq(struct irq_domain *domain, unsigned int virq) 1918 1919 {
+1 -1
kernel/resource.c
··· 82 82 83 83 #ifdef CONFIG_PROC_FS 84 84 85 - enum { MAX_IORES_LEVEL = 5 }; 85 + enum { MAX_IORES_LEVEL = 8 }; 86 86 87 87 static void *r_start(struct seq_file *m, loff_t *pos) 88 88 __acquires(resource_lock)
+17
tools/testing/selftests/pci_endpoint/pci_endpoint_test.c
··· 70 70 EXPECT_FALSE(ret) TH_LOG("Test failed for BAR%d", variant->barno); 71 71 } 72 72 73 + TEST_F(pci_ep_bar, BAR_SUBRANGE_TEST) 74 + { 75 + int ret; 76 + 77 + pci_ep_ioctl(PCITEST_SET_IRQTYPE, PCITEST_IRQ_TYPE_AUTO); 78 + ASSERT_EQ(0, ret) TH_LOG("Can't set AUTO IRQ type"); 79 + 80 + pci_ep_ioctl(PCITEST_BAR_SUBRANGE, variant->barno); 81 + if (ret == -ENODATA) 82 + SKIP(return, "BAR is disabled"); 83 + if (ret == -EBUSY) 84 + SKIP(return, "BAR is test register space"); 85 + if (ret == -EOPNOTSUPP) 86 + SKIP(return, "Subrange map is not supported"); 87 + EXPECT_FALSE(ret) TH_LOG("Test failed for BAR%d", variant->barno); 88 + } 89 + 73 90 FIXTURE(pci_ep_basic) 74 91 { 75 92 int fd;