
Merge tag 'pci-v6.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:

- Batch sizing of multiple BARs while memory decoding is disabled
instead of disabling/enabling decoding for each BAR individually;
this optimizes virtualized environments where toggling decoding
enable is expensive (Alex Williamson)
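
For illustration, the batched approach can be sketched against a mock device (all names below are invented for this example; this is not the kernel's actual config-access API). The point is that memory decoding is disabled once around sizing all BARs, instead of once per BAR:

```c
#include <stdint.h>
#include <stddef.h>

/* Mock config space: command register plus six 32-bit BARs. */
struct mock_dev {
	uint16_t command;	/* bit 1 = memory decode enable */
	uint32_t bar[6];	/* BAR registers */
	uint32_t bar_size[6];	/* hardware-implemented BAR sizes */
	int decode_toggles;	/* how often decode was turned off */
};

#define MOCK_COMMAND_MEMORY 0x2

/* Writing all-ones to a BAR makes the device report its size mask. */
static uint32_t mock_bar_sizing_read(struct mock_dev *d, int bar)
{
	d->bar[bar] = ~(d->bar_size[bar] - 1);
	return d->bar[bar];
}

/*
 * Batched sizing: disable memory decode once, size every BAR, then
 * restore the command register -- rather than toggling decode around
 * each individual BAR read.
 */
static void size_all_bars(struct mock_dev *d, uint32_t sizes[6])
{
	uint16_t orig = d->command;
	int i;

	d->command &= ~MOCK_COMMAND_MEMORY;
	d->decode_toggles++;

	for (i = 0; i < 6; i++)
		sizes[i] = ~mock_bar_sizing_read(d, i) + 1;

	d->command = orig;
}
```

In a virtualized guest each config write may trap to the hypervisor, so collapsing 2N decode toggles into one pair is the win the bullet describes.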

- Add host bridge .enable_device() and .disable_device() hooks for
bridges that need to configure things like Requester ID to StreamID
mapping when enabling devices (Frank Li)
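
The shape of the hook can be sketched with mock types (not the real struct pci_host_bridge): the core's enable path invokes the bridge's callback only when one is provided, and a bridge can use it to set up things like a Requester ID to StreamID mapping:

```c
#include <stddef.h>

struct hb_dev { int rid; int stream_id; };

/* Host bridge with optional per-device enable/disable hooks. */
struct host_bridge {
	int (*enable_device)(struct host_bridge *hb, struct hb_dev *dev);
	void (*disable_device)(struct host_bridge *hb, struct hb_dev *dev);
};

/* Core enable path: call the bridge hook only when one is set. */
static int core_enable_device(struct host_bridge *hb, struct hb_dev *dev)
{
	if (hb->enable_device)
		return hb->enable_device(hb, dev);
	return 0;
}

/* Example bridge hook with a made-up RID-to-StreamID mapping. */
static int bridge_map_rid(struct host_bridge *hb, struct hb_dev *dev)
{
	(void)hb;
	dev->stream_id = 0x100 + (dev->rid & 0xff);
	return 0;
}
```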

- Extend struct pci_ecam_ops with .enable_device() and
.disable_device() hooks so drivers that use pci_host_common_probe()
instead of their own .probe() have a way to set the
.enable_device() callbacks (Marc Zyngier)

- Drop 'No bus range found' message so we don't complain when DTs
don't specify the default 'bus-range = <0x00 0xff>' (Bjorn Helgaas)

- Rename the drivers/pci/of_property.c struct of_pci_range to
of_pci_range_entry to avoid confusion with the global of_pci_range
in include/linux/of_address.h (Bjorn Helgaas)

Driver binding:

- Update resource request API documentation to encourage callers to
supply a driver name when requesting resources (Philipp Stanner)

- Export pci_intx_unmanaged() and pcim_intx() (always managed) so
callers of pci_intx() (which is sometimes managed) can explicitly
choose the one they need (Philipp Stanner)

- Convert drivers from pci_intx() to always-managed pcim_intx() or
never-managed pci_intx_unmanaged(): amd_sfh, ata (ahci, ata_piix,
pata_rdc, sata_sil24, sata_sis, sata_uli, sata_vsc), bnx2x, bna,
ntb, qtnfmac, rtsx, tifm_7xx1, vfio, xen-pciback (Philipp Stanner)

- Remove pci_intx_unmanaged() since pci_intx() is now always
unmanaged and pcim_intx() is always managed (Philipp Stanner)
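
The managed/unmanaged split can be illustrated with a toy devres model (all names invented, and far simpler than the kernel's real devres list): the managed variant registers an undo action that the core runs automatically at driver detach, while the unmanaged variant leaves cleanup entirely to the caller:

```c
#include <stddef.h>

/* Toy device with a single devres-style cleanup slot. */
struct mock_pdev {
	int intx_enabled;
	void (*release)(struct mock_pdev *);
};

/* Unmanaged (like pci_intx() after this series): flip state only. */
static void mock_intx_unmanaged(struct mock_pdev *p, int enable)
{
	p->intx_enabled = enable;
}

static void mock_intx_restore(struct mock_pdev *p)
{
	p->intx_enabled = 0;
}

/* Managed (like pcim_intx()): flip state and register an undo action. */
static void mock_intx_managed(struct mock_pdev *p, int enable)
{
	mock_intx_unmanaged(p, enable);
	p->release = mock_intx_restore;
}

/* Core side of driver detach: run the registered action, if any. */
static void mock_driver_detach(struct mock_pdev *p)
{
	if (p->release) {
		p->release(p);
		p->release = NULL;
	}
}
```

The original pci_intx() behaved like one or the other depending on context, which is exactly the ambiguity the split removes.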

Error handling:

- Unexport pcie_read_tlp_log() to encourage drivers to use PCI core
logging rather than building their own (Ilpo Järvinen)

- Move TLP Log handling to its own file (Ilpo Järvinen)

- Always store the number of supported End-End TLP Prefixes so we can
read the correct number of DWORDs from the TLP Prefix Log (Ilpo
Järvinen)

- Read TLP Prefixes in addition to the Header Log in
pcie_read_tlp_log() (Ilpo Järvinen)

- Add pcie_print_tlp_log() to consolidate printing of TLP Header and
Prefix Log (Ilpo Järvinen)
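
A rough sketch of the variable-length read (invented names; the real code lives in the PCI core's TLP log handling): the Header Log is always four DWORDs, followed by as many End-End TLP Prefix DWORDs as the device implements, which is why the supported-prefix count has to be stored up front:

```c
#include <stdint.h>

#define TLP_HDR_DWORDS 4
#define TLP_MAX_PREFIXES 4

struct tlp_log {
	uint32_t dw[TLP_HDR_DWORDS];
	uint32_t prefix[TLP_MAX_PREFIXES];
	int prefix_count;	/* supported End-End TLP Prefixes, 0..4 */
};

/*
 * Read the 4-DWORD Header Log plus exactly prefix_count Prefix Log
 * DWORDs from a raw register area laid out as header then prefixes.
 */
static int read_tlp_log(const uint32_t *regs, int prefix_count,
			struct tlp_log *log)
{
	int i;

	if (prefix_count < 0 || prefix_count > TLP_MAX_PREFIXES)
		return -1;
	for (i = 0; i < TLP_HDR_DWORDS; i++)
		log->dw[i] = regs[i];
	for (i = 0; i < prefix_count; i++)
		log->prefix[i] = regs[TLP_HDR_DWORDS + i];
	log->prefix_count = prefix_count;
	return 0;
}
```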

- Quirk the Intel Raptor Lake-P PIO log size to accommodate vendor
BIOSes that don't configure it correctly (Takashi Iwai)

ASPM:

- Save parent L1 PM Substates config so when we restore it along with
an endpoint's config, the parent info isn't junk (Jian-Hong Pan)

Power management:

- Avoid D3 for Root Ports on TUXEDO Sirius Gen1 with old BIOS because
the system can't wake up from suspend (Werner Sembach)

Endpoint framework:

- Destroy the EPC device in devm_pci_epc_destroy(), which previously
didn't call devres_release() (Zijun Hu)

- Finish virtual EP removal in pci_epf_remove_vepf(), which
previously caused a subsequent pci_epf_add_vepf() to fail with
-EBUSY (Zijun Hu)

- Write BAR_MASK before iATU registers in pci_epc_set_bar() so we
don't depend on the BAR_MASK reset value being larger than the
requested BAR size (Niklas Cassel)

- Prevent changing BAR size/flags in pci_epc_set_bar() to prevent
reads from bypassing the iATU if we reduced the BAR size (Niklas
Cassel)

- Verify address alignment when programming iATU so we don't attempt
to write bits that are read-only because of the BAR size, which
could lead to directing accesses to the wrong address (Niklas
Cassel)

- Implement pci_epc_features for artpec6 so that every driver now
supports it and EPC core code can rely on it (Niklas Cassel)

- Check for BARs of fixed size to prevent endpoint drivers from
trying to change their size (Niklas Cassel)

- Verify that requested BAR size is a power of two when endpoint
driver sets the BAR (Niklas Cassel)
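
Both BAR validations can be sketched in a few lines (hypothetical helper, not the actual pci_epc_set_bar() code): reject a request that changes a hardware-fixed BAR size, and reject sizes that are not a power of two, since a BAR can only decode 2^n bytes:

```c
#include <stdint.h>
#include <stdbool.h>

static bool is_power_of_2(uint64_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Validate an endpoint driver's requested BAR size. */
static int check_bar_size(uint64_t requested, uint64_t fixed_size)
{
	if (fixed_size && requested != fixed_size)
		return -1;	/* BAR size is fixed in hardware */
	if (!is_power_of_2(requested))
		return -1;	/* BARs always decode power-of-two sizes */
	return 0;
}
```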

Endpoint framework tests:

- Clear pci-epf-test dma_chan_rx, not dma_chan_tx, after freeing
dma_chan_rx (Mohamed Khalfella)

- Correct the DMA MEMCPY test so it doesn't fail if the Endpoint
supports both DMA_PRIVATE and DMA_MEMCPY (Manivannan Sadhasivam)

- Add pci-epf-test and pci_endpoint_test support for capabilities
(Niklas Cassel)

- Add Endpoint test for consecutive BARs (Niklas Cassel)

- Remove redundant comparison from Endpoint BAR test because a > 1MB
BAR can always be exactly covered by iterating with a 1MB buffer
(Hans Zhang)
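
The removed comparison relied on simple arithmetic: BAR sizes are powers of two, so any BAR larger than 1MB is an exact multiple of 1MB and a 1MB buffer covers it with no remainder. A throwaway sketch (names invented):

```c
#include <stdint.h>

#define SZ_1M (1u << 20)

/* Number of 1MB chunks needed to cover a power-of-two BAR. */
static uint32_t chunks_1m(uint32_t bar_size)
{
	return bar_size / SZ_1M;
}

/* For a power-of-two bar_size > 1MB this is always zero. */
static uint32_t remainder_1m(uint32_t bar_size)
{
	return bar_size % SZ_1M;
}
```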

- Move and convert PCI Endpoint tests from tools/pci to Kselftests
(Manivannan Sadhasivam)

Apple PCIe controller driver:

- Convert StreamID mapping configuration from a bus notifier to the
.enable_device() and .disable_device() callbacks (Marc Zyngier)

Freescale i.MX6 PCIe controller driver:

- Add Requester ID to StreamID mapping configuration when enabling
devices (Frank Li)

- Use DWC core suspend/resume functions for imx6 (Frank Li)

- Add suspend/resume support for i.MX8MQ, i.MX8Q, and i.MX95 (Richard
Zhu)

- Add DT compatible string 'fsl,imx8q-pcie-ep' and driver support for
i.MX8Q series (i.MX8QM, i.MX8QXP, and i.MX8DXL) Endpoints (Frank
Li)

- Add DT binding for optional i.MX95 Refclk and driver support to
enable it if the platform hasn't enabled it (Richard Zhu)

- Configure PHY based on controller being in Root Complex or Endpoint
mode (Frank Li)

- Rely on dbi2 and iATU base addresses from DT via
dw_pcie_get_resources() instead of hardcoding them (Richard Zhu)

- Deassert apps_reset in imx_pcie_deassert_core_reset() since it is
asserted in imx_pcie_assert_core_reset() (Richard Zhu)

- Add missing reference clock enable or disable logic for IMX6SX,
IMX7D, IMX8MM (Richard Zhu)

- Remove redundant imx7d_pcie_init_phy() since
imx7d_pcie_enable_ref_clk() does the same thing (Richard Zhu)

Freescale Layerscape PCIe controller driver:

- Simplify by using syscon_regmap_lookup_by_phandle_args() instead
of syscon_regmap_lookup_by_phandle() followed by
of_property_read_u32_array() (Krzysztof Kozlowski)

Marvell MVEBU PCIe controller driver:

- Add MODULE_DEVICE_TABLE() to enable module autoloading (Liao Chen)

MediaTek PCIe Gen3 controller driver:

- Use clk_bulk_prepare_enable() instead of separate
clk_bulk_prepare() and clk_bulk_enable() (Lorenzo Bianconi)

- Rearrange reset assert/deassert so they're both done in the
*_power_up() callbacks (Lorenzo Bianconi)

- Document that Airoha EN7581 requires PHY init and power-on before
PHY reset deassert, unlike other MediaTek Gen3 controllers (Lorenzo
Bianconi)

- Move Airoha EN7581 post-reset delay from the en7581 clock .enable()
method to mtk_pcie_en7581_power_up() (Lorenzo Bianconi)

- Sleep instead of delay during Airoha EN7581 power-up, since this is
a non-atomic context (Lorenzo Bianconi)

- Skip PERST# assertion on Airoha EN7581 during probe and
suspend/resume to avoid a hardware defect (Lorenzo Bianconi)

- Enable async probe to reduce system startup time (Douglas Anderson)

Microchip PolarFire PCIe controller driver:

- Set up the inbound address translation based on whether the
platform allows coherent or non-coherent DMA (Daire McNamara)

- Update DT binding such that platforms are DMA-coherent by default
and must specify 'dma-noncoherent' if needed (Conor Dooley)

Mobiveil PCIe controller driver:

- Convert mobiveil-pcie.txt to YAML and update 'interrupt-names'
and 'reg-names' (Frank Li)

Qualcomm PCIe controller driver:

- Add DT SM8550 and SM8650 optional 'global' interrupt for link
events (Neil Armstrong)

- Add DT 'compatible' strings for IPQ5424 PCIe controller (Manikanta
Mylavarapu)

- If 'global' IRQ is supported for detection of Link Up events, tell
DWC core not to wait for link up (Krishna chaitanya chundru)

Renesas R-Car PCIe controller driver:

- Avoid passing stack buffer as resource name (King Dix)

Rockchip PCIe controller driver:

- Simplify clock and reset handling by using bulk interfaces (Anand
Moon)

- Pass typed rockchip_pcie (not void) pointer to
rockchip_pcie_disable_clocks() (Anand Moon)

- Return -ENOMEM, not success, when pci_epc_mem_alloc_addr() fails
(Dan Carpenter)

Rockchip DesignWare PCIe controller driver:

- Use dll_link_up IRQ to detect Link Up and enumerate devices so
users don't have to manually rescan (Niklas Cassel)

- Tell DWC core not to wait for link up since the 'sys' interrupt is
required and detects Link Up events (Niklas Cassel)

Synopsys DesignWare PCIe controller driver:

- Don't wait for link up in DWC core if driver can detect Link Up
event (Krishna chaitanya chundru)

- Update ICC and OPP votes after Link Up events (Krishna chaitanya
chundru)

- Always stop link in dw_pcie_suspend_noirq(), which is required at
least for i.MX8QM to re-establish link on resume (Richard Zhu)

- Drop racy and unnecessary LTSSM state check before sending
PME_TURN_OFF message in dw_pcie_suspend_noirq() (Richard Zhu)

- Add struct of_pci_range.parent_bus_addr for devices that need their
immediate parent bus address, not the CPU address, e.g., to program
an internal Address Translation Unit (iATU) (Frank Li)
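
Schematically (a simplified stand-in for struct of_pci_range, with illustrative field comments): a controller behind an intermediate bus may need the parent-bus view of a window rather than the CPU physical address when programming its iATU:

```c
#include <stdint.h>

/* Simplified range: one translation window as seen from three sides. */
struct pci_range_sketch {
	uint64_t cpu_addr;		/* address in CPU physical space */
	uint64_t parent_bus_addr;	/* same window on the parent bus */
	uint64_t pci_addr;		/* address on the PCI bus */
	uint64_t size;
};

/* Pick the iATU input address: parent-bus address when provided,
 * otherwise fall back to the CPU address (made-up policy for the
 * sketch). */
static uint64_t iatu_input_addr(const struct pci_range_sketch *r)
{
	return r->parent_bus_addr ? r->parent_bus_addr : r->cpu_addr;
}
```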

TI DRA7xx PCIe controller driver:

- Simplify by using syscon_regmap_lookup_by_phandle_args() instead of
syscon_regmap_lookup_by_phandle() followed by
of_parse_phandle_with_fixed_args() or of_property_read_u32_index()
(Krzysztof Kozlowski)

Xilinx Versal CPM PCIe controller driver:

- Add DT binding and driver support for Xilinx Versal CPM5
(Thippeswamy Havalige)

MicroSemi Switchtec management driver:

- Add Microchip PCI100X device IDs (Rakesh Babu Saladi)

Miscellaneous:

- Move reset related sysfs code from pci.c to pci-sysfs.c where other
similar code lives (Ilpo Järvinen)

- Simplify reset_method_store() memory management by using __free()
instead of explicit kfree() cleanup (Ilpo Järvinen)

- Constify struct bin_attribute for sysfs, VPD, P2PDMA, and the IBM
ACPI hotplug driver (Thomas Weißschuh)

- Remove redundant PCI_VSEC_HDR and PCI_VSEC_HDR_LEN_SHIFT (Dongdong
Zhang)

- Correct documentation of the 'config_acs=' kernel parameter
(Akihiko Odaki)"

* tag 'pci-v6.14-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (111 commits)
PCI: Batch BAR sizing operations
dt-bindings: PCI: microchip,pcie-host: Allow dma-noncoherent
PCI: microchip: Set inbound address translation for coherent or non-coherent mode
Documentation: Fix pci=config_acs= example
PCI: Remove redundant PCI_VSEC_HDR and PCI_VSEC_HDR_LEN_SHIFT
PCI: Don't include 'pm_wakeup.h' directly
selftests: pci_endpoint: Migrate to Kselftest framework
selftests: Move PCI Endpoint tests from tools/pci to Kselftests
misc: pci_endpoint_test: Fix IOCTL return value
dt-bindings: PCI: qcom: Document the IPQ5424 PCIe controller
dt-bindings: PCI: qcom,pcie-sm8550: Document 'global' interrupt
dt-bindings: PCI: mobiveil: Convert mobiveil-pcie.txt to YAML
PCI: switchtec: Add Microchip PCI100X device IDs
misc: pci_endpoint_test: Remove redundant 'remainder' test
misc: pci_endpoint_test: Add consecutive BAR test
misc: pci_endpoint_test: Add support for capabilities
PCI: endpoint: pci-epf-test: Add support for capabilities
PCI: endpoint: pci-epf-test: Fix check for DMA MEMCPY test
PCI: endpoint: pci-epf-test: Set dma_chan_rx pointer to NULL on error
PCI: dwc: Simplify config resource lookup
...

+2248 -1670
+69 -99
Documentation/PCI/endpoint/pci-test-howto.rst
···
 
  # echo 0x104c > functions/pci_epf_test/func1/vendorid
  # echo 0xb500 > functions/pci_epf_test/func1/deviceid
- # echo 16 > functions/pci_epf_test/func1/msi_interrupts
- # echo 8 > functions/pci_epf_test/func1/msix_interrupts
+ # echo 32 > functions/pci_epf_test/func1/msi_interrupts
+ # echo 2048 > functions/pci_epf_test/func1/msix_interrupts
 
 
 Binding pci-epf-test Device to EP Controller
···
 Using Endpoint Test function Device
 -----------------------------------
 
-pcitest.sh added in tools/pci/ can be used to run all the default PCI endpoint
-tests. To compile this tool the following commands should be used::
+Kselftest added in tools/testing/selftests/pci_endpoint can be used to run all
+the default PCI endpoint tests. To build the Kselftest for PCI endpoint
+subsystem, the following commands should be used::
 
  # cd <kernel-dir>
- # make -C tools/pci
+ # make -C tools/testing/selftests/pci_endpoint
 
 or if you desire to compile and install in your system::
 
  # cd <kernel-dir>
- # make -C tools/pci install
+ # make -C tools/testing/selftests/pci_endpoint INSTALL_PATH=/usr/bin install
 
-The tool and script will be located in <rootfs>/usr/bin/
+The test will be located in <rootfs>/usr/bin/
 
-
-pcitest.sh Output
-~~~~~~~~~~~~~~~~~
+Kselftest Output
+~~~~~~~~~~~~~~~~
 ::
 
- # pcitest.sh
- BAR tests
+ # pci_endpoint_test
+ TAP version 13
+ 1..16
+ # Starting 16 tests from 9 test cases.
+ # RUN pci_ep_bar.BAR0.BAR_TEST ...
+ # OK pci_ep_bar.BAR0.BAR_TEST
+ ok 1 pci_ep_bar.BAR0.BAR_TEST
+ # RUN pci_ep_bar.BAR1.BAR_TEST ...
+ # OK pci_ep_bar.BAR1.BAR_TEST
+ ok 2 pci_ep_bar.BAR1.BAR_TEST
+ # RUN pci_ep_bar.BAR2.BAR_TEST ...
+ # OK pci_ep_bar.BAR2.BAR_TEST
+ ok 3 pci_ep_bar.BAR2.BAR_TEST
+ # RUN pci_ep_bar.BAR3.BAR_TEST ...
+ # OK pci_ep_bar.BAR3.BAR_TEST
+ ok 4 pci_ep_bar.BAR3.BAR_TEST
+ # RUN pci_ep_bar.BAR4.BAR_TEST ...
+ # OK pci_ep_bar.BAR4.BAR_TEST
+ ok 5 pci_ep_bar.BAR4.BAR_TEST
+ # RUN pci_ep_bar.BAR5.BAR_TEST ...
+ # OK pci_ep_bar.BAR5.BAR_TEST
+ ok 6 pci_ep_bar.BAR5.BAR_TEST
+ # RUN pci_ep_basic.CONSECUTIVE_BAR_TEST ...
+ # OK pci_ep_basic.CONSECUTIVE_BAR_TEST
+ ok 7 pci_ep_basic.CONSECUTIVE_BAR_TEST
+ # RUN pci_ep_basic.LEGACY_IRQ_TEST ...
+ # OK pci_ep_basic.LEGACY_IRQ_TEST
+ ok 8 pci_ep_basic.LEGACY_IRQ_TEST
+ # RUN pci_ep_basic.MSI_TEST ...
+ # OK pci_ep_basic.MSI_TEST
+ ok 9 pci_ep_basic.MSI_TEST
+ # RUN pci_ep_basic.MSIX_TEST ...
+ # OK pci_ep_basic.MSIX_TEST
+ ok 10 pci_ep_basic.MSIX_TEST
+ # RUN pci_ep_data_transfer.memcpy.READ_TEST ...
+ # OK pci_ep_data_transfer.memcpy.READ_TEST
+ ok 11 pci_ep_data_transfer.memcpy.READ_TEST
+ # RUN pci_ep_data_transfer.memcpy.WRITE_TEST ...
+ # OK pci_ep_data_transfer.memcpy.WRITE_TEST
+ ok 12 pci_ep_data_transfer.memcpy.WRITE_TEST
+ # RUN pci_ep_data_transfer.memcpy.COPY_TEST ...
+ # OK pci_ep_data_transfer.memcpy.COPY_TEST
+ ok 13 pci_ep_data_transfer.memcpy.COPY_TEST
+ # RUN pci_ep_data_transfer.dma.READ_TEST ...
+ # OK pci_ep_data_transfer.dma.READ_TEST
+ ok 14 pci_ep_data_transfer.dma.READ_TEST
+ # RUN pci_ep_data_transfer.dma.WRITE_TEST ...
+ # OK pci_ep_data_transfer.dma.WRITE_TEST
+ ok 15 pci_ep_data_transfer.dma.WRITE_TEST
+ # RUN pci_ep_data_transfer.dma.COPY_TEST ...
+ # OK pci_ep_data_transfer.dma.COPY_TEST
+ ok 16 pci_ep_data_transfer.dma.COPY_TEST
+ # PASSED: 16 / 16 tests passed.
+ # Totals: pass:16 fail:0 xfail:0 xpass:0 skip:0 error:0
 
-BAR0: OKAY
-BAR1: OKAY
-BAR2: OKAY
-BAR3: OKAY
-BAR4: NOT OKAY
-BAR5: NOT OKAY
 
-Interrupt tests
+Testcase 16 (pci_ep_data_transfer.dma.COPY_TEST) will fail for most of the DMA
+capable endpoint controllers due to the absence of the MEMCPY over DMA. For such
+controllers, it is advisable to skip this testcase using this
+command::
 
-SET IRQ TYPE TO LEGACY: OKAY
-LEGACY IRQ: NOT OKAY
-SET IRQ TYPE TO MSI: OKAY
-MSI1: OKAY
-MSI2: OKAY
-MSI3: OKAY
-MSI4: OKAY
-MSI5: OKAY
-MSI6: OKAY
-MSI7: OKAY
-MSI8: OKAY
-MSI9: OKAY
-MSI10: OKAY
-MSI11: OKAY
-MSI12: OKAY
-MSI13: OKAY
-MSI14: OKAY
-MSI15: OKAY
-MSI16: OKAY
-MSI17: NOT OKAY
-MSI18: NOT OKAY
-MSI19: NOT OKAY
-MSI20: NOT OKAY
-MSI21: NOT OKAY
-MSI22: NOT OKAY
-MSI23: NOT OKAY
-MSI24: NOT OKAY
-MSI25: NOT OKAY
-MSI26: NOT OKAY
-MSI27: NOT OKAY
-MSI28: NOT OKAY
-MSI29: NOT OKAY
-MSI30: NOT OKAY
-MSI31: NOT OKAY
-MSI32: NOT OKAY
-SET IRQ TYPE TO MSI-X: OKAY
-MSI-X1: OKAY
-MSI-X2: OKAY
-MSI-X3: OKAY
-MSI-X4: OKAY
-MSI-X5: OKAY
-MSI-X6: OKAY
-MSI-X7: OKAY
-MSI-X8: OKAY
-MSI-X9: NOT OKAY
-MSI-X10: NOT OKAY
-MSI-X11: NOT OKAY
-MSI-X12: NOT OKAY
-MSI-X13: NOT OKAY
-MSI-X14: NOT OKAY
-MSI-X15: NOT OKAY
-MSI-X16: NOT OKAY
-[...]
-MSI-X2047: NOT OKAY
-MSI-X2048: NOT OKAY
-
-Read Tests
-
-SET IRQ TYPE TO MSI: OKAY
-READ ( 1 bytes): OKAY
-READ ( 1024 bytes): OKAY
-READ ( 1025 bytes): OKAY
-READ (1024000 bytes): OKAY
-READ (1024001 bytes): OKAY
-
-Write Tests
-
-WRITE ( 1 bytes): OKAY
-WRITE ( 1024 bytes): OKAY
-WRITE ( 1025 bytes): OKAY
-WRITE (1024000 bytes): OKAY
-WRITE (1024001 bytes): OKAY
-
-Copy Tests
-
-COPY ( 1 bytes): OKAY
-COPY ( 1024 bytes): OKAY
-COPY ( 1025 bytes): OKAY
-COPY (1024000 bytes): OKAY
-COPY (1024001 bytes): OKAY
+ # pci_endpoint_test -f pci_ep_bar -f pci_ep_basic -v memcpy -T COPY_TEST -v dma
+1 -1
Documentation/admin-guide/kernel-parameters.txt
···
 '1' – force enabled
 'x' – unchanged
 For example,
-pci=config_acs=10x
+pci=config_acs=10x@pci:0:0
 would configure all devices that support
 ACS to enable P2P Request Redirect, disable
 Translation Blocking, and leave Source
+2 -2
Documentation/devicetree/bindings/pci/fsl,imx6q-pcie-common.yaml
···
 properties:
   clocks:
     minItems: 3
-    maxItems: 4
+    maxItems: 5
 
   clock-names:
     minItems: 3
-    maxItems: 4
+    maxItems: 5
 
   num-lanes:
     const: 1
+38 -1
Documentation/devicetree/bindings/pci/fsl,imx6q-pcie-ep.yaml
···
       - fsl,imx8mm-pcie-ep
       - fsl,imx8mq-pcie-ep
       - fsl,imx8mp-pcie-ep
+      - fsl,imx8q-pcie-ep
       - fsl,imx95-pcie-ep
 
   clocks:
···
       properties:
         compatible:
           enum:
+            - fsl,imx8q-pcie-ep
+    then:
+      properties:
+        reg:
+          maxItems: 2
+        reg-names:
+          items:
+            - const: dbi
+            - const: addr_space
+
+  - if:
+      properties:
+        compatible:
+          enum:
             - fsl,imx95-pcie-ep
     then:
       properties:
···
       properties:
         clocks:
           minItems: 4
+          maxItems: 4
         clock-names:
           items:
             - const: pcie
             - const: pcie_bus
             - const: pcie_phy
             - const: pcie_aux
-    else:
+
+  - if:
+      properties:
+        compatible:
+          enum:
+            - fsl,imx8mm-pcie-ep
+            - fsl,imx8mp-pcie-ep
+    then:
       properties:
         clocks:
           maxItems: 3
···
             - const: pcie_bus
             - const: pcie_aux
 
+  - if:
+      properties:
+        compatible:
+          enum:
+            - fsl,imxq-pcie-ep
+    then:
+      properties:
+        clocks:
+          maxItems: 3
+        clock-names:
+          items:
+            - const: dbi
+            - const: mstr
+            - const: slv
 
 unevaluatedProperties: false
 
+21 -4
Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.yaml
···
       - description: PCIe PHY clock.
       - description: Additional required clock entry for imx6sx-pcie,
           imx6sx-pcie-ep, imx8mq-pcie, imx8mq-pcie-ep.
+      - description: PCIe reference clock.
 
   clock-names:
     minItems: 3
-    maxItems: 4
+    maxItems: 5
 
   interrupts:
     items:
···
     then:
       properties:
         clocks:
-          minItems: 4
+          maxItems: 4
         clock-names:
           items:
             - const: pcie
···
         compatible:
           enum:
             - fsl,imx8mq-pcie
-            - fsl,imx95-pcie
     then:
       properties:
         clocks:
-          minItems: 4
+          maxItems: 4
         clock-names:
           items:
             - const: pcie
···
             - const: dbi
             - const: mstr
             - const: slv
+
+  - if:
+      properties:
+        compatible:
+          enum:
+            - fsl,imx95-pcie
+    then:
+      properties:
+        clocks:
+          maxItems: 5
+        clock-names:
+          items:
+            - const: pcie
+            - const: pcie_bus
+            - const: pcie_phy
+            - const: pcie_aux
+            - const: ref
 
 unevaluatedProperties: false
 
-52
Documentation/devicetree/bindings/pci/layerscape-pcie-gen4.txt
···
-NXP Layerscape PCIe Gen4 controller
-
-This PCIe controller is based on the Mobiveil PCIe IP and thus inherits all
-the common properties defined in mobiveil-pcie.txt.
-
-Required properties:
-- compatible: should contain the platform identifier such as:
-  "fsl,lx2160a-pcie"
-- reg: base addresses and lengths of the PCIe controller register blocks.
-  "csr_axi_slave": Bridge config registers
-  "config_axi_slave": PCIe controller registers
-- interrupts: A list of interrupt outputs of the controller. Must contain an
-  entry for each entry in the interrupt-names property.
-- interrupt-names: It could include the following entries:
-  "intr": The interrupt that is asserted for controller interrupts
-  "aer": Asserted for aer interrupt when chip support the aer interrupt with
-  none MSI/MSI-X/INTx mode,but there is interrupt line for aer.
-  "pme": Asserted for pme interrupt when chip support the pme interrupt with
-  none MSI/MSI-X/INTx mode,but there is interrupt line for pme.
-- dma-coherent: Indicates that the hardware IP block can ensure the coherency
-  of the data transferred from/to the IP block. This can avoid the software
-  cache flush/invalid actions, and improve the performance significantly.
-- msi-parent : See the generic MSI binding described in
-  Documentation/devicetree/bindings/interrupt-controller/msi.txt.
-
-Example:
-
-	pcie@3400000 {
-		compatible = "fsl,lx2160a-pcie";
-		reg = <0x00 0x03400000 0x0 0x00100000   /* controller registers */
-		       0x80 0x00000000 0x0 0x00001000>; /* configuration space */
-		reg-names = "csr_axi_slave", "config_axi_slave";
-		interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* AER interrupt */
-			     <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* PME interrupt */
-			     <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>; /* controller interrupt */
-		interrupt-names = "aer", "pme", "intr";
-		#address-cells = <3>;
-		#size-cells = <2>;
-		device_type = "pci";
-		apio-wins = <8>;
-		ppio-wins = <8>;
-		dma-coherent;
-		bus-range = <0x0 0xff>;
-		msi-parent = <&its>;
-		ranges = <0x82000000 0x0 0x40000000 0x80 0x40000000 0x0 0x40000000>;
-		#interrupt-cells = <1>;
-		interrupt-map-mask = <0 0 0 7>;
-		interrupt-map = <0000 0 0 1 &gic 0 0 GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
-				<0000 0 0 2 &gic 0 0 GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
-				<0000 0 0 3 &gic 0 0 GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
-				<0000 0 0 4 &gic 0 0 GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>;
-	};
+173
Documentation/devicetree/bindings/pci/mbvl,gpex40-pcie.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/pci/mbvl,gpex40-pcie.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Mobiveil AXI PCIe Host Bridge
+
+maintainers:
+  - Frank Li <Frank Li@nxp.com>
+
+description:
+  Mobiveil's GPEX 4.0 is a PCIe Gen4 host bridge IP. This configurable IP
+  has up to 8 outbound and inbound windows for address translation.
+
+  NXP Layerscape PCIe Gen4 controller (Deprecated) base on Mobiveil's GPEX 4.0.
+
+properties:
+  compatible:
+    enum:
+      - fsl,lx2160a-pcie
+      - mbvl,gpex40-pcie
+
+  reg:
+    items:
+      - description: PCIe controller registers
+      - description: Bridge config registers
+      - description: GPIO registers to control slot power
+      - description: MSI registers
+    minItems: 2
+
+  reg-names:
+    items:
+      - const: csr_axi_slave
+      - const: config_axi_slave
+      - const: gpio_slave
+      - const: apb_csr
+    minItems: 2
+
+  apio-wins:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: |
+      number of requested APIO outbound windows
+      1. Config window
+      2. Memory window
+    default: 2
+    maximum: 256
+
+  ppio-wins:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: number of requested PPIO inbound windows
+    default: 1
+    maximum: 256
+
+  interrupt-controller: true
+
+  "#interrupt-cells":
+    const: 1
+
+  interrupts:
+    minItems: 1
+    maxItems: 3
+
+  interrupt-names:
+    minItems: 1
+    maxItems: 3
+
+  dma-coherent: true
+
+  msi-parent: true
+
+required:
+  - compatible
+  - reg
+  - reg-names
+
+allOf:
+  - $ref: /schemas/pci/pci-host-bridge.yaml#
+  - if:
+      properties:
+        compatible:
+          enum:
+            - fsl,lx2160a-pcie
+    then:
+      properties:
+        reg:
+          maxItems: 2
+
+        reg-names:
+          maxItems: 2
+
+        interrupts:
+          minItems: 3
+
+        interrupt-names:
+          items:
+            - const: aer
+            - const: pme
+            - const: intr
+    else:
+      properties:
+        dma-coherent: false
+        msi-parent: false
+        interrupts:
+          maxItems: 1
+        interrupt-names: false
+
+unevaluatedProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    pcie@b0000000 {
+        compatible = "mbvl,gpex40-pcie";
+        reg = <0xb0000000 0x00010000>,
+              <0xa0000000 0x00001000>,
+              <0xff000000 0x00200000>,
+              <0xb0010000 0x00001000>;
+        reg-names = "csr_axi_slave",
+                    "config_axi_slave",
+                    "gpio_slave",
+                    "apb_csr";
+        ranges = <0x83000000 0 0x00000000 0xa8000000 0 0x8000000>;
+        #address-cells = <3>;
+        #size-cells = <2>;
+        device_type = "pci";
+        apio-wins = <2>;
+        ppio-wins = <1>;
+        bus-range = <0x00 0xff>;
+        interrupt-controller;
+        #interrupt-cells = <1>;
+        interrupt-parent = <&gic>;
+        interrupts = <GIC_SPI 89 IRQ_TYPE_LEVEL_HIGH>;
+        interrupt-map-mask = <0 0 0 7>;
+        interrupt-map = <0 0 0 0 &pci_express 0>,
+                        <0 0 0 1 &pci_express 1>,
+                        <0 0 0 2 &pci_express 2>,
+                        <0 0 0 3 &pci_express 3>;
+    };
+
+  - |
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    soc {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        pcie@3400000 {
+            compatible = "fsl,lx2160a-pcie";
+            reg = <0x00 0x03400000 0x0 0x00100000   /* controller registers */
+                   0x80 0x00000000 0x0 0x00001000>; /* configuration space */
+            reg-names = "csr_axi_slave", "config_axi_slave";
+            ranges = <0x82000000 0x0 0x40000000 0x80 0x40000000 0x0 0x40000000>;
+            interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* AER interrupt */
+                         <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>, /* PME interrupt */
+                         <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>; /* controller interrupt */
+            interrupt-names = "aer", "pme", "intr";
+            #address-cells = <3>;
+            #size-cells = <2>;
+            device_type = "pci";
+            apio-wins = <8>;
+            ppio-wins = <8>;
+            dma-coherent;
+            bus-range = <0x00 0xff>;
+            msi-parent = <&its>;
+            #interrupt-cells = <1>;
+            interrupt-map-mask = <0 0 0 7>;
+            interrupt-map = <0000 0 0 1 &gic 0 0 GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>,
+                            <0000 0 0 2 &gic 0 0 GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>,
+                            <0000 0 0 3 &gic 0 0 GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>,
+                            <0000 0 0 4 &gic 0 0 GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>;
+        };
+    };
+2
Documentation/devicetree/bindings/pci/microchip,pcie-host.yaml
···
     items:
       pattern: '^fic[0-3]$'
 
+  dma-coherent: true
+
   ranges:
     minItems: 1
     maxItems: 3
-72
Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
···
-* Mobiveil AXI PCIe Root Port Bridge DT description
-
-Mobiveil's GPEX 4.0 is a PCIe Gen4 root port bridge IP. This configurable IP
-has up to 8 outbound and inbound windows for the address translation.
-
-Required properties:
-- #address-cells: Address representation for root ports, set to <3>
-- #size-cells: Size representation for root ports, set to <2>
-- #interrupt-cells: specifies the number of cells needed to encode an
-  interrupt source. The value must be 1.
-- compatible: Should contain "mbvl,gpex40-pcie"
-- reg: Should contain PCIe registers location and length
-  Mandatory:
-  "config_axi_slave": PCIe controller registers
-  "csr_axi_slave": Bridge config registers
-  Optional:
-  "gpio_slave": GPIO registers to control slot power
-  "apb_csr": MSI registers
-
-- device_type: must be "pci"
-- apio-wins : number of requested apio outbound windows
-  default 2 outbound windows are configured -
-  1. Config window
-  2. Memory window
-- ppio-wins : number of requested ppio inbound windows
-  default 1 inbound memory window is configured.
-- bus-range: PCI bus numbers covered
-- interrupt-controller: identifies the node as an interrupt controller
-- #interrupt-cells: specifies the number of cells needed to encode an
-  interrupt source. The value must be 1.
-- interrupts: The interrupt line of the PCIe controller
-  last cell of this field is set to 4 to
-  denote it as IRQ_TYPE_LEVEL_HIGH type interrupt.
-- interrupt-map-mask,
-  interrupt-map: standard PCI properties to define the mapping of the
-  PCI interface to interrupt numbers.
-- ranges: ranges for the PCI memory regions (I/O space region is not
-  supported by hardware)
-  Please refer to the standard PCI bus binding document for a more
-  detailed explanation
-
-
-Example:
-++++++++
-	pcie0: pcie@a0000000 {
-		#address-cells = <3>;
-		#size-cells = <2>;
-		compatible = "mbvl,gpex40-pcie";
-		reg = <0xa0000000 0x00001000>,
-		      <0xb0000000 0x00010000>,
-		      <0xff000000 0x00200000>,
-		      <0xb0010000 0x00001000>;
-		reg-names = "config_axi_slave",
-			    "csr_axi_slave",
-			    "gpio_slave",
-			    "apb_csr";
-		device_type = "pci";
-		apio-wins = <2>;
-		ppio-wins = <1>;
-		bus-range = <0x00000000 0x000000ff>;
-		interrupt-controller;
-		interrupt-parent = <&gic>;
-		#interrupt-cells = <1>;
-		interrupts = < 0 89 4 >;
-		interrupt-map-mask = <0 0 0 7>;
-		interrupt-map = <0 0 0 0 &pci_express 0>,
-				<0 0 0 1 &pci_express 1>,
-				<0 0 0 2 &pci_express 2>,
-				<0 0 0 3 &pci_express 3>;
-		ranges = < 0x83000000 0 0x00000000 0xa8000000 0 0x8000000>;
-
-	};
+6 -3
Documentation/devicetree/bindings/pci/qcom,pcie-sm8550.yaml
···
 
   interrupts:
     minItems: 8
-    maxItems: 8
+    maxItems: 9
 
   interrupt-names:
+    minItems: 8
     items:
       - const: msi0
       - const: msi1
···
       - const: msi5
       - const: msi6
       - const: msi7
+      - const: global
 
   resets:
     minItems: 1
···
                  <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
                  <GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
                  <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>,
-                 <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
+                 <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
+                 <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
     interrupt-names = "msi0", "msi1", "msi2", "msi3",
-                      "msi4", "msi5", "msi6", "msi7";
+                      "msi4", "msi5", "msi6", "msi7", "global";
     #interrupt-cells = <1>;
     interrupt-map-mask = <0 0 0 0x7>;
     interrupt-map = <0 0 0 1 &intc 0 0 0 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */
+4
Documentation/devicetree/bindings/pci/qcom,pcie.yaml
···
       - qcom,pcie-sdm845
       - qcom,pcie-sdx55
   - items:
+      - enum:
+          - qcom,pcie-ipq5424
+      - const: qcom,pcie-ipq9574
+  - items:
       - const: qcom,pcie-msm8998
       - const: qcom,pcie-msm8996
 
+1
Documentation/devicetree/bindings/pci/xilinx-versal-cpm.yaml
···
     enum:
       - xlnx,versal-cpm-host-1.00
       - xlnx,versal-cpm5-host
+      - xlnx,versal-cpm5-host1
 
   reg:
     items:
+2 -3
MAINTAINERS
···
 M:	Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
-F:	Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
+F:	Documentation/devicetree/bindings/pci/mbvl,gpex40-pcie.yaml
 F:	drivers/pci/controller/mobiveil/pcie-mobiveil*
 
 PCI DRIVER FOR MVEBU (Marvell Armada 370 and Armada XP SOC support)
···
 L:	linux-pci@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
-F:	Documentation/devicetree/bindings/pci/layerscape-pcie-gen4.txt
 F:	drivers/pci/controller/mobiveil/pcie-layerscape-gen4.c
 
 PCI DRIVER FOR PLDA PCIE IP
···
 F:	Documentation/misc-devices/pci-endpoint-test.rst
 F:	drivers/misc/pci_endpoint_test.c
 F:	drivers/pci/endpoint/
-F:	tools/pci/
+F:	tools/testing/selftests/pci_endpoint/
 
 PCI ENHANCED ERROR HANDLING (EEH) FOR POWERPC
 M:	Mahesh J Salgaonkar <mahesh@linux.ibm.com>
+1 -1
arch/sparc/kernel/pci_common.c
···
 	int i, saw_mem, saw_io;
 	int num_pbm_ranges;
 
-	/* Corresponding generic code in of_pci_get_host_bridge_resources() */
+	/* Corresponds to generic devm_of_pci_get_host_bridge_resources() */
 
 	saw_mem = saw_io = 0;
 	pbm_ranges = of_get_property(pbm->op->dev.of_node, "ranges", &i);
+30
arch/x86/pci/fixup.c
···
 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1668, amd_rp_pme_resume);
 DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x1669, amd_rp_pme_suspend);
 DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1669, amd_rp_pme_resume);
+
+/*
+ * Putting PCIe root ports on Ryzen SoCs with USB4 controllers into D3hot
+ * may cause problems when the system attempts wake up from s2idle.
+ *
+ * On the TUXEDO Sirius 16 Gen 1 with a specific old BIOS this manifests as
+ * a system hang.
+ */
+static const struct dmi_system_id quirk_tuxeo_rp_d3_dmi_table[] = {
+	{
+		.matches = {
+			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
+			DMI_EXACT_MATCH(DMI_BOARD_NAME, "APX958"),
+			DMI_EXACT_MATCH(DMI_BIOS_VERSION, "V1.00A00_20240108"),
+		},
+	},
+	{}
+};
+
+static void quirk_tuxeo_rp_d3(struct pci_dev *pdev)
+{
+	struct pci_dev *root_pdev;
+
+	if (dmi_check_system(quirk_tuxeo_rp_d3_dmi_table)) {
+		root_pdev = pcie_find_root_port(pdev);
+		if (root_pdev)
+			root_pdev->dev_flags |= PCI_DEV_FLAGS_NO_D3;
+	}
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x1502, quirk_tuxeo_rp_d3);
 #endif /* CONFIG_SUSPEND */
+1 -1
drivers/ata/ahci.c
···
 
 	if (ahci_init_msi(pdev, n_ports, hpriv) < 0) {
 		/* legacy intx interrupts */
-		pci_intx(pdev, 1);
+		pcim_intx(pdev, 1);
 	}
 	hpriv->irq = pci_irq_vector(pdev, 0);
 
+1 -1
drivers/ata/ata_piix.c
···
 	 * message-signalled interrupts currently).
 	 */
 	if (port_flags & PIIX_FLAG_CHECKINTR)
-		pci_intx(pdev, 1);
+		pcim_intx(pdev, 1);
 
 	if (piix_check_450nx_errata(pdev)) {
 		/* This writes into the master table but it does not
+1 -1
drivers/ata/pata_rdc.c
···
 		return rc;
 	host->private_data = hpriv;
 
-	pci_intx(pdev, 1);
+	pcim_intx(pdev, 1);
 
 	host->flags |= ATA_HOST_PARALLEL_SCAN;
 
+1 -1
drivers/ata/sata_sil24.c
···
 
 	if (sata_sil24_msi && !pci_enable_msi(pdev)) {
 		dev_info(&pdev->dev, "Using MSI\n");
-		pci_intx(pdev, 0);
+		pcim_intx(pdev, 0);
 	}
 
 	pci_set_master(pdev);
+1 -1
drivers/ata/sata_sis.c
···
 	}
 
 	pci_set_master(pdev);
-	pci_intx(pdev, 1);
+	pcim_intx(pdev, 1);
 	return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt,
 				 IRQF_SHARED, &sis_sht);
 }
+1 -1
drivers/ata/sata_uli.c
···
 	}
 
 	pci_set_master(pdev);
-	pci_intx(pdev, 1);
+	pcim_intx(pdev, 1);
 	return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt,
 				 IRQF_SHARED, &uli_sht);
 }
+1 -1
drivers/ata/sata_vsc.c
···
 	pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, 0x80);
 
 	if (pci_enable_msi(pdev) == 0)
-		pci_intx(pdev, 0);
+		pcim_intx(pdev, 0);
 
 	/*
 	 * Config offset 0x98 is "Extended Control and Status Register 0"
-1
drivers/clk/clk-en7523.c
···
 	      REG_PCI_CONTROL_PERSTOUT;
 	val = readl(np_base + REG_PCI_CONTROL);
 	writel(val | mask, np_base + REG_PCI_CONTROL);
-	msleep(250);
 
 	return 0;
 }
+2 -2
drivers/hid/amd-sfh-hid/amd_sfh_pcie.c
···
 {
 	int rc;
 
-	pci_intx(privdata->pdev, true);
+	pcim_intx(privdata->pdev, true);
 
 	rc = devm_request_irq(&privdata->pdev->dev, privdata->pdev->irq,
 			      amd_sfh_irq_handler, 0, DRIVER_NAME, privdata);
···
 	struct amd_mp2_dev *mp2 = privdata;
 	amd_sfh_hid_client_deinit(privdata);
 	mp2->mp2_ops->stop_all(mp2);
-	pci_intx(mp2->pdev, false);
+	pcim_intx(mp2->pdev, false);
 	amd_sfh_clear_intr(mp2);
 }
 
+1 -1
drivers/hid/amd-sfh-hid/sfh1_1/amd_sfh_init.c
···
 	sfh_deinit_emp2();
 	amd_sfh_hid_client_deinit(privdata);
 	mp2->mp2_ops->stop_all(mp2);
-	pci_intx(mp2->pdev, false);
+	pcim_intx(mp2->pdev, false);
 	amd_sfh_clear_intr(mp2);
 }
 
+227 -127
drivers/misc/pci_endpoint_test.c
···
 #define PCI_ENDPOINT_TEST_FLAGS			0x2c
 #define FLAG_USE_DMA				BIT(0)
 
+#define PCI_ENDPOINT_TEST_CAPS			0x30
+#define CAP_UNALIGNED_ACCESS			BIT(0)
+
 #define PCI_DEVICE_ID_TI_AM654			0xb00c
 #define PCI_DEVICE_ID_TI_J7200			0xb00f
 #define PCI_DEVICE_ID_TI_AM64			0xb010
···
 	test->irq_type = IRQ_TYPE_UNDEFINED;
 }
 
-static bool pci_endpoint_test_alloc_irq_vectors(struct pci_endpoint_test *test,
+static int pci_endpoint_test_alloc_irq_vectors(struct pci_endpoint_test *test,
 						int type)
 {
-	int irq = -1;
+	int irq;
 	struct pci_dev *pdev = test->pdev;
 	struct device *dev = &pdev->dev;
-	bool res = true;
 
 	switch (type) {
 	case IRQ_TYPE_INTX:
 		irq = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_INTX);
-		if (irq < 0)
+		if (irq < 0) {
 			dev_err(dev, "Failed to get Legacy interrupt\n");
+			return irq;
+		}
+
 		break;
 	case IRQ_TYPE_MSI:
 		irq = pci_alloc_irq_vectors(pdev, 1, 32, PCI_IRQ_MSI);
-		if (irq < 0)
+		if (irq < 0) {
 			dev_err(dev, "Failed to get MSI interrupts\n");
+			return irq;
+		}
+
 		break;
 	case IRQ_TYPE_MSIX:
 		irq = pci_alloc_irq_vectors(pdev, 1, 2048, PCI_IRQ_MSIX);
-		if (irq < 0)
+		if (irq < 0) {
 			dev_err(dev, "Failed to get MSI-X interrupts\n");
+			return irq;
+		}
+
 		break;
 	default:
 		dev_err(dev, "Invalid IRQ type selected\n");
-	}
-
-	if (irq < 0) {
-		irq = 0;
-		res = false;
+		return -EINVAL;
 	}
 
 	test->irq_type = type;
 	test->num_irqs = irq;
 
-	return res;
+	return 0;
 }
 
 static void pci_endpoint_test_release_irq(struct pci_endpoint_test *test)
···
 	test->num_irqs = 0;
 }
 
-static bool pci_endpoint_test_request_irq(struct pci_endpoint_test *test)
+static int pci_endpoint_test_request_irq(struct pci_endpoint_test *test)
 {
 	int i;
-	int err;
+	int ret;
 	struct pci_dev *pdev = test->pdev;
 	struct device *dev = &pdev->dev;
 
 	for (i = 0; i < test->num_irqs; i++) {
-		err = devm_request_irq(dev, pci_irq_vector(pdev, i),
+		ret = devm_request_irq(dev, pci_irq_vector(pdev, i),
 				       pci_endpoint_test_irqhandler,
 				       IRQF_SHARED, test->name, test);
-		if (err)
+		if (ret)
 			goto fail;
 	}
 
-	return true;
+	return 0;
 
 fail:
 	switch (irq_type) {
···
 		break;
 	}
 
-	return false;
+	return ret;
 }
 
 static const u32 bar_test_pattern[] = {
···
 	return memcmp(write_buf, read_buf, size);
 }
 
-static bool pci_endpoint_test_bar(struct pci_endpoint_test *test,
+static int pci_endpoint_test_bar(struct pci_endpoint_test *test,
 				  enum pci_barno barno)
 {
-	int j, bar_size, buf_size, iters, remain;
+	int j, bar_size, buf_size, iters;
 	void *write_buf __free(kfree) = NULL;
 	void *read_buf __free(kfree) = NULL;
 	struct pci_dev *pdev = test->pdev;
 
 	if (!test->bar[barno])
-		return false;
+		return -ENOMEM;
 
 	bar_size = pci_resource_len(pdev, barno);
···
 
 	write_buf = kmalloc(buf_size, GFP_KERNEL);
 	if (!write_buf)
-		return false;
+		return -ENOMEM;
 
 	read_buf = kmalloc(buf_size, GFP_KERNEL);
 	if (!read_buf)
-		return false;
+		return -ENOMEM;
 
 	iters = bar_size / buf_size;
 	for (j = 0; j < iters; j++)
 		if (pci_endpoint_test_bar_memcmp(test, barno, buf_size * j,
 						 write_buf, read_buf, buf_size))
-			return false;
+			return -EIO;
 
-	remain = bar_size % buf_size;
-	if (remain)
-		if (pci_endpoint_test_bar_memcmp(test, barno, buf_size * iters,
-						 write_buf, read_buf, remain))
-			return false;
-
-	return true;
+	return 0;
 }
 
-static bool pci_endpoint_test_intx_irq(struct pci_endpoint_test *test)
+static u32 bar_test_pattern_with_offset(enum pci_barno barno, int offset)
+{
+	u32 val;
+
+	/* Keep the BAR pattern in the top byte. */
+	val = bar_test_pattern[barno] & 0xff000000;
+	/* Store the (partial) offset in the remaining bytes. */
+	val |= offset & 0x00ffffff;
+
+	return val;
+}
+
+static void pci_endpoint_test_bars_write_bar(struct pci_endpoint_test *test,
+					     enum pci_barno barno)
+{
+	struct pci_dev *pdev = test->pdev;
+	int j, size;
+
+	size = pci_resource_len(pdev, barno);
+
+	if (barno == test->test_reg_bar)
+		size = 0x4;
+
+	for (j = 0; j < size; j += 4)
+		writel_relaxed(bar_test_pattern_with_offset(barno, j),
+			       test->bar[barno] + j);
+}
+
+static int pci_endpoint_test_bars_read_bar(struct pci_endpoint_test *test,
+					   enum pci_barno barno)
+{
+	struct pci_dev *pdev = test->pdev;
+	struct device *dev = &pdev->dev;
+	int j, size;
+	u32 val;
+
+	size = pci_resource_len(pdev, barno);
+
+	if (barno == test->test_reg_bar)
+		size = 0x4;
+
+	for (j = 0; j < size; j += 4) {
+		u32 expected = bar_test_pattern_with_offset(barno, j);
+
+		val = readl_relaxed(test->bar[barno] + j);
+		if (val != expected) {
+			dev_err(dev,
+				"BAR%d incorrect data at offset: %#x, got: %#x expected: %#x\n",
+				barno, j, val, expected);
+			return -EIO;
+		}
+	}
+
+	return 0;
+}
+
+static int pci_endpoint_test_bars(struct pci_endpoint_test *test)
+{
+	enum pci_barno bar;
+	bool ret;
+
+	/* Write all BARs in order (without reading). */
+	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
+		if (test->bar[bar])
+			pci_endpoint_test_bars_write_bar(test, bar);
+
+	/*
+	 * Read all BARs in order (without writing).
+	 * If there is an address translation issue on the EP, writing one BAR
+	 * might have overwritten another BAR. Ensure that this is not the case.
+	 * (Reading back the BAR directly after writing can not detect this.)
+	 */
+	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
+		if (test->bar[bar]) {
+			ret = pci_endpoint_test_bars_read_bar(test, bar);
+			if (!ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int pci_endpoint_test_intx_irq(struct pci_endpoint_test *test)
 {
 	u32 val;
···
 	val = wait_for_completion_timeout(&test->irq_raised,
 					  msecs_to_jiffies(1000));
 	if (!val)
-		return false;
+		return -ETIMEDOUT;
 
-	return true;
+	return 0;
 }
 
-static bool pci_endpoint_test_msi_irq(struct pci_endpoint_test *test,
+static int pci_endpoint_test_msi_irq(struct pci_endpoint_test *test,
 				      u16 msi_num, bool msix)
 {
-	u32 val;
 	struct pci_dev *pdev = test->pdev;
+	u32 val;
+	int ret;
 
 	pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE,
 				 msix ? IRQ_TYPE_MSIX : IRQ_TYPE_MSI);
···
 	val = wait_for_completion_timeout(&test->irq_raised,
 					  msecs_to_jiffies(1000));
 	if (!val)
-		return false;
+		return -ETIMEDOUT;
 
-	return pci_irq_vector(pdev, msi_num - 1) == test->last_irq;
+	ret = pci_irq_vector(pdev, msi_num - 1);
+	if (ret < 0)
+		return ret;
+
+	if (ret != test->last_irq)
+		return -EIO;
+
+	return 0;
 }
 
 static int pci_endpoint_test_validate_xfer_params(struct device *dev,
···
 	return 0;
 }
 
-static bool pci_endpoint_test_copy(struct pci_endpoint_test *test,
+static int pci_endpoint_test_copy(struct pci_endpoint_test *test,
 				   unsigned long arg)
 {
 	struct pci_endpoint_test_xfer_param param;
-	bool ret = false;
 	void *src_addr;
 	void *dst_addr;
 	u32 flags = 0;
···
 	int irq_type = test->irq_type;
 	u32 src_crc32;
 	u32 dst_crc32;
-	int err;
+	int ret;
 
-	err = copy_from_user(&param, (void __user *)arg, sizeof(param));
-	if (err) {
+	ret = copy_from_user(&param, (void __user *)arg, sizeof(param));
+	if (ret) {
 		dev_err(dev, "Failed to get transfer param\n");
-		return false;
+		return -EFAULT;
 	}
 
-	err = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
-	if (err)
-		return false;
+	ret = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
+	if (ret)
+		return ret;
 
 	size = param.size;
···
 
 	if (irq_type < IRQ_TYPE_INTX || irq_type > IRQ_TYPE_MSIX) {
 		dev_err(dev, "Invalid IRQ type option\n");
-		goto err;
+		return -EINVAL;
 	}
 
 	orig_src_addr = kzalloc(size + alignment, GFP_KERNEL);
 	if (!orig_src_addr) {
 		dev_err(dev, "Failed to allocate source buffer\n");
-		ret = false;
-		goto err;
+		return -ENOMEM;
 	}
 
 	get_random_bytes(orig_src_addr, size + alignment);
 	orig_src_phys_addr = dma_map_single(dev, orig_src_addr,
 					    size + alignment, DMA_TO_DEVICE);
-	if (dma_mapping_error(dev, orig_src_phys_addr)) {
+	ret = dma_mapping_error(dev, orig_src_phys_addr);
+	if (ret) {
 		dev_err(dev, "failed to map source buffer address\n");
-		ret = false;
 		goto err_src_phys_addr;
 	}
 
···
 	orig_dst_addr = kzalloc(size + alignment, GFP_KERNEL);
 	if (!orig_dst_addr) {
 		dev_err(dev, "Failed to allocate destination address\n");
-		ret = false;
+		ret = -ENOMEM;
 		goto err_dst_addr;
 	}
 
 	orig_dst_phys_addr = dma_map_single(dev, orig_dst_addr,
 					    size + alignment, DMA_FROM_DEVICE);
-	if (dma_mapping_error(dev, orig_dst_phys_addr)) {
+	ret = dma_mapping_error(dev, orig_dst_phys_addr);
+	if (ret) {
 		dev_err(dev, "failed to map destination buffer address\n");
-		ret = false;
 		goto err_dst_phys_addr;
 	}
 
···
 				 DMA_FROM_DEVICE);
 
 	dst_crc32 = crc32_le(~0, dst_addr, size);
-	if (dst_crc32 == src_crc32)
-		ret = true;
+	if (dst_crc32 != src_crc32)
+		ret = -EIO;
 
 err_dst_phys_addr:
 	kfree(orig_dst_addr);
···
 
 err_src_phys_addr:
 	kfree(orig_src_addr);
-
-err:
 	return ret;
 }
 
-static bool pci_endpoint_test_write(struct pci_endpoint_test *test,
+static int pci_endpoint_test_write(struct pci_endpoint_test *test,
 				    unsigned long arg)
 {
 	struct pci_endpoint_test_xfer_param param;
-	bool ret = false;
 	u32 flags = 0;
 	bool use_dma;
 	u32 reg;
···
 	int irq_type = test->irq_type;
 	size_t size;
 	u32 crc32;
-	int err;
+	int ret;
 
-	err = copy_from_user(&param, (void __user *)arg, sizeof(param));
-	if (err != 0) {
+	ret = copy_from_user(&param, (void __user *)arg, sizeof(param));
+	if (ret) {
 		dev_err(dev, "Failed to get transfer param\n");
-		return false;
+		return -EFAULT;
 	}
 
-	err = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
-	if (err)
-		return false;
+	ret = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
+	if (ret)
+		return ret;
 
 	size = param.size;
···
 
 	if (irq_type < IRQ_TYPE_INTX || irq_type > IRQ_TYPE_MSIX) {
 		dev_err(dev, "Invalid IRQ type option\n");
-		goto err;
+		return -EINVAL;
 	}
 
 	orig_addr = kzalloc(size + alignment, GFP_KERNEL);
 	if (!orig_addr) {
 		dev_err(dev, "Failed to allocate address\n");
-		ret = false;
-		goto err;
+		return -ENOMEM;
 	}
 
 	get_random_bytes(orig_addr, size + alignment);
 
 	orig_phys_addr = dma_map_single(dev, orig_addr, size + alignment,
 					DMA_TO_DEVICE);
-	if (dma_mapping_error(dev, orig_phys_addr)) {
+	ret = dma_mapping_error(dev, orig_phys_addr);
+	if (ret) {
 		dev_err(dev, "failed to map source buffer address\n");
-		ret = false;
 		goto err_phys_addr;
 	}
 
···
 	wait_for_completion(&test->irq_raised);
 
 	reg = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_STATUS);
-	if (reg & STATUS_READ_SUCCESS)
-		ret = true;
+	if (!(reg & STATUS_READ_SUCCESS))
+		ret = -EIO;
 
 	dma_unmap_single(dev, orig_phys_addr, size + alignment,
 			 DMA_TO_DEVICE);
 
 err_phys_addr:
 	kfree(orig_addr);
-
-err:
 	return ret;
 }
 
-static bool pci_endpoint_test_read(struct pci_endpoint_test *test,
+static int pci_endpoint_test_read(struct pci_endpoint_test *test,
 				   unsigned long arg)
 {
 	struct pci_endpoint_test_xfer_param param;
-	bool ret = false;
 	u32 flags = 0;
 	bool use_dma;
 	size_t size;
···
 	size_t alignment = test->alignment;
 	int irq_type = test->irq_type;
 	u32 crc32;
-	int err;
+	int ret;
 
-	err = copy_from_user(&param, (void __user *)arg, sizeof(param));
-	if (err) {
+	ret = copy_from_user(&param, (void __user *)arg, sizeof(param));
+	if (ret) {
 		dev_err(dev, "Failed to get transfer param\n");
-		return false;
+		return -EFAULT;
 	}
 
-	err = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
-	if (err)
-		return false;
+	ret = pci_endpoint_test_validate_xfer_params(dev, &param, alignment);
+	if (ret)
+		return ret;
 
 	size = param.size;
···
 
 	if (irq_type < IRQ_TYPE_INTX || irq_type > IRQ_TYPE_MSIX) {
 		dev_err(dev, "Invalid IRQ type option\n");
-		goto err;
+		return -EINVAL;
 	}
 
 	orig_addr = kzalloc(size + alignment, GFP_KERNEL);
 	if (!orig_addr) {
 		dev_err(dev, "Failed to allocate destination address\n");
-		ret = false;
-		goto err;
+		return -ENOMEM;
 	}
 
 	orig_phys_addr = dma_map_single(dev, orig_addr, size + alignment,
 					DMA_FROM_DEVICE);
-	if (dma_mapping_error(dev, orig_phys_addr)) {
+	ret = dma_mapping_error(dev, orig_phys_addr);
+	if (ret) {
 		dev_err(dev, "failed to map source buffer address\n");
-		ret = false;
 		goto err_phys_addr;
 	}
 
···
 				 DMA_FROM_DEVICE);
 
 	crc32 = crc32_le(~0, addr, size);
-	if (crc32 == pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_CHECKSUM))
-		ret = true;
+	if (crc32 != pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_CHECKSUM))
+		ret = -EIO;
 
 err_phys_addr:
 	kfree(orig_addr);
-err:
 	return ret;
 }
 
-static bool pci_endpoint_test_clear_irq(struct pci_endpoint_test *test)
+static int pci_endpoint_test_clear_irq(struct pci_endpoint_test *test)
 {
 	pci_endpoint_test_release_irq(test);
 	pci_endpoint_test_free_irq_vectors(test);
-	return true;
+
+	return 0;
 }
 
-static bool pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
+static int pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
 				      int req_irq_type)
 {
 	struct pci_dev *pdev = test->pdev;
 	struct device *dev = &pdev->dev;
+	int ret;
 
 	if (req_irq_type < IRQ_TYPE_INTX || req_irq_type > IRQ_TYPE_MSIX) {
 		dev_err(dev, "Invalid IRQ type option\n");
-		return false;
+		return -EINVAL;
 	}
 
 	if (test->irq_type == req_irq_type)
-		return true;
+		return 0;
 
 	pci_endpoint_test_release_irq(test);
 	pci_endpoint_test_free_irq_vectors(test);
 
-	if (!pci_endpoint_test_alloc_irq_vectors(test, req_irq_type))
-		goto err;
+	ret = pci_endpoint_test_alloc_irq_vectors(test, req_irq_type);
+	if (ret)
+		return ret;
 
-	if (!pci_endpoint_test_request_irq(test))
-		goto err;
+	ret = pci_endpoint_test_request_irq(test);
+	if (ret) {
+		pci_endpoint_test_free_irq_vectors(test);
+		return ret;
+	}
 
-	return true;
-
-err:
-	pci_endpoint_test_free_irq_vectors(test);
-	return false;
+	return 0;
 }
 
 static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
···
 		if (is_am654_pci_dev(pdev) && bar == BAR_0)
 			goto ret;
 		ret = pci_endpoint_test_bar(test, bar);
+		break;
+	case PCITEST_BARS:
+		ret = pci_endpoint_test_bars(test);
 		break;
 	case PCITEST_INTX_IRQ:
 		ret = pci_endpoint_test_intx_irq(test);
···
 	.unlocked_ioctl = pci_endpoint_test_ioctl,
 };
 
+static void pci_endpoint_test_get_capabilities(struct pci_endpoint_test *test)
+{
+	struct pci_dev *pdev = test->pdev;
+	struct device *dev = &pdev->dev;
+	u32 caps;
+
+	caps = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_CAPS);
+	dev_dbg(dev, "PCI_ENDPOINT_TEST_CAPS: %#x\n", caps);
+
+	/* CAP_UNALIGNED_ACCESS is set if the EP can do unaligned access */
+	if (caps & CAP_UNALIGNED_ACCESS)
+		test->alignment = 0;
+}
+
 static int pci_endpoint_test_probe(struct pci_dev *pdev,
 				   const struct pci_device_id *ent)
 {
-	int err;
+	int ret;
 	int id;
 	char name[24];
 	enum pci_barno bar;
···
 
 	dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48));
 
-	err = pci_enable_device(pdev);
-	if (err) {
+	ret = pci_enable_device(pdev);
+	if (ret) {
 		dev_err(dev, "Cannot enable PCI device\n");
-		return err;
+		return ret;
 	}
 
-	err = pci_request_regions(pdev, DRV_MODULE_NAME);
-	if (err) {
+	ret = pci_request_regions(pdev, DRV_MODULE_NAME);
+	if (ret) {
 		dev_err(dev, "Cannot obtain PCI resources\n");
 		goto err_disable_pdev;
 	}
 
 	pci_set_master(pdev);
 
-	if (!pci_endpoint_test_alloc_irq_vectors(test, irq_type)) {
-		err = -EINVAL;
+	ret = pci_endpoint_test_alloc_irq_vectors(test, irq_type);
+	if (ret)
 		goto err_disable_irq;
-	}
 
 	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
 		if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM) {
···
 
 	test->base = test->bar[test_reg_bar];
 	if (!test->base) {
-		err = -ENOMEM;
+		ret = -ENOMEM;
 		dev_err(dev, "Cannot perform PCI test without BAR%d\n",
 			test_reg_bar);
 		goto err_iounmap;
···
 
 	id = ida_alloc(&pci_endpoint_test_ida, GFP_KERNEL);
 	if (id < 0) {
-		err = id;
+		ret = id;
 		dev_err(dev, "Unable to get id\n");
 		goto err_iounmap;
 	}
···
 	snprintf(name, sizeof(name), DRV_MODULE_NAME ".%d", id);
 	test->name = kstrdup(name, GFP_KERNEL);
 	if (!test->name) {
-		err = -ENOMEM;
+		ret = -ENOMEM;
 		goto err_ida_remove;
 	}
 
-	if (!pci_endpoint_test_request_irq(test)) {
-		err = -EINVAL;
+	ret = pci_endpoint_test_request_irq(test);
+	if (ret)
 		goto err_kfree_test_name;
-	}
+
+	pci_endpoint_test_get_capabilities(test);
 
 	misc_device = &test->miscdev;
 	misc_device->minor = MISC_DYNAMIC_MINOR;
 	misc_device->name = kstrdup(name, GFP_KERNEL);
 	if (!misc_device->name) {
-		err = -ENOMEM;
+		ret = -ENOMEM;
 		goto err_release_irq;
 	}
 	misc_device->parent = &pdev->dev;
 	misc_device->fops = &pci_endpoint_test_fops;
 
-	err = misc_register(misc_device);
-	if (err) {
+	ret = misc_register(misc_device);
+	if (ret) {
 		dev_err(dev, "Failed to register device\n");
 		goto err_kfree_name;
 	}
···
 err_disable_pdev:
 	pci_disable_device(pdev);
 
-	return err;
+	return ret;
 }
 
 static void pci_endpoint_test_remove(struct pci_dev *pdev)
+1 -1
drivers/net/wireless/quantenna/qtnfmac/pcie/pcie.c
···
 
 	if (!priv->msi_enabled) {
 		pr_warn("legacy PCIE interrupts enabled\n");
-		pci_intx(pdev, 1);
+		pcim_intx(pdev, 1);
 	}
 }
 
+2
drivers/of/address.c
···
 	else
 		range->cpu_addr = of_translate_address(parser->node,
 						       parser->range + na);
+
+	range->parent_bus_addr = of_read_number(parser->range + na, parser->pna);
 	range->size = of_read_number(parser->range + parser->pna + na, ns);
 
 	parser->range += np;
+1 -1
drivers/pci/ats.c
···
 	if (WARN_ON(pdev->pasid_enabled))
 		return -EBUSY;
 
-	if (!pdev->eetlp_prefix_path && !pdev->pasid_no_tlp)
+	if (!pdev->eetlp_prefix_max && !pdev->pasid_no_tlp)
 		return -EINVAL;
 
 	if (!pasid)
+6 -21
drivers/pci/controller/dwc/pci-dra7xx.c
···
 {
 	int ret;
 	struct device_node *np = dev->of_node;
-	struct of_phandle_args args;
+	unsigned int args[2];
 	struct regmap *regmap;
 
-	regmap = syscon_regmap_lookup_by_phandle(np,
-						 "ti,syscon-unaligned-access");
+	regmap = syscon_regmap_lookup_by_phandle_args(np, "ti,syscon-unaligned-access",
+						      2, args);
 	if (IS_ERR(regmap)) {
 		dev_dbg(dev, "can't get ti,syscon-unaligned-access\n");
 		return -EINVAL;
 	}
 
-	ret = of_parse_phandle_with_fixed_args(np, "ti,syscon-unaligned-access",
-					       2, 0, &args);
-	if (ret) {
-		dev_err(dev, "failed to parse ti,syscon-unaligned-access\n");
-		return ret;
-	}
-
-	ret = regmap_update_bits(regmap, args.args[0], args.args[1],
-				 args.args[1]);
+	ret = regmap_update_bits(regmap, args[0], args[1], args[1]);
 	if (ret)
 		dev_err(dev, "failed to enable unaligned access\n");
-
-	of_node_put(args.np);
 
 	return ret;
 }
···
 	u32 mask;
 	u32 val;
 
-	pcie_syscon = syscon_regmap_lookup_by_phandle(np, "ti,syscon-lane-sel");
+	pcie_syscon = syscon_regmap_lookup_by_phandle_args(np, "ti,syscon-lane-sel",
+							   1, &pcie_reg);
 	if (IS_ERR(pcie_syscon)) {
 		dev_err(dev, "unable to get ti,syscon-lane-sel\n");
-		return -EINVAL;
-	}
-
-	if (of_property_read_u32_index(np, "ti,syscon-lane-sel", 1,
-				       &pcie_reg)) {
-		dev_err(dev, "couldn't get lane selection reg offset\n");
 		return -EINVAL;
 	}
 
+316 -133
drivers/pci/controller/dwc/pci-imx6.c
···
 #include <linux/pm_domain.h>
 #include <linux/pm_runtime.h>
 
+#include "../../pci.h"
 #include "pcie-designware.h"
 
 #define IMX8MQ_GPR_PCIE_REF_USE_PAD		BIT(9)
···
 #define IMX95_PE0_GEN_CTRL_3			0x1058
 #define IMX95_PCIE_LTSSM_EN			BIT(0)
 
+#define IMX95_PE0_LUT_ACSCTRL			0x1008
+#define IMX95_PEO_LUT_RWA			BIT(16)
+#define IMX95_PE0_LUT_ENLOC			GENMASK(4, 0)
+
+#define IMX95_PE0_LUT_DATA1			0x100c
+#define IMX95_PE0_LUT_VLD			BIT(31)
+#define IMX95_PE0_LUT_DAC_ID			GENMASK(10, 8)
+#define IMX95_PE0_LUT_STREAM_ID			GENMASK(5, 0)
+
+#define IMX95_PE0_LUT_DATA2			0x1010
+#define IMX95_PE0_LUT_REQID			GENMASK(31, 16)
+#define IMX95_PE0_LUT_MASK			GENMASK(15, 0)
+
+#define IMX95_SID_MASK				GENMASK(5, 0)
+#define IMX95_MAX_LUT				32
+
 #define to_imx_pcie(x)	dev_get_drvdata((x)->dev)
 
 enum imx_pcie_variants {
···
 	IMX8MQ_EP,
 	IMX8MM_EP,
 	IMX8MP_EP,
+	IMX8Q_EP,
 	IMX95_EP,
 };
 
···
  * workaround suspend resume on some devices which are affected by this errata.
  */
 #define IMX_PCIE_FLAG_BROKEN_SUSPEND		BIT(9)
+#define IMX_PCIE_FLAG_HAS_LUT			BIT(10)
 
 #define imx_check_flag(pci, val)	(pci->drvdata->flags & val)
 
···
 	const char *gpr;
 	const char * const *clk_names;
 	const u32 clks_cnt;
+	const u32 clks_optional_cnt;
 	const u32 ltssm_off;
 	const u32 ltssm_mask;
 	const u32 mode_off[IMX_PCIE_MAX_INSTANCES];
···
 	int (*init_phy)(struct imx_pcie *pcie);
 	int (*enable_ref_clk)(struct imx_pcie *pcie, bool enable);
 	int (*core_reset)(struct imx_pcie *pcie, bool assert);
+	const struct dw_pcie_host_ops *ops;
 };
 
 struct imx_pcie {
 	struct dw_pcie *pci;
 	struct gpio_desc *reset_gpiod;
-	bool link_is_up;
 	struct clk_bulk_data clks[IMX_PCIE_MAX_CLKS];
 	struct regmap *iomuxc_gpr;
 	u16 msi_ctrl;
 	u32 controller_id;
 	struct reset_control *pciephy_reset;
 	struct reset_control *apps_reset;
-	struct reset_control *turnoff_reset;
 	u32 tx_deemph_gen1;
 	u32 tx_deemph_gen2_3p5db;
 	u32 tx_deemph_gen2_6db;
···
 	struct device *pd_pcie_phy;
 	struct phy *phy;
 	const struct imx_pcie_drvdata *drvdata;
+
+	/* Ensure that only one device's LUT is configured at any given time */
+	struct mutex lock;
 };
 
 /* Parameters for the waiting for PCIe PHY PLL to lock on i.MX7 */
···
 
 	id = imx_pcie->controller_id;
 
-	/* If mode_mask is 0, then generic PHY driver is used to set the mode */
+	/* If mode_mask is 0, generic PHY driver is used to set the mode */
 	if (!drvdata->mode_mask[0])
 		return;
 
-	/* If mode_mask[id] is zero, means each controller have its individual gpr */
+	/* If mode_mask[id] is 0, each controller has its individual GPR */
 	if (!drvdata->mode_mask[id])
 		id = 0;
 
···
 static int imx8mq_pcie_init_phy(struct imx_pcie *imx_pcie)
 {
-	/* TODO: Currently this code assumes external oscillator is being used */
+	/* TODO: This code assumes external oscillator is being used */
 	regmap_update_bits(imx_pcie->iomuxc_gpr,
 			   imx_pcie_grp_offset(imx_pcie),
 			   IMX8MQ_GPR_PCIE_REF_USE_PAD,
 			   IMX8MQ_GPR_PCIE_REF_USE_PAD);
 	/*
-	 * Regarding the datasheet, the PCIE_VPH is suggested to be 1.8V. If the PCIE_VPH is
-	 * supplied by 3.3V, the VREG_BYPASS should be cleared to zero.
+	 * Per the datasheet, the PCIE_VPH is suggested to be 1.8V. If the
+	 * PCIE_VPH is supplied by 3.3V, the VREG_BYPASS should be cleared
+	 * to zero.
 	 */
 	if (imx_pcie->vph && regulator_get_voltage(imx_pcie->vph) > 3000000)
 		regmap_update_bits(imx_pcie->iomuxc_gpr,
 				   imx_pcie_grp_offset(imx_pcie),
 				   IMX8MQ_GPR_PCIE_VREG_BYPASS,
 				   0);
-
-	return 0;
-}
-
-static int imx7d_pcie_init_phy(struct imx_pcie *imx_pcie)
-{
-	regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12, IMX7D_GPR12_PCIE_PHY_REFCLK_SEL, 0);
 
 	return 0;
 }
···
 					      DL_FLAG_PM_RUNTIME |
 					      DL_FLAG_RPM_ACTIVE);
 	if (!link) {
-		dev_err(dev, "Failed to add device_link to pcie pd.\n");
+		dev_err(dev, "Failed to add device_link to pcie pd\n");
 		return -EINVAL;
 	}
 
···
 					      DL_FLAG_PM_RUNTIME |
 					      DL_FLAG_RPM_ACTIVE);
 	if (!link) {
-		dev_err(dev, "Failed to add device_link to pcie_phy pd.\n");
+		dev_err(dev, "Failed to add device_link to pcie_phy pd\n");
 		return -EINVAL;
 	}
 
···
 static int imx6sx_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
 {
-	if (enable)
-		regmap_clear_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
-				  IMX6SX_GPR12_PCIE_TEST_POWERDOWN);
-
+	regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
+			   IMX6SX_GPR12_PCIE_TEST_POWERDOWN,
+			   enable ? 0 : IMX6SX_GPR12_PCIE_TEST_POWERDOWN);
 	return 0;
 }
 
···
 	/* power up core phy and enable ref clock */
 	regmap_clear_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR1, IMX6Q_GPR1_PCIE_TEST_PD);
 	/*
-	 * the async reset input need ref clock to sync internally,
+	 * The async reset input need ref clock to sync internally,
 	 * when the ref clock comes after reset, internal synced
-	 * reset time is too short, cannot meet the requirement.
-	 * add one ~10us delay here.
+	 * reset time is too short, cannot meet the requirement.
+	 * Add a ~10us delay here.
 	 */
 	usleep_range(10, 100);
 	regmap_set_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR1, IMX6Q_GPR1_PCIE_REF_CLK_EN);
···
 {
 	int offset = imx_pcie_grp_offset(imx_pcie);
 
-	if (enable) {
-		regmap_clear_bits(imx_pcie->iomuxc_gpr, offset, IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE);
-		regmap_set_bits(imx_pcie->iomuxc_gpr, offset, IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN);
-	}
-
+	regmap_update_bits(imx_pcie->iomuxc_gpr, offset,
+			   IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE,
+			   enable ? 0 : IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE);
+	regmap_update_bits(imx_pcie->iomuxc_gpr, offset,
+			   IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN,
+			   enable ? IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN : 0);
 	return 0;
 }
 
 static int imx7d_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
 {
-	if (!enable)
-		regmap_set_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
-				IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
+	regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
+			   IMX7D_GPR12_PCIE_PHY_REFCLK_SEL,
+			   enable ? 0 : IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
 	return 0;
 }
 
···
 static int imx_pcie_deassert_core_reset(struct imx_pcie *imx_pcie)
 {
 	reset_control_deassert(imx_pcie->pciephy_reset);
+	reset_control_deassert(imx_pcie->apps_reset);
 
 	if (imx_pcie->drvdata->core_reset)
 		imx_pcie->drvdata->core_reset(imx_pcie, false);
···
 
 	if (imx_pcie->drvdata->flags &
 	    IMX_PCIE_FLAG_IMX_SPEED_CHANGE) {
+
 		/*
 		 * On i.MX7, DIRECT_SPEED_CHANGE behaves differently
 		 * from i.MX6 family when no link speed transition
···
 		 * which will cause the following code to report false
 		 * failure.
 		 */
-
 		ret = imx_pcie_wait_for_speed_change(imx_pcie);
 		if (ret) {
 			dev_err(dev, "Failed to bring link up!\n");
···
 		dev_info(dev, "Link: Only Gen1 is enabled\n");
 	}
 
-	imx_pcie->link_is_up = true;
 	tmp = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA);
 	dev_info(dev, "Link up, Gen%i\n", tmp & PCI_EXP_LNKSTA_CLS);
 	return 0;
 
 err_reset_phy:
-	imx_pcie->link_is_up = false;
 	dev_dbg(dev, "PHY DEBUG_R0=0x%08x DEBUG_R1=0x%08x\n",
 		dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG0),
 		dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG1));
···
 
 	/* Turn off PCIe LTSSM */
 	imx_pcie_ltssm_disable(dev);
+}
+
+static int imx_pcie_add_lut(struct imx_pcie *imx_pcie, u16 rid, u8 sid)
+{
+	struct dw_pcie *pci = imx_pcie->pci;
+	struct device *dev = pci->dev;
+	u32 data1, data2;
+	int free = -1;
+	int i;
+
+	if (sid >= 64) {
+		dev_err(dev, "Invalid SID for index %d\n", sid);
+		return -EINVAL;
+	}
+
+	guard(mutex)(&imx_pcie->lock);
+
+	/*
+	 * Iterate through all LUT entries to check for duplicate RID and
+	 * identify the first available entry.
Configure this available entry 951 + * immediately after verification to avoid rescanning it. 952 + */ 953 + for (i = 0; i < IMX95_MAX_LUT; i++) { 954 + regmap_write(imx_pcie->iomuxc_gpr, 955 + IMX95_PE0_LUT_ACSCTRL, IMX95_PEO_LUT_RWA | i); 956 + regmap_read(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA1, &data1); 957 + 958 + if (!(data1 & IMX95_PE0_LUT_VLD)) { 959 + if (free < 0) 960 + free = i; 961 + continue; 962 + } 963 + 964 + regmap_read(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA2, &data2); 965 + 966 + /* Do not add duplicate RID */ 967 + if (rid == FIELD_GET(IMX95_PE0_LUT_REQID, data2)) { 968 + dev_warn(dev, "Existing LUT entry available for RID (%d)", rid); 969 + return 0; 970 + } 971 + } 972 + 973 + if (free < 0) { 974 + dev_err(dev, "LUT entry is not available\n"); 975 + return -ENOSPC; 976 + } 977 + 978 + data1 = FIELD_PREP(IMX95_PE0_LUT_DAC_ID, 0); 979 + data1 |= FIELD_PREP(IMX95_PE0_LUT_STREAM_ID, sid); 980 + data1 |= IMX95_PE0_LUT_VLD; 981 + regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA1, data1); 982 + 983 + data2 = IMX95_PE0_LUT_MASK; /* Match all bits of RID */ 984 + data2 |= FIELD_PREP(IMX95_PE0_LUT_REQID, rid); 985 + regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA2, data2); 986 + 987 + regmap_write(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_ACSCTRL, free); 988 + 989 + return 0; 990 + } 991 + 992 + static void imx_pcie_remove_lut(struct imx_pcie *imx_pcie, u16 rid) 993 + { 994 + u32 data2; 995 + int i; 996 + 997 + guard(mutex)(&imx_pcie->lock); 998 + 999 + for (i = 0; i < IMX95_MAX_LUT; i++) { 1000 + regmap_write(imx_pcie->iomuxc_gpr, 1001 + IMX95_PE0_LUT_ACSCTRL, IMX95_PEO_LUT_RWA | i); 1002 + regmap_read(imx_pcie->iomuxc_gpr, IMX95_PE0_LUT_DATA2, &data2); 1003 + if (FIELD_GET(IMX95_PE0_LUT_REQID, data2) == rid) { 1004 + regmap_write(imx_pcie->iomuxc_gpr, 1005 + IMX95_PE0_LUT_DATA1, 0); 1006 + regmap_write(imx_pcie->iomuxc_gpr, 1007 + IMX95_PE0_LUT_DATA2, 0); 1008 + regmap_write(imx_pcie->iomuxc_gpr, 1009 + IMX95_PE0_LUT_ACSCTRL, i); 1010 + 1011 + 
break; 1012 + } 1013 + } 1014 + } 1015 + 1016 + static int imx_pcie_enable_device(struct pci_host_bridge *bridge, 1017 + struct pci_dev *pdev) 1018 + { 1019 + struct imx_pcie *imx_pcie = to_imx_pcie(to_dw_pcie_from_pp(bridge->sysdata)); 1020 + u32 sid_i, sid_m, rid = pci_dev_id(pdev); 1021 + struct device_node *target; 1022 + struct device *dev; 1023 + int err_i, err_m; 1024 + u32 sid = 0; 1025 + 1026 + dev = imx_pcie->pci->dev; 1027 + 1028 + target = NULL; 1029 + err_i = of_map_id(dev->of_node, rid, "iommu-map", "iommu-map-mask", 1030 + &target, &sid_i); 1031 + if (target) { 1032 + of_node_put(target); 1033 + } else { 1034 + /* 1035 + * "target == NULL && err_i == 0" means RID out of map range. 1036 + * Use 1:1 map RID to streamID. Hardware can't support this 1037 + * because the streamID is only 6 bits 1038 + */ 1039 + err_i = -EINVAL; 1040 + } 1041 + 1042 + target = NULL; 1043 + err_m = of_map_id(dev->of_node, rid, "msi-map", "msi-map-mask", 1044 + &target, &sid_m); 1045 + 1046 + /* 1047 + * err_m target 1048 + * 0 NULL RID out of range. Use 1:1 map RID to 1049 + * streamID, Current hardware can't 1050 + * support it, so return -EINVAL. 
1051 + * != 0 NULL msi-map does not exist, use built-in MSI 1052 + * 0 != NULL Get correct streamID from RID 1053 + * != 0 != NULL Invalid combination 1054 + */ 1055 + if (!err_m && !target) 1056 + return -EINVAL; 1057 + else if (target) 1058 + of_node_put(target); /* Find streamID map entry for RID in msi-map */ 1059 + 1060 + /* 1061 + * msi-map iommu-map 1062 + * N N DWC MSI Ctrl 1063 + * Y Y ITS + SMMU, require the same SID 1064 + * Y N ITS 1065 + * N Y DWC MSI Ctrl + SMMU 1066 + */ 1067 + if (err_i && err_m) 1068 + return 0; 1069 + 1070 + if (!err_i && !err_m) { 1071 + /* 1072 + * Glue Layer 1073 + * <==========> 1074 + * ┌─────┐ ┌──────────┐ 1075 + * │ LUT │ 6-bit streamID │ │ 1076 + * │ │─────────────────►│ MSI │ 1077 + * └─────┘ 2-bit ctrl ID │ │ 1078 + * ┌───────────►│ │ 1079 + * (i.MX95) │ │ │ 1080 + * 00 PCIe0 │ │ │ 1081 + * 01 ENETC │ │ │ 1082 + * 10 PCIe1 │ │ │ 1083 + * │ └──────────┘ 1084 + * The MSI glue layer auto adds 2 bits controller ID ahead of 1085 + * streamID, so mask these 2 bits to get streamID. The 1086 + * IOMMU glue layer doesn't do that. 
1087 + */ 1088 + if (sid_i != (sid_m & IMX95_SID_MASK)) { 1089 + dev_err(dev, "iommu-map and msi-map entries mismatch!\n"); 1090 + return -EINVAL; 1091 + } 1092 + } 1093 + 1094 + if (!err_i) 1095 + sid = sid_i; 1096 + else if (!err_m) 1097 + sid = sid_m & IMX95_SID_MASK; 1098 + 1099 + return imx_pcie_add_lut(imx_pcie, rid, sid); 1100 + } 1101 + 1102 + static void imx_pcie_disable_device(struct pci_host_bridge *bridge, 1103 + struct pci_dev *pdev) 1104 + { 1105 + struct imx_pcie *imx_pcie; 1106 + 1107 + imx_pcie = to_imx_pcie(to_dw_pcie_from_pp(bridge->sysdata)); 1108 + imx_pcie_remove_lut(imx_pcie, pci_dev_id(pdev)); 946 1109 } 947 1110 948 1111 static int imx_pcie_host_init(struct dw_pcie_rp *pp) ··· 1137 944 ret); 1138 945 return ret; 1139 946 } 947 + } 948 + 949 + if (pp->bridge && imx_check_flag(imx_pcie, IMX_PCIE_FLAG_HAS_LUT)) { 950 + pp->bridge->enable_device = imx_pcie_enable_device; 951 + pp->bridge->disable_device = imx_pcie_disable_device; 1140 952 } 1141 953 1142 954 imx_pcie_assert_core_reset(imx_pcie); ··· 1164 966 goto err_clk_disable; 1165 967 } 1166 968 1167 - ret = phy_set_mode_ext(imx_pcie->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC); 969 + ret = phy_set_mode_ext(imx_pcie->phy, PHY_MODE_PCIE, 970 + imx_pcie->drvdata->mode == DW_PCIE_EP_TYPE ? 971 + PHY_MODE_PCIE_EP : PHY_MODE_PCIE_RC); 1168 972 if (ret) { 1169 973 dev_err(dev, "unable to set PCIe PHY mode\n"); 1170 974 goto err_phy_exit; ··· 1233 1033 return cpu_addr - entry->offset; 1234 1034 } 1235 1035 1036 + /* 1037 + * In old DWC implementations, PCIE_ATU_INHIBIT_PAYLOAD in iATU Ctrl2 1038 + * register is reserved, so the generic DWC implementation of sending the 1039 + * PME_Turn_Off message using a dummy MMIO write cannot be used. 
1040 + */ 1041 + static void imx_pcie_pme_turn_off(struct dw_pcie_rp *pp) 1042 + { 1043 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 1044 + struct imx_pcie *imx_pcie = to_imx_pcie(pci); 1045 + 1046 + regmap_set_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12, IMX6SX_GPR12_PCIE_PM_TURN_OFF); 1047 + regmap_clear_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12, IMX6SX_GPR12_PCIE_PM_TURN_OFF); 1048 + 1049 + usleep_range(PCIE_PME_TO_L2_TIMEOUT_US/10, PCIE_PME_TO_L2_TIMEOUT_US); 1050 + } 1051 + 1236 1052 static const struct dw_pcie_host_ops imx_pcie_host_ops = { 1053 + .init = imx_pcie_host_init, 1054 + .deinit = imx_pcie_host_exit, 1055 + .pme_turn_off = imx_pcie_pme_turn_off, 1056 + }; 1057 + 1058 + static const struct dw_pcie_host_ops imx_pcie_host_dw_pme_ops = { 1237 1059 .init = imx_pcie_host_init, 1238 1060 .deinit = imx_pcie_host_exit, 1239 1061 }; ··· 1304 1082 .align = SZ_64K, 1305 1083 }; 1306 1084 1085 + static const struct pci_epc_features imx8q_pcie_epc_features = { 1086 + .linkup_notifier = false, 1087 + .msi_capable = true, 1088 + .msix_capable = false, 1089 + .bar[BAR_1] = { .type = BAR_RESERVED, }, 1090 + .bar[BAR_3] = { .type = BAR_RESERVED, }, 1091 + .bar[BAR_5] = { .type = BAR_RESERVED, }, 1092 + .align = SZ_64K, 1093 + }; 1094 + 1307 1095 /* 1308 - * BAR# | Default BAR enable | Default BAR Type | Default BAR Size | BAR Sizing Scheme 1309 - * ================================================================================================ 1310 - * BAR0 | Enable | 64-bit | 1 MB | Programmable Size 1311 - * BAR1 | Disable | 32-bit | 64 KB | Fixed Size 1312 - * BAR1 should be disabled if BAR0 is 64bit. 1313 - * BAR2 | Enable | 32-bit | 1 MB | Programmable Size 1314 - * BAR3 | Enable | 32-bit | 64 KB | Programmable Size 1315 - * BAR4 | Enable | 32-bit | 1M | Programmable Size 1316 - * BAR5 | Enable | 32-bit | 64 KB | Programmable Size 1096 + * | Default | Default | Default | BAR Sizing 1097 + * BAR# | Enable? 
| Type | Size | Scheme 1098 + * ======================================================= 1099 + * BAR0 | Enable | 64-bit | 1 MB | Programmable Size 1100 + * BAR1 | Disable | 32-bit | 64 KB | Fixed Size 1101 + * (BAR1 should be disabled if BAR0 is 64-bit) 1102 + * BAR2 | Enable | 32-bit | 1 MB | Programmable Size 1103 + * BAR3 | Enable | 32-bit | 64 KB | Programmable Size 1104 + * BAR4 | Enable | 32-bit | 1 MB | Programmable Size 1105 + * BAR5 | Enable | 32-bit | 64 KB | Programmable Size 1317 1106 */ 1318 1107 static const struct pci_epc_features imx95_pcie_epc_features = { 1319 1108 .msi_capable = true, ··· 1351 1118 struct platform_device *pdev) 1352 1119 { 1353 1120 int ret; 1354 - unsigned int pcie_dbi2_offset; 1355 1121 struct dw_pcie_ep *ep; 1356 1122 struct dw_pcie *pci = imx_pcie->pci; 1357 1123 struct dw_pcie_rp *pp = &pci->pp; ··· 1359 1127 imx_pcie_host_init(pp); 1360 1128 ep = &pci->ep; 1361 1129 ep->ops = &pcie_ep_ops; 1362 - 1363 - switch (imx_pcie->drvdata->variant) { 1364 - case IMX8MQ_EP: 1365 - case IMX8MM_EP: 1366 - case IMX8MP_EP: 1367 - pcie_dbi2_offset = SZ_1M; 1368 - break; 1369 - default: 1370 - pcie_dbi2_offset = SZ_4K; 1371 - break; 1372 - } 1373 - 1374 - pci->dbi_base2 = pci->dbi_base + pcie_dbi2_offset; 1375 - 1376 - /* 1377 - * FIXME: Ideally, dbi2 base address should come from DT. But since only IMX95 is defining 1378 - * "dbi2" in DT, "dbi_base2" is set to NULL here for that platform alone so that the DWC 1379 - * core code can fetch that from DT. But once all platform DTs were fixed, this and the 1380 - * above "dbi_base2" setting should be removed. 
1381 - */ 1382 - if (device_property_match_string(dev, "reg-names", "dbi2") >= 0) 1383 - pci->dbi_base2 = NULL; 1384 1130 1385 1131 if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_SUPPORT_64BIT)) 1386 1132 dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)); ··· 1386 1176 return 0; 1387 1177 } 1388 1178 1389 - static void imx_pcie_pm_turnoff(struct imx_pcie *imx_pcie) 1390 - { 1391 - struct device *dev = imx_pcie->pci->dev; 1392 - 1393 - /* Some variants have a turnoff reset in DT */ 1394 - if (imx_pcie->turnoff_reset) { 1395 - reset_control_assert(imx_pcie->turnoff_reset); 1396 - reset_control_deassert(imx_pcie->turnoff_reset); 1397 - goto pm_turnoff_sleep; 1398 - } 1399 - 1400 - /* Others poke directly at IOMUXC registers */ 1401 - switch (imx_pcie->drvdata->variant) { 1402 - case IMX6SX: 1403 - case IMX6QP: 1404 - regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12, 1405 - IMX6SX_GPR12_PCIE_PM_TURN_OFF, 1406 - IMX6SX_GPR12_PCIE_PM_TURN_OFF); 1407 - regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12, 1408 - IMX6SX_GPR12_PCIE_PM_TURN_OFF, 0); 1409 - break; 1410 - default: 1411 - dev_err(dev, "PME_Turn_Off not implemented\n"); 1412 - return; 1413 - } 1414 - 1415 - /* 1416 - * Components with an upstream port must respond to 1417 - * PME_Turn_Off with PME_TO_Ack but we can't check. 1418 - * 1419 - * The standard recommends a 1-10ms timeout after which to 1420 - * proceed anyway as if acks were received. 
1421 - */ 1422 - pm_turnoff_sleep: 1423 - usleep_range(1000, 10000); 1424 - } 1425 - 1426 1179 static void imx_pcie_msi_save_restore(struct imx_pcie *imx_pcie, bool save) 1427 1180 { 1428 1181 u8 offset; ··· 1409 1236 static int imx_pcie_suspend_noirq(struct device *dev) 1410 1237 { 1411 1238 struct imx_pcie *imx_pcie = dev_get_drvdata(dev); 1412 - struct dw_pcie_rp *pp = &imx_pcie->pci->pp; 1413 1239 1414 1240 if (!(imx_pcie->drvdata->flags & IMX_PCIE_FLAG_SUPPORTS_SUSPEND)) 1415 1241 return 0; ··· 1423 1251 imx_pcie_assert_core_reset(imx_pcie); 1424 1252 imx_pcie->drvdata->enable_ref_clk(imx_pcie, false); 1425 1253 } else { 1426 - imx_pcie_pm_turnoff(imx_pcie); 1427 - imx_pcie_stop_link(imx_pcie->pci); 1428 - imx_pcie_host_exit(pp); 1254 + return dw_pcie_suspend_noirq(imx_pcie->pci); 1429 1255 } 1430 1256 1431 1257 return 0; ··· 1433 1263 { 1434 1264 int ret; 1435 1265 struct imx_pcie *imx_pcie = dev_get_drvdata(dev); 1436 - struct dw_pcie_rp *pp = &imx_pcie->pci->pp; 1437 1266 1438 1267 if (!(imx_pcie->drvdata->flags & IMX_PCIE_FLAG_SUPPORTS_SUSPEND)) 1439 1268 return 0; ··· 1444 1275 ret = imx_pcie_deassert_core_reset(imx_pcie); 1445 1276 if (ret) 1446 1277 return ret; 1278 + 1447 1279 /* 1448 1280 * Using PCIE_TEST_PD seems to disable MSI and powers down the 1449 1281 * root complex. 
This is why we have to setup the rc again and ··· 1453 1283 ret = dw_pcie_setup_rc(&imx_pcie->pci->pp); 1454 1284 if (ret) 1455 1285 return ret; 1456 - imx_pcie_msi_save_restore(imx_pcie, false); 1457 1286 } else { 1458 - ret = imx_pcie_host_init(pp); 1287 + ret = dw_pcie_resume_noirq(imx_pcie->pci); 1459 1288 if (ret) 1460 1289 return ret; 1461 - imx_pcie_msi_save_restore(imx_pcie, false); 1462 - dw_pcie_setup_rc(pp); 1463 - 1464 - if (imx_pcie->link_is_up) 1465 - imx_pcie_start_link(imx_pcie->pci); 1466 1290 } 1291 + imx_pcie_msi_save_restore(imx_pcie, false); 1467 1292 1468 1293 return 0; 1469 1294 } ··· 1476 1311 struct device_node *np; 1477 1312 struct resource *dbi_base; 1478 1313 struct device_node *node = dev->of_node; 1479 - int ret; 1314 + int i, ret, req_cnt; 1480 1315 u16 val; 1481 - int i; 1482 1316 1483 1317 imx_pcie = devm_kzalloc(dev, sizeof(*imx_pcie), GFP_KERNEL); 1484 1318 if (!imx_pcie) ··· 1489 1325 1490 1326 pci->dev = dev; 1491 1327 pci->ops = &dw_pcie_ops; 1492 - pci->pp.ops = &imx_pcie_host_ops; 1493 1328 1494 1329 imx_pcie->pci = pci; 1495 1330 imx_pcie->drvdata = of_device_get_match_data(dev); 1331 + 1332 + mutex_init(&imx_pcie->lock); 1333 + 1334 + if (imx_pcie->drvdata->ops) 1335 + pci->pp.ops = imx_pcie->drvdata->ops; 1336 + else 1337 + pci->pp.ops = &imx_pcie_host_dw_pme_ops; 1496 1338 1497 1339 /* Find the PHY if one is defined, only imx7d uses it */ 1498 1340 np = of_parse_phandle(node, "fsl,imx7d-pcie-phy", 0); ··· 1533 1363 imx_pcie->clks[i].id = imx_pcie->drvdata->clk_names[i]; 1534 1364 1535 1365 /* Fetch clocks */ 1536 - ret = devm_clk_bulk_get(dev, imx_pcie->drvdata->clks_cnt, imx_pcie->clks); 1366 + req_cnt = imx_pcie->drvdata->clks_cnt - imx_pcie->drvdata->clks_optional_cnt; 1367 + ret = devm_clk_bulk_get(dev, req_cnt, imx_pcie->clks); 1537 1368 if (ret) 1538 1369 return ret; 1370 + imx_pcie->clks[req_cnt].clk = devm_clk_get_optional(dev, "ref"); 1371 + if (IS_ERR(imx_pcie->clks[req_cnt].clk)) 1372 + return 
PTR_ERR(imx_pcie->clks[req_cnt].clk); 1539 1373 1540 1374 if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_HAS_PHYDRV)) { 1541 1375 imx_pcie->phy = devm_phy_get(dev, "pcie-phy"); ··· 1565 1391 switch (imx_pcie->drvdata->variant) { 1566 1392 case IMX8MQ: 1567 1393 case IMX8MQ_EP: 1568 - case IMX7D: 1569 1394 if (dbi_base->start == IMX8MQ_PCIE2_BASE_ADDR) 1570 1395 imx_pcie->controller_id = 1; 1571 1396 break; 1572 1397 default: 1573 1398 break; 1574 - } 1575 - 1576 - /* Grab turnoff reset */ 1577 - imx_pcie->turnoff_reset = devm_reset_control_get_optional_exclusive(dev, "turnoff"); 1578 - if (IS_ERR(imx_pcie->turnoff_reset)) { 1579 - dev_err(dev, "Failed to get TURNOFF reset control\n"); 1580 - return PTR_ERR(imx_pcie->turnoff_reset); 1581 1399 } 1582 1400 1583 1401 if (imx_pcie->drvdata->gpr) { ··· 1650 1484 if (ret < 0) 1651 1485 return ret; 1652 1486 } else { 1487 + pci->pp.use_atu_msg = true; 1653 1488 ret = dw_pcie_host_init(&pci->pp); 1654 1489 if (ret < 0) 1655 1490 return ret; ··· 1680 1513 static const char * const imx8mq_clks[] = {"pcie_bus", "pcie", "pcie_phy", "pcie_aux"}; 1681 1514 static const char * const imx6sx_clks[] = {"pcie_bus", "pcie", "pcie_phy", "pcie_inbound_axi"}; 1682 1515 static const char * const imx8q_clks[] = {"mstr", "slv", "dbi"}; 1516 + static const char * const imx95_clks[] = {"pcie_bus", "pcie", "pcie_phy", "pcie_aux", "ref"}; 1683 1517 1684 1518 static const struct imx_pcie_drvdata drvdata[] = { 1685 1519 [IMX6Q] = { ··· 1716 1548 .init_phy = imx6sx_pcie_init_phy, 1717 1549 .enable_ref_clk = imx6sx_pcie_enable_ref_clk, 1718 1550 .core_reset = imx6sx_pcie_core_reset, 1551 + .ops = &imx_pcie_host_ops, 1719 1552 }, 1720 1553 [IMX6QP] = { 1721 1554 .variant = IMX6QP, ··· 1734 1565 .init_phy = imx_pcie_init_phy, 1735 1566 .enable_ref_clk = imx6q_pcie_enable_ref_clk, 1736 1567 .core_reset = imx6qp_pcie_core_reset, 1568 + .ops = &imx_pcie_host_ops, 1737 1569 }, 1738 1570 [IMX7D] = { 1739 1571 .variant = IMX7D, ··· 1746 1576 .clks_cnt = 
ARRAY_SIZE(imx6q_clks), 1747 1577 .mode_off[0] = IOMUXC_GPR12, 1748 1578 .mode_mask[0] = IMX6Q_GPR12_DEVICE_TYPE, 1749 - .init_phy = imx7d_pcie_init_phy, 1750 1579 .enable_ref_clk = imx7d_pcie_enable_ref_clk, 1751 1580 .core_reset = imx7d_pcie_core_reset, 1752 1581 }, 1753 1582 [IMX8MQ] = { 1754 1583 .variant = IMX8MQ, 1755 1584 .flags = IMX_PCIE_FLAG_HAS_APP_RESET | 1756 - IMX_PCIE_FLAG_HAS_PHY_RESET, 1585 + IMX_PCIE_FLAG_HAS_PHY_RESET | 1586 + IMX_PCIE_FLAG_SUPPORTS_SUSPEND, 1757 1587 .gpr = "fsl,imx8mq-iomuxc-gpr", 1758 1588 .clk_names = imx8mq_clks, 1759 1589 .clks_cnt = ARRAY_SIZE(imx8mq_clks), ··· 1791 1621 [IMX8Q] = { 1792 1622 .variant = IMX8Q, 1793 1623 .flags = IMX_PCIE_FLAG_HAS_PHYDRV | 1794 - IMX_PCIE_FLAG_CPU_ADDR_FIXUP, 1624 + IMX_PCIE_FLAG_CPU_ADDR_FIXUP | 1625 + IMX_PCIE_FLAG_SUPPORTS_SUSPEND, 1795 1626 .clk_names = imx8q_clks, 1796 1627 .clks_cnt = ARRAY_SIZE(imx8q_clks), 1797 1628 }, 1798 1629 [IMX95] = { 1799 1630 .variant = IMX95, 1800 - .flags = IMX_PCIE_FLAG_HAS_SERDES, 1801 - .clk_names = imx8mq_clks, 1802 - .clks_cnt = ARRAY_SIZE(imx8mq_clks), 1631 + .flags = IMX_PCIE_FLAG_HAS_SERDES | 1632 + IMX_PCIE_FLAG_HAS_LUT | 1633 + IMX_PCIE_FLAG_SUPPORTS_SUSPEND, 1634 + .clk_names = imx95_clks, 1635 + .clks_cnt = ARRAY_SIZE(imx95_clks), 1636 + .clks_optional_cnt = 1, 1803 1637 .ltssm_off = IMX95_PE0_GEN_CTRL_3, 1804 1638 .ltssm_mask = IMX95_PCIE_LTSSM_EN, 1805 1639 .mode_off[0] = IMX95_PE0_GEN_CTRL_1, ··· 1852 1678 .epc_features = &imx8m_pcie_epc_features, 1853 1679 .enable_ref_clk = imx8mm_pcie_enable_ref_clk, 1854 1680 }, 1681 + [IMX8Q_EP] = { 1682 + .variant = IMX8Q_EP, 1683 + .flags = IMX_PCIE_FLAG_HAS_PHYDRV, 1684 + .mode = DW_PCIE_EP_TYPE, 1685 + .epc_features = &imx8q_pcie_epc_features, 1686 + .clk_names = imx8q_clks, 1687 + .clks_cnt = ARRAY_SIZE(imx8q_clks), 1688 + }, 1855 1689 [IMX95_EP] = { 1856 1690 .variant = IMX95_EP, 1857 1691 .flags = IMX_PCIE_FLAG_HAS_SERDES | ··· 1889 1707 { .compatible = "fsl,imx8mq-pcie-ep", .data = 
&drvdata[IMX8MQ_EP], }, 1890 1708 { .compatible = "fsl,imx8mm-pcie-ep", .data = &drvdata[IMX8MM_EP], }, 1891 1709 { .compatible = "fsl,imx8mp-pcie-ep", .data = &drvdata[IMX8MP_EP], }, 1710 + { .compatible = "fsl,imx8q-pcie-ep", .data = &drvdata[IMX8Q_EP], }, 1892 1711 { .compatible = "fsl,imx95-pcie-ep", .data = &drvdata[IMX95_EP], }, 1893 1712 {}, 1894 1713 };
drivers/pci/controller/dwc/pci-layerscape.c: +4 -6
··· 329 329 struct ls_pcie *pcie; 330 330 struct resource *dbi_base; 331 331 u32 index[2]; 332 - int ret; 333 332 334 333 pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL); 335 334 if (!pcie) ··· 354 355 pcie->pf_lut_base = pci->dbi_base + pcie->drvdata->pf_lut_off; 355 356 356 357 if (pcie->drvdata->scfg_support) { 357 - pcie->scfg = syscon_regmap_lookup_by_phandle(dev->of_node, "fsl,pcie-scfg"); 358 + pcie->scfg = 359 + syscon_regmap_lookup_by_phandle_args(dev->of_node, 360 + "fsl,pcie-scfg", 2, 361 + index); 358 362 if (IS_ERR(pcie->scfg)) { 359 363 dev_err(dev, "No syscfg phandle specified\n"); 360 364 return PTR_ERR(pcie->scfg); 361 365 } 362 - 363 - ret = of_property_read_u32_array(dev->of_node, "fsl,pcie-scfg", index, 2); 364 - if (ret) 365 - return ret; 366 366 367 367 pcie->index = index[1]; 368 368 }
drivers/pci/controller/dwc/pcie-artpec6.c: +13
··· 369 369 return 0; 370 370 } 371 371 372 + static const struct pci_epc_features artpec6_pcie_epc_features = { 373 + .linkup_notifier = false, 374 + .msi_capable = true, 375 + .msix_capable = false, 376 + }; 377 + 378 + static const struct pci_epc_features * 379 + artpec6_pcie_get_features(struct dw_pcie_ep *ep) 380 + { 381 + return &artpec6_pcie_epc_features; 382 + } 383 + 372 384 static const struct dw_pcie_ep_ops pcie_ep_ops = { 373 385 .init = artpec6_pcie_ep_init, 374 386 .raise_irq = artpec6_pcie_raise_irq, 387 + .get_features = artpec6_pcie_get_features, 375 388 }; 376 389 377 390 static int artpec6_pcie_probe(struct platform_device *pdev)
drivers/pci/controller/dwc/pcie-designware-ep.c: +39 -15
··· 128 128 } 129 129 130 130 static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type, 131 - dma_addr_t cpu_addr, enum pci_barno bar) 131 + dma_addr_t cpu_addr, enum pci_barno bar, 132 + size_t size) 132 133 { 133 134 int ret; 134 135 u32 free_win; ··· 146 145 } 147 146 148 147 ret = dw_pcie_prog_ep_inbound_atu(pci, func_no, free_win, type, 149 - cpu_addr, bar); 148 + cpu_addr, bar, size); 150 149 if (ret < 0) { 151 150 dev_err(pci->dev, "Failed to program IB window\n"); 152 151 return ret; ··· 223 222 if ((flags & PCI_BASE_ADDRESS_MEM_TYPE_64) && (bar & 1)) 224 223 return -EINVAL; 225 224 225 + /* 226 + * Certain EPF drivers dynamically change the physical address of a BAR 227 + * (i.e. they call set_bar() twice, without ever calling clear_bar(), as 228 + * calling clear_bar() would clear the BAR's PCI address assigned by the 229 + * host). 230 + */ 231 + if (ep->epf_bar[bar]) { 232 + /* 233 + * We can only dynamically change a BAR if the new BAR size and 234 + * BAR flags do not differ from the existing configuration. 235 + */ 236 + if (ep->epf_bar[bar]->barno != bar || 237 + ep->epf_bar[bar]->size != size || 238 + ep->epf_bar[bar]->flags != flags) 239 + return -EINVAL; 240 + 241 + /* 242 + * When dynamically changing a BAR, skip writing the BAR reg, as 243 + * that would clear the BAR's PCI address assigned by the host. 
244 + */ 245 + goto config_atu; 246 + } 247 + 226 248 reg = PCI_BASE_ADDRESS_0 + (4 * bar); 227 - 228 - if (!(flags & PCI_BASE_ADDRESS_SPACE)) 229 - type = PCIE_ATU_TYPE_MEM; 230 - else 231 - type = PCIE_ATU_TYPE_IO; 232 - 233 - ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar); 234 - if (ret) 235 - return ret; 236 - 237 - if (ep->epf_bar[bar]) 238 - return 0; 239 249 240 250 dw_pcie_dbi_ro_wr_en(pci); 241 251 ··· 258 246 dw_pcie_ep_writel_dbi(ep, func_no, reg + 4, 0); 259 247 } 260 248 261 - ep->epf_bar[bar] = epf_bar; 262 249 dw_pcie_dbi_ro_wr_dis(pci); 250 + 251 + config_atu: 252 + if (!(flags & PCI_BASE_ADDRESS_SPACE)) 253 + type = PCIE_ATU_TYPE_MEM; 254 + else 255 + type = PCIE_ATU_TYPE_IO; 256 + 257 + ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar, 258 + size); 259 + if (ret) 260 + return ret; 261 + 262 + ep->epf_bar[bar] = epf_bar; 263 263 264 264 return 0; 265 265 }
drivers/pci/controller/dwc/pcie-designware-host.c: +35 -21
··· 436 436 return ret; 437 437 438 438 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "config"); 439 - if (res) { 440 - pp->cfg0_size = resource_size(res); 441 - pp->cfg0_base = res->start; 442 - 443 - pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res); 444 - if (IS_ERR(pp->va_cfg0_base)) 445 - return PTR_ERR(pp->va_cfg0_base); 446 - } else { 447 - dev_err(dev, "Missing *config* reg space\n"); 439 + if (!res) { 440 + dev_err(dev, "Missing \"config\" reg space\n"); 448 441 return -ENODEV; 449 442 } 443 + 444 + pp->cfg0_size = resource_size(res); 445 + pp->cfg0_base = res->start; 446 + 447 + pp->va_cfg0_base = devm_pci_remap_cfg_resource(dev, res); 448 + if (IS_ERR(pp->va_cfg0_base)) 449 + return PTR_ERR(pp->va_cfg0_base); 450 450 451 451 bridge = devm_pci_alloc_host_bridge(dev, 0); 452 452 if (!bridge) ··· 530 530 goto err_remove_edma; 531 531 } 532 532 533 - /* Ignore errors, the link may come up later */ 534 - dw_pcie_wait_for_link(pci); 533 + /* 534 + * Note: Skip the link up delay only when a Link Up IRQ is present. 535 + * If there is no Link Up IRQ, we should not bypass the delay 536 + * because that would require users to manually rescan for devices. 
537 + */ 538 + if (!pp->use_linkup_irq) 539 + /* Ignore errors, the link may come up later */ 540 + dw_pcie_wait_for_link(pci); 535 541 536 542 bridge->sysdata = pp; 537 543 ··· 924 918 { 925 919 u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 926 920 u32 val; 927 - int ret = 0; 921 + int ret; 928 922 929 923 /* 930 924 * If L1SS is supported, then do not put the link into L2 as some ··· 933 927 if (dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKCTL) & PCI_EXP_LNKCTL_ASPM_L1) 934 928 return 0; 935 929 936 - if (dw_pcie_get_ltssm(pci) <= DW_PCIE_LTSSM_DETECT_ACT) 937 - return 0; 938 - 939 - if (pci->pp.ops->pme_turn_off) 930 + if (pci->pp.ops->pme_turn_off) { 940 931 pci->pp.ops->pme_turn_off(&pci->pp); 941 - else 932 + } else { 942 933 ret = dw_pcie_pme_turn_off(pci); 934 + if (ret) 935 + return ret; 936 + } 943 937 944 - if (ret) 945 - return ret; 946 - 947 - ret = read_poll_timeout(dw_pcie_get_ltssm, val, val == DW_PCIE_LTSSM_L2_IDLE, 938 + ret = read_poll_timeout(dw_pcie_get_ltssm, val, 939 + val == DW_PCIE_LTSSM_L2_IDLE || 940 + val <= DW_PCIE_LTSSM_DETECT_WAIT, 948 941 PCIE_PME_TO_L2_TIMEOUT_US/10, 949 942 PCIE_PME_TO_L2_TIMEOUT_US, false, pci); 950 943 if (ret) { 944 + /* Only log message when LTSSM isn't in DETECT or POLL */ 951 945 dev_err(pci->dev, "Timeout waiting for L2 entry! LTSSM: 0x%x\n", val); 952 946 return ret; 953 947 } 954 948 949 + /* 950 + * Per PCIe r6.0, sec 5.3.3.2.1, software should wait at least 951 + * 100ns after L2/L3 Ready before turning off refclock and 952 + * main power. This is harmless when no endpoint is connected. 953 + */ 954 + udelay(1); 955 + 956 + dw_pcie_stop_link(pci); 955 957 if (pci->pp.ops->deinit) 956 958 pci->pp.ops->deinit(&pci->pp); 957 959
drivers/pci/controller/dwc/pcie-designware.c: +4 -3
··· 597 597 } 598 598 599 599 int dw_pcie_prog_ep_inbound_atu(struct dw_pcie *pci, u8 func_no, int index, 600 - int type, u64 cpu_addr, u8 bar) 600 + int type, u64 cpu_addr, u8 bar, size_t size) 601 601 { 602 602 u32 retries, val; 603 603 604 - if (!IS_ALIGNED(cpu_addr, pci->region_align)) 604 + if (!IS_ALIGNED(cpu_addr, pci->region_align) || 605 + !IS_ALIGNED(cpu_addr, size)) 605 606 return -EINVAL; 606 607 607 608 dw_pcie_writel_atu_ib(pci, index, PCIE_ATU_LOWER_TARGET, ··· 971 970 { 972 971 struct platform_device *pdev = to_platform_device(pci->dev); 973 972 u16 ch_cnt = pci->edma.ll_wr_cnt + pci->edma.ll_rd_cnt; 974 - char name[6]; 973 + char name[15]; 975 974 int ret; 976 975 977 976 if (pci->edma.nr_irqs == 1)
drivers/pci/controller/dwc/pcie-designware.h: +15 -4
··· 330 330 /* Need to align with PCIE_PORT_DEBUG0 bits 0:5 */ 331 331 DW_PCIE_LTSSM_DETECT_QUIET = 0x0, 332 332 DW_PCIE_LTSSM_DETECT_ACT = 0x1, 333 + DW_PCIE_LTSSM_DETECT_WAIT = 0x6, 333 334 DW_PCIE_LTSSM_L0 = 0x11, 334 335 DW_PCIE_LTSSM_L2_IDLE = 0x15, 335 336 ··· 380 379 bool use_atu_msg; 381 380 int msg_atu_index; 382 381 struct resource *msg_res; 382 + bool use_linkup_irq; 383 383 }; 384 384 385 385 struct dw_pcie_ep_ops { ··· 493 491 int dw_pcie_prog_inbound_atu(struct dw_pcie *pci, int index, int type, 494 492 u64 cpu_addr, u64 pci_addr, u64 size); 495 493 int dw_pcie_prog_ep_inbound_atu(struct dw_pcie *pci, u8 func_no, int index, 496 - int type, u64 cpu_addr, u8 bar); 494 + int type, u64 cpu_addr, u8 bar, size_t size); 497 495 void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index); 498 496 void dw_pcie_setup(struct dw_pcie *pci); 499 497 void dw_pcie_iatu_detect(struct dw_pcie *pci); 500 498 int dw_pcie_edma_detect(struct dw_pcie *pci); 501 499 void dw_pcie_edma_remove(struct dw_pcie *pci); 502 - 503 - int dw_pcie_suspend_noirq(struct dw_pcie *pci); 504 - int dw_pcie_resume_noirq(struct dw_pcie *pci); 505 500 506 501 static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val) 507 502 { ··· 677 678 } 678 679 679 680 #ifdef CONFIG_PCIE_DW_HOST 681 + int dw_pcie_suspend_noirq(struct dw_pcie *pci); 682 + int dw_pcie_resume_noirq(struct dw_pcie *pci); 680 683 irqreturn_t dw_handle_msi_irq(struct dw_pcie_rp *pp); 681 684 int dw_pcie_setup_rc(struct dw_pcie_rp *pp); 682 685 int dw_pcie_host_init(struct dw_pcie_rp *pp); ··· 687 686 void __iomem *dw_pcie_own_conf_map_bus(struct pci_bus *bus, unsigned int devfn, 688 687 int where); 689 688 #else 689 + static inline int dw_pcie_suspend_noirq(struct dw_pcie *pci) 690 + { 691 + return 0; 692 + } 693 + 694 + static inline int dw_pcie_resume_noirq(struct dw_pcie *pci) 695 + { 696 + return 0; 697 + } 698 + 690 699 static inline irqreturn_t dw_handle_msi_irq(struct dw_pcie_rp *pp) 691 700 { 692 
701 return IRQ_NONE;
drivers/pci/controller/dwc/pcie-dw-rockchip.c: +61 -8
··· 389 389 .stop_link = rockchip_pcie_stop_link, 390 390 }; 391 391 392 + static irqreturn_t rockchip_pcie_rc_sys_irq_thread(int irq, void *arg) 393 + { 394 + struct rockchip_pcie *rockchip = arg; 395 + struct dw_pcie *pci = &rockchip->pci; 396 + struct dw_pcie_rp *pp = &pci->pp; 397 + struct device *dev = pci->dev; 398 + u32 reg, val; 399 + 400 + reg = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_INTR_STATUS_MISC); 401 + rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC); 402 + 403 + dev_dbg(dev, "PCIE_CLIENT_INTR_STATUS_MISC: %#x\n", reg); 404 + dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm(rockchip)); 405 + 406 + if (reg & PCIE_RDLH_LINK_UP_CHGED) { 407 + val = rockchip_pcie_get_ltssm(rockchip); 408 + if ((val & PCIE_LINKUP) == PCIE_LINKUP) { 409 + dev_dbg(dev, "Received Link up event. Starting enumeration!\n"); 410 + /* Rescan the bus to enumerate endpoint devices */ 411 + pci_lock_rescan_remove(); 412 + pci_rescan_bus(pp->bridge->bus); 413 + pci_unlock_rescan_remove(); 414 + } 415 + } 416 + 417 + return IRQ_HANDLED; 418 + } 419 + 392 420 static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg) 393 421 { 394 422 struct rockchip_pcie *rockchip = arg; ··· 446 418 return IRQ_HANDLED; 447 419 } 448 420 449 - static int rockchip_pcie_configure_rc(struct rockchip_pcie *rockchip) 421 + static int rockchip_pcie_configure_rc(struct platform_device *pdev, 422 + struct rockchip_pcie *rockchip) 450 423 { 424 + struct device *dev = &pdev->dev; 451 425 struct dw_pcie_rp *pp; 426 + int irq, ret; 452 427 u32 val; 453 428 454 429 if (!IS_ENABLED(CONFIG_PCIE_ROCKCHIP_DW_HOST)) 455 430 return -ENODEV; 431 + 432 + irq = platform_get_irq_byname(pdev, "sys"); 433 + if (irq < 0) 434 + return irq; 435 + 436 + ret = devm_request_threaded_irq(dev, irq, NULL, 437 + rockchip_pcie_rc_sys_irq_thread, 438 + IRQF_ONESHOT, "pcie-sys-rc", rockchip); 439 + if (ret) { 440 + dev_err(dev, "failed to request PCIe sys IRQ\n"); 441 + return ret; 442 + } 
456 443 457 444 /* LTSSM enable control mode */ 458 445 val = HIWORD_UPDATE_BIT(PCIE_LTSSM_ENABLE_ENHANCE); ··· 478 435 479 436 pp = &rockchip->pci.pp; 480 437 pp->ops = &rockchip_pcie_host_ops; 438 + pp->use_linkup_irq = true; 481 439 482 - return dw_pcie_host_init(pp); 440 + ret = dw_pcie_host_init(pp); 441 + if (ret) { 442 + dev_err(dev, "failed to initialize host\n"); 443 + return ret; 444 + } 445 + 446 + /* unmask DLL up/down indicator */ 447 + val = HIWORD_UPDATE(PCIE_RDLH_LINK_UP_CHGED, 0); 448 + rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_INTR_MASK_MISC); 449 + 450 + return ret; 483 451 } 484 452 485 453 static int rockchip_pcie_configure_ep(struct platform_device *pdev, ··· 504 450 return -ENODEV; 505 451 506 452 irq = platform_get_irq_byname(pdev, "sys"); 507 - if (irq < 0) { 508 - dev_err(dev, "missing sys IRQ resource\n"); 453 + if (irq < 0) 509 454 return irq; 510 - } 511 455 512 456 ret = devm_request_threaded_irq(dev, irq, NULL, 513 457 rockchip_pcie_ep_sys_irq_thread, 514 - IRQF_ONESHOT, "pcie-sys", rockchip); 458 + IRQF_ONESHOT, "pcie-sys-ep", rockchip); 515 459 if (ret) { 516 460 dev_err(dev, "failed to request PCIe sys IRQ\n"); 517 461 return ret; ··· 543 491 pci_epc_init_notify(rockchip->pci.ep.epc); 544 492 545 493 /* unmask DLL up/down indicator and hot reset/link-down reset */ 546 - rockchip_pcie_writel_apb(rockchip, 0x60000, PCIE_CLIENT_INTR_MASK_MISC); 494 + val = HIWORD_UPDATE(PCIE_RDLH_LINK_UP_CHGED | PCIE_LINK_REQ_RST_NOT_INT, 0); 495 + rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_INTR_MASK_MISC); 547 496 548 497 return ret; 549 498 } ··· 606 553 607 554 switch (data->mode) { 608 555 case DW_PCIE_RC_TYPE: 609 - ret = rockchip_pcie_configure_rc(rockchip); 556 + ret = rockchip_pcie_configure_rc(pdev, rockchip); 610 557 if (ret) 611 558 goto deinit_clk; 612 559 break;
+6 -1
drivers/pci/controller/dwc/pcie-qcom.c
··· 1569 1569 pci_lock_rescan_remove(); 1570 1570 pci_rescan_bus(pp->bridge->bus); 1571 1571 pci_unlock_rescan_remove(); 1572 + 1573 + qcom_pcie_icc_opp_update(pcie); 1572 1574 } else { 1573 1575 dev_WARN_ONCE(dev, 1, "Received unknown event. INT_STATUS: 0x%08x\n", 1574 1576 status); ··· 1705 1703 1706 1704 platform_set_drvdata(pdev, pcie); 1707 1705 1706 + irq = platform_get_irq_byname_optional(pdev, "global"); 1707 + if (irq > 0) 1708 + pp->use_linkup_irq = true; 1709 + 1708 1710 ret = dw_pcie_host_init(pp); 1709 1711 if (ret) { 1710 1712 dev_err(dev, "cannot initialize host\n"); ··· 1722 1716 goto err_host_deinit; 1723 1717 } 1724 1718 1725 - irq = platform_get_irq_byname_optional(pdev, "global"); 1726 1719 if (irq > 0) { 1727 1720 ret = devm_request_threaded_irq(&pdev->dev, irq, NULL, 1728 1721 qcom_pcie_global_irq_thread,
+2
drivers/pci/controller/pci-host-common.c
··· 75 75 76 76 bridge->sysdata = cfg; 77 77 bridge->ops = (struct pci_ops *)&ops->pci_ops; 78 + bridge->enable_device = ops->enable_device; 79 + bridge->disable_device = ops->disable_device; 78 80 bridge->msi_domain = true; 79 81 80 82 return pci_host_probe(bridge);
+1
drivers/pci/controller/pci-mvebu.c
··· 1715 1715 { .compatible = "marvell,kirkwood-pcie", }, 1716 1716 {}, 1717 1717 }; 1718 + MODULE_DEVICE_TABLE(of, mvebu_pcie_of_match_table); 1718 1719 1719 1720 static const struct dev_pm_ops mvebu_pcie_pm_ops = { 1720 1721 NOIRQ_SYSTEM_SLEEP_PM_OPS(mvebu_pcie_suspend, mvebu_pcie_resume)
+15 -60
drivers/pci/controller/pcie-apple.c
··· 26 26 #include <linux/list.h> 27 27 #include <linux/module.h> 28 28 #include <linux/msi.h> 29 - #include <linux/notifier.h> 30 29 #include <linux/of_irq.h> 31 30 #include <linux/pci-ecam.h> 32 31 ··· 666 667 return NULL; 667 668 } 668 669 669 - static int apple_pcie_add_device(struct apple_pcie_port *port, 670 - struct pci_dev *pdev) 670 + static int apple_pcie_enable_device(struct pci_host_bridge *bridge, struct pci_dev *pdev) 671 671 { 672 672 u32 sid, rid = pci_dev_id(pdev); 673 + struct apple_pcie_port *port; 673 674 int idx, err; 675 + 676 + port = apple_pcie_get_port(pdev); 677 + if (!port) 678 + return 0; 674 679 675 680 dev_dbg(&pdev->dev, "added to bus %s, index %d\n", 676 681 pci_name(pdev->bus->self), port->idx); ··· 701 698 return idx >= 0 ? 0 : -ENOSPC; 702 699 } 703 700 704 - static void apple_pcie_release_device(struct apple_pcie_port *port, 705 - struct pci_dev *pdev) 701 + static void apple_pcie_disable_device(struct pci_host_bridge *bridge, struct pci_dev *pdev) 706 702 { 703 + struct apple_pcie_port *port; 707 704 u32 rid = pci_dev_id(pdev); 708 705 int idx; 706 + 707 + port = apple_pcie_get_port(pdev); 708 + if (!port) 709 + return; 709 710 710 711 mutex_lock(&port->pcie->lock); 711 712 ··· 727 720 728 721 mutex_unlock(&port->pcie->lock); 729 722 } 730 - 731 - static int apple_pcie_bus_notifier(struct notifier_block *nb, 732 - unsigned long action, 733 - void *data) 734 - { 735 - struct device *dev = data; 736 - struct pci_dev *pdev = to_pci_dev(dev); 737 - struct apple_pcie_port *port; 738 - int err; 739 - 740 - /* 741 - * This is a bit ugly. We assume that if we get notified for 742 - * any PCI device, we must be in charge of it, and that there 743 - * is no other PCI controller in the whole system. It probably 744 - * holds for now, but who knows for how long? 
745 - */ 746 - port = apple_pcie_get_port(pdev); 747 - if (!port) 748 - return NOTIFY_DONE; 749 - 750 - switch (action) { 751 - case BUS_NOTIFY_ADD_DEVICE: 752 - err = apple_pcie_add_device(port, pdev); 753 - if (err) 754 - return notifier_from_errno(err); 755 - break; 756 - case BUS_NOTIFY_DEL_DEVICE: 757 - apple_pcie_release_device(port, pdev); 758 - break; 759 - default: 760 - return NOTIFY_DONE; 761 - } 762 - 763 - return NOTIFY_OK; 764 - } 765 - 766 - static struct notifier_block apple_pcie_nb = { 767 - .notifier_call = apple_pcie_bus_notifier, 768 - }; 769 723 770 724 static int apple_pcie_init(struct pci_config_window *cfg) 771 725 { ··· 767 799 return 0; 768 800 } 769 801 770 - static int apple_pcie_probe(struct platform_device *pdev) 771 - { 772 - int ret; 773 - 774 - ret = bus_register_notifier(&pci_bus_type, &apple_pcie_nb); 775 - if (ret) 776 - return ret; 777 - 778 - ret = pci_host_common_probe(pdev); 779 - if (ret) 780 - bus_unregister_notifier(&pci_bus_type, &apple_pcie_nb); 781 - 782 - return ret; 783 - } 784 - 785 802 static const struct pci_ecam_ops apple_pcie_cfg_ecam_ops = { 786 803 .init = apple_pcie_init, 804 + .enable_device = apple_pcie_enable_device, 805 + .disable_device = apple_pcie_disable_device, 787 806 .pci_ops = { 788 807 .map_bus = pci_ecam_map_bus, 789 808 .read = pci_generic_config_read, ··· 785 830 MODULE_DEVICE_TABLE(of, apple_pcie_of_match); 786 831 787 832 static struct platform_driver apple_pcie_driver = { 788 - .probe = apple_pcie_probe, 833 + .probe = pci_host_common_probe, 789 834 .driver = { 790 835 .name = "pcie-apple", 791 836 .of_match_table = apple_pcie_of_match,
+75 -40
drivers/pci/controller/pcie-mediatek-gen3.c
··· 125 125 126 126 #define MAX_NUM_PHY_RESETS 3 127 127 128 + #define PCIE_MTK_RESET_TIME_US 10 129 + 128 130 /* Time in ms needed to complete PCIe reset on EN7581 SoC */ 129 131 #define PCIE_EN7581_RESET_TIME_MS 100 130 132 ··· 135 133 #define PCIE_CONF_LINK2_CTL_STS (PCIE_CFG_OFFSET_ADDR + 0xb0) 136 134 #define PCIE_CONF_LINK2_LCR2_LINK_SPEED GENMASK(3, 0) 137 135 136 + enum mtk_gen3_pcie_flags { 137 + SKIP_PCIE_RSTB = BIT(0), /* Skip PERST# assertion during device 138 + * probing or suspend/resume phase to 139 + * avoid hw bugs/issues. 140 + */ 141 + }; 142 + 138 143 /** 139 144 * struct mtk_gen3_pcie_pdata - differentiate between host generations 140 145 * @power_up: pcie power_up callback 141 146 * @phy_resets: phy reset lines SoC data. 147 + * @flags: pcie device flags. 142 148 */ 143 149 struct mtk_gen3_pcie_pdata { 144 150 int (*power_up)(struct mtk_gen3_pcie *pcie); ··· 154 144 const char *id[MAX_NUM_PHY_RESETS]; 155 145 int num_resets; 156 146 } phy_resets; 147 + u32 flags; 157 148 }; 158 149 159 150 /** ··· 449 438 val |= PCIE_DISABLE_DVFSRC_VLT_REQ; 450 439 writel_relaxed(val, pcie->base + PCIE_MISC_CTRL_REG); 451 440 452 - /* Assert all reset signals */ 453 - val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG); 454 - val |= PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | PCIE_PE_RSTB; 455 - writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG); 456 - 457 441 /* 458 - * Described in PCIe CEM specification sections 2.2 (PERST# Signal) 459 - * and 2.2.1 (Initial Power-Up (G3 to S0)). 460 - * The deassertion of PERST# should be delayed 100ms (TPVPERL) 461 - * for the power and clock to become stable. 442 + * Airoha EN7581 has a hw bug asserting/releasing PCIE_PE_RSTB signal 443 + * causing occasional PCIe link down. In order to overcome the issue, 444 + * PCIE_RSTB signals are not asserted/released at this stage and the 445 + * PCIe block is reset using en7523_reset_assert() and 446 + * en7581_pci_enable(). 
462 447 */ 463 - msleep(100); 448 + if (!(pcie->soc->flags & SKIP_PCIE_RSTB)) { 449 + /* Assert all reset signals */ 450 + val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG); 451 + val |= PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | 452 + PCIE_PE_RSTB; 453 + writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG); 464 454 465 - /* De-assert reset signals */ 466 - val &= ~(PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | PCIE_PE_RSTB); 467 - writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG); 455 + /* 456 + * Described in PCIe CEM specification revision 6.0. 457 + * 458 + * The deassertion of PERST# should be delayed 100ms (TPVPERL) 459 + * for the power and clock to become stable. 460 + */ 461 + msleep(PCIE_T_PVPERL_MS); 462 + 463 + /* De-assert reset signals */ 464 + val &= ~(PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | 465 + PCIE_PE_RSTB); 466 + writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG); 467 + } 468 468 469 469 /* Check if the link is up or not */ 470 470 err = readl_poll_timeout(pcie->base + PCIE_LINK_STATUS_REG, val, ··· 935 913 u32 val; 936 914 937 915 /* 938 - * Wait for the time needed to complete the bulk assert in 939 - * mtk_pcie_setup for EN7581 SoC. 916 + * The controller may have been left out of reset by the bootloader 917 + * so make sure that we get a clean start by asserting resets here. 940 918 */ 941 - mdelay(PCIE_EN7581_RESET_TIME_MS); 919 + reset_control_bulk_assert(pcie->soc->phy_resets.num_resets, 920 + pcie->phy_resets); 921 + reset_control_assert(pcie->mac_reset); 942 922 923 + /* Wait for the time needed to complete the reset lines assert. */ 924 + msleep(PCIE_EN7581_RESET_TIME_MS); 925 + 926 + /* 927 + * Unlike the other MediaTek Gen3 controllers, the Airoha EN7581 928 + * requires PHY initialization and power-on before PHY reset deassert. 
929 + */ 943 930 err = phy_init(pcie->phy); 944 931 if (err) { 945 932 dev_err(dev, "failed to initialize PHY\n"); ··· 971 940 * Wait for the time needed to complete the bulk de-assert above. 972 941 * This time is specific for EN7581 SoC. 973 942 */ 974 - mdelay(PCIE_EN7581_RESET_TIME_MS); 943 + msleep(PCIE_EN7581_RESET_TIME_MS); 975 944 976 945 pm_runtime_enable(dev); 977 946 pm_runtime_get_sync(dev); 978 - 979 - err = clk_bulk_prepare(pcie->num_clks, pcie->clks); 980 - if (err) { 981 - dev_err(dev, "failed to prepare clock\n"); 982 - goto err_clk_prepare; 983 - } 984 947 985 948 val = FIELD_PREP(PCIE_VAL_LN0_DOWNSTREAM, 0x47) | 986 949 FIELD_PREP(PCIE_VAL_LN1_DOWNSTREAM, 0x47) | ··· 988 963 FIELD_PREP(PCIE_K_FINETUNE_MAX, 0xf); 989 964 writel_relaxed(val, pcie->base + PCIE_PIPE4_PIE8_REG); 990 965 991 - err = clk_bulk_enable(pcie->num_clks, pcie->clks); 966 + err = clk_bulk_prepare_enable(pcie->num_clks, pcie->clks); 992 967 if (err) { 993 968 dev_err(dev, "failed to prepare clock\n"); 994 - goto err_clk_enable; 969 + goto err_clk_prepare_enable; 995 970 } 971 + 972 + /* 973 + * Airoha EN7581 performs PCIe reset via clk callbacks since it has a 974 + * hw issue with PCIE_PE_RSTB signal. Add wait for the time needed to 975 + * complete the PCIe reset. 976 + */ 977 + msleep(PCIE_T_PVPERL_MS); 996 978 997 979 return 0; 998 980 999 - err_clk_enable: 1000 - clk_bulk_unprepare(pcie->num_clks, pcie->clks); 1001 - err_clk_prepare: 981 + err_clk_prepare_enable: 1002 982 pm_runtime_put_sync(dev); 1003 983 pm_runtime_disable(dev); 1004 984 reset_control_bulk_assert(pcie->soc->phy_resets.num_resets, pcie->phy_resets); ··· 1019 989 { 1020 990 struct device *dev = pcie->dev; 1021 991 int err; 992 + 993 + /* 994 + * The controller may have been left out of reset by the bootloader 995 + * so make sure that we get a clean start by asserting resets here. 
996 + */ 997 + reset_control_bulk_assert(pcie->soc->phy_resets.num_resets, 998 + pcie->phy_resets); 999 + reset_control_assert(pcie->mac_reset); 1000 + usleep_range(PCIE_MTK_RESET_TIME_US, 2 * PCIE_MTK_RESET_TIME_US); 1022 1001 1023 1002 /* PHY power on and enable pipe clock */ 1024 1003 err = reset_control_bulk_deassert(pcie->soc->phy_resets.num_resets, pcie->phy_resets); ··· 1113 1074 * counter since the bulk is shared. 1114 1075 */ 1115 1076 reset_control_bulk_deassert(pcie->soc->phy_resets.num_resets, pcie->phy_resets); 1116 - /* 1117 - * The controller may have been left out of reset by the bootloader 1118 - * so make sure that we get a clean start by asserting resets here. 1119 - */ 1120 - reset_control_bulk_assert(pcie->soc->phy_resets.num_resets, pcie->phy_resets); 1121 - 1122 - reset_control_assert(pcie->mac_reset); 1123 - usleep_range(10, 20); 1124 1077 1125 1078 /* Don't touch the hardware registers before power up */ 1126 1079 err = pcie->soc->power_up(pcie); ··· 1262 1231 return err; 1263 1232 } 1264 1233 1265 - /* Pull down the PERST# pin */ 1266 - val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG); 1267 - val |= PCIE_PE_RSTB; 1268 - writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG); 1234 + if (!(pcie->soc->flags & SKIP_PCIE_RSTB)) { 1235 + /* Assert the PERST# pin */ 1236 + val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG); 1237 + val |= PCIE_PE_RSTB; 1238 + writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG); 1239 + } 1269 1240 1270 1241 dev_dbg(pcie->dev, "entered L2 states successfully"); 1271 1242 ··· 1318 1285 .id[2] = "phy-lane2", 1319 1286 .num_resets = 3, 1320 1287 }, 1288 + .flags = SKIP_PCIE_RSTB, 1321 1289 }; 1322 1290 1323 1291 static const struct of_device_id mtk_pcie_of_match[] = { ··· 1335 1301 .name = "mtk-pcie-gen3", 1336 1302 .of_match_table = mtk_pcie_of_match, 1337 1303 .pm = &mtk_pcie_pm_ops, 1304 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 1338 1305 }, 1339 1306 }; 1340 1307
+1 -1
drivers/pci/controller/pcie-rcar-ep.c
··· 107 107 } 108 108 if (!devm_request_mem_region(&pdev->dev, res->start, 109 109 resource_size(res), 110 - outbound_name)) { 110 + res->name)) { 111 111 dev_err(pcie->dev, "Cannot request memory region %s.\n", 112 112 outbound_name); 113 113 return -EIO;
+5
drivers/pci/controller/pcie-rockchip-ep.c
··· 40 40 * @irq_pci_fn: the latest PCI function that has updated the mapping of 41 41 * the MSI/INTX IRQ dedicated outbound region. 42 42 * @irq_pending: bitmask of asserted INTX IRQs. 43 + * @perst_irq: IRQ used for the PERST# signal. 44 + * @perst_asserted: True if the PERST# signal was asserted. 45 + * @link_up: True if the PCI link is up. 46 + * @link_training: Work item to execute PCI link training. 43 47 */ 44 48 struct rockchip_pcie_ep { 45 49 struct rockchip_pcie rockchip; ··· 788 784 SZ_1M); 789 785 if (!ep->irq_cpu_addr) { 790 786 dev_err(dev, "failed to reserve memory space for MSI\n"); 787 + err = -ENOMEM; 791 788 goto err_epc_mem_exit; 792 789 } 793 790
+37 -182
drivers/pci/controller/pcie-rockchip.c
··· 30 30 struct platform_device *pdev = to_platform_device(dev); 31 31 struct device_node *node = dev->of_node; 32 32 struct resource *regs; 33 - int err; 33 + int err, i; 34 34 35 35 if (rockchip->is_rc) { 36 36 regs = platform_get_resource_byname(pdev, ··· 69 69 if (rockchip->link_gen < 0 || rockchip->link_gen > 2) 70 70 rockchip->link_gen = 2; 71 71 72 - rockchip->core_rst = devm_reset_control_get_exclusive(dev, "core"); 73 - if (IS_ERR(rockchip->core_rst)) { 74 - if (PTR_ERR(rockchip->core_rst) != -EPROBE_DEFER) 75 - dev_err(dev, "missing core reset property in node\n"); 76 - return PTR_ERR(rockchip->core_rst); 77 - } 72 + for (i = 0; i < ROCKCHIP_NUM_PM_RSTS; i++) 73 + rockchip->pm_rsts[i].id = rockchip_pci_pm_rsts[i]; 78 74 79 - rockchip->mgmt_rst = devm_reset_control_get_exclusive(dev, "mgmt"); 80 - if (IS_ERR(rockchip->mgmt_rst)) { 81 - if (PTR_ERR(rockchip->mgmt_rst) != -EPROBE_DEFER) 82 - dev_err(dev, "missing mgmt reset property in node\n"); 83 - return PTR_ERR(rockchip->mgmt_rst); 84 - } 75 + err = devm_reset_control_bulk_get_exclusive(dev, 76 + ROCKCHIP_NUM_PM_RSTS, 77 + rockchip->pm_rsts); 78 + if (err) 79 + return dev_err_probe(dev, err, "Cannot get the PM reset\n"); 85 80 86 - rockchip->mgmt_sticky_rst = devm_reset_control_get_exclusive(dev, 87 - "mgmt-sticky"); 88 - if (IS_ERR(rockchip->mgmt_sticky_rst)) { 89 - if (PTR_ERR(rockchip->mgmt_sticky_rst) != -EPROBE_DEFER) 90 - dev_err(dev, "missing mgmt-sticky reset property in node\n"); 91 - return PTR_ERR(rockchip->mgmt_sticky_rst); 92 - } 81 + for (i = 0; i < ROCKCHIP_NUM_CORE_RSTS; i++) 82 + rockchip->core_rsts[i].id = rockchip_pci_core_rsts[i]; 93 83 94 - rockchip->pipe_rst = devm_reset_control_get_exclusive(dev, "pipe"); 95 - if (IS_ERR(rockchip->pipe_rst)) { 96 - if (PTR_ERR(rockchip->pipe_rst) != -EPROBE_DEFER) 97 - dev_err(dev, "missing pipe reset property in node\n"); 98 - return PTR_ERR(rockchip->pipe_rst); 99 - } 100 - 101 - rockchip->pm_rst = devm_reset_control_get_exclusive(dev, "pm"); 
102 - if (IS_ERR(rockchip->pm_rst)) { 103 - if (PTR_ERR(rockchip->pm_rst) != -EPROBE_DEFER) 104 - dev_err(dev, "missing pm reset property in node\n"); 105 - return PTR_ERR(rockchip->pm_rst); 106 - } 107 - 108 - rockchip->pclk_rst = devm_reset_control_get_exclusive(dev, "pclk"); 109 - if (IS_ERR(rockchip->pclk_rst)) { 110 - if (PTR_ERR(rockchip->pclk_rst) != -EPROBE_DEFER) 111 - dev_err(dev, "missing pclk reset property in node\n"); 112 - return PTR_ERR(rockchip->pclk_rst); 113 - } 114 - 115 - rockchip->aclk_rst = devm_reset_control_get_exclusive(dev, "aclk"); 116 - if (IS_ERR(rockchip->aclk_rst)) { 117 - if (PTR_ERR(rockchip->aclk_rst) != -EPROBE_DEFER) 118 - dev_err(dev, "missing aclk reset property in node\n"); 119 - return PTR_ERR(rockchip->aclk_rst); 120 - } 84 + err = devm_reset_control_bulk_get_exclusive(dev, 85 + ROCKCHIP_NUM_CORE_RSTS, 86 + rockchip->core_rsts); 87 + if (err) 88 + return dev_err_probe(dev, err, "Cannot get the Core resets\n"); 121 89 122 90 if (rockchip->is_rc) 123 91 rockchip->perst_gpio = devm_gpiod_get_optional(dev, "ep", ··· 97 129 return dev_err_probe(dev, PTR_ERR(rockchip->perst_gpio), 98 130 "failed to get PERST# GPIO\n"); 99 131 100 - rockchip->aclk_pcie = devm_clk_get(dev, "aclk"); 101 - if (IS_ERR(rockchip->aclk_pcie)) { 102 - dev_err(dev, "aclk clock not found\n"); 103 - return PTR_ERR(rockchip->aclk_pcie); 104 - } 105 - 106 - rockchip->aclk_perf_pcie = devm_clk_get(dev, "aclk-perf"); 107 - if (IS_ERR(rockchip->aclk_perf_pcie)) { 108 - dev_err(dev, "aclk_perf clock not found\n"); 109 - return PTR_ERR(rockchip->aclk_perf_pcie); 110 - } 111 - 112 - rockchip->hclk_pcie = devm_clk_get(dev, "hclk"); 113 - if (IS_ERR(rockchip->hclk_pcie)) { 114 - dev_err(dev, "hclk clock not found\n"); 115 - return PTR_ERR(rockchip->hclk_pcie); 116 - } 117 - 118 - rockchip->clk_pcie_pm = devm_clk_get(dev, "pm"); 119 - if (IS_ERR(rockchip->clk_pcie_pm)) { 120 - dev_err(dev, "pm clock not found\n"); 121 - return PTR_ERR(rockchip->clk_pcie_pm); 122 - } 
132 + rockchip->num_clks = devm_clk_bulk_get_all(dev, &rockchip->clks); 133 + if (rockchip->num_clks < 0) 134 + return dev_err_probe(dev, rockchip->num_clks, 135 + "failed to get clocks\n"); 123 136 124 137 return 0; 125 138 } ··· 118 169 int err, i; 119 170 u32 regs; 120 171 121 - err = reset_control_assert(rockchip->aclk_rst); 122 - if (err) { 123 - dev_err(dev, "assert aclk_rst err %d\n", err); 124 - return err; 125 - } 126 - 127 - err = reset_control_assert(rockchip->pclk_rst); 128 - if (err) { 129 - dev_err(dev, "assert pclk_rst err %d\n", err); 130 - return err; 131 - } 132 - 133 - err = reset_control_assert(rockchip->pm_rst); 134 - if (err) { 135 - dev_err(dev, "assert pm_rst err %d\n", err); 136 - return err; 137 - } 172 + err = reset_control_bulk_assert(ROCKCHIP_NUM_PM_RSTS, 173 + rockchip->pm_rsts); 174 + if (err) 175 + return dev_err_probe(dev, err, "Couldn't assert PM resets\n"); 138 176 139 177 for (i = 0; i < MAX_LANE_NUM; i++) { 140 178 err = phy_init(rockchip->phys[i]); ··· 131 195 } 132 196 } 133 197 134 - err = reset_control_assert(rockchip->core_rst); 198 + err = reset_control_bulk_assert(ROCKCHIP_NUM_CORE_RSTS, 199 + rockchip->core_rsts); 135 200 if (err) { 136 - dev_err(dev, "assert core_rst err %d\n", err); 137 - goto err_exit_phy; 138 - } 139 - 140 - err = reset_control_assert(rockchip->mgmt_rst); 141 - if (err) { 142 - dev_err(dev, "assert mgmt_rst err %d\n", err); 143 - goto err_exit_phy; 144 - } 145 - 146 - err = reset_control_assert(rockchip->mgmt_sticky_rst); 147 - if (err) { 148 - dev_err(dev, "assert mgmt_sticky_rst err %d\n", err); 149 - goto err_exit_phy; 150 - } 151 - 152 - err = reset_control_assert(rockchip->pipe_rst); 153 - if (err) { 154 - dev_err(dev, "assert pipe_rst err %d\n", err); 201 + dev_err_probe(dev, err, "Couldn't assert Core resets\n"); 155 202 goto err_exit_phy; 156 203 } 157 204 158 205 udelay(10); 159 206 160 - err = reset_control_deassert(rockchip->pm_rst); 207 + err = 
reset_control_bulk_deassert(ROCKCHIP_NUM_PM_RSTS, 208 + rockchip->pm_rsts); 161 209 if (err) { 162 - dev_err(dev, "deassert pm_rst err %d\n", err); 163 - goto err_exit_phy; 164 - } 165 - 166 - err = reset_control_deassert(rockchip->aclk_rst); 167 - if (err) { 168 - dev_err(dev, "deassert aclk_rst err %d\n", err); 169 - goto err_exit_phy; 170 - } 171 - 172 - err = reset_control_deassert(rockchip->pclk_rst); 173 - if (err) { 174 - dev_err(dev, "deassert pclk_rst err %d\n", err); 210 + dev_err(dev, "Couldn't deassert PM resets %d\n", err); 175 211 goto err_exit_phy; 176 212 } 177 213 ··· 183 275 goto err_power_off_phy; 184 276 } 185 277 186 - /* 187 - * Please don't reorder the deassert sequence of the following 188 - * four reset pins. 189 - */ 190 - err = reset_control_deassert(rockchip->mgmt_sticky_rst); 278 + err = reset_control_bulk_deassert(ROCKCHIP_NUM_CORE_RSTS, 279 + rockchip->core_rsts); 191 280 if (err) { 192 - dev_err(dev, "deassert mgmt_sticky_rst err %d\n", err); 193 - goto err_power_off_phy; 194 - } 195 - 196 - err = reset_control_deassert(rockchip->core_rst); 197 - if (err) { 198 - dev_err(dev, "deassert core_rst err %d\n", err); 199 - goto err_power_off_phy; 200 - } 201 - 202 - err = reset_control_deassert(rockchip->mgmt_rst); 203 - if (err) { 204 - dev_err(dev, "deassert mgmt_rst err %d\n", err); 205 - goto err_power_off_phy; 206 - } 207 - 208 - err = reset_control_deassert(rockchip->pipe_rst); 209 - if (err) { 210 - dev_err(dev, "deassert pipe_rst err %d\n", err); 281 + dev_err(dev, "Couldn't deassert Core reset %d\n", err); 211 282 goto err_power_off_phy; 212 283 } 213 284 ··· 262 375 struct device *dev = rockchip->dev; 263 376 int err; 264 377 265 - err = clk_prepare_enable(rockchip->aclk_pcie); 266 - if (err) { 267 - dev_err(dev, "unable to enable aclk_pcie clock\n"); 268 - return err; 269 - } 270 - 271 - err = clk_prepare_enable(rockchip->aclk_perf_pcie); 272 - if (err) { 273 - dev_err(dev, "unable to enable aclk_perf_pcie clock\n"); 274 - goto 
err_aclk_perf_pcie; 275 - } 276 - 277 - err = clk_prepare_enable(rockchip->hclk_pcie); 278 - if (err) { 279 - dev_err(dev, "unable to enable hclk_pcie clock\n"); 280 - goto err_hclk_pcie; 281 - } 282 - 283 - err = clk_prepare_enable(rockchip->clk_pcie_pm); 284 - if (err) { 285 - dev_err(dev, "unable to enable clk_pcie_pm clock\n"); 286 - goto err_clk_pcie_pm; 287 - } 378 + err = clk_bulk_prepare_enable(rockchip->num_clks, rockchip->clks); 379 + if (err) 380 + return dev_err_probe(dev, err, "failed to enable clocks\n"); 288 381 289 382 return 0; 290 - 291 - err_clk_pcie_pm: 292 - clk_disable_unprepare(rockchip->hclk_pcie); 293 - err_hclk_pcie: 294 - clk_disable_unprepare(rockchip->aclk_perf_pcie); 295 - err_aclk_perf_pcie: 296 - clk_disable_unprepare(rockchip->aclk_pcie); 297 - return err; 298 383 } 299 384 EXPORT_SYMBOL_GPL(rockchip_pcie_enable_clocks); 300 385 301 - void rockchip_pcie_disable_clocks(void *data) 386 + void rockchip_pcie_disable_clocks(struct rockchip_pcie *rockchip) 302 387 { 303 - struct rockchip_pcie *rockchip = data; 304 388 305 - clk_disable_unprepare(rockchip->clk_pcie_pm); 306 - clk_disable_unprepare(rockchip->hclk_pcie); 307 - clk_disable_unprepare(rockchip->aclk_perf_pcie); 308 - clk_disable_unprepare(rockchip->aclk_pcie); 389 + clk_bulk_disable_unprepare(rockchip->num_clks, rockchip->clks); 309 390 } 310 391 EXPORT_SYMBOL_GPL(rockchip_pcie_disable_clocks); 311 392
+23 -12
drivers/pci/controller/pcie-rockchip.h
··· 11 11 #ifndef _PCIE_ROCKCHIP_H 12 12 #define _PCIE_ROCKCHIP_H 13 13 14 + #include <linux/clk.h> 14 15 #include <linux/kernel.h> 15 16 #include <linux/pci.h> 16 17 #include <linux/pci-ecam.h> 18 + #include <linux/reset.h> 17 19 18 20 /* 19 21 * The upper 16 bits of PCIE_CLIENT_CONFIG are a write mask for the lower 16 ··· 311 309 (((c) << ((b) * 8 + 5)) & \ 312 310 ROCKCHIP_PCIE_CORE_EP_FUNC_BAR_CFG_BAR_CTRL_MASK(b)) 313 311 312 + #define ROCKCHIP_NUM_PM_RSTS ARRAY_SIZE(rockchip_pci_pm_rsts) 313 + #define ROCKCHIP_NUM_CORE_RSTS ARRAY_SIZE(rockchip_pci_core_rsts) 314 + 315 + static const char * const rockchip_pci_pm_rsts[] = { 316 + "pm", 317 + "pclk", 318 + "aclk", 319 + }; 320 + 321 + static const char * const rockchip_pci_core_rsts[] = { 322 + "mgmt-sticky", 323 + "core", 324 + "mgmt", 325 + "pipe", 326 + }; 327 + 314 328 struct rockchip_pcie { 315 329 void __iomem *reg_base; /* DT axi-base */ 316 330 void __iomem *apb_base; /* DT apb-base */ 317 331 bool legacy_phy; 318 332 struct phy *phys[MAX_LANE_NUM]; 319 - struct reset_control *core_rst; 320 - struct reset_control *mgmt_rst; 321 - struct reset_control *mgmt_sticky_rst; 322 - struct reset_control *pipe_rst; 323 - struct reset_control *pm_rst; 324 - struct reset_control *aclk_rst; 325 - struct reset_control *pclk_rst; 326 - struct clk *aclk_pcie; 327 - struct clk *aclk_perf_pcie; 328 - struct clk *hclk_pcie; 329 - struct clk *clk_pcie_pm; 333 + struct reset_control_bulk_data pm_rsts[ROCKCHIP_NUM_PM_RSTS]; 334 + struct reset_control_bulk_data core_rsts[ROCKCHIP_NUM_CORE_RSTS]; 335 + struct clk_bulk_data *clks; 336 + int num_clks; 330 337 struct regulator *vpcie12v; /* 12V power supply */ 331 338 struct regulator *vpcie3v3; /* 3.3V power supply */ 332 339 struct regulator *vpcie1v8; /* 1.8V power supply */ ··· 369 358 int rockchip_pcie_get_phys(struct rockchip_pcie *rockchip); 370 359 void rockchip_pcie_deinit_phys(struct rockchip_pcie *rockchip); 371 360 int rockchip_pcie_enable_clocks(struct rockchip_pcie 
*rockchip); 372 - void rockchip_pcie_disable_clocks(void *data); 361 + void rockchip_pcie_disable_clocks(struct rockchip_pcie *rockchip); 373 362 void rockchip_pcie_cfg_configuration_accesses( 374 363 struct rockchip_pcie *rockchip, u32 type); 375 364
+39 -11
drivers/pci/controller/pcie-xilinx-cpm.c
··· 30 30 #define XILINX_CPM_PCIE_REG_IDRN_MASK 0x00000E3C 31 31 #define XILINX_CPM_PCIE_MISC_IR_STATUS 0x00000340 32 32 #define XILINX_CPM_PCIE_MISC_IR_ENABLE 0x00000348 33 - #define XILINX_CPM_PCIE_MISC_IR_LOCAL BIT(1) 33 + #define XILINX_CPM_PCIE0_MISC_IR_LOCAL BIT(1) 34 + #define XILINX_CPM_PCIE1_MISC_IR_LOCAL BIT(2) 34 35 35 - #define XILINX_CPM_PCIE_IR_STATUS 0x000002A0 36 - #define XILINX_CPM_PCIE_IR_ENABLE 0x000002A8 37 - #define XILINX_CPM_PCIE_IR_LOCAL BIT(0) 36 + #define XILINX_CPM_PCIE0_IR_STATUS 0x000002A0 37 + #define XILINX_CPM_PCIE1_IR_STATUS 0x000002B4 38 + #define XILINX_CPM_PCIE0_IR_ENABLE 0x000002A8 39 + #define XILINX_CPM_PCIE1_IR_ENABLE 0x000002BC 40 + #define XILINX_CPM_PCIE_IR_LOCAL BIT(0) 38 41 39 42 #define IMR(x) BIT(XILINX_PCIE_INTR_ ##x) 40 43 ··· 83 80 enum xilinx_cpm_version { 84 81 CPM, 85 82 CPM5, 83 + CPM5_HOST1, 86 84 }; 87 85 88 86 /** 89 87 * struct xilinx_cpm_variant - CPM variant information 90 88 * @version: CPM version 89 + * @ir_status: Offset for the error interrupt status register 90 + * @ir_enable: Offset for the CPM5 local error interrupt enable register 91 + * @ir_misc_value: A bitmask for the miscellaneous interrupt status 91 92 */ 92 93 struct xilinx_cpm_variant { 93 94 enum xilinx_cpm_version version; 95 + u32 ir_status; 96 + u32 ir_enable; 97 + u32 ir_misc_value; 94 98 }; 95 99 96 100 /** ··· 279 269 { 280 270 struct xilinx_cpm_pcie *port = irq_desc_get_handler_data(desc); 281 271 struct irq_chip *chip = irq_desc_get_chip(desc); 272 + const struct xilinx_cpm_variant *variant = port->variant; 282 273 unsigned long val; 283 274 int i; 284 275 ··· 290 279 generic_handle_domain_irq(port->cpm_domain, i); 291 280 pcie_write(port, val, XILINX_CPM_PCIE_REG_IDR); 292 281 293 - if (port->variant->version == CPM5) { 294 - val = readl_relaxed(port->cpm_base + XILINX_CPM_PCIE_IR_STATUS); 282 + if (variant->ir_status) { 283 + val = readl_relaxed(port->cpm_base + variant->ir_status); 295 284 if (val) 296 285 writel_relaxed(val, 
port->cpm_base + 297 - XILINX_CPM_PCIE_IR_STATUS); 286 + variant->ir_status); 298 287 } 299 288 300 289 /* ··· 476 465 */ 477 466 static void xilinx_cpm_pcie_init_port(struct xilinx_cpm_pcie *port) 478 467 { 468 + const struct xilinx_cpm_variant *variant = port->variant; 469 + 479 470 if (cpm_pcie_link_up(port)) 480 471 dev_info(port->dev, "PCIe Link is UP\n"); 481 472 else ··· 496 483 * XILINX_CPM_PCIE_MISC_IR_ENABLE register is mapped to 497 484 * CPM SLCR block. 498 485 */ 499 - writel(XILINX_CPM_PCIE_MISC_IR_LOCAL, 486 + writel(variant->ir_misc_value, 500 487 port->cpm_base + XILINX_CPM_PCIE_MISC_IR_ENABLE); 501 488 502 - if (port->variant->version == CPM5) { 489 + if (variant->ir_enable) { 503 490 writel(XILINX_CPM_PCIE_IR_LOCAL, 504 - port->cpm_base + XILINX_CPM_PCIE_IR_ENABLE); 491 + port->cpm_base + variant->ir_enable); 505 492 } 506 493 507 - /* Enable the Bridge enable bit */ 494 + /* Set Bridge enable bit */ 508 495 pcie_write(port, pcie_read(port, XILINX_CPM_PCIE_REG_RPSC) | 509 496 XILINX_CPM_PCIE_REG_RPSC_BEN, 510 497 XILINX_CPM_PCIE_REG_RPSC); ··· 622 609 623 610 static const struct xilinx_cpm_variant cpm_host = { 624 611 .version = CPM, 612 + .ir_misc_value = XILINX_CPM_PCIE0_MISC_IR_LOCAL, 625 613 }; 626 614 627 615 static const struct xilinx_cpm_variant cpm5_host = { 628 616 .version = CPM5, 617 + .ir_misc_value = XILINX_CPM_PCIE0_MISC_IR_LOCAL, 618 + .ir_status = XILINX_CPM_PCIE0_IR_STATUS, 619 + .ir_enable = XILINX_CPM_PCIE0_IR_ENABLE, 620 + }; 621 + 622 + static const struct xilinx_cpm_variant cpm5_host1 = { 623 + .version = CPM5_HOST1, 624 + .ir_misc_value = XILINX_CPM_PCIE1_MISC_IR_LOCAL, 625 + .ir_status = XILINX_CPM_PCIE1_IR_STATUS, 626 + .ir_enable = XILINX_CPM_PCIE1_IR_ENABLE, 629 627 }; 630 628 631 629 static const struct of_device_id xilinx_cpm_pcie_of_match[] = { ··· 647 623 { 648 624 .compatible = "xlnx,versal-cpm5-host", 649 625 .data = &cpm5_host, 626 + }, 627 + { 628 + .compatible = "xlnx,versal-cpm5-host1", 629 + .data = 
&cpm5_host1, 650 630 }, 651 631 {} 652 632 };
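As context for the xilinx-cpm hunks above: the patch replaces `version == CPM5` checks with per-variant register offsets, so the shared IRQ path is driven by data rather than version branches. Below is a minimal userspace sketch of that table-driven pattern; the struct and function names are illustrative stand-ins (not the driver's), and MMIO is simulated with a plain array.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins (not the driver's types): a per-variant table
 * where a zero offset means "this variant has no such register", which
 * is how the patch gates the CPM5-only error-IR handling on
 * variant->ir_status instead of checking a version enum. */
struct cpm_variant {
	uint32_t ir_status;	/* error IR status offset, 0 if absent */
	uint32_t ir_enable;	/* error IR enable offset, 0 if absent */
};

/* Simulated register file standing in for the MMIO region. */
static uint32_t regs[0x400 / 4];

static uint32_t reg_read(uint32_t off) { return regs[off / 4]; }
static void reg_write(uint32_t off, uint32_t v) { regs[off / 4] = v; }

/* Drain any pending error status; returns what was pending.  The real
 * write-back is write-1-to-clear in hardware; the sketch just stores. */
static uint32_t handle_errors(const struct cpm_variant *v)
{
	uint32_t val = 0;

	if (v->ir_status) {
		val = reg_read(v->ir_status);
		if (val)
			reg_write(v->ir_status, val);
	}
	return val;
}

static const struct cpm_variant cpm_host  = { 0, 0 };
static const struct cpm_variant cpm5_host = { 0x2A0, 0x2A8 };
```

Adding a new variant (such as the cpm5_host1 instance above) then means adding a table entry, not another branch in the handler.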
+96
drivers/pci/controller/plda/pcie-microchip-host.c
··· 7 7 * Author: Daire McNamara <daire.mcnamara@microchip.com> 8 8 */ 9 9 10 + #include <linux/align.h> 11 + #include <linux/bits.h> 10 12 #include <linux/bitfield.h> 11 13 #include <linux/clk.h> 12 14 #include <linux/irqchip/chained_irq.h> 13 15 #include <linux/irqdomain.h> 16 + #include <linux/log2.h> 14 17 #include <linux/module.h> 15 18 #include <linux/msi.h> 16 19 #include <linux/of_address.h> 17 20 #include <linux/of_pci.h> 18 21 #include <linux/pci-ecam.h> 19 22 #include <linux/platform_device.h> 23 + #include <linux/wordpart.h> 20 24 21 25 #include "../../pci.h" 22 26 #include "pcie-plda.h" 27 + 28 + #define MC_MAX_NUM_INBOUND_WINDOWS 8 29 + #define MPFS_NC_BOUNCE_ADDR 0x80000000 23 30 24 31 /* PCIe Bridge Phy and Controller Phy offsets */ 25 32 #define MC_PCIE1_BRIDGE_ADDR 0x00008000u ··· 614 607 writel_relaxed(GENMASK(31, 0), port->bridge_base_addr + ISTATUS_HOST); 615 608 } 616 609 610 + static void mc_pcie_setup_inbound_atr(struct mc_pcie *port, int window_index, 611 + u64 axi_addr, u64 pcie_addr, u64 size) 612 + { 613 + u32 table_offset = window_index * ATR_ENTRY_SIZE; 614 + void __iomem *table_addr = port->bridge_base_addr + table_offset; 615 + u32 atr_sz; 616 + u32 val; 617 + 618 + atr_sz = ilog2(size) - 1; 619 + 620 + val = ALIGN_DOWN(lower_32_bits(pcie_addr), SZ_4K); 621 + val |= FIELD_PREP(ATR_SIZE_MASK, atr_sz); 622 + val |= ATR_IMPL_ENABLE; 623 + 624 + writel(val, table_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM); 625 + 626 + writel(upper_32_bits(pcie_addr), table_addr + ATR0_PCIE_WIN0_SRC_ADDR); 627 + 628 + writel(lower_32_bits(axi_addr), table_addr + ATR0_PCIE_WIN0_TRSL_ADDR_LSB); 629 + writel(upper_32_bits(axi_addr), table_addr + ATR0_PCIE_WIN0_TRSL_ADDR_UDW); 630 + 631 + writel(TRSL_ID_AXI4_MASTER_0, table_addr + ATR0_PCIE_WIN0_TRSL_PARAM); 632 + } 633 + 634 + static int mc_pcie_setup_inbound_ranges(struct platform_device *pdev, 635 + struct mc_pcie *port) 636 + { 637 + struct device *dev = &pdev->dev; 638 + struct device_node *dn = dev->of_node; 
639 + struct of_range_parser parser; 640 + struct of_range range; 641 + int atr_index = 0; 642 + 643 + /* 644 + * MPFS PCIe Root Port is 32-bit only, behind a Fabric Interface 645 + * Controller FPGA logic block which contains the AXI-S interface. 646 + * 647 + * From the point of view of the PCIe Root Port, there are only two 648 + * supported Root Port configurations: 649 + * 650 + * Configuration 1: for use with fully coherent designs; supports a 651 + * window from 0x0 (CPU space) to specified PCIe space. 652 + * 653 + * Configuration 2: for use with non-coherent designs; supports two 654 + * 1 GB windows to CPU space; one mapping CPU space 0 to PCIe space 655 + * 0x80000000 and a second mapping CPU space 0x40000000 to PCIe 656 + * space 0xc0000000. This cfg needs two windows because of how the 657 + * MSI space is allocated in the AXI-S range on MPFS. 658 + * 659 + * The FIC interface outside the PCIe block *must* complete the 660 + * inbound address translation as per MCHP MPFS FPGA design 661 + * guidelines. 662 + */ 663 + if (device_property_read_bool(dev, "dma-noncoherent")) { 664 + /* 665 + * Always need same two tables in this case. Need two tables 666 + * due to hardware interactions between address and size. 
667 + */ 668 + mc_pcie_setup_inbound_atr(port, 0, 0, 669 + MPFS_NC_BOUNCE_ADDR, SZ_1G); 670 + mc_pcie_setup_inbound_atr(port, 1, SZ_1G, 671 + MPFS_NC_BOUNCE_ADDR + SZ_1G, SZ_1G); 672 + } else { 673 + /* Find any DMA ranges */ 674 + if (of_pci_dma_range_parser_init(&parser, dn)) { 675 + /* No DMA range property - setup default */ 676 + mc_pcie_setup_inbound_atr(port, 0, 0, 0, SZ_4G); 677 + return 0; 678 + } 679 + 680 + for_each_of_range(&parser, &range) { 681 + if (atr_index >= MC_MAX_NUM_INBOUND_WINDOWS) { 682 + dev_err(dev, "too many inbound ranges; %d available tables\n", 683 + MC_MAX_NUM_INBOUND_WINDOWS); 684 + return -EINVAL; 685 + } 686 + mc_pcie_setup_inbound_atr(port, atr_index, 0, 687 + range.pci_addr, range.size); 688 + atr_index++; 689 + } 690 + } 691 + 692 + return 0; 693 + } 694 + 617 695 static int mc_platform_init(struct pci_config_window *cfg) 618 696 { 619 697 struct device *dev = cfg->parent; ··· 716 624 717 625 /* Configure non-config space outbound ranges */ 718 626 ret = plda_pcie_setup_iomems(bridge, &port->plda); 627 + if (ret) 628 + return ret; 629 + 630 + ret = mc_pcie_setup_inbound_ranges(pdev, port); 719 631 if (ret) 720 632 return ret; 721 633
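A worked example of the `ATR0_PCIE_WIN0_SRCADDR_PARAM` encoding used by `mc_pcie_setup_inbound_atr()` above: the window size goes into bits 6:1 as `ilog2(size) - 1`, the enable bit is bit 0, and the low address bits are aligned down to 4 KiB. The kernel helpers (`GENMASK`, `FIELD_PREP`, `ALIGN_DOWN`) are re-implemented here for userspace; this is a sketch of the bit layout only, not the driver.

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel helpers the patch uses. */
#define GENMASK(h, l)		(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
#define FIELD_PREP(m, v)	(((uint64_t)(v) << __builtin_ctzll(m)) & (m))
#define ALIGN_DOWN(x, a)	((x) & ~((uint64_t)(a) - 1))

#define ATR_SIZE_MASK	GENMASK(6, 1)
#define ATR_IMPL_ENABLE	(1ULL << 0)
#define SZ_4K		0x1000ULL
#define SZ_1G		0x40000000ULL

static unsigned int ilog2_u64(uint64_t v)
{
	return 63 - __builtin_clzll(v);
}

/* Encode the low word written to ATR0_PCIE_WIN0_SRCADDR_PARAM:
 * 4K-aligned low address bits | encoded window size | enable bit. */
static uint32_t atr_srcaddr_param(uint64_t pcie_addr, uint64_t size)
{
	uint32_t atr_sz = ilog2_u64(size) - 1;
	uint64_t val;

	val = ALIGN_DOWN(pcie_addr & 0xFFFFFFFFULL, SZ_4K);
	val |= FIELD_PREP(ATR_SIZE_MASK, atr_sz);
	val |= ATR_IMPL_ENABLE;
	return (uint32_t)val;
}
```

For the non-coherent case above, a 1 GiB window at PCIe address 0x80000000 encodes as 0x80000000 | (29 << 1) | 1, i.e. 0x8000003B.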
+14 -3
drivers/pci/controller/plda/pcie-plda-host.c
··· 8 8 * Author: Daire McNamara <daire.mcnamara@microchip.com> 9 9 */ 10 10 11 + #include <linux/align.h> 12 + #include <linux/bitfield.h> 11 13 #include <linux/irqchip/chained_irq.h> 12 14 #include <linux/irqdomain.h> 13 15 #include <linux/msi.h> 14 16 #include <linux/pci_regs.h> 15 17 #include <linux/pci-ecam.h> 18 + #include <linux/wordpart.h> 16 19 17 20 #include "pcie-plda.h" 18 21 ··· 505 502 writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) + 506 503 ATR0_AXI4_SLV0_TRSL_PARAM); 507 504 508 - val = lower_32_bits(axi_addr) | (atr_sz << ATR_SIZE_SHIFT) | 509 - ATR_IMPL_ENABLE; 505 + val = ALIGN_DOWN(lower_32_bits(axi_addr), SZ_4K); 506 + val |= FIELD_PREP(ATR_SIZE_MASK, atr_sz); 507 + val |= ATR_IMPL_ENABLE; 510 508 writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) + 511 509 ATR0_AXI4_SLV0_SRCADDR_PARAM); 512 510 ··· 522 518 val = upper_32_bits(pci_addr); 523 519 writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) + 524 520 ATR0_AXI4_SLV0_TRSL_ADDR_UDW); 521 + } 522 + EXPORT_SYMBOL_GPL(plda_pcie_setup_window); 523 + 524 + void plda_pcie_setup_inbound_address_translation(struct plda_pcie_rp *port) 525 + { 526 + void __iomem *bridge_base_addr = port->bridge_addr; 527 + u32 val; 525 528 526 529 val = readl(bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM); 527 530 val |= (ATR0_PCIE_ATR_SIZE << ATR0_PCIE_ATR_SIZE_SHIFT); 528 531 writel(val, bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM); 529 532 writel(0, bridge_base_addr + ATR0_PCIE_WIN0_SRC_ADDR); 530 533 } 531 - EXPORT_SYMBOL_GPL(plda_pcie_setup_window); 534 + EXPORT_SYMBOL_GPL(plda_pcie_setup_inbound_address_translation); 532 535 533 536 int plda_pcie_setup_iomems(struct pci_host_bridge *bridge, 534 537 struct plda_pcie_rp *port)
+4 -2
drivers/pci/controller/plda/pcie-plda.h
··· 89 89 90 90 /* PCIe AXI slave table init defines */ 91 91 #define ATR0_AXI4_SLV0_SRCADDR_PARAM 0x800u 92 - #define ATR_SIZE_SHIFT 1 93 - #define ATR_IMPL_ENABLE 1 92 + #define ATR_SIZE_MASK GENMASK(6, 1) 93 + #define ATR_IMPL_ENABLE BIT(0) 94 94 #define ATR0_AXI4_SLV0_SRC_ADDR 0x804u 95 95 #define ATR0_AXI4_SLV0_TRSL_ADDR_LSB 0x808u 96 96 #define ATR0_AXI4_SLV0_TRSL_ADDR_UDW 0x80cu 97 97 #define ATR0_AXI4_SLV0_TRSL_PARAM 0x810u 98 98 #define PCIE_TX_RX_INTERFACE 0x00000000u 99 99 #define PCIE_CONFIG_INTERFACE 0x00000001u 100 + #define TRSL_ID_AXI4_MASTER_0 0x00000004u 100 101 101 102 #define CONFIG_SPACE_ADDR_OFFSET 0x1000u 102 103 ··· 205 204 void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index, 206 205 phys_addr_t axi_addr, phys_addr_t pci_addr, 207 206 size_t size); 207 + void plda_pcie_setup_inbound_address_translation(struct plda_pcie_rp *port); 208 208 int plda_pcie_setup_iomems(struct pci_host_bridge *bridge, 209 209 struct plda_pcie_rp *port); 210 210 int plda_pcie_host_init(struct plda_pcie_rp *port, struct pci_ops *ops,
+11 -29
drivers/pci/devres.c
··· 101 101 * @bar: BAR the range is within 102 102 * @offset: offset from the BAR's start address 103 103 * @maxlen: length in bytes, beginning at @offset 104 - * @name: name associated with the request 104 + * @name: name of the driver requesting the resource 105 105 * @req_flags: flags for the request, e.g., for kernel-exclusive requests 106 106 * 107 107 * Returns: 0 on success, a negative error code on failure. ··· 411 411 return mask & BIT(bar); 412 412 } 413 413 414 - /* 415 - * This is a copy of pci_intx() used to bypass the problem of recursive 416 - * function calls due to the hybrid nature of pci_intx(). 417 - */ 418 - static void __pcim_intx(struct pci_dev *pdev, int enable) 419 - { 420 - u16 pci_command, new; 421 - 422 - pci_read_config_word(pdev, PCI_COMMAND, &pci_command); 423 - 424 - if (enable) 425 - new = pci_command & ~PCI_COMMAND_INTX_DISABLE; 426 - else 427 - new = pci_command | PCI_COMMAND_INTX_DISABLE; 428 - 429 - if (new != pci_command) 430 - pci_write_config_word(pdev, PCI_COMMAND, new); 431 - } 432 - 433 414 static void pcim_intx_restore(struct device *dev, void *data) 434 415 { 435 416 struct pci_dev *pdev = to_pci_dev(dev); 436 417 struct pcim_intx_devres *res = data; 437 418 438 - __pcim_intx(pdev, res->orig_intx); 419 + pci_intx(pdev, res->orig_intx); 439 420 } 440 421 441 422 static struct pcim_intx_devres *get_or_create_intx_devres(struct device *dev) ··· 453 472 return -ENOMEM; 454 473 455 474 res->orig_intx = !enable; 456 - __pcim_intx(pdev, enable); 475 + pci_intx(pdev, enable); 457 476 458 477 return 0; 459 478 } 479 + EXPORT_SYMBOL_GPL(pcim_intx); 460 480 461 481 static void pcim_disable_device(void *pdev_raw) 462 482 { ··· 705 723 * pcim_iomap_region - Request and iomap a PCI BAR 706 724 * @pdev: PCI device to map IO resources for 707 725 * @bar: Index of a BAR to map 708 - * @name: Name associated with the request 726 + * @name: Name of the driver requesting the resource 709 727 * 710 728 * Returns: __iomem pointer on success, 
an IOMEM_ERR_PTR on failure. 711 729 * ··· 772 790 * pcim_iomap_regions - Request and iomap PCI BARs (DEPRECATED) 773 791 * @pdev: PCI device to map IO resources for 774 792 * @mask: Mask of BARs to request and iomap 775 - * @name: Name associated with the requests 793 + * @name: Name of the driver requesting the resources 776 794 * 777 795 * Returns: 0 on success, negative error code on failure. 778 796 * ··· 837 855 838 856 /** 839 857 * pcim_request_region - Request a PCI BAR 840 - * @pdev: PCI device to requestion region for 858 + * @pdev: PCI device to request region for 841 859 * @bar: Index of BAR to request 842 - * @name: Name associated with the request 860 + * @name: Name of the driver requesting the resource 843 861 * 844 862 * Returns: 0 on success, a negative error code on failure. 845 863 * ··· 856 874 857 875 /** 858 876 * pcim_request_region_exclusive - Request a PCI BAR exclusively 859 - * @pdev: PCI device to requestion region for 877 + * @pdev: PCI device to request region for 860 878 * @bar: Index of BAR to request 861 - * @name: Name associated with the request 879 + * @name: Name of the driver requesting the resource 862 880 * 863 881 * Returns: 0 on success, a negative error code on failure. 864 882 * ··· 914 932 /** 915 933 * pcim_request_all_regions - Request all regions 916 934 * @pdev: PCI device to map IO resources for 917 - * @name: name associated with the request 935 + * @name: name of the driver requesting the resources 918 936 * 919 937 * Returns: 0 on success, negative error code on failure. 920 938 *
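The devres hunk above drops the private `__pcim_intx()` copy: now that `pci_intx()` is always unmanaged, `pcim_intx()` can call it directly while recording the state to restore on driver detach. A userspace sketch of that save/restore devres shape follows; all names are hypothetical stand-ins, and the device is reduced to one boolean.

```c
#include <assert.h>
#include <stdbool.h>

/* Simulated device with one managed state bit (stand-in for the INTx
 * disable bit in PCI_COMMAND). */
struct fake_dev {
	bool intx_enabled;
};

/* Stand-in for struct pcim_intx_devres: remembers what to restore. */
struct intx_devres {
	bool orig_intx;
};

/* Unmanaged operation (the role pci_intx() plays after the patch). */
static void fake_intx(struct fake_dev *d, bool enable)
{
	d->intx_enabled = enable;
}

/* Managed variant: mirror of the patch's assignment, which records the
 * opposite of the requested state as the value to restore. */
static void fake_pcim_intx(struct fake_dev *d, struct intx_devres *res,
			   bool enable)
{
	res->orig_intx = !enable;
	fake_intx(d, enable);
}

/* Devres teardown callback: put the device back how we found it. */
static void fake_intx_restore(struct fake_dev *d, struct intx_devres *res)
{
	fake_intx(d, res->orig_intx);
}
```

The point of the cleanup is that the restore path no longer needs a bypass copy of the unmanaged helper: it simply calls it.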
+22 -3
drivers/pci/endpoint/functions/pci-epf-test.c
··· 44 44 45 45 #define TIMER_RESOLUTION 1 46 46 47 + #define CAP_UNALIGNED_ACCESS BIT(0) 48 + 47 49 static struct workqueue_struct *kpcitest_workqueue; 48 50 49 51 struct pci_epf_test { ··· 76 74 u32 irq_type; 77 75 u32 irq_number; 78 76 u32 flags; 77 + u32 caps; 79 78 } __packed; 80 79 81 80 static struct pci_epf_header test_header = { ··· 254 251 255 252 fail_back_rx: 256 253 dma_release_channel(epf_test->dma_chan_rx); 257 - epf_test->dma_chan_tx = NULL; 254 + epf_test->dma_chan_rx = NULL; 258 255 259 256 fail_back_tx: 260 257 dma_cap_zero(mask); ··· 331 328 void *copy_buf = NULL, *buf; 332 329 333 330 if (reg->flags & FLAG_USE_DMA) { 334 - if (epf_test->dma_private) { 335 - dev_err(dev, "Cannot transfer data using DMA\n"); 331 + if (!dma_has_cap(DMA_MEMCPY, epf_test->dma_chan_tx->device->cap_mask)) { 332 + dev_err(dev, "DMA controller doesn't support MEMCPY\n"); 336 333 ret = -EINVAL; 337 334 goto set_status; 338 335 } ··· 742 739 } 743 740 } 744 741 742 + static void pci_epf_test_set_capabilities(struct pci_epf *epf) 743 + { 744 + struct pci_epf_test *epf_test = epf_get_drvdata(epf); 745 + enum pci_barno test_reg_bar = epf_test->test_reg_bar; 746 + struct pci_epf_test_reg *reg = epf_test->reg[test_reg_bar]; 747 + struct pci_epc *epc = epf->epc; 748 + u32 caps = 0; 749 + 750 + if (epc->ops->align_addr) 751 + caps |= CAP_UNALIGNED_ACCESS; 752 + 753 + reg->caps = cpu_to_le32(caps); 754 + } 755 + 745 756 static int pci_epf_test_epc_init(struct pci_epf *epf) 746 757 { 747 758 struct pci_epf_test *epf_test = epf_get_drvdata(epf); ··· 779 762 return ret; 780 763 } 781 764 } 765 + 766 + pci_epf_test_set_capabilities(epf); 782 767 783 768 ret = pci_epf_test_set_bar(epf); 784 769 if (ret)
+19 -18
drivers/pci/endpoint/pci-epc-core.c
··· 60 60 int ret = -EINVAL; 61 61 struct pci_epc *epc; 62 62 struct device *dev; 63 - struct class_dev_iter iter; 64 63 65 - class_dev_iter_init(&iter, &pci_epc_class, NULL, NULL); 66 - while ((dev = class_dev_iter_next(&iter))) { 67 - if (strcmp(epc_name, dev_name(dev))) 68 - continue; 64 + dev = class_find_device_by_name(&pci_epc_class, epc_name); 65 + if (!dev) 66 + goto err; 69 67 70 - epc = to_pci_epc(dev); 71 - if (!try_module_get(epc->ops->owner)) { 72 - ret = -EINVAL; 73 - goto err; 74 - } 75 - 76 - class_dev_iter_exit(&iter); 77 - get_device(&epc->dev); 68 + epc = to_pci_epc(dev); 69 + if (try_module_get(epc->ops->owner)) 78 70 return epc; 79 - } 80 71 81 72 err: 82 - class_dev_iter_exit(&iter); 73 + put_device(dev); 83 74 return ERR_PTR(ret); 84 75 } 85 76 EXPORT_SYMBOL_GPL(pci_epc_get); ··· 600 609 int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no, 601 610 struct pci_epf_bar *epf_bar) 602 611 { 603 - int ret; 612 + const struct pci_epc_features *epc_features; 613 + enum pci_barno bar = epf_bar->barno; 604 614 int flags = epf_bar->flags; 615 + int ret; 605 616 606 - if (!pci_epc_function_is_valid(epc, func_no, vfunc_no)) 617 + epc_features = pci_epc_get_features(epc, func_no, vfunc_no); 618 + if (!epc_features) 619 + return -EINVAL; 620 + 621 + if (epc_features->bar[bar].type == BAR_FIXED && 622 + (epc_features->bar[bar].fixed_size != epf_bar->size)) 623 + return -EINVAL; 624 + 625 + if (!is_power_of_2(epf_bar->size)) 607 626 return -EINVAL; 608 627 609 628 if ((epf_bar->barno == BAR_5 && flags & PCI_BASE_ADDRESS_MEM_TYPE_64) || ··· 943 942 { 944 943 int r; 945 944 946 - r = devres_destroy(dev, devm_pci_epc_release, devm_pci_epc_match, 945 + r = devres_release(dev, devm_pci_epc_release, devm_pci_epc_match, 947 946 epc); 948 947 dev_WARN_ONCE(dev, r, "couldn't find PCI EPC resource\n"); 949 948 }
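The `pci_epc_set_bar()` hunk above adds two up-front validations: a `BAR_FIXED` window must be requested at exactly its hardware size, and every BAR size must be a power of two (PCI BARs cannot decode other sizes). A small sketch of those checks, with illustrative types in place of the EPC feature structs:

```c
#include <assert.h>
#include <stdint.h>

enum bar_type { BAR_PROGRAMMABLE, BAR_FIXED, BAR_RESERVED };

/* Stand-in for the per-BAR entry in struct pci_epc_features. */
struct bar_desc {
	enum bar_type type;
	uint64_t fixed_size;	/* meaningful only for BAR_FIXED */
};

static int is_power_of_2(uint64_t n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Mirror of the new validation: reject a mismatched fixed size or a
 * size that no BAR could decode.  Returns 0 on success, -1 otherwise
 * (-EINVAL in the kernel). */
static int validate_bar_size(const struct bar_desc *desc, uint64_t size)
{
	if (desc->type == BAR_FIXED && desc->fixed_size != size)
		return -1;
	if (!is_power_of_2(size))
		return -1;
	return 0;
}
```

Failing early here turns a silent misprogrammed BAR into an immediate -EINVAL at setup time.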
+1
drivers/pci/endpoint/pci-epf-core.c
··· 202 202 203 203 mutex_lock(&epf_pf->lock); 204 204 clear_bit(epf_vf->vfunc_no, &epf_pf->vfunction_num_map); 205 + epf_vf->epf_pf = NULL; 205 206 list_del(&epf_vf->list); 206 207 mutex_unlock(&epf_pf->lock); 207 208 }
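The one-line epf-core fix above clears the virtual function's `epf_pf` back-pointer when it is removed, alongside clearing its slot in the parent's vfunction bitmap. A minimal sketch of that remove-and-clear-back-pointer shape (illustrative struct, no locking):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal parent/child shape of the vEPF removal path: on removal the
 * child's back-pointer is cleared along with its slot in the parent's
 * vfunction bitmap, so a stale parent pointer can't be used later. */
struct epf {
	struct epf *parent;
	unsigned long vfunc_map;	/* bit n set => vfunc number n in use */
	int vfunc_no;
};

static void remove_vepf(struct epf *pf, struct epf *vf)
{
	pf->vfunc_map &= ~(1UL << vf->vfunc_no);
	vf->parent = NULL;	/* the one-line fix in the patch */
}
```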
+3 -3
drivers/pci/hotplug/acpiphp_ibm.c
··· 84 84 static void ibm_handle_events(acpi_handle handle, u32 event, void *context); 85 85 static int ibm_get_table_from_acpi(char **bufp); 86 86 static ssize_t ibm_read_apci_table(struct file *filp, struct kobject *kobj, 87 - struct bin_attribute *bin_attr, 87 + const struct bin_attribute *bin_attr, 88 88 char *buffer, loff_t pos, size_t size); 89 89 static acpi_status __init ibm_find_acpi_device(acpi_handle handle, 90 90 u32 lvl, void *context, void **rv); ··· 98 98 .name = "apci_table", 99 99 .mode = S_IRUGO, 100 100 }, 101 - .read = ibm_read_apci_table, 101 + .read_new = ibm_read_apci_table, 102 102 .write = NULL, 103 103 }; 104 104 static struct acpiphp_attention_info ibm_attention_info = ··· 353 353 * our solution is to only allow reading the table in all at once. 354 354 */ 355 355 static ssize_t ibm_read_apci_table(struct file *filp, struct kobject *kobj, 356 - struct bin_attribute *bin_attr, 356 + const struct bin_attribute *bin_attr, 357 357 char *buffer, loff_t pos, size_t size) 358 358 { 359 359 int bytes_read = -EINVAL;
+7 -1
drivers/pci/iov.c
··· 747 747 struct resource *res; 748 748 const char *res_name; 749 749 struct pci_dev *pdev; 750 + u32 sriovbars[PCI_SRIOV_NUM_BARS]; 750 751 751 752 pci_read_config_word(dev, pos + PCI_SRIOV_CTRL, &ctrl); 752 753 if (ctrl & PCI_SRIOV_CTRL_VFE) { ··· 784 783 if (!iov) 785 784 return -ENOMEM; 786 785 786 + /* Sizing SR-IOV BARs with VF Enable cleared - no decode */ 787 + __pci_size_stdbars(dev, PCI_SRIOV_NUM_BARS, 788 + pos + PCI_SRIOV_BAR, sriovbars); 789 + 787 790 nres = 0; 788 791 for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) { 789 792 res = &dev->resource[i + PCI_IOV_RESOURCES]; ··· 801 796 bar64 = (res->flags & IORESOURCE_MEM_64) ? 1 : 0; 802 797 else 803 798 bar64 = __pci_read_base(dev, pci_bar_unknown, res, 804 - pos + PCI_SRIOV_BAR + i * 4); 799 + pos + PCI_SRIOV_BAR + i * 4, 800 + &sriovbars[i]); 805 801 if (!res->flags) 806 802 continue; 807 803 if (resource_size(res) & (PAGE_SIZE - 1)) {
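The iov.c hunk above sizes all SR-IOV BARs in one batch while VF Enable (decode) is cleared, via `__pci_size_stdbars()`, instead of toggling decode per BAR. The sizing itself is the classic write-all-ones probe; a sketch of decoding a size from the readback (the batching is the patch's contribution, the arithmetic below is long-standing PCI behaviour):

```c
#include <assert.h>
#include <stdint.h>

#define PCI_BASE_ADDRESS_MEM_MASK	(~0x0FU)	/* low 4 bits are flags */

/* Classic BAR sizing: software writes all-ones, the device returns the
 * writable address bits, and the size is the two's complement of the
 * masked readback.  The patch's point is to run this for every BAR in
 * one pass while decode is disabled once. */
static uint32_t bar_size_from_readback(uint32_t readback)
{
	uint32_t masked = readback & PCI_BASE_ADDRESS_MEM_MASK;

	if (!masked)
		return 0;		/* BAR not implemented */
	return ~masked + 1;	/* lowest writable bit == size */
}
```

A readback of 0xFFF00000 therefore means a 1 MiB BAR, and an all-zero readback means the BAR is absent.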
+10 -12
drivers/pci/of.c
··· 190 190 * 191 191 * Returns 0 on success or a negative error-code on failure. 192 192 */ 193 - int of_pci_parse_bus_range(struct device_node *node, struct resource *res) 193 + static int of_pci_parse_bus_range(struct device_node *node, 194 + struct resource *res) 194 195 { 195 196 u32 bus_range[2]; 196 197 int error; ··· 208 207 209 208 return 0; 210 209 } 211 - EXPORT_SYMBOL_GPL(of_pci_parse_bus_range); 212 210 213 211 /** 214 212 * of_get_pci_domain_nr - Find the host bridge domain number ··· 302 302 * devm_of_pci_get_host_bridge_resources() - Resource-managed parsing of PCI 303 303 * host bridge resources from DT 304 304 * @dev: host bridge device 305 - * @busno: bus number associated with the bridge root bus 306 - * @bus_max: maximum number of buses for this bridge 307 305 * @resources: list where the range of resources will be added after DT parsing 308 306 * @ib_resources: list where the range of inbound resources (with addresses 309 307 * from 'dma-ranges') will be added after DT parsing ··· 317 319 * value if it failed. 
318 320 */ 319 321 static int devm_of_pci_get_host_bridge_resources(struct device *dev, 320 - unsigned char busno, unsigned char bus_max, 321 322 struct list_head *resources, 322 323 struct list_head *ib_resources, 323 324 resource_size_t *io_base) ··· 340 343 341 344 err = of_pci_parse_bus_range(dev_node, bus_range); 342 345 if (err) { 343 - bus_range->start = busno; 344 - bus_range->end = bus_max; 346 + bus_range->start = 0; 347 + bus_range->end = 0xff; 345 348 bus_range->flags = IORESOURCE_BUS; 346 - dev_info(dev, " No bus range found for %pOF, using %pR\n", 347 - dev_node, bus_range); 348 349 } else { 349 - if (bus_range->end > bus_range->start + bus_max) 350 - bus_range->end = bus_range->start + bus_max; 350 + if (bus_range->end > 0xff) { 351 + dev_warn(dev, " Invalid end bus number in %pR, defaulting to 0xff\n", 352 + bus_range); 353 + bus_range->end = 0xff; 354 + } 351 355 } 352 356 pci_add_resource(resources, bus_range); 353 357 ··· 595 597 INIT_LIST_HEAD(&bridge->windows); 596 598 INIT_LIST_HEAD(&bridge->dma_ranges); 597 599 598 - err = devm_of_pci_get_host_bridge_resources(dev, 0, 0xff, &bridge->windows, 600 + err = devm_of_pci_get_host_bridge_resources(dev, &bridge->windows, 599 601 &bridge->dma_ranges, &iobase); 600 602 if (err) 601 603 return err;
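The of.c hunk above changes two behaviours: a missing DT `bus-range` now silently defaults to the PCI-standard 0x00..0xff (dropping the "No bus range found" message), and an end bus past 0xff is clamped with a warning, since bus numbers are 8 bits. A sketch of the resulting resolution logic:

```c
#include <assert.h>

struct bus_range { int start, end; };

/* Mirrors the reworked devm_of_pci_get_host_bridge_resources() logic:
 * no DT property -> default 0x00..0xff with no log message; an end
 * past 0xff is clamped to 0xff. */
static struct bus_range resolve_bus_range(const int *dt_range /* NULL if absent */)
{
	struct bus_range r;

	if (!dt_range) {
		r.start = 0x00;
		r.end = 0xff;
		return r;
	}
	r.start = dt_range[0];
	r.end = dt_range[1] > 0xff ? 0xff : dt_range[1];
	return r;
}
```

This is why the `busno`/`bus_max` parameters could be dropped from the function signature: the defaults were always 0 and 0xff at the only call site.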
+2 -2
drivers/pci/of_property.c
··· 26 26 * side and the child address is the corresponding address on the secondary 27 27 * side. 28 28 */ 29 - struct of_pci_range { 29 + struct of_pci_range_entry { 30 30 u32 child_addr[OF_PCI_ADDRESS_CELLS]; 31 31 u32 parent_addr[OF_PCI_ADDRESS_CELLS]; 32 32 u32 size[OF_PCI_SIZE_CELLS]; ··· 101 101 static int of_pci_prop_ranges(struct pci_dev *pdev, struct of_changeset *ocs, 102 102 struct device_node *np) 103 103 { 104 - struct of_pci_range *rp; 104 + struct of_pci_range_entry *rp; 105 105 struct resource *res; 106 106 int i, j, ret; 107 107 u32 flags, num;
+3 -3
drivers/pci/p2pdma.c
··· 161 161 return ret; 162 162 } 163 163 164 - static struct bin_attribute p2pmem_alloc_attr = { 164 + static const struct bin_attribute p2pmem_alloc_attr = { 165 165 .attr = { .name = "allocate", .mode = 0660 }, 166 166 .mmap = p2pmem_alloc_mmap, 167 167 /* ··· 180 180 NULL, 181 181 }; 182 182 183 - static struct bin_attribute *p2pmem_bin_attrs[] = { 183 + static const struct bin_attribute *const p2pmem_bin_attrs[] = { 184 184 &p2pmem_alloc_attr, 185 185 NULL, 186 186 }; 187 187 188 188 static const struct attribute_group p2pmem_group = { 189 189 .attrs = p2pmem_attrs, 190 - .bin_attrs = p2pmem_bin_attrs, 190 + .bin_attrs_new = p2pmem_bin_attrs, 191 191 .name = "p2pmem", 192 192 }; 193 193
+129 -21
drivers/pci/pci-sysfs.c
··· 13 13 */ 14 14 15 15 #include <linux/bitfield.h> 16 + #include <linux/cleanup.h> 16 17 #include <linux/kernel.h> 17 18 #include <linux/sched.h> 18 19 #include <linux/pci.h> ··· 695 694 static DEVICE_ATTR_RO(boot_vga); 696 695 697 696 static ssize_t pci_read_config(struct file *filp, struct kobject *kobj, 698 - struct bin_attribute *bin_attr, char *buf, 697 + const struct bin_attribute *bin_attr, char *buf, 699 698 loff_t off, size_t count) 700 699 { 701 700 struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj)); ··· 770 769 } 771 770 772 771 static ssize_t pci_write_config(struct file *filp, struct kobject *kobj, 773 - struct bin_attribute *bin_attr, char *buf, 772 + const struct bin_attribute *bin_attr, char *buf, 774 773 loff_t off, size_t count) 775 774 { 776 775 struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj)); ··· 838 837 839 838 return count; 840 839 } 841 - static BIN_ATTR(config, 0644, pci_read_config, pci_write_config, 0); 840 + static const BIN_ATTR(config, 0644, pci_read_config, pci_write_config, 0); 842 841 843 - static struct bin_attribute *pci_dev_config_attrs[] = { 842 + static const struct bin_attribute *const pci_dev_config_attrs[] = { 844 843 &bin_attr_config, 845 844 NULL, 846 845 }; ··· 857 856 } 858 857 859 858 static const struct attribute_group pci_dev_config_attr_group = { 860 - .bin_attrs = pci_dev_config_attrs, 859 + .bin_attrs_new = pci_dev_config_attrs, 861 860 .bin_size = pci_dev_config_attr_bin_size, 862 861 }; 863 862 ··· 888 887 * callback routine (pci_legacy_read). 889 888 */ 890 889 static ssize_t pci_read_legacy_io(struct file *filp, struct kobject *kobj, 891 - struct bin_attribute *bin_attr, char *buf, 892 - loff_t off, size_t count) 890 + const struct bin_attribute *bin_attr, 891 + char *buf, loff_t off, size_t count) 893 892 { 894 893 struct pci_bus *bus = to_pci_bus(kobj_to_dev(kobj)); 895 894 ··· 913 912 * callback routine (pci_legacy_write). 
914 913 */ 915 914 static ssize_t pci_write_legacy_io(struct file *filp, struct kobject *kobj, 916 - struct bin_attribute *bin_attr, char *buf, 917 - loff_t off, size_t count) 915 + const struct bin_attribute *bin_attr, 916 + char *buf, loff_t off, size_t count) 918 917 { 919 918 struct pci_bus *bus = to_pci_bus(kobj_to_dev(kobj)); 920 919 ··· 1004 1003 b->legacy_io->attr.name = "legacy_io"; 1005 1004 b->legacy_io->size = 0xffff; 1006 1005 b->legacy_io->attr.mode = 0600; 1007 - b->legacy_io->read = pci_read_legacy_io; 1008 - b->legacy_io->write = pci_write_legacy_io; 1006 + b->legacy_io->read_new = pci_read_legacy_io; 1007 + b->legacy_io->write_new = pci_write_legacy_io; 1009 1008 /* See pci_create_attr() for motivation */ 1010 1009 b->legacy_io->llseek = pci_llseek_resource; 1011 1010 b->legacy_io->mmap = pci_mmap_legacy_io; ··· 1100 1099 } 1101 1100 1102 1101 static ssize_t pci_resource_io(struct file *filp, struct kobject *kobj, 1103 - struct bin_attribute *attr, char *buf, 1102 + const struct bin_attribute *attr, char *buf, 1104 1103 loff_t off, size_t count, bool write) 1105 1104 { 1106 1105 #ifdef CONFIG_HAS_IOPORT ··· 1143 1142 } 1144 1143 1145 1144 static ssize_t pci_read_resource_io(struct file *filp, struct kobject *kobj, 1146 - struct bin_attribute *attr, char *buf, 1145 + const struct bin_attribute *attr, char *buf, 1147 1146 loff_t off, size_t count) 1148 1147 { 1149 1148 return pci_resource_io(filp, kobj, attr, buf, off, count, false); 1150 1149 } 1151 1150 1152 1151 static ssize_t pci_write_resource_io(struct file *filp, struct kobject *kobj, 1153 - struct bin_attribute *attr, char *buf, 1152 + const struct bin_attribute *attr, char *buf, 1154 1153 loff_t off, size_t count) 1155 1154 { 1156 1155 int ret; ··· 1211 1210 } else { 1212 1211 sprintf(res_attr_name, "resource%d", num); 1213 1212 if (pci_resource_flags(pdev, num) & IORESOURCE_IO) { 1214 - res_attr->read = pci_read_resource_io; 1215 - res_attr->write = pci_write_resource_io; 1213 + 
res_attr->read_new = pci_read_resource_io; 1214 + res_attr->write_new = pci_write_resource_io; 1216 1215 if (arch_can_pci_mmap_io()) 1217 1216 res_attr->mmap = pci_mmap_resource_uc; 1218 1217 } else { ··· 1293 1292 * writing anything except 0 enables it 1294 1293 */ 1295 1294 static ssize_t pci_write_rom(struct file *filp, struct kobject *kobj, 1296 - struct bin_attribute *bin_attr, char *buf, 1295 + const struct bin_attribute *bin_attr, char *buf, 1297 1296 loff_t off, size_t count) 1298 1297 { 1299 1298 struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); ··· 1319 1318 * device corresponding to @kobj. 1320 1319 */ 1321 1320 static ssize_t pci_read_rom(struct file *filp, struct kobject *kobj, 1322 - struct bin_attribute *bin_attr, char *buf, 1321 + const struct bin_attribute *bin_attr, char *buf, 1323 1322 loff_t off, size_t count) 1324 1323 { 1325 1324 struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); ··· 1345 1344 1346 1345 return count; 1347 1346 } 1348 - static BIN_ATTR(rom, 0600, pci_read_rom, pci_write_rom, 0); 1347 + static const BIN_ATTR(rom, 0600, pci_read_rom, pci_write_rom, 0); 1349 1348 1350 - static struct bin_attribute *pci_dev_rom_attrs[] = { 1349 + static const struct bin_attribute *const pci_dev_rom_attrs[] = { 1351 1350 &bin_attr_rom, 1352 1351 NULL, 1353 1352 }; ··· 1373 1372 } 1374 1373 1375 1374 static const struct attribute_group pci_dev_rom_attr_group = { 1376 - .bin_attrs = pci_dev_rom_attrs, 1375 + .bin_attrs_new = pci_dev_rom_attrs, 1377 1376 .is_bin_visible = pci_dev_rom_attr_is_visible, 1378 1377 .bin_size = pci_dev_rom_attr_bin_size, 1379 1378 }; ··· 1419 1418 1420 1419 static const struct attribute_group pci_dev_reset_attr_group = { 1421 1420 .attrs = pci_dev_reset_attrs, 1421 + .is_visible = pci_dev_reset_attr_is_visible, 1422 + }; 1423 + 1424 + static ssize_t reset_method_show(struct device *dev, 1425 + struct device_attribute *attr, char *buf) 1426 + { 1427 + struct pci_dev *pdev = to_pci_dev(dev); 1428 + ssize_t len = 0; 1429 
+ int i, m; 1430 + 1431 + for (i = 0; i < PCI_NUM_RESET_METHODS; i++) { 1432 + m = pdev->reset_methods[i]; 1433 + if (!m) 1434 + break; 1435 + 1436 + len += sysfs_emit_at(buf, len, "%s%s", len ? " " : "", 1437 + pci_reset_fn_methods[m].name); 1438 + } 1439 + 1440 + if (len) 1441 + len += sysfs_emit_at(buf, len, "\n"); 1442 + 1443 + return len; 1444 + } 1445 + 1446 + static int reset_method_lookup(const char *name) 1447 + { 1448 + int m; 1449 + 1450 + for (m = 1; m < PCI_NUM_RESET_METHODS; m++) { 1451 + if (sysfs_streq(name, pci_reset_fn_methods[m].name)) 1452 + return m; 1453 + } 1454 + 1455 + return 0; /* not found */ 1456 + } 1457 + 1458 + static ssize_t reset_method_store(struct device *dev, 1459 + struct device_attribute *attr, 1460 + const char *buf, size_t count) 1461 + { 1462 + struct pci_dev *pdev = to_pci_dev(dev); 1463 + char *tmp_options, *name; 1464 + int m, n; 1465 + u8 reset_methods[PCI_NUM_RESET_METHODS] = {}; 1466 + 1467 + if (sysfs_streq(buf, "")) { 1468 + pdev->reset_methods[0] = 0; 1469 + pci_warn(pdev, "All device reset methods disabled by user"); 1470 + return count; 1471 + } 1472 + 1473 + if (sysfs_streq(buf, "default")) { 1474 + pci_init_reset_methods(pdev); 1475 + return count; 1476 + } 1477 + 1478 + char *options __free(kfree) = kstrndup(buf, count, GFP_KERNEL); 1479 + if (!options) 1480 + return -ENOMEM; 1481 + 1482 + n = 0; 1483 + tmp_options = options; 1484 + while ((name = strsep(&tmp_options, " ")) != NULL) { 1485 + if (sysfs_streq(name, "")) 1486 + continue; 1487 + 1488 + name = strim(name); 1489 + 1490 + /* Leave previous methods unchanged if input is invalid */ 1491 + m = reset_method_lookup(name); 1492 + if (!m) { 1493 + pci_err(pdev, "Invalid reset method '%s'", name); 1494 + return -EINVAL; 1495 + } 1496 + 1497 + if (pci_reset_fn_methods[m].reset_fn(pdev, PCI_RESET_PROBE)) { 1498 + pci_err(pdev, "Unsupported reset method '%s'", name); 1499 + return -EINVAL; 1500 + } 1501 + 1502 + if (n == PCI_NUM_RESET_METHODS - 1) { 1503 + 
pci_err(pdev, "Too many reset methods\n"); 1504 + return -EINVAL; 1505 + } 1506 + 1507 + reset_methods[n++] = m; 1508 + } 1509 + 1510 + reset_methods[n] = 0; 1511 + 1512 + /* Warn if dev-specific supported but not highest priority */ 1513 + if (pci_reset_fn_methods[1].reset_fn(pdev, PCI_RESET_PROBE) == 0 && 1514 + reset_methods[0] != 1) 1515 + pci_warn(pdev, "Device-specific reset disabled/de-prioritized by user"); 1516 + memcpy(pdev->reset_methods, reset_methods, sizeof(pdev->reset_methods)); 1517 + return count; 1518 + } 1519 + static DEVICE_ATTR_RW(reset_method); 1520 + 1521 + static struct attribute *pci_dev_reset_method_attrs[] = { 1522 + &dev_attr_reset_method.attr, 1523 + NULL, 1524 + }; 1525 + 1526 + static const struct attribute_group pci_dev_reset_method_attr_group = { 1527 + .attrs = pci_dev_reset_method_attrs, 1422 1528 .is_visible = pci_dev_reset_attr_is_visible, 1423 1529 }; 1424 1530
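The `reset_method_store()` code above follows a parse-then-commit shape: tokens are validated into a temporary array and only `memcpy`'d over the device's method list once every token checks out, so a bad write cannot half-apply. A userspace sketch of that pattern with `strsep()`; the method table is an illustrative subset, not the kernel's `pci_reset_fn_methods[]`.

```c
#define _DEFAULT_SOURCE		/* strdup/strsep on glibc */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MAX_METHODS 8

/* Illustrative method names; index 0 is reserved as the "not found" /
 * terminator value, as in the kernel's table. */
static const char *const method_names[MAX_METHODS] = {
	NULL, "device_specific", "flr", "pm", "bus",
};

static int method_lookup(const char *name)
{
	for (int m = 1; m < MAX_METHODS && method_names[m]; m++)
		if (strcmp(name, method_names[m]) == 0)
			return m;
	return 0;	/* not found */
}

/* Parse a space-separated list like "flr pm" into out[]; leave out[]
 * untouched on any error (the parse-then-memcpy shape above).
 * Returns the number of methods, or -1 on error. */
static int parse_methods(const char *buf, unsigned char *out)
{
	unsigned char tmp[MAX_METHODS] = { 0 };
	char *dup = strdup(buf), *p = dup, *name;
	int n = 0, m;

	while ((name = strsep(&p, " ")) != NULL) {
		if (!*name)
			continue;	/* skip runs of spaces */
		m = method_lookup(name);
		if (!m || n == MAX_METHODS - 1) {
			free(dup);
			return -1;	/* invalid token or too many */
		}
		tmp[n++] = m;
	}
	free(dup);
	memcpy(out, tmp, MAX_METHODS);	/* commit only on full success */
	return n;
}
```

The same all-or-nothing property is what lets the sysfs store return -EINVAL while promising "previous methods unchanged".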
+72 -203
drivers/pci/pci.c
···
 #include <linux/string.h>
 #include <linux/log2.h>
 #include <linux/logic_pio.h>
-#include <linux/pm_wakeup.h>
 #include <linux/device.h>
 #include <linux/pm_runtime.h>
 #include <linux/pci_hotplug.h>
···
 }
 
 /**
- * pcie_read_tlp_log - read TLP Header Log
- * @dev: PCIe device
- * @where: PCI Config offset of TLP Header Log
- * @tlp_log: TLP Log structure to fill
- *
- * Fill @tlp_log from TLP Header Log registers, e.g., AER or DPC.
- *
- * Return: 0 on success and filled TLP Log structure, <0 on error.
- */
-int pcie_read_tlp_log(struct pci_dev *dev, int where,
-		      struct pcie_tlp_log *tlp_log)
-{
-	int i, ret;
-
-	memset(tlp_log, 0, sizeof(*tlp_log));
-
-	for (i = 0; i < 4; i++) {
-		ret = pci_read_config_dword(dev, where + i * 4,
-					    &tlp_log->dw[i]);
-		if (ret)
-			return pcibios_err_to_errno(ret);
-	}
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(pcie_read_tlp_log);
-
-/**
  * pci_restore_bars - restore a device's BAR values (e.g. after wake-up)
  * @dev: PCI device to have its BARs restored
  *
···
 	return pci_enable_resources(dev, bars);
 }
 
+static int pci_host_bridge_enable_device(struct pci_dev *dev)
+{
+	struct pci_host_bridge *host_bridge = pci_find_host_bridge(dev->bus);
+	int err;
+
+	if (host_bridge && host_bridge->enable_device) {
+		err = host_bridge->enable_device(host_bridge, dev);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static void pci_host_bridge_disable_device(struct pci_dev *dev)
+{
+	struct pci_host_bridge *host_bridge = pci_find_host_bridge(dev->bus);
+
+	if (host_bridge && host_bridge->disable_device)
+		host_bridge->disable_device(host_bridge, dev);
+}
+
 static int do_pci_enable_device(struct pci_dev *dev, int bars)
 {
 	int err;
···
 	if (bridge)
 		pcie_aspm_powersave_config_link(bridge);
 
+	err = pci_host_bridge_enable_device(dev);
+	if (err)
+		return err;
+
 	err = pcibios_enable_device(dev, bars);
 	if (err < 0)
-		return err;
+		goto err_enable;
 	pci_fixup_device(pci_fixup_enable, dev);
 
 	if (dev->msi_enabled || dev->msix_enabled)
···
 	}
 
 	return 0;
+
+err_enable:
+	pci_host_bridge_disable_device(dev);
+
+	return err;
+
 }
 
 /**
···
 
 	if (atomic_dec_return(&dev->enable_cnt) != 0)
 		return;
+
+	pci_host_bridge_disable_device(dev);
 
 	do_pci_disable_device(dev);
 
···
  * __pci_request_region - Reserved PCI I/O and memory resource
  * @pdev: PCI device whose resources are to be reserved
  * @bar: BAR to be reserved
- * @res_name: Name to be associated with resource.
+ * @name: name of the driver requesting the resource
  * @exclusive: whether the region access is exclusive or not
  *
  * Returns: 0 on success, negative error code on failure.
  *
- * Mark the PCI region associated with PCI device @pdev BAR @bar as
- * being reserved by owner @res_name. Do not access any
- * address inside the PCI regions unless this call returns
- * successfully.
+ * Mark the PCI region associated with PCI device @pdev BAR @bar as being
+ * reserved by owner @name. Do not access any address inside the PCI regions
+ * unless this call returns successfully.
  *
  * If @exclusive is set, then the region is marked so that userspace
  * is explicitly not allowed to map the resource via /dev/mem or
···
  * message is also printed on failure.
  */
 static int __pci_request_region(struct pci_dev *pdev, int bar,
-				const char *res_name, int exclusive)
+				const char *name, int exclusive)
 {
 	if (pci_is_managed(pdev)) {
 		if (exclusive == IORESOURCE_EXCLUSIVE)
-			return pcim_request_region_exclusive(pdev, bar, res_name);
+			return pcim_request_region_exclusive(pdev, bar, name);
 
-		return pcim_request_region(pdev, bar, res_name);
+		return pcim_request_region(pdev, bar, name);
 	}
 
 	if (pci_resource_len(pdev, bar) == 0)
···
 
 	if (pci_resource_flags(pdev, bar) & IORESOURCE_IO) {
 		if (!request_region(pci_resource_start(pdev, bar),
-			    pci_resource_len(pdev, bar), res_name))
+			    pci_resource_len(pdev, bar), name))
 			goto err_out;
 	} else if (pci_resource_flags(pdev, bar) & IORESOURCE_MEM) {
 		if (!__request_mem_region(pci_resource_start(pdev, bar),
-					pci_resource_len(pdev, bar), res_name,
+					pci_resource_len(pdev, bar), name,
 					exclusive))
 			goto err_out;
 	}
···
  * pci_request_region - Reserve PCI I/O and memory resource
  * @pdev: PCI device whose resources are to be reserved
  * @bar: BAR to be reserved
- * @res_name: Name to be associated with resource
+ * @name: name of the driver requesting the resource
  *
  * Returns: 0 on success, negative error code on failure.
  *
- * Mark the PCI region associated with PCI device @pdev BAR @bar as
- * being reserved by owner @res_name. Do not access any
- * address inside the PCI regions unless this call returns
- * successfully.
+ * Mark the PCI region associated with PCI device @pdev BAR @bar as being
+ * reserved by owner @name. Do not access any address inside the PCI regions
+ * unless this call returns successfully.
  *
  * Returns 0 on success, or %EBUSY on error. A warning
  * message is also printed on failure.
···
  * when pcim_enable_device() has been called in advance. This hybrid feature is
  * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
-int pci_request_region(struct pci_dev *pdev, int bar, const char *res_name)
+int pci_request_region(struct pci_dev *pdev, int bar, const char *name)
 {
-	return __pci_request_region(pdev, bar, res_name, 0);
+	return __pci_request_region(pdev, bar, name, 0);
 }
 EXPORT_SYMBOL(pci_request_region);
···
 EXPORT_SYMBOL(pci_release_selected_regions);
 
 static int __pci_request_selected_regions(struct pci_dev *pdev, int bars,
-					  const char *res_name, int excl)
+					  const char *name, int excl)
 {
 	int i;
 
 	for (i = 0; i < PCI_STD_NUM_BARS; i++)
 		if (bars & (1 << i))
-			if (__pci_request_region(pdev, i, res_name, excl))
+			if (__pci_request_region(pdev, i, name, excl))
 				goto err_out;
 	return 0;
 
···
  * pci_request_selected_regions - Reserve selected PCI I/O and memory resources
  * @pdev: PCI device whose resources are to be reserved
  * @bars: Bitmask of BARs to be requested
- * @res_name: Name to be associated with resource
+ * @name: Name of the driver requesting the resources
  *
  * Returns: 0 on success, negative error code on failure.
  *
···
  * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
 int pci_request_selected_regions(struct pci_dev *pdev, int bars,
-				 const char *res_name)
+				 const char *name)
 {
-	return __pci_request_selected_regions(pdev, bars, res_name, 0);
+	return __pci_request_selected_regions(pdev, bars, name, 0);
 }
 EXPORT_SYMBOL(pci_request_selected_regions);
 
···
  * pci_request_selected_regions_exclusive - Request regions exclusively
  * @pdev: PCI device to request regions from
  * @bars: bit mask of BARs to request
- * @res_name: name to be associated with the requests
+ * @name: name of the driver requesting the resources
  *
  * Returns: 0 on success, negative error code on failure.
  *
···
  * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
 int pci_request_selected_regions_exclusive(struct pci_dev *pdev, int bars,
-					   const char *res_name)
+					   const char *name)
 {
-	return __pci_request_selected_regions(pdev, bars, res_name,
+	return __pci_request_selected_regions(pdev, bars, name,
 					      IORESOURCE_EXCLUSIVE);
 }
 EXPORT_SYMBOL(pci_request_selected_regions_exclusive);
···
 /**
  * pci_request_regions - Reserve PCI I/O and memory resources
  * @pdev: PCI device whose resources are to be reserved
- * @res_name: Name to be associated with resource.
+ * @name: name of the driver requesting the resources
  *
- * Mark all PCI regions associated with PCI device @pdev as
- * being reserved by owner @res_name. Do not access any
- * address inside the PCI regions unless this call returns
- * successfully.
+ * Mark all PCI regions associated with PCI device @pdev as being reserved by
+ * owner @name. Do not access any address inside the PCI regions unless this
+ * call returns successfully.
  *
  * Returns 0 on success, or %EBUSY on error. A warning
  * message is also printed on failure.
···
  * when pcim_enable_device() has been called in advance. This hybrid feature is
  * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
-int pci_request_regions(struct pci_dev *pdev, const char *res_name)
+int pci_request_regions(struct pci_dev *pdev, const char *name)
 {
 	return pci_request_selected_regions(pdev,
-			((1 << PCI_STD_NUM_BARS) - 1), res_name);
+			((1 << PCI_STD_NUM_BARS) - 1), name);
 }
 EXPORT_SYMBOL(pci_request_regions);
 
 /**
  * pci_request_regions_exclusive - Reserve PCI I/O and memory resources
  * @pdev: PCI device whose resources are to be reserved
- * @res_name: Name to be associated with resource.
+ * @name: name of the driver requesting the resources
  *
  * Returns: 0 on success, negative error code on failure.
  *
  * Mark all PCI regions associated with PCI device @pdev as being reserved
- * by owner @res_name. Do not access any address inside the PCI regions
+ * by owner @name. Do not access any address inside the PCI regions
  * unless this call returns successfully.
  *
  * pci_request_regions_exclusive() will mark the region so that /dev/mem
···
  * when pcim_enable_device() has been called in advance. This hybrid feature is
  * DEPRECATED! If you want managed cleanup, use the pcim_* functions instead.
  */
-int pci_request_regions_exclusive(struct pci_dev *pdev, const char *res_name)
+int pci_request_regions_exclusive(struct pci_dev *pdev, const char *name)
 {
 	return pci_request_selected_regions_exclusive(pdev,
-			((1 << PCI_STD_NUM_BARS) - 1), res_name);
+			((1 << PCI_STD_NUM_BARS) - 1), name);
 }
 EXPORT_SYMBOL(pci_request_regions_exclusive);
···
  * @enable: boolean: whether to enable or disable PCI INTx
  *
  * Enables/disables PCI INTx for device @pdev
- *
- * NOTE:
- * This is a "hybrid" function: It's normally unmanaged, but becomes managed
- * when pcim_enable_device() has been called in advance. This hybrid feature is
- * DEPRECATED! If you want managed cleanup, use pcim_intx() instead.
  */
 void pci_intx(struct pci_dev *pdev, int enable)
 {
···
 	else
 		new = pci_command | PCI_COMMAND_INTX_DISABLE;
 
-	if (new != pci_command) {
-		/* Preserve the "hybrid" behavior for backwards compatibility */
-		if (pci_is_managed(pdev)) {
-			WARN_ON_ONCE(pcim_intx(pdev, enable) != 0);
-			return;
-		}
+	if (new == pci_command)
+		return;
 
-		pci_write_config_word(pdev, PCI_COMMAND, new);
-	}
+	pci_write_config_word(pdev, PCI_COMMAND, new);
 }
 EXPORT_SYMBOL_GPL(pci_intx);
···
 }
 
 /* dev->reset_methods[] is a 0-terminated list of indices into this array */
-static const struct pci_reset_fn_method pci_reset_fn_methods[] = {
+const struct pci_reset_fn_method pci_reset_fn_methods[] = {
 	{ },
 	{ pci_dev_specific_reset, .name = "device_specific" },
 	{ pci_dev_acpi_reset, .name = "acpi" },
···
 	{ pci_pm_reset, .name = "pm" },
 	{ pci_reset_bus_function, .name = "bus" },
 	{ cxl_reset_bus_function, .name = "cxl_bus" },
-};
-
-static ssize_t reset_method_show(struct device *dev,
-				 struct device_attribute *attr, char *buf)
-{
-	struct pci_dev *pdev = to_pci_dev(dev);
-	ssize_t len = 0;
-	int i, m;
-
-	for (i = 0; i < PCI_NUM_RESET_METHODS; i++) {
-		m = pdev->reset_methods[i];
-		if (!m)
-			break;
-
-		len += sysfs_emit_at(buf, len, "%s%s", len ? " " : "",
-				     pci_reset_fn_methods[m].name);
-	}
-
-	if (len)
-		len += sysfs_emit_at(buf, len, "\n");
-
-	return len;
-}
-
-static int reset_method_lookup(const char *name)
-{
-	int m;
-
-	for (m = 1; m < PCI_NUM_RESET_METHODS; m++) {
-		if (sysfs_streq(name, pci_reset_fn_methods[m].name))
-			return m;
-	}
-
-	return 0;	/* not found */
-}
-
-static ssize_t reset_method_store(struct device *dev,
-				  struct device_attribute *attr,
-				  const char *buf, size_t count)
-{
-	struct pci_dev *pdev = to_pci_dev(dev);
-	char *options, *tmp_options, *name;
-	int m, n;
-	u8 reset_methods[PCI_NUM_RESET_METHODS] = { 0 };
-
-	if (sysfs_streq(buf, "")) {
-		pdev->reset_methods[0] = 0;
-		pci_warn(pdev, "All device reset methods disabled by user");
-		return count;
-	}
-
-	if (sysfs_streq(buf, "default")) {
-		pci_init_reset_methods(pdev);
-		return count;
-	}
-
-	options = kstrndup(buf, count, GFP_KERNEL);
-	if (!options)
-		return -ENOMEM;
-
-	n = 0;
-	tmp_options = options;
-	while ((name = strsep(&tmp_options, " ")) != NULL) {
-		if (sysfs_streq(name, ""))
-			continue;
-
-		name = strim(name);
-
-		m = reset_method_lookup(name);
-		if (!m) {
-			pci_err(pdev, "Invalid reset method '%s'", name);
-			goto error;
-		}
-
-		if (pci_reset_fn_methods[m].reset_fn(pdev, PCI_RESET_PROBE)) {
-			pci_err(pdev, "Unsupported reset method '%s'", name);
-			goto error;
-		}
-
-		if (n == PCI_NUM_RESET_METHODS - 1) {
-			pci_err(pdev, "Too many reset methods\n");
-			goto error;
-		}
-
-		reset_methods[n++] = m;
-	}
-
-	reset_methods[n] = 0;
-
-	/* Warn if dev-specific supported but not highest priority */
-	if (pci_reset_fn_methods[1].reset_fn(pdev, PCI_RESET_PROBE) == 0 &&
-	    reset_methods[0] != 1)
-		pci_warn(pdev, "Device-specific reset disabled/de-prioritized by user");
-	memcpy(pdev->reset_methods, reset_methods, sizeof(pdev->reset_methods));
-	kfree(options);
-	return count;
-
-error:
-	/* Leave previous methods unchanged */
-	kfree(options);
-	return -EINVAL;
-}
-static DEVICE_ATTR_RW(reset_method);
-
-static struct attribute *pci_dev_reset_method_attrs[] = {
-	&dev_attr_reset_method.attr,
-	NULL,
-};
-
-static umode_t pci_dev_reset_method_attr_is_visible(struct kobject *kobj,
-						    struct attribute *a, int n)
-{
-	struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
-
-	if (!pci_reset_supported(pdev))
-		return 0;
-
-	return a->mode;
-}
-
-const struct attribute_group pci_dev_reset_method_attr_group = {
-	.attrs = pci_dev_reset_method_attrs,
-	.is_visible = pci_dev_reset_method_attr_is_visible,
 };
 
 /**
+13 -10
drivers/pci/pci.h
···
 
 #include <linux/pci.h>
 
+struct pcie_tlp_log;
+
 /* Number of possible devfns: 0.0 to 1f.7 inclusive */
 #define MAX_NR_DEVFNS 256
 
···
 int pci_idt_bus_quirk(struct pci_bus *bus, int devfn, u32 *pl, int rrs_timeout);
 
 int pci_setup_device(struct pci_dev *dev);
+void __pci_size_stdbars(struct pci_dev *dev, int count,
+			unsigned int pos, u32 *sizes);
 int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
-		    struct resource *res, unsigned int reg);
+		    struct resource *res, unsigned int reg, u32 *sizes);
 void pci_configure_ari(struct pci_dev *dev);
 void __pci_bus_size_bridges(struct pci_bus *bus,
 			    struct list_head *realloc_head);
···
 
 int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
 void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
+
+int pcie_read_tlp_log(struct pci_dev *dev, int where, int where2,
+		      unsigned int tlp_len, struct pcie_tlp_log *log);
+unsigned int aer_tlp_log_len(struct pci_dev *dev, u32 aercc);
+void pcie_print_tlp_log(const struct pci_dev *dev,
+			const struct pcie_tlp_log *log, const char *pfx);
 #endif	/* CONFIG_PCIEAER */
 
 #ifdef CONFIG_PCIEPORTBUS
···
 void dpc_process_error(struct pci_dev *pdev);
 pci_ers_result_t dpc_reset_link(struct pci_dev *pdev);
 bool pci_dpc_recovered(struct pci_dev *pdev);
+unsigned int dpc_tlp_log_len(struct pci_dev *dev);
 #else
 static inline void pci_save_dpc_state(struct pci_dev *dev) { }
 static inline void pci_restore_dpc_state(struct pci_dev *dev) { }
···
 	int (*reset_fn)(struct pci_dev *pdev, bool probe);
 	char *name;
 };
+extern const struct pci_reset_fn_method pci_reset_fn_methods[];
 
 #ifdef CONFIG_PCI_QUIRKS
 int pci_dev_specific_reset(struct pci_dev *dev, bool probe);
···
 struct device_node;
 
 #ifdef CONFIG_OF
-int of_pci_parse_bus_range(struct device_node *node, struct resource *res);
 int of_get_pci_domain_nr(struct device_node *node);
 int of_pci_get_max_link_speed(struct device_node *node);
 u32 of_pci_get_slot_power_limit(struct device_node *node,
···
 bool of_pci_supply_present(struct device_node *np);
 
 #else
-static inline int
-of_pci_parse_bus_range(struct device_node *node, struct resource *res)
-{
-	return -EINVAL;
-}
-
 static inline int
 of_get_pci_domain_nr(struct device_node *node)
 {
···
 #ifdef CONFIG_PCIEASPM
 extern const struct attribute_group aspm_ctrl_attr_group;
 #endif
-
-extern const struct attribute_group pci_dev_reset_method_attr_group;
 
 #ifdef CONFIG_X86_INTEL_MID
 bool pci_use_mid_pm(void);
+1 -1
drivers/pci/pcie/Makefile
···
 obj-$(CONFIG_PCIEPORTBUS)	+= pcieportdrv.o bwctrl.o
 
 obj-y				+= aspm.o
-obj-$(CONFIG_PCIEAER)		+= aer.o err.o
+obj-$(CONFIG_PCIEAER)		+= aer.o err.o tlp.o
 obj-$(CONFIG_PCIEAER_INJECT)	+= aer_inject.o
 obj-$(CONFIG_PCIE_PME)		+= pme.o
 obj-$(CONFIG_PCIE_DPC)		+= dpc.o
+6 -9
drivers/pci/pcie/aer.c
···
 	}
 }
 
-static void __print_tlp_header(struct pci_dev *dev, struct pcie_tlp_log *t)
-{
-	pci_err(dev, "  TLP Header: %08x %08x %08x %08x\n",
-		t->dw[0], t->dw[1], t->dw[2], t->dw[3]);
-}
-
 static void __aer_print_error(struct pci_dev *dev,
 			      struct aer_err_info *info)
 {
···
 	__aer_print_error(dev, info);
 
 	if (info->tlp_header_valid)
-		__print_tlp_header(dev, &info->tlp);
+		pcie_print_tlp_log(dev, &info->tlp, dev_fmt("  "));
 
 out:
 	if (info->id && info->error_dev_num > 1 && info->id == id)
···
 		aer->uncor_severity);
 
 	if (tlp_header_valid)
-		__print_tlp_header(dev, &aer->header_log);
+		pcie_print_tlp_log(dev, &aer->header_log, dev_fmt("  "));
 
 	trace_aer_event(dev_name(&dev->dev), (status & ~mask),
 			aer_severity, tlp_header_valid, &aer->header_log);
···
 
 	if (info->status & AER_LOG_TLP_MASKS) {
 		info->tlp_header_valid = 1;
-		pcie_read_tlp_log(dev, aer + PCI_ERR_HEADER_LOG, &info->tlp);
+		pcie_read_tlp_log(dev, aer + PCI_ERR_HEADER_LOG,
+				  aer + PCI_ERR_PREFIX_LOG,
+				  aer_tlp_log_len(dev, aercc),
+				  &info->tlp);
 	}
 }
+29 -6
drivers/pci/pcie/aspm.c
···
 
 void pci_save_aspm_l1ss_state(struct pci_dev *pdev)
 {
+	struct pci_dev *parent = pdev->bus->self;
 	struct pci_cap_saved_state *save_state;
-	u16 l1ss = pdev->l1ss;
 	u32 *cap;
+
+	/*
+	 * If this is a Downstream Port, we never restore the L1SS state
+	 * directly; we only restore it when we restore the state of the
+	 * Upstream Port below it.
+	 */
+	if (pcie_downstream_port(pdev) || !parent)
+		return;
+
+	if (!pdev->l1ss || !parent->l1ss)
+		return;
 
 	/*
 	 * Save L1 substate configuration. The ASPM L0s/L1 configuration
 	 * in PCI_EXP_LNKCTL_ASPMC is saved by pci_save_pcie_state().
 	 */
-	if (!l1ss)
-		return;
-
 	save_state = pci_find_saved_ext_cap(pdev, PCI_EXT_CAP_ID_L1SS);
 	if (!save_state)
 		return;
 
 	cap = &save_state->cap.data[0];
-	pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL2, cap++);
-	pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL1, cap++);
+	pci_read_config_dword(pdev, pdev->l1ss + PCI_L1SS_CTL2, cap++);
+	pci_read_config_dword(pdev, pdev->l1ss + PCI_L1SS_CTL1, cap++);
+
+	if (parent->state_saved)
+		return;
+
+	/*
+	 * Save parent's L1 substate configuration so we have it for
+	 * pci_restore_aspm_l1ss_state(pdev) to restore.
+	 */
+	save_state = pci_find_saved_ext_cap(parent, PCI_EXT_CAP_ID_L1SS);
+	if (!save_state)
+		return;
+
+	cap = &save_state->cap.data[0];
+	pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CTL2, cap++);
+	pci_read_config_dword(parent, parent->l1ss + PCI_L1SS_CTL1, cap++);
 }
 
 void pci_restore_aspm_l1ss_state(struct pci_dev *pdev)
+10 -12
drivers/pci/pcie/dpc.c
···
 static void dpc_process_rp_pio_error(struct pci_dev *pdev)
 {
 	u16 cap = pdev->dpc_cap, dpc_status, first_error;
-	u32 status, mask, sev, syserr, exc, log, prefix;
+	u32 status, mask, sev, syserr, exc, log;
 	struct pcie_tlp_log tlp_log;
 	int i;
 
···
 			first_error == i ? " (First)" : "");
 	}
 
-	if (pdev->dpc_rp_log_size < 4)
+	if (pdev->dpc_rp_log_size < PCIE_STD_NUM_TLP_HEADERLOG)
 		goto clear_status;
-	pcie_read_tlp_log(pdev, cap + PCI_EXP_DPC_RP_PIO_HEADER_LOG, &tlp_log);
-	pci_err(pdev, "TLP Header: %#010x %#010x %#010x %#010x\n",
-		tlp_log.dw[0], tlp_log.dw[1], tlp_log.dw[2], tlp_log.dw[3]);
+	pcie_read_tlp_log(pdev, cap + PCI_EXP_DPC_RP_PIO_HEADER_LOG,
+			  cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG,
+			  dpc_tlp_log_len(pdev), &tlp_log);
+	pcie_print_tlp_log(pdev, &tlp_log, dev_fmt(""));
 
-	if (pdev->dpc_rp_log_size < 5)
+	if (pdev->dpc_rp_log_size < PCIE_STD_NUM_TLP_HEADERLOG + 1)
 		goto clear_status;
 	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_IMPSPEC_LOG, &log);
 	pci_err(pdev, "RP PIO ImpSpec Log %#010x\n", log);
 
-	for (i = 0; i < pdev->dpc_rp_log_size - 5; i++) {
-		pci_read_config_dword(pdev,
-			cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG + i * 4, &prefix);
-		pci_err(pdev, "TLP Prefix Header: dw%d, %#010x\n", i, prefix);
-	}
 clear_status:
 	pci_write_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, status);
 }
···
 	if (!pdev->dpc_rp_log_size) {
 		pdev->dpc_rp_log_size =
 			FIELD_GET(PCI_EXP_DPC_RP_PIO_LOG_SIZE, cap);
-		if (pdev->dpc_rp_log_size < 4 || pdev->dpc_rp_log_size > 9) {
+		if (pdev->dpc_rp_log_size < PCIE_STD_NUM_TLP_HEADERLOG ||
+		    pdev->dpc_rp_log_size > PCIE_STD_NUM_TLP_HEADERLOG + 1 +
+					    PCIE_STD_MAX_TLP_PREFIXLOG) {
 			pci_err(pdev, "RP PIO log size %u is invalid\n",
 				pdev->dpc_rp_log_size);
 			pdev->dpc_rp_log_size = 0;
+115
drivers/pci/pcie/tlp.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * PCIe TLP Log handling
+ *
+ * Copyright (C) 2024 Intel Corporation
+ */
+
+#include <linux/aer.h>
+#include <linux/array_size.h>
+#include <linux/pci.h>
+#include <linux/string.h>
+
+#include "../pci.h"
+
+/**
+ * aer_tlp_log_len - Calculate AER Capability TLP Header/Prefix Log length
+ * @dev: PCIe device
+ * @aercc: AER Capabilities and Control register value
+ *
+ * Return: TLP Header/Prefix Log length
+ */
+unsigned int aer_tlp_log_len(struct pci_dev *dev, u32 aercc)
+{
+	return PCIE_STD_NUM_TLP_HEADERLOG +
+	       ((aercc & PCI_ERR_CAP_PREFIX_LOG_PRESENT) ?
+		dev->eetlp_prefix_max : 0);
+}
+
+#ifdef CONFIG_PCIE_DPC
+/**
+ * dpc_tlp_log_len - Calculate DPC RP PIO TLP Header/Prefix Log length
+ * @dev: PCIe device
+ *
+ * Return: TLP Header/Prefix Log length
+ */
+unsigned int dpc_tlp_log_len(struct pci_dev *dev)
+{
+	/* Remove ImpSpec Log register from the count */
+	if (dev->dpc_rp_log_size >= PCIE_STD_NUM_TLP_HEADERLOG + 1)
+		return dev->dpc_rp_log_size - 1;
+
+	return dev->dpc_rp_log_size;
+}
+#endif
+
+/**
+ * pcie_read_tlp_log - read TLP Header Log
+ * @dev: PCIe device
+ * @where: PCI Config offset of TLP Header Log
+ * @where2: PCI Config offset of TLP Prefix Log
+ * @tlp_len: TLP Log length (Header Log + TLP Prefix Log in DWORDs)
+ * @log: TLP Log structure to fill
+ *
+ * Fill @log from TLP Header Log registers, e.g., AER or DPC.
+ *
+ * Return: 0 on success and filled TLP Log structure, <0 on error.
+ */
+int pcie_read_tlp_log(struct pci_dev *dev, int where, int where2,
+		      unsigned int tlp_len, struct pcie_tlp_log *log)
+{
+	unsigned int i;
+	int off, ret;
+	u32 *to;
+
+	memset(log, 0, sizeof(*log));
+
+	for (i = 0; i < tlp_len; i++) {
+		if (i < PCIE_STD_NUM_TLP_HEADERLOG) {
+			off = where + i * 4;
+			to = &log->dw[i];
+		} else {
+			off = where2 + (i - PCIE_STD_NUM_TLP_HEADERLOG) * 4;
+			to = &log->prefix[i - PCIE_STD_NUM_TLP_HEADERLOG];
+		}
+
+		ret = pci_read_config_dword(dev, off, to);
+		if (ret)
+			return pcibios_err_to_errno(ret);
+	}
+
+	return 0;
+}
+
+#define EE_PREFIX_STR " E-E Prefixes:"
+
+/**
+ * pcie_print_tlp_log - Print TLP Header / Prefix Log contents
+ * @dev: PCIe device
+ * @log: TLP Log structure
+ * @pfx: String prefix
+ *
+ * Prints TLP Header and Prefix Log information held by @log.
+ */
+void pcie_print_tlp_log(const struct pci_dev *dev,
+			const struct pcie_tlp_log *log, const char *pfx)
+{
+	char buf[11 * (PCIE_STD_NUM_TLP_HEADERLOG + ARRAY_SIZE(log->prefix)) +
+		 sizeof(EE_PREFIX_STR)];
+	unsigned int i;
+	int len;
+
+	len = scnprintf(buf, sizeof(buf), "%#010x %#010x %#010x %#010x",
+			log->dw[0], log->dw[1], log->dw[2], log->dw[3]);
+
+	if (log->prefix[0])
+		len += scnprintf(buf + len, sizeof(buf) - len, EE_PREFIX_STR);
+	for (i = 0; i < ARRAY_SIZE(log->prefix); i++) {
+		if (!log->prefix[i])
+			break;
+		len += scnprintf(buf + len, sizeof(buf) - len,
+				 " %#010x", log->prefix[i]);
+	}
+
+	pci_err(dev, "%sTLP Header: %s\n", pfx, buf);
+}
+77 -30
drivers/pci/probe.c
···
 #define PCI_COMMAND_DECODE_ENABLE	(PCI_COMMAND_MEMORY | PCI_COMMAND_IO)
 
 /**
+ * __pci_size_bars - Read the raw BAR mask for a range of PCI BARs
+ * @dev: the PCI device
+ * @count: number of BARs to size
+ * @pos: starting config space position
+ * @sizes: array to store mask values
+ * @rom: indicate whether to use ROM mask, which avoids enabling ROM BARs
+ *
+ * Provided @sizes array must be sufficiently sized to store results for
+ * @count u32 BARs.  Caller is responsible for disabling decode to specified
+ * BAR range around calling this function.  This function is intended to avoid
+ * disabling decode around sizing each BAR individually, which can result in
+ * non-trivial overhead in virtualized environments with very large PCI BARs.
+ */
+static void __pci_size_bars(struct pci_dev *dev, int count,
+			    unsigned int pos, u32 *sizes, bool rom)
+{
+	u32 orig, mask = rom ? PCI_ROM_ADDRESS_MASK : ~0;
+	int i;
+
+	for (i = 0; i < count; i++, pos += 4, sizes++) {
+		pci_read_config_dword(dev, pos, &orig);
+		pci_write_config_dword(dev, pos, mask);
+		pci_read_config_dword(dev, pos, sizes);
+		pci_write_config_dword(dev, pos, orig);
+	}
+}
+
+void __pci_size_stdbars(struct pci_dev *dev, int count,
+			unsigned int pos, u32 *sizes)
+{
+	__pci_size_bars(dev, count, pos, sizes, false);
+}
+
+static void __pci_size_rom(struct pci_dev *dev, unsigned int pos, u32 *sizes)
+{
+	__pci_size_bars(dev, 1, pos, sizes, true);
+}
+
+/**
  * __pci_read_base - Read a PCI BAR
  * @dev: the PCI device
  * @type: type of the BAR
  * @res: resource buffer to be filled in
  * @pos: BAR position in the config space
+ * @sizes: array of one or more pre-read BAR masks
  *
  * Returns 1 if the BAR is 64-bit, or 0 if 32-bit.
  */
 int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
-		    struct resource *res, unsigned int pos)
+		    struct resource *res, unsigned int pos, u32 *sizes)
 {
-	u32 l = 0, sz = 0, mask;
+	u32 l = 0, sz;
 	u64 l64, sz64, mask64;
-	u16 orig_cmd;
 	struct pci_bus_region region, inverted_region;
 	const char *res_name = pci_resource_name(dev, res - dev->resource);
-
-	mask = type ? PCI_ROM_ADDRESS_MASK : ~0;
-
-	/* No printks while decoding is disabled! */
-	if (!dev->mmio_always_on) {
-		pci_read_config_word(dev, PCI_COMMAND, &orig_cmd);
-		if (orig_cmd & PCI_COMMAND_DECODE_ENABLE) {
-			pci_write_config_word(dev, PCI_COMMAND,
-				orig_cmd & ~PCI_COMMAND_DECODE_ENABLE);
-		}
-	}
 
 	res->name = pci_name(dev);
 
 	pci_read_config_dword(dev, pos, &l);
-	pci_write_config_dword(dev, pos, l | mask);
-	pci_read_config_dword(dev, pos, &sz);
-	pci_write_config_dword(dev, pos, l);
+	sz = sizes[0];
 
 	/*
 	 * All bits set in sz means the device isn't working properly.
···
 
 	if (res->flags & IORESOURCE_MEM_64) {
 		pci_read_config_dword(dev, pos + 4, &l);
-		pci_write_config_dword(dev, pos + 4, ~0);
-		pci_read_config_dword(dev, pos + 4, &sz);
-		pci_write_config_dword(dev, pos + 4, l);
+		sz = sizes[1];
 
 		l64 |= ((u64)l << 32);
 		sz64 |= ((u64)sz << 32);
 		mask64 |= ((u64)~0 << 32);
 	}
-
-	if (!dev->mmio_always_on && (orig_cmd & PCI_COMMAND_DECODE_ENABLE))
-		pci_write_config_word(dev, PCI_COMMAND, orig_cmd);
 
 	if (!sz64)
 		goto fail;
···
 
 static void pci_read_bases(struct pci_dev *dev, unsigned int howmany, int rom)
 {
+	u32 rombar, stdbars[PCI_STD_NUM_BARS];
 	unsigned int pos, reg;
+	u16 orig_cmd;
+
+	BUILD_BUG_ON(howmany > PCI_STD_NUM_BARS);
 
 	if (dev->non_compliant_bars)
 		return;
···
 	if (dev->is_virtfn)
 		return;
 
+	/* No printks while decoding is disabled! */
+	if (!dev->mmio_always_on) {
+		pci_read_config_word(dev, PCI_COMMAND, &orig_cmd);
+		if (orig_cmd & PCI_COMMAND_DECODE_ENABLE) {
+			pci_write_config_word(dev, PCI_COMMAND,
+				orig_cmd & ~PCI_COMMAND_DECODE_ENABLE);
+		}
+	}
+
+	__pci_size_stdbars(dev, howmany, PCI_BASE_ADDRESS_0, stdbars);
+	if (rom)
+		__pci_size_rom(dev, rom, &rombar);
+
+	if (!dev->mmio_always_on &&
+	    (orig_cmd & PCI_COMMAND_DECODE_ENABLE))
+		pci_write_config_word(dev, PCI_COMMAND, orig_cmd);
+
 	for (pos = 0; pos < howmany; pos++) {
 		struct resource *res = &dev->resource[pos];
 		reg = PCI_BASE_ADDRESS_0 + (pos << 2);
-		pos += __pci_read_base(dev, pci_bar_unknown, res, reg);
+		pos += __pci_read_base(dev, pci_bar_unknown,
+				       res, reg, &stdbars[pos]);
 	}
 
 	if (rom) {
···
 		dev->rom_base_reg = rom;
 		res->flags = IORESOURCE_MEM | IORESOURCE_PREFETCH |
 				IORESOURCE_READONLY | IORESOURCE_SIZEALIGN;
-		__pci_read_base(dev, pci_bar_mem32, res, rom);
+		__pci_read_base(dev, pci_bar_mem32, res, rom, &rombar);
 	}
 }
···
 
 static void pci_configure_eetlp_prefix(struct pci_dev *dev)
 {
-#ifdef CONFIG_PCI_PASID
 	struct pci_dev *bridge;
+	unsigned int eetlp_max;
 	int pcie_type;
 	u32 cap;
 
···
 		return;
 
 	pcie_type = pci_pcie_type(dev);
+
+	eetlp_max = FIELD_GET(PCI_EXP_DEVCAP2_EE_PREFIX_MAX, cap);
+	/* 00b means 4 */
+	eetlp_max = eetlp_max ?: 4;
+
 	if (pcie_type == PCI_EXP_TYPE_ROOT_PORT ||
 	    pcie_type == PCI_EXP_TYPE_RC_END)
-		dev->eetlp_prefix_path = 1;
+		dev->eetlp_prefix_max = eetlp_max;
 	else {
 		bridge = pci_upstream_bridge(dev);
-		if (bridge && bridge->eetlp_prefix_path)
-			dev->eetlp_prefix_path = 1;
+		if (bridge && bridge->eetlp_prefix_max)
+			dev->eetlp_prefix_max = eetlp_max;
 	}
-#endif
 }
 
 static void pci_configure_serr(struct pci_dev *dev)
+16 -2
drivers/pci/quirks.c
···
  * file, where their drivers can use them.
  */
 
+#include <linux/aer.h>
 #include <linux/align.h>
 #include <linux/bitfield.h>
 #include <linux/types.h>
···
 SWITCHTEC_QUIRK(0x5536);	/* PAXA 36XG5 */
 SWITCHTEC_QUIRK(0x5528);	/* PAXA 28XG5 */
 
+#define SWITCHTEC_PCI100X_QUIRK(vid) \
+	DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_EFAR, vid, \
+		PCI_CLASS_BRIDGE_OTHER, 8, quirk_switchtec_ntb_dma_alias)
+SWITCHTEC_PCI100X_QUIRK(0x1001);	/* PCI1001XG4 */
+SWITCHTEC_PCI100X_QUIRK(0x1002);	/* PCI1002XG4 */
+SWITCHTEC_PCI100X_QUIRK(0x1003);	/* PCI1003XG4 */
+SWITCHTEC_PCI100X_QUIRK(0x1004);	/* PCI1004XG4 */
+SWITCHTEC_PCI100X_QUIRK(0x1005);	/* PCI1005XG4 */
+SWITCHTEC_PCI100X_QUIRK(0x1006);	/* PCI1006XG4 */
+
+
 /*
  * The PLX NTB uses devfn proxy IDs to move TLPs between NT endpoints.
  * These IDs are used to forward responses to the originator on the other
···
 		return;
 
 	if (FIELD_GET(PCI_EXP_DPC_RP_PIO_LOG_SIZE, val) == 0) {
-		pci_info(dev, "Overriding RP PIO Log Size to 4\n");
-		dev->dpc_rp_log_size = 4;
+		pci_info(dev, "Overriding RP PIO Log Size to %d\n",
+			 PCIE_STD_NUM_TLP_HEADERLOG);
+		dev->dpc_rp_log_size = PCIE_STD_NUM_TLP_HEADERLOG;
 	}
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x461f, dpc_log_size);
···
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2d, dpc_log_size);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2f, dpc_log_size);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a31, dpc_log_size);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa72f, dpc_log_size);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa73f, dpc_log_size);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa76e, dpc_log_size);
 #endif
+26
drivers/pci/switch/switchtec.c
···
 		.driver_data = gen, \
 	}
 
+#define SWITCHTEC_PCI100X_DEVICE(device_id, gen) \
+	{ \
+		.vendor     = PCI_VENDOR_ID_EFAR, \
+		.device     = device_id, \
+		.subvendor  = PCI_ANY_ID, \
+		.subdevice  = PCI_ANY_ID, \
+		.class      = (PCI_CLASS_MEMORY_OTHER << 8), \
+		.class_mask = 0xFFFFFFFF, \
+		.driver_data = gen, \
+	}, \
+	{ \
+		.vendor     = PCI_VENDOR_ID_EFAR, \
+		.device     = device_id, \
+		.subvendor  = PCI_ANY_ID, \
+		.subdevice  = PCI_ANY_ID, \
+		.class      = (PCI_CLASS_BRIDGE_OTHER << 8), \
+		.class_mask = 0xFFFFFFFF, \
+		.driver_data = gen, \
+	}
+
 static const struct pci_device_id switchtec_pci_tbl[] = {
 	SWITCHTEC_PCI_DEVICE(0x8531, SWITCHTEC_GEN3),	/* PFX 24xG3 */
 	SWITCHTEC_PCI_DEVICE(0x8532, SWITCHTEC_GEN3),	/* PFX 32xG3 */
···
 	SWITCHTEC_PCI_DEVICE(0x5552, SWITCHTEC_GEN5),	/* PAXA 52XG5 */
 	SWITCHTEC_PCI_DEVICE(0x5536, SWITCHTEC_GEN5),	/* PAXA 36XG5 */
 	SWITCHTEC_PCI_DEVICE(0x5528, SWITCHTEC_GEN5),	/* PAXA 28XG5 */
+	SWITCHTEC_PCI100X_DEVICE(0x1001, SWITCHTEC_GEN4),	/* PCI1001 16XG4 */
+	SWITCHTEC_PCI100X_DEVICE(0x1002, SWITCHTEC_GEN4),	/* PCI1002 12XG4 */
+	SWITCHTEC_PCI100X_DEVICE(0x1003, SWITCHTEC_GEN4),	/* PCI1003 16XG4 */
+	SWITCHTEC_PCI100X_DEVICE(0x1004, SWITCHTEC_GEN4),	/* PCI1004 16XG4 */
+	SWITCHTEC_PCI100X_DEVICE(0x1005, SWITCHTEC_GEN4),	/* PCI1005 16XG4 */
+	SWITCHTEC_PCI100X_DEVICE(0x1006, SWITCHTEC_GEN4),	/* PCI1006 16XG4 */
 	{0}
 };
 MODULE_DEVICE_TABLE(pci, switchtec_pci_tbl);
+7 -7
drivers/pci/vpd.c
···
 }
 
 static ssize_t vpd_read(struct file *filp, struct kobject *kobj,
-			struct bin_attribute *bin_attr, char *buf, loff_t off,
-			size_t count)
+			const struct bin_attribute *bin_attr, char *buf,
+			loff_t off, size_t count)
 {
 	struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
 	struct pci_dev *vpd_dev = dev;
···
 }
 
 static ssize_t vpd_write(struct file *filp, struct kobject *kobj,
-			 struct bin_attribute *bin_attr, char *buf, loff_t off,
-			 size_t count)
+			 const struct bin_attribute *bin_attr, char *buf,
+			 loff_t off, size_t count)
 {
 	struct pci_dev *dev = to_pci_dev(kobj_to_dev(kobj));
 	struct pci_dev *vpd_dev = dev;
···
 
 	return ret;
 }
-static BIN_ATTR(vpd, 0600, vpd_read, vpd_write, 0);
+static const BIN_ATTR(vpd, 0600, vpd_read, vpd_write, 0);
 
-static struct bin_attribute *vpd_attrs[] = {
+static const struct bin_attribute *const vpd_attrs[] = {
 	&bin_attr_vpd,
 	NULL,
 };
···
 }
 
 const struct attribute_group pci_dev_vpd_attr_group = {
-	.bin_attrs = vpd_attrs,
+	.bin_attrs_new = vpd_attrs,
 	.is_bin_visible = vpd_attr_is_visible,
 };
 
+3 -2
drivers/vfio/pci/vfio_pci_config.c
···
 
 	switch (ecap) {
 	case PCI_EXT_CAP_ID_VNDR:
-		ret = pci_read_config_dword(pdev, epos + PCI_VSEC_HDR, &dword);
+		ret = pci_read_config_dword(pdev, epos + PCI_VNDR_HEADER,
+					    &dword);
 		if (ret)
 			return pcibios_err_to_errno(ret);
 
-		return dword >> PCI_VSEC_HDR_LEN_SHIFT;
+		return PCI_VNDR_HEADER_LEN(dword);
 	case PCI_EXT_CAP_ID_VC:
 	case PCI_EXT_CAP_ID_VC9:
 	case PCI_EXT_CAP_ID_MFVC:
+9 -3
include/linux/aer.h
···
 #define AER_CORRECTABLE		2
 #define DPC_FATAL		3
 
+/*
+ * AER and DPC capabilities TLP Logging register sizes (PCIe r6.2, sec 7.8.4
+ * & 7.9.14).
+ */
+#define PCIE_STD_NUM_TLP_HEADERLOG	4
+#define PCIE_STD_MAX_TLP_PREFIXLOG	4
+
 struct pci_dev;
 
 struct pcie_tlp_log {
-	u32 dw[4];
+	u32 dw[PCIE_STD_NUM_TLP_HEADERLOG];
+	u32 prefix[PCIE_STD_MAX_TLP_PREFIXLOG];
 };
 
 struct aer_capability_regs {
···
 	u16 cor_err_source;
 	u16 uncor_err_source;
 };
-
-int pcie_read_tlp_log(struct pci_dev *dev, int where, struct pcie_tlp_log *log);
 
 #if defined(CONFIG_PCIEAER)
 int pci_aer_clear_nonfatal_status(struct pci_dev *dev);
+1
include/linux/of_address.h
···
 		u64 bus_addr;
 	};
 	u64 cpu_addr;
+	u64 parent_bus_addr;
 	u64 size;
 	u32 flags;
 };
+4
include/linux/pci-ecam.h
···
 	unsigned int			bus_shift;
 	struct pci_ops			pci_ops;
 	int				(*init)(struct pci_config_window *);
+	int				(*enable_device)(struct pci_host_bridge *,
+							 struct pci_dev *);
+	void				(*disable_device)(struct pci_host_bridge *,
+							  struct pci_dev *);
 };
 
 /*
+2 -2
include/linux/pci-epf.h
···
 	struct device		dev;
 	const char		*name;
 	struct pci_epf_header	*header;
-	struct pci_epf_bar	bar[6];
+	struct pci_epf_bar	bar[PCI_STD_NUM_BARS];
 	u8			msi_interrupts;
 	u16			msix_interrupts;
 	u8			func_no;
···
 	/* Below members are to attach secondary EPC to an endpoint function */
 	struct pci_epc		*sec_epc;
 	struct list_head	sec_epc_list;
-	struct pci_epf_bar	sec_epc_bar[6];
+	struct pci_epf_bar	sec_epc_bar[PCI_STD_NUM_BARS];
 	u8			sec_epc_func_no;
 	struct config_group	*group;
 	unsigned int		is_bound;
+4 -1
include/linux/pci.h
···
 						   supported from root to here */
 #endif
 	unsigned int	pasid_no_tlp:1;		/* PASID works without TLP Prefix */
-	unsigned int	eetlp_prefix_path:1;	/* End-to-End TLP Prefix */
+	unsigned int	eetlp_prefix_max:3;	/* Max # of End-End TLP Prefixes, 0=not supported */
 
 	pci_channel_state_t error_state;	/* Current connectivity state */
 	struct device	dev;			/* Generic device interface */
···
 	u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* Platform IRQ swizzler */
 	int (*map_irq)(const struct pci_dev *, u8, u8);
 	void (*release_fn)(struct pci_host_bridge *);
+	int (*enable_device)(struct pci_host_bridge *bridge, struct pci_dev *dev);
+	void (*disable_device)(struct pci_host_bridge *bridge, struct pci_dev *dev);
 	void *release_data;
 	unsigned int	ignore_reset_delay:1;	/* For entire hierarchy */
 	unsigned int	no_ext_tags:1;		/* No Extended Tags */
···
 					  struct pci_dev *dev) { }
 #endif
 
+int pcim_intx(struct pci_dev *pdev, int enabled);
 int pcim_request_all_regions(struct pci_dev *pdev, const char *name);
 void __iomem *pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen);
 void __iomem *pcim_iomap_region(struct pci_dev *pdev, int bar,
+8 -8
include/uapi/linux/pci_regs.h
···
 #define  PCI_EXP_DEVSTA_TRPND	0x0020	/* Transactions Pending */
 #define PCI_CAP_EXP_RC_ENDPOINT_SIZEOF_V1	12	/* v1 endpoints without link end here */
 #define PCI_EXP_LNKCAP		0x0c	/* Link Capabilities */
-#define  PCI_EXP_LNKCAP_SLS	0x0000000f /* Supported Link Speeds */
+#define  PCI_EXP_LNKCAP_SLS	0x0000000f /* Max Link Speed (prior to PCIe r3.0: Supported Link Speeds) */
 #define  PCI_EXP_LNKCAP_SLS_2_5GB 0x00000001 /* LNKCAP2 SLS Vector bit 0 */
 #define  PCI_EXP_LNKCAP_SLS_5_0GB 0x00000002 /* LNKCAP2 SLS Vector bit 1 */
 #define  PCI_EXP_LNKCAP_SLS_8_0GB 0x00000003 /* LNKCAP2 SLS Vector bit 2 */
···
 #define  PCI_EXP_DEVCAP2_OBFF_MSG	0x00040000 /* New message signaling */
 #define  PCI_EXP_DEVCAP2_OBFF_WAKE	0x00080000 /* Re-use WAKE# for OBFF */
 #define  PCI_EXP_DEVCAP2_EE_PREFIX	0x00200000 /* End-End TLP Prefix */
+#define  PCI_EXP_DEVCAP2_EE_PREFIX_MAX	0x00c00000 /* Max End-End TLP Prefixes */
 #define PCI_EXP_DEVCTL2		0x28	/* Device Control 2 */
 #define  PCI_EXP_DEVCTL2_COMP_TIMEOUT	0x000f	/* Completion Timeout Value */
 #define  PCI_EXP_DEVCTL2_COMP_TMOUT_DIS	0x0010	/* Completion Timeout Disable */
···
 /* Same bits as above */
 #define PCI_ERR_CAP		0x18	/* Advanced Error Capabilities & Ctrl*/
 #define  PCI_ERR_CAP_FEP(x)	((x) & 0x1f)	/* First Error Pointer */
-#define  PCI_ERR_CAP_ECRC_GENC	0x00000020	/* ECRC Generation Capable */
-#define  PCI_ERR_CAP_ECRC_GENE	0x00000040	/* ECRC Generation Enable */
-#define  PCI_ERR_CAP_ECRC_CHKC	0x00000080	/* ECRC Check Capable */
-#define  PCI_ERR_CAP_ECRC_CHKE	0x00000100	/* ECRC Check Enable */
+#define  PCI_ERR_CAP_ECRC_GENC		0x00000020 /* ECRC Generation Capable */
+#define  PCI_ERR_CAP_ECRC_GENE		0x00000040 /* ECRC Generation Enable */
+#define  PCI_ERR_CAP_ECRC_CHKC		0x00000080 /* ECRC Check Capable */
+#define  PCI_ERR_CAP_ECRC_CHKE		0x00000100 /* ECRC Check Enable */
+#define  PCI_ERR_CAP_PREFIX_LOG_PRESENT	0x00000800 /* TLP Prefix Log Present */
 #define PCI_ERR_HEADER_LOG	0x1c	/* Header Log Register (16 bytes) */
 #define PCI_ERR_ROOT_COMMAND	0x2c	/* Root Error Command */
 #define  PCI_ERR_ROOT_CMD_COR_EN	0x00000001 /* Correctable Err Reporting Enable */
···
 #define  PCI_ERR_ROOT_FATAL_RCV		0x00000040 /* Fatal Received */
 #define  PCI_ERR_ROOT_AER_IRQ		0xf8000000 /* Advanced Error Interrupt Message Number */
 #define PCI_ERR_ROOT_ERR_SRC	0x34	/* Error Source Identification */
+#define PCI_ERR_PREFIX_LOG	0x38	/* TLP Prefix LOG Register (up to 16 bytes) */
 
 /* Virtual Channel */
 #define PCI_VC_PORT_CAP1	0x04
···
 #define  PCI_ACS_EGRESS_BITS	0x05	/* ACS Egress Control Vector Size */
 #define PCI_ACS_CTRL		0x06	/* ACS Control Register */
 #define PCI_ACS_EGRESS_CTL_V	0x08	/* ACS Egress Control Vector */
-
-#define PCI_VSEC_HDR		4	/* extended cap - vendor-specific */
-#define  PCI_VSEC_HDR_LEN_SHIFT	20	/* shift for length field */
 
 /* SATA capability */
 #define PCI_SATA_REGS		4	/* SATA REGs specifier */
+1
include/uapi/linux/pcitest.h
···
 #define PCITEST_MSIX		_IOW('P', 0x7, int)
 #define PCITEST_SET_IRQTYPE	_IOW('P', 0x8, int)
 #define PCITEST_GET_IRQTYPE	_IO('P', 0x9)
+#define PCITEST_BARS		_IO('P', 0xa)
 #define PCITEST_CLEAR_IRQ	_IO('P', 0x10)
 
 #define PCITEST_FLAGS_USE_DMA	0x00000001
-1
tools/pci/Build
···
-pcitest-y += pcitest.o
-58
tools/pci/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0 2 - include ../scripts/Makefile.include 3 - 4 - bindir ?= /usr/bin 5 - 6 - ifeq ($(srctree),) 7 - srctree := $(patsubst %/,%,$(dir $(CURDIR))) 8 - srctree := $(patsubst %/,%,$(dir $(srctree))) 9 - endif 10 - 11 - # Do not use make's built-in rules 12 - # (this improves performance and avoids hard-to-debug behaviour); 13 - MAKEFLAGS += -r 14 - 15 - CFLAGS += -O2 -Wall -g -D_GNU_SOURCE -I$(OUTPUT)include 16 - 17 - ALL_TARGETS := pcitest 18 - ALL_PROGRAMS := $(patsubst %,$(OUTPUT)%,$(ALL_TARGETS)) 19 - 20 - SCRIPTS := pcitest.sh 21 - 22 - all: $(ALL_PROGRAMS) 23 - 24 - export srctree OUTPUT CC LD CFLAGS 25 - include $(srctree)/tools/build/Makefile.include 26 - 27 - # 28 - # We need the following to be outside of kernel tree 29 - # 30 - $(OUTPUT)include/linux/: ../../include/uapi/linux/ 31 - mkdir -p $(OUTPUT)include/linux/ 2>&1 || true 32 - ln -sf $(CURDIR)/../../include/uapi/linux/pcitest.h $@ 33 - 34 - prepare: $(OUTPUT)include/linux/ 35 - 36 - PCITEST_IN := $(OUTPUT)pcitest-in.o 37 - $(PCITEST_IN): prepare FORCE 38 - $(Q)$(MAKE) $(build)=pcitest 39 - $(OUTPUT)pcitest: $(PCITEST_IN) 40 - $(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $< -o $@ 41 - 42 - clean: 43 - rm -f $(ALL_PROGRAMS) 44 - rm -rf $(OUTPUT)include/ 45 - find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete 46 - 47 - install: $(ALL_PROGRAMS) 48 - install -d -m 755 $(DESTDIR)$(bindir); \ 49 - for program in $(ALL_PROGRAMS); do \ 50 - install $$program $(DESTDIR)$(bindir); \ 51 - done; \ 52 - for script in $(SCRIPTS); do \ 53 - install $$script $(DESTDIR)$(bindir); \ 54 - done 55 - 56 - FORCE: 57 - 58 - .PHONY: all install clean FORCE prepare
-250
tools/pci/pcitest.c
···
-// SPDX-License-Identifier: GPL-2.0-only
-/**
- * Userspace PCI Endpoint Test Module
- *
- * Copyright (C) 2017 Texas Instruments
- * Author: Kishon Vijay Abraham I <kishon@ti.com>
- */
-
-#include <errno.h>
-#include <fcntl.h>
-#include <stdbool.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <sys/ioctl.h>
-#include <unistd.h>
-
-#include <linux/pcitest.h>
-
-static char *result[] = { "NOT OKAY", "OKAY" };
-static char *irq[] = { "LEGACY", "MSI", "MSI-X" };
-
-struct pci_test {
-	char		*device;
-	char		barnum;
-	bool		legacyirq;
-	unsigned int	msinum;
-	unsigned int	msixnum;
-	int		irqtype;
-	bool		set_irqtype;
-	bool		get_irqtype;
-	bool		clear_irq;
-	bool		read;
-	bool		write;
-	bool		copy;
-	unsigned long	size;
-	bool		use_dma;
-};
-
-static int run_test(struct pci_test *test)
-{
-	struct pci_endpoint_test_xfer_param param = {};
-	int ret = -EINVAL;
-	int fd;
-
-	fd = open(test->device, O_RDWR);
-	if (fd < 0) {
-		perror("can't open PCI Endpoint Test device");
-		return -ENODEV;
-	}
-
-	if (test->barnum >= 0 && test->barnum <= 5) {
-		ret = ioctl(fd, PCITEST_BAR, test->barnum);
-		fprintf(stdout, "BAR%d:\t\t", test->barnum);
-		if (ret < 0)
-			fprintf(stdout, "TEST FAILED\n");
-		else
-			fprintf(stdout, "%s\n", result[ret]);
-	}
-
-	if (test->set_irqtype) {
-		ret = ioctl(fd, PCITEST_SET_IRQTYPE, test->irqtype);
-		fprintf(stdout, "SET IRQ TYPE TO %s:\t\t", irq[test->irqtype]);
-		if (ret < 0)
-			fprintf(stdout, "FAILED\n");
-		else
-			fprintf(stdout, "%s\n", result[ret]);
-	}
-
-	if (test->get_irqtype) {
-		ret = ioctl(fd, PCITEST_GET_IRQTYPE);
-		fprintf(stdout, "GET IRQ TYPE:\t\t");
-		if (ret < 0)
-			fprintf(stdout, "FAILED\n");
-		else
-			fprintf(stdout, "%s\n", irq[ret]);
-	}
-
-	if (test->clear_irq) {
-		ret = ioctl(fd, PCITEST_CLEAR_IRQ);
-		fprintf(stdout, "CLEAR IRQ:\t\t");
-		if (ret < 0)
-			fprintf(stdout, "FAILED\n");
-		else
-			fprintf(stdout, "%s\n", result[ret]);
-	}
-
-	if (test->legacyirq) {
-		ret = ioctl(fd, PCITEST_LEGACY_IRQ, 0);
-		fprintf(stdout, "LEGACY IRQ:\t");
-		if (ret < 0)
-			fprintf(stdout, "TEST FAILED\n");
-		else
-			fprintf(stdout, "%s\n", result[ret]);
-	}
-
-	if (test->msinum > 0 && test->msinum <= 32) {
-		ret = ioctl(fd, PCITEST_MSI, test->msinum);
-		fprintf(stdout, "MSI%u:\t\t", test->msinum);
-		if (ret < 0)
-			fprintf(stdout, "TEST FAILED\n");
-		else
-			fprintf(stdout, "%s\n", result[ret]);
-	}
-
-	if (test->msixnum > 0 && test->msixnum <= 2048) {
-		ret = ioctl(fd, PCITEST_MSIX, test->msixnum);
-		fprintf(stdout, "MSI-X%u:\t\t", test->msixnum);
-		if (ret < 0)
-			fprintf(stdout, "TEST FAILED\n");
-		else
-			fprintf(stdout, "%s\n", result[ret]);
-	}
-
-	if (test->write) {
-		param.size = test->size;
-		if (test->use_dma)
-			param.flags = PCITEST_FLAGS_USE_DMA;
-		ret = ioctl(fd, PCITEST_WRITE, &param);
-		fprintf(stdout, "WRITE (%7lu bytes):\t\t", test->size);
-		if (ret < 0)
-			fprintf(stdout, "TEST FAILED\n");
-		else
-			fprintf(stdout, "%s\n", result[ret]);
-	}
-
-	if (test->read) {
-		param.size = test->size;
-		if (test->use_dma)
-			param.flags = PCITEST_FLAGS_USE_DMA;
-		ret = ioctl(fd, PCITEST_READ, &param);
-		fprintf(stdout, "READ (%7lu bytes):\t\t", test->size);
-		if (ret < 0)
-			fprintf(stdout, "TEST FAILED\n");
-		else
-			fprintf(stdout, "%s\n", result[ret]);
-	}
-
-	if (test->copy) {
-		param.size = test->size;
-		if (test->use_dma)
-			param.flags = PCITEST_FLAGS_USE_DMA;
-		ret = ioctl(fd, PCITEST_COPY, &param);
-		fprintf(stdout, "COPY (%7lu bytes):\t\t", test->size);
-		if (ret < 0)
-			fprintf(stdout, "TEST FAILED\n");
-		else
-			fprintf(stdout, "%s\n", result[ret]);
-	}
-
-	fflush(stdout);
-	close(fd);
-	return (ret < 0) ? ret : 1 - ret; /* return 0 if test succeeded */
-}
-
-int main(int argc, char **argv)
-{
-	int c;
-	struct pci_test *test;
-
-	test = calloc(1, sizeof(*test));
-	if (!test) {
-		perror("Fail to allocate memory for pci_test\n");
-		return -ENOMEM;
-	}
-
-	/* since '0' is a valid BAR number, initialize it to -1 */
-	test->barnum = -1;
-
-	/* set default size as 100KB */
-	test->size = 0x19000;
-
-	/* set default endpoint device */
-	test->device = "/dev/pci-endpoint-test.0";
-
-	while ((c = getopt(argc, argv, "D:b:m:x:i:deIlhrwcs:")) != EOF)
-	switch (c) {
-	case 'D':
-		test->device = optarg;
-		continue;
-	case 'b':
-		test->barnum = atoi(optarg);
-		if (test->barnum < 0 || test->barnum > 5)
-			goto usage;
-		continue;
-	case 'l':
-		test->legacyirq = true;
-		continue;
-	case 'm':
-		test->msinum = atoi(optarg);
-		if (test->msinum < 1 || test->msinum > 32)
-			goto usage;
-		continue;
-	case 'x':
-		test->msixnum = atoi(optarg);
-		if (test->msixnum < 1 || test->msixnum > 2048)
-			goto usage;
-		continue;
-	case 'i':
-		test->irqtype = atoi(optarg);
-		if (test->irqtype < 0 || test->irqtype > 2)
-			goto usage;
-		test->set_irqtype = true;
-		continue;
-	case 'I':
-		test->get_irqtype = true;
-		continue;
-	case 'r':
-		test->read = true;
-		continue;
-	case 'w':
-		test->write = true;
-		continue;
-	case 'c':
-		test->copy = true;
-		continue;
-	case 'e':
-		test->clear_irq = true;
-		continue;
-	case 's':
-		test->size = strtoul(optarg, NULL, 0);
-		continue;
-	case 'd':
-		test->use_dma = true;
-		continue;
-	case 'h':
-	default:
-usage:
-		fprintf(stderr,
-			"usage: %s [options]\n"
-			"Options:\n"
-			"\t-D <dev>	PCI endpoint test device {default: /dev/pci-endpoint-test.0}\n"
-			"\t-b <bar num>	BAR test (bar number between 0..5)\n"
-			"\t-m <msi num>	MSI test (msi number between 1..32)\n"
-			"\t-x <msix num>	\tMSI-X test (msix number between 1..2048)\n"
-			"\t-i <irq type>	\tSet IRQ type (0 - Legacy, 1 - MSI, 2 - MSI-X)\n"
-			"\t-e		Clear IRQ\n"
-			"\t-I		Get current IRQ type configured\n"
-			"\t-d		Use DMA\n"
-			"\t-l		Legacy IRQ test\n"
-			"\t-r		Read buffer test\n"
-			"\t-w		Write buffer test\n"
-			"\t-c		Copy buffer test\n"
-			"\t-s <size>	Size of buffer {default: 100KB}\n"
-			"\t-h		Print this help message\n",
-			argv[0]);
-		return -EINVAL;
-	}
-
-	return run_test(test);
-}
-72
tools/pci/pcitest.sh
···
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-
-echo "BAR tests"
-echo
-
-bar=0
-
-while [ $bar -lt 6 ]
-do
-	pcitest -b $bar
-	bar=`expr $bar + 1`
-done
-echo
-
-echo "Interrupt tests"
-echo
-
-pcitest -i 0
-pcitest -l
-
-pcitest -i 1
-msi=1
-
-while [ $msi -lt 33 ]
-do
-	pcitest -m $msi
-	msi=`expr $msi + 1`
-done
-echo
-
-pcitest -i 2
-msix=1
-
-while [ $msix -lt 2049 ]
-do
-	pcitest -x $msix
-	msix=`expr $msix + 1`
-done
-echo
-
-echo "Read Tests"
-echo
-
-pcitest -i 1
-
-pcitest -r -s 1
-pcitest -r -s 1024
-pcitest -r -s 1025
-pcitest -r -s 1024000
-pcitest -r -s 1024001
-echo
-
-echo "Write Tests"
-echo
-
-pcitest -w -s 1
-pcitest -w -s 1024
-pcitest -w -s 1025
-pcitest -w -s 1024000
-pcitest -w -s 1024001
-echo
-
-echo "Copy Tests"
-echo
-
-pcitest -c -s 1
-pcitest -c -s 1024
-pcitest -c -s 1025
-pcitest -c -s 1024000
-pcitest -c -s 1024001
-echo
+1
tools/testing/selftests/Makefile
···
 TARGETS += net/rds
 TARGETS += net/tcp_ao
 TARGETS += nsfs
+TARGETS += pci_endpoint
 TARGETS += pcie_bwctrl
 TARGETS += perf_events
 TARGETS += pidfd
+2
tools/testing/selftests/pci_endpoint/.gitignore
···
+# SPDX-License-Identifier: GPL-2.0-only
+pci_endpoint_test
+7
tools/testing/selftests/pci_endpoint/Makefile
···
+# SPDX-License-Identifier: GPL-2.0
+CFLAGS += -O2 -Wl,-no-as-needed -Wall $(KHDR_INCLUDES)
+LDFLAGS += -lrt -lpthread -lm
+
+TEST_GEN_PROGS = pci_endpoint_test
+
+include ../lib.mk
+4
tools/testing/selftests/pci_endpoint/config
···
+CONFIG_PCI_ENDPOINT=y
+CONFIG_PCI_ENDPOINT_CONFIGFS=y
+CONFIG_PCI_EPF_TEST=m
+CONFIG_PCI_ENDPOINT_TEST=m
+221
tools/testing/selftests/pci_endpoint/pci_endpoint_test.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Kselftest for PCI Endpoint Subsystem
+ *
+ * Copyright (c) 2022 Samsung Electronics Co., Ltd.
+ *             https://www.samsung.com
+ * Author: Aman Gupta <aman1.gupta@samsung.com>
+ *
+ * Copyright (c) 2024, Linaro Ltd.
+ * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ */
+
+#include <errno.h>
+#include <fcntl.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/ioctl.h>
+#include <unistd.h>
+
+#include "../../../../include/uapi/linux/pcitest.h"
+
+#include "../kselftest_harness.h"
+
+#define pci_ep_ioctl(cmd, arg)			\
+({						\
+	ret = ioctl(self->fd, cmd, arg);	\
+	ret = ret < 0 ? -errno : 0;		\
+})
+
+static const char *test_device = "/dev/pci-endpoint-test.0";
+static const unsigned long test_size[5] = { 1, 1024, 1025, 1024000, 1024001 };
+
+FIXTURE(pci_ep_bar)
+{
+	int fd;
+};
+
+FIXTURE_SETUP(pci_ep_bar)
+{
+	self->fd = open(test_device, O_RDWR);
+
+	ASSERT_NE(-1, self->fd) TH_LOG("Can't open PCI Endpoint Test device");
+}
+
+FIXTURE_TEARDOWN(pci_ep_bar)
+{
+	close(self->fd);
+}
+
+FIXTURE_VARIANT(pci_ep_bar)
+{
+	int barno;
+};
+
+FIXTURE_VARIANT_ADD(pci_ep_bar, BAR0) { .barno = 0 };
+FIXTURE_VARIANT_ADD(pci_ep_bar, BAR1) { .barno = 1 };
+FIXTURE_VARIANT_ADD(pci_ep_bar, BAR2) { .barno = 2 };
+FIXTURE_VARIANT_ADD(pci_ep_bar, BAR3) { .barno = 3 };
+FIXTURE_VARIANT_ADD(pci_ep_bar, BAR4) { .barno = 4 };
+FIXTURE_VARIANT_ADD(pci_ep_bar, BAR5) { .barno = 5 };
+
+TEST_F(pci_ep_bar, BAR_TEST)
+{
+	int ret;
+
+	pci_ep_ioctl(PCITEST_BAR, variant->barno);
+	EXPECT_FALSE(ret) TH_LOG("Test failed for BAR%d", variant->barno);
+}
+
+FIXTURE(pci_ep_basic)
+{
+	int fd;
+};
+
+FIXTURE_SETUP(pci_ep_basic)
+{
+	self->fd = open(test_device, O_RDWR);
+
+	ASSERT_NE(-1, self->fd) TH_LOG("Can't open PCI Endpoint Test device");
+}
+
+FIXTURE_TEARDOWN(pci_ep_basic)
+{
+	close(self->fd);
+}
+
+TEST_F(pci_ep_basic, CONSECUTIVE_BAR_TEST)
+{
+	int ret;
+
+	pci_ep_ioctl(PCITEST_BARS, 0);
+	EXPECT_FALSE(ret) TH_LOG("Consecutive BAR test failed");
+}
+
+TEST_F(pci_ep_basic, LEGACY_IRQ_TEST)
+{
+	int ret;
+
+	pci_ep_ioctl(PCITEST_SET_IRQTYPE, 0);
+	ASSERT_EQ(0, ret) TH_LOG("Can't set Legacy IRQ type");
+
+	pci_ep_ioctl(PCITEST_LEGACY_IRQ, 0);
+	EXPECT_FALSE(ret) TH_LOG("Test failed for Legacy IRQ");
+}
+
+TEST_F(pci_ep_basic, MSI_TEST)
+{
+	int ret, i;
+
+	pci_ep_ioctl(PCITEST_SET_IRQTYPE, 1);
+	ASSERT_EQ(0, ret) TH_LOG("Can't set MSI IRQ type");
+
+	for (i = 1; i <= 32; i++) {
+		pci_ep_ioctl(PCITEST_MSI, i);
+		EXPECT_FALSE(ret) TH_LOG("Test failed for MSI%d", i);
+	}
+}
+
+TEST_F(pci_ep_basic, MSIX_TEST)
+{
+	int ret, i;
+
+	pci_ep_ioctl(PCITEST_SET_IRQTYPE, 2);
+	ASSERT_EQ(0, ret) TH_LOG("Can't set MSI-X IRQ type");
+
+	for (i = 1; i <= 2048; i++) {
+		pci_ep_ioctl(PCITEST_MSIX, i);
+		EXPECT_FALSE(ret) TH_LOG("Test failed for MSI-X%d", i);
+	}
+}
+
+FIXTURE(pci_ep_data_transfer)
+{
+	int fd;
+};
+
+FIXTURE_SETUP(pci_ep_data_transfer)
+{
+	self->fd = open(test_device, O_RDWR);
+
+	ASSERT_NE(-1, self->fd) TH_LOG("Can't open PCI Endpoint Test device");
+}
+
+FIXTURE_TEARDOWN(pci_ep_data_transfer)
+{
+	close(self->fd);
+}
+
+FIXTURE_VARIANT(pci_ep_data_transfer)
+{
+	bool use_dma;
+};
+
+FIXTURE_VARIANT_ADD(pci_ep_data_transfer, memcpy)
+{
+	.use_dma = false,
+};
+
+FIXTURE_VARIANT_ADD(pci_ep_data_transfer, dma)
+{
+	.use_dma = true,
+};
+
+TEST_F(pci_ep_data_transfer, READ_TEST)
+{
+	struct pci_endpoint_test_xfer_param param = {};
+	int ret, i;
+
+	if (variant->use_dma)
+		param.flags = PCITEST_FLAGS_USE_DMA;
+
+	pci_ep_ioctl(PCITEST_SET_IRQTYPE, 1);
+	ASSERT_EQ(0, ret) TH_LOG("Can't set MSI IRQ type");
+
+	for (i = 0; i < ARRAY_SIZE(test_size); i++) {
+		param.size = test_size[i];
+		pci_ep_ioctl(PCITEST_READ, &param);
+		EXPECT_FALSE(ret) TH_LOG("Test failed for size (%ld)",
+					 test_size[i]);
+	}
+}
+
+TEST_F(pci_ep_data_transfer, WRITE_TEST)
+{
+	struct pci_endpoint_test_xfer_param param = {};
+	int ret, i;
+
+	if (variant->use_dma)
+		param.flags = PCITEST_FLAGS_USE_DMA;
+
+	pci_ep_ioctl(PCITEST_SET_IRQTYPE, 1);
+	ASSERT_EQ(0, ret) TH_LOG("Can't set MSI IRQ type");
+
+	for (i = 0; i < ARRAY_SIZE(test_size); i++) {
+		param.size = test_size[i];
+		pci_ep_ioctl(PCITEST_WRITE, &param);
+		EXPECT_FALSE(ret) TH_LOG("Test failed for size (%ld)",
+					 test_size[i]);
+	}
+}
+
+TEST_F(pci_ep_data_transfer, COPY_TEST)
+{
+	struct pci_endpoint_test_xfer_param param = {};
+	int ret, i;
+
+	if (variant->use_dma)
+		param.flags = PCITEST_FLAGS_USE_DMA;
+
+	pci_ep_ioctl(PCITEST_SET_IRQTYPE, 1);
+	ASSERT_EQ(0, ret) TH_LOG("Can't set MSI IRQ type");
+
+	for (i = 0; i < ARRAY_SIZE(test_size); i++) {
+		param.size = test_size[i];
+		pci_ep_ioctl(PCITEST_COPY, &param);
+		EXPECT_FALSE(ret) TH_LOG("Test failed for size (%ld)",
+					 test_size[i]);
+	}
+}
+TEST_HARNESS_MAIN