
Merge branch 'pci/controller/dwc'

- Extend PCI_FIND_NEXT_CAP() and PCI_FIND_NEXT_EXT_CAP() to return a
pointer to the preceding Capability (Qiang Yu)

- Add dw_pcie_remove_capability() and dw_pcie_remove_ext_capability() to
remove Capabilities that are advertised but not fully implemented (Qiang
Yu)

- Remove MSI and MSI-X Capabilities for DWC controllers in platforms that
can't support them, so we automatically fall back to INTx (Qiang Yu)

- Remove MSI-X and DPC Capabilities for Qualcomm platforms that advertise
but don't support them (Qiang Yu)

- Remove duplicate dw_pcie_ep_hide_ext_capability() function and replace
with dw_pcie_remove_ext_capability() (Qiang Yu)
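The removal APIs above work by splicing a Capability out of the singly linked capability list: the predecessor's "next" pointer (bits 31:20 of an extended Capability header) is rewritten to point past the entry being hidden, which is what the old dw_pcie_ep_hide_ext_capability() did. Below is a minimal user-space sketch of that unlink operation on a simulated config space; the header field layout follows the PCIe spec, but the helper names are illustrative, not the kernel API.

```c
#include <assert.h>
#include <stdint.h>

/*
 * PCIe extended capability header: bits [31:20] next pointer,
 * [19:16] version, [15:0] capability ID. Extended capabilities
 * start at offset 0x100 of config space.
 */
#define EXT_CAP_ID(hdr)    ((hdr) & 0xffff)
#define EXT_CAP_NEXT(hdr)  (((hdr) >> 20) & 0xffc)

static uint32_t cfg[0x1000 / 4];        /* simulated 4K config space */

static uint32_t read32(uint16_t off)              { return cfg[off / 4]; }
static void     write32(uint16_t off, uint32_t v) { cfg[off / 4] = v; }

/*
 * Unlink capability @cap from the extended capability list by making
 * its predecessor's next pointer skip over it. Returns 0 on success,
 * -1 if @cap is not found or is the list head (which cannot be
 * unlinked this way).
 */
static int remove_ext_cap(uint16_t cap)
{
        uint16_t off = 0x100, prev = 0;
        uint32_t hdr;

        while (off) {
                hdr = read32(off);
                if (EXT_CAP_ID(hdr) == cap) {
                        if (!prev)
                                return -1;
                        uint32_t prev_hdr = read32(prev);
                        prev_hdr &= ~(0xfffu << 20);      /* clear next ptr */
                        prev_hdr |= hdr & (0xfffu << 20); /* splice past cap */
                        write32(prev, prev_hdr);
                        return 0;
                }
                prev = off;
                off = EXT_CAP_NEXT(hdr);
        }
        return -1;
}
```

The kernel versions additionally toggle DBI read-only write access around the store, since the capability headers live in otherwise read-only config registers.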

- Add ASPM L1.1 and L1.2 Substates context to debugfs ltssm_status for
drivers that support this (Shawn Lin)

- Skip PME_Turn_Off broadcast and L2/L3 transition during suspend if link
is not up to avoid an unnecessary timeout (Manivannan Sadhasivam)

- Revert dw-rockchip, qcom, and DWC core changes that used link-up IRQs to
trigger enumeration instead of waiting for link to be up because the PCI
core doesn't allocate bus number space for hierarchies that might be
attached (Niklas Cassel)

- Make endpoint iATU entry for MSI permanent instead of programming it
dynamically, which is slow and racy with respect to other concurrent
traffic, e.g., eDMA (Koichiro Den)

- Use iMSI-RX MSI target address when possible to fix endpoints using
32-bit MSI (Shawn Lin)

- Make dw_pcie_ltssm_status_string() available and use it for logging
errors in dw_pcie_wait_for_link() (Manivannan Sadhasivam)

- Return -ENODEV when dw_pcie_wait_for_link() finds no devices, -EIO for
device present but inactive, -ETIMEDOUT for other failures, so callers
can handle these cases differently (Manivannan Sadhasivam)
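A caller distinguishing these three outcomes might dispatch on them like this; the helper below is an illustrative sketch of the policy described in the next bullet, not the actual dw_pcie_host_init() code.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/*
 * Decide whether a host controller probe may continue given the
 * result of waiting for the link: -ENODEV (no device in the slot)
 * and -EIO (device present but link not active) are tolerated so
 * the root bus still comes up; only a real link error such as
 * -ETIMEDOUT fails the probe.
 */
static bool link_wait_result_is_fatal(int ret)
{
        switch (ret) {
        case 0:         /* link up */
        case -ENODEV:   /* empty slot: not an error for the host */
        case -EIO:      /* device detected but inactive: keep going */
                return false;
        default:        /* -ETIMEDOUT or anything else */
                return true;
        }
}
```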

- Allow DWC host controller driver probe to continue if device is not found
or found but inactive; only fail when there's an error with the link
(Manivannan Sadhasivam)

- For controllers like NXP i.MX6QP and i.MX7D, where LTSSM registers are
not accessible after PME_Turn_Off, simply wait 10ms instead of polling
for L2/L3 Ready (Richard Zhu)

- Use multiple iATU entries to map large bridge windows and DMA ranges when
necessary instead of failing (Samuel Holland)
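The splitting itself amounts to chunking the range by the per-window size limit and programming one iATU entry per chunk. A hedged sketch of the chunking (struct and helper names are made up for illustration; the real code programs each chunk into an outbound iATU entry and respects the controller's window count):

```c
#include <stdint.h>

/* One illustrative iATU window: CPU-side base, PCI-side base, length. */
struct iatu_map {
        uint64_t cpu_addr;
        uint64_t pci_addr;
        uint64_t size;
};

/*
 * Split [cpu_addr, cpu_addr + size) into chunks of at most @max_win
 * bytes each, recording one window per chunk in @out. Returns the
 * number of windows used, or 0 if @max_out windows are not enough.
 */
static unsigned int iatu_map_range(uint64_t cpu_addr, uint64_t pci_addr,
                                   uint64_t size, uint64_t max_win,
                                   struct iatu_map *out,
                                   unsigned int max_out)
{
        unsigned int n = 0;

        while (size) {
                uint64_t chunk = size < max_win ? size : max_win;

                if (n == max_out)       /* controller out of iATU entries */
                        return 0;
                out[n++] = (struct iatu_map){ cpu_addr, pci_addr, chunk };
                cpu_addr += chunk;
                pci_addr += chunk;
                size -= chunk;
        }
        return n;
}
```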

- Rename struct dw_pcie_rp.has_msi_ctrl to .use_imsi_rx for clarity (Qiang
Yu)

- Add EPC dynamic_inbound_mapping feature bit for Endpoint Controllers that
can update BAR inbound address translation without requiring EPF driver
to clear/reset the BAR first, and advertise it for DWC-based Endpoints
(Koichiro Den)

- Add EPC subrange_mapping feature bit for Endpoint Controllers that can
map multiple independent inbound regions in a single BAR, implement
subrange mapping, advertise it for DWC-based Endpoints, and add Endpoint
selftests for it (Koichiro Den)
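The controller-side contract for a submap array is that it forms a gapless, aligned, exact decomposition of the BAR: submap[0] starts at offset 0 and each entry immediately follows the previous one. A standalone sketch of that validation (struct bar_submap stands in for the kernel's struct pci_epf_bar_submap; alignment is assumed to be a power of two, as the DWC region alignment is):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for struct pci_epf_bar_submap: each entry
 * maps the next BAR subrange to a local bus address. */
struct bar_submap {
        uint64_t phys_addr;
        uint64_t size;
};

/* @a must be a power of two. */
static bool aligned(uint64_t v, uint64_t a) { return (v & (a - 1)) == 0; }

/*
 * Validate that @map is a strict, gapless decomposition of a BAR of
 * @bar_size bytes: every entry has a non-zero size, sizes, implicit
 * offsets, and target addresses honor @align, each entry lies within
 * the BAR, and the entries exactly cover it.
 */
static bool submap_valid(const struct bar_submap *map, size_t n,
                         uint64_t bar_size, uint64_t align)
{
        uint64_t off = 0;

        if (!align || !aligned(bar_size, align))
                return false;

        for (size_t i = 0; i < n; i++) {
                if (!map[i].size ||
                    !aligned(map[i].size, align) || !aligned(off, align) ||
                    !aligned(map[i].phys_addr, align))
                        return false;
                if (off > bar_size || map[i].size > bar_size - off)
                        return false;
                off += map[i].size;
        }
        return off == bar_size;
}
```

Validating up front, before touching hardware, avoids programming iATU windows that would only have to be torn down again on failure.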

- Allow overriding default BAR sizes for pci-epf-test (Niklas Cassel)

- Make resizable BARs work for Endpoint multi-PF configurations; previously
it only worked for PF 0 (Aksh Garg)

- Fix Endpoint non-PF 0 support for BAR configuration, ATU mappings, and
Address Match Mode (Aksh Garg)

- Fix issues with outbound iATU index assignment that caused iATU index to
be out of bounds (Niklas Cassel)

- Clean up iATU index tracking to be consistent (Niklas Cassel)

- Set up iATU when ECAM is enabled; previously IO and MEM outbound windows
weren't programmed, and ECAM-related iATU entries weren't restored after
suspend/resume, so config accesses failed (Krishna Chaitanya Chundru)

* pci/controller/dwc:
PCI: dwc: Fix missing iATU setup when ECAM is enabled
PCI: dwc: Clean up iATU index usage in dw_pcie_iatu_setup()
PCI: dwc: Fix msg_atu_index assignment
PCI: dwc: ep: Add comment explaining controller level PTM access in multi PF setup
PCI: dwc: ep: Add per-PF BAR and inbound ATU mapping support
PCI: dwc: ep: Fix resizable BAR support for multi-PF configurations
PCI: endpoint: pci-epf-test: Allow overriding default BAR sizes
selftests: pci_endpoint: Add BAR subrange mapping test case
misc: pci_endpoint_test: Add BAR subrange mapping test case
PCI: endpoint: pci-epf-test: Add BAR subrange mapping test support
Documentation: PCI: endpoint: Clarify pci_epc_set_bar() usage
PCI: dwc: ep: Support BAR subrange inbound mapping via Address Match Mode iATU
PCI: dwc: Advertise dynamic inbound mapping support
PCI: endpoint: Add BAR subrange mapping support
PCI: endpoint: Add dynamic_inbound_mapping EPC feature
PCI: dwc: Rename dw_pcie_rp::has_msi_ctrl to dw_pcie_rp::use_imsi_rx for clarity
PCI: dwc: Fix grammar and formatting for comment in dw_pcie_remove_ext_capability()
PCI: dwc: Use multiple iATU windows for mapping large bridge windows and DMA ranges
PCI: dwc: Remove duplicate dw_pcie_ep_hide_ext_capability() function
PCI: dwc: Skip waiting for L2/L3 Ready if dw_pcie_rp::skip_l23_wait is true
PCI: dwc: Fail dw_pcie_host_init() if dw_pcie_wait_for_link() returns -ETIMEDOUT
PCI: dwc: Rework the error print of dw_pcie_wait_for_link()
PCI: dwc: Rename and move ltssm_status_string() to pcie-designware.c
PCI: dwc: Return -EIO from dw_pcie_wait_for_link() if device is not active
PCI: dwc: Return -ENODEV from dw_pcie_wait_for_link() if device is not found
PCI: dwc: Use cfg0_base as iMSI-RX target address to support 32-bit MSI devices
PCI: dwc: ep: Cache MSI outbound iATU mapping
Revert "PCI: dwc: Don't wait for link up if driver can detect Link Up event"
Revert "PCI: qcom: Enumerate endpoints based on Link up event in 'global_irq' interrupt"
Revert "PCI: qcom: Enable MSI interrupts together with Link up if 'Global IRQ' is supported"
Revert "PCI: qcom: Don't wait for link if we can detect Link Up"
Revert "PCI: dw-rockchip: Enumerate endpoints based on dll_link_up IRQ"
Revert "PCI: dw-rockchip: Don't wait for link since we can detect Link Up"
PCI: dwc: Skip PME_Turn_Off broadcast and L2/L3 transition during suspend if link is not up
PCI: dw-rockchip: Change get_ltssm() to provide L1 Substates info
PCI: dwc: Add L1 Substates context to ltssm_status of debugfs
PCI: qcom: Remove DPC Extended Capability
PCI: qcom: Remove MSI-X Capability for Root Ports
PCI: dwc: Remove MSI/MSIX capability for Root Port if iMSI-RX is used as MSI controller
PCI: dwc: Add new APIs to remove standard and extended Capability
PCI: Add preceding capability position support in PCI_FIND_NEXT_*_CAP macros

+1296 -350
Documentation/PCI/endpoint/pci-endpoint.rst (+24)

···
 Register space of the function driver is usually configured
 using this API.

+Some endpoint controllers also support calling pci_epc_set_bar() again
+for the same BAR (without calling pci_epc_clear_bar()) to update inbound
+address translations after the host has programmed the BAR base address.
+Endpoint function drivers can check this capability via the
+dynamic_inbound_mapping EPC feature bit.
+
+When pci_epf_bar.num_submap is non-zero, the endpoint function driver is
+requesting BAR subrange mapping using pci_epf_bar.submap. This requires
+the EPC to advertise support via the subrange_mapping EPC feature bit.
+
+When an EPF driver wants to make use of the inbound subrange mapping
+feature, it requires that the BAR base address has been programmed by
+the host during enumeration. Thus, it needs to call pci_epc_set_bar()
+twice for the same BAR (requires dynamic_inbound_mapping): first with
+num_submap set to zero and configuring the BAR size, then after the PCIe
+link is up and the host enumerates the endpoint and programs the BAR
+base address, again with num_submap set to non-zero value.
+
+Note that when making use of the inbound subrange mapping feature, the
+EPF driver must not call pci_epc_clear_bar() between the two
+pci_epc_set_bar() calls, because clearing the BAR can clear/disable the
+BAR register or BAR decode on the endpoint while the host still expects
+the assigned BAR address to remain valid.
+
 * pci_epc_clear_bar()

 The PCI endpoint function driver should use pci_epc_clear_bar() to reset
Documentation/PCI/endpoint/pci-test-howto.rst (+19)

···
 # echo 32 > functions/pci_epf_test/func1/msi_interrupts
 # echo 2048 > functions/pci_epf_test/func1/msix_interrupts

+By default, pci-epf-test uses the following BAR sizes::
+
+  # grep . functions/pci_epf_test/func1/pci_epf_test.0/bar?_size
+  functions/pci_epf_test/func1/pci_epf_test.0/bar0_size:131072
+  functions/pci_epf_test/func1/pci_epf_test.0/bar1_size:131072
+  functions/pci_epf_test/func1/pci_epf_test.0/bar2_size:131072
+  functions/pci_epf_test/func1/pci_epf_test.0/bar3_size:131072
+  functions/pci_epf_test/func1/pci_epf_test.0/bar4_size:131072
+  functions/pci_epf_test/func1/pci_epf_test.0/bar5_size:1048576
+
+The user can override a default value using e.g.::
+  # echo 1048576 > functions/pci_epf_test/func1/pci_epf_test.0/bar1_size
+
+Overriding the default BAR sizes can only be done before binding the
+pci-epf-test device to a PCI endpoint controller driver.
+
+Note: Some endpoint controllers might have fixed-size BARs or reserved BARs;
+for such controllers, the corresponding BAR size in configfs will be ignored.
+

 Binding pci-epf-test Device to EP Controller
 --------------------------------------------
drivers/misc/pci_endpoint_test.c (+202 -1)

···
 #define COMMAND_COPY                    BIT(5)
 #define COMMAND_ENABLE_DOORBELL         BIT(6)
 #define COMMAND_DISABLE_DOORBELL        BIT(7)
+#define COMMAND_BAR_SUBRANGE_SETUP      BIT(8)
+#define COMMAND_BAR_SUBRANGE_CLEAR      BIT(9)

 #define PCI_ENDPOINT_TEST_STATUS        0x8
 #define STATUS_READ_SUCCESS             BIT(0)
···
 #define STATUS_DOORBELL_ENABLE_FAIL     BIT(11)
 #define STATUS_DOORBELL_DISABLE_SUCCESS BIT(12)
 #define STATUS_DOORBELL_DISABLE_FAIL    BIT(13)
+#define STATUS_BAR_SUBRANGE_SETUP_SUCCESS       BIT(14)
+#define STATUS_BAR_SUBRANGE_SETUP_FAIL          BIT(15)
+#define STATUS_BAR_SUBRANGE_CLEAR_SUCCESS       BIT(16)
+#define STATUS_BAR_SUBRANGE_CLEAR_FAIL          BIT(17)

 #define PCI_ENDPOINT_TEST_LOWER_SRC_ADDR        0x0c
 #define PCI_ENDPOINT_TEST_UPPER_SRC_ADDR        0x10
···
 #define CAP_MSI                 BIT(1)
 #define CAP_MSIX                BIT(2)
 #define CAP_INTX                BIT(3)
+#define CAP_SUBRANGE_MAPPING    BIT(4)

 #define PCI_ENDPOINT_TEST_DB_BAR        0x34
 #define PCI_ENDPOINT_TEST_DB_OFFSET     0x38
···
 #define PCI_DEVICE_ID_RENESAS_R8A779F0          0x0031

 #define PCI_DEVICE_ID_ROCKCHIP_RK3588           0x3588
+
+#define PCI_ENDPOINT_TEST_BAR_SUBRANGE_NSUB     2

 static DEFINE_IDA(pci_endpoint_test_ida);
···
        }

        return 0;
+}
+
+static u8 pci_endpoint_test_subrange_sig_byte(enum pci_barno barno,
+                                             unsigned int subno)
+{
+       return 0x50 + (barno * 8) + subno;
+}
+
+static u8 pci_endpoint_test_subrange_test_byte(enum pci_barno barno,
+                                              unsigned int subno)
+{
+       return 0xa0 + (barno * 8) + subno;
+}
+
+static int pci_endpoint_test_bar_subrange_cmd(struct pci_endpoint_test *test,
+                                             enum pci_barno barno, u32 command,
+                                             u32 ok_bit, u32 fail_bit)
+{
+       struct pci_dev *pdev = test->pdev;
+       struct device *dev = &pdev->dev;
+       int irq_type = test->irq_type;
+       u32 status;
+
+       if (irq_type < PCITEST_IRQ_TYPE_INTX ||
+           irq_type > PCITEST_IRQ_TYPE_MSIX) {
+               dev_err(dev, "Invalid IRQ type\n");
+               return -EINVAL;
+       }
+
+       reinit_completion(&test->irq_raised);
+
+       pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_STATUS, 0);
+       pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_TYPE, irq_type);
+       pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_IRQ_NUMBER, 1);
+       /* Reuse SIZE as a command parameter: bar number. */
+       pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_SIZE, barno);
+       pci_endpoint_test_writel(test, PCI_ENDPOINT_TEST_COMMAND, command);
+
+       if (!wait_for_completion_timeout(&test->irq_raised,
+                                        msecs_to_jiffies(1000)))
+               return -ETIMEDOUT;
+
+       status = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_STATUS);
+       if (status & fail_bit)
+               return -EIO;
+
+       if (!(status & ok_bit))
+               return -EIO;
+
+       return 0;
+}
+
+static int pci_endpoint_test_bar_subrange_setup(struct pci_endpoint_test *test,
+                                               enum pci_barno barno)
+{
+       return pci_endpoint_test_bar_subrange_cmd(test, barno,
+                                       COMMAND_BAR_SUBRANGE_SETUP,
+                                       STATUS_BAR_SUBRANGE_SETUP_SUCCESS,
+                                       STATUS_BAR_SUBRANGE_SETUP_FAIL);
+}
+
+static int pci_endpoint_test_bar_subrange_clear(struct pci_endpoint_test *test,
+                                               enum pci_barno barno)
+{
+       return pci_endpoint_test_bar_subrange_cmd(test, barno,
+                                       COMMAND_BAR_SUBRANGE_CLEAR,
+                                       STATUS_BAR_SUBRANGE_CLEAR_SUCCESS,
+                                       STATUS_BAR_SUBRANGE_CLEAR_FAIL);
+}
+
+static int pci_endpoint_test_bar_subrange(struct pci_endpoint_test *test,
+                                         enum pci_barno barno)
+{
+       u32 nsub = PCI_ENDPOINT_TEST_BAR_SUBRANGE_NSUB;
+       struct device *dev = &test->pdev->dev;
+       size_t sub_size, buf_size;
+       resource_size_t bar_size;
+       void __iomem *bar_addr;
+       void *read_buf = NULL;
+       int ret, clear_ret;
+       size_t off, chunk;
+       u32 i, exp, val;
+       u8 pattern;
+
+       if (!(test->ep_caps & CAP_SUBRANGE_MAPPING))
+               return -EOPNOTSUPP;
+
+       /*
+        * The test register BAR is not safe to reprogram and write/read
+        * over its full size. BAR_TEST already special-cases it to a tiny
+        * range. For subrange mapping tests, let's simply skip it.
+        */
+       if (barno == test->test_reg_bar)
+               return -EBUSY;
+
+       bar_size = pci_resource_len(test->pdev, barno);
+       if (!bar_size)
+               return -ENODATA;
+
+       bar_addr = test->bar[barno];
+       if (!bar_addr)
+               return -ENOMEM;
+
+       ret = pci_endpoint_test_bar_subrange_setup(test, barno);
+       if (ret)
+               return ret;
+
+       if (bar_size % nsub || bar_size / nsub > SIZE_MAX) {
+               ret = -EINVAL;
+               goto out_clear;
+       }
+
+       sub_size = bar_size / nsub;
+       if (sub_size < sizeof(u32)) {
+               ret = -ENOSPC;
+               goto out_clear;
+       }
+
+       /* Limit the temporary buffer size */
+       buf_size = min_t(size_t, sub_size, SZ_1M);
+
+       read_buf = kmalloc(buf_size, GFP_KERNEL);
+       if (!read_buf) {
+               ret = -ENOMEM;
+               goto out_clear;
+       }
+
+       /*
+        * Step 1: verify EP-provided signature per subrange. This detects
+        * whether the EP actually applied the submap order.
+        */
+       for (i = 0; i < nsub; i++) {
+               exp = (u32)pci_endpoint_test_subrange_sig_byte(barno, i) *
+                     0x01010101U;
+               val = ioread32(bar_addr + (i * sub_size));
+               if (val != exp) {
+                       dev_err(dev,
+                               "BAR%d subrange%u signature mismatch @%#zx: exp %#08x got %#08x\n",
+                               barno, i, (size_t)i * sub_size, exp, val);
+                       ret = -EIO;
+                       goto out_clear;
+               }
+               val = ioread32(bar_addr + (i * sub_size) + sub_size - sizeof(u32));
+               if (val != exp) {
+                       dev_err(dev,
+                               "BAR%d subrange%u signature mismatch @%#zx: exp %#08x got %#08x\n",
+                               barno, i,
+                               ((size_t)i * sub_size) + sub_size - sizeof(u32),
+                               exp, val);
+                       ret = -EIO;
+                       goto out_clear;
+               }
+       }
+
+       /* Step 2: write unique pattern per subrange (write all first). */
+       for (i = 0; i < nsub; i++) {
+               pattern = pci_endpoint_test_subrange_test_byte(barno, i);
+               memset_io(bar_addr + (i * sub_size), pattern, sub_size);
+       }
+
+       /* Step 3: read back and verify (read all after all writes). */
+       for (i = 0; i < nsub; i++) {
+               pattern = pci_endpoint_test_subrange_test_byte(barno, i);
+               for (off = 0; off < sub_size; off += chunk) {
+                       void *bad;
+
+                       chunk = min_t(size_t, buf_size, sub_size - off);
+                       memcpy_fromio(read_buf, bar_addr + (i * sub_size) + off,
+                                     chunk);
+                       bad = memchr_inv(read_buf, pattern, chunk);
+                       if (bad) {
+                               size_t bad_off = (u8 *)bad - (u8 *)read_buf;
+
+                               dev_err(dev,
+                                       "BAR%d subrange%u data mismatch @%#zx (pattern %#02x)\n",
+                                       barno, i, (size_t)i * sub_size + off + bad_off,
+                                       pattern);
+                               ret = -EIO;
+                               goto out_clear;
+                       }
+               }
+       }
+
+out_clear:
+       kfree(read_buf);
+       clear_ret = pci_endpoint_test_bar_subrange_clear(test, barno);
+       return ret ?: clear_ret;
 }

 static int pci_endpoint_test_intx_irq(struct pci_endpoint_test *test)
···

        switch (cmd) {
        case PCITEST_BAR:
+       case PCITEST_BAR_SUBRANGE:
                bar = arg;
                if (bar <= NO_BAR || bar > BAR_5)
                        goto ret;
                if (is_am654_pci_dev(pdev) && bar == BAR_0)
                        goto ret;
-               ret = pci_endpoint_test_bar(test, bar);
+
+               if (cmd == PCITEST_BAR)
+                       ret = pci_endpoint_test_bar(test, bar);
+               else
+                       ret = pci_endpoint_test_bar_subrange(test, bar);
                break;
        case PCITEST_BARS:
                ret = pci_endpoint_test_bars(test);
drivers/pci/controller/cadence/pcie-cadence.c (+2 -2)

···
 u8 cdns_pcie_find_capability(struct cdns_pcie *pcie, u8 cap)
 {
        return PCI_FIND_NEXT_CAP(cdns_pcie_read_cfg, PCI_CAPABILITY_LIST,
-                                cap, pcie);
+                                cap, NULL, pcie);
 }
 EXPORT_SYMBOL_GPL(cdns_pcie_find_capability);

 u16 cdns_pcie_find_ext_capability(struct cdns_pcie *pcie, u8 cap)
 {
-       return PCI_FIND_NEXT_EXT_CAP(cdns_pcie_read_cfg, 0, cap, pcie);
+       return PCI_FIND_NEXT_EXT_CAP(cdns_pcie_read_cfg, 0, cap, NULL, pcie);
 }
 EXPORT_SYMBOL_GPL(cdns_pcie_find_ext_capability);
drivers/pci/controller/dwc/pci-dra7xx.c (+1)

···
 }

 static const struct pci_epc_features dra7xx_pcie_epc_features = {
+       DWC_EPC_COMMON_FEATURES,
        .linkup_notifier = true,
        .msi_capable = true,
 };
drivers/pci/controller/dwc/pci-imx6.c (+8)

···
 #define IMX_PCIE_FLAG_BROKEN_SUSPEND            BIT(9)
 #define IMX_PCIE_FLAG_HAS_LUT                   BIT(10)
 #define IMX_PCIE_FLAG_8GT_ECN_ERR051586         BIT(11)
+#define IMX_PCIE_FLAG_SKIP_L23_READY            BIT(12)

 #define imx_check_flag(pci, val)        (pci->drvdata->flags & val)

···
 }

 static const struct pci_epc_features imx8m_pcie_epc_features = {
+       DWC_EPC_COMMON_FEATURES,
        .msi_capable = true,
        .bar[BAR_1] = { .type = BAR_RESERVED, },
        .bar[BAR_3] = { .type = BAR_RESERVED, },
···
 };

 static const struct pci_epc_features imx8q_pcie_epc_features = {
+       DWC_EPC_COMMON_FEATURES,
        .msi_capable = true,
        .bar[BAR_1] = { .type = BAR_RESERVED, },
        .bar[BAR_3] = { .type = BAR_RESERVED, },
···
  * BAR5 | Enable | 32-bit | 64 KB | Programmable Size
  */
 static const struct pci_epc_features imx95_pcie_epc_features = {
+       DWC_EPC_COMMON_FEATURES,
        .msi_capable = true,
        .bar[BAR_1] = { .type = BAR_FIXED, .fixed_size = SZ_64K, },
        .align = SZ_4K,
···
                 */
                imx_pcie_add_lut_by_rid(imx_pcie, 0);
        } else {
+               if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_SKIP_L23_READY))
+                       pci->pp.skip_l23_ready = true;
                pci->pp.use_atu_msg = true;
                ret = dw_pcie_host_init(&pci->pp);
                if (ret < 0)
···
        .variant = IMX6QP,
        .flags = IMX_PCIE_FLAG_IMX_PHY |
                 IMX_PCIE_FLAG_SPEED_CHANGE_WORKAROUND |
+                IMX_PCIE_FLAG_SKIP_L23_READY |
                 IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
        .dbi_length = 0x200,
        .gpr = "fsl,imx6q-iomuxc-gpr",
···
        .variant = IMX7D,
        .flags = IMX_PCIE_FLAG_SUPPORTS_SUSPEND |
                 IMX_PCIE_FLAG_HAS_APP_RESET |
+                IMX_PCIE_FLAG_SKIP_L23_READY |
                 IMX_PCIE_FLAG_HAS_PHY_RESET,
        .gpr = "fsl,imx7d-iomuxc-gpr",
        .mode_off[0] = IOMUXC_GPR12,
drivers/pci/controller/dwc/pci-keystone.c (+1)

···
 }

 static const struct pci_epc_features ks_pcie_am654_epc_features = {
+       DWC_EPC_COMMON_FEATURES,
        .msi_capable = true,
        .msix_capable = true,
        .bar[BAR_0] = { .type = BAR_RESERVED, },
drivers/pci/controller/dwc/pcie-artpec6.c (+1)

···
 }

 static const struct pci_epc_features artpec6_pcie_epc_features = {
+       DWC_EPC_COMMON_FEATURES,
        .msi_capable = true,
 };
drivers/pci/controller/dwc/pcie-designware-debugfs.c (+1 -51)

···
        return simple_read_from_buffer(buf, count, ppos, debugfs_buf, pos);
 }

-static const char *ltssm_status_string(enum dw_pcie_ltssm ltssm)
-{
-       const char *str;
-
-       switch (ltssm) {
-#define DW_PCIE_LTSSM_NAME(n) case n: str = #n; break
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_QUIET);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_ACT);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_ACTIVE);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_COMPLIANCE);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_CONFIG);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_PRE_DETECT_QUIET);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_WAIT);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_START);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_ACEPT);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_WAI);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_ACEPT);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_COMPLETE);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_IDLE);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_LOCK);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_SPEED);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_RCVRCFG);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_IDLE);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0S);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L123_SEND_EIDLE);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_IDLE);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_IDLE);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_WAKE);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_ENTRY);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_IDLE);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ENTRY);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ACTIVE);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT_TIMEOUT);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET_ENTRY);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ0);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ1);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ2);
-       DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ3);
-       default:
-               str = "DW_PCIE_LTSSM_UNKNOWN";
-               break;
-       }
-
-       return str + strlen("DW_PCIE_LTSSM_");
-}
-
 static int ltssm_status_show(struct seq_file *s, void *v)
 {
        struct dw_pcie *pci = s->private;
        enum dw_pcie_ltssm val;

        val = dw_pcie_get_ltssm(pci);
-       seq_printf(s, "%s (0x%02x)\n", ltssm_status_string(val), val);
+       seq_printf(s, "%s (0x%02x)\n", dw_pcie_ltssm_status_string(val), val);

        return 0;
 }
drivers/pci/controller/dwc/pcie-designware-ep.c (+317 -82)

···
 static u8 dw_pcie_ep_find_capability(struct dw_pcie_ep *ep, u8 func_no, u8 cap)
 {
        return PCI_FIND_NEXT_CAP(dw_pcie_ep_read_cfg, PCI_CAPABILITY_LIST,
-                                cap, ep, func_no);
+                                cap, NULL, ep, func_no);
 }

-/**
- * dw_pcie_ep_hide_ext_capability - Hide a capability from the linked list
- * @pci: DWC PCI device
- * @prev_cap: Capability preceding the capability that should be hidden
- * @cap: Capability that should be hidden
- *
- * Return: 0 if success, errno otherwise.
- */
-int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, u8 prev_cap, u8 cap)
+static u16 dw_pcie_ep_find_ext_capability(struct dw_pcie_ep *ep,
+                                         u8 func_no, u8 cap)
 {
-       u16 prev_cap_offset, cap_offset;
-       u32 prev_cap_header, cap_header;
-
-       prev_cap_offset = dw_pcie_find_ext_capability(pci, prev_cap);
-       if (!prev_cap_offset)
-               return -EINVAL;
-
-       prev_cap_header = dw_pcie_readl_dbi(pci, prev_cap_offset);
-       cap_offset = PCI_EXT_CAP_NEXT(prev_cap_header);
-       cap_header = dw_pcie_readl_dbi(pci, cap_offset);
-
-       /* cap must immediately follow prev_cap. */
-       if (PCI_EXT_CAP_ID(cap_header) != cap)
-               return -EINVAL;
-
-       /* Clear next ptr. */
-       prev_cap_header &= ~GENMASK(31, 20);
-
-       /* Set next ptr to next ptr of cap. */
-       prev_cap_header |= cap_header & GENMASK(31, 20);
-
-       dw_pcie_dbi_ro_wr_en(pci);
-       dw_pcie_writel_dbi(pci, prev_cap_offset, prev_cap_header);
-       dw_pcie_dbi_ro_wr_dis(pci);
-
-       return 0;
+       return PCI_FIND_NEXT_EXT_CAP(dw_pcie_ep_read_cfg, 0,
+                                    cap, NULL, ep, func_no);
 }
-EXPORT_SYMBOL_GPL(dw_pcie_ep_hide_ext_capability);

 static int dw_pcie_ep_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
                                    struct pci_epf_header *hdr)
···
        return 0;
 }

-static int dw_pcie_ep_inbound_atu(struct dw_pcie_ep *ep, u8 func_no, int type,
-                                 dma_addr_t parent_bus_addr, enum pci_barno bar,
-                                 size_t size)
+/* BAR Match Mode inbound iATU mapping */
+static int dw_pcie_ep_ib_atu_bar(struct dw_pcie_ep *ep, u8 func_no, int type,
+                                dma_addr_t parent_bus_addr, enum pci_barno bar,
+                                size_t size)
 {
        int ret;
        u32 free_win;
        struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+       struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);

-       if (!ep->bar_to_atu[bar])
+       if (!ep_func)
+               return -EINVAL;
+
+       if (!ep_func->bar_to_atu[bar])
                free_win = find_first_zero_bit(ep->ib_window_map, pci->num_ib_windows);
        else
-               free_win = ep->bar_to_atu[bar] - 1;
+               free_win = ep_func->bar_to_atu[bar] - 1;

        if (free_win >= pci->num_ib_windows) {
                dev_err(pci->dev, "No free inbound window\n");
···
         * Always increment free_win before assignment, since value 0 is used to identify
         * unallocated mapping.
         */
-       ep->bar_to_atu[bar] = free_win + 1;
+       ep_func->bar_to_atu[bar] = free_win + 1;
        set_bit(free_win, ep->ib_window_map);

        return 0;
+}
+
+static void dw_pcie_ep_clear_ib_maps(struct dw_pcie_ep *ep, u8 func_no, enum pci_barno bar)
+{
+       struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
+       struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+       struct device *dev = pci->dev;
+       unsigned int i, num;
+       u32 atu_index;
+       u32 *indexes;
+
+       if (!ep_func)
+               return;
+
+       /* Tear down the BAR Match Mode mapping, if any. */
+       if (ep_func->bar_to_atu[bar]) {
+               atu_index = ep_func->bar_to_atu[bar] - 1;
+               dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, atu_index);
+               clear_bit(atu_index, ep->ib_window_map);
+               ep_func->bar_to_atu[bar] = 0;
+       }
+
+       /* Tear down all Address Match Mode mappings, if any. */
+       indexes = ep_func->ib_atu_indexes[bar];
+       num = ep_func->num_ib_atu_indexes[bar];
+       ep_func->ib_atu_indexes[bar] = NULL;
+       ep_func->num_ib_atu_indexes[bar] = 0;
+       if (!indexes)
+               return;
+       for (i = 0; i < num; i++) {
+               dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, indexes[i]);
+               clear_bit(indexes[i], ep->ib_window_map);
+       }
+       devm_kfree(dev, indexes);
+}
+
+static u64 dw_pcie_ep_read_bar_assigned(struct dw_pcie_ep *ep, u8 func_no,
+                                       enum pci_barno bar, int flags)
+{
+       u32 reg = PCI_BASE_ADDRESS_0 + (4 * bar);
+       u32 lo, hi;
+       u64 addr;
+
+       lo = dw_pcie_ep_readl_dbi(ep, func_no, reg);
+
+       if (flags & PCI_BASE_ADDRESS_SPACE)
+               return lo & PCI_BASE_ADDRESS_IO_MASK;
+
+       addr = lo & PCI_BASE_ADDRESS_MEM_MASK;
+       if (!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64))
+               return addr;
+
+       hi = dw_pcie_ep_readl_dbi(ep, func_no, reg + 4);
+       return addr | ((u64)hi << 32);
+}
+
+static int dw_pcie_ep_validate_submap(struct dw_pcie_ep *ep,
+                                     const struct pci_epf_bar_submap *submap,
+                                     unsigned int num_submap, size_t bar_size)
+{
+       struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+       u32 align = pci->region_align;
+       size_t off = 0;
+       unsigned int i;
+       size_t size;
+
+       if (!align || !IS_ALIGNED(bar_size, align))
+               return -EINVAL;
+
+       /*
+        * The submap array order defines the BAR layout (submap[0] starts
+        * at offset 0 and each entry immediately follows the previous
+        * one). Here, validate that it forms a strict, gapless
+        * decomposition of the BAR:
+        * - each entry has a non-zero size
+        * - sizes, implicit offsets and phys_addr are aligned to
+        *   pci->region_align
+        * - each entry lies within the BAR range
+        * - the entries exactly cover the whole BAR
+        *
+        * Note: dw_pcie_prog_inbound_atu() also checks alignment for the
+        * PCI address and the target phys_addr, but validating up-front
+        * avoids partially programming iATU windows in vain.
+        */
+       for (i = 0; i < num_submap; i++) {
+               size = submap[i].size;
+
+               if (!size)
+                       return -EINVAL;
+
+               if (!IS_ALIGNED(size, align) || !IS_ALIGNED(off, align))
+                       return -EINVAL;
+
+               if (!IS_ALIGNED(submap[i].phys_addr, align))
+                       return -EINVAL;
+
+               if (off > bar_size || size > bar_size - off)
+                       return -EINVAL;
+
+               off += size;
+       }
+       if (off != bar_size)
+               return -EINVAL;
+
+       return 0;
+}
+
+/* Address Match Mode inbound iATU mapping */
+static int dw_pcie_ep_ib_atu_addr(struct dw_pcie_ep *ep, u8 func_no, int type,
+                                 const struct pci_epf_bar *epf_bar)
+{
+       struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
+       const struct pci_epf_bar_submap *submap = epf_bar->submap;
+       struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+       enum pci_barno bar = epf_bar->barno;
+       struct device *dev = pci->dev;
+       u64 pci_addr, parent_bus_addr;
+       u64 size, base, off = 0;
+       int free_win, ret;
+       unsigned int i;
+       u32 *indexes;
+
+       if (!ep_func || !epf_bar->num_submap || !submap || !epf_bar->size)
+               return -EINVAL;
+
+       ret = dw_pcie_ep_validate_submap(ep, submap, epf_bar->num_submap,
+                                        epf_bar->size);
+       if (ret)
+               return ret;
+
+       base = dw_pcie_ep_read_bar_assigned(ep, func_no, bar, epf_bar->flags);
+       if (!base) {
+               dev_err(dev,
+                       "BAR%u not assigned, cannot set up sub-range mappings\n",
+                       bar);
+               return -EINVAL;
+       }
+
+       indexes = devm_kcalloc(dev, epf_bar->num_submap, sizeof(*indexes),
+                              GFP_KERNEL);
+       if (!indexes)
+               return -ENOMEM;
+
+       ep_func->ib_atu_indexes[bar] = indexes;
+       ep_func->num_ib_atu_indexes[bar] = 0;
+
+       for (i = 0; i < epf_bar->num_submap; i++) {
+               size = submap[i].size;
+               parent_bus_addr = submap[i].phys_addr;
+
+               if (off > (~0ULL) - base) {
+                       ret = -EINVAL;
+                       goto err;
+               }
+
+               pci_addr = base + off;
+               off += size;
+
+               free_win = find_first_zero_bit(ep->ib_window_map,
+                                              pci->num_ib_windows);
+               if (free_win >= pci->num_ib_windows) {
+                       ret = -ENOSPC;
+                       goto err;
+               }
+
+               ret = dw_pcie_prog_inbound_atu(pci, free_win, type,
+                                              parent_bus_addr, pci_addr, size);
+               if (ret)
+                       goto err;
+
+               set_bit(free_win, ep->ib_window_map);
+               indexes[i] = free_win;
+               ep_func->num_ib_atu_indexes[bar] = i + 1;
+       }
+       return 0;
+err:
+       dw_pcie_ep_clear_ib_maps(ep, func_no, bar);
+       return ret;
 }

 static int dw_pcie_ep_outbound_atu(struct dw_pcie_ep *ep,
···
        struct dw_pcie_ep *ep = epc_get_drvdata(epc);
        struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
        enum pci_barno bar = epf_bar->barno;
-       u32 atu_index = ep->bar_to_atu[bar] - 1;
+       struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);

-       if (!ep->bar_to_atu[bar])
+       if (!ep_func || !ep_func->epf_bar[bar])
                return;

        __dw_pcie_ep_reset_bar(pci, func_no, bar, epf_bar->flags);

-       dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, atu_index);
-       clear_bit(atu_index, ep->ib_window_map);
-       ep->epf_bar[bar] = NULL;
-       ep->bar_to_atu[bar] = 0;
+       dw_pcie_ep_clear_ib_maps(ep, func_no, bar);
+
+       ep_func->epf_bar[bar] = NULL;
 }

-static unsigned int dw_pcie_ep_get_rebar_offset(struct dw_pcie *pci,
+static unsigned int dw_pcie_ep_get_rebar_offset(struct dw_pcie_ep *ep, u8 func_no,
                                                enum pci_barno bar)
 {
        u32 reg, bar_index;
        unsigned int offset, nbars;
        int i;

-       offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
+       offset = dw_pcie_ep_find_ext_capability(ep, func_no, PCI_EXT_CAP_ID_REBAR);
        if (!offset)
                return offset;

-       reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
+       reg = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL);
        nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, reg);

        for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL) {
-               reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
+               reg = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL);
                bar_index = FIELD_GET(PCI_REBAR_CTRL_BAR_IDX, reg);
                if (bar_index == bar)
                        return offset;
···
        u32 rebar_cap, rebar_ctrl;
        int ret;

-       rebar_offset = dw_pcie_ep_get_rebar_offset(pci, bar);
+       rebar_offset = dw_pcie_ep_get_rebar_offset(ep, func_no, bar);
        if (!rebar_offset)
                return -EINVAL;

···
         * 1 MB to 128 TB. Bits 31:16 in PCI_REBAR_CTRL define "supported sizes"
         * bits for sizes 256 TB to 8 EB. Disallow sizes 256 TB to 8 EB.
         */
-       rebar_ctrl = dw_pcie_readl_dbi(pci, rebar_offset + PCI_REBAR_CTRL);
+       rebar_ctrl = dw_pcie_ep_readl_dbi(ep, func_no, rebar_offset + PCI_REBAR_CTRL);
        rebar_ctrl &= ~GENMASK(31, 16);
-       dw_pcie_writel_dbi(pci, rebar_offset + PCI_REBAR_CTRL, rebar_ctrl);
+       dw_pcie_ep_writel_dbi(ep, func_no, rebar_offset + PCI_REBAR_CTRL, rebar_ctrl);

        /*
         * The "selected size" (bits 13:8) in PCI_REBAR_CTRL are automatically
         * updated when writing PCI_REBAR_CAP, see "Figure 3-26 Resizable BAR
         * Example for 32-bit Memory BAR0" in DWC EP databook 5.96a.
         */
-       dw_pcie_writel_dbi(pci, rebar_offset + PCI_REBAR_CAP, rebar_cap);
+       dw_pcie_ep_writel_dbi(ep, func_no, rebar_offset + PCI_REBAR_CAP, rebar_cap);

        dw_pcie_dbi_ro_wr_dis(pci);

···
 {
        struct dw_pcie_ep *ep = epc_get_drvdata(epc);
        struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+       struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no);
        enum pci_barno bar = epf_bar->barno;
        size_t size = epf_bar->size;
        enum pci_epc_bar_type bar_type;
        int flags = epf_bar->flags;
        int ret, type;
+
+       if (!ep_func)
+               return -EINVAL;

        /*
         * DWC does not allow BAR pairs to overlap, e.g. you cannot combine BARs
···
         * calling clear_bar() would clear the BAR's PCI address assigned by the
         * host).
         */
-       if (ep->epf_bar[bar]) {
+       if (ep_func->epf_bar[bar]) {
                /*
                 * We can only dynamically change a BAR if the new BAR size and
                 * BAR flags do not differ from the existing configuration.
                 */
-               if (ep->epf_bar[bar]->barno != bar ||
-                   ep->epf_bar[bar]->size != size ||
-                   ep->epf_bar[bar]->flags != flags)
+               if (ep_func->epf_bar[bar]->barno != bar ||
+                   ep_func->epf_bar[bar]->size != size ||
+                   ep_func->epf_bar[bar]->flags != flags)
                        return -EINVAL;
+
+               /*
+                * When dynamically changing a BAR, tear down any existing
+                * mappings before re-programming.
+                */
+               if (ep_func->epf_bar[bar]->num_submap || epf_bar->num_submap)
+                       dw_pcie_ep_clear_ib_maps(ep, func_no, bar);

                /*
                 * When dynamically changing a BAR, skip writing the BAR reg, as
                 * that would clear the BAR's PCI address assigned by the host.
                 */
                goto config_atu;
+       } else {
+               /*
+                * Subrange mapping is an update-only operation.
The BAR 388 + * must have been configured once without submaps so that 389 + * subsequent set_bar() calls can update inbound mappings 390 + * without touching the BAR register (and clobbering the 391 + * host-assigned address). 392 + */ 393 + if (epf_bar->num_submap) 394 + return -EINVAL; 532 395 } 533 396 534 397 bar_type = dw_pcie_ep_get_bar_type(ep, bar); ··· 579 408 else 580 409 type = PCIE_ATU_TYPE_IO; 581 410 582 - ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar, 583 - size); 411 + if (epf_bar->num_submap) 412 + ret = dw_pcie_ep_ib_atu_addr(ep, func_no, type, epf_bar); 413 + else 414 + ret = dw_pcie_ep_ib_atu_bar(ep, func_no, type, 415 + epf_bar->phys_addr, bar, size); 416 + 584 417 if (ret) 585 418 return ret; 586 419 587 - ep->epf_bar[bar] = epf_bar; 420 + ep_func->epf_bar[bar] = epf_bar; 588 421 589 422 return 0; 590 423 } ··· 776 601 struct dw_pcie_ep *ep = epc_get_drvdata(epc); 777 602 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 778 603 604 + /* 605 + * Tear down the dedicated outbound window used for MSI 606 + * generation. This avoids leaking an iATU window across 607 + * endpoint stop/start cycles. 608 + */ 609 + if (ep->msi_iatu_mapped) { 610 + dw_pcie_ep_unmap_addr(epc, 0, 0, ep->msi_mem_phys); 611 + ep->msi_iatu_mapped = false; 612 + } 613 + 779 614 dw_pcie_stop_link(pci); 780 615 } 781 616 ··· 887 702 msg_addr = ((u64)msg_addr_upper) << 32 | msg_addr_lower; 888 703 889 704 msg_addr = dw_pcie_ep_align_addr(epc, msg_addr, &map_size, &offset); 890 - ret = dw_pcie_ep_map_addr(epc, func_no, 0, ep->msi_mem_phys, msg_addr, 891 - map_size); 892 - if (ret) 893 - return ret; 705 + 706 + /* 707 + * Program the outbound iATU once and keep it enabled. 708 + * 709 + * The spec warns that updating iATU registers while there are 710 + * operations in flight on the AXI bridge interface is not 711 + * supported, so we avoid reprogramming the region on every MSI, 712 + * specifically unmapping immediately after writel(). 
713 + */ 714 + if (!ep->msi_iatu_mapped) { 715 + ret = dw_pcie_ep_map_addr(epc, func_no, 0, 716 + ep->msi_mem_phys, msg_addr, 717 + map_size); 718 + if (ret) 719 + return ret; 720 + 721 + ep->msi_iatu_mapped = true; 722 + ep->msi_msg_addr = msg_addr; 723 + ep->msi_map_size = map_size; 724 + } else if (WARN_ON_ONCE(ep->msi_msg_addr != msg_addr || 725 + ep->msi_map_size != map_size)) { 726 + /* 727 + * The host changed the MSI target address or the required 728 + * mapping size changed. Reprogramming the iATU at runtime is 729 + * unsafe on this controller, so bail out instead of trying to 730 + * update the existing region. 731 + */ 732 + return -EINVAL; 733 + } 894 734 895 735 writel(msg_data | (interrupt_num - 1), ep->msi_mem + offset); 896 - 897 - dw_pcie_ep_unmap_addr(epc, func_no, 0, ep->msi_mem_phys); 898 736 899 737 return 0; 900 738 } ··· 983 775 bir = FIELD_GET(PCI_MSIX_TABLE_BIR, tbl_offset); 984 776 tbl_offset &= PCI_MSIX_TABLE_OFFSET; 985 777 986 - msix_tbl = ep->epf_bar[bir]->addr + tbl_offset; 778 + msix_tbl = ep_func->epf_bar[bir]->addr + tbl_offset; 987 779 msg_addr = msix_tbl[(interrupt_num - 1)].msg_addr; 988 780 msg_data = msix_tbl[(interrupt_num - 1)].msg_data; 989 781 vec_ctrl = msix_tbl[(interrupt_num - 1)].vector_ctrl; ··· 1044 836 } 1045 837 EXPORT_SYMBOL_GPL(dw_pcie_ep_deinit); 1046 838 1047 - static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci) 839 + static void dw_pcie_ep_init_rebar_registers(struct dw_pcie_ep *ep, u8 func_no) 1048 840 { 1049 - struct dw_pcie_ep *ep = &pci->ep; 1050 - unsigned int offset; 1051 - unsigned int nbars; 841 + struct dw_pcie_ep_func *ep_func = dw_pcie_ep_get_func_from_ep(ep, func_no); 842 + unsigned int offset, nbars; 1052 843 enum pci_barno bar; 1053 844 u32 reg, i, val; 1054 845 1055 - offset = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR); 846 + if (!ep_func) 847 + return; 1056 848 1057 - dw_pcie_dbi_ro_wr_en(pci); 849 + offset = dw_pcie_ep_find_ext_capability(ep, func_no, 
PCI_EXT_CAP_ID_REBAR); 1058 850 1059 851 if (offset) { 1060 - reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 852 + reg = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL); 1061 853 nbars = FIELD_GET(PCI_REBAR_CTRL_NBAR_MASK, reg); 1062 854 1063 855 /* ··· 1078 870 * the controller when RESBAR_CAP_REG is written, which 1079 871 * is why RESBAR_CAP_REG is written here. 1080 872 */ 1081 - val = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL); 873 + val = dw_pcie_ep_readl_dbi(ep, func_no, offset + PCI_REBAR_CTRL); 1082 874 bar = FIELD_GET(PCI_REBAR_CTRL_BAR_IDX, val); 1083 - if (ep->epf_bar[bar]) 1084 - pci_epc_bar_size_to_rebar_cap(ep->epf_bar[bar]->size, &val); 875 + if (ep_func->epf_bar[bar]) 876 + pci_epc_bar_size_to_rebar_cap(ep_func->epf_bar[bar]->size, &val); 1085 877 else 1086 878 val = BIT(4); 1087 879 1088 - dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, val); 880 + dw_pcie_ep_writel_dbi(ep, func_no, offset + PCI_REBAR_CAP, val); 1089 881 } 1090 882 } 883 + } 884 + 885 + static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci) 886 + { 887 + struct dw_pcie_ep *ep = &pci->ep; 888 + u8 funcs = ep->epc->max_functions; 889 + u8 func_no; 890 + 891 + dw_pcie_dbi_ro_wr_en(pci); 892 + 893 + for (func_no = 0; func_no < funcs; func_no++) 894 + dw_pcie_ep_init_rebar_registers(ep, func_no); 1091 895 1092 896 dw_pcie_setup(pci); 1093 897 dw_pcie_dbi_ro_wr_dis(pci); ··· 1187 967 if (ep->ops->init) 1188 968 ep->ops->init(ep); 1189 969 970 + /* 971 + * PCIe r6.0, section 7.9.15 states that for endpoints that support 972 + * PTM, this capability structure is required in exactly one 973 + * function, which controls the PTM behavior of all PTM capable 974 + * functions. This indicates the PTM capability structure 975 + * represents controller-level registers rather than per-function 976 + * registers. 
977 + * 978 + * Therefore, PTM capability registers are configured using the 979 + * standard DBI accessors, instead of func_no indexed per-function 980 + * accessors. 981 + */ 1190 982 ptm_cap_base = dw_pcie_find_ext_capability(pci, PCI_EXT_CAP_ID_PTM); 1191 983 1192 984 /* ··· 1319 1087 struct device *dev = pci->dev; 1320 1088 1321 1089 INIT_LIST_HEAD(&ep->func_list); 1090 + ep->msi_iatu_mapped = false; 1091 + ep->msi_msg_addr = 0; 1092 + ep->msi_map_size = 0; 1322 1093 1323 1094 epc = devm_pci_epc_create(dev, &epc_ops); 1324 1095 if (IS_ERR(epc)) {
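The endpoint hunk above turns the per-MSI map/unmap into a program-once window: the iATU is configured on the first doorbell write, and later calls only verify the cached target before writing the payload. That caching policy can be sketched in isolation; `struct msi_window` and `msi_window_check()` below are stand-ins for the new `ep->msi_iatu_mapped`/`msi_msg_addr`/`msi_map_size` state, not the kernel's types:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for the cached MSI outbound-window state added to dw_pcie_ep. */
struct msi_window {
	bool mapped;
	uint64_t msg_addr;
	size_t map_size;
};

/*
 * Return 0 if the cached outbound window can service this MSI, -1 if the
 * host moved the MSI target (or resized the mapping), which would require
 * reprogramming the iATU at runtime -- unsafe while AXI traffic may be in
 * flight, so the caller bails out instead.
 */
static int msi_window_check(struct msi_window *w, uint64_t msg_addr,
			    size_t map_size)
{
	if (!w->mapped) {
		/* First MSI: program the iATU once and cache the target. */
		w->mapped = true;
		w->msg_addr = msg_addr;
		w->map_size = map_size;
		return 0;
	}

	if (w->msg_addr != msg_addr || w->map_size != map_size)
		return -1;

	return 0;
}
```

The design choice mirrors the comment in the diff: the DWC databook forbids updating iATU registers with operations in flight, so a stale-but-stable mapping plus a hard failure on change is safer than reprogramming per interrupt.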
+156 -61
drivers/pci/controller/dwc/pcie-designware-host.c
··· 255 255 u64 msi_target = (u64)pp->msi_data; 256 256 u32 ctrl, num_ctrls; 257 257 258 - if (!pci_msi_enabled() || !pp->has_msi_ctrl) 258 + if (!pci_msi_enabled() || !pp->use_imsi_rx) 259 259 return; 260 260 261 261 num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; ··· 367 367 * order not to miss MSI TLPs from those devices the MSI target 368 368 * address has to be within the lowest 4GB. 369 369 * 370 - * Note until there is a better alternative found the reservation is 371 - * done by allocating from the artificially limited DMA-coherent 372 - * memory. 370 + * Per DWC databook r6.21a, section 3.10.2.3, the incoming MWr TLP 371 + * targeting the MSI_CTRL_ADDR is terminated by the iMSI-RX and never 372 + * appears on the AXI bus. So MSI_CTRL_ADDR address doesn't need to be 373 + * mapped and can be any memory that doesn't get allocated for the BAR 374 + * memory. Since most of the platforms provide 32-bit address for 375 + * 'config' region, try cfg0_base as the first option for the MSI target 376 + * address if it's a 32-bit address. Otherwise, try 32-bit and 64-bit 377 + * coherent memory allocation one by one. 373 378 */ 379 + if (!(pp->cfg0_base & GENMASK_ULL(63, 32))) { 380 + pp->msi_data = pp->cfg0_base; 381 + return 0; 382 + } 383 + 374 384 ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32)); 375 385 if (!ret) 376 386 msi_vaddr = dmam_alloc_coherent(dev, sizeof(u64), &pp->msi_data, ··· 603 593 } 604 594 605 595 if (pci_msi_enabled()) { 606 - pp->has_msi_ctrl = !(pp->ops->msi_init || 596 + pp->use_imsi_rx = !(pp->ops->msi_init || 607 597 of_property_present(np, "msi-parent") || 608 598 of_property_present(np, "msi-map")); 609 599 610 600 /* 611 - * For the has_msi_ctrl case the default assignment is handled 601 + * For the use_imsi_rx case the default assignment is handled 612 602 * in the dw_pcie_msi_host_init(). 
613 603 */ 614 - if (!pp->has_msi_ctrl && !pp->num_vectors) { 604 + if (!pp->use_imsi_rx && !pp->num_vectors) { 615 605 pp->num_vectors = MSI_DEF_NUM_VECTORS; 616 606 } else if (pp->num_vectors > MAX_MSI_IRQS) { 617 607 dev_err(dev, "Invalid number of vectors\n"); ··· 623 613 ret = pp->ops->msi_init(pp); 624 614 if (ret < 0) 625 615 goto err_deinit_host; 626 - } else if (pp->has_msi_ctrl) { 616 + } else if (pp->use_imsi_rx) { 627 617 ret = dw_pcie_msi_host_init(pp); 628 618 if (ret < 0) 629 619 goto err_deinit_host; ··· 640 630 ret = of_pci_get_equalization_presets(dev, &pp->presets, pci->num_lanes); 641 631 if (ret) 642 632 goto err_free_msi; 643 - 644 - if (pp->ecam_enabled) { 645 - ret = dw_pcie_config_ecam_iatu(pp); 646 - if (ret) { 647 - dev_err(dev, "Failed to configure iATU in ECAM mode\n"); 648 - goto err_free_msi; 649 - } 650 - } 651 633 652 634 /* 653 635 * Allocate the resource for MSG TLP before programming the iATU ··· 668 666 } 669 667 670 668 /* 671 - * Note: Skip the link up delay only when a Link Up IRQ is present. 672 - * If there is no Link Up IRQ, we should not bypass the delay 673 - * because that would require users to manually rescan for devices. 669 + * Only fail on timeout error. Other errors indicate the device may 670 + * become available later, so continue without failing. 
674 671 */ 675 - if (!pp->use_linkup_irq) 676 - /* Ignore errors, the link may come up later */ 677 - dw_pcie_wait_for_link(pci); 672 + ret = dw_pcie_wait_for_link(pci); 673 + if (ret == -ETIMEDOUT) 674 + goto err_stop_link; 678 675 679 676 ret = pci_host_probe(bridge); 680 677 if (ret) ··· 693 692 dw_pcie_edma_remove(pci); 694 693 695 694 err_free_msi: 696 - if (pp->has_msi_ctrl) 695 + if (pp->use_imsi_rx) 697 696 dw_pcie_free_msi(pp); 698 697 699 698 err_deinit_host: ··· 721 720 722 721 dw_pcie_edma_remove(pci); 723 722 724 - if (pp->has_msi_ctrl) 723 + if (pp->use_imsi_rx) 725 724 dw_pcie_free_msi(pp); 726 725 727 726 if (pp->ops->deinit) ··· 875 874 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 876 875 struct dw_pcie_ob_atu_cfg atu = { 0 }; 877 876 struct resource_entry *entry; 877 + int ob_iatu_index; 878 + int ib_iatu_index; 878 879 int i, ret; 879 880 880 - /* Note the very first outbound ATU is used for CFG IOs */ 881 881 if (!pci->num_ob_windows) { 882 882 dev_err(pci->dev, "No outbound iATU found\n"); 883 883 return -EINVAL; ··· 894 892 for (i = 0; i < pci->num_ib_windows; i++) 895 893 dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_IB, i); 896 894 897 - i = 0; 895 + /* 896 + * NOTE: For outbound address translation, outbound iATU at index 0 is 897 + * reserved for CFG IOs (dw_pcie_other_conf_map_bus()), thus start at 898 + * index 1. 899 + * 900 + * If using ECAM, outbound iATU at index 0 and index 1 is reserved for 901 + * CFG IOs. 
902 + */ 903 + if (pp->ecam_enabled) { 904 + ob_iatu_index = 2; 905 + ret = dw_pcie_config_ecam_iatu(pp); 906 + if (ret) { 907 + dev_err(pci->dev, "Failed to configure iATU in ECAM mode\n"); 908 + return ret; 909 + } 910 + } else { 911 + ob_iatu_index = 1; 912 + } 913 + 898 914 resource_list_for_each_entry(entry, &pp->bridge->windows) { 915 + resource_size_t res_size; 916 + 899 917 if (resource_type(entry->res) != IORESOURCE_MEM) 900 918 continue; 901 919 902 - if (pci->num_ob_windows <= ++i) 903 - break; 904 - 905 - atu.index = i; 906 920 atu.type = PCIE_ATU_TYPE_MEM; 907 921 atu.parent_bus_addr = entry->res->start - pci->parent_bus_offset; 908 922 atu.pci_addr = entry->res->start - entry->offset; 909 923 910 924 /* Adjust iATU size if MSG TLP region was allocated before */ 911 925 if (pp->msg_res && pp->msg_res->parent == entry->res) 912 - atu.size = resource_size(entry->res) - 926 + res_size = resource_size(entry->res) - 913 927 resource_size(pp->msg_res); 914 928 else 915 - atu.size = resource_size(entry->res); 929 + res_size = resource_size(entry->res); 916 930 917 - ret = dw_pcie_prog_outbound_atu(pci, &atu); 918 - if (ret) { 919 - dev_err(pci->dev, "Failed to set MEM range %pr\n", 920 - entry->res); 921 - return ret; 931 + while (res_size > 0) { 932 + /* 933 + * Return failure if we run out of windows in the 934 + * middle. Otherwise, we would end up only partially 935 + * mapping a single resource. 
936 + */ 937 + if (ob_iatu_index >= pci->num_ob_windows) { 938 + dev_err(pci->dev, "Cannot add outbound window for region: %pr\n", 939 + entry->res); 940 + return -ENOMEM; 941 + } 942 + 943 + atu.index = ob_iatu_index; 944 + atu.size = MIN(pci->region_limit + 1, res_size); 945 + 946 + ret = dw_pcie_prog_outbound_atu(pci, &atu); 947 + if (ret) { 948 + dev_err(pci->dev, "Failed to set MEM range %pr\n", 949 + entry->res); 950 + return ret; 951 + } 952 + 953 + ob_iatu_index++; 954 + atu.parent_bus_addr += atu.size; 955 + atu.pci_addr += atu.size; 956 + res_size -= atu.size; 922 957 } 923 958 } 924 959 925 960 if (pp->io_size) { 926 - if (pci->num_ob_windows > ++i) { 927 - atu.index = i; 961 + if (ob_iatu_index < pci->num_ob_windows) { 962 + atu.index = ob_iatu_index; 928 963 atu.type = PCIE_ATU_TYPE_IO; 929 964 atu.parent_bus_addr = pp->io_base - pci->parent_bus_offset; 930 965 atu.pci_addr = pp->io_bus_addr; ··· 973 934 entry->res); 974 935 return ret; 975 936 } 937 + ob_iatu_index++; 976 938 } else { 939 + /* 940 + * If there are not enough outbound windows to give I/O 941 + * space its own iATU, the outbound iATU at index 0 will 942 + * be shared between I/O space and CFG IOs, by 943 + * temporarily reconfiguring the iATU to CFG space, in 944 + * order to do a CFG IO, and then immediately restoring 945 + * it to I/O space. This is only implemented when using 946 + * dw_pcie_other_conf_map_bus(), which is not the case 947 + * when using ECAM. 
948 + */ 949 + if (pp->ecam_enabled) { 950 + dev_err(pci->dev, "Cannot add outbound window for I/O\n"); 951 + return -ENOMEM; 952 + } 977 953 pp->cfg0_io_shared = true; 978 954 } 979 955 } 980 956 981 - if (pci->num_ob_windows <= i) 982 - dev_warn(pci->dev, "Ranges exceed outbound iATU size (%d)\n", 983 - pci->num_ob_windows); 957 + if (pp->use_atu_msg) { 958 + if (ob_iatu_index >= pci->num_ob_windows) { 959 + dev_err(pci->dev, "Cannot add outbound window for MSG TLP\n"); 960 + return -ENOMEM; 961 + } 962 + pp->msg_atu_index = ob_iatu_index++; 963 + } 984 964 985 - pp->msg_atu_index = i; 986 - 987 - i = 0; 965 + ib_iatu_index = 0; 988 966 resource_list_for_each_entry(entry, &pp->bridge->dma_ranges) { 967 + resource_size_t res_start, res_size, window_size; 968 + 989 969 if (resource_type(entry->res) != IORESOURCE_MEM) 990 970 continue; 991 971 992 - if (pci->num_ib_windows <= i) 993 - break; 972 + res_size = resource_size(entry->res); 973 + res_start = entry->res->start; 974 + while (res_size > 0) { 975 + /* 976 + * Return failure if we run out of windows in the 977 + * middle. Otherwise, we would end up only partially 978 + * mapping a single resource. 
979 + */ 980 + if (ib_iatu_index >= pci->num_ib_windows) { 981 + dev_err(pci->dev, "Cannot add inbound window for region: %pr\n", 982 + entry->res); 983 + return -ENOMEM; 984 + } 994 985 995 - ret = dw_pcie_prog_inbound_atu(pci, i++, PCIE_ATU_TYPE_MEM, 996 - entry->res->start, 997 - entry->res->start - entry->offset, 998 - resource_size(entry->res)); 999 - if (ret) { 1000 - dev_err(pci->dev, "Failed to set DMA range %pr\n", 1001 - entry->res); 1002 - return ret; 986 + window_size = MIN(pci->region_limit + 1, res_size); 987 + ret = dw_pcie_prog_inbound_atu(pci, ib_iatu_index, 988 + PCIE_ATU_TYPE_MEM, res_start, 989 + res_start - entry->offset, window_size); 990 + if (ret) { 991 + dev_err(pci->dev, "Failed to set DMA range %pr\n", 992 + entry->res); 993 + return ret; 994 + } 995 + 996 + ib_iatu_index++; 997 + res_start += window_size; 998 + res_size -= window_size; 1003 999 } 1004 1000 } 1005 - 1006 - if (pci->num_ib_windows <= i) 1007 - dev_warn(pci->dev, "Dma-ranges exceed inbound iATU size (%u)\n", 1008 - pci->num_ib_windows); 1009 1001 1010 1002 return 0; 1011 1003 } ··· 1159 1089 * the platform uses its own address translation component rather than 1160 1090 * ATU, so we should not program the ATU here. 1161 1091 */ 1162 - if (pp->bridge->child_ops == &dw_child_pcie_ops) { 1092 + if (pp->bridge->child_ops == &dw_child_pcie_ops || pp->ecam_enabled) { 1163 1093 ret = dw_pcie_iatu_setup(pp); 1164 1094 if (ret) 1165 1095 return ret; ··· 1175 1105 dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val); 1176 1106 1177 1107 dw_pcie_dbi_ro_wr_dis(pci); 1108 + 1109 + /* 1110 + * The iMSI-RX module does not support receiving MSI or MSI-X generated 1111 + * by the Root Port. If iMSI-RX is used as the MSI controller, remove 1112 + * the MSI and MSI-X capabilities of the Root Port to allow the drivers 1113 + * to fall back to INTx instead. 
1114 + */ 1115 + if (pp->use_imsi_rx) { 1116 + dw_pcie_remove_capability(pci, PCI_CAP_ID_MSI); 1117 + dw_pcie_remove_capability(pci, PCI_CAP_ID_MSIX); 1118 + } 1178 1119 1179 1120 return 0; 1180 1121 } ··· 1230 1149 int dw_pcie_suspend_noirq(struct dw_pcie *pci) 1231 1150 { 1232 1151 u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 1152 + int ret = 0; 1233 1153 u32 val; 1234 - int ret; 1154 + 1155 + if (!dw_pcie_link_up(pci)) 1156 + goto stop_link; 1235 1157 1236 1158 /* 1237 1159 * If L1SS is supported, then do not put the link into L2 as some ··· 1249 1165 ret = dw_pcie_pme_turn_off(pci); 1250 1166 if (ret) 1251 1167 return ret; 1168 + } 1169 + 1170 + /* 1171 + * Some SoCs do not support reading the LTSSM register after 1172 + * PME_Turn_Off broadcast. For those SoCs, skip waiting for L2/L3 Ready 1173 + * state and wait 10ms as recommended in PCIe spec r6.0, sec 5.3.3.2.1. 1174 + */ 1175 + if (pci->pp.skip_l23_ready) { 1176 + mdelay(PCIE_PME_TO_L2_TIMEOUT_US/1000); 1177 + goto stop_link; 1252 1178 } 1253 1179 1254 1180 ret = read_poll_timeout(dw_pcie_get_ltssm, val, ··· 1279 1185 */ 1280 1186 udelay(1); 1281 1187 1188 + stop_link: 1282 1189 dw_pcie_stop_link(pci); 1283 1190 if (pci->pp.ops->deinit) 1284 1191 pci->pp.ops->deinit(&pci->pp);
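The rewritten `dw_pcie_iatu_setup()` above carves each bridge window into chunks of at most `pci->region_limit + 1` bytes and fails hard if the iATU windows run out mid-resource, instead of warning and partially mapping. The window accounting can be sketched on its own; `split_into_windows()` is a hypothetical helper, not kernel code:

```c
#include <assert.h>
#include <stdint.h>

static uint64_t min_u64(uint64_t a, uint64_t b)
{
	return a < b ? a : b;
}

/*
 * Split a resource of res_size bytes into iATU windows of at most
 * max_win bytes each, starting at first_index. Return the number of
 * windows consumed, or -1 if num_windows would be exceeded -- mirroring
 * the "fail rather than partially map a single resource" policy above.
 */
static int split_into_windows(uint64_t res_size, uint64_t max_win,
			      int first_index, int num_windows)
{
	int index = first_index;

	while (res_size > 0) {
		if (index >= num_windows)
			return -1;
		res_size -= min_u64(max_win, res_size);
		index++;
	}

	return index - first_index;
}
```

For example, a 3 MiB window with a 1 MiB region limit consumes three iATU indexes, and the same window starting at index 6 of an 8-window controller fails outright rather than mapping only the first two megabytes.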
+1
drivers/pci/controller/dwc/pcie-designware-plat.c
··· 61 61 } 62 62 63 63 static const struct pci_epc_features dw_plat_pcie_epc_features = { 64 + DWC_EPC_COMMON_FEATURES, 64 65 .msi_capable = true, 65 66 .msix_capable = true, 66 67 };
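`DWC_EPC_COMMON_FEATURES` lets each glue driver splice the shared feature bits into its own designated-initializer list, as the plat driver does above. The C pattern it relies on, sketched with stand-in names (`epc_features` and `COMMON_FEATURES` here are not the kernel's identifiers):

```c
#include <assert.h>
#include <stdbool.h>

struct epc_features {
	bool dynamic_inbound_mapping;
	bool subrange_mapping;
	bool msi_capable;
	bool msix_capable;
};

/* Shared designated-initializer fragment, expanded inside each list. */
#define COMMON_FEATURES .dynamic_inbound_mapping = true, \
			.subrange_mapping = true

static const struct epc_features plat_features = {
	COMMON_FEATURES,	/* common bits first */
	.msi_capable = true,	/* driver-specific bits follow */
	.msix_capable = true,
};
```

Because the macro expands to designated initializers, drivers keep full control of their remaining fields while the common defaults stay in one place.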
+147 -5
drivers/pci/controller/dwc/pcie-designware.c
··· 226 226 u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap) 227 227 { 228 228 return PCI_FIND_NEXT_CAP(dw_pcie_read_cfg, PCI_CAPABILITY_LIST, cap, 229 - pci); 229 + NULL, pci); 230 230 } 231 231 EXPORT_SYMBOL_GPL(dw_pcie_find_capability); 232 232 233 233 u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap) 234 234 { 235 - return PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, pci); 235 + return PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, NULL, pci); 236 236 } 237 237 EXPORT_SYMBOL_GPL(dw_pcie_find_ext_capability); 238 + 239 + void dw_pcie_remove_capability(struct dw_pcie *pci, u8 cap) 240 + { 241 + u8 cap_pos, pre_pos, next_pos; 242 + u16 reg; 243 + 244 + cap_pos = PCI_FIND_NEXT_CAP(dw_pcie_read_cfg, PCI_CAPABILITY_LIST, cap, 245 + &pre_pos, pci); 246 + if (!cap_pos) 247 + return; 248 + 249 + reg = dw_pcie_readw_dbi(pci, cap_pos); 250 + next_pos = (reg & 0xff00) >> 8; 251 + 252 + dw_pcie_dbi_ro_wr_en(pci); 253 + if (pre_pos == PCI_CAPABILITY_LIST) 254 + dw_pcie_writeb_dbi(pci, PCI_CAPABILITY_LIST, next_pos); 255 + else 256 + dw_pcie_writeb_dbi(pci, pre_pos + 1, next_pos); 257 + dw_pcie_dbi_ro_wr_dis(pci); 258 + } 259 + EXPORT_SYMBOL_GPL(dw_pcie_remove_capability); 260 + 261 + void dw_pcie_remove_ext_capability(struct dw_pcie *pci, u8 cap) 262 + { 263 + int cap_pos, next_pos, pre_pos; 264 + u32 pre_header, header; 265 + 266 + cap_pos = PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, 0, cap, &pre_pos, pci); 267 + if (!cap_pos) 268 + return; 269 + 270 + header = dw_pcie_readl_dbi(pci, cap_pos); 271 + 272 + /* 273 + * If the first cap at offset PCI_CFG_SPACE_SIZE is removed, 274 + * only set its capid to zero as it cannot be skipped. 
275 + */ 276 + if (cap_pos == PCI_CFG_SPACE_SIZE) { 277 + dw_pcie_dbi_ro_wr_en(pci); 278 + dw_pcie_writel_dbi(pci, cap_pos, header & 0xffff0000); 279 + dw_pcie_dbi_ro_wr_dis(pci); 280 + return; 281 + } 282 + 283 + pre_header = dw_pcie_readl_dbi(pci, pre_pos); 284 + next_pos = PCI_EXT_CAP_NEXT(header); 285 + 286 + dw_pcie_dbi_ro_wr_en(pci); 287 + dw_pcie_writel_dbi(pci, pre_pos, 288 + (pre_header & 0xfffff) | (next_pos << 20)); 289 + dw_pcie_dbi_ro_wr_dis(pci); 290 + } 291 + EXPORT_SYMBOL_GPL(dw_pcie_remove_ext_capability); 238 292 239 293 static u16 __dw_pcie_find_vsec_capability(struct dw_pcie *pci, u16 vendor_id, 240 294 u16 vsec_id) ··· 300 246 return 0; 301 247 302 248 while ((vsec = PCI_FIND_NEXT_EXT_CAP(dw_pcie_read_cfg, vsec, 303 - PCI_EXT_CAP_ID_VNDR, pci))) { 249 + PCI_EXT_CAP_ID_VNDR, NULL, pci))) { 304 250 header = dw_pcie_readl_dbi(pci, vsec + PCI_VNDR_HEADER); 305 251 if (PCI_VNDR_HEADER_ID(header) == vsec_id) 306 252 return vsec; ··· 532 478 u32 retries, val; 533 479 u64 limit_addr; 534 480 481 + if (atu->index >= pci->num_ob_windows) 482 + return -ENOSPC; 483 + 535 484 limit_addr = parent_bus_addr + atu->size - 1; 536 485 537 486 if ((limit_addr & ~pci->region_limit) != (parent_bus_addr & ~pci->region_limit) || ··· 607 550 { 608 551 u64 limit_addr = pci_addr + size - 1; 609 552 u32 retries, val; 553 + 554 + if (index >= pci->num_ib_windows) 555 + return -ENOSPC; 610 556 611 557 if ((limit_addr & ~pci->region_limit) != (pci_addr & ~pci->region_limit) || 612 558 !IS_ALIGNED(parent_bus_addr, pci->region_align) || ··· 699 639 dw_pcie_writel_atu(pci, dir, index, PCIE_ATU_REGION_CTRL2, 0); 700 640 } 701 641 642 + const char *dw_pcie_ltssm_status_string(enum dw_pcie_ltssm ltssm) 643 + { 644 + const char *str; 645 + 646 + switch (ltssm) { 647 + #define DW_PCIE_LTSSM_NAME(n) case n: str = #n; break 648 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_QUIET); 649 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_ACT); 650 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_ACTIVE); 651 
+ DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_COMPLIANCE); 652 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_POLL_CONFIG); 653 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_PRE_DETECT_QUIET); 654 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DETECT_WAIT); 655 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_START); 656 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LINKWD_ACEPT); 657 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_WAI); 658 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_LANENUM_ACEPT); 659 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_COMPLETE); 660 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_CFG_IDLE); 661 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_LOCK); 662 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_SPEED); 663 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_RCVRCFG); 664 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_IDLE); 665 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0); 666 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L0S); 667 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L123_SEND_EIDLE); 668 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_IDLE); 669 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_IDLE); 670 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L2_WAKE); 671 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_ENTRY); 672 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED_IDLE); 673 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_DISABLED); 674 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ENTRY); 675 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_ACTIVE); 676 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT); 677 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_LPBK_EXIT_TIMEOUT); 678 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET_ENTRY); 679 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_HOT_RESET); 680 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ0); 681 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ1); 682 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ2); 683 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_RCVRY_EQ3); 684 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_1); 685 + DW_PCIE_LTSSM_NAME(DW_PCIE_LTSSM_L1_2); 686 + default: 687 + str = "DW_PCIE_LTSSM_UNKNOWN"; 688 + break; 689 + } 690 + 691 + return str + strlen("DW_PCIE_LTSSM_"); 692 + } 693 + 694 + /** 695 + * 
dw_pcie_wait_for_link - Wait for the PCIe link to be up 696 + * @pci: DWC instance 697 + * 698 + * Returns: 0 if link is up, -ENODEV if device is not found, -EIO if the device 699 + * is found but not active, and -ETIMEDOUT if the link fails to come up for other 700 + * reasons. 701 + */ 702 702 int dw_pcie_wait_for_link(struct dw_pcie *pci) 703 703 { 704 - u32 offset, val; 704 + u32 offset, val, ltssm; 705 705 int retries; 706 706 707 707 /* Check if the link is up or not */ ··· 773 653 } 774 654 775 655 if (retries >= PCIE_LINK_WAIT_MAX_RETRIES) { 776 - dev_info(pci->dev, "Phy link never came up\n"); 656 + /* 657 + * If the link is in Detect.Quiet or Detect.Active state, it 658 + * indicates that no device is detected. 659 + */ 660 + ltssm = dw_pcie_get_ltssm(pci); 661 + if (ltssm == DW_PCIE_LTSSM_DETECT_QUIET || 662 + ltssm == DW_PCIE_LTSSM_DETECT_ACT) { 663 + dev_info(pci->dev, "Device not found\n"); 664 + return -ENODEV; 665 + 666 + /* 667 + * If the link is in POLL.{Active/Compliance} state, then the 668 + * device is found to be connected to the bus, but it is not 669 + * active, i.e., the device firmware might not have been initialized yet. 670 + */ 671 + } else if (ltssm == DW_PCIE_LTSSM_POLL_ACTIVE || 672 + ltssm == DW_PCIE_LTSSM_POLL_COMPLIANCE) { 673 + dev_info(pci->dev, "Device found, but not active\n"); 674 + return -EIO; 675 + } 676 + 677 + dev_err(pci->dev, "Link failed to come up. LTSSM: %s\n", 678 + dw_pcie_ltssm_status_string(ltssm)); 777 679 return -ETIMEDOUT; 778 680 } 779 681
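The new `dw_pcie_remove_capability()` in this file splices a Capability out of the legacy config-space linked list by rewriting the predecessor's next pointer (or `PCI_CAPABILITY_LIST` itself when the victim is first). The same unlink can be modeled on a plain byte array; `remove_cap()` and the flat 256-byte `cfg` model below are illustrative only, since the kernel goes through DBI accessors:

```c
#include <assert.h>
#include <stdint.h>

#define CAP_LIST_HEAD 0x34	/* mirrors PCI_CAPABILITY_LIST */

/*
 * Unlink a capability from the legacy (byte-offset) capability list in a
 * 256-byte model of config space: cfg[pos] holds the Capability ID and
 * cfg[pos + 1] the next pointer.
 */
static void remove_cap(uint8_t *cfg, uint8_t cap_id)
{
	uint8_t prev = CAP_LIST_HEAD;
	uint8_t pos = cfg[CAP_LIST_HEAD];

	while (pos && cfg[pos] != cap_id) {
		prev = pos;
		pos = cfg[pos + 1];
	}
	if (!pos)
		return;		/* capability not advertised */

	if (prev == CAP_LIST_HEAD)
		cfg[CAP_LIST_HEAD] = cfg[pos + 1];	/* victim was first */
	else
		cfg[prev + 1] = cfg[pos + 1];		/* splice around it */
}
```

The extended-capability variant works the same way, except the next pointer lives in bits 31:20 of the 32-bit header, and the first capability at offset 0x100 cannot be skipped over, so its ID is zeroed instead.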
+25 -11
drivers/pci/controller/dwc/pcie-designware.h
··· 305 305 /* Default eDMA LLP memory size */ 306 306 #define DMA_LLP_MEM_SIZE PAGE_SIZE 307 307 308 + /* Common struct pci_epc_feature bits among DWC EP glue drivers */ 309 + #define DWC_EPC_COMMON_FEATURES .dynamic_inbound_mapping = true, \ 310 + .subrange_mapping = true 311 + 308 312 struct dw_pcie; 309 313 struct dw_pcie_rp; 310 314 struct dw_pcie_ep; ··· 392 388 DW_PCIE_LTSSM_RCVRY_EQ2 = 0x22, 393 389 DW_PCIE_LTSSM_RCVRY_EQ3 = 0x23, 394 390 391 + /* Vendor glue drivers provide pseudo L1 substates from get_ltssm() */ 392 + DW_PCIE_LTSSM_L1_1 = 0x141, 393 + DW_PCIE_LTSSM_L1_2 = 0x142, 394 + 395 395 DW_PCIE_LTSSM_UNKNOWN = 0xFFFFFFFF, 396 396 }; 397 397 ··· 420 412 }; 421 413 422 414 struct dw_pcie_rp { 423 - bool has_msi_ctrl:1; 415 + bool use_imsi_rx:1; 424 416 bool cfg0_io_shared:1; 425 417 u64 cfg0_base; 426 418 void __iomem *va_cfg0_base; ··· 442 434 bool use_atu_msg; 443 435 int msg_atu_index; 444 436 struct resource *msg_res; 445 - bool use_linkup_irq; 446 437 struct pci_eq_presets presets; 447 438 struct pci_config_window *cfg; 448 439 bool ecam_enabled; 449 440 bool native_ecam; 441 + bool skip_l23_ready; 450 442 }; 451 443 452 444 struct dw_pcie_ep_ops { ··· 471 463 u8 func_no; 472 464 u8 msi_cap; /* MSI capability offset */ 473 465 u8 msix_cap; /* MSI-X capability offset */ 466 + u8 bar_to_atu[PCI_STD_NUM_BARS]; 467 + struct pci_epf_bar *epf_bar[PCI_STD_NUM_BARS]; 468 + 469 + /* Only for Address Match Mode inbound iATU */ 470 + u32 *ib_atu_indexes[PCI_STD_NUM_BARS]; 471 + unsigned int num_ib_atu_indexes[PCI_STD_NUM_BARS]; 474 472 }; 475 473 476 474 struct dw_pcie_ep { ··· 486 472 phys_addr_t phys_base; 487 473 size_t addr_size; 488 474 size_t page_size; 489 - u8 bar_to_atu[PCI_STD_NUM_BARS]; 490 475 phys_addr_t *outbound_addr; 491 476 unsigned long *ib_window_map; 492 477 unsigned long *ob_window_map; 493 478 void __iomem *msi_mem; 494 479 phys_addr_t msi_mem_phys; 495 - struct pci_epf_bar *epf_bar[PCI_STD_NUM_BARS]; 480 + 481 + /* MSI outbound iATU 
state */ 482 + bool msi_iatu_mapped; 483 + u64 msi_msg_addr; 484 + size_t msi_map_size; 496 485 }; 497 486 498 487 struct dw_pcie_ops { ··· 578 561 579 562 u8 dw_pcie_find_capability(struct dw_pcie *pci, u8 cap); 580 563 u16 dw_pcie_find_ext_capability(struct dw_pcie *pci, u8 cap); 564 + void dw_pcie_remove_capability(struct dw_pcie *pci, u8 cap); 565 + void dw_pcie_remove_ext_capability(struct dw_pcie *pci, u8 cap); 581 566 u16 dw_pcie_find_rasdes_capability(struct dw_pcie *pci); 582 567 u16 dw_pcie_find_ptm_capability(struct dw_pcie *pci); 583 568 ··· 828 809 return (enum dw_pcie_ltssm)FIELD_GET(PORT_LOGIC_LTSSM_STATE_MASK, val); 829 810 } 830 811 812 + const char *dw_pcie_ltssm_status_string(enum dw_pcie_ltssm ltssm); 813 + 831 814 #ifdef CONFIG_PCIE_DW_HOST 832 815 int dw_pcie_suspend_noirq(struct dw_pcie *pci); 833 816 int dw_pcie_resume_noirq(struct dw_pcie *pci); ··· 911 890 int dw_pcie_ep_raise_msix_irq_doorbell(struct dw_pcie_ep *ep, u8 func_no, 912 891 u16 interrupt_num); 913 892 void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar); 914 - int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, u8 prev_cap, u8 cap); 915 893 struct dw_pcie_ep_func * 916 894 dw_pcie_ep_get_func_from_ep(struct dw_pcie_ep *ep, u8 func_no); 917 895 #else ··· 966 946 967 947 static inline void dw_pcie_ep_reset_bar(struct dw_pcie *pci, enum pci_barno bar) 968 948 { 969 - } 970 - 971 - static inline int dw_pcie_ep_hide_ext_capability(struct dw_pcie *pci, 972 - u8 prev_cap, u8 cap) 973 - { 974 - return 0; 975 949 } 976 950 977 951 static inline struct dw_pcie_ep_func *
+30 -63
drivers/pci/controller/dwc/pcie-dw-rockchip.c
··· 68 68 #define PCIE_CLKREQ_NOT_READY FIELD_PREP_WM16(BIT(0), 0) 69 69 #define PCIE_CLKREQ_PULL_DOWN FIELD_PREP_WM16(GENMASK(13, 12), 1) 70 70 71 + /* RASDES TBA information */ 72 + #define PCIE_CLIENT_CDM_RASDES_TBA_INFO_CMN 0x154 73 + #define PCIE_CLIENT_CDM_RASDES_TBA_L1_1 BIT(4) 74 + #define PCIE_CLIENT_CDM_RASDES_TBA_L1_2 BIT(5) 75 + 71 76 /* Hot Reset Control Register */ 72 77 #define PCIE_CLIENT_HOT_RESET_CTRL 0x180 73 78 #define PCIE_LTSSM_APP_DLY2_EN BIT(1) ··· 186 181 return 0; 187 182 } 188 183 189 - static u32 rockchip_pcie_get_ltssm(struct rockchip_pcie *rockchip) 184 + static u32 rockchip_pcie_get_ltssm_reg(struct rockchip_pcie *rockchip) 190 185 { 191 186 return rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_LTSSM_STATUS); 187 + } 188 + 189 + static enum dw_pcie_ltssm rockchip_pcie_get_ltssm(struct dw_pcie *pci) 190 + { 191 + struct rockchip_pcie *rockchip = to_rockchip_pcie(pci); 192 + u32 val = rockchip_pcie_readl_apb(rockchip, 193 + PCIE_CLIENT_CDM_RASDES_TBA_INFO_CMN); 194 + 195 + if (val & PCIE_CLIENT_CDM_RASDES_TBA_L1_1) 196 + return DW_PCIE_LTSSM_L1_1; 197 + 198 + if (val & PCIE_CLIENT_CDM_RASDES_TBA_L1_2) 199 + return DW_PCIE_LTSSM_L1_2; 200 + 201 + return rockchip_pcie_get_ltssm_reg(rockchip) & PCIE_LTSSM_STATUS_MASK; 192 202 } 193 203 194 204 static void rockchip_pcie_enable_ltssm(struct rockchip_pcie *rockchip) ··· 221 201 static bool rockchip_pcie_link_up(struct dw_pcie *pci) 222 202 { 223 203 struct rockchip_pcie *rockchip = to_rockchip_pcie(pci); 224 - u32 val = rockchip_pcie_get_ltssm(rockchip); 204 + u32 val = rockchip_pcie_get_ltssm_reg(rockchip); 225 205 226 206 return FIELD_GET(PCIE_LINKUP_MASK, val) == PCIE_LINKUP; 227 207 } ··· 347 327 if (!of_device_is_compatible(dev->of_node, "rockchip,rk3588-pcie-ep")) 348 328 return; 349 329 350 - if (dw_pcie_ep_hide_ext_capability(pci, PCI_EXT_CAP_ID_SECPCI, 351 - PCI_EXT_CAP_ID_ATS)) 352 - dev_err(dev, "failed to hide ATS capability\n"); 330 + dw_pcie_remove_ext_capability(pci, 
PCI_EXT_CAP_ID_ATS); 353 331 } 354 332 355 333 static void rockchip_pcie_ep_init(struct dw_pcie_ep *ep) ··· 382 364 } 383 365 384 366 static const struct pci_epc_features rockchip_pcie_epc_features_rk3568 = { 367 + DWC_EPC_COMMON_FEATURES, 385 368 .linkup_notifier = true, 386 369 .msi_capable = true, 387 370 .msix_capable = true, ··· 403 384 * BARs) would be overwritten, resulting in (all other BARs) no longer working. 404 385 */ 405 386 static const struct pci_epc_features rockchip_pcie_epc_features_rk3588 = { 387 + DWC_EPC_COMMON_FEATURES, 406 388 .linkup_notifier = true, 407 389 .msi_capable = true, 408 390 .msix_capable = true, ··· 505 485 .link_up = rockchip_pcie_link_up, 506 486 .start_link = rockchip_pcie_start_link, 507 487 .stop_link = rockchip_pcie_stop_link, 488 + .get_ltssm = rockchip_pcie_get_ltssm, 508 489 }; 509 - 510 - static irqreturn_t rockchip_pcie_rc_sys_irq_thread(int irq, void *arg) 511 - { 512 - struct rockchip_pcie *rockchip = arg; 513 - struct dw_pcie *pci = &rockchip->pci; 514 - struct dw_pcie_rp *pp = &pci->pp; 515 - struct device *dev = pci->dev; 516 - u32 reg; 517 - 518 - reg = rockchip_pcie_readl_apb(rockchip, PCIE_CLIENT_INTR_STATUS_MISC); 519 - rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC); 520 - 521 - dev_dbg(dev, "PCIE_CLIENT_INTR_STATUS_MISC: %#x\n", reg); 522 - dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm(rockchip)); 523 - 524 - if (reg & PCIE_RDLH_LINK_UP_CHGED) { 525 - if (rockchip_pcie_link_up(pci)) { 526 - msleep(PCIE_RESET_CONFIG_WAIT_MS); 527 - dev_dbg(dev, "Received Link up event. 
Starting enumeration!\n"); 528 - /* Rescan the bus to enumerate endpoint devices */ 529 - pci_lock_rescan_remove(); 530 - pci_rescan_bus(pp->bridge->bus); 531 - pci_unlock_rescan_remove(); 532 - } 533 - } 534 - 535 - return IRQ_HANDLED; 536 - } 537 490 538 491 static irqreturn_t rockchip_pcie_ep_sys_irq_thread(int irq, void *arg) 539 492 { ··· 519 526 rockchip_pcie_writel_apb(rockchip, reg, PCIE_CLIENT_INTR_STATUS_MISC); 520 527 521 528 dev_dbg(dev, "PCIE_CLIENT_INTR_STATUS_MISC: %#x\n", reg); 522 - dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm(rockchip)); 529 + dev_dbg(dev, "LTSSM_STATUS: %#x\n", rockchip_pcie_get_ltssm_reg(rockchip)); 523 530 524 531 if (reg & PCIE_LINK_REQ_RST_NOT_INT) { 525 532 dev_dbg(dev, "hot reset or link-down reset\n"); ··· 540 547 return IRQ_HANDLED; 541 548 } 542 549 543 - static int rockchip_pcie_configure_rc(struct platform_device *pdev, 544 - struct rockchip_pcie *rockchip) 550 + static int rockchip_pcie_configure_rc(struct rockchip_pcie *rockchip) 545 551 { 546 - struct device *dev = &pdev->dev; 547 552 struct dw_pcie_rp *pp; 548 - int irq, ret; 549 553 u32 val; 550 554 551 555 if (!IS_ENABLED(CONFIG_PCIE_ROCKCHIP_DW_HOST)) 552 556 return -ENODEV; 553 - 554 - irq = platform_get_irq_byname(pdev, "sys"); 555 - if (irq < 0) 556 - return irq; 557 - 558 - ret = devm_request_threaded_irq(dev, irq, NULL, 559 - rockchip_pcie_rc_sys_irq_thread, 560 - IRQF_ONESHOT, "pcie-sys-rc", rockchip); 561 - if (ret) { 562 - dev_err(dev, "failed to request PCIe sys IRQ\n"); 563 - return ret; 564 - } 565 557 566 558 /* LTSSM enable control mode */ 567 559 val = FIELD_PREP_WM16(PCIE_LTSSM_ENABLE_ENHANCE, 1); ··· 558 580 559 581 pp = &rockchip->pci.pp; 560 582 pp->ops = &rockchip_pcie_host_ops; 561 - pp->use_linkup_irq = true; 562 583 563 - ret = dw_pcie_host_init(pp); 564 - if (ret) { 565 - dev_err(dev, "failed to initialize host\n"); 566 - return ret; 567 - } 568 - 569 - /* unmask DLL up/down indicator */ 570 - val = 
FIELD_PREP_WM16(PCIE_RDLH_LINK_UP_CHGED, 0); 571 - rockchip_pcie_writel_apb(rockchip, val, PCIE_CLIENT_INTR_MASK_MISC); 572 - 573 - return ret; 584 + return dw_pcie_host_init(pp); 574 585 } 575 586 576 587 static int rockchip_pcie_configure_ep(struct platform_device *pdev, ··· 678 711 679 712 switch (data->mode) { 680 713 case DW_PCIE_RC_TYPE: 681 - ret = rockchip_pcie_configure_rc(pdev, rockchip); 714 + ret = rockchip_pcie_configure_rc(rockchip); 682 715 if (ret) 683 716 goto deinit_clk; 684 717 break;
+1
drivers/pci/controller/dwc/pcie-keembay.c
··· 309 309 } 310 310 311 311 static const struct pci_epc_features keembay_pcie_epc_features = { 312 + DWC_EPC_COMMON_FEATURES, 312 313 .msi_capable = true, 313 314 .msix_capable = true, 314 315 .bar[BAR_0] = { .only_64bit = true, },
+1
drivers/pci/controller/dwc/pcie-qcom-ep.c
··· 820 820 } 821 821 822 822 static const struct pci_epc_features qcom_pcie_epc_features = { 823 + DWC_EPC_COMMON_FEATURES, 823 824 .linkup_notifier = true, 824 825 .msi_capable = true, 825 826 .align = SZ_4K,
+6 -64
drivers/pci/controller/dwc/pcie-qcom.c
··· 56 56 #define PARF_AXI_MSTR_WR_ADDR_HALT_V2 0x1a8 57 57 #define PARF_Q2A_FLUSH 0x1ac 58 58 #define PARF_LTSSM 0x1b0 59 - #define PARF_INT_ALL_STATUS 0x224 60 - #define PARF_INT_ALL_CLEAR 0x228 61 - #define PARF_INT_ALL_MASK 0x22c 62 59 #define PARF_SID_OFFSET 0x234 63 60 #define PARF_BDF_TRANSLATE_CFG 0x24c 64 61 #define PARF_DBI_BASE_ADDR_V2 0x350 ··· 131 134 132 135 /* PARF_LTSSM register fields */ 133 136 #define LTSSM_EN BIT(8) 134 - 135 - /* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */ 136 - #define PARF_INT_ALL_LINK_UP BIT(13) 137 - #define PARF_INT_MSI_DEV_0_7 GENMASK(30, 23) 138 137 139 138 /* PARF_NO_SNOOP_OVERRIDE register fields */ 140 139 #define WR_NO_SNOOP_OVERRIDE_EN BIT(1) ··· 1306 1313 goto err_pwrctrl_power_off; 1307 1314 } 1308 1315 1316 + dw_pcie_remove_capability(pcie->pci, PCI_CAP_ID_MSIX); 1317 + dw_pcie_remove_ext_capability(pcie->pci, PCI_EXT_CAP_ID_DPC); 1318 + 1309 1319 qcom_ep_reset_deassert(pcie); 1310 1320 1311 1321 if (pcie->cfg->ops->config_sid) { ··· 1636 1640 qcom_pcie_link_transition_count); 1637 1641 } 1638 1642 1639 - static irqreturn_t qcom_pcie_global_irq_thread(int irq, void *data) 1640 - { 1641 - struct qcom_pcie *pcie = data; 1642 - struct dw_pcie_rp *pp = &pcie->pci->pp; 1643 - struct device *dev = pcie->pci->dev; 1644 - u32 status = readl_relaxed(pcie->parf + PARF_INT_ALL_STATUS); 1645 - 1646 - writel_relaxed(status, pcie->parf + PARF_INT_ALL_CLEAR); 1647 - 1648 - if (FIELD_GET(PARF_INT_ALL_LINK_UP, status)) { 1649 - msleep(PCIE_RESET_CONFIG_WAIT_MS); 1650 - dev_dbg(dev, "Received Link up event. Starting enumeration!\n"); 1651 - /* Rescan the bus to enumerate endpoint devices */ 1652 - pci_lock_rescan_remove(); 1653 - pci_rescan_bus(pp->bridge->bus); 1654 - pci_unlock_rescan_remove(); 1655 - 1656 - qcom_pcie_icc_opp_update(pcie); 1657 - } else { 1658 - dev_WARN_ONCE(dev, 1, "Received unknown event. 
INT_STATUS: 0x%08x\n", 1659 - status); 1660 - } 1661 - 1662 - return IRQ_HANDLED; 1663 - } 1664 - 1665 1643 static void qcom_pci_free_msi(void *ptr) 1666 1644 { 1667 1645 struct dw_pcie_rp *pp = (struct dw_pcie_rp *)ptr; 1668 1646 1669 - if (pp && pp->has_msi_ctrl) 1647 + if (pp && pp->use_imsi_rx) 1670 1648 dw_pcie_free_msi(pp); 1671 1649 } 1672 1650 ··· 1664 1694 if (ret) 1665 1695 return ret; 1666 1696 1667 - pp->has_msi_ctrl = true; 1697 + pp->use_imsi_rx = true; 1668 1698 dw_pcie_msi_init(pp); 1669 1699 1670 1700 return devm_add_action_or_reset(dev, qcom_pci_free_msi, pp); ··· 1780 1810 struct dw_pcie_rp *pp; 1781 1811 struct resource *res; 1782 1812 struct dw_pcie *pci; 1783 - int ret, irq; 1784 - char *name; 1813 + int ret; 1785 1814 1786 1815 pcie_cfg = of_device_get_match_data(dev); 1787 1816 if (!pcie_cfg) { ··· 1931 1962 1932 1963 platform_set_drvdata(pdev, pcie); 1933 1964 1934 - irq = platform_get_irq_byname_optional(pdev, "global"); 1935 - if (irq > 0) 1936 - pp->use_linkup_irq = true; 1937 - 1938 1965 ret = dw_pcie_host_init(pp); 1939 1966 if (ret) { 1940 1967 dev_err_probe(dev, ret, "cannot initialize host\n"); 1941 1968 goto err_phy_exit; 1942 - } 1943 - 1944 - name = devm_kasprintf(dev, GFP_KERNEL, "qcom_pcie_global_irq%d", 1945 - pci_domain_nr(pp->bridge->bus)); 1946 - if (!name) { 1947 - ret = -ENOMEM; 1948 - goto err_host_deinit; 1949 - } 1950 - 1951 - if (irq > 0) { 1952 - ret = devm_request_threaded_irq(&pdev->dev, irq, NULL, 1953 - qcom_pcie_global_irq_thread, 1954 - IRQF_ONESHOT, name, pcie); 1955 - if (ret) { 1956 - dev_err_probe(&pdev->dev, ret, 1957 - "Failed to request Global IRQ\n"); 1958 - goto err_host_deinit; 1959 - } 1960 - 1961 - writel_relaxed(PARF_INT_ALL_LINK_UP | PARF_INT_MSI_DEV_0_7, 1962 - pcie->parf + PARF_INT_ALL_MASK); 1963 1969 } 1964 1970 1965 1971 qcom_pcie_icc_opp_update(pcie); ··· 1944 2000 1945 2001 return 0; 1946 2002 1947 - err_host_deinit: 1948 - dw_pcie_host_deinit(pp); 1949 2003 err_phy_exit: 1950 2004 
list_for_each_entry_safe(port, tmp, &pcie->ports, list) { 1951 2005 phy_exit(port->phy);
+1
drivers/pci/controller/dwc/pcie-rcar-gen4.c
··· 420 420 } 421 421 422 422 static const struct pci_epc_features rcar_gen4_pcie_epc_features = { 423 + DWC_EPC_COMMON_FEATURES, 423 424 .msi_capable = true, 424 425 .bar[BAR_1] = { .type = BAR_RESERVED, }, 425 426 .bar[BAR_3] = { .type = BAR_RESERVED, },
+1
drivers/pci/controller/dwc/pcie-stm32-ep.c
··· 70 70 } 71 71 72 72 static const struct pci_epc_features stm32_pcie_epc_features = { 73 + DWC_EPC_COMMON_FEATURES, 73 74 .msi_capable = true, 74 75 .align = SZ_64K, 75 76 };
+1
drivers/pci/controller/dwc/pcie-tegra194.c
··· 1988 1988 } 1989 1989 1990 1990 static const struct pci_epc_features tegra_pcie_epc_features = { 1991 + DWC_EPC_COMMON_FEATURES, 1991 1992 .linkup_notifier = true, 1992 1993 .msi_capable = true, 1993 1994 .bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M,
+2
drivers/pci/controller/dwc/pcie-uniphier-ep.c
··· 420 420 .init = uniphier_pcie_pro5_init_ep, 421 421 .wait = NULL, 422 422 .features = { 423 + DWC_EPC_COMMON_FEATURES, 423 424 .linkup_notifier = false, 424 425 .msi_capable = true, 425 426 .msix_capable = false, ··· 439 438 .init = uniphier_pcie_nx1_init_ep, 440 439 .wait = uniphier_pcie_nx1_wait_ep, 441 440 .features = { 441 + DWC_EPC_COMMON_FEATURES, 442 442 .linkup_notifier = false, 443 443 .msi_capable = true, 444 444 .msix_capable = false,
+267 -2
drivers/pci/endpoint/functions/pci-epf-test.c
··· 33 33 #define COMMAND_COPY BIT(5) 34 34 #define COMMAND_ENABLE_DOORBELL BIT(6) 35 35 #define COMMAND_DISABLE_DOORBELL BIT(7) 36 + #define COMMAND_BAR_SUBRANGE_SETUP BIT(8) 37 + #define COMMAND_BAR_SUBRANGE_CLEAR BIT(9) 36 38 37 39 #define STATUS_READ_SUCCESS BIT(0) 38 40 #define STATUS_READ_FAIL BIT(1) ··· 50 48 #define STATUS_DOORBELL_ENABLE_FAIL BIT(11) 51 49 #define STATUS_DOORBELL_DISABLE_SUCCESS BIT(12) 52 50 #define STATUS_DOORBELL_DISABLE_FAIL BIT(13) 51 + #define STATUS_BAR_SUBRANGE_SETUP_SUCCESS BIT(14) 52 + #define STATUS_BAR_SUBRANGE_SETUP_FAIL BIT(15) 53 + #define STATUS_BAR_SUBRANGE_CLEAR_SUCCESS BIT(16) 54 + #define STATUS_BAR_SUBRANGE_CLEAR_FAIL BIT(17) 53 55 54 56 #define FLAG_USE_DMA BIT(0) 55 57 ··· 63 57 #define CAP_MSI BIT(1) 64 58 #define CAP_MSIX BIT(2) 65 59 #define CAP_INTX BIT(3) 60 + #define CAP_SUBRANGE_MAPPING BIT(4) 61 + 62 + #define PCI_EPF_TEST_BAR_SUBRANGE_NSUB 2 66 63 67 64 static struct workqueue_struct *kpcitest_workqueue; 68 65 69 66 struct pci_epf_test { 70 67 void *reg[PCI_STD_NUM_BARS]; 71 68 struct pci_epf *epf; 69 + struct config_group group; 72 70 enum pci_barno test_reg_bar; 73 71 size_t msix_table_offset; 74 72 struct delayed_work cmd_handler; ··· 86 76 bool dma_private; 87 77 const struct pci_epc_features *epc_features; 88 78 struct pci_epf_bar db_bar; 79 + size_t bar_size[PCI_STD_NUM_BARS]; 89 80 }; 90 81 91 82 struct pci_epf_test_reg { ··· 113 102 .interrupt_pin = PCI_INTERRUPT_INTA, 114 103 }; 115 104 116 - static size_t bar_size[] = { 512, 512, 1024, 16384, 131072, 1048576 }; 105 + /* default BAR sizes, can be overridden by the user using configfs */ 106 + static size_t default_bar_size[] = { 131072, 131072, 131072, 131072, 131072, 1048576 }; 117 107 118 108 static void pci_epf_test_dma_callback(void *param) 119 109 { ··· 818 806 reg->status = cpu_to_le32(status); 819 807 } 820 808 809 + static u8 pci_epf_test_subrange_sig_byte(enum pci_barno barno, 810 + unsigned int subno) 811 + { 812 + return 0x50 + (barno * 
8) + subno; 813 + } 814 + 815 + static void pci_epf_test_bar_subrange_setup(struct pci_epf_test *epf_test, 816 + struct pci_epf_test_reg *reg) 817 + { 818 + struct pci_epf_bar_submap *submap, *old_submap; 819 + struct pci_epf *epf = epf_test->epf; 820 + struct pci_epc *epc = epf->epc; 821 + struct pci_epf_bar *bar; 822 + unsigned int nsub = PCI_EPF_TEST_BAR_SUBRANGE_NSUB, old_nsub; 823 + /* reg->size carries BAR number for BAR_SUBRANGE_* commands. */ 824 + enum pci_barno barno = le32_to_cpu(reg->size); 825 + u32 status = le32_to_cpu(reg->status); 826 + unsigned int i, phys_idx; 827 + size_t sub_size; 828 + u8 *addr; 829 + int ret; 830 + 831 + if (barno >= PCI_STD_NUM_BARS) { 832 + dev_err(&epf->dev, "Invalid barno: %d\n", barno); 833 + goto err; 834 + } 835 + 836 + /* Host side should've avoided test_reg_bar, this is a safeguard. */ 837 + if (barno == epf_test->test_reg_bar) { 838 + dev_err(&epf->dev, "test_reg_bar cannot be used for subrange test\n"); 839 + goto err; 840 + } 841 + 842 + if (!epf_test->epc_features->dynamic_inbound_mapping || 843 + !epf_test->epc_features->subrange_mapping) { 844 + dev_err(&epf->dev, "epc driver does not support subrange mapping\n"); 845 + goto err; 846 + } 847 + 848 + bar = &epf->bar[barno]; 849 + if (!bar->size || !bar->addr) { 850 + dev_err(&epf->dev, "bar size/addr (%zu/%p) is invalid\n", 851 + bar->size, bar->addr); 852 + goto err; 853 + } 854 + 855 + if (bar->size % nsub) { 856 + dev_err(&epf->dev, "BAR size %zu is not divisible by %u\n", 857 + bar->size, nsub); 858 + goto err; 859 + } 860 + 861 + sub_size = bar->size / nsub; 862 + 863 + submap = kcalloc(nsub, sizeof(*submap), GFP_KERNEL); 864 + if (!submap) 865 + goto err; 866 + 867 + for (i = 0; i < nsub; i++) { 868 + /* Swap the two halves so RC can verify ordering. 
*/ 869 + phys_idx = i ^ 1; 870 + submap[i].phys_addr = bar->phys_addr + (phys_idx * sub_size); 871 + submap[i].size = sub_size; 872 + } 873 + 874 + old_submap = bar->submap; 875 + old_nsub = bar->num_submap; 876 + 877 + bar->submap = submap; 878 + bar->num_submap = nsub; 879 + 880 + ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar); 881 + if (ret) { 882 + dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret); 883 + bar->submap = old_submap; 884 + bar->num_submap = old_nsub; 885 + kfree(submap); 886 + goto err; 887 + } 888 + kfree(old_submap); 889 + 890 + /* 891 + * Fill deterministic signatures into the physical regions that 892 + * each BAR subrange maps to. RC verifies these to ensure the 893 + * submap order is really applied. 894 + */ 895 + addr = (u8 *)bar->addr; 896 + for (i = 0; i < nsub; i++) { 897 + phys_idx = i ^ 1; 898 + memset(addr + (phys_idx * sub_size), 899 + pci_epf_test_subrange_sig_byte(barno, i), 900 + sub_size); 901 + } 902 + 903 + status |= STATUS_BAR_SUBRANGE_SETUP_SUCCESS; 904 + reg->status = cpu_to_le32(status); 905 + return; 906 + 907 + err: 908 + status |= STATUS_BAR_SUBRANGE_SETUP_FAIL; 909 + reg->status = cpu_to_le32(status); 910 + } 911 + 912 + static void pci_epf_test_bar_subrange_clear(struct pci_epf_test *epf_test, 913 + struct pci_epf_test_reg *reg) 914 + { 915 + struct pci_epf *epf = epf_test->epf; 916 + struct pci_epf_bar_submap *submap; 917 + struct pci_epc *epc = epf->epc; 918 + /* reg->size carries BAR number for BAR_SUBRANGE_* commands. 
*/ 919 + enum pci_barno barno = le32_to_cpu(reg->size); 920 + u32 status = le32_to_cpu(reg->status); 921 + struct pci_epf_bar *bar; 922 + unsigned int nsub; 923 + int ret; 924 + 925 + if (barno >= PCI_STD_NUM_BARS) { 926 + dev_err(&epf->dev, "Invalid barno: %d\n", barno); 927 + goto err; 928 + } 929 + 930 + bar = &epf->bar[barno]; 931 + submap = bar->submap; 932 + nsub = bar->num_submap; 933 + 934 + if (!submap || !nsub) 935 + goto err; 936 + 937 + bar->submap = NULL; 938 + bar->num_submap = 0; 939 + 940 + ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar); 941 + if (ret) { 942 + bar->submap = submap; 943 + bar->num_submap = nsub; 944 + dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret); 945 + goto err; 946 + } 947 + kfree(submap); 948 + 949 + status |= STATUS_BAR_SUBRANGE_CLEAR_SUCCESS; 950 + reg->status = cpu_to_le32(status); 951 + return; 952 + 953 + err: 954 + status |= STATUS_BAR_SUBRANGE_CLEAR_FAIL; 955 + reg->status = cpu_to_le32(status); 956 + } 957 + 821 958 static void pci_epf_test_cmd_handler(struct work_struct *work) 822 959 { 823 960 u32 command; ··· 1020 859 break; 1021 860 case COMMAND_DISABLE_DOORBELL: 1022 861 pci_epf_test_disable_doorbell(epf_test, reg); 862 + pci_epf_test_raise_irq(epf_test, reg); 863 + break; 864 + case COMMAND_BAR_SUBRANGE_SETUP: 865 + pci_epf_test_bar_subrange_setup(epf_test, reg); 866 + pci_epf_test_raise_irq(epf_test, reg); 867 + break; 868 + case COMMAND_BAR_SUBRANGE_CLEAR: 869 + pci_epf_test_bar_subrange_clear(epf_test, reg); 1023 870 pci_epf_test_raise_irq(epf_test, reg); 1024 871 break; 1025 872 default: ··· 1101 932 1102 933 if (epf_test->epc_features->intx_capable) 1103 934 caps |= CAP_INTX; 935 + 936 + if (epf_test->epc_features->dynamic_inbound_mapping && 937 + epf_test->epc_features->subrange_mapping) 938 + caps |= CAP_SUBRANGE_MAPPING; 1104 939 1105 940 reg->caps = cpu_to_le32(caps); 1106 941 } ··· 1243 1070 if (epc_features->bar[bar].type == BAR_FIXED) 1244 1071 test_reg_size = 
epc_features->bar[bar].fixed_size; 1245 1072 else 1246 - test_reg_size = bar_size[bar]; 1073 + test_reg_size = epf_test->bar_size[bar]; 1247 1074 1248 1075 base = pci_epf_alloc_space(epf, test_reg_size, bar, 1249 1076 epc_features, PRIMARY_INTERFACE); ··· 1315 1142 pci_epf_test_free_space(epf); 1316 1143 } 1317 1144 1145 + #define PCI_EPF_TEST_BAR_SIZE_R(_name, _id) \ 1146 + static ssize_t pci_epf_test_##_name##_show(struct config_item *item, \ 1147 + char *page) \ 1148 + { \ 1149 + struct config_group *group = to_config_group(item); \ 1150 + struct pci_epf_test *epf_test = \ 1151 + container_of(group, struct pci_epf_test, group); \ 1152 + \ 1153 + return sysfs_emit(page, "%zu\n", epf_test->bar_size[_id]); \ 1154 + } 1155 + 1156 + #define PCI_EPF_TEST_BAR_SIZE_W(_name, _id) \ 1157 + static ssize_t pci_epf_test_##_name##_store(struct config_item *item, \ 1158 + const char *page, \ 1159 + size_t len) \ 1160 + { \ 1161 + struct config_group *group = to_config_group(item); \ 1162 + struct pci_epf_test *epf_test = \ 1163 + container_of(group, struct pci_epf_test, group); \ 1164 + int val, ret; \ 1165 + \ 1166 + /* \ 1167 + * BAR sizes can only be modified before binding to an EPC, \ 1168 + * because pci_epf_test_alloc_space() is called in .bind(). 
\ 1169 + */ \ 1170 + if (epf_test->epf->epc) \ 1171 + return -EOPNOTSUPP; \ 1172 + \ 1173 + ret = kstrtouint(page, 0, &val); \ 1174 + if (ret) \ 1175 + return ret; \ 1176 + \ 1177 + if (!is_power_of_2(val)) \ 1178 + return -EINVAL; \ 1179 + \ 1180 + epf_test->bar_size[_id] = val; \ 1181 + \ 1182 + return len; \ 1183 + } 1184 + 1185 + PCI_EPF_TEST_BAR_SIZE_R(bar0_size, BAR_0) 1186 + PCI_EPF_TEST_BAR_SIZE_W(bar0_size, BAR_0) 1187 + PCI_EPF_TEST_BAR_SIZE_R(bar1_size, BAR_1) 1188 + PCI_EPF_TEST_BAR_SIZE_W(bar1_size, BAR_1) 1189 + PCI_EPF_TEST_BAR_SIZE_R(bar2_size, BAR_2) 1190 + PCI_EPF_TEST_BAR_SIZE_W(bar2_size, BAR_2) 1191 + PCI_EPF_TEST_BAR_SIZE_R(bar3_size, BAR_3) 1192 + PCI_EPF_TEST_BAR_SIZE_W(bar3_size, BAR_3) 1193 + PCI_EPF_TEST_BAR_SIZE_R(bar4_size, BAR_4) 1194 + PCI_EPF_TEST_BAR_SIZE_W(bar4_size, BAR_4) 1195 + PCI_EPF_TEST_BAR_SIZE_R(bar5_size, BAR_5) 1196 + PCI_EPF_TEST_BAR_SIZE_W(bar5_size, BAR_5) 1197 + 1198 + CONFIGFS_ATTR(pci_epf_test_, bar0_size); 1199 + CONFIGFS_ATTR(pci_epf_test_, bar1_size); 1200 + CONFIGFS_ATTR(pci_epf_test_, bar2_size); 1201 + CONFIGFS_ATTR(pci_epf_test_, bar3_size); 1202 + CONFIGFS_ATTR(pci_epf_test_, bar4_size); 1203 + CONFIGFS_ATTR(pci_epf_test_, bar5_size); 1204 + 1205 + static struct configfs_attribute *pci_epf_test_attrs[] = { 1206 + &pci_epf_test_attr_bar0_size, 1207 + &pci_epf_test_attr_bar1_size, 1208 + &pci_epf_test_attr_bar2_size, 1209 + &pci_epf_test_attr_bar3_size, 1210 + &pci_epf_test_attr_bar4_size, 1211 + &pci_epf_test_attr_bar5_size, 1212 + NULL, 1213 + }; 1214 + 1215 + static const struct config_item_type pci_epf_test_group_type = { 1216 + .ct_attrs = pci_epf_test_attrs, 1217 + .ct_owner = THIS_MODULE, 1218 + }; 1219 + 1220 + static struct config_group *pci_epf_test_add_cfs(struct pci_epf *epf, 1221 + struct config_group *group) 1222 + { 1223 + struct pci_epf_test *epf_test = epf_get_drvdata(epf); 1224 + struct config_group *epf_group = &epf_test->group; 1225 + struct device *dev = &epf->dev; 1226 + 1227 + 
config_group_init_type_name(epf_group, dev_name(dev), 1228 + &pci_epf_test_group_type); 1229 + 1230 + return epf_group; 1231 + } 1232 + 1318 1233 static const struct pci_epf_device_id pci_epf_test_ids[] = { 1319 1234 { 1320 1235 .name = "pci_epf_test", ··· 1415 1154 { 1416 1155 struct pci_epf_test *epf_test; 1417 1156 struct device *dev = &epf->dev; 1157 + enum pci_barno bar; 1418 1158 1419 1159 epf_test = devm_kzalloc(dev, sizeof(*epf_test), GFP_KERNEL); 1420 1160 if (!epf_test) ··· 1423 1161 1424 1162 epf->header = &test_header; 1425 1163 epf_test->epf = epf; 1164 + for (bar = BAR_0; bar < PCI_STD_NUM_BARS; bar++) 1165 + epf_test->bar_size[bar] = default_bar_size[bar]; 1426 1166 1427 1167 INIT_DELAYED_WORK(&epf_test->cmd_handler, pci_epf_test_cmd_handler); 1428 1168 ··· 1437 1173 static const struct pci_epf_ops ops = { 1438 1174 .unbind = pci_epf_test_unbind, 1439 1175 .bind = pci_epf_test_bind, 1176 + .add_cfs = pci_epf_test_add_cfs, 1440 1177 }; 1441 1178 1442 1179 static struct pci_epf_driver test_driver = {
+8
drivers/pci/endpoint/pci-epc-core.c
··· 596 596 if (!epc_features) 597 597 return -EINVAL; 598 598 599 + if (epf_bar->num_submap && !epf_bar->submap) 600 + return -EINVAL; 601 + 602 + if (epf_bar->num_submap && 603 + !(epc_features->dynamic_inbound_mapping && 604 + epc_features->subrange_mapping)) 605 + return -EINVAL; 606 + 599 607 if (epc_features->bar[bar].type == BAR_RESIZABLE && 600 608 (epf_bar->size < SZ_1M || (u64)epf_bar->size > (SZ_128G * 1024))) 601 609 return -EINVAL;
+4 -4
drivers/pci/pci.c
··· 421 421 static u8 __pci_find_next_cap(struct pci_bus *bus, unsigned int devfn, 422 422 u8 pos, int cap) 423 423 { 424 - return PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, cap, bus, devfn); 424 + return PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, cap, NULL, bus, devfn); 425 425 } 426 426 427 427 u8 pci_find_next_capability(struct pci_dev *dev, u8 pos, int cap) ··· 526 526 return 0; 527 527 528 528 return PCI_FIND_NEXT_EXT_CAP(pci_bus_read_config, start, cap, 529 - dev->bus, dev->devfn); 529 + NULL, dev->bus, dev->devfn); 530 530 } 531 531 EXPORT_SYMBOL_GPL(pci_find_next_ext_capability); 532 532 ··· 595 595 mask = HT_5BIT_CAP_MASK; 596 596 597 597 pos = PCI_FIND_NEXT_CAP(pci_bus_read_config, pos, 598 - PCI_CAP_ID_HT, dev->bus, dev->devfn); 598 + PCI_CAP_ID_HT, NULL, dev->bus, dev->devfn); 599 599 while (pos) { 600 600 rc = pci_read_config_byte(dev, pos + 3, &cap); 601 601 if (rc != PCIBIOS_SUCCESSFUL) ··· 606 606 607 607 pos = PCI_FIND_NEXT_CAP(pci_bus_read_config, 608 608 pos + PCI_CAP_LIST_NEXT, 609 - PCI_CAP_ID_HT, dev->bus, 609 + PCI_CAP_ID_HT, NULL, dev->bus, 610 610 dev->devfn); 611 611 } 612 612
+19 -4
drivers/pci/pci.h
··· 122 122 * @read_cfg: Function pointer for reading PCI config space 123 123 * @start: Starting position to begin search 124 124 * @cap: Capability ID to find 125 + * @prev_ptr: Pointer to store position of preceding capability (optional) 125 126 * @args: Arguments to pass to read_cfg function 126 127 * 127 - * Search the capability list in PCI config space to find @cap. 128 + * Search the capability list in PCI config space to find @cap. If 129 + * found, update *prev_ptr with the position of the preceding capability 130 + * (if prev_ptr != NULL) 128 131 * Implements TTL (time-to-live) protection against infinite loops. 129 132 * 130 133 * Return: Position of the capability if found, 0 otherwise. 131 134 */ 132 - #define PCI_FIND_NEXT_CAP(read_cfg, start, cap, args...) \ 135 + #define PCI_FIND_NEXT_CAP(read_cfg, start, cap, prev_ptr, args...) \ 133 136 ({ \ 134 137 int __ttl = PCI_FIND_CAP_TTL; \ 135 - u8 __id, __found_pos = 0; \ 138 + u8 __id, __found_pos = 0; \ 139 + u8 __prev_pos = (start); \ 136 140 u8 __pos = (start); \ 137 141 u16 __ent; \ 138 142 \ ··· 155 151 \ 156 152 if (__id == (cap)) { \ 157 153 __found_pos = __pos; \ 154 + if (prev_ptr != NULL) \ 155 + *(u8 *)prev_ptr = __prev_pos; \ 158 156 break; \ 159 157 } \ 160 158 \ 159 + __prev_pos = __pos; \ 161 160 __pos = FIELD_GET(PCI_CAP_LIST_NEXT_MASK, __ent); \ 162 161 } \ 163 162 __found_pos; \ ··· 172 165 * @read_cfg: Function pointer for reading PCI config space 173 166 * @start: Starting position to begin search (0 for initial search) 174 167 * @cap: Extended capability ID to find 168 + * @prev_ptr: Pointer to store position of preceding capability (optional) 175 169 * @args: Arguments to pass to read_cfg function 176 170 * 177 171 * Search the extended capability list in PCI config space to find @cap. 
172 + * If found, update *prev_ptr with the position of the preceding capability 173 + * (if prev_ptr != NULL) 178 174 * Implements TTL protection against infinite loops using a calculated 179 175 * maximum search count. 180 176 * 181 177 * Return: Position of the capability if found, 0 otherwise. 182 178 */ 183 - #define PCI_FIND_NEXT_EXT_CAP(read_cfg, start, cap, args...) \ 179 + #define PCI_FIND_NEXT_EXT_CAP(read_cfg, start, cap, prev_ptr, args...) \ 184 180 ({ \ 185 181 u16 __pos = (start) ?: PCI_CFG_SPACE_SIZE; \ 186 182 u16 __found_pos = 0; \ 183 + u16 __prev_pos; \ 187 184 int __ttl, __ret; \ 188 185 u32 __header; \ 189 186 \ 187 + __prev_pos = __pos; \ 190 188 __ttl = (PCI_CFG_SPACE_EXP_SIZE - PCI_CFG_SPACE_SIZE) / 8; \ 191 189 while (__ttl-- > 0 && __pos >= PCI_CFG_SPACE_SIZE) { \ 192 190 __ret = read_cfg##_dword(args, __pos, &__header); \ ··· 203 191 \ 204 192 if (PCI_EXT_CAP_ID(__header) == (cap) && __pos != start) {\ 205 193 __found_pos = __pos; \ 194 + if (prev_ptr != NULL) \ 195 + *(u16 *)prev_ptr = __prev_pos; \ 206 196 break; \ 207 197 } \ 208 198 \ 199 + __prev_pos = __pos; \ 209 200 __pos = PCI_EXT_CAP_NEXT(__header); \ 210 201 } \ 211 202 __found_pos; \
+9
include/linux/pci-epc.h
··· 223 223 /** 224 224 * struct pci_epc_features - features supported by a EPC device per function 225 225 * @linkup_notifier: indicate if the EPC device can notify EPF driver on link up 226 + * @dynamic_inbound_mapping: indicate if the EPC device supports updating 227 + * inbound mappings for an already configured BAR 228 + * (i.e. allow calling pci_epc_set_bar() again 229 + * without first calling pci_epc_clear_bar()) 230 + * @subrange_mapping: indicate if the EPC device can map inbound subranges for a 231 + * BAR. This feature depends on @dynamic_inbound_mapping 232 + * feature. 226 233 * @msi_capable: indicate if the endpoint function has MSI capability 227 234 * @msix_capable: indicate if the endpoint function has MSI-X capability 228 235 * @intx_capable: indicate if the endpoint can raise INTx interrupts ··· 238 231 */ 239 232 struct pci_epc_features { 240 233 unsigned int linkup_notifier : 1; 234 + unsigned int dynamic_inbound_mapping : 1; 235 + unsigned int subrange_mapping : 1; 241 236 unsigned int msi_capable : 1; 242 237 unsigned int msix_capable : 1; 243 238 unsigned int intx_capable : 1;
+23
include/linux/pci-epf.h
··· 111 111 #define to_pci_epf_driver(drv) container_of_const((drv), struct pci_epf_driver, driver) 112 112 113 113 /** 114 + * struct pci_epf_bar_submap - BAR subrange for inbound mapping 115 + * @phys_addr: target physical/DMA address for this subrange 116 + * @size: the size of the subrange to be mapped 117 + * 118 + * When pci_epf_bar.num_submap is >0, pci_epf_bar.submap describes the 119 + * complete BAR layout. This allows an EPC driver to program multiple 120 + * inbound translation windows for a single BAR when supported by the 121 + * controller. The array order defines the BAR layout (submap[0] at offset 122 + * 0, and each immediately follows the previous one). 123 + */ 124 + struct pci_epf_bar_submap { 125 + dma_addr_t phys_addr; 126 + size_t size; 127 + }; 128 + 129 + /** 114 130 * struct pci_epf_bar - represents the BAR of EPF device 115 131 * @phys_addr: physical address that should be mapped to the BAR 116 132 * @addr: virtual address corresponding to the @phys_addr ··· 135 119 * requirement 136 120 * @barno: BAR number 137 121 * @flags: flags that are set for the BAR 122 + * @num_submap: number of entries in @submap 123 + * @submap: array of subrange descriptors allocated by the caller. See 124 + * struct pci_epf_bar_submap for the semantics in detail. 138 125 */ 139 126 struct pci_epf_bar { 140 127 dma_addr_t phys_addr; ··· 146 127 size_t mem_size; 147 128 enum pci_barno barno; 148 129 int flags; 130 + 131 + /* Optional sub-range mapping */ 132 + unsigned int num_submap; 133 + struct pci_epf_bar_submap *submap; 149 134 }; 150 135 151 136 /**
+1
include/uapi/linux/pcitest.h
··· 22 22 #define PCITEST_GET_IRQTYPE _IO('P', 0x9) 23 23 #define PCITEST_BARS _IO('P', 0xa) 24 24 #define PCITEST_DOORBELL _IO('P', 0xb) 25 + #define PCITEST_BAR_SUBRANGE _IO('P', 0xc) 25 26 #define PCITEST_CLEAR_IRQ _IO('P', 0x10) 26 27 27 28 #define PCITEST_IRQ_TYPE_UNDEFINED -1
+17
tools/testing/selftests/pci_endpoint/pci_endpoint_test.c
··· 70 70 EXPECT_FALSE(ret) TH_LOG("Test failed for BAR%d", variant->barno); 71 71 } 72 72 73 + TEST_F(pci_ep_bar, BAR_SUBRANGE_TEST) 74 + { 75 + int ret; 76 + 77 + pci_ep_ioctl(PCITEST_SET_IRQTYPE, PCITEST_IRQ_TYPE_AUTO); 78 + ASSERT_EQ(0, ret) TH_LOG("Can't set AUTO IRQ type"); 79 + 80 + pci_ep_ioctl(PCITEST_BAR_SUBRANGE, variant->barno); 81 + if (ret == -ENODATA) 82 + SKIP(return, "BAR is disabled"); 83 + if (ret == -EBUSY) 84 + SKIP(return, "BAR is test register space"); 85 + if (ret == -EOPNOTSUPP) 86 + SKIP(return, "Subrange map is not supported"); 87 + EXPECT_FALSE(ret) TH_LOG("Test failed for BAR%d", variant->barno); 88 + } 89 + 73 90 FIXTURE(pci_ep_basic) 74 91 { 75 92 int fd;