Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pci-v6.3-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
"Enumeration:

- Rework portdrv shutdown so it disables interrupts but doesn't
disable bus mastering, since disabling bus mastering is what leads
to hangs on Loongson LS7A

- Add mechanism to prevent Max_Read_Request_Size (MRRS) increases,
again to avoid hardware issues on Loongson LS7A (and likely other
devices based on DesignWare IP)

- Ignore devices with a firmware (DT or ACPI) node that says the
device is disabled
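
For illustration, the DT half of that last check is just the node's
status property; a minimal sketch (the helper name is illustrative,
and the ACPI side would consult _STA similarly):

    #include <linux/of.h>

    /* Skip a device whose firmware node says it is disabled
     * (DT status property present and not "okay"/"ok").
     */
    static bool example_fwnode_says_enabled(struct device_node *node)
    {
            return !node || of_device_is_available(node);
    }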

Resource management:

- Distribute spare resources to unconfigured hotplug bridges at
boot-time (not just when hot-adding such a bridge), which makes
hot-adding devices to docks work better. Tried this in v6.1 but had
to revert for regressions, so try again

- Fix root bus issue that dropped resources that happened to end
at 0, e.g., [bus 00]

PCI device hotplug:

- Remove device locking when marking device as disconnected so this
doesn't have to wait for concurrent driver bind/unbind to complete

- Quirk more Qualcomm bridges that don't fully implement the PCIe
Slot Status 'Command Completed' bit

Power management:

- Account for _S0W of the target bridge in acpi_pci_bridge_d3() so we
don't miss hot-add notifications for USB4 docks, Thunderbolt, etc
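
For illustration, _S0W is a single ACPI integer evaluation; a minimal
sketch along the lines of the acpi_dev_power_state_for_wake() helper
this series adds (full version in the device_pm.c diff below):

    #include <acpi/acpi_bus.h>

    /* Deepest D-state from which the device can still signal wakeup,
     * or ACPI_STATE_UNKNOWN if _S0W is absent or fails.
     */
    static u8 example_s0w(struct acpi_device *adev)
    {
            unsigned long long state;
            acpi_status status;

            status = acpi_evaluate_integer(adev->handle, "_S0W", NULL,
                                           &state);
            if (ACPI_FAILURE(status))
                    return ACPI_STATE_UNKNOWN;

            return state;
    }

acpi_pci_bridge_d3() can then decline D3cold when the reported state
is shallower than D3hot, so hot-add notifications keep arriving.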

Reset:

- Observe delay after reset, e.g., resuming from system sleep,
regardless of whether a bridge can suspend to D3cold at runtime

- Wait for secondary bus to become ready after a bridge reset

Virtualization:

- Avoid FLR on some AMD FCH AHCI adapters where it doesn't work

- Allow independent IOMMU groups for some Wangxun NICs that prevent
peer-to-peer transactions but don't advertise an ACS Capability
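
For illustration, such quirks follow the usual pattern in
drivers/pci/quirks.c: a per-device callback claims the isolation bits
the hardware implements despite the missing ACS Capability. A sketch
only, not the actual Wangxun quirk:

    /* The NIC blocks peer-to-peer transactions internally, so treat
     * Source Validation, Request Redirect, Completion Redirect and
     * Upstream Forwarding as covered.
     */
    static int example_nic_acs(struct pci_dev *dev, u16 acs_flags)
    {
            acs_flags &= ~(PCI_ACS_SV | PCI_ACS_RR |
                           PCI_ACS_CR | PCI_ACS_UF);

            return acs_flags ? 0 : 1; /* 1: requested flags satisfied */
    }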

Error handling:

- Configure end-to-end CRC (ECRC) only if Linux owns the AER
Capability (a sketch follows this section)

- Remove redundant Device Control Error Reporting Enable in the AER
service driver since this is already done for all devices during
enumeration
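
For illustration, the ECRC item boils down to an ownership gate before
touching the AER Capability; a hedged sketch using the kernel's
native-AER test:

    static void example_enable_ecrc(struct pci_dev *dev)
    {
            /* Don't flip ECRC bits behind the firmware's back */
            if (!pcie_aer_is_native(dev))
                    return;

            pcie_set_ecrc_checking(dev); /* honors the ecrc= parameter */
    }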

ASPM:

- Add pci_enable_link_state() interface to allow drivers to enable
ASPM link state
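
For illustration, the new interface mirrors the long-standing
pci_disable_link_state(); the Intel VMD quirk below is its first
caller. A minimal sketch (taking PCIE_LINK_STATE_ALL as the umbrella
define from this series is an assumption):

    /* A driver that knows its hierarchy is ASPM-safe can opt a device
     * into all link states even though the BIOS never enabled them.
     */
    static void example_enable_aspm(struct pci_dev *pdev)
    {
            pci_enable_link_state(pdev, PCIE_LINK_STATE_ALL);
    }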

Endpoint framework:

- Move dra7xx and tegra194 linkup processing from hard IRQ to
threaded IRQ handler

- Add a separate lock for the endpoint controller's list of endpoint
function drivers to prevent deadlocks in callbacks

- Pass events from endpoint controller to endpoint function drivers
via callbacks instead of notifiers
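
For illustration, the callback conversion replaces notifier blocks
with a struct of event hooks supplied by the function driver; the
field names below follow the pci-epf-test conversion and should be
read as an approximation:

    static int example_epf_core_init(struct pci_epf *epf)
    {
            /* reprogram BARs, MSI, etc. after controller core init */
            return 0;
    }

    static int example_epf_link_up(struct pci_epf *epf)
    {
            /* start the function once the PCIe link comes up */
            return 0;
    }

    static const struct pci_epc_event_ops example_event_ops = {
            .core_init = example_epf_core_init,
            .link_up = example_epf_link_up,
    };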

Synopsys DesignWare eDMA controller driver (acked by Vinod):

- Fix CPU vs PCI address issues

- Fix source vs destination address issues

- Fix issues with interleaved transfer semantics

- Fix channel count initialization issue (issue still exists in
several other drivers)

- Clean up and improve debugfs usage so it will work on platforms
with several eDMA devices

Baikal T-1 PCIe controller driver:

- Set a 64-bit DMA mask
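
For illustration, the standard idiom for such a fix, with a 32-bit
fallback (a sketch, not the Baikal driver's exact code):

    static int example_set_dma_mask(struct device *dev)
    {
            /* The eDMA block can address 64 bits; fall back to 32
             * bits on platforms that can't. */
            if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
                    return dma_set_mask_and_coherent(dev,
                                                     DMA_BIT_MASK(32));

            return 0;
    }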

Freescale i.MX6 PCIe controller driver:

- Add i.MX8MM, i.MX8MQ, i.MX8MP endpoint mode DT binding and driver
support

Intel VMD host bridge driver:

- Add quirk to configure PCIe ASPM and LTR. This is normally done by
the BIOS, and will be again on future products

Marvell MVEBU PCIe controller driver:

- Mark this driver as broken in Kconfig since bugs prevent its daily
usage

MediaTek MT7621 PCIe controller driver:

- Delay PHY port initialization to improve boot reliability for ZBT
WE1326, ZBT WF3526-P, and some Netgear models

Qualcomm PCIe controller driver:

- Add MSM8998 DT compatible string

- Unify MSM8996 and MSM8998 clock orderings

- Add SM8350 DT binding and driver support

- Add IPQ8074 Gen3 DT binding and driver support

- Correct qcom,perst-regs in DT binding

- Add qcom_pcie_host_deinit() so the PHY is powered off and
regulators and clocks are disabled on late host-init errors
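
For illustration, the deinit callback undoes host init in reverse; a
hedged sketch of the teardown order (the qcom_pcie fields here are
placeholders, not the driver's exact names):

    static void example_qcom_host_deinit(struct qcom_pcie *pcie)
    {
            phy_power_off(pcie->phy);          /* PHY off first */
            clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
            regulator_bulk_disable(pcie->num_supplies, pcie->supplies);
    }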

Socionext UniPhier Pro5 controller driver:

- Clean up uniphier-ep reg, clocks, resets, and their names in DT
binding

Synopsys DesignWare PCIe controller driver:

- Restrict coherent DMA mask to 32 bits for MSI, but allow controller
drivers to set 64-bit streaming DMA mask (see the sketch after this
section)

- Add eDMA engine support in both Root Port and Endpoint controllers
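
For illustration of the first item, splitting the masks keeps the MSI
target page below 4GB (some endpoints can only generate 32-bit MSI
writes) while letting eDMA transfers use full 64-bit bus addresses; a
sketch:

    static int example_dw_pcie_set_masks(struct device *dev)
    {
            int ret;

            /* streaming DMA (eDMA data) may use 64-bit addresses */
            ret = dma_set_mask(dev, DMA_BIT_MASK(64));
            if (ret)
                    return ret;

            /* coherent allocations back the MSI target address */
            return dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
    }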

Miscellaneous:

- Remove MODULE_LICENSE from boolean drivers so they don't look like
modules, which lets modprobe complain about attempts to load them"

* tag 'pci-v6.3-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (86 commits)
PCI: dwc: Add Root Port and Endpoint controller eDMA engine support
PCI: bt1: Set 64-bit DMA mask
PCI: dwc: Restrict only coherent DMA mask for MSI address allocation
dmaengine: dw-edma: Prepare dw_edma_probe() for builtin callers
dmaengine: dw-edma: Depend on DW_EDMA instead of selecting it
dmaengine: dw-edma: Add mem-mapped LL-entries support
PCI: Remove MODULE_LICENSE so boolean drivers don't look like modules
PCI: hv: Drop duplicate PCI_MSI dependency
PCI/P2PDMA: Annotate RCU dereference
PCI/sysfs: Constify struct kobj_type pci_slot_ktype
PCI: hotplug: Allow marking devices as disconnected during bind/unbind
PCI: pciehp: Add Qualcomm quirk for Command Completed erratum
PCI: qcom: Add IPQ8074 Gen3 port support
dt-bindings: PCI: qcom: Add IPQ8074 Gen3 port
dt-bindings: PCI: qcom: Sort compatibles alphabetically
PCI: qcom: Fix host-init error handling
PCI: qcom: Add SM8350 support
dt-bindings: PCI: qcom: Add SM8350
dt-bindings: PCI: qcom-ep: Correct qcom,perst-regs
dt-bindings: PCI: qcom: Unify MSM8996 and MSM8998 clock order
...

+1573 -814
+3 -1
Documentation/admin-guide/kernel-parameters.txt
@@ -4303 +4303 @@
 		specified, e.g., 12@pci:8086:9c22:103c:198f
 		for 4096-byte alignment.
 	ecrc=	Enable/disable PCIe ECRC (transaction layer
-		end-to-end CRC checking).
+		end-to-end CRC checking). Only effective if
+		OS has native AER control (either granted by
+		ACPI _OSC or forced via "pcie_ports=native")
 		bios: Use BIOS/firmware settings. This is the
 		the default.
 		off: Turn ECRC off
+3
Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.yaml
@@ -24 +24 @@
       - fsl,imx8mq-pcie
       - fsl,imx8mm-pcie
       - fsl,imx8mp-pcie
+      - fsl,imx8mm-pcie-ep
+      - fsl,imx8mq-pcie-ep
+      - fsl,imx8mp-pcie-ep
 
   reg:
     items:
+4 -2
Documentation/devicetree/bindings/pci/qcom,pcie-ep.yaml
@@ -47 +47 @@
       enable registers
     $ref: "/schemas/types.yaml#/definitions/phandle-array"
     items:
-      minItems: 3
-      maxItems: 3
+      - items:
+          - description: Syscon to TCSR system registers
+          - description: Perst enable offset
+          - description: Perst separation enable offset
 
   interrupts:
     items:
+67 -36
Documentation/devicetree/bindings/pci/qcom,pcie.yaml
@@ -16 +16 @@
 
 properties:
   compatible:
-    enum:
-      - qcom,pcie-ipq8064
-      - qcom,pcie-ipq8064-v2
-      - qcom,pcie-apq8064
-      - qcom,pcie-apq8084
-      - qcom,pcie-msm8996
-      - qcom,pcie-ipq4019
-      - qcom,pcie-ipq8074
-      - qcom,pcie-qcs404
-      - qcom,pcie-sa8540p
-      - qcom,pcie-sc7280
-      - qcom,pcie-sc8180x
-      - qcom,pcie-sc8280xp
-      - qcom,pcie-sdm845
-      - qcom,pcie-sm8150
-      - qcom,pcie-sm8250
-      - qcom,pcie-sm8450-pcie0
-      - qcom,pcie-sm8450-pcie1
-      - qcom,pcie-ipq6018
+    oneOf:
+      - enum:
+          - qcom,pcie-apq8064
+          - qcom,pcie-apq8084
+          - qcom,pcie-ipq4019
+          - qcom,pcie-ipq6018
+          - qcom,pcie-ipq8064
+          - qcom,pcie-ipq8064-v2
+          - qcom,pcie-ipq8074
+          - qcom,pcie-ipq8074-gen3
+          - qcom,pcie-msm8996
+          - qcom,pcie-qcs404
+          - qcom,pcie-sa8540p
+          - qcom,pcie-sc7280
+          - qcom,pcie-sc8180x
+          - qcom,pcie-sc8280xp
+          - qcom,pcie-sdm845
+          - qcom,pcie-sm8150
+          - qcom,pcie-sm8250
+          - qcom,pcie-sm8350
+          - qcom,pcie-sm8450-pcie0
+          - qcom,pcie-sm8450-pcie1
+      - items:
+          - const: qcom,pcie-msm8998
+          - const: qcom,pcie-msm8996
 
   reg:
     minItems: 4
@@ -159 +153 @@
       contains:
         enum:
           - qcom,pcie-ipq6018
+          - qcom,pcie-ipq8074-gen3
   then:
     properties:
       reg:
@@ -202 +195 @@
           - qcom,pcie-sc8180x
           - qcom,pcie-sc8280xp
           - qcom,pcie-sm8250
+          - qcom,pcie-sm8350
           - qcom,pcie-sm8450-pcie0
           - qcom,pcie-sm8450-pcie1
   then:
@@ -320 +312 @@
         enum:
           - qcom,pcie-msm8996
   then:
-    oneOf:
-      - properties:
-          clock-names:
-            items:
-              - const: pipe # Pipe Clock driving internal logic
-              - const: aux # Auxiliary (AUX) clock
-              - const: cfg # Configuration clock
-              - const: bus_master # Master AXI clock
-              - const: bus_slave # Slave AXI clock
-      - properties:
-          clock-names:
-            items:
-              - const: pipe # Pipe Clock driving internal logic
-              - const: bus_master # Master AXI clock
-              - const: bus_slave # Slave AXI clock
-              - const: cfg # Configuration clock
-              - const: aux # Auxiliary (AUX) clock
     properties:
       clocks:
         minItems: 5
         maxItems: 5
+      clock-names:
+        items:
+          - const: pipe # Pipe Clock driving internal logic
+          - const: aux # Auxiliary (AUX) clock
+          - const: cfg # Configuration clock
+          - const: bus_master # Master AXI clock
+          - const: bus_slave # Slave AXI clock
       resets: false
       reset-names: false
@@ -371 +373 @@
       contains:
         enum:
           - qcom,pcie-ipq6018
+          - qcom,pcie-ipq8074-gen3
   then:
     properties:
       clocks:
@@ -554 +555 @@
       compatible:
         contains:
           enum:
+            - qcom,pcie-sm8350
+    then:
+      properties:
+        clocks:
+          minItems: 8
+          maxItems: 9
+        clock-names:
+          minItems: 8
+          items:
+            - const: aux # Auxiliary clock
+            - const: cfg # Configuration clock
+            - const: bus_master # Master AXI clock
+            - const: bus_slave # Slave AXI clock
+            - const: slave_q2a # Slave Q2A clock
+            - const: tbu # PCIe TBU clock
+            - const: ddrss_sf_tbu # PCIe SF TBU clock
+            - const: aggre1 # Aggre NoC PCIe1 AXI clock
+            - const: aggre0 # Aggre NoC PCIe0 AXI clock
+        resets:
+          maxItems: 1
+        reset-names:
+          items:
+            - const: pci # PCIe core reset
+
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
             - qcom,pcie-sm8450-pcie0
   then:
     properties:
@@ -692 +664 @@
           - qcom,pcie-ipq8064
           - qcom,pcie-ipq8064v2
           - qcom,pcie-ipq8074
+          - qcom,pcie-ipq8074-gen3
           - qcom,pcie-qcs404
   then:
     required:
@@ -721 +692 @@
           - qcom,pcie-sdm845
           - qcom,pcie-sm8150
           - qcom,pcie-sm8250
+          - qcom,pcie-sm8350
           - qcom,pcie-sm8450-pcie0
           - qcom,pcie-sm8450-pcie1
   then:
@@ -776 +746 @@
           - qcom,pcie-ipq8064
           - qcom,pcie-ipq8064-v2
           - qcom,pcie-ipq8074
+          - qcom,pcie-ipq8074-gen3
           - qcom,pcie-qcs404
           - qcom,pcie-sa8540p
   then:
+49 -27
Documentation/devicetree/bindings/pci/socionext,uniphier-pcie-ep.yaml
@@ -15 +15 @@
 maintainers:
   - Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
 
-allOf:
-  - $ref: /schemas/pci/snps,dw-pcie-ep.yaml#
-
 properties:
   compatible:
     enum:
@@ -26 +29 @@
     maxItems: 5
 
   reg-names:
-    oneOf:
-      - items:
-          - const: dbi
-          - const: dbi2
-          - const: link
-          - const: addr_space
-      - items:
-          - const: dbi
-          - const: dbi2
-          - const: link
-          - const: addr_space
-          - const: atu
+    minItems: 4
+    items:
+      - const: dbi
+      - const: dbi2
+      - const: link
+      - const: addr_space
+      - const: atu
 
   clocks:
     minItems: 1
     maxItems: 2
 
-  clock-names:
-    oneOf:
-      - items: # for Pro5
-          - const: gio
-          - const: link
-      - const: link # for NX1
+  clock-names: true
 
   resets:
     minItems: 1
     maxItems: 2
 
-  reset-names:
-    oneOf:
-      - items: # for Pro5
-          - const: gio
-          - const: link
-      - const: link # for NX1
+  reset-names: true
 
   num-ib-windows:
     const: 16
@@ -59 +77 @@
 
   phy-names:
     const: pcie-phy
+
+allOf:
+  - $ref: /schemas/pci/snps,dw-pcie-ep.yaml#
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: socionext,uniphier-pro5-pcie-ep
+    then:
+      properties:
+        reg:
+          maxItems: 4
+        reg-names:
+          maxItems: 4
+        clocks:
+          minItems: 2
+        clock-names:
+          items:
+            - const: gio
+            - const: link
+        resets:
+          minItems: 2
+        reset-names:
+          items:
+            - const: gio
+            - const: link
+    else:
+      properties:
+        reg:
+          minItems: 5
+        reg-names:
+          minItems: 5
+        clocks:
+          maxItems: 1
+        clock-names:
+          const: link
+        resets:
+          maxItems: 1
+        reset-names:
+          const: link
 
 required:
   - compatible
+19
drivers/acpi/device_pm.c
@@ -484 +484 @@
 	acpi_dev_for_each_child(adev, acpi_power_up_if_adr_present, NULL);
 }
 
+/**
+ * acpi_dev_power_state_for_wake - Deepest power state for wakeup signaling
+ * @adev: ACPI companion of the target device.
+ *
+ * Evaluate _S0W for @adev and return the value produced by it or return
+ * ACPI_STATE_UNKNOWN on errors (including _S0W not present).
+ */
+u8 acpi_dev_power_state_for_wake(struct acpi_device *adev)
+{
+	unsigned long long state;
+	acpi_status status;
+
+	status = acpi_evaluate_integer(adev->handle, "_S0W", NULL, &state);
+	if (ACPI_FAILURE(status))
+		return ACPI_STATE_UNKNOWN;
+
+	return state;
+}
+
 #ifdef CONFIG_PM
 static DEFINE_MUTEX(acpi_pm_notifier_lock);
 static DEFINE_MUTEX(acpi_pm_notifier_install_lock);
+4 -1
drivers/dma/dw-edma/Kconfig
@@ -9 +9 @@
 	  Support the Synopsys DesignWare eDMA controller, normally
 	  implemented on endpoints SoCs.
 
+if DW_EDMA
+
 config DW_EDMA_PCIE
 	tristate "Synopsys DesignWare eDMA PCIe driver"
 	depends on PCI && PCI_MSI
-	select DW_EDMA
 	help
 	  Provides a glue-logic between the Synopsys DesignWare
 	  eDMA controller and an endpoint PCIe device. This also serves
 	  as a reference design to whom desires to use this IP.
+
+endif # DW_EDMA
+105 -93
drivers/dma/dw-edma/dw-edma-core.c
@@ -39 +39 @@
 	return container_of(vd, struct dw_edma_desc, vd);
 }
 
+static inline
+u64 dw_edma_get_pci_address(struct dw_edma_chan *chan, phys_addr_t cpu_addr)
+{
+	struct dw_edma_chip *chip = chan->dw->chip;
+
+	if (chip->ops->pci_address)
+		return chip->ops->pci_address(chip->dev, cpu_addr);
+
+	return cpu_addr;
+}
+
 static struct dw_edma_burst *dw_edma_alloc_burst(struct dw_edma_chunk *chunk)
 {
 	struct dw_edma_burst *burst;
@@ -208 +197 @@
 	desc->chunks_alloc--;
 }
 
+static void dw_edma_device_caps(struct dma_chan *dchan,
+				struct dma_slave_caps *caps)
+{
+	struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan);
+
+	if (chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL) {
+		if (chan->dir == EDMA_DIR_READ)
+			caps->directions = BIT(DMA_DEV_TO_MEM);
+		else
+			caps->directions = BIT(DMA_MEM_TO_DEV);
+	} else {
+		if (chan->dir == EDMA_DIR_WRITE)
+			caps->directions = BIT(DMA_DEV_TO_MEM);
+		else
+			caps->directions = BIT(DMA_MEM_TO_DEV);
+	}
+}
+
 static int dw_edma_device_config(struct dma_chan *dchan,
 				 struct dma_slave_config *config)
 {
@@ -356 +327 @@
 {
 	struct dw_edma_chan *chan = dchan2dw_edma_chan(xfer->dchan);
 	enum dma_transfer_direction dir = xfer->direction;
-	phys_addr_t src_addr, dst_addr;
 	struct scatterlist *sg = NULL;
 	struct dw_edma_chunk *chunk;
 	struct dw_edma_burst *burst;
 	struct dw_edma_desc *desc;
+	u64 src_addr, dst_addr;
+	size_t fsz = 0;
 	u32 cnt = 0;
 	int i;
@@ -411 +381 @@
 		if (xfer->xfer.sg.len < 1)
 			return NULL;
 	} else if (xfer->type == EDMA_XFER_INTERLEAVED) {
-		if (!xfer->xfer.il->numf)
+		if (!xfer->xfer.il->numf || xfer->xfer.il->frame_size < 1)
 			return NULL;
-		if (xfer->xfer.il->numf > 0 && xfer->xfer.il->frame_size > 0)
+		if (!xfer->xfer.il->src_inc || !xfer->xfer.il->dst_inc)
 			return NULL;
 	} else {
 		return NULL;
@@ -435 +405 @@
 		dst_addr = chan->config.dst_addr;
 	}
 
+	if (dir == DMA_DEV_TO_MEM)
+		src_addr = dw_edma_get_pci_address(chan, (phys_addr_t)src_addr);
+	else
+		dst_addr = dw_edma_get_pci_address(chan, (phys_addr_t)dst_addr);
+
 	if (xfer->type == EDMA_XFER_CYCLIC) {
 		cnt = xfer->xfer.cyclic.cnt;
 	} else if (xfer->type == EDMA_XFER_SCATTER_GATHER) {
 		cnt = xfer->xfer.sg.len;
 		sg = xfer->xfer.sg.sgl;
 	} else if (xfer->type == EDMA_XFER_INTERLEAVED) {
-		if (xfer->xfer.il->numf > 0)
-			cnt = xfer->xfer.il->numf;
-		else
-			cnt = xfer->xfer.il->frame_size;
+		cnt = xfer->xfer.il->numf * xfer->xfer.il->frame_size;
+		fsz = xfer->xfer.il->frame_size;
 	}
 
 	for (i = 0; i < cnt; i++) {
@@ -469 +436 @@
 		else if (xfer->type == EDMA_XFER_SCATTER_GATHER)
 			burst->sz = sg_dma_len(sg);
 		else if (xfer->type == EDMA_XFER_INTERLEAVED)
-			burst->sz = xfer->xfer.il->sgl[i].size;
+			burst->sz = xfer->xfer.il->sgl[i % fsz].size;
 
 		chunk->ll_region.sz += burst->sz;
 		desc->alloc_sz += burst->sz;
@@ -488 +455 @@
 				 * and destination addresses are increased
 				 * by the same portion (data length)
 				 */
+			} else if (xfer->type == EDMA_XFER_INTERLEAVED) {
+				burst->dar = dst_addr;
 			}
 		} else {
 			burst->dar = dst_addr;
@@ -505 +470 @@
 				 * and destination addresses are increased
 				 * by the same portion (data length)
 				 */
+			} else if (xfer->type == EDMA_XFER_INTERLEAVED) {
+				burst->sar = src_addr;
 			}
 		}
 
 		if (xfer->type == EDMA_XFER_SCATTER_GATHER) {
 			sg = sg_next(sg);
-		} else if (xfer->type == EDMA_XFER_INTERLEAVED &&
-			   xfer->xfer.il->frame_size > 0) {
+		} else if (xfer->type == EDMA_XFER_INTERLEAVED) {
 			struct dma_interleaved_template *il = xfer->xfer.il;
-			struct data_chunk *dc = &il->sgl[i];
+			struct data_chunk *dc = &il->sgl[i % fsz];
 
-			if (il->src_sgl) {
-				src_addr += burst->sz;
+			src_addr += burst->sz;
+			if (il->src_sgl)
 				src_addr += dmaengine_get_src_icg(il, dc);
-			}
 
-			if (il->dst_sgl) {
-				dst_addr += burst->sz;
+			dst_addr += burst->sz;
+			if (il->dst_sgl)
 				dst_addr += dmaengine_get_dst_icg(il, dc);
-			}
 		}
 	}
@@ -735 +701 @@
 	}
 }
 
-static int dw_edma_channel_setup(struct dw_edma *dw, bool write,
-				 u32 wr_alloc, u32 rd_alloc)
+static int dw_edma_channel_setup(struct dw_edma *dw, u32 wr_alloc, u32 rd_alloc)
 {
 	struct dw_edma_chip *chip = dw->chip;
-	struct dw_edma_region *dt_region;
 	struct device *dev = chip->dev;
 	struct dw_edma_chan *chan;
 	struct dw_edma_irq *irq;
 	struct dma_device *dma;
-	u32 alloc, off_alloc;
-	u32 i, j, cnt;
-	int err = 0;
+	u32 i, ch_cnt;
 	u32 pos;
 
-	if (write) {
-		i = 0;
-		cnt = dw->wr_ch_cnt;
-		dma = &dw->wr_edma;
-		alloc = wr_alloc;
-		off_alloc = 0;
-	} else {
-		i = dw->wr_ch_cnt;
-		cnt = dw->rd_ch_cnt;
-		dma = &dw->rd_edma;
-		alloc = rd_alloc;
-		off_alloc = wr_alloc;
-	}
+	ch_cnt = dw->wr_ch_cnt + dw->rd_ch_cnt;
+	dma = &dw->dma;
 
 	INIT_LIST_HEAD(&dma->channels);
-	for (j = 0; (alloc || dw->nr_irqs == 1) && j < cnt; j++, i++) {
+
+	for (i = 0; i < ch_cnt; i++) {
 		chan = &dw->chan[i];
 
-		dt_region = devm_kzalloc(dev, sizeof(*dt_region), GFP_KERNEL);
-		if (!dt_region)
-			return -ENOMEM;
-
-		chan->vc.chan.private = dt_region;
-
 		chan->dw = dw;
-		chan->id = j;
-		chan->dir = write ? EDMA_DIR_WRITE : EDMA_DIR_READ;
+
+		if (i < dw->wr_ch_cnt) {
+			chan->id = i;
+			chan->dir = EDMA_DIR_WRITE;
+		} else {
+			chan->id = i - dw->wr_ch_cnt;
+			chan->dir = EDMA_DIR_READ;
+		}
+
 		chan->configured = false;
 		chan->request = EDMA_REQ_NONE;
 		chan->status = EDMA_ST_IDLE;
 
-		if (write)
-			chan->ll_max = (chip->ll_region_wr[j].sz / EDMA_LL_SZ);
+		if (chan->dir == EDMA_DIR_WRITE)
+			chan->ll_max = (chip->ll_region_wr[chan->id].sz / EDMA_LL_SZ);
 		else
-			chan->ll_max = (chip->ll_region_rd[j].sz / EDMA_LL_SZ);
+			chan->ll_max = (chip->ll_region_rd[chan->id].sz / EDMA_LL_SZ);
 		chan->ll_max -= 1;
 
 		dev_vdbg(dev, "L. List:\tChannel %s[%u] max_cnt=%u\n",
-			 write ? "write" : "read", j, chan->ll_max);
+			 chan->dir == EDMA_DIR_WRITE ? "write" : "read",
+			 chan->id, chan->ll_max);
 
 		if (dw->nr_irqs == 1)
 			pos = 0;
+		else if (chan->dir == EDMA_DIR_WRITE)
+			pos = chan->id % wr_alloc;
 		else
-			pos = off_alloc + (j % alloc);
+			pos = wr_alloc + chan->id % rd_alloc;
 
 		irq = &dw->irq[pos];
 
-		if (write)
-			irq->wr_mask |= BIT(j);
+		if (chan->dir == EDMA_DIR_WRITE)
+			irq->wr_mask |= BIT(chan->id);
 		else
-			irq->rd_mask |= BIT(j);
+			irq->rd_mask |= BIT(chan->id);
 
 		irq->dw = dw;
 		memcpy(&chan->msi, &irq->msi, sizeof(chan->msi));
 
 		dev_vdbg(dev, "MSI:\t\tChannel %s[%u] addr=0x%.8x%.8x, data=0x%.8x\n",
-			 write ? "write" : "read", j,
+			 chan->dir == EDMA_DIR_WRITE ? "write" : "read", chan->id,
 			 chan->msi.address_hi, chan->msi.address_lo,
 			 chan->msi.data);
 
 		chan->vc.desc_free = vchan_free_desc;
-		vchan_init(&chan->vc, dma);
+		chan->vc.chan.private = chan->dir == EDMA_DIR_WRITE ?
+					&dw->chip->dt_region_wr[chan->id] :
+					&dw->chip->dt_region_rd[chan->id];
 
-		if (write) {
-			dt_region->paddr = chip->dt_region_wr[j].paddr;
-			dt_region->vaddr = chip->dt_region_wr[j].vaddr;
-			dt_region->sz = chip->dt_region_wr[j].sz;
-		} else {
-			dt_region->paddr = chip->dt_region_rd[j].paddr;
-			dt_region->vaddr = chip->dt_region_rd[j].vaddr;
-			dt_region->sz = chip->dt_region_rd[j].sz;
-		}
+		vchan_init(&chan->vc, dma);
 
 		dw_edma_v0_core_device_config(chan);
 	}
@@ -815 +797 @@
 	dma_cap_set(DMA_CYCLIC, dma->cap_mask);
 	dma_cap_set(DMA_PRIVATE, dma->cap_mask);
 	dma_cap_set(DMA_INTERLEAVE, dma->cap_mask);
-	dma->directions = BIT(write ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV);
+	dma->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
 	dma->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
 	dma->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
 	dma->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
-	dma->chancnt = cnt;
 
 	/* Set DMA channel callbacks */
 	dma->dev = chip->dev;
 	dma->device_alloc_chan_resources = dw_edma_alloc_chan_resources;
 	dma->device_free_chan_resources = dw_edma_free_chan_resources;
+	dma->device_caps = dw_edma_device_caps;
 	dma->device_config = dw_edma_device_config;
 	dma->device_pause = dw_edma_device_pause;
 	dma->device_resume = dw_edma_device_resume;
@@ -838 +820 @@
 	dma_set_max_seg_size(dma->dev, U32_MAX);
 
 	/* Register DMA device */
-	err = dma_async_device_register(dma);
-
-	return err;
+	return dma_async_device_register(dma);
 }
 
 static inline void dw_edma_dec_irq_alloc(int *nr_irqs, u32 *alloc, u16 cnt)
@@ -909 +893 @@
 					       dw_edma_interrupt_read,
 					       IRQF_SHARED, dw->name,
 					       &dw->irq[i]);
-			if (err) {
-				dw->nr_irqs = i;
-				return err;
-			}
+			if (err)
+				goto err_irq_free;
 
 			if (irq_get_msi_desc(irq))
 				get_cached_msi_msg(irq, &dw->irq[i].msi);
 		}
 
 		dw->nr_irqs = i;
+	}
+
+	return 0;
+
+err_irq_free:
+	for (i--; i >= 0; i--) {
+		irq = chip->ops->irq_vector(dev, i);
+		free_irq(irq, &dw->irq[i]);
 	}
 
 	return err;
@@ -973 +951 @@
 	if (!dw->chan)
 		return -ENOMEM;
 
-	snprintf(dw->name, sizeof(dw->name), "dw-edma-core:%d", chip->id);
+	snprintf(dw->name, sizeof(dw->name), "dw-edma-core:%s",
+		 dev_name(chip->dev));
 
 	/* Disable eDMA, only to establish the ideal initial conditions */
 	dw_edma_v0_core_off(dw);
@@ -984 +961 @@
 	if (err)
 		return err;
 
-	/* Setup write channels */
-	err = dw_edma_channel_setup(dw, true, wr_alloc, rd_alloc);
-	if (err)
-		goto err_irq_free;
-
-	/* Setup read channels */
-	err = dw_edma_channel_setup(dw, false, wr_alloc, rd_alloc);
+	/* Setup write/read channels */
+	err = dw_edma_channel_setup(dw, wr_alloc, rd_alloc);
 	if (err)
 		goto err_irq_free;
@@ -1011 +993 @@
 	struct dw_edma *dw = chip->dw;
 	int i;
 
+	/* Skip removal if no private data found */
+	if (!dw)
+		return -ENODEV;
+
 	/* Disable eDMA */
 	dw_edma_v0_core_off(dw);
@@ -1023 +1001 @@
 		free_irq(chip->ops->irq_vector(dev, i), &dw->irq[i]);
 
 	/* Deregister eDMA device */
-	dma_async_device_unregister(&dw->wr_edma);
-	list_for_each_entry_safe(chan, _chan, &dw->wr_edma.channels,
+	dma_async_device_unregister(&dw->dma);
+	list_for_each_entry_safe(chan, _chan, &dw->dma.channels,
 				 vc.chan.device_node) {
 		tasklet_kill(&chan->vc.task);
 		list_del(&chan->vc.chan.device_node);
 	}
-
-	dma_async_device_unregister(&dw->rd_edma);
-	list_for_each_entry_safe(chan, _chan, &dw->rd_edma.channels,
-				 vc.chan.device_node) {
-		tasklet_kill(&chan->vc.task);
-		list_del(&chan->vc.chan.device_node);
-	}
-
-	/* Turn debugfs off */
-	dw_edma_v0_core_debugfs_off(dw);
 
 	return 0;
 }
+3 -7
drivers/dma/dw-edma/dw-edma-core.h
@@ -96 +96 @@
 };
 
 struct dw_edma {
-	char			name[20];
+	char			name[32];
 
-	struct dma_device	wr_edma;
+	struct dma_device	dma;
+
 	u16			wr_ch_cnt;
-
-	struct dma_device	rd_edma;
 	u16			rd_ch_cnt;
 
 	struct dw_edma_irq	*irq;
@@ -111 +112 @@
 	raw_spinlock_t		lock;		/* Only for legacy */
 
 	struct dw_edma_chip	*chip;
-#ifdef CONFIG_DEBUG_FS
-	struct dentry		*debugfs;
-#endif /* CONFIG_DEBUG_FS */
 };
 
 struct dw_edma_sg {
+35 -21
drivers/dma/dw-edma/dw-edma-pcie.c
@@ -95 +95 @@
 	return pci_irq_vector(to_pci_dev(dev), nr);
 }
 
+static u64 dw_edma_pcie_address(struct device *dev, phys_addr_t cpu_addr)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct pci_bus_region region;
+	struct resource res = {
+		.flags = IORESOURCE_MEM,
+		.start = cpu_addr,
+		.end = cpu_addr,
+	};
+
+	pcibios_resource_to_bus(pdev->bus, &region, &res);
+	return region.start;
+}
+
 static const struct dw_edma_core_ops dw_edma_pcie_core_ops = {
 	.irq_vector = dw_edma_pcie_irq_vector,
+	.pci_address = dw_edma_pcie_address,
 };
 
 static void dw_edma_pcie_get_vsec_dma_data(struct pci_dev *pdev,
@@ -222 +207 @@
 
 	/* Data structure initialization */
 	chip->dev = dev;
-	chip->id = pdev->devfn;
 
 	chip->mf = vsec_data.mf;
 	chip->nr_irqs = nr_irqs;
@@ -240 +226 @@
 		struct dw_edma_block *ll_block = &vsec_data.ll_wr[i];
 		struct dw_edma_block *dt_block = &vsec_data.dt_wr[i];
 
-		ll_region->vaddr = pcim_iomap_table(pdev)[ll_block->bar];
-		if (!ll_region->vaddr)
+		ll_region->vaddr.io = pcim_iomap_table(pdev)[ll_block->bar];
+		if (!ll_region->vaddr.io)
 			return -ENOMEM;
 
-		ll_region->vaddr += ll_block->off;
-		ll_region->paddr = pdev->resource[ll_block->bar].start;
+		ll_region->vaddr.io += ll_block->off;
+		ll_region->paddr = pci_bus_address(pdev, ll_block->bar);
 		ll_region->paddr += ll_block->off;
 		ll_region->sz = ll_block->sz;
 
-		dt_region->vaddr = pcim_iomap_table(pdev)[dt_block->bar];
-		if (!dt_region->vaddr)
+		dt_region->vaddr.io = pcim_iomap_table(pdev)[dt_block->bar];
+		if (!dt_region->vaddr.io)
 			return -ENOMEM;
 
-		dt_region->vaddr += dt_block->off;
-		dt_region->paddr = pdev->resource[dt_block->bar].start;
+		dt_region->vaddr.io += dt_block->off;
+		dt_region->paddr = pci_bus_address(pdev, dt_block->bar);
 		dt_region->paddr += dt_block->off;
 		dt_region->sz = dt_block->sz;
 	}
@@ -265 +251 @@
 		struct dw_edma_block *ll_block = &vsec_data.ll_rd[i];
 		struct dw_edma_block *dt_block = &vsec_data.dt_rd[i];
 
-		ll_region->vaddr = pcim_iomap_table(pdev)[ll_block->bar];
-		if (!ll_region->vaddr)
+		ll_region->vaddr.io = pcim_iomap_table(pdev)[ll_block->bar];
+		if (!ll_region->vaddr.io)
 			return -ENOMEM;
 
-		ll_region->vaddr += ll_block->off;
-		ll_region->paddr = pdev->resource[ll_block->bar].start;
+		ll_region->vaddr.io += ll_block->off;
+		ll_region->paddr = pci_bus_address(pdev, ll_block->bar);
 		ll_region->paddr += ll_block->off;
 		ll_region->sz = ll_block->sz;
 
-		dt_region->vaddr = pcim_iomap_table(pdev)[dt_block->bar];
-		if (!dt_region->vaddr)
+		dt_region->vaddr.io = pcim_iomap_table(pdev)[dt_block->bar];
+		if (!dt_region->vaddr.io)
 			return -ENOMEM;
 
-		dt_region->vaddr += dt_block->off;
-		dt_region->paddr = pdev->resource[dt_block->bar].start;
+		dt_region->vaddr.io += dt_block->off;
+		dt_region->paddr = pci_bus_address(pdev, dt_block->bar);
 		dt_region->paddr += dt_block->off;
 		dt_region->sz = dt_block->sz;
 	}
@@ -303 +289 @@
 		pci_dbg(pdev, "L. List:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
 			i, vsec_data.ll_wr[i].bar,
 			vsec_data.ll_wr[i].off, chip->ll_region_wr[i].sz,
-			chip->ll_region_wr[i].vaddr, &chip->ll_region_wr[i].paddr);
+			chip->ll_region_wr[i].vaddr.io, &chip->ll_region_wr[i].paddr);
 
 		pci_dbg(pdev, "Data:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
 			i, vsec_data.dt_wr[i].bar,
 			vsec_data.dt_wr[i].off, chip->dt_region_wr[i].sz,
-			chip->dt_region_wr[i].vaddr, &chip->dt_region_wr[i].paddr);
+			chip->dt_region_wr[i].vaddr.io, &chip->dt_region_wr[i].paddr);
 	}
 
 	for (i = 0; i < chip->ll_rd_cnt; i++) {
 		pci_dbg(pdev, "L. List:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
 			i, vsec_data.ll_rd[i].bar,
 			vsec_data.ll_rd[i].off, chip->ll_region_rd[i].sz,
-			chip->ll_region_rd[i].vaddr, &chip->ll_region_rd[i].paddr);
+			chip->ll_region_rd[i].vaddr.io, &chip->ll_region_rd[i].paddr);
 
 		pci_dbg(pdev, "Data:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
 			i, vsec_data.dt_rd[i].bar,
 			vsec_data.dt_rd[i].off, chip->dt_region_rd[i].sz,
-			chip->dt_region_rd[i].vaddr, &chip->dt_region_rd[i].paddr);
+			chip->dt_region_rd[i].vaddr.io, &chip->dt_region_rd[i].paddr);
 	}
 
 	pci_dbg(pdev, "Nr. IRQs:\t%u\n", chip->nr_irqs);
+47 -53
drivers/dma/dw-edma/dw-edma-v0-core.c
@@ -8 +8 @@
 
 #include <linux/bitfield.h>
 
+#include <linux/io-64-nonatomic-lo-hi.h>
+
 #include "dw-edma-core.h"
 #include "dw-edma-v0-core.h"
 #include "dw-edma-v0-regs.h"
@@ -55 +53 @@
 		SET_32(dw, rd_##name, value); \
 	} while (0)
 
-#ifdef CONFIG_64BIT
-
 #define SET_64(dw, name, value) \
 	writeq(value, &(__dw_regs(dw)->name))
@@ -79 +79 @@
 		SET_64(dw, wr_##name, value); \
 		SET_64(dw, rd_##name, value); \
 	} while (0)
-
-#endif /* CONFIG_64BIT */
 
 #define SET_COMPAT(dw, name, value) \
 	writel(value, &(__dw_regs(dw)->type.unroll.name))
@@ -159 +161 @@
 #define GET_CH_32(dw, dir, ch, name) \
 	readl_ch(dw, dir, ch, &(__dw_ch_regs(dw, dir, ch)->name))
 
-#define SET_LL_32(ll, value) \
-	writel(value, ll)
-
-#ifdef CONFIG_64BIT
-
 static inline void writeq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch,
 			     u64 value, void __iomem *addr)
 {
@@ -185 +192 @@
 static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch,
 			   const void __iomem *addr)
 {
-	u32 value;
+	u64 value;
 
 	if (dw->chip->mf == EDMA_MF_EDMA_LEGACY) {
 		u32 viewport_sel;
@@ -214 +221 @@
 
 #define GET_CH_64(dw, dir, ch, name) \
 	readq_ch(dw, dir, ch, &(__dw_ch_regs(dw, dir, ch)->name))
-
-#define SET_LL_64(ll, value) \
-	writeq(value, ll)
-
-#endif /* CONFIG_64BIT */
 
 /* eDMA management callbacks */
 void dw_edma_v0_core_off(struct dw_edma *dw)
@@ -286 +298 @@
 			GET_RW_32(dw, dir, int_status));
 }
 
+static void dw_edma_v0_write_ll_data(struct dw_edma_chunk *chunk, int i,
+				     u32 control, u32 size, u64 sar, u64 dar)
+{
+	ptrdiff_t ofs = i * sizeof(struct dw_edma_v0_lli);
+
+	if (chunk->chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL) {
+		struct dw_edma_v0_lli *lli = chunk->ll_region.vaddr.mem + ofs;
+
+		lli->control = control;
+		lli->transfer_size = size;
+		lli->sar.reg = sar;
+		lli->dar.reg = dar;
+	} else {
+		struct dw_edma_v0_lli __iomem *lli = chunk->ll_region.vaddr.io + ofs;
+
+		writel(control, &lli->control);
+		writel(size, &lli->transfer_size);
+		writeq(sar, &lli->sar.reg);
+		writeq(dar, &lli->dar.reg);
+	}
+}
+
+static void dw_edma_v0_write_ll_link(struct dw_edma_chunk *chunk,
+				     int i, u32 control, u64 pointer)
+{
+	ptrdiff_t ofs = i * sizeof(struct dw_edma_v0_lli);
+
+	if (chunk->chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL) {
+		struct dw_edma_v0_llp *llp = chunk->ll_region.vaddr.mem + ofs;
+
+		llp->control = control;
+		llp->llp.reg = pointer;
+	} else {
+		struct dw_edma_v0_llp __iomem *llp = chunk->ll_region.vaddr.io + ofs;
+
+		writel(control, &llp->control);
+		writeq(pointer, &llp->llp.reg);
+	}
+}
+
 static void dw_edma_v0_core_write_chunk(struct dw_edma_chunk *chunk)
 {
 	struct dw_edma_burst *child;
 	struct dw_edma_chan *chan = chunk->chan;
-	struct dw_edma_v0_lli __iomem *lli;
-	struct dw_edma_v0_llp __iomem *llp;
 	u32 control = 0, i = 0;
 	int j;
-
-	lli = chunk->ll_region.vaddr;
 
 	if (chunk->cb)
 		control = DW_EDMA_V0_CB;
@@ -344 +320 @@
 		if (!(chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL))
 			control |= DW_EDMA_V0_RIE;
 	}
-		/* Channel control */
-		SET_LL_32(&lli[i].control, control);
-		/* Transfer size */
-		SET_LL_32(&lli[i].transfer_size, child->sz);
-		/* SAR */
-		#ifdef CONFIG_64BIT
-			SET_LL_64(&lli[i].sar.reg, child->sar);
-		#else /* CONFIG_64BIT */
-			SET_LL_32(&lli[i].sar.lsb, lower_32_bits(child->sar));
-			SET_LL_32(&lli[i].sar.msb, upper_32_bits(child->sar));
-		#endif /* CONFIG_64BIT */
-		/* DAR */
-		#ifdef CONFIG_64BIT
-			SET_LL_64(&lli[i].dar.reg, child->dar);
-		#else /* CONFIG_64BIT */
-			SET_LL_32(&lli[i].dar.lsb, lower_32_bits(child->dar));
-			SET_LL_32(&lli[i].dar.msb, upper_32_bits(child->dar));
-		#endif /* CONFIG_64BIT */
-		i++;
+
+		dw_edma_v0_write_ll_data(chunk, i++, control, child->sz,
+					 child->sar, child->dar);
 	}
 
-	llp = (void __iomem *)&lli[i];
 	control = DW_EDMA_V0_LLP | DW_EDMA_V0_TCB;
 	if (!chunk->cb)
 		control |= DW_EDMA_V0_CB;
 
-	/* Channel control */
-	SET_LL_32(&llp->control, control);
-	/* Linked list */
-	#ifdef CONFIG_64BIT
-		SET_LL_64(&llp->llp.reg, chunk->ll_region.paddr);
-	#else /* CONFIG_64BIT */
-		SET_LL_32(&llp->llp.lsb, lower_32_bits(chunk->ll_region.paddr));
-		SET_LL_32(&llp->llp.msb, upper_32_bits(chunk->ll_region.paddr));
-	#endif /* CONFIG_64BIT */
+	dw_edma_v0_write_ll_link(chunk, i, control, chunk->ll_region.paddr);
 }
 
 void dw_edma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
@@ -502 +503 @@
 void dw_edma_v0_core_debugfs_on(struct dw_edma *dw)
 {
 	dw_edma_v0_debugfs_on(dw);
-}
-
-void dw_edma_v0_core_debugfs_off(struct dw_edma *dw)
-{
-	dw_edma_v0_debugfs_off(dw);
 }
-1
drivers/dma/dw-edma/dw-edma-v0-core.h
@@ -23 +23 @@
 int dw_edma_v0_core_device_config(struct dw_edma_chan *chan);
 /* eDMA debug fs callbacks */
 void dw_edma_v0_core_debugfs_on(struct dw_edma *dw);
-void dw_edma_v0_core_debugfs_off(struct dw_edma *dw);
 
 #endif /* _DW_EDMA_V0_CORE_H */
+173 -195
drivers/dma/dw-edma/dw-edma-v0-debugfs.c
@@ -13 +13 @@
 #include "dw-edma-v0-regs.h"
 #include "dw-edma-core.h"
 
-#define REGS_ADDR(name) \
-	((void __force *)&regs->name)
-#define REGISTER(name) \
-	{ #name, REGS_ADDR(name) }
+#define REGS_ADDR(dw, name)						       \
+	({								       \
+		struct dw_edma_v0_regs __iomem *__regs = (dw)->chip->reg_base; \
+									       \
+		(void __iomem *)&__regs->name;				       \
+	})
 
-#define WR_REGISTER(name) \
-	{ #name, REGS_ADDR(wr_##name) }
-#define RD_REGISTER(name) \
-	{ #name, REGS_ADDR(rd_##name) }
+#define REGS_CH_ADDR(dw, name, _dir, _ch)				       \
+	({								       \
+		struct dw_edma_v0_ch_regs __iomem *__ch_regs;		       \
+									       \
+		if ((dw)->chip->mf == EDMA_MF_EDMA_LEGACY)		       \
+			__ch_regs = REGS_ADDR(dw, type.legacy.ch);	       \
+		else if (_dir == EDMA_DIR_READ)				       \
+			__ch_regs = REGS_ADDR(dw, type.unroll.ch[_ch].rd);     \
+		else							       \
+			__ch_regs = REGS_ADDR(dw, type.unroll.ch[_ch].wr);     \
+									       \
+		(void __iomem *)&__ch_regs->name;			       \
+	})
+
+#define REGISTER(dw, name) \
+	{ dw, #name, REGS_ADDR(dw, name) }
+
+#define CTX_REGISTER(dw, name, dir, ch) \
+	{ dw, #name, REGS_CH_ADDR(dw, name, dir, ch), dir, ch }
+
+#define WR_REGISTER(dw, name) \
+	{ dw, #name, REGS_ADDR(dw, wr_##name) }
+#define RD_REGISTER(dw, name) \
+	{ dw, #name, REGS_ADDR(dw, rd_##name) }
 
-#define WR_REGISTER_LEGACY(name) \
-	{ #name, REGS_ADDR(type.legacy.wr_##name) }
+#define WR_REGISTER_LEGACY(dw, name) \
+	{ dw, #name, REGS_ADDR(dw, type.legacy.wr_##name) }
 #define RD_REGISTER_LEGACY(name) \
-	{ #name, REGS_ADDR(type.legacy.rd_##name) }
+	{ dw, #name, REGS_ADDR(dw, type.legacy.rd_##name) }
 
-#define WR_REGISTER_UNROLL(name) \
-	{ #name, REGS_ADDR(type.unroll.wr_##name) }
-#define RD_REGISTER_UNROLL(name) \
-	{ #name, REGS_ADDR(type.unroll.rd_##name) }
+#define WR_REGISTER_UNROLL(dw, name) \
+	{ dw, #name, REGS_ADDR(dw, type.unroll.wr_##name) }
+#define RD_REGISTER_UNROLL(dw, name) \
+	{ dw, #name, REGS_ADDR(dw, type.unroll.rd_##name) }
 
 #define WRITE_STR				"write"
 #define READ_STR				"read"
 #define CHANNEL_STR				"channel"
 #define REGISTERS_STR				"registers"
 
-static struct dw_edma				*dw;
-static struct dw_edma_v0_regs			__iomem *regs;
-
-static struct {
-	void __iomem				*start;
-	void __iomem				*end;
-} lim[2][EDMA_V0_MAX_NR_CH];
-
-struct debugfs_entries {
+struct dw_edma_debugfs_entry {
+	struct dw_edma				*dw;
 	const char				*name;
-	dma_addr_t				*reg;
+	void __iomem				*reg;
+	enum dw_edma_dir			dir;
+	u16					ch;
 };
 
 static int dw_edma_debugfs_u32_get(void *data, u64 *val)
 {
-	void __iomem *reg = (void __force __iomem *)data;
+	struct dw_edma_debugfs_entry *entry = data;
+	struct dw_edma *dw = entry->dw;
+	void __iomem *reg = entry->reg;
+
 	if (dw->chip->mf == EDMA_MF_EDMA_LEGACY &&
-	    reg >= (void __iomem *)&regs->type.legacy.ch) {
-		void __iomem *ptr = &regs->type.legacy.ch;
-		u32 viewport_sel = 0;
+	    reg >= REGS_ADDR(dw, type.legacy.ch)) {
 		unsigned long flags;
-		u16 ch;
+		u32 viewport_sel;
 
-		for (ch = 0; ch < dw->wr_ch_cnt; ch++)
-			if (lim[0][ch].start >= reg && reg < lim[0][ch].end) {
-				ptr += (reg - lim[0][ch].start);
-				goto legacy_sel_wr;
-			}
-
-		for (ch = 0; ch < dw->rd_ch_cnt; ch++)
-			if (lim[1][ch].start >= reg && reg < lim[1][ch].end) {
-				ptr += (reg - lim[1][ch].start);
-				goto legacy_sel_rd;
-			}
-
-		return 0;
-legacy_sel_rd:
-		viewport_sel = BIT(31);
-legacy_sel_wr:
-		viewport_sel |= FIELD_PREP(EDMA_V0_VIEWPORT_MASK, ch);
+		viewport_sel = entry->dir == EDMA_DIR_READ ? BIT(31) : 0;
+		viewport_sel |= FIELD_PREP(EDMA_V0_VIEWPORT_MASK, entry->ch);
 
 		raw_spin_lock_irqsave(&dw->lock, flags);
 
-		writel(viewport_sel, &regs->type.legacy.viewport_sel);
-		*val = readl(ptr);
+		writel(viewport_sel, REGS_ADDR(dw, type.legacy.viewport_sel));
+		*val = readl(reg);
 
 		raw_spin_unlock_irqrestore(&dw->lock, flags);
 	} else {
@@ -96 +93 @@
 }
 DEFINE_DEBUGFS_ATTRIBUTE(fops_x32, dw_edma_debugfs_u32_get, NULL, "0x%08llx\n");
 
-static void dw_edma_debugfs_create_x32(const struct debugfs_entries entries[],
-				       int nr_entries, struct dentry *dir)
+static void dw_edma_debugfs_create_x32(struct dw_edma *dw,
+				       const struct dw_edma_debugfs_entry ini[],
+				       int nr_entries, struct dentry *dent)
 {
+	struct dw_edma_debugfs_entry *entries;
 	int i;
 
+	entries = devm_kcalloc(dw->chip->dev, nr_entries, sizeof(*entries),
+			       GFP_KERNEL);
+	if (!entries)
+		return;
+
 	for (i = 0; i < nr_entries; i++) {
-		if (!debugfs_create_file_unsafe(entries[i].name, 0444, dir,
-						entries[i].reg, &fops_x32))
-			break;
+		entries[i] = ini[i];
+
+		debugfs_create_file_unsafe(entries[i].name, 0444, dent,
+					   &entries[i], &fops_x32);
 	}
 }
 
-static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs,
-				    struct dentry *dir)
+static void dw_edma_debugfs_regs_ch(struct dw_edma *dw, enum dw_edma_dir dir,
+				    u16 ch, struct dentry *dent)
 {
-	int nr_entries;
-	const struct debugfs_entries debugfs_regs[] = {
-		REGISTER(ch_control1),
-		REGISTER(ch_control2),
-		REGISTER(transfer_size),
-		REGISTER(sar.lsb),
-		REGISTER(sar.msb),
-		REGISTER(dar.lsb),
-		REGISTER(dar.msb),
-		REGISTER(llp.lsb),
-		REGISTER(llp.msb),
+	struct dw_edma_debugfs_entry debugfs_regs[] = {
+		CTX_REGISTER(dw, ch_control1, dir, ch),
+		CTX_REGISTER(dw, ch_control2, dir, ch),
+		CTX_REGISTER(dw, transfer_size, dir, ch),
+		CTX_REGISTER(dw, sar.lsb, dir, ch),
+		CTX_REGISTER(dw, sar.msb, dir, ch),
+		CTX_REGISTER(dw, dar.lsb, dir, ch),
+		CTX_REGISTER(dw, dar.msb, dir, ch),
+		CTX_REGISTER(dw, llp.lsb, dir, ch),
+		CTX_REGISTER(dw, llp.msb, dir, ch),
 	};
+	int nr_entries;
 
 	nr_entries = ARRAY_SIZE(debugfs_regs);
-	dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, dir);
+	dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, dent);
 }
 
-static void dw_edma_debugfs_regs_wr(struct dentry *dir)
+static noinline_for_stack void
+dw_edma_debugfs_regs_wr(struct dw_edma *dw, struct dentry *dent)
 {
-	const struct debugfs_entries debugfs_regs[] = {
+	const struct dw_edma_debugfs_entry debugfs_regs[] = {
 		/* eDMA global registers */
-		WR_REGISTER(engine_en),
-		WR_REGISTER(doorbell),
-		WR_REGISTER(ch_arb_weight.lsb),
-		WR_REGISTER(ch_arb_weight.msb),
+		WR_REGISTER(dw, engine_en),
+		WR_REGISTER(dw, doorbell),
+		WR_REGISTER(dw, ch_arb_weight.lsb),
+		WR_REGISTER(dw, ch_arb_weight.msb),
 		/* eDMA interrupts registers */
-		WR_REGISTER(int_status),
-		WR_REGISTER(int_mask),
-		WR_REGISTER(int_clear),
-		WR_REGISTER(err_status),
-		WR_REGISTER(done_imwr.lsb),
-		WR_REGISTER(done_imwr.msb),
-		WR_REGISTER(abort_imwr.lsb),
-		WR_REGISTER(abort_imwr.msb),
-		WR_REGISTER(ch01_imwr_data),
-		WR_REGISTER(ch23_imwr_data),
-		WR_REGISTER(ch45_imwr_data),
-		WR_REGISTER(ch67_imwr_data),
-		WR_REGISTER(linked_list_err_en),
+		WR_REGISTER(dw, int_status),
+		WR_REGISTER(dw, int_mask),
+		WR_REGISTER(dw, int_clear),
+		WR_REGISTER(dw, err_status),
+		WR_REGISTER(dw, done_imwr.lsb),
+		WR_REGISTER(dw, done_imwr.msb),
+		WR_REGISTER(dw, abort_imwr.lsb),
+		WR_REGISTER(dw, abort_imwr.msb),
+		WR_REGISTER(dw, ch01_imwr_data),
+		WR_REGISTER(dw, ch23_imwr_data),
+		WR_REGISTER(dw, ch45_imwr_data),
+		WR_REGISTER(dw, ch67_imwr_data),
+		WR_REGISTER(dw, linked_list_err_en),
 	};
-	const struct debugfs_entries debugfs_unroll_regs[] = {
+	const struct dw_edma_debugfs_entry debugfs_unroll_regs[] = {
 		/* eDMA channel context grouping */
-		WR_REGISTER_UNROLL(engine_chgroup),
-		WR_REGISTER_UNROLL(engine_hshake_cnt.lsb),
-		WR_REGISTER_UNROLL(engine_hshake_cnt.msb),
-		WR_REGISTER_UNROLL(ch0_pwr_en),
-		WR_REGISTER_UNROLL(ch1_pwr_en),
-		WR_REGISTER_UNROLL(ch2_pwr_en),
-		WR_REGISTER_UNROLL(ch3_pwr_en),
-		WR_REGISTER_UNROLL(ch4_pwr_en),
-		WR_REGISTER_UNROLL(ch5_pwr_en),
-		WR_REGISTER_UNROLL(ch6_pwr_en),
-		WR_REGISTER_UNROLL(ch7_pwr_en),
+		WR_REGISTER_UNROLL(dw, engine_chgroup),
+		WR_REGISTER_UNROLL(dw, engine_hshake_cnt.lsb),
+		WR_REGISTER_UNROLL(dw, engine_hshake_cnt.msb),
+		WR_REGISTER_UNROLL(dw, ch0_pwr_en),
+		WR_REGISTER_UNROLL(dw, ch1_pwr_en),
+		WR_REGISTER_UNROLL(dw, ch2_pwr_en),
+		WR_REGISTER_UNROLL(dw, ch3_pwr_en),
+		WR_REGISTER_UNROLL(dw, ch4_pwr_en),
+		WR_REGISTER_UNROLL(dw, ch5_pwr_en),
+		WR_REGISTER_UNROLL(dw, ch6_pwr_en),
+		WR_REGISTER_UNROLL(dw, ch7_pwr_en),
 	};
-	struct dentry *regs_dir, *ch_dir;
+	struct dentry *regs_dent, *ch_dent;
 	int nr_entries, i;
 	char name[16];
 
-	regs_dir = debugfs_create_dir(WRITE_STR, dir);
-	if (!regs_dir)
-		return;
+	regs_dent = debugfs_create_dir(WRITE_STR, dent);
 
 	nr_entries = ARRAY_SIZE(debugfs_regs);
-	dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir);
+	dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent);
 
 	if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) {
 		nr_entries = ARRAY_SIZE(debugfs_unroll_regs);
-		dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries,
-					   regs_dir);
+		dw_edma_debugfs_create_x32(dw, debugfs_unroll_regs, nr_entries,
+					   regs_dent);
 	}
 
 	for (i = 0; i < dw->wr_ch_cnt; i++) {
 		snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i);
 
-		ch_dir = debugfs_create_dir(name, regs_dir);
-		if (!ch_dir)
-			return;
+		ch_dent = debugfs_create_dir(name, regs_dent);
 
-		dw_edma_debugfs_regs_ch(&regs->type.unroll.ch[i].wr, ch_dir);
-
-		lim[0][i].start = &regs->type.unroll.ch[i].wr;
-		lim[0][i].end = &regs->type.unroll.ch[i].padding_1[0];
+		dw_edma_debugfs_regs_ch(dw, EDMA_DIR_WRITE, i, ch_dent);
 	}
 }
 
-static void dw_edma_debugfs_regs_rd(struct dentry *dir)
+static noinline_for_stack void dw_edma_debugfs_regs_rd(struct dw_edma *dw,
+						       struct dentry *dent)
 {
-	const struct debugfs_entries debugfs_regs[] = {
+	const struct dw_edma_debugfs_entry debugfs_regs[] = {
 		/* eDMA global registers */
-		RD_REGISTER(engine_en),
-		RD_REGISTER(doorbell),
-		RD_REGISTER(ch_arb_weight.lsb),
-		RD_REGISTER(ch_arb_weight.msb),
+		RD_REGISTER(dw, engine_en),
+		RD_REGISTER(dw, doorbell),
+		RD_REGISTER(dw, ch_arb_weight.lsb),
+		RD_REGISTER(dw, ch_arb_weight.msb),
 		/* eDMA interrupts registers */
-		RD_REGISTER(int_status),
-		RD_REGISTER(int_mask),
-		RD_REGISTER(int_clear),
-		RD_REGISTER(err_status.lsb),
-		RD_REGISTER(err_status.msb),
-		RD_REGISTER(linked_list_err_en),
-		RD_REGISTER(done_imwr.lsb),
-		RD_REGISTER(done_imwr.msb),
-		RD_REGISTER(abort_imwr.lsb),
-		RD_REGISTER(abort_imwr.msb),
-		RD_REGISTER(ch01_imwr_data),
-		RD_REGISTER(ch23_imwr_data),
-		RD_REGISTER(ch45_imwr_data),
-		RD_REGISTER(ch67_imwr_data),
+		RD_REGISTER(dw, int_status),
+		RD_REGISTER(dw, int_mask),
+		RD_REGISTER(dw, int_clear),
+		RD_REGISTER(dw, err_status.lsb),
+		RD_REGISTER(dw, err_status.msb),
+		RD_REGISTER(dw, linked_list_err_en),
+		RD_REGISTER(dw, done_imwr.lsb),
+		RD_REGISTER(dw, done_imwr.msb),
+		RD_REGISTER(dw, abort_imwr.lsb),
+		RD_REGISTER(dw, abort_imwr.msb),
+		RD_REGISTER(dw, ch01_imwr_data),
+		RD_REGISTER(dw, ch23_imwr_data),
+		RD_REGISTER(dw, ch45_imwr_data),
+		RD_REGISTER(dw, ch67_imwr_data),
 	};
-	const struct debugfs_entries debugfs_unroll_regs[] = {
+	const struct dw_edma_debugfs_entry debugfs_unroll_regs[] = {
 		/* eDMA channel context grouping */
-		RD_REGISTER_UNROLL(engine_chgroup),
-		RD_REGISTER_UNROLL(engine_hshake_cnt.lsb),
-		RD_REGISTER_UNROLL(engine_hshake_cnt.msb),
-		RD_REGISTER_UNROLL(ch0_pwr_en),
-		RD_REGISTER_UNROLL(ch1_pwr_en),
-		RD_REGISTER_UNROLL(ch2_pwr_en),
-		RD_REGISTER_UNROLL(ch3_pwr_en),
-		RD_REGISTER_UNROLL(ch4_pwr_en),
-		RD_REGISTER_UNROLL(ch5_pwr_en),
-		RD_REGISTER_UNROLL(ch6_pwr_en),
-		RD_REGISTER_UNROLL(ch7_pwr_en),
+		RD_REGISTER_UNROLL(dw, engine_chgroup),
+		RD_REGISTER_UNROLL(dw, engine_hshake_cnt.lsb),
+		RD_REGISTER_UNROLL(dw, engine_hshake_cnt.msb),
+		RD_REGISTER_UNROLL(dw, ch0_pwr_en),
+		RD_REGISTER_UNROLL(dw, ch1_pwr_en),
+		RD_REGISTER_UNROLL(dw, ch2_pwr_en),
+		RD_REGISTER_UNROLL(dw, ch3_pwr_en),
+		RD_REGISTER_UNROLL(dw, ch4_pwr_en),
+		RD_REGISTER_UNROLL(dw, ch5_pwr_en),
+		RD_REGISTER_UNROLL(dw, ch6_pwr_en),
+		RD_REGISTER_UNROLL(dw, ch7_pwr_en),
 	};
-	struct dentry *regs_dir, *ch_dir;
+	struct dentry *regs_dent, *ch_dent;
 	int nr_entries, i;
 	char name[16];
 
-	regs_dir = debugfs_create_dir(READ_STR, dir);
-	if (!regs_dir)
-		return;
+	regs_dent = debugfs_create_dir(READ_STR, dent);
 
 	nr_entries = ARRAY_SIZE(debugfs_regs);
-	dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir);
+	dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent);
 
 	if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) {
 		nr_entries = ARRAY_SIZE(debugfs_unroll_regs);
-		dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries,
-					   regs_dir);
+		dw_edma_debugfs_create_x32(dw, debugfs_unroll_regs, nr_entries,
+					   regs_dent);
 	}
 
 	for (i = 0; i < dw->rd_ch_cnt; i++) {
 		snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i);
 
-		ch_dir = debugfs_create_dir(name, regs_dir);
-		if (!ch_dir)
-			return;
+		ch_dent = debugfs_create_dir(name, regs_dent);
 
-		dw_edma_debugfs_regs_ch(&regs->type.unroll.ch[i].rd, ch_dir);
-
-		lim[1][i].start = &regs->type.unroll.ch[i].rd;
-		lim[1][i].end = &regs->type.unroll.ch[i].padding_2[0];
+		dw_edma_debugfs_regs_ch(dw, EDMA_DIR_READ, i, ch_dent);
 	}
 }
 
-static void dw_edma_debugfs_regs(void)
+static void dw_edma_debugfs_regs(struct dw_edma *dw)
 {
-	const struct debugfs_entries debugfs_regs[] = {
-		REGISTER(ctrl_data_arb_prior),
-		REGISTER(ctrl),
+	const struct dw_edma_debugfs_entry debugfs_regs[] = {
+		REGISTER(dw, ctrl_data_arb_prior),
+		REGISTER(dw, ctrl),
 	};
-	struct dentry *regs_dir;
+	struct dentry *regs_dent;
 	int nr_entries;
 
-	regs_dir = debugfs_create_dir(REGISTERS_STR, dw->debugfs);
-	if (!regs_dir)
-		return;
+	regs_dent = debugfs_create_dir(REGISTERS_STR, dw->dma.dbg_dev_root);
 
 	nr_entries = ARRAY_SIZE(debugfs_regs);
-	dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir);
+	dw_edma_debugfs_create_x32(dw, debugfs_regs, nr_entries, regs_dent);
 
-	dw_edma_debugfs_regs_wr(regs_dir);
-	dw_edma_debugfs_regs_rd(regs_dir);
+	dw_edma_debugfs_regs_wr(dw, regs_dent);
+	dw_edma_debugfs_regs_rd(dw, regs_dent);
 }
 
-void dw_edma_v0_debugfs_on(struct dw_edma *_dw)
+void dw_edma_v0_debugfs_on(struct dw_edma *dw)
 {
-	dw = _dw;
-	if (!dw)
+	if (!debugfs_initialized())
 		return;
 
-	regs = dw->chip->reg_base;
-	if (!regs)
-		return;
+	debugfs_create_u32("mf", 0444, dw->dma.dbg_dev_root, &dw->chip->mf);
+	debugfs_create_u16("wr_ch_cnt", 0444, dw->dma.dbg_dev_root, &dw->wr_ch_cnt);
+	debugfs_create_u16("rd_ch_cnt", 0444, dw->dma.dbg_dev_root, &dw->rd_ch_cnt);
 
-	dw->debugfs = debugfs_create_dir(dw->name, NULL);
-	if (!dw->debugfs)
-		return;
-
-	debugfs_create_u32("mf", 0444, dw->debugfs, &dw->chip->mf);
-	debugfs_create_u16("wr_ch_cnt", 0444, dw->debugfs, &dw->wr_ch_cnt);
-	debugfs_create_u16("rd_ch_cnt", 0444, dw->debugfs, &dw->rd_ch_cnt);
-
-	dw_edma_debugfs_regs();
-}
-
-void dw_edma_v0_debugfs_off(struct dw_edma *_dw)
-{
-	dw = _dw;
-	if (!dw)
-		return;
-
-	debugfs_remove_recursive(dw->debugfs);
-	dw->debugfs = NULL;
+	dw_edma_debugfs_regs(dw);
 }
-5
drivers/dma/dw-edma/dw-edma-v0-debugfs.h
@@ -13 +13 @@
 
 #ifdef CONFIG_DEBUG_FS
 void dw_edma_v0_debugfs_on(struct dw_edma *dw);
-void dw_edma_v0_debugfs_off(struct dw_edma *dw);
 #else
 static inline void dw_edma_v0_debugfs_on(struct dw_edma *dw)
-{
-}
-
-static inline void dw_edma_v0_debugfs_off(struct dw_edma *dw)
 {
 }
 #endif /* CONFIG_DEBUG_FS */
+3 -1
drivers/misc/pci_endpoint_test.c
@@ -1 +1 @@
 // SPDX-License-Identifier: GPL-2.0-only
-/**
+/*
  * Host side test driver to test endpoint functionality
  *
  * Copyright (C) 2017 Texas Instruments
@@ -72 +72 @@
 #define PCI_DEVICE_ID_TI_J7200			0xb00f
 #define PCI_DEVICE_ID_TI_AM64			0xb010
 #define PCI_DEVICE_ID_LS1088A			0x80c0
+#define PCI_DEVICE_ID_IMX8			0x0808
 
 #define is_am654_pci_dev(pdev)		\
 		((pdev)->device == PCI_DEVICE_ID_TI_AM654)
@@ -981 +980 @@
 	{ PCI_DEVICE(PCI_VENDOR_ID_FREESCALE, 0x81c0),
 	  .driver_data = (kernel_ulong_t)&default_data,
 	},
+	{ PCI_DEVICE(PCI_VENDOR_ID_FREESCALE, PCI_DEVICE_ID_IMX8),},
 	{ PCI_DEVICE(PCI_VENDOR_ID_FREESCALE, PCI_DEVICE_ID_LS1088A),
 	  .driver_data = (kernel_ulong_t)&default_data,
 	},
+2 -1
drivers/pci/controller/Kconfig
@@ -9 +9 @@
 	depends on MVEBU_MBUS
 	depends on ARM
 	depends on OF
+	depends on BROKEN
 	select PCI_BRIDGE_EMUL
 	help
 	  Add support for Marvell EBU PCIe controller. This PCIe controller
@@ -286 +285 @@
 
 config PCI_HYPERV_INTERFACE
 	tristate "Hyper-V PCI Interface"
-	depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI && PCI_MSI
+	depends on ((X86 && X86_64) || ARM64) && HYPERV && PCI_MSI
 	help
 	  The Hyper-V PCI Interface is a helper driver allows other drivers to
 	  have a common interface with the Hyper-V PCI frontend driver.
+22 -1
drivers/pci/controller/dwc/Kconfig
@@ -92 +92 @@
 	  functions to implement the driver.
 
 config PCI_IMX6
-	bool "Freescale i.MX6/7/8 PCIe controller"
+	bool
+
+config PCI_IMX6_HOST
+	bool "Freescale i.MX6/7/8 PCIe controller host mode"
 	depends on ARCH_MXC || COMPILE_TEST
 	depends on PCI_MSI
 	select PCIE_DW_HOST
+	select PCI_IMX6
+	help
+	  Enables support for the PCIe controller in the i.MX SoCs to
+	  work in Root Complex mode. The PCI controller on i.MX is based
+	  on DesignWare hardware and therefore the driver re-uses the
+	  DesignWare core functions to implement the driver.
+
+config PCI_IMX6_EP
+	bool "Freescale i.MX6/7/8 PCIe controller endpoint mode"
+	depends on ARCH_MXC || COMPILE_TEST
+	depends on PCI_ENDPOINT
+	select PCIE_DW_EP
+	select PCI_IMX6
+	help
+	  Enables support for the PCIe controller in the i.MX SoCs to
+	  work in endpoint mode. The PCI controller on i.MX is based
+	  on DesignWare hardware and therefore the driver re-uses the
+	  DesignWare core functions to implement the driver.
 
 config PCIE_SPEAR13XX
 	bool "STMicroelectronics SPEAr PCIe controller"
+1 -1
drivers/pci/controller/dwc/pci-dra7xx.c
@@ -840 +840 @@
 	}
 	dra7xx->mode = mode;
 
-	ret = devm_request_irq(dev, irq, dra7xx_pcie_irq_handler,
+	ret = devm_request_threaded_irq(dev, irq, NULL, dra7xx_pcie_irq_handler,
 			       IRQF_SHARED, "dra7xx-pcie-main", dra7xx);
 	if (ret) {
 		dev_err(dev, "failed to request irq\n");
+182 -18
drivers/pci/controller/dwc/pci-imx6.c
@@ -52 +52 @@
 	IMX8MQ,
 	IMX8MM,
 	IMX8MP,
+	IMX8MQ_EP,
+	IMX8MM_EP,
+	IMX8MP_EP,
 };
 
 #define IMX6_PCIE_FLAG_IMX6_PHY			BIT(0)
@@ -63 +60 @@
 
 struct imx6_pcie_drvdata {
 	enum imx6_pcie_variants variant;
+	enum dw_pcie_device_mode mode;
 	u32 flags;
 	int dbi_length;
 	const char *gpr;
@@ -156 +152 @@
 static unsigned int imx6_pcie_grp_offset(const struct imx6_pcie *imx6_pcie)
 {
 	WARN_ON(imx6_pcie->drvdata->variant != IMX8MQ &&
+		imx6_pcie->drvdata->variant != IMX8MQ_EP &&
 		imx6_pcie->drvdata->variant != IMX8MM &&
-		imx6_pcie->drvdata->variant != IMX8MP);
+		imx6_pcie->drvdata->variant != IMX8MM_EP &&
+		imx6_pcie->drvdata->variant != IMX8MP &&
+		imx6_pcie->drvdata->variant != IMX8MP_EP);
 	return imx6_pcie->controller_id == 1 ? IOMUXC_GPR16 : IOMUXC_GPR14;
 }
 
 static void imx6_pcie_configure_type(struct imx6_pcie *imx6_pcie)
 {
-	unsigned int mask, val;
+	unsigned int mask, val, mode;
 
-	if (imx6_pcie->drvdata->variant == IMX8MQ &&
-	    imx6_pcie->controller_id == 1) {
-		mask = IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE;
-		val = FIELD_PREP(IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE,
-				 PCI_EXP_TYPE_ROOT_PORT);
-	} else {
+	if (imx6_pcie->drvdata->mode == DW_PCIE_EP_TYPE)
+		mode = PCI_EXP_TYPE_ENDPOINT;
+	else
+		mode = PCI_EXP_TYPE_ROOT_PORT;
+
+	switch (imx6_pcie->drvdata->variant) {
+	case IMX8MQ:
+	case IMX8MQ_EP:
+		if (imx6_pcie->controller_id == 1) {
+			mask = IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE;
+			val = FIELD_PREP(IMX8MQ_GPR12_PCIE2_CTRL_DEVICE_TYPE,
+					 mode);
+		} else {
+			mask = IMX6Q_GPR12_DEVICE_TYPE;
+			val = FIELD_PREP(IMX6Q_GPR12_DEVICE_TYPE, mode);
+		}
+		break;
+	default:
 		mask = IMX6Q_GPR12_DEVICE_TYPE;
-		val = FIELD_PREP(IMX6Q_GPR12_DEVICE_TYPE,
-				 PCI_EXP_TYPE_ROOT_PORT);
+		val = FIELD_PREP(IMX6Q_GPR12_DEVICE_TYPE, mode);
+		break;
 	}
 
 	regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, mask, val);
@@ -323 +304 @@
 {
 	switch (imx6_pcie->drvdata->variant) {
 	case IMX8MM:
+	case IMX8MM_EP:
 	case IMX8MP:
+	case IMX8MP_EP:
 		/*
 		 * The PHY initialization had been done in the PHY
 		 * driver, break here directly.
 		 */
 		break;
 	case IMX8MQ:
+	case IMX8MQ_EP:
 		/*
 		 * TODO: Currently this code assumes external
 		 * oscillator is being used
@@ -583 +561 @@
 	case IMX7D:
 		break;
 	case IMX8MM:
+	case IMX8MM_EP:
 	case IMX8MQ:
+	case IMX8MQ_EP:
 	case IMX8MP:
+	case IMX8MP_EP:
 		ret = clk_prepare_enable(imx6_pcie->pcie_aux);
 		if (ret) {
 			dev_err(dev, "unable to enable pcie_aux clock\n");
@@ -631 +606 @@
 			   IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
 		break;
 	case IMX8MM:
+	case IMX8MM_EP:
 	case IMX8MQ:
+	case IMX8MQ_EP:
 	case IMX8MP:
+	case IMX8MP_EP:
 		clk_disable_unprepare(imx6_pcie->pcie_aux);
 		break;
 	default:
@@ -700 +672 @@
 	switch (imx6_pcie->drvdata->variant) {
 	case IMX7D:
 	case IMX8MQ:
+	case IMX8MQ_EP:
 		reset_control_assert(imx6_pcie->pciephy_reset);
 		fallthrough;
 	case IMX8MM:
+	case IMX8MM_EP:
 	case IMX8MP:
+	case IMX8MP_EP:
 		reset_control_assert(imx6_pcie->apps_reset);
 		break;
 	case IMX6SX:
@@ -744 +713 @@
 
 	switch (imx6_pcie->drvdata->variant) {
 	case IMX8MQ:
+	case IMX8MQ_EP:
 		reset_control_deassert(imx6_pcie->pciephy_reset);
 		break;
 	case IMX7D:
@@ -783 +751 @@
 		break;
 	case IMX6Q:		/* Nothing to do */
 	case IMX8MM:
+	case IMX8MM_EP:
 	case IMX8MP:
+	case IMX8MP_EP:
 		break;
 	}
 
@@ -834 +800 @@
 		break;
 	case IMX7D:
 	case IMX8MQ:
+	case IMX8MQ_EP:
 	case IMX8MM:
+	case IMX8MM_EP:
 	case IMX8MP:
+	case IMX8MP_EP:
 		reset_control_deassert(imx6_pcie->apps_reset);
 		break;
 	}
@@ -857 +820 @@
 		break;
 	case IMX7D:
 	case IMX8MQ:
+	case IMX8MQ_EP:
 	case IMX8MM:
+	case IMX8MM_EP:
 	case IMX8MP:
+	case IMX8MP_EP:
 		reset_control_assert(imx6_pcie->apps_reset);
 		break;
 	}
@@ -1043 +1003 @@
 
 static const struct dw_pcie_ops dw_pcie_ops = {
 	.start_link = imx6_pcie_start_link,
+	.stop_link = imx6_pcie_stop_link,
 };
+
+static void imx6_pcie_ep_init(struct dw_pcie_ep *ep)
+{
+	enum pci_barno bar;
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+
+	for (bar = BAR_0; bar <= BAR_5; bar++)
+		dw_pcie_ep_reset_bar(pci, bar);
+}
+
+static int imx6_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
+				  enum pci_epc_irq_type type,
+				  u16 interrupt_num)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+
+	switch (type) {
+	case PCI_EPC_IRQ_LEGACY:
+		return dw_pcie_ep_raise_legacy_irq(ep, func_no);
+	case PCI_EPC_IRQ_MSI:
+		return dw_pcie_ep_raise_msi_irq(ep, func_no, interrupt_num);
+	case PCI_EPC_IRQ_MSIX:
+		return dw_pcie_ep_raise_msix_irq(ep, func_no, interrupt_num);
+	default:
+		dev_err(pci->dev, "UNKNOWN IRQ type\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static const struct pci_epc_features imx8m_pcie_epc_features = {
+	.linkup_notifier = false,
+	.msi_capable = true,
+	.msix_capable = false,
+	.reserved_bar = 1 << BAR_1 | 1 << BAR_3,
+	.align = SZ_64K,
+};
+
+static const struct pci_epc_features*
+imx6_pcie_ep_get_features(struct dw_pcie_ep *ep)
+{
+	return &imx8m_pcie_epc_features;
+}
+
+static const struct dw_pcie_ep_ops pcie_ep_ops = {
+	.ep_init =
imx6_pcie_ep_init, 1055 + .raise_irq = imx6_pcie_ep_raise_irq, 1056 + .get_features = imx6_pcie_ep_get_features, 1057 + }; 1058 + 1059 + static int imx6_add_pcie_ep(struct imx6_pcie *imx6_pcie, 1060 + struct platform_device *pdev) 1061 + { 1062 + int ret; 1063 + unsigned int pcie_dbi2_offset; 1064 + struct dw_pcie_ep *ep; 1065 + struct resource *res; 1066 + struct dw_pcie *pci = imx6_pcie->pci; 1067 + struct dw_pcie_rp *pp = &pci->pp; 1068 + struct device *dev = pci->dev; 1069 + 1070 + imx6_pcie_host_init(pp); 1071 + ep = &pci->ep; 1072 + ep->ops = &pcie_ep_ops; 1073 + 1074 + switch (imx6_pcie->drvdata->variant) { 1075 + case IMX8MQ_EP: 1076 + case IMX8MM_EP: 1077 + case IMX8MP_EP: 1078 + pcie_dbi2_offset = SZ_1M; 1079 + break; 1080 + default: 1081 + pcie_dbi2_offset = SZ_4K; 1082 + break; 1083 + } 1084 + pci->dbi_base2 = pci->dbi_base + pcie_dbi2_offset; 1085 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "addr_space"); 1086 + if (!res) 1087 + return -EINVAL; 1088 + 1089 + ep->phys_base = res->start; 1090 + ep->addr_size = resource_size(res); 1091 + ep->page_size = SZ_64K; 1092 + 1093 + ret = dw_pcie_ep_init(ep); 1094 + if (ret) { 1095 + dev_err(dev, "failed to initialize endpoint\n"); 1096 + return ret; 1097 + } 1098 + /* Start LTSSM. */ 1099 + imx6_pcie_ltssm_enable(dev); 1100 + 1101 + return 0; 1102 + } 1047 1103 1048 1104 static void imx6_pcie_pm_turnoff(struct imx6_pcie *imx6_pcie) 1049 1105 { ··· 1302 1166 "pcie_inbound_axi clock missing or invalid\n"); 1303 1167 break; 1304 1168 case IMX8MQ: 1169 + case IMX8MQ_EP: 1305 1170 imx6_pcie->pcie_aux = devm_clk_get(dev, "pcie_aux"); 1306 1171 if (IS_ERR(imx6_pcie->pcie_aux)) 1307 1172 return dev_err_probe(dev, PTR_ERR(imx6_pcie->pcie_aux), ··· 1327 1190 } 1328 1191 break; 1329 1192 case IMX8MM: 1193 + case IMX8MM_EP: 1330 1194 case IMX8MP: 1195 + case IMX8MP_EP: 1331 1196 imx6_pcie->pcie_aux = devm_clk_get(dev, "pcie_aux"); 1332 1197 if (IS_ERR(imx6_pcie->pcie_aux)) 1333 1198 return dev_err_probe(dev, PTR_ERR(imx6_pcie->pcie_aux), ··· 1418 1279 if (ret) 1419 1280 return ret; 1420 1281 1421 - ret = dw_pcie_host_init(&pci->pp); 1422 - if (ret < 0) 1423 - return ret; 1282 + if (imx6_pcie->drvdata->mode == DW_PCIE_EP_TYPE) { 1283 + ret = imx6_add_pcie_ep(imx6_pcie, pdev); 1284 + if (ret < 0) 1285 + return ret; 1286 + } else { 1287 + ret = dw_pcie_host_init(&pci->pp); 1288 + if (ret < 0) 1289 + return ret; 1424 1290 1425 - if (pci_msi_enabled()) { 1426 - u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI); 1427 - val = dw_pcie_readw_dbi(pci, offset + PCI_MSI_FLAGS); 1428 - val |= PCI_MSI_FLAGS_ENABLE; 1429 - dw_pcie_writew_dbi(pci, offset + PCI_MSI_FLAGS, val); 1291 + if (pci_msi_enabled()) { 1292 + u8 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_MSI); 1293 + 1294 + val = dw_pcie_readw_dbi(pci, offset + PCI_MSI_FLAGS); 1295 + val |= PCI_MSI_FLAGS_ENABLE; 1296 + dw_pcie_writew_dbi(pci, offset + PCI_MSI_FLAGS, val); 1297 + } 1430 1298 } 1431 1299 1432 1300 return 0; ··· 1489 1343 .flags = IMX6_PCIE_FLAG_SUPPORTS_SUSPEND, 1490 1344 .gpr = "fsl,imx8mp-iomuxc-gpr", 1491 1345 }, 1346 + [IMX8MQ_EP] = { 1347 + .variant = IMX8MQ_EP, 1348 + .mode = DW_PCIE_EP_TYPE, 1349 + .gpr = "fsl,imx8mq-iomuxc-gpr", 1350 + }, 1351 + [IMX8MM_EP] = { 1352 + .variant = IMX8MM_EP, 1353 + .mode = DW_PCIE_EP_TYPE, 1354 + .gpr = "fsl,imx8mm-iomuxc-gpr", 1355 + }, 1356 + [IMX8MP_EP] = { 1357 + .variant = IMX8MP_EP, 1358 + .mode = DW_PCIE_EP_TYPE, 1359 + .gpr = "fsl,imx8mp-iomuxc-gpr", 1360 + }, 1492 1361 }; 1493 1362 1494 1363 static const struct 
of_device_id imx6_pcie_of_match[] = { ··· 1514 1353 { .compatible = "fsl,imx8mq-pcie", .data = &drvdata[IMX8MQ], }, 1515 1354 { .compatible = "fsl,imx8mm-pcie", .data = &drvdata[IMX8MM], }, 1516 1355 { .compatible = "fsl,imx8mp-pcie", .data = &drvdata[IMX8MP], }, 1356 + { .compatible = "fsl,imx8mq-pcie-ep", .data = &drvdata[IMX8MQ_EP], }, 1357 + { .compatible = "fsl,imx8mm-pcie-ep", .data = &drvdata[IMX8MM_EP], }, 1358 + { .compatible = "fsl,imx8mp-pcie-ep", .data = &drvdata[IMX8MP_EP], }, 1517 1359 {}, 1518 1360 }; 1519 1361
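For context on how imx8m_pcie_epc_features is consumed: an endpoint function driver queries the controller's features and honors reserved_bar and align before claiming BAR space. A minimal sketch, with foo_epf_alloc_bar() as a hypothetical helper built on the in-tree pci_epc_get_features() and pci_epf_alloc_space() APIs:

#include <linux/pci-epc.h>
#include <linux/pci-epf.h>

static void *foo_epf_alloc_bar(struct pci_epf *epf, enum pci_barno bar,
			       size_t size)
{
	const struct pci_epc_features *features;

	features = pci_epc_get_features(epf->epc, epf->func_no, epf->vfunc_no);
	if (!features)
		return NULL;

	/* BAR_1 and BAR_3 are reserved on i.MX8M per the features above. */
	if (features->reserved_bar & (1 << bar))
		return NULL;

	/* align is SZ_64K here; the allocator rounds the size up to it. */
	return pci_epf_alloc_space(epf, size, bar, features->align,
				   PRIMARY_INTERFACE);
}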
+4
drivers/pci/controller/dwc/pcie-bt1.c
··· 583 583 struct device *dev = &btpci->pdev->dev; 584 584 int ret; 585 585 586 + ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)); 587 + if (ret) 588 + return ret; 589 + 586 590 btpci->dw.version = DW_PCIE_VER_460A; 587 591 btpci->dw.dev = dev; 588 592 btpci->dw.ops = &bt1_pcie_ops;
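dma_set_mask_and_coherent() sets the streaming and coherent DMA masks together; Baikal-T1 needs the full 64-bit mask and fails the probe if the platform cannot provide it. For comparison, drivers that can cope with either width usually fall back, an illustrative pattern only (not what pcie-bt1.c does):

#include <linux/dma-mapping.h>

static int foo_set_dma_masks(struct device *dev)
{
	int ret;

	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
	if (ret)	/* fall back to 32-bit addressing */
		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));

	return ret;
}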
+11 -1
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 612 612 613 613 void dw_pcie_ep_exit(struct dw_pcie_ep *ep) 614 614 { 615 + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 615 616 struct pci_epc *epc = ep->epc; 617 + 618 + dw_pcie_edma_remove(pci); 616 619 617 620 pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem, 618 621 epc->mem->window.page_size); ··· 788 785 goto err_exit_epc_mem; 789 786 } 790 787 788 + ret = dw_pcie_edma_detect(pci); 789 + if (ret) 790 + goto err_free_epc_mem; 791 + 791 792 if (ep->ops->get_features) { 792 793 epc_features = ep->ops->get_features(ep); 793 794 if (epc_features->core_init_notifier) ··· 800 793 801 794 ret = dw_pcie_ep_init_complete(ep); 802 795 if (ret) 803 - goto err_free_epc_mem; 796 + goto err_remove_edma; 804 797 805 798 return 0; 799 + 800 + err_remove_edma: 801 + dw_pcie_edma_remove(pci); 806 802 807 803 err_free_epc_mem: 808 804 pci_epc_mem_free_addr(epc, ep->msi_mem_phys, ep->msi_mem,
+22 -3
drivers/pci/controller/dwc/pcie-designware-host.c
··· 366 366 dw_chained_msi_isr, pp); 367 367 } 368 368 369 - ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)); 369 + /* 370 + * Even though the iMSI-RX module supports 64-bit addresses, some 371 + * peripheral PCIe devices may lack 64-bit message support. To 372 + * avoid missing MSI TLPs from those devices, the MSI target 373 + * address has to be within the lowest 4GB. 374 + * 375 + * Note: until a better alternative is found, the reservation is 376 + * done by allocating from the artificially limited DMA-coherent 377 + * memory. 378 + */ 379 + ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32)); 370 380 if (ret) 371 381 dev_warn(dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n"); 372 382 ··· 477 467 478 468 dw_pcie_iatu_detect(pci); 479 469 480 - ret = dw_pcie_setup_rc(pp); 470 + ret = dw_pcie_edma_detect(pci); 481 471 if (ret) 482 472 goto err_free_msi; 473 + 474 + ret = dw_pcie_setup_rc(pp); 475 + if (ret) 476 + goto err_remove_edma; 483 477 484 478 if (!dw_pcie_link_up(pci)) { 485 479 ret = dw_pcie_start_link(pci); 486 480 if (ret) 487 - goto err_free_msi; 481 + goto err_remove_edma; 488 482 } 489 483 490 484 /* Ignore errors, the link may come up later */ ··· 504 490 505 491 err_stop_link: 506 492 dw_pcie_stop_link(pci); 493 + 494 + err_remove_edma: 495 + dw_pcie_edma_remove(pci); 507 496 508 497 err_free_msi: 509 498 if (pp->has_msi_ctrl) ··· 528 511 pci_remove_root_bus(pp->bridge->bus); 529 512 530 513 dw_pcie_stop_link(pci); 514 + 515 + dw_pcie_edma_remove(pci); 531 516 532 517 if (pp->has_msi_ctrl) 533 518 dw_pcie_free_msi(pp);
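The mask change above narrows only the coherent allocations: streaming DMA keeps whatever mask the platform set, while anything obtained from the coherent allocator is forced below 4GB, which is where the MSI target page in this MSI setup path comes from. A sketch of the idea (foo_* names invented):

#include <linux/dma-mapping.h>

static int foo_alloc_msi_target(struct device *dev, dma_addr_t *msi_data)
{
	void *vaddr;
	int ret;

	/* Cap only coherent allocations at 32 bits... */
	ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
	if (ret)
		return ret;

	/* ...so this address is guaranteed to fit a 32-bit MSI message. */
	vaddr = dmam_alloc_coherent(dev, sizeof(u64), msi_data, GFP_KERNEL);

	return vaddr ? 0 : -ENOMEM;
}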
+195
drivers/pci/controller/dwc/pcie-designware.c
··· 12 12 #include <linux/bitops.h> 13 13 #include <linux/clk.h> 14 14 #include <linux/delay.h> 15 + #include <linux/dma/edma.h> 15 16 #include <linux/gpio/consumer.h> 16 17 #include <linux/ioport.h> 17 18 #include <linux/of.h> ··· 142 141 /* Set a default value suitable for at most 8 in and 8 out windows */ 143 142 if (!pci->atu_size) 144 143 pci->atu_size = SZ_4K; 144 + 145 + /* eDMA region can be mapped to a custom base address */ 146 + if (!pci->edma.reg_base) { 147 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dma"); 148 + if (res) { 149 + pci->edma.reg_base = devm_ioremap_resource(pci->dev, res); 150 + if (IS_ERR(pci->edma.reg_base)) 151 + return PTR_ERR(pci->edma.reg_base); 152 + } else if (pci->atu_size >= 2 * DEFAULT_DBI_DMA_OFFSET) { 153 + pci->edma.reg_base = pci->atu_base + DEFAULT_DBI_DMA_OFFSET; 154 + } 155 + } 145 156 146 157 /* LLDD is supposed to manually switch the clocks and resets state */ 147 158 if (dw_pcie_cap_is(pci, REQ_RES)) { ··· 793 780 dw_pcie_cap_is(pci, IATU_UNROLL) ? "T" : "F", 794 781 pci->num_ob_windows, pci->num_ib_windows, 795 782 pci->region_align / SZ_1K, (pci->region_limit + 1) / SZ_1G); 783 + } 784 + 785 + static u32 dw_pcie_readl_dma(struct dw_pcie *pci, u32 reg) 786 + { 787 + u32 val = 0; 788 + int ret; 789 + 790 + if (pci->ops && pci->ops->read_dbi) 791 + return pci->ops->read_dbi(pci, pci->edma.reg_base, reg, 4); 792 + 793 + ret = dw_pcie_read(pci->edma.reg_base + reg, 4, &val); 794 + if (ret) 795 + dev_err(pci->dev, "Read DMA address failed\n"); 796 + 797 + return val; 798 + } 799 + 800 + static int dw_pcie_edma_irq_vector(struct device *dev, unsigned int nr) 801 + { 802 + struct platform_device *pdev = to_platform_device(dev); 803 + char name[6]; 804 + int ret; 805 + 806 + if (nr >= EDMA_MAX_WR_CH + EDMA_MAX_RD_CH) 807 + return -EINVAL; 808 + 809 + ret = platform_get_irq_byname_optional(pdev, "dma"); 810 + if (ret > 0) 811 + return ret; 812 + 813 + snprintf(name, sizeof(name), "dma%u", nr); 814 + 815 + return platform_get_irq_byname_optional(pdev, name); 816 + } 817 + 818 + static struct dw_edma_core_ops dw_pcie_edma_ops = { 819 + .irq_vector = dw_pcie_edma_irq_vector, 820 + }; 821 + 822 + static int dw_pcie_edma_find_chip(struct dw_pcie *pci) 823 + { 824 + u32 val; 825 + 826 + /* 827 + * Indirect eDMA CSRs access has been completely removed since v5.40a 828 + * thus no space is now reserved for the eDMA channels viewport and 829 + * former DMA CTRL register is no longer fixed to FFs. 
830 + */ 831 + if (dw_pcie_ver_is_ge(pci, 540A)) 832 + val = 0xFFFFFFFF; 833 + else 834 + val = dw_pcie_readl_dbi(pci, PCIE_DMA_VIEWPORT_BASE + PCIE_DMA_CTRL); 835 + 836 + if (val == 0xFFFFFFFF && pci->edma.reg_base) { 837 + pci->edma.mf = EDMA_MF_EDMA_UNROLL; 838 + 839 + val = dw_pcie_readl_dma(pci, PCIE_DMA_CTRL); 840 + } else if (val != 0xFFFFFFFF) { 841 + pci->edma.mf = EDMA_MF_EDMA_LEGACY; 842 + 843 + pci->edma.reg_base = pci->dbi_base + PCIE_DMA_VIEWPORT_BASE; 844 + } else { 845 + return -ENODEV; 846 + } 847 + 848 + pci->edma.dev = pci->dev; 849 + 850 + if (!pci->edma.ops) 851 + pci->edma.ops = &dw_pcie_edma_ops; 852 + 853 + pci->edma.flags |= DW_EDMA_CHIP_LOCAL; 854 + 855 + pci->edma.ll_wr_cnt = FIELD_GET(PCIE_DMA_NUM_WR_CHAN, val); 856 + pci->edma.ll_rd_cnt = FIELD_GET(PCIE_DMA_NUM_RD_CHAN, val); 857 + 858 + /* Sanity check the channels count if the mapping was incorrect */ 859 + if (!pci->edma.ll_wr_cnt || pci->edma.ll_wr_cnt > EDMA_MAX_WR_CH || 860 + !pci->edma.ll_rd_cnt || pci->edma.ll_rd_cnt > EDMA_MAX_RD_CH) 861 + return -EINVAL; 862 + 863 + return 0; 864 + } 865 + 866 + static int dw_pcie_edma_irq_verify(struct dw_pcie *pci) 867 + { 868 + struct platform_device *pdev = to_platform_device(pci->dev); 869 + u16 ch_cnt = pci->edma.ll_wr_cnt + pci->edma.ll_rd_cnt; 870 + char name[6]; 871 + int ret; 872 + 873 + if (pci->edma.nr_irqs == 1) 874 + return 0; 875 + else if (pci->edma.nr_irqs > 1) 876 + return pci->edma.nr_irqs != ch_cnt ? -EINVAL : 0; 877 + 878 + ret = platform_get_irq_byname_optional(pdev, "dma"); 879 + if (ret > 0) { 880 + pci->edma.nr_irqs = 1; 881 + return 0; 882 + } 883 + 884 + for (; pci->edma.nr_irqs < ch_cnt; pci->edma.nr_irqs++) { 885 + snprintf(name, sizeof(name), "dma%d", pci->edma.nr_irqs); 886 + 887 + ret = platform_get_irq_byname_optional(pdev, name); 888 + if (ret <= 0) 889 + return -EINVAL; 890 + } 891 + 892 + return 0; 893 + } 894 + 895 + static int dw_pcie_edma_ll_alloc(struct dw_pcie *pci) 896 + { 897 + struct dw_edma_region *ll; 898 + dma_addr_t paddr; 899 + int i; 900 + 901 + for (i = 0; i < pci->edma.ll_wr_cnt; i++) { 902 + ll = &pci->edma.ll_region_wr[i]; 903 + ll->sz = DMA_LLP_MEM_SIZE; 904 + ll->vaddr.mem = dmam_alloc_coherent(pci->dev, ll->sz, 905 + &paddr, GFP_KERNEL); 906 + if (!ll->vaddr.mem) 907 + return -ENOMEM; 908 + 909 + ll->paddr = paddr; 910 + } 911 + 912 + for (i = 0; i < pci->edma.ll_rd_cnt; i++) { 913 + ll = &pci->edma.ll_region_rd[i]; 914 + ll->sz = DMA_LLP_MEM_SIZE; 915 + ll->vaddr.mem = dmam_alloc_coherent(pci->dev, ll->sz, 916 + &paddr, GFP_KERNEL); 917 + if (!ll->vaddr.mem) 918 + return -ENOMEM; 919 + 920 + ll->paddr = paddr; 921 + } 922 + 923 + return 0; 924 + } 925 + 926 + int dw_pcie_edma_detect(struct dw_pcie *pci) 927 + { 928 + int ret; 929 + 930 + /* Don't fail if no eDMA was found (for the backward compatibility) */ 931 + ret = dw_pcie_edma_find_chip(pci); 932 + if (ret) 933 + return 0; 934 + 935 + /* Don't fail on the IRQs verification (for the backward compatibility) */ 936 + ret = dw_pcie_edma_irq_verify(pci); 937 + if (ret) { 938 + dev_err(pci->dev, "Invalid eDMA IRQs found\n"); 939 + return 0; 940 + } 941 + 942 + ret = dw_pcie_edma_ll_alloc(pci); 943 + if (ret) { 944 + dev_err(pci->dev, "Couldn't allocate LLP memory\n"); 945 + return ret; 946 + } 947 + 948 + /* Don't fail if the DW eDMA driver can't find the device */ 949 + ret = dw_edma_probe(&pci->edma); 950 + if (ret && ret != -ENODEV) { 951 + dev_err(pci->dev, "Couldn't register eDMA device\n"); 952 + return ret; 953 + } 954 + 955 + dev_info(pci->dev, "eDMA: 
unroll %s, %hu wr, %hu rd\n", 956 + pci->edma.mf == EDMA_MF_EDMA_UNROLL ? "T" : "F", 957 + pci->edma.ll_wr_cnt, pci->edma.ll_rd_cnt); 958 + 959 + return 0; 960 + } 961 + 962 + void dw_pcie_edma_remove(struct dw_pcie *pci) 963 + { 964 + dw_edma_remove(&pci->edma); 796 965 } 797 966 798 967 void dw_pcie_setup(struct dw_pcie *pci)
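A worked example of the channel-count decode in dw_pcie_edma_find_chip(): the write-channel count sits in bits 3:0 of the DMA control register and the read-channel count in bits 19:16, so a readout of 0x00040004 (a value invented here for illustration) describes four channels each way:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define PCIE_DMA_NUM_WR_CHAN	GENMASK(3, 0)
#define PCIE_DMA_NUM_RD_CHAN	GENMASK(19, 16)

/* e.g. dma_ctrl == 0x00040004 -> *wr = 4, *rd = 4 */
static void foo_decode_dma_ctrl(u32 dma_ctrl, u16 *wr, u16 *rd)
{
	*wr = FIELD_GET(PCIE_DMA_NUM_WR_CHAN, dma_ctrl);
	*rd = FIELD_GET(PCIE_DMA_NUM_RD_CHAN, dma_ctrl);
}

Both counts must then land in 1..EDMA_MAX_*_CH, which is exactly the sanity check the detect code performs.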
+21
drivers/pci/controller/dwc/pcie-designware.h
··· 15 15 #include <linux/bitops.h> 16 16 #include <linux/clk.h> 17 17 #include <linux/dma-mapping.h> 18 + #include <linux/dma/edma.h> 18 19 #include <linux/gpio/consumer.h> 19 20 #include <linux/irq.h> 20 21 #include <linux/msi.h> ··· 32 31 #define DW_PCIE_VER_480A 0x3438302a 33 32 #define DW_PCIE_VER_490A 0x3439302a 34 33 #define DW_PCIE_VER_520A 0x3532302a 34 + #define DW_PCIE_VER_540A 0x3534302a 35 35 36 36 #define __dw_pcie_ver_cmp(_pci, _ver, _op) \ 37 37 ((_pci)->version _op DW_PCIE_VER_ ## _ver) ··· 169 167 #define PCIE_MSIX_DOORBELL 0x948 170 168 #define PCIE_MSIX_DOORBELL_PF_SHIFT 24 171 169 170 + /* 171 + * eDMA CSRs. DW PCIe IP-core v4.70a and older had the eDMA registers accessible 172 + * over the Port Logic registers space. Afterwards the unrolled mapping was 173 + * introduced so eDMA and iATU could be accessed via a dedicated registers 174 + * space. 175 + */ 176 + #define PCIE_DMA_VIEWPORT_BASE 0x970 177 + #define PCIE_DMA_UNROLL_BASE 0x80000 178 + #define PCIE_DMA_CTRL 0x008 179 + #define PCIE_DMA_NUM_WR_CHAN GENMASK(3, 0) 180 + #define PCIE_DMA_NUM_RD_CHAN GENMASK(19, 16) 181 + 172 182 #define PCIE_PL_CHK_REG_CONTROL_STATUS 0xB20 173 183 #define PCIE_PL_CHK_REG_CHK_REG_START BIT(0) 174 184 #define PCIE_PL_CHK_REG_CHK_REG_CONTINUOUS BIT(1) ··· 229 215 * this offset, if atu_base not set. 230 216 */ 231 217 #define DEFAULT_DBI_ATU_OFFSET (0x3 << 20) 218 + #define DEFAULT_DBI_DMA_OFFSET PCIE_DMA_UNROLL_BASE 232 219 233 220 #define MAX_MSI_IRQS 256 234 221 #define MAX_MSI_IRQS_PER_CTRL 32 ··· 240 225 /* Maximum number of inbound/outbound iATUs */ 241 226 #define MAX_IATU_IN 256 242 227 #define MAX_IATU_OUT 256 228 + 229 + /* Default eDMA LLP memory size */ 230 + #define DMA_LLP_MEM_SIZE PAGE_SIZE 243 231 244 232 struct dw_pcie; 245 233 struct dw_pcie_rp; ··· 387 369 int num_lanes; 388 370 int link_gen; 389 371 u8 n_fts[2]; 372 + struct dw_edma_chip edma; 390 373 struct clk_bulk_data app_clks[DW_PCIE_NUM_APP_CLKS]; 391 374 struct clk_bulk_data core_clks[DW_PCIE_NUM_CORE_CLKS]; 392 375 struct reset_control_bulk_data app_rsts[DW_PCIE_NUM_APP_RSTS]; ··· 427 408 void dw_pcie_disable_atu(struct dw_pcie *pci, u32 dir, int index); 428 409 void dw_pcie_setup(struct dw_pcie *pci); 429 410 void dw_pcie_iatu_detect(struct dw_pcie *pci); 411 + int dw_pcie_edma_detect(struct dw_pcie *pci); 412 + void dw_pcie_edma_remove(struct dw_pcie *pci); 430 413 431 414 static inline void dw_pcie_writel_dbi(struct dw_pcie *pci, u32 reg, u32 val) 432 415 {
-1
drivers/pci/controller/dwc/pcie-histb.c
··· 450 450 module_platform_driver(histb_pcie_platform_driver); 451 451 452 452 MODULE_DESCRIPTION("HiSilicon STB PCIe host controller driver"); 453 - MODULE_LICENSE("GPL v2");
+14 -1
drivers/pci/controller/dwc/pcie-qcom.c
··· 1534 1534 return ret; 1535 1535 } 1536 1536 1537 + static void qcom_pcie_host_deinit(struct dw_pcie_rp *pp) 1538 + { 1539 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 1540 + struct qcom_pcie *pcie = to_qcom_pcie(pci); 1541 + 1542 + qcom_ep_reset_assert(pcie); 1543 + phy_power_off(pcie->phy); 1544 + pcie->cfg->ops->deinit(pcie); 1545 + } 1546 + 1537 1547 static const struct dw_pcie_host_ops qcom_pcie_dw_ops = { 1538 - .host_init = qcom_pcie_host_init, 1548 + .host_init = qcom_pcie_host_init, 1549 + .host_deinit = qcom_pcie_host_deinit, 1539 1550 }; 1540 1551 1541 1552 /* Qcom IP rev.: 2.1.0 Synopsys IP rev.: 4.01a */ ··· 1828 1817 { .compatible = "qcom,pcie-ipq8064", .data = &cfg_2_1_0 }, 1829 1818 { .compatible = "qcom,pcie-ipq8064-v2", .data = &cfg_2_1_0 }, 1830 1819 { .compatible = "qcom,pcie-ipq8074", .data = &cfg_2_3_3 }, 1820 + { .compatible = "qcom,pcie-ipq8074-gen3", .data = &cfg_2_9_0 }, 1831 1821 { .compatible = "qcom,pcie-msm8996", .data = &cfg_2_3_2 }, 1832 1822 { .compatible = "qcom,pcie-qcs404", .data = &cfg_2_4_0 }, 1833 1823 { .compatible = "qcom,pcie-sa8540p", .data = &cfg_1_9_0 }, ··· 1838 1826 { .compatible = "qcom,pcie-sdm845", .data = &cfg_2_7_0 }, 1839 1827 { .compatible = "qcom,pcie-sm8150", .data = &cfg_1_9_0 }, 1840 1828 { .compatible = "qcom,pcie-sm8250", .data = &cfg_1_9_0 }, 1829 + { .compatible = "qcom,pcie-sm8350", .data = &cfg_1_9_0 }, 1841 1830 { .compatible = "qcom,pcie-sm8450-pcie0", .data = &cfg_1_9_0 }, 1842 1831 { .compatible = "qcom,pcie-sm8450-pcie1", .data = &cfg_1_9_0 }, 1843 1832 { }
+7 -2
drivers/pci/controller/dwc/pcie-tegra194.c
··· 286 286 struct gpio_desc *pex_refclk_sel_gpiod; 287 287 unsigned int pex_rst_irq; 288 288 int ep_state; 289 + long link_status; 289 290 }; 290 291 291 292 static inline struct tegra_pcie_dw *to_tegra_pcie(struct dw_pcie *pci) ··· 450 449 static irqreturn_t tegra_pcie_ep_irq_thread(int irq, void *arg) 451 450 { 452 451 struct tegra_pcie_dw *pcie = arg; 452 + struct dw_pcie_ep *ep = &pcie->pci.ep; 453 453 struct dw_pcie *pci = &pcie->pci; 454 454 u32 val, speed; 455 + 456 + if (test_and_clear_bit(0, &pcie->link_status)) 457 + dw_pcie_ep_linkup(ep); 455 458 456 459 speed = dw_pcie_readw_dbi(pci, pcie->pcie_cap_base + PCI_EXP_LNKSTA) & 457 460 PCI_EXP_LNKSTA_CLS; ··· 503 498 static irqreturn_t tegra_pcie_ep_hard_irq(int irq, void *arg) 504 499 { 505 500 struct tegra_pcie_dw *pcie = arg; 506 - struct dw_pcie_ep *ep = &pcie->pci.ep; 507 501 int spurious = 1; 508 502 u32 status_l0, status_l1, link_status; 509 503 ··· 518 514 link_status = appl_readl(pcie, APPL_LINK_STATUS); 519 515 if (link_status & APPL_LINK_STATUS_RDLH_LINK_UP) { 520 516 dev_dbg(pcie->dev, "Link is up with Host\n"); 521 - dw_pcie_ep_linkup(ep); 517 + set_bit(0, &pcie->link_status); 518 + return IRQ_WAKE_THREAD; 522 519 } 523 520 } 524 521
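The tegra194 change is an instance of a common split when moving work out of hard-IRQ context: the hard handler only records the event atomically and returns IRQ_WAKE_THREAD, and the threaded handler, which may sleep, consumes it with test_and_clear_bit(). The same shape in a generic sketch (foo_* names invented):

#include <linux/bitops.h>
#include <linux/interrupt.h>

struct foo_dev {
	unsigned long link_status;	/* bit 0: link-up event pending */
	unsigned int linkups;
};

static irqreturn_t foo_hard_irq(int irq, void *arg)
{
	struct foo_dev *foo = arg;

	set_bit(0, &foo->link_status);	/* atomic, safe in hard-IRQ context */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t foo_irq_thread(int irq, void *arg)
{
	struct foo_dev *foo = arg;

	if (test_and_clear_bit(0, &foo->link_status))
		foo->linkups++;	/* stands in for real, sleepable handling */

	return IRQ_HANDLED;
}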
-1
drivers/pci/controller/mobiveil/pcie-mobiveil-plat.c
··· 56 56 57 57 builtin_platform_driver(mobiveil_pcie_driver); 58 58 59 - MODULE_LICENSE("GPL v2"); 60 59 MODULE_DESCRIPTION("Mobiveil PCIe host controller driver"); 61 60 MODULE_AUTHOR("Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>");
+35 -34
drivers/pci/controller/pci-loongson.c
··· 15 15 #include "../pci.h" 16 16 17 17 /* Device IDs */ 18 - #define DEV_PCIE_PORT_0 0x7a09 19 - #define DEV_PCIE_PORT_1 0x7a19 20 - #define DEV_PCIE_PORT_2 0x7a29 18 + #define DEV_LS2K_PCIE_PORT0 0x1a05 19 + #define DEV_LS7A_PCIE_PORT0 0x7a09 20 + #define DEV_LS7A_PCIE_PORT1 0x7a19 21 + #define DEV_LS7A_PCIE_PORT2 0x7a29 22 + #define DEV_LS7A_PCIE_PORT3 0x7a39 23 + #define DEV_LS7A_PCIE_PORT4 0x7a49 24 + #define DEV_LS7A_PCIE_PORT5 0x7a59 25 + #define DEV_LS7A_PCIE_PORT6 0x7a69 21 26 22 27 #define DEV_LS2K_APB 0x7a02 23 28 #define DEV_LS7A_GMAC 0x7a03 ··· 58 53 dev->class = PCI_CLASS_BRIDGE_PCI_NORMAL; 59 54 } 60 55 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, 61 - DEV_PCIE_PORT_0, bridge_class_quirk); 56 + DEV_LS7A_PCIE_PORT0, bridge_class_quirk); 62 57 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, 63 - DEV_PCIE_PORT_1, bridge_class_quirk); 58 + DEV_LS7A_PCIE_PORT1, bridge_class_quirk); 64 59 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, 65 - DEV_PCIE_PORT_2, bridge_class_quirk); 60 + DEV_LS7A_PCIE_PORT2, bridge_class_quirk); 66 61 67 62 static void system_bus_quirk(struct pci_dev *pdev) 68 63 { ··· 80 75 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, 81 76 DEV_LS7A_LPC, system_bus_quirk); 82 77 83 - static void loongson_mrrs_quirk(struct pci_dev *dev) 78 + static void loongson_mrrs_quirk(struct pci_dev *pdev) 84 79 { 85 - struct pci_bus *bus = dev->bus; 86 - struct pci_dev *bridge; 87 - static const struct pci_device_id bridge_devids[] = { 88 - { PCI_VDEVICE(LOONGSON, DEV_PCIE_PORT_0) }, 89 - { PCI_VDEVICE(LOONGSON, DEV_PCIE_PORT_1) }, 90 - { PCI_VDEVICE(LOONGSON, DEV_PCIE_PORT_2) }, 91 - { 0, }, 92 - }; 80 + /* 81 + * Some Loongson PCIe ports have h/w limitations of maximum read 82 + * request size. They can't handle anything larger than this. So 83 + * force this limit on any devices attached under these ports. 84 + */ 85 + struct pci_host_bridge *bridge = pci_find_host_bridge(pdev->bus); 93 86 94 - /* look for the matching bridge */ 95 - while (!pci_is_root_bus(bus)) { 96 - bridge = bus->self; 97 - bus = bus->parent; 98 - /* 99 - * Some Loongson PCIe ports have a h/w limitation of 100 - * 256 bytes maximum read request size. They can't handle 101 - * anything larger than this. So force this limit on 102 - * any devices attached under these ports. 103 - */ 104 - if (pci_match_id(bridge_devids, bridge)) { 105 - if (pcie_get_readrq(dev) > 256) { 106 - pci_info(dev, "limiting MRRS to 256\n"); 107 - pcie_set_readrq(dev, 256); 108 - } 109 - break; 110 - } 111 - } 87 + bridge->no_inc_mrrs = 1; 112 88 } 113 - DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, loongson_mrrs_quirk); 89 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, 90 + DEV_LS2K_PCIE_PORT0, loongson_mrrs_quirk); 91 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, 92 + DEV_LS7A_PCIE_PORT0, loongson_mrrs_quirk); 93 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, 94 + DEV_LS7A_PCIE_PORT1, loongson_mrrs_quirk); 95 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, 96 + DEV_LS7A_PCIE_PORT2, loongson_mrrs_quirk); 97 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, 98 + DEV_LS7A_PCIE_PORT3, loongson_mrrs_quirk); 99 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, 100 + DEV_LS7A_PCIE_PORT4, loongson_mrrs_quirk); 101 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, 102 + DEV_LS7A_PCIE_PORT5, loongson_mrrs_quirk); 103 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON, 104 + DEV_LS7A_PCIE_PORT6, loongson_mrrs_quirk); 114 105 115 106 static void loongson_pci_pin_quirk(struct pci_dev *pdev) 116 107 {
-1
drivers/pci/controller/pci-tegra.c
··· 2814 2814 .remove = tegra_pcie_remove, 2815 2815 }; 2816 2816 module_platform_driver(tegra_pcie_driver); 2817 - MODULE_LICENSE("GPL");
-1
drivers/pci/controller/pci-versatile.c
··· 169 169 module_platform_driver(versatile_pci_driver); 170 170 171 171 MODULE_DESCRIPTION("Versatile PCI driver"); 172 - MODULE_LICENSE("GPL v2");
-1
drivers/pci/controller/pcie-hisi-error.c
··· 324 324 module_platform_driver(hisi_pcie_error_handler_driver); 325 325 326 326 MODULE_DESCRIPTION("HiSilicon HIP PCIe controller error handling driver"); 327 - MODULE_LICENSE("GPL v2");
-1
drivers/pci/controller/pcie-microchip-host.c
··· 1135 1135 }; 1136 1136 1137 1137 builtin_platform_driver(mc_pcie_driver); 1138 - MODULE_LICENSE("GPL"); 1139 1138 MODULE_DESCRIPTION("Microchip PCIe host controller driver"); 1140 1139 MODULE_AUTHOR("Daire McNamara <daire.mcnamara@microchip.com>");
+2
drivers/pci/controller/pcie-mt7621.c
··· 60 60 #define PCIE_PORT_LINKUP BIT(0) 61 61 #define PCIE_PORT_CNT 3 62 62 63 + #define INIT_PORTS_DELAY_MS 100 63 64 #define PERST_DELAY_MS 100 64 65 65 66 /** ··· 370 369 } 371 370 } 372 371 372 + msleep(INIT_PORTS_DELAY_MS); 373 373 mt7621_pcie_reset_ep_deassert(pcie); 374 374 375 375 tmp = NULL;
+71 -26
drivers/pci/controller/vmd.c
··· 66 66 * interrupt handling. 67 67 */ 68 68 VMD_FEAT_CAN_BYPASS_MSI_REMAP = (1 << 4), 69 + 70 + /* 71 + * Enable ASPM on the PCIE root ports and set the default LTR of the 72 + * storage devices on platforms where these values are not configured by 73 + * BIOS. This is needed for laptops, which require these settings for 74 + * proper power management of the SoC. 75 + */ 76 + VMD_FEAT_BIOS_PM_QUIRK = (1 << 5), 69 77 }; 78 + 79 + #define VMD_BIOS_PM_QUIRK_LTR 0x1003 /* 3145728 ns */ 80 + 81 + #define VMD_FEATS_CLIENT (VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP | \ 82 + VMD_FEAT_HAS_BUS_RESTRICTIONS | \ 83 + VMD_FEAT_OFFSET_FIRST_VECTOR | \ 84 + VMD_FEAT_BIOS_PM_QUIRK) 70 85 71 86 static DEFINE_IDA(vmd_instance_ida); 72 87 ··· 724 709 vmd_bridge->native_dpc = root_bridge->native_dpc; 725 710 } 726 711 712 + /* 713 + * Enable ASPM and LTR settings on devices that aren't configured by BIOS. 714 + */ 715 + static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata) 716 + { 717 + unsigned long features = *(unsigned long *)userdata; 718 + u16 ltr = VMD_BIOS_PM_QUIRK_LTR; 719 + u32 ltr_reg; 720 + int pos; 721 + 722 + if (!(features & VMD_FEAT_BIOS_PM_QUIRK)) 723 + return 0; 724 + 725 + pci_enable_link_state(pdev, PCIE_LINK_STATE_ALL); 726 + 727 + pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_LTR); 728 + if (!pos) 729 + return 0; 730 + 731 + /* 732 + * Skip if the max snoop LTR is non-zero, indicating BIOS has set it 733 + * so the LTR quirk is not needed. 734 + */ 735 + pci_read_config_dword(pdev, pos + PCI_LTR_MAX_SNOOP_LAT, &ltr_reg); 736 + if (!!(ltr_reg & (PCI_LTR_VALUE_MASK | PCI_LTR_SCALE_MASK))) 737 + return 0; 738 + 739 + /* 740 + * Set the default values to the maximum required by the platform to 741 + * allow the deepest power management savings. Write as a DWORD where 742 + * the lower word is the max snoop latency and the upper word is the 743 + * max non-snoop latency. 744 + */ 745 + ltr_reg = (ltr << 16) | ltr; 746 + pci_write_config_dword(pdev, pos + PCI_LTR_MAX_SNOOP_LAT, ltr_reg); 747 + pci_info(pdev, "VMD: Default LTR value set by driver\n"); 748 + 749 + return 0; 750 + } 751 + 727 752 static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features) 728 753 { 729 754 struct pci_sysdata *sd = &vmd->sysdata; ··· 936 881 937 882 pci_assign_unassigned_bus_resources(vmd->bus); 938 883 884 + pci_walk_bus(vmd->bus, vmd_pm_enable_quirk, &features); 885 + 939 886 /* 940 887 * VMD root buses are virtual and don't return true on pci_is_pcie() 941 888 * and will fail pcie_bus_configure_settings() early. 
It can instead be ··· 1074 1017 static SIMPLE_DEV_PM_OPS(vmd_dev_pm_ops, vmd_suspend, vmd_resume); 1075 1018 1076 1019 static const struct pci_device_id vmd_ids[] = { 1077 - {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_201D), 1020 + {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_VMD_201D), 1078 1021 .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP,}, 1079 - {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_28C0), 1022 + {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_VMD_28C0), 1080 1023 .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW | 1081 1024 VMD_FEAT_HAS_BUS_RESTRICTIONS | 1082 1025 VMD_FEAT_CAN_BYPASS_MSI_REMAP,}, 1083 - {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x467f), 1084 - .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP | 1085 - VMD_FEAT_HAS_BUS_RESTRICTIONS | 1086 - VMD_FEAT_OFFSET_FIRST_VECTOR,}, 1087 - {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4c3d), 1088 - .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP | 1089 - VMD_FEAT_HAS_BUS_RESTRICTIONS | 1090 - VMD_FEAT_OFFSET_FIRST_VECTOR,}, 1091 - {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa77f), 1092 - .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP | 1093 - VMD_FEAT_HAS_BUS_RESTRICTIONS | 1094 - VMD_FEAT_OFFSET_FIRST_VECTOR,}, 1095 - {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7d0b), 1096 - .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP | 1097 - VMD_FEAT_HAS_BUS_RESTRICTIONS | 1098 - VMD_FEAT_OFFSET_FIRST_VECTOR,}, 1099 - {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xad0b), 1100 - .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP | 1101 - VMD_FEAT_HAS_BUS_RESTRICTIONS | 1102 - VMD_FEAT_OFFSET_FIRST_VECTOR,}, 1103 - {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B), 1104 - .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP | 1105 - VMD_FEAT_HAS_BUS_RESTRICTIONS | 1106 - VMD_FEAT_OFFSET_FIRST_VECTOR,}, 1026 + {PCI_VDEVICE(INTEL, 0x467f), 1027 + .driver_data = VMD_FEATS_CLIENT,}, 1028 + {PCI_VDEVICE(INTEL, 0x4c3d), 1029 + .driver_data = VMD_FEATS_CLIENT,}, 1030 + {PCI_VDEVICE(INTEL, 0xa77f), 1031 + .driver_data = VMD_FEATS_CLIENT,}, 1032 + {PCI_VDEVICE(INTEL, 0x7d0b), 1033 + .driver_data = VMD_FEATS_CLIENT,}, 1034 + {PCI_VDEVICE(INTEL, 0xad0b), 1035 + .driver_data = VMD_FEATS_CLIENT,}, 1036 + {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B), 1037 + .driver_data = VMD_FEATS_CLIENT,}, 1107 1038 {0,} 1108 1039 }; 1109 1040 MODULE_DEVICE_TABLE(pci, vmd_ids);
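Decoding VMD_BIOS_PM_QUIRK_LTR makes the 0x1003 constant checkable against its comment: in the PCIe LTR format the latency value occupies bits 9:0 and the scale bits 12:10, with each scale step multiplying by 32 (2^5). For 0x1003 that is value 3 and scale 4, i.e. 3 << 20 = 3145728 ns. A sketch using the existing pci_regs.h masks:

#include <linux/pci_regs.h>
#include <linux/types.h>

/* Illustrative helper: convert an LTR latency field to nanoseconds. */
static u64 foo_ltr_to_ns(u16 ltr)
{
	u16 value = ltr & PCI_LTR_VALUE_MASK;		/* bits 9:0 */
	u16 scale = (ltr & PCI_LTR_SCALE_MASK) >> 10;	/* bits 12:10 */

	return (u64)value << (5 * scale);	/* 0x1003 -> 3145728 */
}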
+12 -26
drivers/pci/endpoint/functions/pci-epf-test.c
··· 826 826 return 0; 827 827 } 828 828 829 - static int pci_epf_test_notifier(struct notifier_block *nb, unsigned long val, 830 - void *data) 829 + static int pci_epf_test_link_up(struct pci_epf *epf) 831 830 { 832 - struct pci_epf *epf = container_of(nb, struct pci_epf, nb); 833 831 struct pci_epf_test *epf_test = epf_get_drvdata(epf); 834 - int ret; 835 832 836 - switch (val) { 837 - case CORE_INIT: 838 - ret = pci_epf_test_core_init(epf); 839 - if (ret) 840 - return NOTIFY_BAD; 841 - break; 833 + queue_delayed_work(kpcitest_workqueue, &epf_test->cmd_handler, 834 + msecs_to_jiffies(1)); 842 835 843 - case LINK_UP: 844 - queue_delayed_work(kpcitest_workqueue, &epf_test->cmd_handler, 845 - msecs_to_jiffies(1)); 846 - break; 847 - 848 - default: 849 - dev_err(&epf->dev, "Invalid EPF test notifier event\n"); 850 - return NOTIFY_BAD; 851 - } 852 - 853 - return NOTIFY_OK; 836 + return 0; 854 837 } 838 + 839 + static const struct pci_epc_event_ops pci_epf_test_event_ops = { 840 + .core_init = pci_epf_test_core_init, 841 + .link_up = pci_epf_test_link_up, 842 + }; 855 843 856 844 static int pci_epf_test_alloc_space(struct pci_epf *epf) 857 845 { ··· 967 979 if (ret) 968 980 epf_test->dma_supported = false; 969 981 970 - if (linkup_notifier || core_init_notifier) { 971 - epf->nb.notifier_call = pci_epf_test_notifier; 972 - pci_epc_register_notifier(epc, &epf->nb); 973 - } else { 982 + if (!linkup_notifier && !core_init_notifier) 974 983 queue_work(kpcitest_workqueue, &epf_test->cmd_handler.work); 975 - } 976 984 977 985 return 0; 978 986 } ··· 993 1009 epf_test->epf = epf; 994 1010 995 1011 INIT_DELAYED_WORK(&epf_test->cmd_handler, pci_epf_test_cmd_handler); 1012 + 1013 + epf->event_ops = &pci_epf_test_event_ops; 996 1014 997 1015 epf_set_drvdata(epf, epf_test); 998 1016 return 0;
+1
drivers/pci/endpoint/functions/pci-epf-vntb.c
··· 652 652 /** 653 653 * epf_ntb_mw_bar_clear() - Clear Memory window BARs 654 654 * @ntb: NTB device that facilitates communication between HOST and VHOST 655 + * @num_mws: the number of Memory window BARs to clear 655 656 */ 656 657 static void epf_ntb_mw_bar_clear(struct epf_ntb *ntb, int num_mws) 657 658 {
-1
drivers/pci/endpoint/pci-ep-cfs.c
··· 728 728 729 729 MODULE_DESCRIPTION("PCI EP CONFIGFS"); 730 730 MODULE_AUTHOR("Kishon Vijay Abraham I <kishon@ti.com>"); 731 - MODULE_LICENSE("GPL v2");
+25 -8
drivers/pci/endpoint/pci-epc-core.c
··· 613 613 if (type == SECONDARY_INTERFACE && epf->sec_epc) 614 614 return -EBUSY; 615 615 616 - mutex_lock(&epc->lock); 616 + mutex_lock(&epc->list_lock); 617 617 func_no = find_first_zero_bit(&epc->function_num_map, 618 618 BITS_PER_LONG); 619 619 if (func_no >= BITS_PER_LONG) { ··· 640 640 641 641 list_add_tail(list, &epc->pci_epf); 642 642 ret: 643 - mutex_unlock(&epc->lock); 643 + mutex_unlock(&epc->list_lock); 644 644 645 645 return ret; 646 646 } ··· 672 672 list = &epf->sec_epc_list; 673 673 } 674 674 675 - mutex_lock(&epc->lock); 675 + mutex_lock(&epc->list_lock); 676 676 clear_bit(func_no, &epc->function_num_map); 677 677 list_del(list); 678 678 epf->epc = NULL; 679 - mutex_unlock(&epc->lock); 679 + mutex_unlock(&epc->list_lock); 680 680 } 681 681 EXPORT_SYMBOL_GPL(pci_epc_remove_epf); 682 682 ··· 690 690 */ 691 691 void pci_epc_linkup(struct pci_epc *epc) 692 692 { 693 + struct pci_epf *epf; 694 + 693 695 if (!epc || IS_ERR(epc)) 694 696 return; 695 697 696 - atomic_notifier_call_chain(&epc->notifier, LINK_UP, NULL); 698 + mutex_lock(&epc->list_lock); 699 + list_for_each_entry(epf, &epc->pci_epf, list) { 700 + mutex_lock(&epf->lock); 701 + if (epf->event_ops && epf->event_ops->link_up) 702 + epf->event_ops->link_up(epf); 703 + mutex_unlock(&epf->lock); 704 + } 705 + mutex_unlock(&epc->list_lock); 697 706 } 698 707 EXPORT_SYMBOL_GPL(pci_epc_linkup); 699 708 ··· 716 707 */ 717 708 void pci_epc_init_notify(struct pci_epc *epc) 718 709 { 710 + struct pci_epf *epf; 711 + 719 712 if (!epc || IS_ERR(epc)) 720 713 return; 721 714 722 - atomic_notifier_call_chain(&epc->notifier, CORE_INIT, NULL); 715 + mutex_lock(&epc->list_lock); 716 + list_for_each_entry(epf, &epc->pci_epf, list) { 717 + mutex_lock(&epf->lock); 718 + if (epf->event_ops && epf->event_ops->core_init) 719 + epf->event_ops->core_init(epf); 720 + mutex_unlock(&epf->lock); 721 + } 722 + mutex_unlock(&epc->list_lock); 723 723 } 724 724 EXPORT_SYMBOL_GPL(pci_epc_init_notify); 725 725 ··· 795 777 } 796 778 797 779 mutex_init(&epc->lock); 780 + mutex_init(&epc->list_lock); 798 781 INIT_LIST_HEAD(&epc->pci_epf); 799 - ATOMIC_INIT_NOTIFIER_HEAD(&epc->notifier); 800 782 801 783 device_initialize(&epc->dev); 802 784 epc->dev.class = pci_epc_class; ··· 879 861 880 862 MODULE_DESCRIPTION("PCI EPC Library"); 881 863 MODULE_AUTHOR("Kishon Vijay Abraham I <kishon@ti.com>"); 882 - MODULE_LICENSE("GPL v2");
-1
drivers/pci/endpoint/pci-epc-mem.c
··· 260 260 261 261 MODULE_DESCRIPTION("PCI EPC Address Space Management"); 262 262 MODULE_AUTHOR("Kishon Vijay Abraham I <kishon@ti.com>"); 263 - MODULE_LICENSE("GPL v2");
-1
drivers/pci/endpoint/pci-epf-core.c
··· 568 568 569 569 MODULE_DESCRIPTION("PCI EPF Library"); 570 570 MODULE_AUTHOR("Kishon Vijay Abraham I <kishon@ti.com>"); 571 - MODULE_LICENSE("GPL v2");
-1
drivers/pci/hotplug/acpiphp_core.c
··· 45 45 46 46 MODULE_AUTHOR(DRIVER_AUTHOR); 47 47 MODULE_DESCRIPTION(DRIVER_DESC); 48 - MODULE_LICENSE("GPL"); 49 48 MODULE_PARM_DESC(disable, "disable acpiphp driver"); 50 49 module_param_named(disable, acpiphp_disabled, bool, 0444); 51 50
+2
drivers/pci/hotplug/pciehp_hpc.c
··· 1088 1088 } 1089 1089 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, 1090 1090 PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl); 1091 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x010e, 1092 + PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl); 1091 1093 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0110, 1092 1094 PCI_CLASS_BRIDGE_PCI, 8, quirk_cmd_compl); 1093 1095 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_QCOM, 0x0400,
-1
drivers/pci/hotplug/shpchp_core.c
··· 32 32 33 33 MODULE_AUTHOR(DRIVER_AUTHOR); 34 34 MODULE_DESCRIPTION(DRIVER_DESC); 35 - MODULE_LICENSE("GPL"); 36 35 37 36 module_param(shpchp_debug, bool, 0644); 38 37 module_param(shpchp_poll_mode, bool, 0644);
+1 -1
drivers/pci/iov.c
··· 14 14 #include <linux/delay.h> 15 15 #include "pci.h" 16 16 17 - #define VIRTFN_ID_LEN 16 17 + #define VIRTFN_ID_LEN 17 /* "virtfn%u\0" for 2^32 - 1 */ 18 18 19 19 int pci_iov_virtfn_bus(struct pci_dev *dev, int vf_id) 20 20 {
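The arithmetic behind the new length: "virtfn" is 6 characters, a u32 VF number prints as at most 10 digits (4294967295 = 2^32 - 1), and the NUL terminator adds one more, giving 17; the old value of 16 truncated the largest IDs. A worked sketch (FOO_* name invented):

#include <linux/kernel.h>

#define FOO_VIRTFN_ID_LEN	17	/* "virtfn" (6) + "4294967295" (10) + '\0' */

static void foo_virtfn_name(char buf[FOO_VIRTFN_ID_LEN], u32 vf_id)
{
	/* Worst case writes 16 characters plus the terminator. */
	snprintf(buf, FOO_VIRTFN_ID_LEN, "virtfn%u", vf_id);
}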
+5 -3
drivers/pci/p2pdma.c
··· 194 194 static void p2pdma_page_free(struct page *page) 195 195 { 196 196 struct pci_p2pdma_pagemap *pgmap = to_p2p_pgmap(page->pgmap); 197 + /* safe to dereference while a reference is held to the percpu ref */ 198 + struct pci_p2pdma *p2pdma = 199 + rcu_dereference_protected(pgmap->provider->p2pdma, 1); 197 200 struct percpu_ref *ref; 198 201 199 - gen_pool_free_owner(pgmap->provider->p2pdma->pool, 200 - (uintptr_t)page_to_virt(page), PAGE_SIZE, 201 - (void **)&ref); 202 + gen_pool_free_owner(p2pdma->pool, (uintptr_t)page_to_virt(page), 203 + PAGE_SIZE, (void **)&ref); 202 204 percpu_ref_put(ref); 203 205 } 204 206
+31 -14
drivers/pci/pci-acpi.c
··· 976 976 bool acpi_pci_bridge_d3(struct pci_dev *dev) 977 977 { 978 978 struct pci_dev *rpdev; 979 - struct acpi_device *adev; 980 - acpi_status status; 981 - unsigned long long state; 979 + struct acpi_device *adev, *rpadev; 982 980 const union acpi_object *obj; 983 981 984 982 if (acpi_pci_disabled || !dev->is_hotplug_bridge) 985 983 return false; 986 984 987 - /* Assume D3 support if the bridge is power-manageable by ACPI. */ 988 - if (acpi_pci_power_manageable(dev)) 989 - return true; 985 + adev = ACPI_COMPANION(&dev->dev); 986 + if (adev) { 987 + /* 988 + * If the bridge has _S0W, whether or not it can go into D3 989 + * depends on what is returned by that object. In particular, 990 + * if the power state returned by _S0W is D2 or shallower, 991 + * entering D3 should not be allowed. 992 + */ 993 + if (acpi_dev_power_state_for_wake(adev) <= ACPI_STATE_D2) 994 + return false; 995 + 996 + /* 997 + * Otherwise, assume that the bridge can enter D3 so long as it 998 + * is power-manageable via ACPI. 999 + */ 1000 + if (acpi_device_power_manageable(adev)) 1001 + return true; 1002 + } 990 1003 991 1004 rpdev = pcie_find_root_port(dev); 992 1005 if (!rpdev) 993 1006 return false; 994 1007 995 - adev = ACPI_COMPANION(&rpdev->dev); 996 - if (!adev) 1008 + if (rpdev == dev) 1009 + rpadev = adev; 1010 + else 1011 + rpadev = ACPI_COMPANION(&rpdev->dev); 1012 + 1013 + if (!rpadev) 997 1014 return false; 998 1015 999 1016 /* ··· 1018 1001 * doesn't supply a wakeup GPE via _PRW, it cannot signal hotplug 1019 1002 * events from low-power states including D3hot and D3cold. 1020 1003 */ 1021 - if (!adev->wakeup.flags.valid) 1004 + if (!rpadev->wakeup.flags.valid) 1022 1005 return false; 1023 1006 1024 1007 /* 1025 - * If the Root Port cannot wake itself from D3hot or D3cold, we 1026 - * can't use D3. 1008 + * In the bridge-below-a-Root-Port case, evaluate _S0W for the Root Port 1009 + * to verify whether or not it can signal wakeup from D3. 1027 1010 */ 1028 - status = acpi_evaluate_integer(adev->handle, "_S0W", NULL, &state); 1029 - if (ACPI_SUCCESS(status) && state < ACPI_STATE_D3_HOT) 1011 + if (rpadev != adev && 1012 + acpi_dev_power_state_for_wake(rpadev) <= ACPI_STATE_D2) 1030 1013 return false; 1031 1014 1032 1015 /* ··· 1035 1018 * bridges *below* that Root Port can also signal hotplug events 1036 1019 * while in D3. 1037 1020 */ 1038 - if (!acpi_dev_get_property(adev, "HotPlugSupportInD3", 1021 + if (!acpi_dev_get_property(rpadev, "HotPlugSupportInD3", 1039 1022 ACPI_TYPE_INTEGER, &obj) && 1040 1023 obj->integer.value == 1) 1041 1024 return true;
+1 -1
drivers/pci/pci-driver.c
··· 572 572 573 573 static void pci_pm_bridge_power_up_actions(struct pci_dev *pci_dev) 574 574 { 575 - pci_bridge_wait_for_secondary_bus(pci_dev); 575 + pci_bridge_wait_for_secondary_bus(pci_dev, "resume", PCI_RESET_WAIT); 576 576 /* 577 577 * When powering on a bridge from D3cold, the whole hierarchy may be 578 578 * powered on into D0uninitialized state, resume them to give them a
+35 -34
drivers/pci/pci.c
··· 167 167 } 168 168 __setup("pcie_port_pm=", pcie_port_pm_setup); 169 169 170 - /* Time to wait after a reset for device to become responsive */ 171 - #define PCIE_RESET_READY_POLL_MS 60000 172 - 173 170 /** 174 171 * pci_bus_max_busnr - returns maximum PCI bus number of given bus' children 175 172 * @bus: pointer to PCI bus structure to search ··· 1171 1174 return -ENOTTY; 1172 1175 } 1173 1176 1174 - if (delay > 1000) 1177 + if (delay > PCI_RESET_WAIT) 1175 1178 pci_info(dev, "not ready %dms after %s; waiting\n", 1176 1179 delay - 1, reset_type); 1177 1180 ··· 1180 1183 pci_read_config_dword(dev, PCI_COMMAND, &id); 1181 1184 } 1182 1185 1183 - if (delay > 1000) 1186 + if (delay > PCI_RESET_WAIT) 1184 1187 pci_info(dev, "ready %dms after %s\n", delay - 1, 1185 1188 reset_type); 1186 1189 ··· 4938 4941 /** 4939 4942 * pci_bridge_wait_for_secondary_bus - Wait for secondary bus to be accessible 4940 4943 * @dev: PCI bridge 4944 + * @reset_type: reset type in human-readable form 4945 + * @timeout: maximum time to wait for devices on secondary bus (milliseconds) 4941 4946 * 4942 4947 * Handle necessary delays before access to the devices on the secondary 4943 - * side of the bridge are permitted after D3cold to D0 transition. 4948 + * side of the bridge are permitted after D3cold to D0 transition 4949 + * or Conventional Reset. 4944 4950 * 4945 4951 * For PCIe this means the delays in PCIe 5.0 section 6.6.1. For 4946 4952 * conventional PCI it means Tpvrh + Trhfa specified in PCI 3.0 section 4947 4953 * 4.3.2. 4954 + * 4955 + * Return 0 on success or -ENOTTY if the first device on the secondary bus 4956 + * failed to become accessible. 4948 4957 */ 4949 - void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev) 4958 + int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type, 4959 + int timeout) 4950 4960 { 4951 4961 struct pci_dev *child; 4952 4962 int delay; 4953 4963 4954 4964 if (pci_dev_is_disconnected(dev)) 4955 - return; 4965 + return 0; 4956 4966 4957 - if (!pci_is_bridge(dev) || !dev->bridge_d3) 4958 - return; 4967 + if (!pci_is_bridge(dev)) 4968 + return 0; 4959 4969 4960 4970 down_read(&pci_bus_sem); 4961 4971 ··· 4974 4970 */ 4975 4971 if (!dev->subordinate || list_empty(&dev->subordinate->devices)) { 4976 4972 up_read(&pci_bus_sem); 4977 - return; 4973 + return 0; 4978 4974 } 4979 4975 4980 4976 /* Take d3cold_delay requirements into account */ 4981 4977 delay = pci_bus_max_d3cold_delay(dev->subordinate); 4982 4978 if (!delay) { 4983 4979 up_read(&pci_bus_sem); 4984 - return; 4980 + return 0; 4985 4981 } 4986 4982 4987 4983 child = list_first_entry(&dev->subordinate->devices, struct pci_dev, ··· 4990 4986 4991 4987 /* 4992 4988 * Conventional PCI and PCI-X we need to wait Tpvrh + Trhfa before 4993 - * accessing the device after reset (that is 1000 ms + 100 ms). In 4994 - * practice this should not be needed because we don't do power 4995 - * management for them (see pci_bridge_d3_possible()). 4989 + * accessing the device after reset (that is 1000 ms + 100 ms). 4996 4990 */ 4997 4991 if (!pci_is_pcie(dev)) { 4998 4992 pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + delay); 4999 4993 msleep(1000 + delay); 5000 - return; 4994 + return 0; 5001 4995 } 5002 4996 5003 4997 /* ··· 5012 5010 * configuration requests if we only wait for 100 ms (see 5013 5011 * https://bugzilla.kernel.org/show_bug.cgi?id=203885). 5014 5012 * 5015 - * Therefore we wait for 100 ms and check for the device presence. 
5016 - * If it is still not present give it an additional 100 ms. 5013 + * Therefore we wait for 100 ms and check for the device presence 5014 + * until the timeout expires. 5017 5015 */ 5018 5016 if (!pcie_downstream_port(dev)) 5019 - return; 5017 + return 0; 5020 5018 5021 5019 if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) { 5022 5020 pci_dbg(dev, "waiting %d ms for downstream link\n", delay); ··· 5027 5025 if (!pcie_wait_for_link_delay(dev, true, delay)) { 5028 5026 /* Did not train, no need to wait any further */ 5029 5027 pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n"); 5030 - return; 5028 + return -ENOTTY; 5031 5029 } 5032 5030 } 5033 5031 5034 - if (!pci_device_is_present(child)) { 5035 - pci_dbg(child, "waiting additional %d ms to become accessible\n", delay); 5036 - msleep(delay); 5037 - } 5032 + return pci_dev_wait(child, reset_type, timeout - delay); 5038 5033 } 5039 5034 5040 5035 void pci_reset_secondary_bus(struct pci_dev *dev) ··· 5050 5051 5051 5052 ctrl &= ~PCI_BRIDGE_CTL_BUS_RESET; 5052 5053 pci_write_config_word(dev, PCI_BRIDGE_CONTROL, ctrl); 5053 - 5054 - /* 5055 - * Trhfa for conventional PCI is 2^25 clock cycles. 5056 - * Assuming a minimum 33MHz clock this results in a 1s 5057 - * delay before we can consider subordinate devices to 5058 - * be re-initialized. PCIe has some ways to shorten this, 5059 - * but we don't make use of them yet. 5060 - */ 5061 - ssleep(1); 5062 5054 } 5063 5055 5064 5056 void __weak pcibios_reset_secondary_bus(struct pci_dev *dev) ··· 5068 5078 { 5069 5079 pcibios_reset_secondary_bus(dev); 5070 5080 5071 - return pci_dev_wait(dev, "bus reset", PCIE_RESET_READY_POLL_MS); 5081 + return pci_bridge_wait_for_secondary_bus(dev, "bus reset", 5082 + PCIE_RESET_READY_POLL_MS); 5072 5083 } 5073 5084 EXPORT_SYMBOL_GPL(pci_bridge_secondary_bus_reset); 5074 5085 ··· 6017 6026 { 6018 6027 u16 v; 6019 6028 int ret; 6029 + struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus); 6020 6030 6021 6031 if (rq < 128 || rq > 4096 || !is_power_of_2(rq)) 6022 6032 return -EINVAL; ··· 6035 6043 } 6036 6044 6037 6045 v = (ffs(rq) - 8) << 12; 6046 + 6047 + if (bridge->no_inc_mrrs) { 6048 + int max_mrrs = pcie_get_readrq(dev); 6049 + 6050 + if (rq > max_mrrs) { 6051 + pci_info(dev, "can't set Max_Read_Request_Size to %d; max is %d\n", rq, max_mrrs); 6052 + return -EINVAL; 6053 + } 6054 + } 6038 6055 6039 6056 ret = pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL, 6040 6057 PCI_EXP_DEVCTL_READRQ, v);
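Taken together with the Loongson quirk earlier, the pcie_set_readrq() change means a host bridge can veto MRRS increases while reductions keep working. An illustrative caller (hypothetical helper):

#include <linux/pci.h>

static int foo_try_raise_mrrs(struct pci_dev *pdev)
{
	int cur = pcie_get_readrq(pdev);

	if (cur >= 4096)
		return 0;	/* already at the PCIe ceiling */

	/* Fails with -EINVAL (and a pci_info note) when no_inc_mrrs is set. */
	return pcie_set_readrq(pdev, cur * 2);
}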
+28 -31
drivers/pci/pci.h
··· 64 64 #define PCI_PM_D3HOT_WAIT 10 /* msec */ 65 65 #define PCI_PM_D3COLD_WAIT 100 /* msec */ 66 66 67 + /* 68 + * Following exit from Conventional Reset, devices must be ready within 1 sec 69 + * (PCIe r6.0 sec 6.6.1). A D3cold to D0 transition implies a Conventional 70 + * Reset (PCIe r6.0 sec 5.8). 71 + */ 72 + #define PCI_RESET_WAIT 1000 /* msec */ 73 + /* 74 + * Devices may extend the 1 sec period through Request Retry Status completions 75 + * (PCIe r6.0 sec 2.3.1). The spec does not provide an upper limit, but 60 sec 76 + * ought to be enough for any device to become responsive. 77 + */ 78 + #define PCIE_RESET_READY_POLL_MS 60000 /* msec */ 79 + 67 80 void pci_update_current_state(struct pci_dev *dev, pci_power_t state); 68 81 void pci_refresh_power_state(struct pci_dev *dev); 69 82 int pci_power_up(struct pci_dev *dev); ··· 99 86 void pci_msix_init(struct pci_dev *dev); 100 87 bool pci_bridge_d3_possible(struct pci_dev *dev); 101 88 void pci_bridge_d3_update(struct pci_dev *dev); 102 - void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev); 103 89 void pci_bridge_reconfigure_ltr(struct pci_dev *dev); 90 + int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type, 91 + int timeout); 104 92 105 93 static inline void pci_wakeup_event(struct pci_dev *dev) 106 94 { ··· 324 310 * @dev: PCI device to set new error_state 325 311 * @new: the state we want dev to be in 326 312 * 327 - * Must be called with device_lock held. 313 + * If the device is experiencing perm_failure, it has to remain in that state. 314 + * Any other transition is allowed. 328 315 * 329 316 * Returns true if state has been changed to the requested state. 330 317 */ 331 318 static inline bool pci_dev_set_io_state(struct pci_dev *dev, 332 319 pci_channel_state_t new) 333 320 { 334 - bool changed = false; 321 + pci_channel_state_t old; 335 322 336 - device_lock_assert(&dev->dev); 337 323 switch (new) { 338 324 case pci_channel_io_perm_failure: 339 - switch (dev->error_state) { 340 - case pci_channel_io_frozen: 341 - case pci_channel_io_normal: 342 - case pci_channel_io_perm_failure: 343 - changed = true; 344 - break; 345 - } 346 - break; 325 + xchg(&dev->error_state, pci_channel_io_perm_failure); 326 + return true; 347 327 case pci_channel_io_frozen: 348 - switch (dev->error_state) { 349 - case pci_channel_io_frozen: 350 - case pci_channel_io_normal: 351 - changed = true; 352 - break; 353 - } 354 - break; 328 + old = cmpxchg(&dev->error_state, pci_channel_io_normal, 329 + pci_channel_io_frozen); 330 + return old != pci_channel_io_perm_failure; 355 331 case pci_channel_io_normal: 356 - switch (dev->error_state) { 357 - case pci_channel_io_frozen: 358 - case pci_channel_io_normal: 359 - changed = true; 360 - break; 361 - } 362 - break; 332 + old = cmpxchg(&dev->error_state, pci_channel_io_frozen, 333 + pci_channel_io_normal); 334 + return old != pci_channel_io_perm_failure; 335 + default: 336 + return false; 363 337 } 364 - if (changed) 365 - dev->error_state = new; 366 - return changed; 367 338 } 368 339 369 340 static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused) 370 341 { 371 - device_lock(&dev->dev); 372 342 pci_dev_set_io_state(dev, pci_channel_io_perm_failure); 373 - device_unlock(&dev->dev); 374 343 375 344 return 0; 376 345 }
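The rewritten helper encodes "perm_failure is terminal; everything else may move" without the device lock: xchg() makes entry into perm_failure unconditional, and cmpxchg() performs the other transitions only if the state is still the expected old one, so a concurrent permanent failure can never be overwritten. The same idea in miniature, with an invented three-state type:

#include <linux/atomic.h>

enum foo_state { FOO_OK, FOO_FROZEN, FOO_DEAD };

/* FOO_DEAD is terminal: once another CPU stores it, freezing fails. */
static bool foo_freeze(enum foo_state *state)
{
	return cmpxchg(state, FOO_OK, FOO_FROZEN) != FOO_DEAD;
}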
+3 -48
drivers/pci/pcie/aer.c
··· 184 184 */ 185 185 void pcie_set_ecrc_checking(struct pci_dev *dev) 186 186 { 187 + if (!pcie_aer_is_native(dev)) 188 + return; 189 + 187 190 switch (ecrc_policy) { 188 191 case ECRC_POLICY_DEFAULT: 189 192 return; ··· 1227 1224 return IRQ_WAKE_THREAD; 1228 1225 } 1229 1226 1230 - static int set_device_error_reporting(struct pci_dev *dev, void *data) 1231 - { 1232 - bool enable = *((bool *)data); 1233 - int type = pci_pcie_type(dev); 1234 - 1235 - if ((type == PCI_EXP_TYPE_ROOT_PORT) || 1236 - (type == PCI_EXP_TYPE_RC_EC) || 1237 - (type == PCI_EXP_TYPE_UPSTREAM) || 1238 - (type == PCI_EXP_TYPE_DOWNSTREAM)) { 1239 - if (enable) 1240 - pci_enable_pcie_error_reporting(dev); 1241 - else 1242 - pci_disable_pcie_error_reporting(dev); 1243 - } 1244 - 1245 - return 0; 1246 - } 1247 - 1248 - /** 1249 - * set_downstream_devices_error_reporting - enable/disable the error reporting bits on the root port and its downstream ports. 1250 - * @dev: pointer to root port's pci_dev data structure 1251 - * @enable: true = enable error reporting, false = disable error reporting. 1252 - */ 1253 - static void set_downstream_devices_error_reporting(struct pci_dev *dev, 1254 - bool enable) 1255 - { 1256 - set_device_error_reporting(dev, &enable); 1257 - 1258 - if (pci_pcie_type(dev) == PCI_EXP_TYPE_RC_EC) 1259 - pcie_walk_rcec(dev, set_device_error_reporting, &enable); 1260 - else if (dev->subordinate) 1261 - pci_walk_bus(dev->subordinate, set_device_error_reporting, 1262 - &enable); 1263 - 1264 - } 1265 - 1266 1227 /** 1267 1228 * aer_enable_rootport - enable Root Port's interrupts when receiving messages 1268 1229 * @rpc: pointer to a Root Port data structure ··· 1256 1289 pci_read_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, &reg32); 1257 1290 pci_write_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, reg32); 1258 1291 1259 - /* 1260 - * Enable error reporting for the root port device and downstream port 1261 - * devices. 1262 - */ 1263 - set_downstream_devices_error_reporting(pdev, true); 1264 - 1265 1292 /* Enable Root Port's interrupt in response to error messages */ 1266 1293 pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, &reg32); 1267 1294 reg32 |= ROOT_PORT_INTR_ON_MESG_MASK; ··· 1273 1312 struct pci_dev *pdev = rpc->rpd; 1274 1313 int aer = pdev->aer_cap; 1275 1314 u32 reg32; 1276 - 1277 - /* 1278 - * Disable error reporting for the root port device and downstream port 1279 - * devices. 1280 - */ 1281 - set_downstream_devices_error_reporting(pdev, false); 1282 1315 1283 1316 /* Disable Root's interrupt in response to error messages */ 1284 1317 pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, &reg32);
+54
drivers/pci/pcie/aspm.c
··· 1138 1138 } 1139 1139 EXPORT_SYMBOL(pci_disable_link_state); 1140 1140 1141 + /** 1142 + * pci_enable_link_state - Clear and set the default device link state so that 1143 + * the link may be allowed to enter the specified states. Note that if the 1144 + * BIOS didn't grant ASPM control to the OS, this does nothing because we can't 1145 + * touch the LNKCTL register. Also note that this does not enable states 1146 + * disabled by pci_disable_link_state(). Return 0 or a negative errno. 1147 + * 1148 + * @pdev: PCI device 1149 + * @state: Mask of ASPM link states to enable 1150 + */ 1151 + int pci_enable_link_state(struct pci_dev *pdev, int state) 1152 + { 1153 + struct pcie_link_state *link = pcie_aspm_get_link(pdev); 1154 + 1155 + if (!link) 1156 + return -EINVAL; 1157 + /* 1158 + * A driver requested that ASPM be enabled on this device, but 1159 + * if we don't have permission to manage ASPM (e.g., on ACPI 1160 + * systems we have to observe the FADT ACPI_FADT_NO_ASPM bit and 1161 + * the _OSC method), we can't honor that request. 1162 + */ 1163 + if (aspm_disabled) { 1164 + pci_warn(pdev, "can't override BIOS ASPM; OS doesn't have ASPM control\n"); 1165 + return -EPERM; 1166 + } 1167 + 1168 + down_read(&pci_bus_sem); 1169 + mutex_lock(&aspm_lock); 1170 + link->aspm_default = 0; 1171 + if (state & PCIE_LINK_STATE_L0S) 1172 + link->aspm_default |= ASPM_STATE_L0S; 1173 + if (state & PCIE_LINK_STATE_L1) 1174 + /* L1 PM substates require L1 */ 1175 + link->aspm_default |= ASPM_STATE_L1 | ASPM_STATE_L1SS; 1176 + if (state & PCIE_LINK_STATE_L1_1) 1177 + link->aspm_default |= ASPM_STATE_L1_1; 1178 + if (state & PCIE_LINK_STATE_L1_2) 1179 + link->aspm_default |= ASPM_STATE_L1_2; 1180 + if (state & PCIE_LINK_STATE_L1_1_PCIPM) 1181 + link->aspm_default |= ASPM_STATE_L1_1_PCIPM; 1182 + if (state & PCIE_LINK_STATE_L1_2_PCIPM) 1183 + link->aspm_default |= ASPM_STATE_L1_2_PCIPM; 1184 + pcie_config_aspm_link(link, policy_to_aspm_state(link)); 1185 + 1186 + link->clkpm_default = (state & PCIE_LINK_STATE_CLKPM) ? 1 : 0; 1187 + pcie_set_clkpm(link, policy_to_clkpm_state(link)); 1188 + mutex_unlock(&aspm_lock); 1189 + up_read(&pci_bus_sem); 1190 + 1191 + return 0; 1192 + } 1193 + EXPORT_SYMBOL(pci_enable_link_state); 1194 + 1141 1195 static int pcie_aspm_set_policy(const char *val, 1142 1196 const struct kernel_param *kp) 1143 1197 {
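A usage sketch for the new interface (hypothetical caller; the vmd quirk above passes PCIE_LINK_STATE_ALL): the driver names the states the link may enter and must tolerate -EPERM when the BIOS retained ASPM control:

#include <linux/pci.h>

static void foo_allow_aspm(struct pci_dev *pdev)
{
	int ret;

	/* Allow L1 plus the L1.2 substate on this device's link. */
	ret = pci_enable_link_state(pdev, PCIE_LINK_STATE_L1 |
					  PCIE_LINK_STATE_L1_2);
	if (ret)
		pci_dbg(pdev, "ASPM config left to the platform: %d\n", ret);
}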
+2 -2
drivers/pci/pcie/dpc.c
···
 	pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
 			      PCI_EXP_DPC_STATUS_TRIGGER);

-	if (!pcie_wait_for_link(pdev, true)) {
-		pci_info(pdev, "Data Link Layer Link Active not set in 1000 msec\n");
+	if (pci_bridge_wait_for_secondary_bus(pdev, "DPC",
+					      PCIE_RESET_READY_POLL_MS)) {
 		clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
 		ret = PCI_ERS_RESULT_DISCONNECT;
 	} else {
+14 -2
drivers/pci/pcie/portdrv.c
···
 {
 	device_for_each_child(&dev->dev, NULL, remove_iter);
 	pci_free_irq_vectors(dev);
-	pci_disable_device(dev);
 }

 /**
···
 	}

 	pcie_port_device_remove(dev);
+
+	pci_disable_device(dev);
+}
+
+static void pcie_portdrv_shutdown(struct pci_dev *dev)
+{
+	if (pci_bridge_d3_possible(dev)) {
+		pm_runtime_forbid(&dev->dev);
+		pm_runtime_get_noresume(&dev->dev);
+		pm_runtime_dont_use_autosuspend(&dev->dev);
+	}
+
+	pcie_port_device_remove(dev);
 }

 static pci_ers_result_t pcie_portdrv_error_detected(struct pci_dev *dev,
···

 	.probe		= pcie_portdrv_probe,
 	.remove		= pcie_portdrv_remove,
-	.shutdown	= pcie_portdrv_remove,
+	.shutdown	= pcie_portdrv_shutdown,

 	.err_handler	= &pcie_portdrv_err_handler,

+3 -1
drivers/pci/probe.c
···
 	resource_list_for_each_entry_safe(window, n, &resources) {
 		offset = window->offset;
 		res = window->res;
-		if (!res->end)
+		if (!res->flags && !res->start && !res->end)
 			continue;

 		list_move_tail(&window->node, &bridge->windows);
···

 	pci_set_of_node(dev);
 	pci_set_acpi_fwnode(dev);
+	if (dev->dev.fwnode && !fwnode_device_is_available(dev->dev.fwnode))
+		return -ENODEV;

 	pci_dev_assign_slot(dev);

+23
drivers/pci/quirks.c
···
 			      PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
 }

+/*
+ * Wangxun 10G/1G NICs have no ACS capability, and on multi-function
+ * devices, peer-to-peer transactions are not used between the functions.
+ * So add an ACS quirk for the devices below to isolate their functions:
+ *   SFxxx 1G NICs (em)
+ *   RP1000/RP2000 10G NICs (sp)
+ */
+static int pci_quirk_wangxun_nic_acs(struct pci_dev *dev, u16 acs_flags)
+{
+	switch (dev->device) {
+	case 0x0100 ... 0x010F:
+	case 0x1001:
+	case 0x2001:
+		return pci_acs_ctrl_enabled(acs_flags,
+			PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
+	}
+
+	return false;
+}
+
 static const struct pci_dev_acs_enabled {
 	u16 vendor;
 	u16 device;
···
 	{ PCI_VENDOR_ID_NXP, 0x8d9b, pci_quirk_nxp_rp_acs },
 	/* Zhaoxin Root/Downstream Ports */
 	{ PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs },
+	/* Wangxun NICs */
+	{ PCI_VENDOR_ID_WANGXUN, PCI_ANY_ID, pci_quirk_wangxun_nic_acs },
 	{ 0 }
 };
···
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1487, quirk_no_flr);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x148c, quirk_no_flr);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x149c, quirk_no_flr);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x7901, quirk_no_flr);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_no_flr);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_no_flr);
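The "case 0x0100 ... 0x010F:" above is the GCC/Clang case-range extension that quirks.c already uses elsewhere. A standalone sketch of the same device-ID matching (plain C; the IDs are copied from the quirk, everything else is illustrative):

	#include <stdbool.h>
	#include <stdio.h>

	/* Mirror of the device-ID match in pci_quirk_wangxun_nic_acs() */
	static bool wangxun_acs_quirk_matches(unsigned int device)
	{
		switch (device) {
		case 0x0100 ... 0x010F:	/* SFxxx 1G NICs (em) */
		case 0x1001:		/* RP1000 10G NIC (sp) */
		case 0x2001:		/* RP2000 10G NIC (sp) */
			return true;
		}
		return false;
	}

	int main(void)
	{
		printf("0x0105 matches: %d\n", wangxun_acs_quirk_matches(0x0105)); /* 1 */
		printf("0x1000 matches: %d\n", wangxun_acs_quirk_matches(0x1000)); /* 0 */
		return 0;
	}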
+176 -72
drivers/pci/setup-bus.c
···
 		add_size = size - new_size;
 		pci_dbg(bridge, "bridge window %pR shrunken by %pa\n", res,
 			&add_size);
+	} else {
+		return;
 	}

 	res->end = res->start + new_size - 1;
-	remove_from_list(add_list, res);
+
+	/* If the resource is part of the add_list, remove it now */
+	if (add_list)
+		remove_from_list(add_list, res);
 }

+static void remove_dev_resource(struct resource *avail, struct pci_dev *dev,
+				struct resource *res)
+{
+	resource_size_t size, align, tmp;
+
+	size = resource_size(res);
+	if (!size)
+		return;
+
+	align = pci_resource_alignment(dev, res);
+	align = align ? ALIGN(avail->start, align) - avail->start : 0;
+	tmp = align + size;
+	avail->start = min(avail->start + tmp, avail->end + 1);
+}
+
+static void remove_dev_resources(struct pci_dev *dev, struct resource *io,
+				 struct resource *mmio,
+				 struct resource *mmio_pref)
+{
+	int i;
+
+	for (i = 0; i < PCI_NUM_RESOURCES; i++) {
+		struct resource *res = &dev->resource[i];
+
+		if (resource_type(res) == IORESOURCE_IO) {
+			remove_dev_resource(io, dev, res);
+		} else if (resource_type(res) == IORESOURCE_MEM) {
+
+			/*
+			 * Make sure prefetchable memory is reduced from
+			 * the correct resource. Specifically we put 32-bit
+			 * prefetchable memory in the non-prefetchable
+			 * window if there is a 64-bit prefetchable window.
+			 *
+			 * See comments in __pci_bus_size_bridges() for
+			 * more information.
+			 */
+			if ((res->flags & IORESOURCE_PREFETCH) &&
+			    ((res->flags & IORESOURCE_MEM_64) ==
+			     (mmio_pref->flags & IORESOURCE_MEM_64)))
+				remove_dev_resource(mmio_pref, dev, res);
+			else
+				remove_dev_resource(mmio, dev, res);
+		}
+	}
+}
+
+/*
+ * io, mmio and mmio_pref contain the total amount of bridge window space
+ * available. This includes the minimal space needed to cover all the
+ * existing devices on the bus and the possible extra space that can be
+ * shared with the bridges.
+ */
 static void pci_bus_distribute_available_resources(struct pci_bus *bus,
 						   struct list_head *add_list,
 						   struct resource io,
···
 	unsigned int normal_bridges = 0, hotplug_bridges = 0;
 	struct resource *io_res, *mmio_res, *mmio_pref_res;
 	struct pci_dev *dev, *bridge = bus->self;
-	resource_size_t io_per_hp, mmio_per_hp, mmio_pref_per_hp, align;
+	resource_size_t io_per_b, mmio_per_b, mmio_pref_per_b, align;

 	io_res = &bridge->resource[PCI_BRIDGE_IO_WINDOW];
 	mmio_res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW];
···
 		normal_bridges++;
 	}

-	/*
-	 * There is only one bridge on the bus so it gets all available
-	 * resources which it can then distribute to the possible hotplug
-	 * bridges below.
-	 */
-	if (hotplug_bridges + normal_bridges == 1) {
-		dev = list_first_entry(&bus->devices, struct pci_dev, bus_list);
-		if (dev->subordinate)
-			pci_bus_distribute_available_resources(dev->subordinate,
-				add_list, io, mmio, mmio_pref);
+	if (!(hotplug_bridges + normal_bridges))
 		return;
+
+	/*
+	 * Calculate the amount of space we can forward from "bus" to any
+	 * downstream buses, i.e., the space left over after assigning the
+	 * BARs and windows on "bus".
+	 */
+	list_for_each_entry(dev, &bus->devices, bus_list) {
+		if (!dev->is_virtfn)
+			remove_dev_resources(dev, &io, &mmio, &mmio_pref);
 	}

-	if (hotplug_bridges == 0)
-		return;
-
 	/*
-	 * Calculate the total amount of extra resource space we can
-	 * pass to bridges below this one. This is basically the
-	 * extra space reduced by the minimal required space for the
-	 * non-hotplug bridges.
+	 * If there is at least one hotplug bridge on this bus it gets all
+	 * the extra resource space that was left after the reductions
+	 * above.
+	 *
+	 * If there are no hotplug bridges the extra resource space is
+	 * split between non-hotplug bridges. This is to allow possible
+	 * hotplug bridges below them to get the extra space as well.
 	 */
+	if (hotplug_bridges) {
+		io_per_b = div64_ul(resource_size(&io), hotplug_bridges);
+		mmio_per_b = div64_ul(resource_size(&mmio), hotplug_bridges);
+		mmio_pref_per_b = div64_ul(resource_size(&mmio_pref),
+					   hotplug_bridges);
+	} else {
+		io_per_b = div64_ul(resource_size(&io), normal_bridges);
+		mmio_per_b = div64_ul(resource_size(&mmio), normal_bridges);
+		mmio_pref_per_b = div64_ul(resource_size(&mmio_pref),
+					   normal_bridges);
+	}
+
 	for_each_pci_bridge(dev, bus) {
-		resource_size_t used_size;
 		struct resource *res;
-
-		if (dev->is_hotplug_bridge)
-			continue;
-
-		/*
-		 * Reduce the available resource space by what the
-		 * bridge and devices below it occupy.
-		 */
-		res = &dev->resource[PCI_BRIDGE_IO_WINDOW];
-		align = pci_resource_alignment(dev, res);
-		align = align ? ALIGN(io.start, align) - io.start : 0;
-		used_size = align + resource_size(res);
-		if (!res->parent)
-			io.start = min(io.start + used_size, io.end + 1);
-
-		res = &dev->resource[PCI_BRIDGE_MEM_WINDOW];
-		align = pci_resource_alignment(dev, res);
-		align = align ? ALIGN(mmio.start, align) - mmio.start : 0;
-		used_size = align + resource_size(res);
-		if (!res->parent)
-			mmio.start = min(mmio.start + used_size, mmio.end + 1);
-
-		res = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
-		align = pci_resource_alignment(dev, res);
-		align = align ? ALIGN(mmio_pref.start, align) -
-			mmio_pref.start : 0;
-		used_size = align + resource_size(res);
-		if (!res->parent)
-			mmio_pref.start = min(mmio_pref.start + used_size,
-					      mmio_pref.end + 1);
-	}
-
-	io_per_hp = div64_ul(resource_size(&io), hotplug_bridges);
-	mmio_per_hp = div64_ul(resource_size(&mmio), hotplug_bridges);
-	mmio_pref_per_hp = div64_ul(resource_size(&mmio_pref),
-				    hotplug_bridges);
-
-	/*
-	 * Go over devices on this bus and distribute the remaining
-	 * resource space between hotplug bridges.
-	 */
-	for_each_pci_bridge(dev, bus) {
 		struct pci_bus *b;

 		b = dev->subordinate;
-		if (!b || !dev->is_hotplug_bridge)
+		if (!b)
+			continue;
+		if (hotplug_bridges && !dev->is_hotplug_bridge)
 			continue;

+		res = &dev->resource[PCI_BRIDGE_IO_WINDOW];
+
 		/*
-		 * Distribute available extra resources equally between
-		 * hotplug-capable downstream ports taking alignment into
-		 * account.
+		 * Make sure the split resource space is properly aligned
+		 * for bridge windows (align it down to avoid going above
+		 * what is available).
 		 */
-		io.end = io.start + io_per_hp - 1;
-		mmio.end = mmio.start + mmio_per_hp - 1;
-		mmio_pref.end = mmio_pref.start + mmio_pref_per_hp - 1;
+		align = pci_resource_alignment(dev, res);
+		io.end = align ? io.start + ALIGN_DOWN(io_per_b, align) - 1
+			       : io.start + io_per_b - 1;
+
+		/*
+		 * The x_per_b holds the extra resource space that can be
+		 * added for each bridge, but the minimal space is already
+		 * reserved as well, so adjust x.start down accordingly to
+		 * cover the whole space.
+		 */
+		io.start -= resource_size(res);
+
+		res = &dev->resource[PCI_BRIDGE_MEM_WINDOW];
+		align = pci_resource_alignment(dev, res);
+		mmio.end = align ? mmio.start + ALIGN_DOWN(mmio_per_b, align) - 1
+				 : mmio.start + mmio_per_b - 1;
+		mmio.start -= resource_size(res);
+
+		res = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
+		align = pci_resource_alignment(dev, res);
+		mmio_pref.end = align ? mmio_pref.start +
+					ALIGN_DOWN(mmio_pref_per_b, align) - 1
+				      : mmio_pref.start + mmio_pref_per_b - 1;
+		mmio_pref.start -= resource_size(res);

 		pci_bus_distribute_available_resources(b, add_list, io, mmio,
 						       mmio_pref);

-		io.start += io_per_hp;
-		mmio.start += mmio_per_hp;
-		mmio_pref.start += mmio_pref_per_hp;
+		io.start = io.end + 1;
+		mmio.start = mmio.end + 1;
+		mmio_pref.start = mmio_pref.end + 1;
 	}
 }
···
 	if (!bridge->is_hotplug_bridge)
 		return;

+	pci_dbg(bridge, "distributing available resources\n");
+
 	/* Take the initial extra resources from the hotplug port */
 	available_io = bridge->resource[PCI_BRIDGE_IO_WINDOW];
 	available_mmio = bridge->resource[PCI_BRIDGE_MEM_WINDOW];
···
 				    add_list, available_io,
 				    available_mmio,
 				    available_mmio_pref);
+}
+
+static bool pci_bridge_resources_not_assigned(struct pci_dev *dev)
+{
+	const struct resource *r;
+
+	/*
+	 * If the child device's resources are not yet assigned, it means
+	 * we are configuring them (not the boot firmware), so we should
+	 * be able to extend the upstream bridge resources in the same way
+	 * we do with the normal hotplug case.
+	 */
+	r = &dev->resource[PCI_BRIDGE_IO_WINDOW];
+	if (r->flags && !(r->flags & IORESOURCE_STARTALIGN))
+		return false;
+	r = &dev->resource[PCI_BRIDGE_MEM_WINDOW];
+	if (r->flags && !(r->flags & IORESOURCE_STARTALIGN))
+		return false;
+	r = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
+	if (r->flags && !(r->flags & IORESOURCE_STARTALIGN))
+		return false;
+
+	return true;
+}
+
+static void
+pci_root_bus_distribute_available_resources(struct pci_bus *bus,
+					    struct list_head *add_list)
+{
+	struct pci_dev *dev, *bridge = bus->self;
+
+	for_each_pci_bridge(dev, bus) {
+		struct pci_bus *b;
+
+		b = dev->subordinate;
+		if (!b)
+			continue;
+
+		/*
+		 * Need to check "bridge" here too because it is NULL
+		 * for the root bus.
+		 */
+		if (bridge && pci_bridge_resources_not_assigned(dev))
+			pci_bridge_distribute_available_resources(bridge,
+								  add_list);
+		else
+			pci_root_bus_distribute_available_resources(b, add_list);
+	}
 }

 /*
···
 	 * Depth first, calculate sizes and alignments of all subordinate buses.
 	 */
 	__pci_bus_size_bridges(bus, add_list);
+
+	pci_root_bus_distribute_available_resources(bus, add_list);

 	/* Depth last, allocate resources and update the hardware. */
 	__pci_bus_assign_resources(bus, add_list, &fail_head);
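To make the split arithmetic concrete, here is a small self-contained sketch (plain C with made-up numbers, not kernel code) of what the per-bridge division plus ALIGN_DOWN does: each bridge gets an equal share of the leftover window space, rounded down to the window alignment so the shares never overrun what is available:

	#include <stdint.h>
	#include <stdio.h>

	#define ALIGN_DOWN(x, a)	((x) & ~((uint64_t)(a) - 1))

	int main(void)
	{
		/* Hypothetical leftover MMIO space and bridge count */
		uint64_t avail = 63 * 1024 * 1024;	/* 63 MB left over */
		unsigned int bridges = 4;
		uint64_t align = 1024 * 1024;		/* 1 MB window alignment */

		uint64_t per_bridge = avail / bridges;		/* 15.75 MB */
		uint64_t usable = ALIGN_DOWN(per_bridge, align);/* 15 MB */

		printf("each bridge gets %llu MB of extra MMIO space\n",
		       (unsigned long long)(usable >> 20));
		return 0;
	}

Aligning each share down rather than up is what keeps the sum of the per-bridge spans within the parent window.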
+1 -1
drivers/pci/slot.c
···
 };
 ATTRIBUTE_GROUPS(pci_slot_default);

-static struct kobj_type pci_slot_ktype = {
+static const struct kobj_type pci_slot_ktype = {
 	.sysfs_ops = &pci_slot_sysfs_ops,
 	.release = &pci_slot_release,
 	.default_groups = pci_slot_default_groups,
+5 -8
drivers/pci/switch/switchtec.c
···
 	rc = copy_to_user(data, &stuser->return_code,
 			  sizeof(stuser->return_code));
 	if (rc) {
-		rc = -EFAULT;
-		goto out;
+		mutex_unlock(&stdev->mrpc_mutex);
+		return -EFAULT;
 	}

 	data += sizeof(stuser->return_code);
 	rc = copy_to_user(data, &stuser->data,
 			  size - sizeof(stuser->return_code));
 	if (rc) {
-		rc = -EFAULT;
-		goto out;
+		mutex_unlock(&stdev->mrpc_mutex);
+		return -EFAULT;
 	}

 	stuser_set_state(stuser, MRPC_IDLE);

-out:
 	mutex_unlock(&stdev->mrpc_mutex);

 	if (stuser->status == SWITCHTEC_MRPC_STATUS_DONE ||
···
 static irqreturn_t switchtec_dma_mrpc_isr(int irq, void *dev)
 {
 	struct switchtec_dev *stdev = dev;
-	irqreturn_t ret = IRQ_NONE;

 	iowrite32(SWITCHTEC_EVENT_CLEAR |
 		  SWITCHTEC_EVENT_EN_IRQ,
 		  &stdev->mmio_part_cfg->mrpc_comp_hdr);
 	schedule_work(&stdev->mrpc_work);

-	ret = IRQ_HANDLED;
-	return ret;
+	return IRQ_HANDLED;
 }

 static int switchtec_init_isr(struct switchtec_dev *stdev)
+1
include/acpi/acpi_bus.h
···
 int acpi_device_update_power(struct acpi_device *device, int *state_p);
 bool acpi_bus_power_manageable(acpi_handle handle);
 void acpi_dev_power_up_children_with_adr(struct acpi_device *adev);
+u8 acpi_dev_power_state_for_wake(struct acpi_device *adev);
 int acpi_device_power_add_dependent(struct acpi_device *adev,
 				    struct device *dev);
 void acpi_device_power_remove_dependent(struct acpi_device *adev,
+21 -4
include/linux/dma/edma.h
···
 struct dw_edma;

 struct dw_edma_region {
-	phys_addr_t	paddr;
-	void __iomem	*vaddr;
+	u64		paddr;
+	union {
+		void		*mem;
+		void __iomem	*io;
+	} vaddr;
 	size_t		sz;
 };

+/**
+ * struct dw_edma_core_ops - platform-specific eDMA methods
+ * @irq_vector:		Get IRQ number of the passed eDMA channel. Note the
+ *			method accepts the channel id in the end-to-end
+ *			numbering with the eDMA write channels being placed
+ *			first in the row.
+ * @pci_address:	Get PCIe bus address corresponding to the passed CPU
+ *			address. Note there is no need to specify this
+ *			method if the address translation is performed by
+ *			the DW PCIe RP/EP controller handling the DW eDMA
+ *			device and DMA_BYPASS isn't set for all the outbound
+ *			iATU windows. That will be done by the controller
+ *			automatically.
+ */
 struct dw_edma_core_ops {
 	int (*irq_vector)(struct device *dev, unsigned int nr);
+	u64 (*pci_address)(struct device *dev, phys_addr_t cpu_addr);
 };

 enum dw_edma_map_format {
···
  */
 struct dw_edma_chip {
 	struct device		*dev;
-	int			id;
 	int			nr_irqs;
 	const struct dw_edma_core_ops	*ops;
 	u32			flags;
···
 };

 /* Export to the platform drivers */
-#if IS_ENABLED(CONFIG_DW_EDMA)
+#if IS_REACHABLE(CONFIG_DW_EDMA)
 int dw_edma_probe(struct dw_edma_chip *chip);
 int dw_edma_remove(struct dw_edma_chip *chip);
 #else
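A sketch of how a platform glue driver might fill in the new @pci_address method; the fixed CPU-to-PCIe offset and the IRQ mapping here are assumptions for illustration, not from this series:

	#include <linux/dma/edma.h>

	/*
	 * Hypothetical: CPU physical and PCIe bus addresses differ by a
	 * constant offset on this platform.
	 */
	#define DEMO_CPU_TO_PCI_OFFSET	0x80000000ULL

	static u64 demo_pci_address(struct device *dev, phys_addr_t cpu_addr)
	{
		/* Translate a CPU physical address to a PCIe bus address */
		return (u64)cpu_addr - DEMO_CPU_TO_PCI_OFFSET;
	}

	/* Hypothetical: eDMA channels map linearly onto IRQ vectors */
	static int demo_irq_vector(struct device *dev, unsigned int nr)
	{
		return nr;
	}

	static const struct dw_edma_core_ops demo_edma_ops = {
		.irq_vector	= demo_irq_vector,
		.pci_address	= demo_pci_address,
	};

Per the kernel-doc, @pci_address can be left NULL when the DW PCIe controller itself performs the translation through its outbound iATU windows.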
+1 -1
include/linux/dmaengine.h
···
  *	should be read (RX), if the source is memory this argument is
  *	ignored.
  * @dst_addr: this is the physical address where DMA slave data
- *	should be written (TX), if the source is memory this argument
+ *	should be written (TX), if the destination is memory this argument
  *	is ignored.
  * @src_addr_width: this is the width in bytes of the source (RX)
  *	register where DMA data shall be read. If the source
+2 -8
include/linux/pci-epc.h
···
  * struct pci_epc - represents the PCI EPC device
  * @dev: PCI EPC device
  * @pci_epf: list of endpoint functions present in this EPC device
+ * @list_lock: Mutex for protecting pci_epf list
  * @ops: function pointers for performing endpoint operations
  * @windows: array of address space of the endpoint controller
  * @mem: first window of the endpoint controller, which corresponds to
···
  * @group: configfs group representing the PCI EPC device
  * @lock: mutex to protect pci_epc ops
  * @function_num_map: bitmap to manage physical function number
- * @notifier: used to notify EPF of any EPC events (like linkup)
  */
 struct pci_epc {
 	struct device			dev;
 	struct list_head		pci_epf;
+	struct mutex			list_lock;
 	const struct pci_epc_ops	*ops;
 	struct pci_epc_mem		**windows;
 	struct pci_epc_mem		*mem;
···
 	/* mutex to protect against concurrent access of EP controller */
 	struct mutex			lock;
 	unsigned long			function_num_map;
-	struct atomic_notifier_head	notifier;
 };

 /**
···
 static inline void *epc_get_drvdata(struct pci_epc *epc)
 {
 	return dev_get_drvdata(&epc->dev);
-}
-
-static inline int
-pci_epc_register_notifier(struct pci_epc *epc, struct notifier_block *nb)
-{
-	return atomic_notifier_chain_register(&epc->notifier, nb);
 }

 struct pci_epc *
+12 -7
include/linux/pci-epf.h
···
 struct pci_epf;
 enum pci_epc_interface_type;

-enum pci_notify_event {
-	CORE_INIT,
-	LINK_UP,
-};
-
 enum pci_barno {
 	NO_BAR = -1,
 	BAR_0,
···
 	void	(*unbind)(struct pci_epf *epf);
 	struct config_group *(*add_cfs)(struct pci_epf *epf,
 					struct config_group *group);
+};
+
+/**
+ * struct pci_epc_event_ops - Callbacks for capturing the EPC events
+ * @core_init: Callback for the EPC initialization complete event
+ * @link_up: Callback for the EPC link up event
+ */
+struct pci_epc_event_ops {
+	int (*core_init)(struct pci_epf *epf);
+	int (*link_up)(struct pci_epf *epf);
 };

 /**
···
  * @epf_pf: the physical EPF device to which this virtual EPF device is bound
  * @driver: the EPF driver to which this EPF device is bound
  * @list: to add pci_epf as a list of PCI endpoint functions to pci_epc
- * @nb: notifier block to notify EPF of any EPC events (like linkup)
  * @lock: mutex to protect pci_epf_ops
  * @sec_epc: the secondary EPC device to which this EPF device is bound
  * @sec_epc_list: to add pci_epf as list of PCI endpoint functions to secondary
···
  * @is_vf: true - virtual function, false - physical function
  * @vfunction_num_map: bitmap to manage virtual function number
  * @pci_vepf: list of virtual endpoint functions associated with this function
+ * @event_ops: Callbacks for capturing the EPC events
  */
 struct pci_epf {
 	struct device		dev;
···
 	struct pci_epf		*epf_pf;
 	struct pci_epf_driver	*driver;
 	struct list_head	list;
-	struct notifier_block	nb;
 	/* mutex to protect against concurrent access of pci_epf_ops */
 	struct mutex		lock;
···
 	unsigned int		is_vf;
 	unsigned long		vfunction_num_map;
 	struct list_head	pci_vepf;
+	const struct pci_epc_event_ops *event_ops;
 };

 /**
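With the notifier chain gone, an endpoint function driver supplies these callbacks directly; a minimal sketch of a hypothetical EPF driver wiring them up in its probe (names are illustrative, not from this series):

	#include <linux/pci-epf.h>

	static int demo_epf_core_init(struct pci_epf *epf)
	{
		/* (Re)program BARs, interrupts, etc. once the EPC core is ready */
		dev_info(&epf->dev, "EPC core initialized\n");
		return 0;
	}

	static int demo_epf_link_up(struct pci_epf *epf)
	{
		/* Start the function's work once the PCIe link comes up */
		dev_info(&epf->dev, "PCIe link is up\n");
		return 0;
	}

	static const struct pci_epc_event_ops demo_event_ops = {
		.core_init	= demo_epf_core_init,
		.link_up	= demo_epf_link_up,
	};

	static int demo_epf_probe(struct pci_epf *epf)
	{
		/* The EPC core now invokes event_ops instead of a notifier */
		epf->event_ops = &demo_event_ops;
		return 0;
	}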
+8
include/linux/pci.h
···
 	void *release_data;
 	unsigned int ignore_reset_delay:1;	/* For entire hierarchy */
 	unsigned int no_ext_tags:1;		/* No Extended Tags */
+	unsigned int no_inc_mrrs:1;		/* No Increase MRRS */
 	unsigned int native_aer:1;		/* OS may use PCIe AER */
 	unsigned int native_pcie_hotplug:1;	/* OS may use PCIe hotplug */
 	unsigned int native_shpc_hotplug:1;	/* OS may use SHPC hotplug */
···
 #define PCIE_LINK_STATE_L1_2		BIT(4)
 #define PCIE_LINK_STATE_L1_1_PCIPM	BIT(5)
 #define PCIE_LINK_STATE_L1_2_PCIPM	BIT(6)
+#define PCIE_LINK_STATE_ALL		(PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1 |\
+					 PCIE_LINK_STATE_CLKPM | PCIE_LINK_STATE_L1_1 |\
+					 PCIE_LINK_STATE_L1_2 | PCIE_LINK_STATE_L1_1_PCIPM |\
+					 PCIE_LINK_STATE_L1_2_PCIPM)

 #ifdef CONFIG_PCIEASPM
 int pci_disable_link_state(struct pci_dev *pdev, int state);
 int pci_disable_link_state_locked(struct pci_dev *pdev, int state);
+int pci_enable_link_state(struct pci_dev *pdev, int state);
 void pcie_no_aspm(void);
 bool pcie_aspm_support_enabled(void);
 bool pcie_aspm_enabled(struct pci_dev *pdev);
···
 static inline int pci_disable_link_state(struct pci_dev *pdev, int state)
 { return 0; }
 static inline int pci_disable_link_state_locked(struct pci_dev *pdev, int state)
 { return 0; }
+static inline int pci_enable_link_state(struct pci_dev *pdev, int state)
+{ return 0; }
 static inline void pcie_no_aspm(void) { }
 static inline bool pcie_aspm_support_enabled(void) { return false; }
+2
include/linux/pci_ids.h
···
 #define PCI_DEVICE_ID_INTEL_VMD_9A0B	0x9a0b
 #define PCI_DEVICE_ID_INTEL_S21152BB	0xb152

+#define PCI_VENDOR_ID_WANGXUN		0x8088
+
 #define PCI_VENDOR_ID_SCALEMP		0x8686
 #define PCI_DEVICE_ID_SCALEMP_VSMP_CTL	0x1010