Merge tag 'pci-v4.20-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:

- Fix ASPM link_state teardown on removal (Lukas Wunner)

- Fix misleading _OSC ASPM message (Sinan Kaya)

- Make _OSC optional for PCI (Sinan Kaya)

- Don't initialize ASPM link state when ACPI_FADT_NO_ASPM is set
(Patrick Talbert)

- Remove x86 and arm64 node-local allocation for host bridge structures
(Punit Agrawal)

- Pay attention to device-specific _PXM node values (Jonathan Cameron)

- Support new Immediate Readiness bit (Felipe Balbi)

- Differentiate between pciehp surprise and safe removal (Lukas Wunner)

- Remove unnecessary pciehp includes (Lukas Wunner)

- Drop pciehp hotplug_slot_ops wrappers (Lukas Wunner)

- Tolerate PCIe Slot Presence Detect being hardwired to zero to
work around broken hardware, e.g., the Wilocity switch/wireless device
(Lukas Wunner)

- Unify pciehp controller & slot structs (Lukas Wunner)

- Constify hotplug_slot_ops (Lukas Wunner)

- Drop hotplug_slot_info (Lukas Wunner)

- Embed hotplug_slot struct into users instead of allocating it
separately (Lukas Wunner)

- Initialize PCIe port service drivers directly instead of relying on
initcall ordering (Keith Busch)

- Restore PCI config state after a slot reset (Keith Busch)

- Save/restore DPC config state along with other PCI config state
(Keith Busch)

- Reference count devices during AER handling to avoid a race with
concurrent hot removal (Keith Busch)

- If an Upstream Port reports ERR_FATAL, don't try to read the Port's
config space because it is probably unreachable (Keith Busch)

- During error handling, use slot-specific reset instead of secondary
bus reset to avoid link up/down issues on hotplug ports (Keith Busch)

- Restore previous AER/DPC handling that does not remove and
re-enumerate devices on ERR_FATAL (Keith Busch)

- Notify all drivers that may be affected by error recovery resets
(Keith Busch)

- Always generate error recovery uevents, even if a driver doesn't have
error callbacks (Keith Busch)

- Make PCIe link active reporting detection generic (Keith Busch)

- Support D3cold in PCIe hierarchies during system sleep and runtime,
including hotplug and Thunderbolt ports (Mika Westerberg)

- Handle hpmemsize/hpiosize kernel parameters uniformly, whether slots
are empty or occupied (Jon Derrick)

- Remove duplicated include from pci/pcie/err.c and unused variable
from cpqphp (YueHaibing)

- Remove driver pci_cleanup_aer_uncorrect_error_status() calls (Oza
Pawandeep)

- Uninline PCI bus accessors for better ftracing (Keith Busch)

- Remove unused AER Root Port .error_resume method (Keith Busch)

- Use kfifo in AER instead of a local version (Keith Busch)

- Use threaded IRQ in AER bottom half (Keith Busch)

- Use managed resources in AER core (Keith Busch)

- Reuse pcie_port_find_device() for AER injection (Keith Busch)

- Abstract AER interrupt handling to disconnect error injection (Keith
Busch)

- Refactor AER injection callbacks to simplify future improvements
(Keith Busch)

- Remove unused Netronome NFP32xx Device IDs (Jakub Kicinski)

- Use bitmap_zalloc() for dma_alias_mask (Andy Shevchenko)

- Add switch fall-through annotations (Gustavo A. R. Silva)

- Remove unused Switchtec quirk variable (Joshua Abraham)

- Fix pci.c kernel-doc warning (Randy Dunlap)

- Remove trivial PCI wrappers for DMA APIs (Christoph Hellwig)

- Add Intel GPU device IDs to spurious interrupt quirk (Bin Meng)

- Run Switchtec DMA aliasing quirk only on NTB endpoints to avoid
useless dmesg errors (Logan Gunthorpe)

- Update Switchtec NTB documentation (Wesley Yung)

- Remove redundant "default n" from Kconfig (Bartlomiej Zolnierkiewicz)

- Avoid panic when drivers enable MSI/MSI-X twice (Tonghao Zhang)

- Add PCI support for peer-to-peer DMA (Logan Gunthorpe)

- Add sysfs group for PCI peer-to-peer memory statistics (Logan
Gunthorpe)

- Add PCI peer-to-peer DMA scatterlist mapping interface (Logan
Gunthorpe)

- Add PCI configfs/sysfs helpers for use by peer-to-peer users (Logan
Gunthorpe)

- Add PCI peer-to-peer DMA driver writer's documentation (Logan
Gunthorpe)

- Add block layer flag to indicate driver support for PCI peer-to-peer
DMA (Logan Gunthorpe)

- Map Infiniband scatterlists for peer-to-peer DMA if they contain P2P
memory (Logan Gunthorpe)

- Register nvme-pci CMB buffer as PCI peer-to-peer memory (Logan
Gunthorpe)

- Add nvme-pci support for PCI peer-to-peer memory in requests (Logan
Gunthorpe)

- Use PCI peer-to-peer memory in nvme (Stephen Bates, Steve Wise,
Christoph Hellwig, Logan Gunthorpe)

- Cache VF config space size to optimize enumeration of many VFs
(KarimAllah Ahmed)

- Remove unnecessary <linux/pci-ats.h> include (Bjorn Helgaas)

- Fix VMD AERSID quirk Device ID matching (Jon Derrick)

- Fix Cadence PHY handling during probe (Alan Douglas)

- Signal Cadence Endpoint interrupts via AXI region 0 instead of last
region (Alan Douglas)

- Write Cadence Endpoint MSI interrupts with 32 bits of data (Alan
Douglas)

- Remove redundant controller tests for "device_type == pci" (Rob
Herring)

- Document R-Car E3 (R8A77990) bindings (Tho Vu)

- Add device tree support for R-Car r8a7744 (Biju Das)

- Drop unused mvebu PCIe capability code (Thomas Petazzoni)

- Add shared PCI bridge emulation code (Thomas Petazzoni)

- Convert mvebu to use shared PCI bridge emulation (Thomas Petazzoni)

- Add aardvark Root Port emulation (Thomas Petazzoni)

- Support 100MHz/200MHz refclocks for i.MX6 (Lucas Stach)

- Add initial power management for i.MX7 (Leonard Crestez)

- Add PME_Turn_Off support for i.MX7 (Leonard Crestez)

- Fix qcom runtime power management error handling (Bjorn Andersson)

- Update TI dra7xx unaligned access errata workaround for host mode as
well as endpoint mode (Vignesh R)

- Fix kirin section mismatch warning (Nathan Chancellor)

- Remove iproc PAXC slot check to allow VF support (Jitendra Bhivare)

- Quirk Keystone K2G to limit MRRS to 256 (Kishon Vijay Abraham I)

- Update Keystone to use MRRS quirk for host bridge instead of open
coding (Kishon Vijay Abraham I)

- Refactor Keystone link establishment (Kishon Vijay Abraham I)

- Simplify and speed up Keystone link training (Kishon Vijay Abraham I)

- Remove unused Keystone host_init argument (Kishon Vijay Abraham I)

- Merge Keystone driver files into one (Kishon Vijay Abraham I)

- Remove redundant Keystone platform_set_drvdata() (Kishon Vijay
Abraham I)

- Rename Keystone functions for uniformity (Kishon Vijay Abraham I)

- Add Keystone device control module DT binding (Kishon Vijay Abraham
I)

- Use SYSCON API to get Keystone control module device IDs (Kishon
Vijay Abraham I)

- Clean up Keystone PHY handling (Kishon Vijay Abraham I)

- Use runtime PM APIs to enable Keystone clock (Kishon Vijay Abraham I)

- Clean up Keystone config space access checks (Kishon Vijay Abraham I)

- Get Keystone outbound window count from DT (Kishon Vijay Abraham I)

- Clean up Keystone outbound window configuration (Kishon Vijay Abraham
I)

- Clean up Keystone DBI setup (Kishon Vijay Abraham I)

- Clean up Keystone ks_pcie_link_up() (Kishon Vijay Abraham I)

- Fix Keystone IRQ status checking (Kishon Vijay Abraham I)

- Add debug messages for all Keystone errors (Kishon Vijay Abraham I)

- Clean up Keystone includes and macros (Kishon Vijay Abraham I)

- Fix Mediatek unchecked return value from devm_pci_remap_iospace()
(Gustavo A. R. Silva)

- Fix Mediatek endpoint/port matching logic (Honghui Zhang)

- Change Mediatek Root Port Class Code to PCI_CLASS_BRIDGE_PCI (Honghui
Zhang)

- Remove redundant Mediatek PM domain check (Honghui Zhang)

- Convert Mediatek to pci_host_probe() (Honghui Zhang)

- Fix Mediatek MSI enablement (Honghui Zhang)

- Add Mediatek system PM support for MT2712 and MT7622 (Honghui Zhang)

- Add Mediatek loadable module support (Honghui Zhang)

- Detach VMD resources after stopping root bus to prevent orphan
resources (Jon Derrick)

- Convert pcitest build process to that used by other tools (iio, perf,
etc) (Gustavo Pimentel)

* tag 'pci-v4.20-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (140 commits)
PCI/AER: Refactor error injection fallbacks
PCI/AER: Abstract AER interrupt handling
PCI/AER: Reuse existing pcie_port_find_device() interface
PCI/AER: Use managed resource allocations
PCI: pcie: Remove redundant 'default n' from Kconfig
PCI: aardvark: Implement emulated root PCI bridge config space
PCI: mvebu: Convert to PCI emulated bridge config space
PCI: mvebu: Drop unused PCI express capability code
PCI: Introduce PCI bridge emulated config space common logic
PCI: vmd: Detach resources after stopping root bus
nvmet: Optionally use PCI P2P memory
nvmet: Introduce helper functions to allocate and free request SGLs
nvme-pci: Add support for P2P memory in requests
nvme-pci: Use PCI p2pmem subsystem to manage the CMB
IB/core: Ensure we map P2P memory correctly in rdma_rw_ctx_[init|destroy]()
block: Add PCI P2P flag for request queue
PCI/P2PDMA: Add P2P DMA driver writer's documentation
docs-rst: Add a new directory for PCI documentation
PCI/P2PDMA: Introduce configfs/sysfs enable attribute helpers
PCI/P2PDMA: Add PCI p2pmem DMA mappings to adjust the bus offset
...

+5009 -3119
+24
Documentation/ABI/testing/sysfs-bus-pci
··· 323 323 324 324 This is similar to /sys/bus/pci/drivers_autoprobe, but 325 325 affects only the VFs associated with a specific PF. 326 + 327 + What: /sys/bus/pci/devices/.../p2pmem/size 328 + Date: November 2017 329 + Contact: Logan Gunthorpe <logang@deltatee.com> 330 + Description: 331 + If the device has any Peer-to-Peer memory registered, this 332 + file contains the total amount of memory that the device 333 + provides (in decimal). 334 + 335 + What: /sys/bus/pci/devices/.../p2pmem/available 336 + Date: November 2017 337 + Contact: Logan Gunthorpe <logang@deltatee.com> 338 + Description: 339 + If the device has any Peer-to-Peer memory registered, this 340 + file contains the amount of memory that has not been 341 + allocated (in decimal). 342 + 343 + What: /sys/bus/pci/devices/.../p2pmem/published 344 + Date: November 2017 345 + Contact: Logan Gunthorpe <logang@deltatee.com> 346 + Description: 347 + If the device has any Peer-to-Peer memory registered, this 348 + file contains a '1' if the memory has been published for 349 + use outside the driver that owns the device.
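
These attributes are plain decimal (or 0/1) files, so they can be sampled directly from userspace; a minimal sketch, where the device address 0000:01:00.0 is only a placeholder:

    #include <stdio.h>

    int main(void)
    {
        /* placeholder BDF; each p2pmem file holds a decimal byte count */
        const char *path =
            "/sys/bus/pci/devices/0000:01:00.0/p2pmem/available";
        unsigned long long avail;
        FILE *f = fopen(path, "r");

        if (!f || fscanf(f, "%llu", &avail) != 1)
            return 1;
        printf("unallocated p2pmem: %llu bytes\n", avail);
        fclose(f);
        return 0;
    }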
+11 -8
Documentation/PCI/endpoint/pci-test-howto.txt
··· 99 99 2.2 Using Endpoint Test function Device 100 100 101 101 pcitest.sh added in tools/pci/ can be used to run all the default PCI endpoint 102 - tests. Before pcitest.sh can be used pcitest.c should be compiled using the 103 - following commands. 102 + tests. To compile this tool the following commands should be used: 104 103 105 - cd <kernel-dir> 106 - make headers_install ARCH=arm 107 - arm-linux-gnueabihf-gcc -Iusr/include tools/pci/pcitest.c -o pcitest 108 - cp pcitest <rootfs>/usr/sbin/ 109 - cp tools/pci/pcitest.sh <rootfs> 104 + # cd <kernel-dir> 105 + # make -C tools/pci 106 + 107 + or if you desire to compile and install in your system: 108 + 109 + # cd <kernel-dir> 110 + # make -C tools/pci install 111 + 112 + The tool and script will be located in <rootfs>/usr/bin/ 110 113 111 114 2.2.1 pcitest.sh Output 112 - # ./pcitest.sh 115 + # pcitest.sh 113 116 BAR tests 114 117 115 118 BAR0: OKAY
+10 -25
Documentation/PCI/pci-error-recovery.txt
··· 110 110 event will be platform-dependent, but will follow the general 111 111 sequence described below. 112 112 113 - STEP 0: Error Event: ERR_NONFATAL 113 + STEP 0: Error Event 114 114 ------------------- 115 115 A PCI bus error is detected by the PCI hardware. On powerpc, the slot 116 116 is isolated, in that all I/O is blocked: all reads return 0xffffffff, ··· 228 228 If any driver returned PCI_ERS_RESULT_NEED_RESET, then the platform 229 229 proceeds to STEP 4 (Slot Reset) 230 230 231 - STEP 3: Slot Reset 231 + STEP 3: Link Reset 232 + ------------------ 233 + The platform resets the link. This is a PCI-Express specific step 234 + and is done whenever a fatal error has been detected that can be 235 + "solved" by resetting the link. 236 + 237 + STEP 4: Slot Reset 232 238 ------------------ 233 239 234 240 In response to a return value of PCI_ERS_RESULT_NEED_RESET, the ··· 320 314 >>> However, it probably should. 321 315 322 316 323 - STEP 4: Resume Operations 317 + STEP 5: Resume Operations 324 318 ------------------------- 325 319 The platform will call the resume() callback on all affected device 326 320 drivers if all drivers on the segment have returned ··· 332 326 At this point, if a new error happens, the platform will restart 333 327 a new error recovery sequence. 334 328 335 - STEP 5: Permanent Failure 329 + STEP 6: Permanent Failure 336 330 ------------------------- 337 331 A "permanent failure" has occurred, and the platform cannot recover 338 332 the device. The platform will call error_detected() with a ··· 355 349 for additional detail on real-life experience of the causes of 356 350 software errors. 357 351 358 - STEP 0: Error Event: ERR_FATAL 359 - ------------------- 360 - PCI bus error is detected by the PCI hardware. On powerpc, the slot is 361 - isolated, in that all I/O is blocked: all reads return 0xffffffff, all 362 - writes are ignored. 363 - 364 - STEP 1: Remove devices 365 - -------------------- 366 - Platform removes the devices depending on the error agent, it could be 367 - this port for all subordinates or upstream component (likely downstream 368 - port) 369 - 370 - STEP 2: Reset link 371 - -------------------- 372 - The platform resets the link. This is a PCI-Express specific step and is 373 - done whenever a fatal error has been detected that can be "solved" by 374 - resetting the link. 375 - 376 - STEP 3: Re-enumerate the devices 377 - -------------------- 378 - Initiates the re-enumeration. 379 352 380 353 Conclusion; General Remarks 381 354 ---------------------------
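
For reference, the sequence above drives the struct pci_error_handlers callbacks a driver registers (several appear in the driver diffs below); a minimal sketch, with a hypothetical driver name foo and comments keyed to the renumbered steps:

    #include <linux/pci.h>

    /* STEP 1: platform notifies the driver; quiesce and pick a result */
    static pci_ers_result_t foo_error_detected(struct pci_dev *pdev,
                                               enum pci_channel_state state)
    {
        if (state == pci_channel_io_perm_failure)
            return PCI_ERS_RESULT_DISCONNECT;   /* STEP 6 */
        /* stop I/O; config space may be unreachable while frozen */
        return PCI_ERS_RESULT_NEED_RESET;       /* request STEP 4 */
    }

    /* STEP 4: slot has been reset; re-initialize the device */
    static pci_ers_result_t foo_slot_reset(struct pci_dev *pdev)
    {
        if (pci_enable_device(pdev))
            return PCI_ERS_RESULT_DISCONNECT;
        pci_set_master(pdev);
        pci_restore_state(pdev);
        return PCI_ERS_RESULT_RECOVERED;
    }

    /* STEP 5: all drivers recovered; restart normal I/O */
    static void foo_resume(struct pci_dev *pdev)
    {
        /* requeue and restart traffic */
    }

    static const struct pci_error_handlers foo_err_handler = {
        .error_detected = foo_error_detected,
        .slot_reset     = foo_slot_reset,
        .resume         = foo_resume,
    };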
+1
Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
··· 50 50 - reset-names: Must contain the following entires: 51 51 - "pciephy" 52 52 - "apps" 53 + - "turnoff" 53 54 54 55 Example: 55 56
+3
Documentation/devicetree/bindings/pci/pci-keystone.txt
··· 19 19 interrupt-cells: should be set to 1 20 20 interrupts: GIC interrupt lines connected to PCI MSI interrupt lines 21 21 22 + ti,syscon-pcie-id : phandle to the device control module required to set device 23 + id and vendor id. 24 + 22 25 Example: 23 26 pcie_msi_intc: msi-interrupt-controller { 24 27 interrupt-controller;
+1
Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt
··· 7 7 8 8 Required properties: 9 9 - compatible: "renesas,pci-r8a7743" for the R8A7743 SoC; 10 + "renesas,pci-r8a7744" for the R8A7744 SoC; 10 11 "renesas,pci-r8a7745" for the R8A7745 SoC; 11 12 "renesas,pci-r8a7790" for the R8A7790 SoC; 12 13 "renesas,pci-r8a7791" for the R8A7791 SoC;
+2
Documentation/devicetree/bindings/pci/rcar-pci.txt
··· 2 2 3 3 Required properties: 4 4 compatible: "renesas,pcie-r8a7743" for the R8A7743 SoC; 5 + "renesas,pcie-r8a7744" for the R8A7744 SoC; 5 6 "renesas,pcie-r8a7779" for the R8A7779 SoC; 6 7 "renesas,pcie-r8a7790" for the R8A7790 SoC; 7 8 "renesas,pcie-r8a7791" for the R8A7791 SoC; ··· 10 9 "renesas,pcie-r8a7795" for the R8A7795 SoC; 11 10 "renesas,pcie-r8a7796" for the R8A7796 SoC; 12 11 "renesas,pcie-r8a77980" for the R8A77980 SoC; 12 + "renesas,pcie-r8a77990" for the R8A77990 SoC; 13 13 "renesas,pcie-rcar-gen2" for a generic R-Car Gen2 or 14 14 RZ/G1 compatible device. 15 15 "renesas,pcie-rcar-gen3" for a generic R-Car Gen3 compatible device.
+5
Documentation/devicetree/bindings/pci/ti-pci.txt
··· 26 26 ranges, 27 27 interrupt-map-mask, 28 28 interrupt-map : as specified in ../designware-pcie.txt 29 + - ti,syscon-unaligned-access: phandle to the syscon DT node. The 1st argument 30 + should contain the register offset within syscon 31 + and the 2nd argument should contain the bit field 32 + for setting the bit to enable unaligned 33 + access. 29 34 30 35 DEVICE MODE 31 36 ===========
+1 -1
Documentation/driver-api/index.rst
··· 30 30 input 31 31 usb/index 32 32 firewire 33 - pci 33 + pci/index 34 34 spi 35 35 i2c 36 36 hsi
Documentation/driver-api/pci.rst → Documentation/driver-api/pci/pci.rst
+22
Documentation/driver-api/pci/index.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ============================================ 4 + The Linux PCI driver implementer's API guide 5 + ============================================ 6 + 7 + .. class:: toc-title 8 + 9 + Table of contents 10 + 11 + .. toctree:: 12 + :maxdepth: 2 13 + 14 + pci 15 + p2pdma 16 + 17 + .. only:: subproject and html 18 + 19 + Indices 20 + ======= 21 + 22 + * :ref:`genindex`
+145
Documentation/driver-api/pci/p2pdma.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0
2 +
3 + ============================
4 + PCI Peer-to-Peer DMA Support
5 + ============================
6 +
7 + The PCI bus has pretty decent support for performing DMA transfers
8 + between two devices on the bus. This type of transaction is henceforth
9 + called Peer-to-Peer (or P2P). However, there are a number of issues that
10 + make P2P transactions tricky to do in a perfectly safe way.
11 +
12 + One of the biggest issues is that PCI doesn't require forwarding
13 + transactions between hierarchy domains, and in PCIe, each Root Port
14 + defines a separate hierarchy domain. To make things worse, there is no
15 + simple way to determine if a given Root Complex supports this or not.
16 + (See PCIe r4.0, sec 1.3.1). Therefore, as of this writing, the kernel
17 + only supports doing P2P when the endpoints involved are all behind the
18 + same PCI bridge, as such devices are all in the same PCI hierarchy
19 + domain, and the spec guarantees that all transactions within the
20 + hierarchy will be routable, but it does not require routing
21 + between hierarchies.
22 +
23 + The second issue is that to make use of existing interfaces in Linux,
24 + memory that is used for P2P transactions needs to be backed by struct
25 + pages. However, PCI BARs are not typically cache coherent so there are
26 + a few corner case gotchas with these pages so developers need to
27 + be careful about what they do with them.
28 +
29 +
30 + Driver Writer's Guide
31 + =====================
32 +
33 + In a given P2P implementation there may be three or more different
34 + types of kernel drivers in play:
35 +
36 + * Provider - A driver which provides or publishes P2P resources like
37 + memory or doorbell registers to other drivers.
38 + * Client - A driver which makes use of a resource by setting up a
39 + DMA transaction to or from it.
40 + * Orchestrator - A driver which orchestrates the flow of data between
41 + clients and providers.
42 +
43 + In many cases there could be overlap between these three types (i.e.,
44 + it may be typical for a driver to be both a provider and a client).
45 +
46 + For example, in the NVMe Target Copy Offload implementation:
47 +
48 + * The NVMe PCI driver is both a client, provider and orchestrator
49 + in that it exposes any CMB (Controller Memory Buffer) as a P2P memory
50 + resource (provider), it accepts P2P memory pages as buffers in requests
51 + to be used directly (client) and it can also make use of the CMB as
52 + submission queue entries (orchestrator).
53 + * The RDMA driver is a client in this arrangement so that an RNIC
54 + can DMA directly to the memory exposed by the NVMe device.
55 + * The NVMe Target driver (nvmet) can orchestrate the data from the RNIC
56 + to the P2P memory (CMB) and then to the NVMe device (and vice versa).
57 +
58 + This is currently the only arrangement supported by the kernel but
59 + one could imagine slight tweaks to this that would allow for the same
60 + functionality. For example, if a specific RNIC added a BAR with some
61 + memory behind it, its driver could add support as a P2P provider and
62 + then the NVMe Target could use the RNIC's memory instead of the CMB
63 + in cases where the NVMe cards in use do not have CMB support.
64 +
65 +
66 + Provider Drivers
67 + ----------------
68 +
69 + A provider simply needs to register a BAR (or a portion of a BAR)
70 + as a P2P DMA resource using :c:func:`pci_p2pdma_add_resource()`.
71 + This will register struct pages for all the specified memory.
72 +
73 + After that it may optionally publish all of its resources as
74 + P2P memory using :c:func:`pci_p2pmem_publish()`. This will allow
75 + any orchestrator drivers to find and use the memory. When marked in
76 + this way, the resource must be regular memory with no side effects.
77 +
78 + For the time being this is fairly rudimentary in that all resources
79 + are typically going to be P2P memory. Future work will likely expand
80 + this to include other types of resources like doorbells.
81 +
82 +
83 + Client Drivers
84 + --------------
85 +
86 + A client driver typically only has to conditionally change its DMA map
87 + routine to use the mapping function :c:func:`pci_p2pdma_map_sg()` instead
88 + of the usual :c:func:`dma_map_sg()` function. Memory mapped in this
89 + way does not need to be unmapped.
90 +
91 + The client may also, optionally, make use of
92 + :c:func:`is_pci_p2pdma_page()` to determine when to use the P2P mapping
93 + functions and when to use the regular mapping functions. In some
94 + situations, it may be more appropriate to use a flag to indicate a
95 + given request is P2P memory and map appropriately. It is important to
96 + ensure that struct pages that back P2P memory stay out of code that
97 + does not have support for them as other code may treat the pages as
98 + regular memory which may not be appropriate.
99 +
100 +
101 + Orchestrator Drivers
102 + --------------------
103 +
104 + The first task an orchestrator driver must do is compile a list of
105 + all client devices that will be involved in a given transaction. For
106 + example, the NVMe Target driver creates a list including the namespace
107 + block device and the RNIC in use. If the orchestrator has access to
108 + a specific P2P provider to use it may check compatibility using
109 + :c:func:`pci_p2pdma_distance()` otherwise it may find a memory provider
110 + that's compatible with all clients using :c:func:`pci_p2pmem_find()`.
111 + If more than one provider is supported, the one nearest to all the clients will
112 + be chosen first. If more than one provider is an equal distance away, the
113 + one returned will be chosen at random (it is not arbitrary but
114 + truly random). This function returns the PCI device to use for the provider
115 + with a reference taken and therefore when it's no longer needed it should be
116 + returned with pci_dev_put().
117 +
118 + Once a provider is selected, the orchestrator can then use
119 + :c:func:`pci_alloc_p2pmem()` and :c:func:`pci_free_p2pmem()` to
120 + allocate P2P memory from the provider. :c:func:`pci_p2pmem_alloc_sgl()`
121 + and :c:func:`pci_p2pmem_free_sgl()` are convenience functions for
122 + allocating scatter-gather lists with P2P memory.
123 +
124 + Struct Page Caveats
125 + -------------------
126 +
127 + Driver writers should be very careful about not passing these special
128 + struct pages to code that isn't prepared for it. At this time, the kernel
129 + interfaces do not have any checks for ensuring this. This obviously
130 + precludes passing these pages to userspace.
131 +
132 + P2P memory is also technically IO memory but should never have any side
133 + effects behind it. Thus, the order of loads and stores should not be important
134 + and ioreadX(), iowriteX() and friends should not be necessary.
135 + However, as the memory is not cache coherent, if access ever needs to
136 + be protected by a spinlock then :c:func:`mmiowb()` must be used before
137 + unlocking the lock.
(See ACQUIRES VS I/O ACCESSES in 138 + Documentation/memory-barriers.txt) 139 + 140 + 141 + P2P DMA Support Library 142 + ======================= 143 + 144 + .. kernel-doc:: drivers/pci/p2pdma.c 145 + :export:
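
A condensed sketch of the client/orchestrator flow the new documentation describes, using only the interfaces it names; the function and the client device variables are hypothetical, and error handling is trimmed to the essentials:

    #include <linux/dma-direction.h>
    #include <linux/kernel.h>
    #include <linux/pci.h>
    #include <linux/pci-p2pdma.h>
    #include <linux/scatterlist.h>
    #include <linux/sizes.h>

    static int demo_p2p_transfer(struct device *client_a,
                                 struct device *client_b)
    {
        struct device *clients[] = { client_a, client_b };
        struct pci_dev *provider;
        struct scatterlist *sgl;
        unsigned int nents;

        /* Find the provider nearest to both clients; takes a reference. */
        provider = pci_p2pmem_find_many(clients, ARRAY_SIZE(clients));
        if (!provider)
            return -ENODEV;

        /* Convenience helper: scatterlist backed by the provider's p2pmem. */
        sgl = pci_p2pmem_alloc_sgl(provider, &nents, SZ_4K);
        if (!sgl) {
            pci_dev_put(provider);
            return -ENOMEM;
        }

        /* Clients map P2P pages with pci_p2pdma_map_sg(), not dma_map_sg(). */
        if (!pci_p2pdma_map_sg(client_a, sgl, nents, DMA_BIDIRECTIONAL)) {
            pci_p2pmem_free_sgl(provider, sgl);
            pci_dev_put(provider);
            return -ENOMEM;
        }

        /* ... drive client_a/client_b DMA through the buffer here ... */

        /* P2P mappings need no unmap; free the memory and drop the ref. */
        pci_p2pmem_free_sgl(provider, sgl);
        pci_dev_put(provider);
        return 0;
    }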
+20 -10
Documentation/switchtec.txt
··· 23 23 through the Memory-mapped Remote Procedure Call (MRPC) interface. 24 24 Commands are submitted to the interface with a 4-byte command 25 25 identifier and up to 1KB of command specific data. The firmware will 26 - respond with a 4 bytes return code and up to 1KB of command specific 26 + respond with a 4-byte return code and up to 1KB of command-specific 27 27 data. The interface only processes a single command at a time. 28 28 29 29 ··· 36 36 The char device has the following semantics: 37 37 38 38 * A write must consist of at least 4 bytes and no more than 1028 bytes. 39 - The first four bytes will be interpreted as the command to run and 40 - the remainder will be used as the input data. A write will send the 39 + The first 4 bytes will be interpreted as the Command ID and the 40 + remainder will be used as the input data. A write will send the 41 41 command to the firmware to begin processing. 42 42 43 43 * Each write must be followed by exactly one read. Any double write will ··· 45 45 produce an error. 46 46 47 47 * A read will block until the firmware completes the command and return 48 - the four bytes of status plus up to 1024 bytes of output data. (The 49 - length will be specified by the size parameter of the read call -- 50 - reading less than 4 bytes will produce an error. 48 + the 4-byte Command Return Value plus up to 1024 bytes of output 49 + data. (The length will be specified by the size parameter of the read 50 + call -- reading less than 4 bytes will produce an error.) 51 51 52 52 * The poll call will also be supported for userspace applications that 53 53 need to do other things while waiting for the command to complete. ··· 83 83 Non-Transparent Bridge (NTB) Driver 84 84 =================================== 85 85 86 - An NTB driver is provided for the switchtec hardware in switchtec_ntb. 87 - Currently, it only supports switches configured with exactly 2 88 - partitions. It also requires the following configuration settings: 86 + An NTB hardware driver is provided for the Switchtec hardware in 87 + ntb_hw_switchtec. Currently, it only supports switches configured with 88 + exactly 2 NT partitions and zero or more non-NT partitions. It also requires 89 + the following configuration settings: 89 90 90 - * Both partitions must be able to access each other's GAS spaces. 91 + * Both NT partitions must be able to access each other's GAS spaces. 91 92 Thus, the bits in the GAS Access Vector under Management Settings 92 93 must be set to support this. 94 + * Kernel configuration MUST include support for NTB (CONFIG_NTB needs 95 + to be set) 96 + 97 + NT EP BAR 2 will be dynamically configured as a Direct Window, and 98 + the configuration file does not need to configure it explicitly. 99 + 100 + Please refer to Documentation/ntb.txt in Linux source tree for an overall 101 + understanding of the Linux NTB stack. ntb_hw_switchtec works as an NTB 102 + Hardware Driver in this stack.
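
The MRPC semantics above amount to one write (4-byte Command ID plus input data) followed by exactly one read (4-byte return code plus output data). A minimal userspace sketch, assuming the usual /dev/switchtec0 node name and a hypothetical command ID:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        uint8_t buf[1028];       /* 4-byte ID + up to 1KB of data */
        uint32_t cmd_id = 0x01;  /* hypothetical command identifier */
        uint32_t ret_code;
        int fd = open("/dev/switchtec0", O_RDWR);

        if (fd < 0)
            return 1;

        memcpy(buf, &cmd_id, 4); /* first 4 bytes: command to run */
        /* remainder of buf would carry command-specific input data */
        if (write(fd, buf, 4) != 4)
            return 1;

        /* blocks until the firmware completes; reads < 4 bytes error */
        if (read(fd, buf, sizeof(buf)) < 4)
            return 1;
        memcpy(&ret_code, buf, 4);
        printf("MRPC return code: 0x%08x\n", ret_code);

        close(fd);
        return 0;
    }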
+1 -1
MAINTAINERS
··· 11299 11299 L: linux-pci@vger.kernel.org 11300 11300 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 11301 11301 S: Maintained 11302 - F: drivers/pci/controller/dwc/*keystone* 11302 + F: drivers/pci/controller/dwc/pci-keystone.c 11303 11303 11304 11304 PCI ENDPOINT SUBSYSTEM 11305 11305 M: Kishon Vijay Abraham I <kishon@ti.com>
+3 -2
arch/arm/boot/dts/imx7d.dtsi
··· 146 146 fsl,max-link-speed = <2>; 147 147 power-domains = <&pgc_pcie_phy>; 148 148 resets = <&src IMX7_RESET_PCIEPHY>, 149 - <&src IMX7_RESET_PCIE_CTRL_APPS_EN>; 150 - reset-names = "pciephy", "apps"; 149 + <&src IMX7_RESET_PCIE_CTRL_APPS_EN>, 150 + <&src IMX7_RESET_PCIE_CTRL_APPS_TURNOFF>; 151 + reset-names = "pciephy", "apps", "turnoff"; 151 152 status = "disabled"; 152 153 }; 153 154 };
+2 -3
arch/arm64/kernel/pci.c
··· 165 165 /* Interface called from ACPI code to setup PCI host controller */ 166 166 struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root) 167 167 { 168 - int node = acpi_get_node(root->device->handle); 169 168 struct acpi_pci_generic_root_info *ri; 170 169 struct pci_bus *bus, *child; 171 170 struct acpi_pci_root_ops *root_ops; 172 171 173 - ri = kzalloc_node(sizeof(*ri), GFP_KERNEL, node); 172 + ri = kzalloc(sizeof(*ri), GFP_KERNEL); 174 173 if (!ri) 175 174 return NULL; 176 175 177 - root_ops = kzalloc_node(sizeof(*root_ops), GFP_KERNEL, node); 176 + root_ops = kzalloc(sizeof(*root_ops), GFP_KERNEL); 178 177 if (!root_ops) { 179 178 kfree(ri); 180 179 return NULL;
+1 -1
arch/powerpc/include/asm/pnv-pci.h
··· 54 54 55 55 struct pnv_php_slot { 56 56 struct hotplug_slot slot; 57 - struct hotplug_slot_info slot_info; 58 57 uint64_t id; 59 58 char *name; 60 59 int slot_no; ··· 71 72 struct pci_dev *pdev; 72 73 struct pci_bus *bus; 73 74 bool power_state_check; 75 + u8 attention_state; 74 76 void *fdt; 75 77 void *dt; 76 78 struct of_changeset ocs;
+1 -1
arch/x86/pci/acpi.c
··· 356 356 } else { 357 357 struct pci_root_info *info; 358 358 359 - info = kzalloc_node(sizeof(*info), GFP_KERNEL, node); 359 + info = kzalloc(sizeof(*info), GFP_KERNEL); 360 360 if (!info) 361 361 dev_err(&root->device->dev, 362 362 "pci_bus %04x:%02x: ignored (out of memory)\n",
+3 -9
arch/x86/pci/fixup.c
··· 629 629 static void quirk_no_aersid(struct pci_dev *pdev) 630 630 { 631 631 /* VMD Domain */ 632 - if (is_vmd(pdev->bus)) 632 + if (is_vmd(pdev->bus) && pci_is_root_bus(pdev->bus)) 633 633 pdev->bus->bus_flags |= PCI_BUS_FLAGS_NO_AERSID; 634 634 } 635 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2030, quirk_no_aersid); 636 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2031, quirk_no_aersid); 637 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2032, quirk_no_aersid); 638 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2033, quirk_no_aersid); 639 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334a, quirk_no_aersid); 640 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334b, quirk_no_aersid); 641 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334c, quirk_no_aersid); 642 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x334d, quirk_no_aersid); 635 + DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID, 636 + PCI_CLASS_BRIDGE_PCI, 8, quirk_no_aersid); 643 637 644 638 #ifdef CONFIG_PHYS_ADDR_T_64BIT 645 639
+13 -4
drivers/acpi/pci_root.c
··· 421 421 } 422 422 EXPORT_SYMBOL(acpi_pci_osc_control_set); 423 423 424 - static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm) 424 + static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm, 425 + bool is_pcie) 425 426 { 426 427 u32 support, control, requested; 427 428 acpi_status status; ··· 456 455 decode_osc_support(root, "OS supports", support); 457 456 status = acpi_pci_osc_support(root, support); 458 457 if (ACPI_FAILURE(status)) { 459 - dev_info(&device->dev, "_OSC failed (%s); disabling ASPM\n", 460 - acpi_format_exception(status)); 461 458 *no_aspm = 1; 459 + 460 + /* _OSC is optional for PCI host bridges */ 461 + if ((status == AE_NOT_FOUND) && !is_pcie) 462 + return; 463 + 464 + dev_info(&device->dev, "_OSC failed (%s)%s\n", 465 + acpi_format_exception(status), 466 + pcie_aspm_support_enabled() ? "; disabling ASPM" : ""); 462 467 return; 463 468 } 464 469 ··· 540 533 acpi_handle handle = device->handle; 541 534 int no_aspm = 0; 542 535 bool hotadd = system_state == SYSTEM_RUNNING; 536 + bool is_pcie; 543 537 544 538 root = kzalloc(sizeof(struct acpi_pci_root), GFP_KERNEL); 545 539 if (!root) ··· 598 590 599 591 root->mcfg_addr = acpi_pci_root_get_mcfg_addr(handle); 600 592 601 - negotiate_os_control(root, &no_aspm); 593 + is_pcie = strcmp(acpi_device_hid(device), "PNP0A08") == 0; 594 + negotiate_os_control(root, &no_aspm, is_pcie); 602 595 603 596 /* 604 597 * TBD: Need PCI interface for enumeration/configuration of roots.
+71 -26
drivers/acpi/property.c
··· 24 24 acpi_object_type type,
25 25 const union acpi_object **obj);
26 26
27 - /* ACPI _DSD device properties GUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */
28 - static const guid_t prp_guid =
27 + static const guid_t prp_guids[] = {
28 + /* ACPI _DSD device properties GUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */
29 29 GUID_INIT(0xdaffd814, 0x6eba, 0x4d8c,
30 - 0x8a, 0x91, 0xbc, 0x9b, 0xbf, 0x4a, 0xa3, 0x01);
31 - /* ACPI _DSD data subnodes GUID: dbb8e3e6-5886-4ba6-8795-1319f52a966b */
30 + 0x8a, 0x91, 0xbc, 0x9b, 0xbf, 0x4a, 0xa3, 0x01),
31 + /* Hotplug in D3 GUID: 6211e2c0-58a3-4af3-90e1-927a4e0c55a4 */
32 + GUID_INIT(0x6211e2c0, 0x58a3, 0x4af3,
33 + 0x90, 0xe1, 0x92, 0x7a, 0x4e, 0x0c, 0x55, 0xa4),
34 + };
35 +
32 36 static const guid_t ads_guid =
33 37 GUID_INIT(0xdbb8e3e6, 0x5886, 0x4ba6,
34 38 0x87, 0x95, 0x13, 0x19, 0xf5, 0x2a, 0x96, 0x6b);
··· 60 56 dn->name = link->package.elements[0].string.pointer;
61 57 dn->fwnode.ops = &acpi_data_fwnode_ops;
62 58 dn->parent = parent;
59 + INIT_LIST_HEAD(&dn->data.properties);
63 60 INIT_LIST_HEAD(&dn->data.subnodes);
64 61
65 62 result = acpi_extract_properties(desc, &dn->data);
··· 293 288 adev->flags.of_compatible_ok = 1;
294 289 }
295 290
291 + static bool acpi_is_property_guid(const guid_t *guid)
292 + {
293 + int i;
294 +
295 + for (i = 0; i < ARRAY_SIZE(prp_guids); i++) {
296 + if (guid_equal(guid, &prp_guids[i]))
297 + return true;
298 + }
299 +
300 + return false;
301 + }
302 +
303 + struct acpi_device_properties *
304 + acpi_data_add_props(struct acpi_device_data *data, const guid_t *guid,
305 + const union acpi_object *properties)
306 + {
307 + struct acpi_device_properties *props;
308 +
309 + props = kzalloc(sizeof(*props), GFP_KERNEL);
310 + if (props) {
311 + INIT_LIST_HEAD(&props->list);
312 + props->guid = guid;
313 + props->properties = properties;
314 + list_add_tail(&props->list, &data->properties);
315 + }
316 +
317 + return props;
318 + }
319 +
296 320 static bool acpi_extract_properties(const union acpi_object *desc,
297 321 struct acpi_device_data *data)
298 322 {
··· 346 312 properties->type != ACPI_TYPE_PACKAGE)
347 313 break;
348 314
349 - if (!guid_equal((guid_t *)guid->buffer.pointer, &prp_guid))
315 + if (!acpi_is_property_guid((guid_t *)guid->buffer.pointer))
350 316 continue;
351 317
352 318 /*
··· 354 320 * package immediately following it.
355 321 */ 356 322 if (!acpi_properties_format_valid(properties)) 357 - break; 323 + continue; 358 324 359 - data->properties = properties; 360 - return true; 325 + acpi_data_add_props(data, (const guid_t *)guid->buffer.pointer, 326 + properties); 361 327 } 362 328 363 - return false; 329 + return !list_empty(&data->properties); 364 330 } 365 331 366 332 void acpi_init_properties(struct acpi_device *adev) ··· 370 336 acpi_status status; 371 337 bool acpi_of = false; 372 338 339 + INIT_LIST_HEAD(&adev->data.properties); 373 340 INIT_LIST_HEAD(&adev->data.subnodes); 374 341 375 342 if (!adev->handle) ··· 433 398 434 399 void acpi_free_properties(struct acpi_device *adev) 435 400 { 401 + struct acpi_device_properties *props, *tmp; 402 + 436 403 acpi_destroy_nondev_subnodes(&adev->data.subnodes); 437 404 ACPI_FREE((void *)adev->data.pointer); 438 405 adev->data.of_compatible = NULL; 439 406 adev->data.pointer = NULL; 440 - adev->data.properties = NULL; 407 + list_for_each_entry_safe(props, tmp, &adev->data.properties, list) { 408 + list_del(&props->list); 409 + kfree(props); 410 + } 441 411 } 442 412 443 413 /** ··· 467 427 const char *name, acpi_object_type type, 468 428 const union acpi_object **obj) 469 429 { 470 - const union acpi_object *properties; 471 - int i; 430 + const struct acpi_device_properties *props; 472 431 473 432 if (!data || !name) 474 433 return -EINVAL; 475 434 476 - if (!data->pointer || !data->properties) 435 + if (!data->pointer || list_empty(&data->properties)) 477 436 return -EINVAL; 478 437 479 - properties = data->properties; 480 - for (i = 0; i < properties->package.count; i++) { 481 - const union acpi_object *propname, *propvalue; 482 - const union acpi_object *property; 438 + list_for_each_entry(props, &data->properties, list) { 439 + const union acpi_object *properties; 440 + unsigned int i; 483 441 484 - property = &properties->package.elements[i]; 442 + properties = props->properties; 443 + for (i = 0; i < properties->package.count; i++) { 444 + const union acpi_object *propname, *propvalue; 445 + const union acpi_object *property; 485 446 486 - propname = &property->package.elements[0]; 487 - propvalue = &property->package.elements[1]; 447 + property = &properties->package.elements[i]; 488 448 489 - if (!strcmp(name, propname->string.pointer)) { 490 - if (type != ACPI_TYPE_ANY && propvalue->type != type) 491 - return -EPROTO; 492 - if (obj) 493 - *obj = propvalue; 449 + propname = &property->package.elements[0]; 450 + propvalue = &property->package.elements[1]; 494 451 495 - return 0; 452 + if (!strcmp(name, propname->string.pointer)) { 453 + if (type != ACPI_TYPE_ANY && 454 + propvalue->type != type) 455 + return -EPROTO; 456 + if (obj) 457 + *obj = propvalue; 458 + 459 + return 0; 460 + } 496 461 } 497 462 } 498 463 return -EINVAL;
+1 -1
drivers/acpi/x86/apple.c
··· 132 132 } 133 133 WARN_ON(free_space != (void *)newprops + newsize); 134 134 135 - adev->data.properties = newprops; 136 135 adev->data.pointer = newprops; 136 + acpi_data_add_props(&adev->data, &apple_prp_guid, newprops); 137 137 138 138 out_free: 139 139 ACPI_FREE(props);
+1 -1
drivers/ata/sata_inic162x.c
··· 873 873 * like others but it will lock up the whole machine HARD if 874 874 * 65536 byte PRD entry is fed. Reduce maximum segment size. 875 875 */ 876 - rc = pci_set_dma_max_seg_size(pdev, 65536 - 512); 876 + rc = dma_set_max_seg_size(&pdev->dev, 65536 - 512); 877 877 if (rc) { 878 878 dev_err(&pdev->dev, "failed to set the maximum segment size\n"); 879 879 return rc;
+1 -1
drivers/block/rsxx/core.c
··· 780 780 goto failed_enable; 781 781 782 782 pci_set_master(dev); 783 - pci_set_dma_max_seg_size(dev, RSXX_HW_BLK_SIZE); 783 + dma_set_max_seg_size(&dev->dev, RSXX_HW_BLK_SIZE); 784 784 785 785 st = dma_set_mask(&dev->dev, DMA_BIT_MASK(64)); 786 786 if (st) {
-1
drivers/crypto/qat/qat_common/adf_aer.c
··· 198 198 pr_err("QAT: Can't find acceleration device\n"); 199 199 return PCI_ERS_RESULT_DISCONNECT; 200 200 } 201 - pci_cleanup_aer_uncorrect_error_status(pdev); 202 201 if (adf_dev_aer_schedule_reset(accel_dev, ADF_DEV_RESET_SYNC)) 203 202 return PCI_ERS_RESULT_DISCONNECT; 204 203
-7
drivers/dma/ioat/init.c
··· 1258 1258 static pci_ers_result_t ioat_pcie_error_slot_reset(struct pci_dev *pdev) 1259 1259 { 1260 1260 pci_ers_result_t result = PCI_ERS_RESULT_RECOVERED; 1261 - int err; 1262 1261 1263 1262 dev_dbg(&pdev->dev, "%s post reset handling\n", DRV_NAME); 1264 1263 ··· 1270 1271 pci_restore_state(pdev); 1271 1272 pci_save_state(pdev); 1272 1273 pci_wake_from_d3(pdev, false); 1273 - } 1274 - 1275 - err = pci_cleanup_aer_uncorrect_error_status(pdev); 1276 - if (err) { 1277 - dev_err(&pdev->dev, 1278 - "AER uncorrect error status clear failed: %#x\n", err); 1279 1274 } 1280 1275 1281 1276 return result;
+1 -1
drivers/gpio/gpiolib-acpi.c
··· 1194 1194 bool acpi_can_fallback_to_crs(struct acpi_device *adev, const char *con_id) 1195 1195 { 1196 1196 /* Never allow fallback if the device has properties */ 1197 - if (adev->data.properties || adev->driver_gpios) 1197 + if (acpi_dev_has_props(adev) || adev->driver_gpios) 1198 1198 return false; 1199 1199 1200 1200 return con_id == NULL;
+9 -2
drivers/infiniband/core/rw.c
··· 12 12 */ 13 13 #include <linux/moduleparam.h> 14 14 #include <linux/slab.h> 15 + #include <linux/pci-p2pdma.h> 15 16 #include <rdma/mr_pool.h> 16 17 #include <rdma/rw.h> 17 18 ··· 281 280 struct ib_device *dev = qp->pd->device; 282 281 int ret; 283 282 284 - ret = ib_dma_map_sg(dev, sg, sg_cnt, dir); 283 + if (is_pci_p2pdma_page(sg_page(sg))) 284 + ret = pci_p2pdma_map_sg(dev->dma_device, sg, sg_cnt, dir); 285 + else 286 + ret = ib_dma_map_sg(dev, sg, sg_cnt, dir); 287 + 285 288 if (!ret) 286 289 return -ENOMEM; 287 290 sg_cnt = ret; ··· 607 602 break; 608 603 } 609 604 610 - ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir); 605 + /* P2PDMA contexts do not need to be unmapped */ 606 + if (!is_pci_p2pdma_page(sg_page(sg))) 607 + ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir); 611 608 } 612 609 EXPORT_SYMBOL(rdma_rw_ctx_destroy); 613 610
+5 -5
drivers/infiniband/hw/cxgb4/qp.c
··· 99 99 static void dealloc_host_sq(struct c4iw_rdev *rdev, struct t4_sq *sq) 100 100 { 101 101 dma_free_coherent(&(rdev->lldi.pdev->dev), sq->memsize, sq->queue, 102 - pci_unmap_addr(sq, mapping)); 102 + dma_unmap_addr(sq, mapping)); 103 103 } 104 104 105 105 static void dealloc_sq(struct c4iw_rdev *rdev, struct t4_sq *sq) ··· 132 132 if (!sq->queue) 133 133 return -ENOMEM; 134 134 sq->phys_addr = virt_to_phys(sq->queue); 135 - pci_unmap_addr_set(sq, mapping, sq->dma_addr); 135 + dma_unmap_addr_set(sq, mapping, sq->dma_addr); 136 136 return 0; 137 137 } 138 138 ··· 2521 2521 2522 2522 dma_free_coherent(&rdev->lldi.pdev->dev, 2523 2523 wq->memsize, wq->queue, 2524 - pci_unmap_addr(wq, mapping)); 2524 + dma_unmap_addr(wq, mapping)); 2525 2525 c4iw_rqtpool_free(rdev, wq->rqt_hwaddr, wq->rqt_size); 2526 2526 kfree(wq->sw_rq); 2527 2527 c4iw_put_qpid(rdev, wq->qid, uctx); ··· 2570 2570 goto err_free_rqtpool; 2571 2571 2572 2572 memset(wq->queue, 0, wq->memsize); 2573 - pci_unmap_addr_set(wq, mapping, wq->dma_addr); 2573 + dma_unmap_addr_set(wq, mapping, wq->dma_addr); 2574 2574 2575 2575 wq->bar2_va = c4iw_bar2_addrs(rdev, wq->qid, T4_BAR2_QTYPE_EGRESS, 2576 2576 &wq->bar2_qid, ··· 2649 2649 err_free_queue: 2650 2650 dma_free_coherent(&rdev->lldi.pdev->dev, 2651 2651 wq->memsize, wq->queue, 2652 - pci_unmap_addr(wq, mapping)); 2652 + dma_unmap_addr(wq, mapping)); 2653 2653 err_free_rqtpool: 2654 2654 c4iw_rqtpool_free(rdev, wq->rqt_hwaddr, wq->rqt_size); 2655 2655 err_free_pending_wrs:
+1 -1
drivers/infiniband/hw/cxgb4/t4.h
··· 397 397 struct t4_srq { 398 398 union t4_recv_wr *queue; 399 399 dma_addr_t dma_addr; 400 - DECLARE_PCI_UNMAP_ADDR(mapping); 400 + DEFINE_DMA_UNMAP_ADDR(mapping); 401 401 struct t4_swrqe *sw_rq; 402 402 void __iomem *bar2_va; 403 403 u64 bar2_pa;
-1
drivers/infiniband/hw/hfi1/pcie.c
··· 650 650 struct hfi1_devdata *dd = pci_get_drvdata(pdev); 651 651 652 652 dd_dev_info(dd, "HFI1 resume function called\n"); 653 - pci_cleanup_aer_uncorrect_error_status(pdev); 654 653 /* 655 654 * Running jobs will fail, since it's asynchronous 656 655 * unlike sysfs-requested reset. Better than
-1
drivers/infiniband/hw/qib/qib_pcie.c
··· 597 597 struct qib_devdata *dd = pci_get_drvdata(pdev); 598 598 599 599 qib_devinfo(pdev, "QIB resume function called\n"); 600 - pci_cleanup_aer_uncorrect_error_status(pdev); 601 600 /* 602 601 * Running jobs will fail, since it's asynchronous 603 602 * unlike sysfs-requested reset. Better than
-2
drivers/net/ethernet/atheros/alx/main.c
··· 1964 1964 if (!alx_reset_mac(hw)) 1965 1965 rc = PCI_ERS_RESULT_RECOVERED; 1966 1966 out: 1967 - pci_cleanup_aer_uncorrect_error_status(pdev); 1968 - 1969 1967 rtnl_unlock(); 1970 1968 1971 1969 return rc;
-7
drivers/net/ethernet/broadcom/bnx2.c
··· 8793 8793 if (!(bp->flags & BNX2_FLAG_AER_ENABLED)) 8794 8794 return result; 8795 8795 8796 - err = pci_cleanup_aer_uncorrect_error_status(pdev); 8797 - if (err) { 8798 - dev_err(&pdev->dev, 8799 - "pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n", 8800 - err); /* non-fatal, continue */ 8801 - } 8802 - 8803 8796 return result; 8804 8797 } 8805 8798
-8
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 14380 14380 14381 14381 rtnl_unlock(); 14382 14382 14383 - /* If AER, perform cleanup of the PCIe registers */ 14384 - if (bp->flags & AER_ENABLED) { 14385 - if (pci_cleanup_aer_uncorrect_error_status(pdev)) 14386 - BNX2X_ERR("pci_cleanup_aer_uncorrect_error_status failed\n"); 14387 - else 14388 - DP(NETIF_MSG_HW, "pci_cleanup_aer_uncorrect_error_status succeeded\n"); 14389 - } 14390 - 14391 14383 return PCI_ERS_RESULT_RECOVERED; 14392 14384 } 14393 14385
-7
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 10354 10354 10355 10355 rtnl_unlock(); 10356 10356 10357 - err = pci_cleanup_aer_uncorrect_error_status(pdev); 10358 - if (err) { 10359 - dev_err(&pdev->dev, 10360 - "pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n", 10361 - err); /* non-fatal, continue */ 10362 - } 10363 - 10364 10357 return PCI_ERS_RESULT_RECOVERED; 10365 10358 } 10366 10359
-1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 4767 4767 pci_set_master(pdev); 4768 4768 pci_restore_state(pdev); 4769 4769 pci_save_state(pdev); 4770 - pci_cleanup_aer_uncorrect_error_status(pdev); 4771 4770 4772 4771 if (t4_wait_dev_ready(adap->regs) < 0) 4773 4772 return PCI_ERS_RESULT_DISCONNECT;
-1
drivers/net/ethernet/emulex/benet/be_main.c
··· 6146 6146 if (status) 6147 6147 return PCI_ERS_RESULT_DISCONNECT; 6148 6148 6149 - pci_cleanup_aer_uncorrect_error_status(pdev); 6150 6149 be_clear_error(adapter, BE_CLEAR_ALL); 6151 6150 return PCI_ERS_RESULT_RECOVERED; 6152 6151 }
-2
drivers/net/ethernet/intel/e1000e/netdev.c
··· 6854 6854 result = PCI_ERS_RESULT_RECOVERED; 6855 6855 } 6856 6856 6857 - pci_cleanup_aer_uncorrect_error_status(pdev); 6858 - 6859 6857 return result; 6860 6858 } 6861 6859
-2
drivers/net/ethernet/intel/fm10k/fm10k_pci.c
··· 2440 2440 result = PCI_ERS_RESULT_RECOVERED; 2441 2441 } 2442 2442 2443 - pci_cleanup_aer_uncorrect_error_status(pdev); 2444 - 2445 2443 return result; 2446 2444 } 2447 2445
-9
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 14552 14552 { 14553 14553 struct i40e_pf *pf = pci_get_drvdata(pdev); 14554 14554 pci_ers_result_t result; 14555 - int err; 14556 14555 u32 reg; 14557 14556 14558 14557 dev_dbg(&pdev->dev, "%s\n", __func__); ··· 14570 14571 result = PCI_ERS_RESULT_RECOVERED; 14571 14572 else 14572 14573 result = PCI_ERS_RESULT_DISCONNECT; 14573 - } 14574 - 14575 - err = pci_cleanup_aer_uncorrect_error_status(pdev); 14576 - if (err) { 14577 - dev_info(&pdev->dev, 14578 - "pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n", 14579 - err); 14580 - /* non-fatal, continue */ 14581 14574 } 14582 14575 14583 14576 return result;
-9
drivers/net/ethernet/intel/igb/igb_main.c
··· 9086 9086 struct igb_adapter *adapter = netdev_priv(netdev); 9087 9087 struct e1000_hw *hw = &adapter->hw; 9088 9088 pci_ers_result_t result; 9089 - int err; 9090 9089 9091 9090 if (pci_enable_device_mem(pdev)) { 9092 9091 dev_err(&pdev->dev, ··· 9107 9108 igb_reset(adapter); 9108 9109 wr32(E1000_WUS, ~0); 9109 9110 result = PCI_ERS_RESULT_RECOVERED; 9110 - } 9111 - 9112 - err = pci_cleanup_aer_uncorrect_error_status(pdev); 9113 - if (err) { 9114 - dev_err(&pdev->dev, 9115 - "pci_cleanup_aer_uncorrect_error_status failed 0x%0x\n", 9116 - err); 9117 - /* non-fatal, continue */ 9118 9111 } 9119 9112 9120 9113 return result;
-10
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 11322 11322 /* Free device reference count */ 11323 11323 pci_dev_put(vfdev); 11324 11324 } 11325 - 11326 - pci_cleanup_aer_uncorrect_error_status(pdev); 11327 11325 } 11328 11326 11329 11327 /* ··· 11371 11373 { 11372 11374 struct ixgbe_adapter *adapter = pci_get_drvdata(pdev); 11373 11375 pci_ers_result_t result; 11374 - int err; 11375 11376 11376 11377 if (pci_enable_device_mem(pdev)) { 11377 11378 e_err(probe, "Cannot re-enable PCI device after reset.\n"); ··· 11388 11391 ixgbe_reset(adapter); 11389 11392 IXGBE_WRITE_REG(&adapter->hw, IXGBE_WUS, ~0); 11390 11393 result = PCI_ERS_RESULT_RECOVERED; 11391 - } 11392 - 11393 - err = pci_cleanup_aer_uncorrect_error_status(pdev); 11394 - if (err) { 11395 - e_dev_err("pci_cleanup_aer_uncorrect_error_status " 11396 - "failed 0x%0x\n", err); 11397 - /* non-fatal, continue */ 11398 11394 } 11399 11395 11400 11396 return result;
-6
drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
··· 1784 1784 return err ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; 1785 1785 } 1786 1786 1787 - static void netxen_io_resume(struct pci_dev *pdev) 1788 - { 1789 - pci_cleanup_aer_uncorrect_error_status(pdev); 1790 - } 1791 - 1792 1787 static void netxen_nic_shutdown(struct pci_dev *pdev) 1793 1788 { 1794 1789 struct netxen_adapter *adapter = pci_get_drvdata(pdev); ··· 3460 3465 static const struct pci_error_handlers netxen_err_handler = { 3461 3466 .error_detected = netxen_io_error_detected, 3462 3467 .slot_reset = netxen_io_slot_reset, 3463 - .resume = netxen_io_resume, 3464 3468 }; 3465 3469 3466 3470 static struct pci_driver netxen_driver = {
-1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
··· 4233 4233 { 4234 4234 struct qlcnic_adapter *adapter = pci_get_drvdata(pdev); 4235 4235 4236 - pci_cleanup_aer_uncorrect_error_status(pdev); 4237 4236 if (test_and_clear_bit(__QLCNIC_AER, &adapter->state)) 4238 4237 qlcnic_83xx_aer_start_poll_work(adapter); 4239 4238 }
-1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
··· 3930 3930 u32 state; 3931 3931 struct qlcnic_adapter *adapter = pci_get_drvdata(pdev); 3932 3932 3933 - pci_cleanup_aer_uncorrect_error_status(pdev); 3934 3933 state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE); 3935 3934 if (state == QLCNIC_DEV_READY && test_and_clear_bit(__QLCNIC_AER, 3936 3935 &adapter->state))
-8
drivers/net/ethernet/sfc/efx.c
··· 3821 3821 { 3822 3822 struct efx_nic *efx = pci_get_drvdata(pdev); 3823 3823 pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED; 3824 - int rc; 3825 3824 3826 3825 if (pci_enable_device(pdev)) { 3827 3826 netif_err(efx, hw, efx->net_dev, 3828 3827 "Cannot re-enable PCI device after reset.\n"); 3829 3828 status = PCI_ERS_RESULT_DISCONNECT; 3830 - } 3831 - 3832 - rc = pci_cleanup_aer_uncorrect_error_status(pdev); 3833 - if (rc) { 3834 - netif_err(efx, hw, efx->net_dev, 3835 - "pci_cleanup_aer_uncorrect_error_status failed (%d)\n", rc); 3836 - /* Non-fatal error. Continue. */ 3837 3829 } 3838 3830 3839 3831 return status;
-8
drivers/net/ethernet/sfc/falcon/efx.c
··· 3160 3160 { 3161 3161 struct ef4_nic *efx = pci_get_drvdata(pdev); 3162 3162 pci_ers_result_t status = PCI_ERS_RESULT_RECOVERED; 3163 - int rc; 3164 3163 3165 3164 if (pci_enable_device(pdev)) { 3166 3165 netif_err(efx, hw, efx->net_dev, 3167 3166 "Cannot re-enable PCI device after reset.\n"); 3168 3167 status = PCI_ERS_RESULT_DISCONNECT; 3169 - } 3170 - 3171 - rc = pci_cleanup_aer_uncorrect_error_status(pdev); 3172 - if (rc) { 3173 - netif_err(efx, hw, efx->net_dev, 3174 - "pci_cleanup_aer_uncorrect_error_status failed (%d)\n", rc); 3175 - /* Non-fatal error. Continue. */ 3176 3168 } 3177 3169 3178 3170 return status;
+4
drivers/nvme/host/core.c
··· 3064 3064 ns->queue = blk_mq_init_queue(ctrl->tagset); 3065 3065 if (IS_ERR(ns->queue)) 3066 3066 goto out_free_ns; 3067 + 3067 3068 blk_queue_flag_set(QUEUE_FLAG_NONROT, ns->queue); 3069 + if (ctrl->ops->flags & NVME_F_PCI_P2PDMA) 3070 + blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue); 3071 + 3068 3072 ns->queue->queuedata = ns; 3069 3073 ns->ctrl = ctrl; 3070 3074
+1
drivers/nvme/host/nvme.h
··· 343 343 unsigned int flags; 344 344 #define NVME_F_FABRICS (1 << 0) 345 345 #define NVME_F_METADATA_SUPPORTED (1 << 1) 346 + #define NVME_F_PCI_P2PDMA (1 << 2) 346 347 int (*reg_read32)(struct nvme_ctrl *ctrl, u32 off, u32 *val); 347 348 int (*reg_write32)(struct nvme_ctrl *ctrl, u32 off, u32 val); 348 349 int (*reg_read64)(struct nvme_ctrl *ctrl, u32 off, u64 *val);
+58 -40
drivers/nvme/host/pci.c
··· 30 30 #include <linux/types.h>
31 31 #include <linux/io-64-nonatomic-lo-hi.h>
32 32 #include <linux/sed-opal.h>
33 + #include <linux/pci-p2pdma.h>
33 34
34 35 #include "nvme.h"
35 36
··· 100 99 struct work_struct remove_work;
101 100 struct mutex shutdown_lock;
102 101 bool subsystem;
103 - void __iomem *cmb;
104 - pci_bus_addr_t cmb_bus_addr;
105 102 u64 cmb_size;
103 + bool cmb_use_sqes;
106 104 u32 cmbsz;
107 105 u32 cmbloc;
108 106 struct nvme_ctrl ctrl;
··· 158 158 struct nvme_dev *dev;
159 159 spinlock_t sq_lock;
160 160 struct nvme_command *sq_cmds;
161 - struct nvme_command __iomem *sq_cmds_io;
161 + bool sq_cmds_is_io;
162 162 spinlock_t cq_lock ____cacheline_aligned_in_smp;
163 163 volatile struct nvme_completion *cqes;
164 164 struct blk_mq_tags **tags;
··· 447 447 static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd)
448 448 {
449 449 spin_lock(&nvmeq->sq_lock);
450 - if (nvmeq->sq_cmds_io)
451 - memcpy_toio(&nvmeq->sq_cmds_io[nvmeq->sq_tail], cmd,
452 - sizeof(*cmd));
453 - else
454 - memcpy(&nvmeq->sq_cmds[nvmeq->sq_tail], cmd, sizeof(*cmd));
450 +
451 + memcpy(&nvmeq->sq_cmds[nvmeq->sq_tail], cmd, sizeof(*cmd));
455 452
456 453 if (++nvmeq->sq_tail == nvmeq->q_depth)
457 454 nvmeq->sq_tail = 0;
··· 745 748 goto out;
746 749
747 750 ret = BLK_STS_RESOURCE;
748 - nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents, dma_dir,
749 - DMA_ATTR_NO_WARN);
751 +
752 + if (is_pci_p2pdma_page(sg_page(iod->sg)))
753 + nr_mapped = pci_p2pdma_map_sg(dev->dev, iod->sg, iod->nents,
754 + dma_dir);
755 + else
756 + nr_mapped = dma_map_sg_attrs(dev->dev, iod->sg, iod->nents,
757 + dma_dir, DMA_ATTR_NO_WARN);
750 758 if (!nr_mapped)
751 759 goto out;
··· 793 791 DMA_TO_DEVICE : DMA_FROM_DEVICE;
794 792
795 793 if (iod->nents) {
796 - dma_unmap_sg(dev->dev, iod->sg, iod->nents, dma_dir);
794 + /* P2PDMA requests do not need to be unmapped */
795 + if (!is_pci_p2pdma_page(sg_page(iod->sg)))
796 + dma_unmap_sg(dev->dev, iod->sg, iod->nents, dma_dir);
797 +
797 798 if (blk_integrity_rq(req))
798 799 dma_unmap_sg(dev->dev, &iod->meta_sg, 1, dma_dir);
799 800 }
··· 1237 1232 {
1238 1233 dma_free_coherent(nvmeq->q_dmadev, CQ_SIZE(nvmeq->q_depth),
1239 1234 (void *)nvmeq->cqes, nvmeq->cq_dma_addr);
1240 - if (nvmeq->sq_cmds)
1241 - dma_free_coherent(nvmeq->q_dmadev, SQ_SIZE(nvmeq->q_depth),
1242 - nvmeq->sq_cmds, nvmeq->sq_dma_addr);
1235 +
1236 + if (nvmeq->sq_cmds) {
1237 + if (nvmeq->sq_cmds_is_io)
1238 + pci_free_p2pmem(to_pci_dev(nvmeq->q_dmadev),
1239 + nvmeq->sq_cmds,
1240 + SQ_SIZE(nvmeq->q_depth));
1241 + else
1242 + dma_free_coherent(nvmeq->q_dmadev,
1243 + SQ_SIZE(nvmeq->q_depth),
1244 + nvmeq->sq_cmds,
1245 + nvmeq->sq_dma_addr);
1246 + }
1243 1247 }
1244 1248
1245 1249 static void nvme_free_queues(struct nvme_dev *dev, int lowest)
··· 1337 1323 static int nvme_alloc_sq_cmds(struct nvme_dev *dev, struct nvme_queue *nvmeq,
1338 1324 int qid, int depth)
1339 1325 {
1340 - /* CMB SQEs will be mapped before creation */
1341 - if (qid && dev->cmb && use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS))
1342 - return 0;
1326 + struct pci_dev *pdev = to_pci_dev(dev->dev);
1343 1327
1344 - nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth),
1345 - &nvmeq->sq_dma_addr, GFP_KERNEL);
1328 + if (qid && dev->cmb_use_sqes && (dev->cmbsz & NVME_CMBSZ_SQS)) {
1329 + nvmeq->sq_cmds = pci_alloc_p2pmem(pdev, SQ_SIZE(depth));
1330 + nvmeq->sq_dma_addr = pci_p2pmem_virt_to_bus(pdev,
1331 + nvmeq->sq_cmds);
1332 + nvmeq->sq_cmds_is_io = true;
1333 + }
1334 +
1335 +
if (!nvmeq->sq_cmds) { 1336 + nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth), 1337 + &nvmeq->sq_dma_addr, GFP_KERNEL); 1338 + nvmeq->sq_cmds_is_io = false; 1339 + } 1340 + 1346 1341 if (!nvmeq->sq_cmds) 1347 1342 return -ENOMEM; 1348 1343 return 0; ··· 1427 1404 struct nvme_dev *dev = nvmeq->dev; 1428 1405 int result; 1429 1406 s16 vector; 1430 - 1431 - if (dev->cmb && use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS)) { 1432 - unsigned offset = (qid - 1) * roundup(SQ_SIZE(nvmeq->q_depth), 1433 - dev->ctrl.page_size); 1434 - nvmeq->sq_dma_addr = dev->cmb_bus_addr + offset; 1435 - nvmeq->sq_cmds_io = dev->cmb + offset; 1436 - } 1437 1407 1438 1408 /* 1439 1409 * A queue's vector matches the queue identifier unless the controller ··· 1668 1652 return; 1669 1653 dev->cmbloc = readl(dev->bar + NVME_REG_CMBLOC); 1670 1654 1671 - if (!use_cmb_sqes) 1672 - return; 1673 - 1674 1655 size = nvme_cmb_size_unit(dev) * nvme_cmb_size(dev); 1675 1656 offset = nvme_cmb_size_unit(dev) * NVME_CMB_OFST(dev->cmbloc); 1676 1657 bar = NVME_CMB_BIR(dev->cmbloc); ··· 1684 1671 if (size > bar_size - offset) 1685 1672 size = bar_size - offset; 1686 1673 1687 - dev->cmb = ioremap_wc(pci_resource_start(pdev, bar) + offset, size); 1688 - if (!dev->cmb) 1674 + if (pci_p2pdma_add_resource(pdev, bar, size, offset)) { 1675 + dev_warn(dev->ctrl.device, 1676 + "failed to register the CMB\n"); 1689 1677 return; 1690 - dev->cmb_bus_addr = pci_bus_address(pdev, bar) + offset; 1678 + } 1679 + 1691 1680 dev->cmb_size = size; 1681 + dev->cmb_use_sqes = use_cmb_sqes && (dev->cmbsz & NVME_CMBSZ_SQS); 1682 + 1683 + if ((dev->cmbsz & (NVME_CMBSZ_WDS | NVME_CMBSZ_RDS)) == 1684 + (NVME_CMBSZ_WDS | NVME_CMBSZ_RDS)) 1685 + pci_p2pmem_publish(pdev, true); 1692 1686 1693 1687 if (sysfs_add_file_to_group(&dev->ctrl.device->kobj, 1694 1688 &dev_attr_cmb.attr, NULL)) ··· 1705 1685 1706 1686 static inline void nvme_release_cmb(struct nvme_dev *dev) 1707 1687 { 1708 - if (dev->cmb) { 1709 - iounmap(dev->cmb); 1710 - dev->cmb = NULL; 1688 + if (dev->cmb_size) { 1711 1689 sysfs_remove_file_from_group(&dev->ctrl.device->kobj, 1712 1690 &dev_attr_cmb.attr, NULL); 1713 - dev->cmbsz = 0; 1691 + dev->cmb_size = 0; 1714 1692 } 1715 1693 } 1716 1694 ··· 1907 1889 if (nr_io_queues == 0) 1908 1890 return 0; 1909 1891 1910 - if (dev->cmb && (dev->cmbsz & NVME_CMBSZ_SQS)) { 1892 + if (dev->cmb_use_sqes) { 1911 1893 result = nvme_cmb_qdepth(dev, nr_io_queues, 1912 1894 sizeof(struct nvme_command)); 1913 1895 if (result > 0) 1914 1896 dev->q_depth = result; 1915 1897 else 1916 - nvme_release_cmb(dev); 1898 + dev->cmb_use_sqes = false; 1917 1899 } 1918 1900 1919 1901 do { ··· 2408 2390 static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = { 2409 2391 .name = "pcie", 2410 2392 .module = THIS_MODULE, 2411 - .flags = NVME_F_METADATA_SUPPORTED, 2393 + .flags = NVME_F_METADATA_SUPPORTED | 2394 + NVME_F_PCI_P2PDMA, 2412 2395 .reg_read32 = nvme_pci_reg_read32, 2413 2396 .reg_write32 = nvme_pci_reg_write32, 2414 2397 .reg_read64 = nvme_pci_reg_read64, ··· 2667 2648 struct nvme_dev *dev = pci_get_drvdata(pdev); 2668 2649 2669 2650 flush_work(&dev->ctrl.reset_work); 2670 - pci_cleanup_aer_uncorrect_error_status(pdev); 2671 2651 } 2672 2652 2673 2653 static const struct pci_error_handlers nvme_err_handler = {
+47
drivers/nvme/target/configfs.c
··· 17 17 #include <linux/slab.h> 18 18 #include <linux/stat.h> 19 19 #include <linux/ctype.h> 20 + #include <linux/pci.h> 21 + #include <linux/pci-p2pdma.h> 20 22 21 23 #include "nvmet.h" 22 24 ··· 342 340 343 341 CONFIGFS_ATTR(nvmet_ns_, device_path); 344 342 343 + #ifdef CONFIG_PCI_P2PDMA 344 + static ssize_t nvmet_ns_p2pmem_show(struct config_item *item, char *page) 345 + { 346 + struct nvmet_ns *ns = to_nvmet_ns(item); 347 + 348 + return pci_p2pdma_enable_show(page, ns->p2p_dev, ns->use_p2pmem); 349 + } 350 + 351 + static ssize_t nvmet_ns_p2pmem_store(struct config_item *item, 352 + const char *page, size_t count) 353 + { 354 + struct nvmet_ns *ns = to_nvmet_ns(item); 355 + struct pci_dev *p2p_dev = NULL; 356 + bool use_p2pmem; 357 + int ret = count; 358 + int error; 359 + 360 + mutex_lock(&ns->subsys->lock); 361 + if (ns->enabled) { 362 + ret = -EBUSY; 363 + goto out_unlock; 364 + } 365 + 366 + error = pci_p2pdma_enable_store(page, &p2p_dev, &use_p2pmem); 367 + if (error) { 368 + ret = error; 369 + goto out_unlock; 370 + } 371 + 372 + ns->use_p2pmem = use_p2pmem; 373 + pci_dev_put(ns->p2p_dev); 374 + ns->p2p_dev = p2p_dev; 375 + 376 + out_unlock: 377 + mutex_unlock(&ns->subsys->lock); 378 + 379 + return ret; 380 + } 381 + 382 + CONFIGFS_ATTR(nvmet_ns_, p2pmem); 383 + #endif /* CONFIG_PCI_P2PDMA */ 384 + 345 385 static ssize_t nvmet_ns_device_uuid_show(struct config_item *item, char *page) 346 386 { 347 387 return sprintf(page, "%pUb\n", &to_nvmet_ns(item)->uuid); ··· 553 509 &nvmet_ns_attr_ana_grpid, 554 510 &nvmet_ns_attr_enable, 555 511 &nvmet_ns_attr_buffered_io, 512 + #ifdef CONFIG_PCI_P2PDMA 513 + &nvmet_ns_attr_p2pmem, 514 + #endif 556 515 NULL, 557 516 }; 558 517
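For context: pci_p2pdma_enable_store() accepts either a boolean string or a full PCI device name, so the per-namespace provider added above can be driven from userspace roughly as follows (the subsystem name is made up; paths follow the usual nvmet configfs layout):

/*
 *   # let the target auto-select any compatible p2p memory provider
 *   echo 1 > /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/p2pmem
 *
 *   # or pin one provider explicitly by PCI address
 *   echo 0000:01:00.1 > /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/p2pmem
 */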
+180
drivers/nvme/target/core.c
··· 15 15 #include <linux/module.h> 16 16 #include <linux/random.h> 17 17 #include <linux/rculist.h> 18 + #include <linux/pci-p2pdma.h> 18 19 19 20 #include "nvmet.h" 20 21 ··· 366 365 nvmet_file_ns_disable(ns); 367 366 } 368 367 368 + static int nvmet_p2pmem_ns_enable(struct nvmet_ns *ns) 369 + { 370 + int ret; 371 + struct pci_dev *p2p_dev; 372 + 373 + if (!ns->use_p2pmem) 374 + return 0; 375 + 376 + if (!ns->bdev) { 377 + pr_err("peer-to-peer DMA is not supported by non-block device namespaces\n"); 378 + return -EINVAL; 379 + } 380 + 381 + if (!blk_queue_pci_p2pdma(ns->bdev->bd_queue)) { 382 + pr_err("peer-to-peer DMA is not supported by the driver of %s\n", 383 + ns->device_path); 384 + return -EINVAL; 385 + } 386 + 387 + if (ns->p2p_dev) { 388 + ret = pci_p2pdma_distance(ns->p2p_dev, nvmet_ns_dev(ns), true); 389 + if (ret < 0) 390 + return -EINVAL; 391 + } else { 392 + /* 393 + * Right now we just check that there is p2pmem available so 394 + * we can report an error to the user right away if there 395 + * is not. We'll find the actual device to use once we 396 + * setup the controller when the port's device is available. 397 + */ 398 + 399 + p2p_dev = pci_p2pmem_find(nvmet_ns_dev(ns)); 400 + if (!p2p_dev) { 401 + pr_err("no peer-to-peer memory is available for %s\n", 402 + ns->device_path); 403 + return -EINVAL; 404 + } 405 + 406 + pci_dev_put(p2p_dev); 407 + } 408 + 409 + return 0; 410 + } 411 + 412 + /* 413 + * Note: ctrl->subsys->lock should be held when calling this function 414 + */ 415 + static void nvmet_p2pmem_ns_add_p2p(struct nvmet_ctrl *ctrl, 416 + struct nvmet_ns *ns) 417 + { 418 + struct device *clients[2]; 419 + struct pci_dev *p2p_dev; 420 + int ret; 421 + 422 + if (!ctrl->p2p_client) 423 + return; 424 + 425 + if (ns->p2p_dev) { 426 + ret = pci_p2pdma_distance(ns->p2p_dev, ctrl->p2p_client, true); 427 + if (ret < 0) 428 + return; 429 + 430 + p2p_dev = pci_dev_get(ns->p2p_dev); 431 + } else { 432 + clients[0] = ctrl->p2p_client; 433 + clients[1] = nvmet_ns_dev(ns); 434 + 435 + p2p_dev = pci_p2pmem_find_many(clients, ARRAY_SIZE(clients)); 436 + if (!p2p_dev) { 437 + pr_err("no peer-to-peer memory is available that's supported by %s and %s\n", 438 + dev_name(ctrl->p2p_client), ns->device_path); 439 + return; 440 + } 441 + } 442 + 443 + ret = radix_tree_insert(&ctrl->p2p_ns_map, ns->nsid, p2p_dev); 444 + if (ret < 0) 445 + pci_dev_put(p2p_dev); 446 + 447 + pr_info("using p2pmem on %s for nsid %d\n", pci_name(p2p_dev), 448 + ns->nsid); 449 + } 450 + 369 451 int nvmet_ns_enable(struct nvmet_ns *ns) 370 452 { 371 453 struct nvmet_subsys *subsys = ns->subsys; 454 + struct nvmet_ctrl *ctrl; 372 455 int ret; 373 456 374 457 mutex_lock(&subsys->lock); ··· 468 383 ret = nvmet_file_ns_enable(ns); 469 384 if (ret) 470 385 goto out_unlock; 386 + 387 + ret = nvmet_p2pmem_ns_enable(ns); 388 + if (ret) 389 + goto out_unlock; 390 + 391 + list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) 392 + nvmet_p2pmem_ns_add_p2p(ctrl, ns); 471 393 472 394 ret = percpu_ref_init(&ns->ref, nvmet_destroy_namespace, 473 395 0, GFP_KERNEL); ··· 510 418 mutex_unlock(&subsys->lock); 511 419 return ret; 512 420 out_dev_put: 421 + list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) 422 + pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid)); 423 + 513 424 nvmet_ns_dev_disable(ns); 514 425 goto out_unlock; 515 426 } ··· 520 425 void nvmet_ns_disable(struct nvmet_ns *ns) 521 426 { 522 427 struct nvmet_subsys *subsys = ns->subsys; 428 + struct nvmet_ctrl *ctrl; 523 429 524 430 
mutex_lock(&subsys->lock); 525 431 if (!ns->enabled) ··· 530 434 list_del_rcu(&ns->dev_link); 531 435 if (ns->nsid == subsys->max_nsid) 532 436 subsys->max_nsid = nvmet_max_nsid(subsys); 437 + 438 + list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) 439 + pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid)); 440 + 533 441 mutex_unlock(&subsys->lock); 534 442 535 443 /* ··· 550 450 percpu_ref_exit(&ns->ref); 551 451 552 452 mutex_lock(&subsys->lock); 453 + 553 454 subsys->nr_namespaces--; 554 455 nvmet_ns_changed(subsys, ns->nsid); 555 456 nvmet_ns_dev_disable(ns); ··· 826 725 } 827 726 EXPORT_SYMBOL_GPL(nvmet_req_execute); 828 727 728 + int nvmet_req_alloc_sgl(struct nvmet_req *req) 729 + { 730 + struct pci_dev *p2p_dev = NULL; 731 + 732 + if (IS_ENABLED(CONFIG_PCI_P2PDMA)) { 733 + if (req->sq->ctrl && req->ns) 734 + p2p_dev = radix_tree_lookup(&req->sq->ctrl->p2p_ns_map, 735 + req->ns->nsid); 736 + 737 + req->p2p_dev = NULL; 738 + if (req->sq->qid && p2p_dev) { 739 + req->sg = pci_p2pmem_alloc_sgl(p2p_dev, &req->sg_cnt, 740 + req->transfer_len); 741 + if (req->sg) { 742 + req->p2p_dev = p2p_dev; 743 + return 0; 744 + } 745 + } 746 + 747 + /* 748 + * If no P2P memory was available we fallback to using 749 + * regular memory 750 + */ 751 + } 752 + 753 + req->sg = sgl_alloc(req->transfer_len, GFP_KERNEL, &req->sg_cnt); 754 + if (!req->sg) 755 + return -ENOMEM; 756 + 757 + return 0; 758 + } 759 + EXPORT_SYMBOL_GPL(nvmet_req_alloc_sgl); 760 + 761 + void nvmet_req_free_sgl(struct nvmet_req *req) 762 + { 763 + if (req->p2p_dev) 764 + pci_p2pmem_free_sgl(req->p2p_dev, req->sg); 765 + else 766 + sgl_free(req->sg); 767 + 768 + req->sg = NULL; 769 + req->sg_cnt = 0; 770 + } 771 + EXPORT_SYMBOL_GPL(nvmet_req_free_sgl); 772 + 829 773 static inline bool nvmet_cc_en(u32 cc) 830 774 { 831 775 return (cc >> NVME_CC_EN_SHIFT) & 0x1; ··· 1067 921 return __nvmet_host_allowed(subsys, hostnqn); 1068 922 } 1069 923 924 + /* 925 + * Note: ctrl->subsys->lock should be held when calling this function 926 + */ 927 + static void nvmet_setup_p2p_ns_map(struct nvmet_ctrl *ctrl, 928 + struct nvmet_req *req) 929 + { 930 + struct nvmet_ns *ns; 931 + 932 + if (!req->p2p_client) 933 + return; 934 + 935 + ctrl->p2p_client = get_device(req->p2p_client); 936 + 937 + list_for_each_entry_rcu(ns, &ctrl->subsys->namespaces, dev_link) 938 + nvmet_p2pmem_ns_add_p2p(ctrl, ns); 939 + } 940 + 941 + /* 942 + * Note: ctrl->subsys->lock should be held when calling this function 943 + */ 944 + static void nvmet_release_p2p_ns_map(struct nvmet_ctrl *ctrl) 945 + { 946 + struct radix_tree_iter iter; 947 + void __rcu **slot; 948 + 949 + radix_tree_for_each_slot(slot, &ctrl->p2p_ns_map, &iter, 0) 950 + pci_dev_put(radix_tree_deref_slot(slot)); 951 + 952 + put_device(ctrl->p2p_client); 953 + } 954 + 1070 955 u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn, 1071 956 struct nvmet_req *req, u32 kato, struct nvmet_ctrl **ctrlp) 1072 957 { ··· 1139 962 1140 963 INIT_WORK(&ctrl->async_event_work, nvmet_async_event_work); 1141 964 INIT_LIST_HEAD(&ctrl->async_events); 965 + INIT_RADIX_TREE(&ctrl->p2p_ns_map, GFP_KERNEL); 1142 966 1143 967 memcpy(ctrl->subsysnqn, subsysnqn, NVMF_NQN_SIZE); 1144 968 memcpy(ctrl->hostnqn, hostnqn, NVMF_NQN_SIZE); ··· 1204 1026 1205 1027 mutex_lock(&subsys->lock); 1206 1028 list_add_tail(&ctrl->subsys_entry, &subsys->ctrls); 1029 + nvmet_setup_p2p_ns_map(ctrl, req); 1207 1030 mutex_unlock(&subsys->lock); 1208 1031 1209 1032 *ctrlp = ctrl; ··· 1232 1053 struct nvmet_subsys *subsys = ctrl->subsys; 
1233 1054 1234 1055 mutex_lock(&subsys->lock); 1056 + nvmet_release_p2p_ns_map(ctrl); 1235 1057 list_del(&ctrl->subsys_entry); 1236 1058 mutex_unlock(&subsys->lock); 1237 1059
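A sketch of how a transport consumes the two new helpers (this mirrors the rdma.c hunks further down; the function names and length argument are illustrative). req->transfer_len must be set first, since nvmet_req_alloc_sgl() sizes the allocation from it and quietly falls back to regular pages when no p2pmem is usable:

static int example_map_data(struct nvmet_req *req, u32 len)
{
        int ret;

        req->transfer_len = len;        /* consumed by the allocator */
        ret = nvmet_req_alloc_sgl(req); /* p2pmem if eligible, else sgl_alloc() */
        if (ret < 0)
                return ret;

        /* ... hand req->sg / req->sg_cnt to the DMA engine ... */
        return 0;
}

static void example_unmap_data(struct nvmet_req *req)
{
        nvmet_req_free_sgl(req);        /* frees p2pmem and regular SGLs alike */
}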
+3
drivers/nvme/target/io-cmd-bdev.c
··· 78 78 op = REQ_OP_READ; 79 79 } 80 80 81 + if (is_pci_p2pdma_page(sg_page(req->sg))) 82 + op_flags |= REQ_NOMERGE; 83 + 81 84 sector = le64_to_cpu(req->cmd->rw.slba); 82 85 sector <<= (req->ns->blksize_shift - 9); 83 86
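The rationale for this hunk, inferred from the host-side changes rather than stated in the diff: nvme-pci decides between pci_p2pdma_map_sg() and dma_map_sg() by looking at the first page of a request, so requests must stay homogeneous.

/*
 * REQ_NOMERGE keeps a p2pmem-backed bio from being merged with one backed
 * by regular system pages; a mixed request would defeat the per-request
 * is_pci_p2pdma_page() check in the host driver.
 */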
+17
drivers/nvme/target/nvmet.h
··· 26 26 #include <linux/configfs.h> 27 27 #include <linux/rcupdate.h> 28 28 #include <linux/blkdev.h> 29 + #include <linux/radix-tree.h> 29 30 30 31 #define NVMET_ASYNC_EVENTS 4 31 32 #define NVMET_ERROR_LOG_SLOTS 128 ··· 78 77 struct completion disable_done; 79 78 mempool_t *bvec_pool; 80 79 struct kmem_cache *bvec_cache; 80 + 81 + int use_p2pmem; 82 + struct pci_dev *p2p_dev; 81 83 }; 82 84 83 85 static inline struct nvmet_ns *to_nvmet_ns(struct config_item *item) 84 86 { 85 87 return container_of(to_config_group(item), struct nvmet_ns, group); 88 + } 89 + 90 + static inline struct device *nvmet_ns_dev(struct nvmet_ns *ns) 91 + { 92 + return ns->bdev ? disk_to_dev(ns->bdev->bd_disk) : NULL; 86 93 } 87 94 88 95 struct nvmet_cq { ··· 193 184 194 185 char subsysnqn[NVMF_NQN_FIELD_LEN]; 195 186 char hostnqn[NVMF_NQN_FIELD_LEN]; 187 + 188 + struct device *p2p_client; 189 + struct radix_tree_root p2p_ns_map; 196 190 }; 197 191 198 192 struct nvmet_subsys { ··· 307 295 308 296 void (*execute)(struct nvmet_req *req); 309 297 const struct nvmet_fabrics_ops *ops; 298 + 299 + struct pci_dev *p2p_dev; 300 + struct device *p2p_client; 310 301 }; 311 302 312 303 extern struct workqueue_struct *buffered_io_wq; ··· 352 337 void nvmet_req_uninit(struct nvmet_req *req); 353 338 void nvmet_req_execute(struct nvmet_req *req); 354 339 void nvmet_req_complete(struct nvmet_req *req, u16 status); 340 + int nvmet_req_alloc_sgl(struct nvmet_req *req); 341 + void nvmet_req_free_sgl(struct nvmet_req *req); 355 342 356 343 void nvmet_cq_setup(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq, u16 qid, 357 344 u16 size);
+14 -8
drivers/nvme/target/rdma.c
··· 504 504 } 505 505 506 506 if (rsp->req.sg != rsp->cmd->inline_sg) 507 - sgl_free(rsp->req.sg); 507 + nvmet_req_free_sgl(&rsp->req); 508 508 509 509 if (unlikely(!list_empty_careful(&queue->rsp_wr_wait_list))) 510 510 nvmet_rdma_process_wr_wait_list(queue); ··· 653 653 { 654 654 struct rdma_cm_id *cm_id = rsp->queue->cm_id; 655 655 u64 addr = le64_to_cpu(sgl->addr); 656 - u32 len = get_unaligned_le24(sgl->length); 657 656 u32 key = get_unaligned_le32(sgl->key); 658 657 int ret; 659 658 659 + rsp->req.transfer_len = get_unaligned_le24(sgl->length); 660 + 660 661 /* no data command? */ 661 - if (!len) 662 + if (!rsp->req.transfer_len) 662 663 return 0; 663 664 664 - rsp->req.sg = sgl_alloc(len, GFP_KERNEL, &rsp->req.sg_cnt); 665 - if (!rsp->req.sg) 666 - return NVME_SC_INTERNAL; 665 + ret = nvmet_req_alloc_sgl(&rsp->req); 666 + if (ret < 0) 667 + goto error_out; 667 668 668 669 ret = rdma_rw_ctx_init(&rsp->rw, cm_id->qp, cm_id->port_num, 669 670 rsp->req.sg, rsp->req.sg_cnt, 0, addr, key, 670 671 nvmet_data_dir(&rsp->req)); 671 672 if (ret < 0) 672 - return NVME_SC_INTERNAL; 673 - rsp->req.transfer_len += len; 673 + goto error_out; 674 674 rsp->n_rdma += ret; 675 675 676 676 if (invalidate) { ··· 679 679 } 680 680 681 681 return 0; 682 + 683 + error_out: 684 + rsp->req.transfer_len = 0; 685 + return NVME_SC_INTERNAL; 682 686 } 683 687 684 688 static u16 nvmet_rdma_map_sgl(struct nvmet_rdma_rsp *rsp) ··· 749 745 ib_dma_sync_single_for_cpu(queue->dev->device, 750 746 cmd->send_sge.addr, cmd->send_sge.length, 751 747 DMA_TO_DEVICE); 748 + 749 + cmd->req.p2p_client = &queue->dev->device->dev; 752 750 753 751 if (!nvmet_req_init(&cmd->req, &queue->nvme_cq, 754 752 &queue->nvme_sq, &nvmet_rdma_ops))
+20
drivers/pci/Kconfig
··· 98 98 config PCI_LOCKLESS_CONFIG 99 99 bool 100 100 101 + config PCI_BRIDGE_EMUL 102 + bool 103 + 101 104 config PCI_IOV 102 105 bool "PCI IOV support" 103 106 depends on PCI ··· 132 129 use of this feature an IOMMU is required which also supports PASIDs. 133 130 Select this option if you have such an IOMMU and want to compile the 134 131 driver for it into your kernel. 132 + 133 + If unsure, say N. 134 + 135 + config PCI_P2PDMA 136 + bool "PCI peer-to-peer transfer support" 137 + depends on PCI && ZONE_DEVICE 138 + select GENERIC_ALLOCATOR 139 + help 140 + Enables drivers to do PCI peer-to-peer transactions to and from 141 + BARs that are exposed in other devices that are part of 142 + the hierarchy where peer-to-peer DMA is guaranteed by the PCI 143 + specification to work (i.e. anything below a single PCI bridge). 144 + 145 + Many PCIe root complexes do not support P2P transactions and 146 + it's hard to tell which support it at all, so at this time, 147 + P2P DMA transactions must be between devices behind the same root 148 + port. 135 149 136 150 If unsure, say N. 137 151
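The ZONE_DEVICE dependency is what lets BAR memory get real struct pages (p2pdma registers them via devm_memremap_pages()), and GENERIC_ALLOCATOR supplies the genalloc pool they are handed out from. The page test the host-side hunks rely on reduces to roughly the following (a sketch of is_pci_p2pdma_page(), reconstructed from memory rather than quoted from the patch):

static inline bool example_is_pci_p2pdma_page(const struct page *page)
{
        return is_zone_device_page(page) &&
               page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
}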
+2
drivers/pci/Makefile
··· 19 19 obj-$(CONFIG_PCI_MSI) += msi.o 20 20 obj-$(CONFIG_PCI_ATS) += ats.o 21 21 obj-$(CONFIG_PCI_IOV) += iov.o 22 + obj-$(CONFIG_PCI_BRIDGE_EMUL) += pci-bridge-emul.o 22 23 obj-$(CONFIG_ACPI) += pci-acpi.o 23 24 obj-$(CONFIG_PCI_LABEL) += pci-label.o 24 25 obj-$(CONFIG_X86_INTEL_MID) += pci-mid.o ··· 27 26 obj-$(CONFIG_PCI_STUB) += pci-stub.o 28 27 obj-$(CONFIG_PCI_PF_STUB) += pci-pf-stub.o 29 28 obj-$(CONFIG_PCI_ECAM) += ecam.o 29 + obj-$(CONFIG_PCI_P2PDMA) += p2pdma.o 30 30 obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o 31 31 32 32 # Endpoint library must be initialized before its users
+2 -2
drivers/pci/access.c
··· 33 33 #endif 34 34 35 35 #define PCI_OP_READ(size, type, len) \ 36 - int pci_bus_read_config_##size \ 36 + int noinline pci_bus_read_config_##size \ 37 37 (struct pci_bus *bus, unsigned int devfn, int pos, type *value) \ 38 38 { \ 39 39 int res; \ ··· 48 48 } 49 49 50 50 #define PCI_OP_WRITE(size, type, len) \ 51 - int pci_bus_write_config_##size \ 51 + int noinline pci_bus_write_config_##size \ 52 52 (struct pci_bus *bus, unsigned int devfn, int pos, type value) \ 53 53 { \ 54 54 int res; \
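The noinline is the point of this change: the macro-generated accessors keep real symbols that function tracers can hook. Illustrative use of the standard tracefs knobs (nothing here is specific to this patch):

/*
 *   cd /sys/kernel/tracing
 *   echo 'pci_bus_*_config_*' > set_ftrace_filter
 *   echo function > current_tracer
 *   cat trace
 */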
+3 -1
drivers/pci/controller/Kconfig
··· 9 9 depends on MVEBU_MBUS 10 10 depends on ARM 11 11 depends on OF 12 + select PCI_BRIDGE_EMUL 12 13 13 14 config PCI_AARDVARK 14 15 bool "Aardvark PCIe controller" 15 16 depends on (ARCH_MVEBU && ARM64) || COMPILE_TEST 16 17 depends on OF 17 18 depends on PCI_MSI_IRQ_DOMAIN 19 + select PCI_BRIDGE_EMUL 18 20 help 19 21 Add support for Aardvark 64bit PCIe Host Controller. This 20 22 controller is part of the South Bridge of the Marvell Armada ··· 233 231 available to support GEN2 with 4 slots. 234 232 235 233 config PCIE_MEDIATEK 236 - bool "MediaTek PCIe controller" 234 + tristate "MediaTek PCIe controller" 237 235 depends on ARCH_MEDIATEK || COMPILE_TEST 238 236 depends on OF 239 237 depends on PCI_MSI_IRQ_DOMAIN
+1 -1
drivers/pci/controller/dwc/Makefile
··· 7 7 obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o 8 8 obj-$(CONFIG_PCI_IMX6) += pci-imx6.o 9 9 obj-$(CONFIG_PCIE_SPEAR13XX) += pcie-spear13xx.o 10 - obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone-dw.o pci-keystone.o 10 + obj-$(CONFIG_PCI_KEYSTONE) += pci-keystone.o 11 11 obj-$(CONFIG_PCI_LAYERSCAPE) += pci-layerscape.o 12 12 obj-$(CONFIG_PCIE_QCOM) += pcie-qcom.o 13 13 obj-$(CONFIG_PCIE_ARMADA_8K) += pcie-armada8k.o
+8 -3
drivers/pci/controller/dwc/pci-dra7xx.c
··· 542 542 }; 543 543 544 544 /* 545 - * dra7xx_pcie_ep_unaligned_memaccess: workaround for AM572x/AM571x Errata i870 545 + * dra7xx_pcie_unaligned_memaccess: workaround for AM572x/AM571x Errata i870 546 546 * @dra7xx: the dra7xx device where the workaround should be applied 547 547 * 548 548 * Accesses to the PCIe slave port that are not 32-bit aligned will result ··· 552 552 * 553 553 * To avoid this issue set PCIE_SS1_AXI2OCP_LEGACY_MODE_ENABLE to 1. 554 554 */ 555 - static int dra7xx_pcie_ep_unaligned_memaccess(struct device *dev) 555 + static int dra7xx_pcie_unaligned_memaccess(struct device *dev) 556 556 { 557 557 int ret; 558 558 struct device_node *np = dev->of_node; ··· 704 704 705 705 dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE, 706 706 DEVICE_TYPE_RC); 707 + 708 + ret = dra7xx_pcie_unaligned_memaccess(dev); 709 + if (ret) 710 + dev_err(dev, "WA for Errata i870 not applied\n"); 711 + 707 712 ret = dra7xx_add_pcie_port(dra7xx, pdev); 708 713 if (ret < 0) 709 714 goto err_gpio; ··· 722 717 dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE, 723 718 DEVICE_TYPE_EP); 724 719 725 - ret = dra7xx_pcie_ep_unaligned_memaccess(dev); 720 + ret = dra7xx_pcie_unaligned_memaccess(dev); 726 721 if (ret) 727 722 goto err_gpio; 728 723
+171 -5
drivers/pci/controller/dwc/pci-imx6.c
··· 50 50 struct regmap *iomuxc_gpr; 51 51 struct reset_control *pciephy_reset; 52 52 struct reset_control *apps_reset; 53 + struct reset_control *turnoff_reset; 53 54 enum imx6_pcie_variants variant; 54 55 u32 tx_deemph_gen1; 55 56 u32 tx_deemph_gen2_3p5db; ··· 98 97 #define PORT_LOGIC_SPEED_CHANGE (0x1 << 17) 99 98 100 99 /* PHY registers (not memory-mapped) */ 100 + #define PCIE_PHY_ATEOVRD 0x10 101 + #define PCIE_PHY_ATEOVRD_EN (0x1 << 2) 102 + #define PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT 0 103 + #define PCIE_PHY_ATEOVRD_REF_CLKDIV_MASK 0x1 104 + 105 + #define PCIE_PHY_MPLL_OVRD_IN_LO 0x11 106 + #define PCIE_PHY_MPLL_MULTIPLIER_SHIFT 2 107 + #define PCIE_PHY_MPLL_MULTIPLIER_MASK 0x7f 108 + #define PCIE_PHY_MPLL_MULTIPLIER_OVRD (0x1 << 9) 109 + 101 110 #define PCIE_PHY_RX_ASIC_OUT 0x100D 102 111 #define PCIE_PHY_RX_ASIC_OUT_VALID (1 << 0) 103 112 ··· 519 508 IMX6Q_GPR12_DEVICE_TYPE, PCI_EXP_TYPE_ROOT_PORT << 12); 520 509 } 521 510 511 + static int imx6_setup_phy_mpll(struct imx6_pcie *imx6_pcie) 512 + { 513 + unsigned long phy_rate = clk_get_rate(imx6_pcie->pcie_phy); 514 + int mult, div; 515 + u32 val; 516 + 517 + switch (phy_rate) { 518 + case 125000000: 519 + /* 520 + * The default settings of the MPLL are for a 125MHz input 521 + * clock, so no need to reconfigure anything in that case. 522 + */ 523 + return 0; 524 + case 100000000: 525 + mult = 25; 526 + div = 0; 527 + break; 528 + case 200000000: 529 + mult = 25; 530 + div = 1; 531 + break; 532 + default: 533 + dev_err(imx6_pcie->pci->dev, 534 + "Unsupported PHY reference clock rate %lu\n", phy_rate); 535 + return -EINVAL; 536 + } 537 + 538 + pcie_phy_read(imx6_pcie, PCIE_PHY_MPLL_OVRD_IN_LO, &val); 539 + val &= ~(PCIE_PHY_MPLL_MULTIPLIER_MASK << 540 + PCIE_PHY_MPLL_MULTIPLIER_SHIFT); 541 + val |= mult << PCIE_PHY_MPLL_MULTIPLIER_SHIFT; 542 + val |= PCIE_PHY_MPLL_MULTIPLIER_OVRD; 543 + pcie_phy_write(imx6_pcie, PCIE_PHY_MPLL_OVRD_IN_LO, val); 544 + 545 + pcie_phy_read(imx6_pcie, PCIE_PHY_ATEOVRD, &val); 546 + val &= ~(PCIE_PHY_ATEOVRD_REF_CLKDIV_MASK << 547 + PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT); 548 + val |= div << PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT; 549 + val |= PCIE_PHY_ATEOVRD_EN; 550 + pcie_phy_write(imx6_pcie, PCIE_PHY_ATEOVRD, val); 551 + 552 + return 0; 553 + } 554 + 522 555 static int imx6_pcie_wait_for_link(struct imx6_pcie *imx6_pcie) 523 556 { 524 557 struct dw_pcie *pci = imx6_pcie->pci; ··· 597 542 return -EINVAL; 598 543 } 599 544 545 + static void imx6_pcie_ltssm_enable(struct device *dev) 546 + { 547 + struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev); 548 + 549 + switch (imx6_pcie->variant) { 550 + case IMX6Q: 551 + case IMX6SX: 552 + case IMX6QP: 553 + regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, 554 + IMX6Q_GPR12_PCIE_CTL_2, 555 + IMX6Q_GPR12_PCIE_CTL_2); 556 + break; 557 + case IMX7D: 558 + reset_control_deassert(imx6_pcie->apps_reset); 559 + break; 560 + } 561 + } 562 + 600 563 static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie) 601 564 { 602 565 struct dw_pcie *pci = imx6_pcie->pci; ··· 633 560 dw_pcie_writel_dbi(pci, PCIE_RC_LCR, tmp); 634 561 635 562 /* Start LTSSM. 
*/ 636 - if (imx6_pcie->variant == IMX7D) 637 - reset_control_deassert(imx6_pcie->apps_reset); 638 - else 639 - regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, 640 - IMX6Q_GPR12_PCIE_CTL_2, 1 << 10); 563 + imx6_pcie_ltssm_enable(dev); 641 564 642 565 ret = imx6_pcie_wait_for_link(imx6_pcie); 643 566 if (ret) ··· 701 632 imx6_pcie_assert_core_reset(imx6_pcie); 702 633 imx6_pcie_init_phy(imx6_pcie); 703 634 imx6_pcie_deassert_core_reset(imx6_pcie); 635 + imx6_setup_phy_mpll(imx6_pcie); 704 636 dw_pcie_setup_rc(pp); 705 637 imx6_pcie_establish_link(imx6_pcie); 706 638 ··· 750 680 751 681 static const struct dw_pcie_ops dw_pcie_ops = { 752 682 .link_up = imx6_pcie_link_up, 683 + }; 684 + 685 + #ifdef CONFIG_PM_SLEEP 686 + static void imx6_pcie_ltssm_disable(struct device *dev) 687 + { 688 + struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev); 689 + 690 + switch (imx6_pcie->variant) { 691 + case IMX6SX: 692 + case IMX6QP: 693 + regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, 694 + IMX6Q_GPR12_PCIE_CTL_2, 0); 695 + break; 696 + case IMX7D: 697 + reset_control_assert(imx6_pcie->apps_reset); 698 + break; 699 + default: 700 + dev_err(dev, "ltssm_disable not supported\n"); 701 + } 702 + } 703 + 704 + static void imx6_pcie_pm_turnoff(struct imx6_pcie *imx6_pcie) 705 + { 706 + reset_control_assert(imx6_pcie->turnoff_reset); 707 + reset_control_deassert(imx6_pcie->turnoff_reset); 708 + 709 + /* 710 + * Components with an upstream port must respond to 711 + * PME_Turn_Off with PME_TO_Ack but we can't check. 712 + * 713 + * The standard recommends a 1-10ms timeout after which to 714 + * proceed anyway as if acks were received. 715 + */ 716 + usleep_range(1000, 10000); 717 + } 718 + 719 + static void imx6_pcie_clk_disable(struct imx6_pcie *imx6_pcie) 720 + { 721 + clk_disable_unprepare(imx6_pcie->pcie); 722 + clk_disable_unprepare(imx6_pcie->pcie_phy); 723 + clk_disable_unprepare(imx6_pcie->pcie_bus); 724 + 725 + if (imx6_pcie->variant == IMX7D) { 726 + regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR12, 727 + IMX7D_GPR12_PCIE_PHY_REFCLK_SEL, 728 + IMX7D_GPR12_PCIE_PHY_REFCLK_SEL); 729 + } 730 + } 731 + 732 + static int imx6_pcie_suspend_noirq(struct device *dev) 733 + { 734 + struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev); 735 + 736 + if (imx6_pcie->variant != IMX7D) 737 + return 0; 738 + 739 + imx6_pcie_pm_turnoff(imx6_pcie); 740 + imx6_pcie_clk_disable(imx6_pcie); 741 + imx6_pcie_ltssm_disable(dev); 742 + 743 + return 0; 744 + } 745 + 746 + static int imx6_pcie_resume_noirq(struct device *dev) 747 + { 748 + int ret; 749 + struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev); 750 + struct pcie_port *pp = &imx6_pcie->pci->pp; 751 + 752 + if (imx6_pcie->variant != IMX7D) 753 + return 0; 754 + 755 + imx6_pcie_assert_core_reset(imx6_pcie); 756 + imx6_pcie_init_phy(imx6_pcie); 757 + imx6_pcie_deassert_core_reset(imx6_pcie); 758 + dw_pcie_setup_rc(pp); 759 + 760 + ret = imx6_pcie_establish_link(imx6_pcie); 761 + if (ret < 0) 762 + dev_info(dev, "pcie link is down after resume.\n"); 763 + 764 + return 0; 765 + } 766 + #endif 767 + 768 + static const struct dev_pm_ops imx6_pcie_pm_ops = { 769 + SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx6_pcie_suspend_noirq, 770 + imx6_pcie_resume_noirq) 753 771 }; 754 772 755 773 static int imx6_pcie_probe(struct platform_device *pdev) ··· 934 776 break; 935 777 } 936 778 779 + /* Grab turnoff reset */ 780 + imx6_pcie->turnoff_reset = devm_reset_control_get_optional_exclusive(dev, "turnoff"); 781 + if (IS_ERR(imx6_pcie->turnoff_reset)) { 782 + dev_err(dev, "Failed 
to get TURNOFF reset control\n"); 783 + return PTR_ERR(imx6_pcie->turnoff_reset); 784 + } 785 + 937 786 /* Grab GPR config register range */ 938 787 imx6_pcie->iomuxc_gpr = 939 788 syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr"); ··· 1013 848 .name = "imx6q-pcie", 1014 849 .of_match_table = imx6_pcie_of_match, 1015 850 .suppress_bind_attrs = true, 851 + .pm = &imx6_pcie_pm_ops, 1016 852 }, 1017 853 .probe = imx6_pcie_probe, 1018 854 .shutdown = imx6_pcie_shutdown,
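The MPLL table in the hunk above keeps the PHY's internal clock constant across reference-clock choices. Assuming the reset default is x20 from a 125 MHz refclk (an inference; the patch only says 125 MHz needs no reconfiguration), all three supported rates land on the same 2.5 GHz Gen1 clock:

/*
 * 125 MHz x 20 (reset default)       = 2500 MHz
 * 100 MHz x 25, refclk divider /1    = 2500 MHz
 * 200 MHz x 25, refclk divider /2    = 2500 MHz
 */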
-484
drivers/pci/controller/dwc/pci-keystone-dw.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* 3 - * DesignWare application register space functions for Keystone PCI controller 4 - * 5 - * Copyright (C) 2013-2014 Texas Instruments., Ltd. 6 - * http://www.ti.com 7 - * 8 - * Author: Murali Karicheri <m-karicheri2@ti.com> 9 - */ 10 - 11 - #include <linux/irq.h> 12 - #include <linux/irqdomain.h> 13 - #include <linux/irqreturn.h> 14 - #include <linux/module.h> 15 - #include <linux/of.h> 16 - #include <linux/of_pci.h> 17 - #include <linux/pci.h> 18 - #include <linux/platform_device.h> 19 - 20 - #include "pcie-designware.h" 21 - #include "pci-keystone.h" 22 - 23 - /* Application register defines */ 24 - #define LTSSM_EN_VAL 1 25 - #define LTSSM_STATE_MASK 0x1f 26 - #define LTSSM_STATE_L0 0x11 27 - #define DBI_CS2_EN_VAL 0x20 28 - #define OB_XLAT_EN_VAL 2 29 - 30 - /* Application registers */ 31 - #define CMD_STATUS 0x004 32 - #define CFG_SETUP 0x008 33 - #define OB_SIZE 0x030 34 - #define CFG_PCIM_WIN_SZ_IDX 3 35 - #define CFG_PCIM_WIN_CNT 32 36 - #define SPACE0_REMOTE_CFG_OFFSET 0x1000 37 - #define OB_OFFSET_INDEX(n) (0x200 + (8 * n)) 38 - #define OB_OFFSET_HI(n) (0x204 + (8 * n)) 39 - 40 - /* IRQ register defines */ 41 - #define IRQ_EOI 0x050 42 - #define IRQ_STATUS 0x184 43 - #define IRQ_ENABLE_SET 0x188 44 - #define IRQ_ENABLE_CLR 0x18c 45 - 46 - #define MSI_IRQ 0x054 47 - #define MSI0_IRQ_STATUS 0x104 48 - #define MSI0_IRQ_ENABLE_SET 0x108 49 - #define MSI0_IRQ_ENABLE_CLR 0x10c 50 - #define IRQ_STATUS 0x184 51 - #define MSI_IRQ_OFFSET 4 52 - 53 - /* Error IRQ bits */ 54 - #define ERR_AER BIT(5) /* ECRC error */ 55 - #define ERR_AXI BIT(4) /* AXI tag lookup fatal error */ 56 - #define ERR_CORR BIT(3) /* Correctable error */ 57 - #define ERR_NONFATAL BIT(2) /* Non-fatal error */ 58 - #define ERR_FATAL BIT(1) /* Fatal error */ 59 - #define ERR_SYS BIT(0) /* System (fatal, non-fatal, or correctable) */ 60 - #define ERR_IRQ_ALL (ERR_AER | ERR_AXI | ERR_CORR | \ 61 - ERR_NONFATAL | ERR_FATAL | ERR_SYS) 62 - #define ERR_FATAL_IRQ (ERR_FATAL | ERR_AXI) 63 - #define ERR_IRQ_STATUS_RAW 0x1c0 64 - #define ERR_IRQ_STATUS 0x1c4 65 - #define ERR_IRQ_ENABLE_SET 0x1c8 66 - #define ERR_IRQ_ENABLE_CLR 0x1cc 67 - 68 - /* Config space registers */ 69 - #define DEBUG0 0x728 70 - 71 - #define to_keystone_pcie(x) dev_get_drvdata((x)->dev) 72 - 73 - static inline void update_reg_offset_bit_pos(u32 offset, u32 *reg_offset, 74 - u32 *bit_pos) 75 - { 76 - *reg_offset = offset % 8; 77 - *bit_pos = offset >> 3; 78 - } 79 - 80 - phys_addr_t ks_dw_pcie_get_msi_addr(struct pcie_port *pp) 81 - { 82 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 83 - struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 84 - 85 - return ks_pcie->app.start + MSI_IRQ; 86 - } 87 - 88 - static u32 ks_dw_app_readl(struct keystone_pcie *ks_pcie, u32 offset) 89 - { 90 - return readl(ks_pcie->va_app_base + offset); 91 - } 92 - 93 - static void ks_dw_app_writel(struct keystone_pcie *ks_pcie, u32 offset, u32 val) 94 - { 95 - writel(val, ks_pcie->va_app_base + offset); 96 - } 97 - 98 - void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset) 99 - { 100 - struct dw_pcie *pci = ks_pcie->pci; 101 - struct pcie_port *pp = &pci->pp; 102 - struct device *dev = pci->dev; 103 - u32 pending, vector; 104 - int src, virq; 105 - 106 - pending = ks_dw_app_readl(ks_pcie, MSI0_IRQ_STATUS + (offset << 4)); 107 - 108 - /* 109 - * MSI0 status bit 0-3 shows vectors 0, 8, 16, 24, MSI1 status bit 110 - * shows 1, 9, 17, 25 and so forth 111 - */ 112 - for (src = 0; src < 4; src++) { 113 - if 
(BIT(src) & pending) { 114 - vector = offset + (src << 3); 115 - virq = irq_linear_revmap(pp->irq_domain, vector); 116 - dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n", 117 - src, vector, virq); 118 - generic_handle_irq(virq); 119 - } 120 - } 121 - } 122 - 123 - void ks_dw_pcie_msi_irq_ack(int irq, struct pcie_port *pp) 124 - { 125 - u32 reg_offset, bit_pos; 126 - struct keystone_pcie *ks_pcie; 127 - struct dw_pcie *pci; 128 - 129 - pci = to_dw_pcie_from_pp(pp); 130 - ks_pcie = to_keystone_pcie(pci); 131 - update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos); 132 - 133 - ks_dw_app_writel(ks_pcie, MSI0_IRQ_STATUS + (reg_offset << 4), 134 - BIT(bit_pos)); 135 - ks_dw_app_writel(ks_pcie, IRQ_EOI, reg_offset + MSI_IRQ_OFFSET); 136 - } 137 - 138 - void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq) 139 - { 140 - u32 reg_offset, bit_pos; 141 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 142 - struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 143 - 144 - update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos); 145 - ks_dw_app_writel(ks_pcie, MSI0_IRQ_ENABLE_SET + (reg_offset << 4), 146 - BIT(bit_pos)); 147 - } 148 - 149 - void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq) 150 - { 151 - u32 reg_offset, bit_pos; 152 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 153 - struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 154 - 155 - update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos); 156 - ks_dw_app_writel(ks_pcie, MSI0_IRQ_ENABLE_CLR + (reg_offset << 4), 157 - BIT(bit_pos)); 158 - } 159 - 160 - int ks_dw_pcie_msi_host_init(struct pcie_port *pp) 161 - { 162 - return dw_pcie_allocate_domains(pp); 163 - } 164 - 165 - void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie) 166 - { 167 - int i; 168 - 169 - for (i = 0; i < PCI_NUM_INTX; i++) 170 - ks_dw_app_writel(ks_pcie, IRQ_ENABLE_SET + (i << 4), 0x1); 171 - } 172 - 173 - void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset) 174 - { 175 - struct dw_pcie *pci = ks_pcie->pci; 176 - struct device *dev = pci->dev; 177 - u32 pending; 178 - int virq; 179 - 180 - pending = ks_dw_app_readl(ks_pcie, IRQ_STATUS + (offset << 4)); 181 - 182 - if (BIT(0) & pending) { 183 - virq = irq_linear_revmap(ks_pcie->legacy_irq_domain, offset); 184 - dev_dbg(dev, ": irq: irq_offset %d, virq %d\n", offset, virq); 185 - generic_handle_irq(virq); 186 - } 187 - 188 - /* EOI the INTx interrupt */ 189 - ks_dw_app_writel(ks_pcie, IRQ_EOI, offset); 190 - } 191 - 192 - void ks_dw_pcie_enable_error_irq(struct keystone_pcie *ks_pcie) 193 - { 194 - ks_dw_app_writel(ks_pcie, ERR_IRQ_ENABLE_SET, ERR_IRQ_ALL); 195 - } 196 - 197 - irqreturn_t ks_dw_pcie_handle_error_irq(struct keystone_pcie *ks_pcie) 198 - { 199 - u32 status; 200 - 201 - status = ks_dw_app_readl(ks_pcie, ERR_IRQ_STATUS_RAW) & ERR_IRQ_ALL; 202 - if (!status) 203 - return IRQ_NONE; 204 - 205 - if (status & ERR_FATAL_IRQ) 206 - dev_err(ks_pcie->pci->dev, "fatal error (status %#010x)\n", 207 - status); 208 - 209 - /* Ack the IRQ; status bits are RW1C */ 210 - ks_dw_app_writel(ks_pcie, ERR_IRQ_STATUS, status); 211 - return IRQ_HANDLED; 212 - } 213 - 214 - static void ks_dw_pcie_ack_legacy_irq(struct irq_data *d) 215 - { 216 - } 217 - 218 - static void ks_dw_pcie_mask_legacy_irq(struct irq_data *d) 219 - { 220 - } 221 - 222 - static void ks_dw_pcie_unmask_legacy_irq(struct irq_data *d) 223 - { 224 - } 225 - 226 - static struct irq_chip ks_dw_pcie_legacy_irq_chip = { 227 - .name = "Keystone-PCI-Legacy-IRQ", 228 - .irq_ack = ks_dw_pcie_ack_legacy_irq, 229 - .irq_mask = 
ks_dw_pcie_mask_legacy_irq, 230 - .irq_unmask = ks_dw_pcie_unmask_legacy_irq, 231 - }; 232 - 233 - static int ks_dw_pcie_init_legacy_irq_map(struct irq_domain *d, 234 - unsigned int irq, irq_hw_number_t hw_irq) 235 - { 236 - irq_set_chip_and_handler(irq, &ks_dw_pcie_legacy_irq_chip, 237 - handle_level_irq); 238 - irq_set_chip_data(irq, d->host_data); 239 - 240 - return 0; 241 - } 242 - 243 - static const struct irq_domain_ops ks_dw_pcie_legacy_irq_domain_ops = { 244 - .map = ks_dw_pcie_init_legacy_irq_map, 245 - .xlate = irq_domain_xlate_onetwocell, 246 - }; 247 - 248 - /** 249 - * ks_dw_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask 250 - * registers 251 - * 252 - * Since modification of dbi_cs2 involves different clock domain, read the 253 - * status back to ensure the transition is complete. 254 - */ 255 - static void ks_dw_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie) 256 - { 257 - u32 val; 258 - 259 - val = ks_dw_app_readl(ks_pcie, CMD_STATUS); 260 - ks_dw_app_writel(ks_pcie, CMD_STATUS, DBI_CS2_EN_VAL | val); 261 - 262 - do { 263 - val = ks_dw_app_readl(ks_pcie, CMD_STATUS); 264 - } while (!(val & DBI_CS2_EN_VAL)); 265 - } 266 - 267 - /** 268 - * ks_dw_pcie_clear_dbi_mode() - Disable DBI mode 269 - * 270 - * Since modification of dbi_cs2 involves different clock domain, read the 271 - * status back to ensure the transition is complete. 272 - */ 273 - static void ks_dw_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie) 274 - { 275 - u32 val; 276 - 277 - val = ks_dw_app_readl(ks_pcie, CMD_STATUS); 278 - ks_dw_app_writel(ks_pcie, CMD_STATUS, ~DBI_CS2_EN_VAL & val); 279 - 280 - do { 281 - val = ks_dw_app_readl(ks_pcie, CMD_STATUS); 282 - } while (val & DBI_CS2_EN_VAL); 283 - } 284 - 285 - void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie) 286 - { 287 - struct dw_pcie *pci = ks_pcie->pci; 288 - struct pcie_port *pp = &pci->pp; 289 - u32 start = pp->mem->start, end = pp->mem->end; 290 - int i, tr_size; 291 - u32 val; 292 - 293 - /* Disable BARs for inbound access */ 294 - ks_dw_pcie_set_dbi_mode(ks_pcie); 295 - dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0); 296 - dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0); 297 - ks_dw_pcie_clear_dbi_mode(ks_pcie); 298 - 299 - /* Set outbound translation size per window division */ 300 - ks_dw_app_writel(ks_pcie, OB_SIZE, CFG_PCIM_WIN_SZ_IDX & 0x7); 301 - 302 - tr_size = (1 << (CFG_PCIM_WIN_SZ_IDX & 0x7)) * SZ_1M; 303 - 304 - /* Using Direct 1:1 mapping of RC <-> PCI memory space */ 305 - for (i = 0; (i < CFG_PCIM_WIN_CNT) && (start < end); i++) { 306 - ks_dw_app_writel(ks_pcie, OB_OFFSET_INDEX(i), start | 1); 307 - ks_dw_app_writel(ks_pcie, OB_OFFSET_HI(i), 0); 308 - start += tr_size; 309 - } 310 - 311 - /* Enable OB translation */ 312 - val = ks_dw_app_readl(ks_pcie, CMD_STATUS); 313 - ks_dw_app_writel(ks_pcie, CMD_STATUS, OB_XLAT_EN_VAL | val); 314 - } 315 - 316 - /** 317 - * ks_pcie_cfg_setup() - Set up configuration space address for a device 318 - * 319 - * @ks_pcie: ptr to keystone_pcie structure 320 - * @bus: Bus number the device is residing on 321 - * @devfn: device, function number info 322 - * 323 - * Forms and returns the address of configuration space mapped in PCIESS 324 - * address space 0. Also configures CFG_SETUP for remote configuration space 325 - * access. 326 - * 327 - * The address space has two regions to access configuration - local and remote. 328 - * We access local region for bus 0 (as RC is attached on bus 0) and remote 329 - * region for others with TYPE 1 access when bus > 1. 
As for device on bus = 1, 330 - * we will do TYPE 0 access as it will be on our secondary bus (logical). 331 - * CFG_SETUP is needed only for remote configuration access. 332 - */ 333 - static void __iomem *ks_pcie_cfg_setup(struct keystone_pcie *ks_pcie, u8 bus, 334 - unsigned int devfn) 335 - { 336 - u8 device = PCI_SLOT(devfn), function = PCI_FUNC(devfn); 337 - struct dw_pcie *pci = ks_pcie->pci; 338 - struct pcie_port *pp = &pci->pp; 339 - u32 regval; 340 - 341 - if (bus == 0) 342 - return pci->dbi_base; 343 - 344 - regval = (bus << 16) | (device << 8) | function; 345 - 346 - /* 347 - * Since Bus#1 will be a virtual bus, we need to have TYPE0 348 - * access only. 349 - * TYPE 1 350 - */ 351 - if (bus != 1) 352 - regval |= BIT(24); 353 - 354 - ks_dw_app_writel(ks_pcie, CFG_SETUP, regval); 355 - return pp->va_cfg0_base; 356 - } 357 - 358 - int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, 359 - unsigned int devfn, int where, int size, u32 *val) 360 - { 361 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 362 - struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 363 - u8 bus_num = bus->number; 364 - void __iomem *addr; 365 - 366 - addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn); 367 - 368 - return dw_pcie_read(addr + where, size, val); 369 - } 370 - 371 - int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, 372 - unsigned int devfn, int where, int size, u32 val) 373 - { 374 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 375 - struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 376 - u8 bus_num = bus->number; 377 - void __iomem *addr; 378 - 379 - addr = ks_pcie_cfg_setup(ks_pcie, bus_num, devfn); 380 - 381 - return dw_pcie_write(addr + where, size, val); 382 - } 383 - 384 - /** 385 - * ks_dw_pcie_v3_65_scan_bus() - keystone scan_bus post initialization 386 - * 387 - * This sets BAR0 to enable inbound access for MSI_IRQ register 388 - */ 389 - void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp) 390 - { 391 - struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 392 - struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 393 - 394 - /* Configure and set up BAR0 */ 395 - ks_dw_pcie_set_dbi_mode(ks_pcie); 396 - 397 - /* Enable BAR0 */ 398 - dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1); 399 - dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1); 400 - 401 - ks_dw_pcie_clear_dbi_mode(ks_pcie); 402 - 403 - /* 404 - * For BAR0, just setting bus address for inbound writes (MSI) should 405 - * be sufficient. Use physical address to avoid any conflicts. 
406 - */ 407 - dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start); 408 - } 409 - 410 - /** 411 - * ks_dw_pcie_link_up() - Check if link up 412 - */ 413 - int ks_dw_pcie_link_up(struct dw_pcie *pci) 414 - { 415 - u32 val; 416 - 417 - val = dw_pcie_readl_dbi(pci, DEBUG0); 418 - return (val & LTSSM_STATE_MASK) == LTSSM_STATE_L0; 419 - } 420 - 421 - void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie) 422 - { 423 - u32 val; 424 - 425 - /* Disable Link training */ 426 - val = ks_dw_app_readl(ks_pcie, CMD_STATUS); 427 - val &= ~LTSSM_EN_VAL; 428 - ks_dw_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val); 429 - 430 - /* Initiate Link Training */ 431 - val = ks_dw_app_readl(ks_pcie, CMD_STATUS); 432 - ks_dw_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val); 433 - } 434 - 435 - /** 436 - * ks_dw_pcie_host_init() - initialize host for v3_65 dw hardware 437 - * 438 - * Ioremap the register resources, initialize legacy irq domain 439 - * and call dw_pcie_v3_65_host_init() API to initialize the Keystone 440 - * PCI host controller. 441 - */ 442 - int __init ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie, 443 - struct device_node *msi_intc_np) 444 - { 445 - struct dw_pcie *pci = ks_pcie->pci; 446 - struct pcie_port *pp = &pci->pp; 447 - struct device *dev = pci->dev; 448 - struct platform_device *pdev = to_platform_device(dev); 449 - struct resource *res; 450 - 451 - /* Index 0 is the config reg. space address */ 452 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 453 - pci->dbi_base = devm_pci_remap_cfg_resource(dev, res); 454 - if (IS_ERR(pci->dbi_base)) 455 - return PTR_ERR(pci->dbi_base); 456 - 457 - /* 458 - * We set these same and is used in pcie rd/wr_other_conf 459 - * functions 460 - */ 461 - pp->va_cfg0_base = pci->dbi_base + SPACE0_REMOTE_CFG_OFFSET; 462 - pp->va_cfg1_base = pp->va_cfg0_base; 463 - 464 - /* Index 1 is the application reg. space address */ 465 - res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 466 - ks_pcie->va_app_base = devm_ioremap_resource(dev, res); 467 - if (IS_ERR(ks_pcie->va_app_base)) 468 - return PTR_ERR(ks_pcie->va_app_base); 469 - 470 - ks_pcie->app = *res; 471 - 472 - /* Create legacy IRQ domain */ 473 - ks_pcie->legacy_irq_domain = 474 - irq_domain_add_linear(ks_pcie->legacy_intc_np, 475 - PCI_NUM_INTX, 476 - &ks_dw_pcie_legacy_irq_domain_ops, 477 - NULL); 478 - if (!ks_pcie->legacy_irq_domain) { 479 - dev_err(dev, "Failed to add irq domain for legacy irqs\n"); 480 - return -EINVAL; 481 - } 482 - 483 - return dw_pcie_host_init(pp); 484 - }
+682 -116
drivers/pci/controller/dwc/pci-keystone.c
··· 9 9 * Implementation based on pci-exynos.c and pcie-designware.c 10 10 */ 11 11 12 - #include <linux/irqchip/chained_irq.h> 13 12 #include <linux/clk.h> 14 13 #include <linux/delay.h> 15 - #include <linux/interrupt.h> 16 - #include <linux/irqdomain.h> 17 14 #include <linux/init.h> 15 + #include <linux/interrupt.h> 16 + #include <linux/irqchip/chained_irq.h> 17 + #include <linux/irqdomain.h> 18 + #include <linux/mfd/syscon.h> 18 19 #include <linux/msi.h> 19 - #include <linux/of_irq.h> 20 20 #include <linux/of.h> 21 + #include <linux/of_irq.h> 21 22 #include <linux/of_pci.h> 22 - #include <linux/platform_device.h> 23 23 #include <linux/phy/phy.h> 24 + #include <linux/platform_device.h> 25 + #include <linux/regmap.h> 24 26 #include <linux/resource.h> 25 27 #include <linux/signal.h> 26 28 27 29 #include "pcie-designware.h" 28 - #include "pci-keystone.h" 29 30 30 - #define DRIVER_NAME "keystone-pcie" 31 + #define PCIE_VENDORID_MASK 0xffff 32 + #define PCIE_DEVICEID_SHIFT 16 31 33 32 - /* DEV_STAT_CTRL */ 33 - #define PCIE_CAP_BASE 0x70 34 + /* Application registers */ 35 + #define CMD_STATUS 0x004 36 + #define LTSSM_EN_VAL BIT(0) 37 + #define OB_XLAT_EN_VAL BIT(1) 38 + #define DBI_CS2 BIT(5) 34 39 40 + #define CFG_SETUP 0x008 41 + #define CFG_BUS(x) (((x) & 0xff) << 16) 42 + #define CFG_DEVICE(x) (((x) & 0x1f) << 8) 43 + #define CFG_FUNC(x) ((x) & 0x7) 44 + #define CFG_TYPE1 BIT(24) 45 + 46 + #define OB_SIZE 0x030 47 + #define SPACE0_REMOTE_CFG_OFFSET 0x1000 48 + #define OB_OFFSET_INDEX(n) (0x200 + (8 * (n))) 49 + #define OB_OFFSET_HI(n) (0x204 + (8 * (n))) 50 + #define OB_ENABLEN BIT(0) 51 + #define OB_WIN_SIZE 8 /* 8MB */ 52 + 53 + /* IRQ register defines */ 54 + #define IRQ_EOI 0x050 55 + #define IRQ_STATUS 0x184 56 + #define IRQ_ENABLE_SET 0x188 57 + #define IRQ_ENABLE_CLR 0x18c 58 + 59 + #define MSI_IRQ 0x054 60 + #define MSI0_IRQ_STATUS 0x104 61 + #define MSI0_IRQ_ENABLE_SET 0x108 62 + #define MSI0_IRQ_ENABLE_CLR 0x10c 63 + #define IRQ_STATUS 0x184 64 + #define MSI_IRQ_OFFSET 4 65 + 66 + #define ERR_IRQ_STATUS 0x1c4 67 + #define ERR_IRQ_ENABLE_SET 0x1c8 68 + #define ERR_AER BIT(5) /* ECRC error */ 69 + #define ERR_AXI BIT(4) /* AXI tag lookup fatal error */ 70 + #define ERR_CORR BIT(3) /* Correctable error */ 71 + #define ERR_NONFATAL BIT(2) /* Non-fatal error */ 72 + #define ERR_FATAL BIT(1) /* Fatal error */ 73 + #define ERR_SYS BIT(0) /* System error */ 74 + #define ERR_IRQ_ALL (ERR_AER | ERR_AXI | ERR_CORR | \ 75 + ERR_NONFATAL | ERR_FATAL | ERR_SYS) 76 + 77 + #define MAX_MSI_HOST_IRQS 8 35 78 /* PCIE controller device IDs */ 36 - #define PCIE_RC_K2HK 0xb008 37 - #define PCIE_RC_K2E 0xb009 38 - #define PCIE_RC_K2L 0xb00a 79 + #define PCIE_RC_K2HK 0xb008 80 + #define PCIE_RC_K2E 0xb009 81 + #define PCIE_RC_K2L 0xb00a 82 + #define PCIE_RC_K2G 0xb00b 39 83 40 - #define to_keystone_pcie(x) dev_get_drvdata((x)->dev) 84 + #define to_keystone_pcie(x) dev_get_drvdata((x)->dev) 41 85 42 - static void quirk_limit_mrrs(struct pci_dev *dev) 86 + struct keystone_pcie { 87 + struct dw_pcie *pci; 88 + /* PCI Device ID */ 89 + u32 device_id; 90 + int num_legacy_host_irqs; 91 + int legacy_host_irqs[PCI_NUM_INTX]; 92 + struct device_node *legacy_intc_np; 93 + 94 + int num_msi_host_irqs; 95 + int msi_host_irqs[MAX_MSI_HOST_IRQS]; 96 + int num_lanes; 97 + u32 num_viewport; 98 + struct phy **phy; 99 + struct device_link **link; 100 + struct device_node *msi_intc_np; 101 + struct irq_domain *legacy_irq_domain; 102 + struct device_node *np; 103 + 104 + int error_irq; 105 + 106 + /* Application register 
space */ 107 + void __iomem *va_app_base; /* DT 1st resource */ 108 + struct resource app; 109 + }; 110 + 111 + static inline void update_reg_offset_bit_pos(u32 offset, u32 *reg_offset, 112 + u32 *bit_pos) 113 + { 114 + *reg_offset = offset % 8; 115 + *bit_pos = offset >> 3; 116 + } 117 + 118 + static phys_addr_t ks_pcie_get_msi_addr(struct pcie_port *pp) 119 + { 120 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 121 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 122 + 123 + return ks_pcie->app.start + MSI_IRQ; 124 + } 125 + 126 + static u32 ks_pcie_app_readl(struct keystone_pcie *ks_pcie, u32 offset) 127 + { 128 + return readl(ks_pcie->va_app_base + offset); 129 + } 130 + 131 + static void ks_pcie_app_writel(struct keystone_pcie *ks_pcie, u32 offset, 132 + u32 val) 133 + { 134 + writel(val, ks_pcie->va_app_base + offset); 135 + } 136 + 137 + static void ks_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset) 138 + { 139 + struct dw_pcie *pci = ks_pcie->pci; 140 + struct pcie_port *pp = &pci->pp; 141 + struct device *dev = pci->dev; 142 + u32 pending, vector; 143 + int src, virq; 144 + 145 + pending = ks_pcie_app_readl(ks_pcie, MSI0_IRQ_STATUS + (offset << 4)); 146 + 147 + /* 148 + * MSI0 status bit 0-3 shows vectors 0, 8, 16, 24, MSI1 status bit 149 + * shows 1, 9, 17, 25 and so forth 150 + */ 151 + for (src = 0; src < 4; src++) { 152 + if (BIT(src) & pending) { 153 + vector = offset + (src << 3); 154 + virq = irq_linear_revmap(pp->irq_domain, vector); 155 + dev_dbg(dev, "irq: bit %d, vector %d, virq %d\n", 156 + src, vector, virq); 157 + generic_handle_irq(virq); 158 + } 159 + } 160 + } 161 + 162 + static void ks_pcie_msi_irq_ack(int irq, struct pcie_port *pp) 163 + { 164 + u32 reg_offset, bit_pos; 165 + struct keystone_pcie *ks_pcie; 166 + struct dw_pcie *pci; 167 + 168 + pci = to_dw_pcie_from_pp(pp); 169 + ks_pcie = to_keystone_pcie(pci); 170 + update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos); 171 + 172 + ks_pcie_app_writel(ks_pcie, MSI0_IRQ_STATUS + (reg_offset << 4), 173 + BIT(bit_pos)); 174 + ks_pcie_app_writel(ks_pcie, IRQ_EOI, reg_offset + MSI_IRQ_OFFSET); 175 + } 176 + 177 + static void ks_pcie_msi_set_irq(struct pcie_port *pp, int irq) 178 + { 179 + u32 reg_offset, bit_pos; 180 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 181 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 182 + 183 + update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos); 184 + ks_pcie_app_writel(ks_pcie, MSI0_IRQ_ENABLE_SET + (reg_offset << 4), 185 + BIT(bit_pos)); 186 + } 187 + 188 + static void ks_pcie_msi_clear_irq(struct pcie_port *pp, int irq) 189 + { 190 + u32 reg_offset, bit_pos; 191 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 192 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 193 + 194 + update_reg_offset_bit_pos(irq, &reg_offset, &bit_pos); 195 + ks_pcie_app_writel(ks_pcie, MSI0_IRQ_ENABLE_CLR + (reg_offset << 4), 196 + BIT(bit_pos)); 197 + } 198 + 199 + static int ks_pcie_msi_host_init(struct pcie_port *pp) 200 + { 201 + return dw_pcie_allocate_domains(pp); 202 + } 203 + 204 + static void ks_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie) 205 + { 206 + int i; 207 + 208 + for (i = 0; i < PCI_NUM_INTX; i++) 209 + ks_pcie_app_writel(ks_pcie, IRQ_ENABLE_SET + (i << 4), 0x1); 210 + } 211 + 212 + static void ks_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, 213 + int offset) 214 + { 215 + struct dw_pcie *pci = ks_pcie->pci; 216 + struct device *dev = pci->dev; 217 + u32 pending; 218 + int virq; 219 + 220 + pending = ks_pcie_app_readl(ks_pcie, 
IRQ_STATUS + (offset << 4)); 221 + 222 + if (BIT(0) & pending) { 223 + virq = irq_linear_revmap(ks_pcie->legacy_irq_domain, offset); 224 + dev_dbg(dev, ": irq: irq_offset %d, virq %d\n", offset, virq); 225 + generic_handle_irq(virq); 226 + } 227 + 228 + /* EOI the INTx interrupt */ 229 + ks_pcie_app_writel(ks_pcie, IRQ_EOI, offset); 230 + } 231 + 232 + static void ks_pcie_enable_error_irq(struct keystone_pcie *ks_pcie) 233 + { 234 + ks_pcie_app_writel(ks_pcie, ERR_IRQ_ENABLE_SET, ERR_IRQ_ALL); 235 + } 236 + 237 + static irqreturn_t ks_pcie_handle_error_irq(struct keystone_pcie *ks_pcie) 238 + { 239 + u32 reg; 240 + struct device *dev = ks_pcie->pci->dev; 241 + 242 + reg = ks_pcie_app_readl(ks_pcie, ERR_IRQ_STATUS); 243 + if (!reg) 244 + return IRQ_NONE; 245 + 246 + if (reg & ERR_SYS) 247 + dev_err(dev, "System Error\n"); 248 + 249 + if (reg & ERR_FATAL) 250 + dev_err(dev, "Fatal Error\n"); 251 + 252 + if (reg & ERR_NONFATAL) 253 + dev_dbg(dev, "Non Fatal Error\n"); 254 + 255 + if (reg & ERR_CORR) 256 + dev_dbg(dev, "Correctable Error\n"); 257 + 258 + if (reg & ERR_AXI) 259 + dev_err(dev, "AXI tag lookup fatal Error\n"); 260 + 261 + if (reg & ERR_AER) 262 + dev_err(dev, "ECRC Error\n"); 263 + 264 + ks_pcie_app_writel(ks_pcie, ERR_IRQ_STATUS, reg); 265 + 266 + return IRQ_HANDLED; 267 + } 268 + 269 + static void ks_pcie_ack_legacy_irq(struct irq_data *d) 270 + { 271 + } 272 + 273 + static void ks_pcie_mask_legacy_irq(struct irq_data *d) 274 + { 275 + } 276 + 277 + static void ks_pcie_unmask_legacy_irq(struct irq_data *d) 278 + { 279 + } 280 + 281 + static struct irq_chip ks_pcie_legacy_irq_chip = { 282 + .name = "Keystone-PCI-Legacy-IRQ", 283 + .irq_ack = ks_pcie_ack_legacy_irq, 284 + .irq_mask = ks_pcie_mask_legacy_irq, 285 + .irq_unmask = ks_pcie_unmask_legacy_irq, 286 + }; 287 + 288 + static int ks_pcie_init_legacy_irq_map(struct irq_domain *d, 289 + unsigned int irq, 290 + irq_hw_number_t hw_irq) 291 + { 292 + irq_set_chip_and_handler(irq, &ks_pcie_legacy_irq_chip, 293 + handle_level_irq); 294 + irq_set_chip_data(irq, d->host_data); 295 + 296 + return 0; 297 + } 298 + 299 + static const struct irq_domain_ops ks_pcie_legacy_irq_domain_ops = { 300 + .map = ks_pcie_init_legacy_irq_map, 301 + .xlate = irq_domain_xlate_onetwocell, 302 + }; 303 + 304 + /** 305 + * ks_pcie_set_dbi_mode() - Set DBI mode to access overlaid BAR mask 306 + * registers 307 + * 308 + * Since modification of dbi_cs2 involves different clock domain, read the 309 + * status back to ensure the transition is complete. 310 + */ 311 + static void ks_pcie_set_dbi_mode(struct keystone_pcie *ks_pcie) 312 + { 313 + u32 val; 314 + 315 + val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 316 + val |= DBI_CS2; 317 + ks_pcie_app_writel(ks_pcie, CMD_STATUS, val); 318 + 319 + do { 320 + val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 321 + } while (!(val & DBI_CS2)); 322 + } 323 + 324 + /** 325 + * ks_pcie_clear_dbi_mode() - Disable DBI mode 326 + * 327 + * Since modification of dbi_cs2 involves different clock domain, read the 328 + * status back to ensure the transition is complete. 
329 + */ 330 + static void ks_pcie_clear_dbi_mode(struct keystone_pcie *ks_pcie) 331 + { 332 + u32 val; 333 + 334 + val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 335 + val &= ~DBI_CS2; 336 + ks_pcie_app_writel(ks_pcie, CMD_STATUS, val); 337 + 338 + do { 339 + val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 340 + } while (val & DBI_CS2); 341 + } 342 + 343 + static void ks_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie) 344 + { 345 + u32 val; 346 + u32 num_viewport = ks_pcie->num_viewport; 347 + struct dw_pcie *pci = ks_pcie->pci; 348 + struct pcie_port *pp = &pci->pp; 349 + u64 start = pp->mem->start; 350 + u64 end = pp->mem->end; 351 + int i; 352 + 353 + /* Disable BARs for inbound access */ 354 + ks_pcie_set_dbi_mode(ks_pcie); 355 + dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 0); 356 + dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_1, 0); 357 + ks_pcie_clear_dbi_mode(ks_pcie); 358 + 359 + val = ilog2(OB_WIN_SIZE); 360 + ks_pcie_app_writel(ks_pcie, OB_SIZE, val); 361 + 362 + /* Using Direct 1:1 mapping of RC <-> PCI memory space */ 363 + for (i = 0; i < num_viewport && (start < end); i++) { 364 + ks_pcie_app_writel(ks_pcie, OB_OFFSET_INDEX(i), 365 + lower_32_bits(start) | OB_ENABLEN); 366 + ks_pcie_app_writel(ks_pcie, OB_OFFSET_HI(i), 367 + upper_32_bits(start)); 368 + start += OB_WIN_SIZE; 369 + } 370 + 371 + val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 372 + val |= OB_XLAT_EN_VAL; 373 + ks_pcie_app_writel(ks_pcie, CMD_STATUS, val); 374 + } 375 + 376 + static int ks_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, 377 + unsigned int devfn, int where, int size, 378 + u32 *val) 379 + { 380 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 381 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 382 + u32 reg; 383 + 384 + reg = CFG_BUS(bus->number) | CFG_DEVICE(PCI_SLOT(devfn)) | 385 + CFG_FUNC(PCI_FUNC(devfn)); 386 + if (bus->parent->number != pp->root_bus_nr) 387 + reg |= CFG_TYPE1; 388 + ks_pcie_app_writel(ks_pcie, CFG_SETUP, reg); 389 + 390 + return dw_pcie_read(pp->va_cfg0_base + where, size, val); 391 + } 392 + 393 + static int ks_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, 394 + unsigned int devfn, int where, int size, 395 + u32 val) 396 + { 397 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 398 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 399 + u32 reg; 400 + 401 + reg = CFG_BUS(bus->number) | CFG_DEVICE(PCI_SLOT(devfn)) | 402 + CFG_FUNC(PCI_FUNC(devfn)); 403 + if (bus->parent->number != pp->root_bus_nr) 404 + reg |= CFG_TYPE1; 405 + ks_pcie_app_writel(ks_pcie, CFG_SETUP, reg); 406 + 407 + return dw_pcie_write(pp->va_cfg0_base + where, size, val); 408 + } 409 + 410 + /** 411 + * ks_pcie_v3_65_scan_bus() - keystone scan_bus post initialization 412 + * 413 + * This sets BAR0 to enable inbound access for MSI_IRQ register 414 + */ 415 + static void ks_pcie_v3_65_scan_bus(struct pcie_port *pp) 416 + { 417 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 418 + struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 419 + 420 + /* Configure and set up BAR0 */ 421 + ks_pcie_set_dbi_mode(ks_pcie); 422 + 423 + /* Enable BAR0 */ 424 + dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, 1); 425 + dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, SZ_4K - 1); 426 + 427 + ks_pcie_clear_dbi_mode(ks_pcie); 428 + 429 + /* 430 + * For BAR0, just setting bus address for inbound writes (MSI) should 431 + * be sufficient. Use physical address to avoid any conflicts. 
432 + */ 433 + dw_pcie_writel_dbi(pci, PCI_BASE_ADDRESS_0, ks_pcie->app.start); 434 + } 435 + 436 + /** 437 + * ks_pcie_link_up() - Check if link up 438 + */ 439 + static int ks_pcie_link_up(struct dw_pcie *pci) 440 + { 441 + u32 val; 442 + 443 + val = dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG0); 444 + val &= PORT_LOGIC_LTSSM_STATE_MASK; 445 + return (val == PORT_LOGIC_LTSSM_STATE_L0); 446 + } 447 + 448 + static void ks_pcie_initiate_link_train(struct keystone_pcie *ks_pcie) 449 + { 450 + u32 val; 451 + 452 + /* Disable Link training */ 453 + val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 454 + val &= ~LTSSM_EN_VAL; 455 + ks_pcie_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val); 456 + 457 + /* Initiate Link Training */ 458 + val = ks_pcie_app_readl(ks_pcie, CMD_STATUS); 459 + ks_pcie_app_writel(ks_pcie, CMD_STATUS, LTSSM_EN_VAL | val); 460 + } 461 + 462 + /** 463 + * ks_pcie_dw_host_init() - initialize host for v3_65 dw hardware 464 + * 465 + * Ioremap the register resources, initialize legacy irq domain 466 + * and call dw_pcie_v3_65_host_init() API to initialize the Keystone 467 + * PCI host controller. 468 + */ 469 + static int __init ks_pcie_dw_host_init(struct keystone_pcie *ks_pcie) 470 + { 471 + struct dw_pcie *pci = ks_pcie->pci; 472 + struct pcie_port *pp = &pci->pp; 473 + struct device *dev = pci->dev; 474 + struct platform_device *pdev = to_platform_device(dev); 475 + struct resource *res; 476 + 477 + /* Index 0 is the config reg. space address */ 478 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 479 + pci->dbi_base = devm_pci_remap_cfg_resource(dev, res); 480 + if (IS_ERR(pci->dbi_base)) 481 + return PTR_ERR(pci->dbi_base); 482 + 483 + /* 484 + * We set these same and is used in pcie rd/wr_other_conf 485 + * functions 486 + */ 487 + pp->va_cfg0_base = pci->dbi_base + SPACE0_REMOTE_CFG_OFFSET; 488 + pp->va_cfg1_base = pp->va_cfg0_base; 489 + 490 + /* Index 1 is the application reg. 
space address */ 491 + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 492 + ks_pcie->va_app_base = devm_ioremap_resource(dev, res); 493 + if (IS_ERR(ks_pcie->va_app_base)) 494 + return PTR_ERR(ks_pcie->va_app_base); 495 + 496 + ks_pcie->app = *res; 497 + 498 + /* Create legacy IRQ domain */ 499 + ks_pcie->legacy_irq_domain = 500 + irq_domain_add_linear(ks_pcie->legacy_intc_np, 501 + PCI_NUM_INTX, 502 + &ks_pcie_legacy_irq_domain_ops, 503 + NULL); 504 + if (!ks_pcie->legacy_irq_domain) { 505 + dev_err(dev, "Failed to add irq domain for legacy irqs\n"); 506 + return -EINVAL; 507 + } 508 + 509 + return dw_pcie_host_init(pp); 510 + } 511 + 512 + static void ks_pcie_quirk(struct pci_dev *dev) 43 513 { 44 514 struct pci_bus *bus = dev->bus; 45 - struct pci_dev *bridge = bus->self; 515 + struct pci_dev *bridge; 46 516 static const struct pci_device_id rc_pci_devids[] = { 47 517 { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2HK), 48 518 .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, ··· 520 50 .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, 521 51 { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2L), 522 52 .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, 53 + { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2G), 54 + .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, 523 55 { 0, }, 524 56 }; 525 57 526 58 if (pci_is_root_bus(bus)) 527 - return; 59 + bridge = dev; 528 60 529 61 /* look for the host bridge */ 530 62 while (!pci_is_root_bus(bus)) { ··· 534 62 bus = bus->parent; 535 63 } 536 64 537 - if (bridge) { 538 - /* 539 - * Keystone PCI controller has a h/w limitation of 540 - * 256 bytes maximum read request size. It can't handle 541 - * anything higher than this. So force this limit on 542 - * all downstream devices. 543 - */ 544 - if (pci_match_id(rc_pci_devids, bridge)) { 545 - if (pcie_get_readrq(dev) > 256) { 546 - dev_info(&dev->dev, "limiting MRRS to 256\n"); 547 - pcie_set_readrq(dev, 256); 548 - } 65 + if (!bridge) 66 + return; 67 + 68 + /* 69 + * Keystone PCI controller has a h/w limitation of 70 + * 256 bytes maximum read request size. It can't handle 71 + * anything higher than this. So force this limit on 72 + * all downstream devices. 73 + */ 74 + if (pci_match_id(rc_pci_devids, bridge)) { 75 + if (pcie_get_readrq(dev) > 256) { 76 + dev_info(&dev->dev, "limiting MRRS to 256\n"); 77 + pcie_set_readrq(dev, 256); 549 78 } 550 79 } 551 80 } 552 - DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, quirk_limit_mrrs); 81 + DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, ks_pcie_quirk); 553 82 554 83 static int ks_pcie_establish_link(struct keystone_pcie *ks_pcie) 555 84 { 556 85 struct dw_pcie *pci = ks_pcie->pci; 557 - struct pcie_port *pp = &pci->pp; 558 86 struct device *dev = pci->dev; 559 - unsigned int retries; 560 - 561 - dw_pcie_setup_rc(pp); 562 87 563 88 if (dw_pcie_link_up(pci)) { 564 89 dev_info(dev, "Link already up\n"); 565 90 return 0; 566 91 } 567 92 93 + ks_pcie_initiate_link_train(ks_pcie); 94 + 568 95 /* check if the link is up or not */ 569 - for (retries = 0; retries < 5; retries++) { 570 - ks_dw_pcie_initiate_link_train(ks_pcie); 571 - if (!dw_pcie_wait_for_link(pci)) 572 - return 0; 573 - } 96 + if (!dw_pcie_wait_for_link(pci)) 97 + return 0; 574 98 575 99 dev_err(dev, "phy link never came up\n"); 576 100 return -ETIMEDOUT; ··· 589 121 * ack operation. 
590 122 */ 591 123 chained_irq_enter(chip, desc); 592 - ks_dw_pcie_handle_msi_irq(ks_pcie, offset); 124 + ks_pcie_handle_msi_irq(ks_pcie, offset); 593 125 chained_irq_exit(chip, desc); 594 126 } 595 127 ··· 618 150 * ack operation. 619 151 */ 620 152 chained_irq_enter(chip, desc); 621 - ks_dw_pcie_handle_legacy_irq(ks_pcie, irq_offset); 153 + ks_pcie_handle_legacy_irq(ks_pcie, irq_offset); 622 154 chained_irq_exit(chip, desc); 623 155 } 624 156 ··· 690 222 ks_pcie_legacy_irq_handler, 691 223 ks_pcie); 692 224 } 693 - ks_dw_pcie_enable_legacy_irqs(ks_pcie); 225 + ks_pcie_enable_legacy_irqs(ks_pcie); 694 226 695 227 /* MSI IRQ */ 696 228 if (IS_ENABLED(CONFIG_PCI_MSI)) { ··· 702 234 } 703 235 704 236 if (ks_pcie->error_irq > 0) 705 - ks_dw_pcie_enable_error_irq(ks_pcie); 237 + ks_pcie_enable_error_irq(ks_pcie); 706 238 } 707 239 708 240 /* ··· 710 242 * bus error instead of returning 0xffffffff. This handler always returns 0 711 243 * for this kind of faults. 712 244 */ 713 - static int keystone_pcie_fault(unsigned long addr, unsigned int fsr, 714 - struct pt_regs *regs) 245 + static int ks_pcie_fault(unsigned long addr, unsigned int fsr, 246 + struct pt_regs *regs) 715 247 { 716 248 unsigned long instr = *(unsigned long *) instruction_pointer(regs); 717 249 ··· 725 257 return 0; 726 258 } 727 259 260 + static int __init ks_pcie_init_id(struct keystone_pcie *ks_pcie) 261 + { 262 + int ret; 263 + unsigned int id; 264 + struct regmap *devctrl_regs; 265 + struct dw_pcie *pci = ks_pcie->pci; 266 + struct device *dev = pci->dev; 267 + struct device_node *np = dev->of_node; 268 + 269 + devctrl_regs = syscon_regmap_lookup_by_phandle(np, "ti,syscon-pcie-id"); 270 + if (IS_ERR(devctrl_regs)) 271 + return PTR_ERR(devctrl_regs); 272 + 273 + ret = regmap_read(devctrl_regs, 0, &id); 274 + if (ret) 275 + return ret; 276 + 277 + dw_pcie_writew_dbi(pci, PCI_VENDOR_ID, id & PCIE_VENDORID_MASK); 278 + dw_pcie_writew_dbi(pci, PCI_DEVICE_ID, id >> PCIE_DEVICEID_SHIFT); 279 + 280 + return 0; 281 + } 282 + 728 283 static int __init ks_pcie_host_init(struct pcie_port *pp) 729 284 { 730 285 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 731 286 struct keystone_pcie *ks_pcie = to_keystone_pcie(pci); 732 - u32 val; 287 + int ret; 288 + 289 + dw_pcie_setup_rc(pp); 733 290 734 291 ks_pcie_establish_link(ks_pcie); 735 - ks_dw_pcie_setup_rc_app_regs(ks_pcie); 292 + ks_pcie_setup_rc_app_regs(ks_pcie); 736 293 ks_pcie_setup_interrupts(ks_pcie); 737 294 writew(PCI_IO_RANGE_TYPE_32 | (PCI_IO_RANGE_TYPE_32 << 8), 738 295 pci->dbi_base + PCI_IO_BASE); 739 296 740 - /* update the Vendor ID */ 741 - writew(ks_pcie->device_id, pci->dbi_base + PCI_DEVICE_ID); 742 - 743 - /* update the DEV_STAT_CTRL to publish right mrrs */ 744 - val = readl(pci->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL); 745 - val &= ~PCI_EXP_DEVCTL_READRQ; 746 - /* set the mrrs to 256 bytes */ 747 - val |= BIT(12); 748 - writel(val, pci->dbi_base + PCIE_CAP_BASE + PCI_EXP_DEVCTL); 297 + ret = ks_pcie_init_id(ks_pcie); 298 + if (ret < 0) 299 + return ret; 749 300 750 301 /* 751 302 * PCIe access errors that result into OCP errors are caught by ARM as 752 303 * "External aborts" 753 304 */ 754 - hook_fault_code(17, keystone_pcie_fault, SIGBUS, 0, 305 + hook_fault_code(17, ks_pcie_fault, SIGBUS, 0, 755 306 "Asynchronous external abort"); 756 307 757 308 return 0; 758 309 } 759 310 760 - static const struct dw_pcie_host_ops keystone_pcie_host_ops = { 761 - .rd_other_conf = ks_dw_pcie_rd_other_conf, 762 - .wr_other_conf = ks_dw_pcie_wr_other_conf, 311 + static const 
struct dw_pcie_host_ops ks_pcie_host_ops = { 312 + .rd_other_conf = ks_pcie_rd_other_conf, 313 + .wr_other_conf = ks_pcie_wr_other_conf, 763 314 .host_init = ks_pcie_host_init, 764 - .msi_set_irq = ks_dw_pcie_msi_set_irq, 765 - .msi_clear_irq = ks_dw_pcie_msi_clear_irq, 766 - .get_msi_addr = ks_dw_pcie_get_msi_addr, 767 - .msi_host_init = ks_dw_pcie_msi_host_init, 768 - .msi_irq_ack = ks_dw_pcie_msi_irq_ack, 769 - .scan_bus = ks_dw_pcie_v3_65_scan_bus, 315 + .msi_set_irq = ks_pcie_msi_set_irq, 316 + .msi_clear_irq = ks_pcie_msi_clear_irq, 317 + .get_msi_addr = ks_pcie_get_msi_addr, 318 + .msi_host_init = ks_pcie_msi_host_init, 319 + .msi_irq_ack = ks_pcie_msi_irq_ack, 320 + .scan_bus = ks_pcie_v3_65_scan_bus, 770 321 }; 771 322 772 - static irqreturn_t pcie_err_irq_handler(int irq, void *priv) 323 + static irqreturn_t ks_pcie_err_irq_handler(int irq, void *priv) 773 324 { 774 325 struct keystone_pcie *ks_pcie = priv; 775 326 776 - return ks_dw_pcie_handle_error_irq(ks_pcie); 327 + return ks_pcie_handle_error_irq(ks_pcie); 777 328 } 778 329 779 - static int __init ks_add_pcie_port(struct keystone_pcie *ks_pcie, 780 - struct platform_device *pdev) 330 + static int __init ks_pcie_add_pcie_port(struct keystone_pcie *ks_pcie, 331 + struct platform_device *pdev) 781 332 { 782 333 struct dw_pcie *pci = ks_pcie->pci; 783 334 struct pcie_port *pp = &pci->pp; ··· 825 338 if (ks_pcie->error_irq <= 0) 826 339 dev_info(dev, "no error IRQ defined\n"); 827 340 else { 828 - ret = request_irq(ks_pcie->error_irq, pcie_err_irq_handler, 341 + ret = request_irq(ks_pcie->error_irq, ks_pcie_err_irq_handler, 829 342 IRQF_SHARED, "pcie-error-irq", ks_pcie); 830 343 if (ret < 0) { 831 344 dev_err(dev, "failed to request error IRQ %d\n", ··· 834 347 } 835 348 } 836 349 837 - pp->ops = &keystone_pcie_host_ops; 838 - ret = ks_dw_pcie_host_init(ks_pcie, ks_pcie->msi_intc_np); 350 + pp->ops = &ks_pcie_host_ops; 351 + ret = ks_pcie_dw_host_init(ks_pcie); 839 352 if (ret) { 840 353 dev_err(dev, "failed to initialize host\n"); 841 354 return ret; ··· 852 365 { }, 853 366 }; 854 367 855 - static const struct dw_pcie_ops dw_pcie_ops = { 856 - .link_up = ks_dw_pcie_link_up, 368 + static const struct dw_pcie_ops ks_pcie_dw_pcie_ops = { 369 + .link_up = ks_pcie_link_up, 857 370 }; 858 371 859 - static int __exit ks_pcie_remove(struct platform_device *pdev) 372 + static void ks_pcie_disable_phy(struct keystone_pcie *ks_pcie) 860 373 { 861 - struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev); 374 + int num_lanes = ks_pcie->num_lanes; 862 375 863 - clk_disable_unprepare(ks_pcie->clk); 376 + while (num_lanes--) { 377 + phy_power_off(ks_pcie->phy[num_lanes]); 378 + phy_exit(ks_pcie->phy[num_lanes]); 379 + } 380 + } 381 + 382 + static int ks_pcie_enable_phy(struct keystone_pcie *ks_pcie) 383 + { 384 + int i; 385 + int ret; 386 + int num_lanes = ks_pcie->num_lanes; 387 + 388 + for (i = 0; i < num_lanes; i++) { 389 + ret = phy_init(ks_pcie->phy[i]); 390 + if (ret < 0) 391 + goto err_phy; 392 + 393 + ret = phy_power_on(ks_pcie->phy[i]); 394 + if (ret < 0) { 395 + phy_exit(ks_pcie->phy[i]); 396 + goto err_phy; 397 + } 398 + } 864 399 865 400 return 0; 401 + 402 + err_phy: 403 + while (--i >= 0) { 404 + phy_power_off(ks_pcie->phy[i]); 405 + phy_exit(ks_pcie->phy[i]); 406 + } 407 + 408 + return ret; 866 409 } 867 410 868 411 static int __init ks_pcie_probe(struct platform_device *pdev) 869 412 { 870 413 struct device *dev = &pdev->dev; 414 + struct device_node *np = dev->of_node; 871 415 struct dw_pcie *pci; 872 416 struct 
keystone_pcie *ks_pcie; 873 - struct resource *res; 874 - void __iomem *reg_p; 875 - struct phy *phy; 417 + struct device_link **link; 418 + u32 num_viewport; 419 + struct phy **phy; 420 + u32 num_lanes; 421 + char name[10]; 876 422 int ret; 423 + int i; 877 424 878 425 ks_pcie = devm_kzalloc(dev, sizeof(*ks_pcie), GFP_KERNEL); 879 426 if (!ks_pcie) ··· 918 397 return -ENOMEM; 919 398 920 399 pci->dev = dev; 921 - pci->ops = &dw_pcie_ops; 400 + pci->ops = &ks_pcie_dw_pcie_ops; 922 401 923 - ks_pcie->pci = pci; 924 - 925 - /* initialize SerDes Phy if present */ 926 - phy = devm_phy_get(dev, "pcie-phy"); 927 - if (PTR_ERR_OR_ZERO(phy) == -EPROBE_DEFER) 928 - return PTR_ERR(phy); 929 - 930 - if (!IS_ERR_OR_NULL(phy)) { 931 - ret = phy_init(phy); 932 - if (ret < 0) 933 - return ret; 934 - } 935 - 936 - /* index 2 is to read PCI DEVICE_ID */ 937 - res = platform_get_resource(pdev, IORESOURCE_MEM, 2); 938 - reg_p = devm_ioremap_resource(dev, res); 939 - if (IS_ERR(reg_p)) 940 - return PTR_ERR(reg_p); 941 - ks_pcie->device_id = readl(reg_p) >> 16; 942 - devm_iounmap(dev, reg_p); 943 - devm_release_mem_region(dev, res->start, resource_size(res)); 944 - 945 - ks_pcie->np = dev->of_node; 946 - platform_set_drvdata(pdev, ks_pcie); 947 - ks_pcie->clk = devm_clk_get(dev, "pcie"); 948 - if (IS_ERR(ks_pcie->clk)) { 949 - dev_err(dev, "Failed to get pcie rc clock\n"); 950 - return PTR_ERR(ks_pcie->clk); 951 - } 952 - ret = clk_prepare_enable(ks_pcie->clk); 953 - if (ret) 402 + ret = of_property_read_u32(np, "num-viewport", &num_viewport); 403 + if (ret < 0) { 404 + dev_err(dev, "unable to read *num-viewport* property\n"); 954 405 return ret; 406 + } 407 + 408 + ret = of_property_read_u32(np, "num-lanes", &num_lanes); 409 + if (ret) 410 + num_lanes = 1; 411 + 412 + phy = devm_kzalloc(dev, sizeof(*phy) * num_lanes, GFP_KERNEL); 413 + if (!phy) 414 + return -ENOMEM; 415 + 416 + link = devm_kzalloc(dev, sizeof(*link) * num_lanes, GFP_KERNEL); 417 + if (!link) 418 + return -ENOMEM; 419 + 420 + for (i = 0; i < num_lanes; i++) { 421 + snprintf(name, sizeof(name), "pcie-phy%d", i); 422 + phy[i] = devm_phy_optional_get(dev, name); 423 + if (IS_ERR(phy[i])) { 424 + ret = PTR_ERR(phy[i]); 425 + goto err_link; 426 + } 427 + 428 + if (!phy[i]) 429 + continue; 430 + 431 + link[i] = device_link_add(dev, &phy[i]->dev, DL_FLAG_STATELESS); 432 + if (!link[i]) { 433 + ret = -EINVAL; 434 + goto err_link; 435 + } 436 + } 437 + 438 + ks_pcie->np = np; 439 + ks_pcie->pci = pci; 440 + ks_pcie->link = link; 441 + ks_pcie->num_lanes = num_lanes; 442 + ks_pcie->num_viewport = num_viewport; 443 + ks_pcie->phy = phy; 444 + 445 + ret = ks_pcie_enable_phy(ks_pcie); 446 + if (ret) { 447 + dev_err(dev, "failed to enable phy\n"); 448 + goto err_link; 449 + } 955 450 956 451 platform_set_drvdata(pdev, ks_pcie); 452 + pm_runtime_enable(dev); 453 + ret = pm_runtime_get_sync(dev); 454 + if (ret < 0) { 455 + dev_err(dev, "pm_runtime_get_sync failed\n"); 456 + goto err_get_sync; 457 + } 957 458 958 - ret = ks_add_pcie_port(ks_pcie, pdev); 459 + ret = ks_pcie_add_pcie_port(ks_pcie, pdev); 959 460 if (ret < 0) 960 - goto fail_clk; 461 + goto err_get_sync; 961 462 962 463 return 0; 963 - fail_clk: 964 - clk_disable_unprepare(ks_pcie->clk); 464 + 465 + err_get_sync: 466 + pm_runtime_put(dev); 467 + pm_runtime_disable(dev); 468 + ks_pcie_disable_phy(ks_pcie); 469 + 470 + err_link: 471 + while (--i >= 0 && link[i]) 472 + device_link_del(link[i]); 965 473 966 474 return ret; 475 + } 476 + 477 + static int __exit ks_pcie_remove(struct platform_device 
*pdev) 478 + { 479 + struct keystone_pcie *ks_pcie = platform_get_drvdata(pdev); 480 + struct device_link **link = ks_pcie->link; 481 + int num_lanes = ks_pcie->num_lanes; 482 + struct device *dev = &pdev->dev; 483 + 484 + pm_runtime_put(dev); 485 + pm_runtime_disable(dev); 486 + ks_pcie_disable_phy(ks_pcie); 487 + while (num_lanes--) 488 + device_link_del(link[num_lanes]); 489 + 490 + return 0; 967 491 } 968 492 969 493 static struct platform_driver ks_pcie_driver __refdata = {
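
The per-lane PHY handling added above follows the usual acquire-then-unwind idiom: bring up each lane in order and, on failure, roll back only the lanes that were already brought up before propagating the error. A minimal standalone sketch of the same pattern, with made-up fake_phy_*() stubs standing in for the kernel's phy_init()/phy_power_on() family:

#include <stdio.h>

#define NUM_LANES 4

/* Made-up stand-ins for the phy_*() calls; lane 2 fails to power on. */
static int fake_phy_init(int lane) { printf("init lane %d\n", lane); return 0; }
static int fake_phy_power_on(int lane) { return lane == 2 ? -1 : 0; }
static void fake_phy_power_off(int lane) { printf("power off lane %d\n", lane); }
static void fake_phy_exit(int lane) { printf("exit lane %d\n", lane); }

static int enable_phys(int num_lanes)
{
	int i, ret;

	for (i = 0; i < num_lanes; i++) {
		ret = fake_phy_init(i);
		if (ret < 0)
			goto err_phy;

		ret = fake_phy_power_on(i);
		if (ret < 0) {
			fake_phy_exit(i);
			goto err_phy;
		}
	}

	return 0;

err_phy:
	/* Unwind only the lanes that were fully brought up. */
	while (--i >= 0) {
		fake_phy_power_off(i);
		fake_phy_exit(i);
	}

	return ret;
}

int main(void)
{
	return enable_phys(NUM_LANES) ? 1 : 0;
}

ks_pcie_remove() relies on the same bookkeeping in reverse: num_lanes counts exactly the lanes that probe enabled, so teardown never touches a lane that failed to come up.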
-57
drivers/pci/controller/dwc/pci-keystone.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * Keystone PCI Controller's common includes 4 - * 5 - * Copyright (C) 2013-2014 Texas Instruments., Ltd. 6 - * http://www.ti.com 7 - * 8 - * Author: Murali Karicheri <m-karicheri2@ti.com> 9 - */ 10 - 11 - #define MAX_MSI_HOST_IRQS 8 12 - 13 - struct keystone_pcie { 14 - struct dw_pcie *pci; 15 - struct clk *clk; 16 - /* PCI Device ID */ 17 - u32 device_id; 18 - int num_legacy_host_irqs; 19 - int legacy_host_irqs[PCI_NUM_INTX]; 20 - struct device_node *legacy_intc_np; 21 - 22 - int num_msi_host_irqs; 23 - int msi_host_irqs[MAX_MSI_HOST_IRQS]; 24 - struct device_node *msi_intc_np; 25 - struct irq_domain *legacy_irq_domain; 26 - struct device_node *np; 27 - 28 - int error_irq; 29 - 30 - /* Application register space */ 31 - void __iomem *va_app_base; /* DT 1st resource */ 32 - struct resource app; 33 - }; 34 - 35 - /* Keystone DW specific MSI controller APIs/definitions */ 36 - void ks_dw_pcie_handle_msi_irq(struct keystone_pcie *ks_pcie, int offset); 37 - phys_addr_t ks_dw_pcie_get_msi_addr(struct pcie_port *pp); 38 - 39 - /* Keystone specific PCI controller APIs */ 40 - void ks_dw_pcie_enable_legacy_irqs(struct keystone_pcie *ks_pcie); 41 - void ks_dw_pcie_handle_legacy_irq(struct keystone_pcie *ks_pcie, int offset); 42 - void ks_dw_pcie_enable_error_irq(struct keystone_pcie *ks_pcie); 43 - irqreturn_t ks_dw_pcie_handle_error_irq(struct keystone_pcie *ks_pcie); 44 - int ks_dw_pcie_host_init(struct keystone_pcie *ks_pcie, 45 - struct device_node *msi_intc_np); 46 - int ks_dw_pcie_wr_other_conf(struct pcie_port *pp, struct pci_bus *bus, 47 - unsigned int devfn, int where, int size, u32 val); 48 - int ks_dw_pcie_rd_other_conf(struct pcie_port *pp, struct pci_bus *bus, 49 - unsigned int devfn, int where, int size, u32 *val); 50 - void ks_dw_pcie_setup_rc_app_regs(struct keystone_pcie *ks_pcie); 51 - void ks_dw_pcie_initiate_link_train(struct keystone_pcie *ks_pcie); 52 - void ks_dw_pcie_msi_irq_ack(int i, struct pcie_port *pp); 53 - void ks_dw_pcie_msi_set_irq(struct pcie_port *pp, int irq); 54 - void ks_dw_pcie_msi_clear_irq(struct pcie_port *pp, int irq); 55 - void ks_dw_pcie_v3_65_scan_bus(struct pcie_port *pp); 56 - int ks_dw_pcie_msi_host_init(struct pcie_port *pp); 57 - int ks_dw_pcie_link_up(struct dw_pcie *pci);
+4
drivers/pci/controller/dwc/pcie-designware.h
··· 36 36 #define PORT_LINK_MODE_4_LANES (0x7 << 16) 37 37 #define PORT_LINK_MODE_8_LANES (0xf << 16) 38 38 39 + #define PCIE_PORT_DEBUG0 0x728 40 + #define PORT_LOGIC_LTSSM_STATE_MASK 0x1f 41 + #define PORT_LOGIC_LTSSM_STATE_L0 0x11 42 + 39 43 #define PCIE_LINK_WIDTH_SPEED_CONTROL 0x80C 40 44 #define PORT_LOGIC_SPEED_CHANGE (0x1 << 17) 41 45 #define PORT_LOGIC_LINK_WIDTH_MASK (0x1f << 8)
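
These debug-register definitions are what the new ks_pcie_link_up() above keys off: the low five bits of PCIE_PORT_DEBUG0 expose the LTSSM state, and 0x11 is the fully-active L0 state. A standalone sketch of the decode, with the MMIO read replaced by a sample value:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PCIE_PORT_DEBUG0		0x728
#define PORT_LOGIC_LTSSM_STATE_MASK	0x1f
#define PORT_LOGIC_LTSSM_STATE_L0	0x11

/* The link is up once the LTSSM has reached the fully-active L0 state. */
static bool ltssm_link_up(uint32_t debug0)
{
	return (debug0 & PORT_LOGIC_LTSSM_STATE_MASK) ==
	       PORT_LOGIC_LTSSM_STATE_L0;
}

int main(void)
{
	uint32_t sample = 0x11;	/* as if read from PCIE_PORT_DEBUG0 */

	printf("link %s\n", ltssm_link_up(sample) ? "up" : "down");
	return 0;
}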
+2 -2
drivers/pci/controller/dwc/pcie-kirin.c
··· 467 467 return 0; 468 468 } 469 469 470 - static int __init kirin_add_pcie_port(struct dw_pcie *pci, 471 - struct platform_device *pdev) 470 + static int kirin_add_pcie_port(struct dw_pcie *pci, 471 + struct platform_device *pdev) 472 472 { 473 473 int ret; 474 474
+39 -17
drivers/pci/controller/dwc/pcie-qcom.c
··· 1089 1089 struct qcom_pcie *pcie = to_qcom_pcie(pci); 1090 1090 int ret; 1091 1091 1092 - pm_runtime_get_sync(pci->dev); 1093 1092 qcom_ep_reset_assert(pcie); 1094 1093 1095 1094 ret = pcie->ops->init(pcie); ··· 1125 1126 phy_power_off(pcie->phy); 1126 1127 err_deinit: 1127 1128 pcie->ops->deinit(pcie); 1128 - pm_runtime_put(pci->dev); 1129 1129 1130 1130 return ret; 1131 1131 } ··· 1214 1216 return -ENOMEM; 1215 1217 1216 1218 pm_runtime_enable(dev); 1219 + ret = pm_runtime_get_sync(dev); 1220 + if (ret < 0) { 1221 + pm_runtime_disable(dev); 1222 + return ret; 1223 + } 1224 + 1217 1225 pci->dev = dev; 1218 1226 pci->ops = &dw_pcie_ops; 1219 1227 pp = &pci->pp; ··· 1229 1225 pcie->ops = of_device_get_match_data(dev); 1230 1226 1231 1227 pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW); 1232 - if (IS_ERR(pcie->reset)) 1233 - return PTR_ERR(pcie->reset); 1228 + if (IS_ERR(pcie->reset)) { 1229 + ret = PTR_ERR(pcie->reset); 1230 + goto err_pm_runtime_put; 1231 + } 1234 1232 1235 1233 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "parf"); 1236 1234 pcie->parf = devm_ioremap_resource(dev, res); 1237 - if (IS_ERR(pcie->parf)) 1238 - return PTR_ERR(pcie->parf); 1235 + if (IS_ERR(pcie->parf)) { 1236 + ret = PTR_ERR(pcie->parf); 1237 + goto err_pm_runtime_put; 1238 + } 1239 1239 1240 1240 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi"); 1241 1241 pci->dbi_base = devm_pci_remap_cfg_resource(dev, res); 1242 - if (IS_ERR(pci->dbi_base)) 1243 - return PTR_ERR(pci->dbi_base); 1242 + if (IS_ERR(pci->dbi_base)) { 1243 + ret = PTR_ERR(pci->dbi_base); 1244 + goto err_pm_runtime_put; 1245 + } 1244 1246 1245 1247 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "elbi"); 1246 1248 pcie->elbi = devm_ioremap_resource(dev, res); 1247 - if (IS_ERR(pcie->elbi)) 1248 - return PTR_ERR(pcie->elbi); 1249 + if (IS_ERR(pcie->elbi)) { 1250 + ret = PTR_ERR(pcie->elbi); 1251 + goto err_pm_runtime_put; 1252 + } 1249 1253 1250 1254 pcie->phy = devm_phy_optional_get(dev, "pciephy"); 1251 - if (IS_ERR(pcie->phy)) 1252 - return PTR_ERR(pcie->phy); 1255 + if (IS_ERR(pcie->phy)) { 1256 + ret = PTR_ERR(pcie->phy); 1257 + goto err_pm_runtime_put; 1258 + } 1253 1259 1254 1260 ret = pcie->ops->get_resources(pcie); 1255 1261 if (ret) 1256 - return ret; 1262 + goto err_pm_runtime_put; 1257 1263 1258 1264 pp->ops = &qcom_pcie_dw_ops; 1259 1265 1260 1266 if (IS_ENABLED(CONFIG_PCI_MSI)) { 1261 1267 pp->msi_irq = platform_get_irq_byname(pdev, "msi"); 1262 - if (pp->msi_irq < 0) 1263 - return pp->msi_irq; 1268 + if (pp->msi_irq < 0) { 1269 + ret = pp->msi_irq; 1270 + goto err_pm_runtime_put; 1271 + } 1264 1272 } 1265 1273 1266 1274 ret = phy_init(pcie->phy); 1267 1275 if (ret) { 1268 1276 pm_runtime_disable(&pdev->dev); 1269 - return ret; 1277 + goto err_pm_runtime_put; 1270 1278 } 1271 1279 1272 1280 platform_set_drvdata(pdev, pcie); ··· 1287 1271 if (ret) { 1288 1272 dev_err(dev, "cannot initialize host\n"); 1289 1273 pm_runtime_disable(&pdev->dev); 1290 - return ret; 1274 + goto err_pm_runtime_put; 1291 1275 } 1292 1276 1293 1277 return 0; 1278 + 1279 + err_pm_runtime_put: 1280 + pm_runtime_put(dev); 1281 + pm_runtime_disable(dev); 1282 + 1283 + return ret; 1294 1284 } 1295 1285 1296 1286 static const struct of_device_id qcom_pcie_match[] = {
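
The reshuffled qcom probe illustrates a common runtime-PM shape: enable and take a reference before touching the hardware, then funnel every later failure through a single label that drops it. A hedged skeleton of that shape only, not the qcom driver verbatim; hypothetical_setup() is a placeholder for the driver-specific resource gathering, not a real kernel symbol:

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

/* Placeholder for the driver-specific resource gathering. */
static int hypothetical_setup(struct platform_device *pdev);

static int example_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	int ret;

	/* Hold a runtime-PM reference for the whole of probe. */
	pm_runtime_enable(dev);
	ret = pm_runtime_get_sync(dev);
	if (ret < 0) {
		pm_runtime_disable(dev);
		return ret;
	}

	ret = hypothetical_setup(pdev);
	if (ret)
		goto err_pm_runtime_put;

	return 0;

err_pm_runtime_put:
	/* Single exit that undoes the reference taken above. */
	pm_runtime_put(dev);
	pm_runtime_disable(dev);

	return ret;
}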
+126 -3
drivers/pci/controller/pci-aardvark.c
··· 20 20 #include <linux/of_pci.h> 21 21 22 22 #include "../pci.h" 23 + #include "../pci-bridge-emul.h" 23 24 24 25 /* PCIe core registers */ 26 + #define PCIE_CORE_DEV_ID_REG 0x0 25 27 #define PCIE_CORE_CMD_STATUS_REG 0x4 26 28 #define PCIE_CORE_CMD_IO_ACCESS_EN BIT(0) 27 29 #define PCIE_CORE_CMD_MEM_ACCESS_EN BIT(1) 28 30 #define PCIE_CORE_CMD_MEM_IO_REQ_EN BIT(2) 31 + #define PCIE_CORE_DEV_REV_REG 0x8 32 + #define PCIE_CORE_PCIEXP_CAP 0xc0 29 33 #define PCIE_CORE_DEV_CTRL_STATS_REG 0xc8 30 34 #define PCIE_CORE_DEV_CTRL_STATS_RELAX_ORDER_DISABLE (0 << 4) 31 35 #define PCIE_CORE_DEV_CTRL_STATS_MAX_PAYLOAD_SZ_SHIFT 5 ··· 45 41 #define PCIE_CORE_ERR_CAPCTL_ECRC_CHK_TX_EN BIT(6) 46 42 #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK BIT(7) 47 43 #define PCIE_CORE_ERR_CAPCTL_ECRC_CHCK_RCV BIT(8) 48 - 44 + #define PCIE_CORE_INT_A_ASSERT_ENABLE 1 45 + #define PCIE_CORE_INT_B_ASSERT_ENABLE 2 46 + #define PCIE_CORE_INT_C_ASSERT_ENABLE 3 47 + #define PCIE_CORE_INT_D_ASSERT_ENABLE 4 49 48 /* PIO registers base address and register offsets */ 50 49 #define PIO_BASE_ADDR 0x4000 51 50 #define PIO_CTRL (PIO_BASE_ADDR + 0x0) ··· 100 93 #define PCIE_CORE_CTRL2_STRICT_ORDER_ENABLE BIT(5) 101 94 #define PCIE_CORE_CTRL2_OB_WIN_ENABLE BIT(6) 102 95 #define PCIE_CORE_CTRL2_MSI_ENABLE BIT(10) 96 + #define PCIE_MSG_LOG_REG (CONTROL_BASE_ADDR + 0x30) 103 97 #define PCIE_ISR0_REG (CONTROL_BASE_ADDR + 0x40) 98 + #define PCIE_MSG_PM_PME_MASK BIT(7) 104 99 #define PCIE_ISR0_MASK_REG (CONTROL_BASE_ADDR + 0x44) 105 100 #define PCIE_ISR0_MSI_INT_PENDING BIT(24) 106 101 #define PCIE_ISR0_INTX_ASSERT(val) BIT(16 + (val)) ··· 198 189 struct mutex msi_used_lock; 199 190 u16 msi_msg; 200 191 int root_bus_nr; 192 + struct pci_bridge_emul bridge; 201 193 }; 202 194 203 195 static inline void advk_writel(struct advk_pcie *pcie, u32 val, u64 reg) ··· 400 390 return -ETIMEDOUT; 401 391 } 402 392 393 + 394 + static pci_bridge_emul_read_status_t 395 + advk_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge, 396 + int reg, u32 *value) 397 + { 398 + struct advk_pcie *pcie = bridge->data; 399 + 400 + 401 + switch (reg) { 402 + case PCI_EXP_SLTCTL: 403 + *value = PCI_EXP_SLTSTA_PDS << 16; 404 + return PCI_BRIDGE_EMUL_HANDLED; 405 + 406 + case PCI_EXP_RTCTL: { 407 + u32 val = advk_readl(pcie, PCIE_ISR0_MASK_REG); 408 + *value = (val & PCIE_MSG_PM_PME_MASK) ? 
PCI_EXP_RTCTL_PMEIE : 0; 409 + return PCI_BRIDGE_EMUL_HANDLED; 410 + } 411 + 412 + case PCI_EXP_RTSTA: { 413 + u32 isr0 = advk_readl(pcie, PCIE_ISR0_REG); 414 + u32 msglog = advk_readl(pcie, PCIE_MSG_LOG_REG); 415 + *value = (isr0 & PCIE_MSG_PM_PME_MASK) << 16 | (msglog >> 16); 416 + return PCI_BRIDGE_EMUL_HANDLED; 417 + } 418 + 419 + case PCI_CAP_LIST_ID: 420 + case PCI_EXP_DEVCAP: 421 + case PCI_EXP_DEVCTL: 422 + case PCI_EXP_LNKCAP: 423 + case PCI_EXP_LNKCTL: 424 + *value = advk_readl(pcie, PCIE_CORE_PCIEXP_CAP + reg); 425 + return PCI_BRIDGE_EMUL_HANDLED; 426 + default: 427 + return PCI_BRIDGE_EMUL_NOT_HANDLED; 428 + } 429 + 430 + } 431 + 432 + static void 433 + advk_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge, 434 + int reg, u32 old, u32 new, u32 mask) 435 + { 436 + struct advk_pcie *pcie = bridge->data; 437 + 438 + switch (reg) { 439 + case PCI_EXP_DEVCTL: 440 + case PCI_EXP_LNKCTL: 441 + advk_writel(pcie, new, PCIE_CORE_PCIEXP_CAP + reg); 442 + break; 443 + 444 + case PCI_EXP_RTCTL: 445 + new = (new & PCI_EXP_RTCTL_PMEIE) << 3; 446 + advk_writel(pcie, new, PCIE_ISR0_MASK_REG); 447 + break; 448 + 449 + case PCI_EXP_RTSTA: 450 + new = (new & PCI_EXP_RTSTA_PME) >> 9; 451 + advk_writel(pcie, new, PCIE_ISR0_REG); 452 + break; 453 + 454 + default: 455 + break; 456 + } 457 + } 458 + 459 + struct pci_bridge_emul_ops advk_pci_bridge_emul_ops = { 460 + .read_pcie = advk_pci_bridge_emul_pcie_conf_read, 461 + .write_pcie = advk_pci_bridge_emul_pcie_conf_write, 462 + }; 463 + 464 + /* 465 + * Initialize the configuration space of the PCI-to-PCI bridge 466 + * associated with the given PCIe interface. 467 + */ 468 + static void advk_sw_pci_bridge_init(struct advk_pcie *pcie) 469 + { 470 + struct pci_bridge_emul *bridge = &pcie->bridge; 471 + 472 + bridge->conf.vendor = advk_readl(pcie, PCIE_CORE_DEV_ID_REG) & 0xffff; 473 + bridge->conf.device = advk_readl(pcie, PCIE_CORE_DEV_ID_REG) >> 16; 474 + bridge->conf.class_revision = 475 + advk_readl(pcie, PCIE_CORE_DEV_REV_REG) & 0xff; 476 + 477 + /* Support 32 bits I/O addressing */ 478 + bridge->conf.iobase = PCI_IO_RANGE_TYPE_32; 479 + bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32; 480 + 481 + /* Support 64 bits memory pref */ 482 + bridge->conf.pref_mem_base = PCI_PREF_RANGE_TYPE_64; 483 + bridge->conf.pref_mem_limit = PCI_PREF_RANGE_TYPE_64; 484 + 485 + /* Support interrupt A for MSI feature */ 486 + bridge->conf.intpin = PCIE_CORE_INT_A_ASSERT_ENABLE; 487 + 488 + bridge->has_pcie = true; 489 + bridge->data = pcie; 490 + bridge->ops = &advk_pci_bridge_emul_ops; 491 + 492 + pci_bridge_emul_init(bridge); 493 + 494 + } 495 + 403 496 static bool advk_pcie_valid_device(struct advk_pcie *pcie, struct pci_bus *bus, 404 497 int devfn) 405 498 { ··· 524 411 return PCIBIOS_DEVICE_NOT_FOUND; 525 412 } 526 413 414 + if (bus->number == pcie->root_bus_nr) 415 + return pci_bridge_emul_conf_read(&pcie->bridge, where, 416 + size, val); 417 + 527 418 /* Start PIO */ 528 419 advk_writel(pcie, 0, PIO_START); 529 420 advk_writel(pcie, 1, PIO_ISR); ··· 535 418 /* Program the control register */ 536 419 reg = advk_readl(pcie, PIO_CTRL); 537 420 reg &= ~PIO_CTRL_TYPE_MASK; 538 - if (bus->number == pcie->root_bus_nr) 421 + if (bus->primary == pcie->root_bus_nr) 539 422 reg |= PCIE_CONFIG_RD_TYPE0; 540 423 else 541 424 reg |= PCIE_CONFIG_RD_TYPE1; ··· 580 463 if (!advk_pcie_valid_device(pcie, bus, devfn)) 581 464 return PCIBIOS_DEVICE_NOT_FOUND; 582 465 466 + if (bus->number == pcie->root_bus_nr) 467 + return pci_bridge_emul_conf_write(&pcie->bridge, where, 
468 + size, val); 469 + 583 470 if (where % size) 584 471 return PCIBIOS_SET_FAILED; 585 472 ··· 594 473 /* Program the control register */ 595 474 reg = advk_readl(pcie, PIO_CTRL); 596 475 reg &= ~PIO_CTRL_TYPE_MASK; 597 - if (bus->number == pcie->root_bus_nr) 476 + if (bus->primary == pcie->root_bus_nr) 598 477 reg |= PCIE_CONFIG_WR_TYPE0; 599 478 else 600 479 reg |= PCIE_CONFIG_WR_TYPE1; ··· 995 874 } 996 875 997 876 advk_pcie_setup_hw(pcie); 877 + 878 + advk_sw_pci_bridge_init(pcie); 998 879 999 880 ret = advk_pcie_init_irq_domain(pcie); 1000 881 if (ret) {
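
The pci-bridge-emul conversion above splits each root-bus config read into two levels: a driver hook that may claim a register (returning PCI_BRIDGE_EMUL_HANDLED) and a generic fallback served from the emulated bridge's in-memory config space. A standalone model of that dispatch, with simplified names and invented register values:

#include <stdint.h>
#include <stdio.h>

typedef enum {
	HANDLED,
	NOT_HANDLED,
} read_status;

static uint32_t emulated_conf[64];	/* 256-byte shadow config space */

/* Device hook: claims registers that must reflect live hardware. */
static read_status read_pcie_hook(int reg, uint32_t *value)
{
	if (reg == 0x18) {		/* invented hardware-backed register */
		*value = 0x00010100;	/* invented hardware value */
		return HANDLED;
	}
	return NOT_HANDLED;
}

static uint32_t bridge_conf_read(int reg)
{
	uint32_t value;

	if (read_pcie_hook(reg, &value) == HANDLED)
		return value;
	return emulated_conf[reg / 4];	/* generic in-memory fallback */
}

int main(void)
{
	printf("0x%08x\n", (unsigned)bridge_conf_read(0x18)); /* from the hook */
	printf("0x%08x\n", (unsigned)bridge_conf_read(0x00)); /* from the shadow */
	return 0;
}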
-8
drivers/pci/controller/pci-host-common.c
··· 58 58 int pci_host_common_probe(struct platform_device *pdev, 59 59 struct pci_ecam_ops *ops) 60 60 { 61 - const char *type; 62 61 struct device *dev = &pdev->dev; 63 - struct device_node *np = dev->of_node; 64 62 struct pci_host_bridge *bridge; 65 63 struct pci_config_window *cfg; 66 64 struct list_head resources; ··· 67 69 bridge = devm_pci_alloc_host_bridge(dev, 0); 68 70 if (!bridge) 69 71 return -ENOMEM; 70 - 71 - type = of_get_property(np, "device_type", NULL); 72 - if (!type || strcmp(type, "pci")) { 73 - dev_err(dev, "invalid \"device_type\" %s\n", type); 74 - return -EINVAL; 75 - } 76 72 77 73 of_pci_check_probe_only(); 78 74
+101 -285
drivers/pci/controller/pci-mvebu.c
··· 22 22 #include <linux/of_platform.h> 23 23 24 24 #include "../pci.h" 25 + #include "../pci-bridge-emul.h" 25 26 26 27 /* 27 28 * PCIe unit register offsets. ··· 64 63 #define PCIE_DEBUG_CTRL 0x1a60 65 64 #define PCIE_DEBUG_SOFT_RESET BIT(20) 66 65 67 - enum { 68 - PCISWCAP = PCI_BRIDGE_CONTROL + 2, 69 - PCISWCAP_EXP_LIST_ID = PCISWCAP + PCI_CAP_LIST_ID, 70 - PCISWCAP_EXP_DEVCAP = PCISWCAP + PCI_EXP_DEVCAP, 71 - PCISWCAP_EXP_DEVCTL = PCISWCAP + PCI_EXP_DEVCTL, 72 - PCISWCAP_EXP_LNKCAP = PCISWCAP + PCI_EXP_LNKCAP, 73 - PCISWCAP_EXP_LNKCTL = PCISWCAP + PCI_EXP_LNKCTL, 74 - PCISWCAP_EXP_SLTCAP = PCISWCAP + PCI_EXP_SLTCAP, 75 - PCISWCAP_EXP_SLTCTL = PCISWCAP + PCI_EXP_SLTCTL, 76 - PCISWCAP_EXP_RTCTL = PCISWCAP + PCI_EXP_RTCTL, 77 - PCISWCAP_EXP_RTSTA = PCISWCAP + PCI_EXP_RTSTA, 78 - PCISWCAP_EXP_DEVCAP2 = PCISWCAP + PCI_EXP_DEVCAP2, 79 - PCISWCAP_EXP_DEVCTL2 = PCISWCAP + PCI_EXP_DEVCTL2, 80 - PCISWCAP_EXP_LNKCAP2 = PCISWCAP + PCI_EXP_LNKCAP2, 81 - PCISWCAP_EXP_LNKCTL2 = PCISWCAP + PCI_EXP_LNKCTL2, 82 - PCISWCAP_EXP_SLTCAP2 = PCISWCAP + PCI_EXP_SLTCAP2, 83 - PCISWCAP_EXP_SLTCTL2 = PCISWCAP + PCI_EXP_SLTCTL2, 84 - }; 85 - 86 - /* PCI configuration space of a PCI-to-PCI bridge */ 87 - struct mvebu_sw_pci_bridge { 88 - u16 vendor; 89 - u16 device; 90 - u16 command; 91 - u16 status; 92 - u16 class; 93 - u8 interface; 94 - u8 revision; 95 - u8 bist; 96 - u8 header_type; 97 - u8 latency_timer; 98 - u8 cache_line_size; 99 - u32 bar[2]; 100 - u8 primary_bus; 101 - u8 secondary_bus; 102 - u8 subordinate_bus; 103 - u8 secondary_latency_timer; 104 - u8 iobase; 105 - u8 iolimit; 106 - u16 secondary_status; 107 - u16 membase; 108 - u16 memlimit; 109 - u16 iobaseupper; 110 - u16 iolimitupper; 111 - u32 romaddr; 112 - u8 intline; 113 - u8 intpin; 114 - u16 bridgectrl; 115 - 116 - /* PCI express capability */ 117 - u32 pcie_sltcap; 118 - u16 pcie_devctl; 119 - u16 pcie_rtctl; 120 - }; 121 - 122 66 struct mvebu_pcie_port; 123 67 124 68 /* Structure representing all PCIe interfaces */ ··· 99 153 struct clk *clk; 100 154 struct gpio_desc *reset_gpio; 101 155 char *reset_name; 102 - struct mvebu_sw_pci_bridge bridge; 156 + struct pci_bridge_emul bridge; 103 157 struct device_node *dn; 104 158 struct mvebu_pcie *pcie; 105 159 struct mvebu_pcie_window memwin; ··· 361 415 static void mvebu_pcie_handle_iobase_change(struct mvebu_pcie_port *port) 362 416 { 363 417 struct mvebu_pcie_window desired = {}; 418 + struct pci_bridge_emul_conf *conf = &port->bridge.conf; 364 419 365 420 /* Are the new iobase/iolimit values invalid? */ 366 - if (port->bridge.iolimit < port->bridge.iobase || 367 - port->bridge.iolimitupper < port->bridge.iobaseupper || 368 - !(port->bridge.command & PCI_COMMAND_IO)) { 421 + if (conf->iolimit < conf->iobase || 422 + conf->iolimitupper < conf->iobaseupper || 423 + !(conf->command & PCI_COMMAND_IO)) { 369 424 mvebu_pcie_set_window(port, port->io_target, port->io_attr, 370 425 &desired, &port->iowin); 371 426 return; ··· 385 438 * specifications. iobase is the bus address, port->iowin_base 386 439 * is the CPU address. 
387 440 */ 388 - desired.remap = ((port->bridge.iobase & 0xF0) << 8) | 389 - (port->bridge.iobaseupper << 16); 441 + desired.remap = ((conf->iobase & 0xF0) << 8) | 442 + (conf->iobaseupper << 16); 390 443 desired.base = port->pcie->io.start + desired.remap; 391 - desired.size = ((0xFFF | ((port->bridge.iolimit & 0xF0) << 8) | 392 - (port->bridge.iolimitupper << 16)) - 444 + desired.size = ((0xFFF | ((conf->iolimit & 0xF0) << 8) | 445 + (conf->iolimitupper << 16)) - 393 446 desired.remap) + 394 447 1; 395 448 ··· 400 453 static void mvebu_pcie_handle_membase_change(struct mvebu_pcie_port *port) 401 454 { 402 455 struct mvebu_pcie_window desired = {.remap = MVEBU_MBUS_NO_REMAP}; 456 + struct pci_bridge_emul_conf *conf = &port->bridge.conf; 403 457 404 458 /* Are the new membase/memlimit values invalid? */ 405 - if (port->bridge.memlimit < port->bridge.membase || 406 - !(port->bridge.command & PCI_COMMAND_MEMORY)) { 459 + if (conf->memlimit < conf->membase || 460 + !(conf->command & PCI_COMMAND_MEMORY)) { 407 461 mvebu_pcie_set_window(port, port->mem_target, port->mem_attr, 408 462 &desired, &port->memwin); 409 463 return; ··· 416 468 * window to setup, according to the PCI-to-PCI bridge 417 469 * specifications. 418 470 */ 419 - desired.base = ((port->bridge.membase & 0xFFF0) << 16); 420 - desired.size = (((port->bridge.memlimit & 0xFFF0) << 16) | 0xFFFFF) - 471 + desired.base = ((conf->membase & 0xFFF0) << 16); 472 + desired.size = (((conf->memlimit & 0xFFF0) << 16) | 0xFFFFF) - 421 473 desired.base + 1; 422 474 423 475 mvebu_pcie_set_window(port, port->mem_target, port->mem_attr, &desired, 424 476 &port->memwin); 425 477 } 426 478 427 - /* 428 - * Initialize the configuration space of the PCI-to-PCI bridge 429 - * associated with the given PCIe interface. 430 - */ 431 - static void mvebu_sw_pci_bridge_init(struct mvebu_pcie_port *port) 479 + static pci_bridge_emul_read_status_t 480 + mvebu_pci_bridge_emul_pcie_conf_read(struct pci_bridge_emul *bridge, 481 + int reg, u32 *value) 432 482 { 433 - struct mvebu_sw_pci_bridge *bridge = &port->bridge; 483 + struct mvebu_pcie_port *port = bridge->data; 434 484 435 - memset(bridge, 0, sizeof(struct mvebu_sw_pci_bridge)); 436 - 437 - bridge->class = PCI_CLASS_BRIDGE_PCI; 438 - bridge->vendor = PCI_VENDOR_ID_MARVELL; 439 - bridge->device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16; 440 - bridge->revision = mvebu_readl(port, PCIE_DEV_REV_OFF) & 0xff; 441 - bridge->header_type = PCI_HEADER_TYPE_BRIDGE; 442 - bridge->cache_line_size = 0x10; 443 - 444 - /* We support 32 bits I/O addressing */ 445 - bridge->iobase = PCI_IO_RANGE_TYPE_32; 446 - bridge->iolimit = PCI_IO_RANGE_TYPE_32; 447 - 448 - /* Add capabilities */ 449 - bridge->status = PCI_STATUS_CAP_LIST; 450 - } 451 - 452 - /* 453 - * Read the configuration space of the PCI-to-PCI bridge associated to 454 - * the given PCIe interface. 
455 - */ 456 - static int mvebu_sw_pci_bridge_read(struct mvebu_pcie_port *port, 457 - unsigned int where, int size, u32 *value) 458 - { 459 - struct mvebu_sw_pci_bridge *bridge = &port->bridge; 460 - 461 - switch (where & ~3) { 462 - case PCI_VENDOR_ID: 463 - *value = bridge->device << 16 | bridge->vendor; 464 - break; 465 - 466 - case PCI_COMMAND: 467 - *value = bridge->command | bridge->status << 16; 468 - break; 469 - 470 - case PCI_CLASS_REVISION: 471 - *value = bridge->class << 16 | bridge->interface << 8 | 472 - bridge->revision; 473 - break; 474 - 475 - case PCI_CACHE_LINE_SIZE: 476 - *value = bridge->bist << 24 | bridge->header_type << 16 | 477 - bridge->latency_timer << 8 | bridge->cache_line_size; 478 - break; 479 - 480 - case PCI_BASE_ADDRESS_0 ... PCI_BASE_ADDRESS_1: 481 - *value = bridge->bar[((where & ~3) - PCI_BASE_ADDRESS_0) / 4]; 482 - break; 483 - 484 - case PCI_PRIMARY_BUS: 485 - *value = (bridge->secondary_latency_timer << 24 | 486 - bridge->subordinate_bus << 16 | 487 - bridge->secondary_bus << 8 | 488 - bridge->primary_bus); 489 - break; 490 - 491 - case PCI_IO_BASE: 492 - if (!mvebu_has_ioport(port)) 493 - *value = bridge->secondary_status << 16; 494 - else 495 - *value = (bridge->secondary_status << 16 | 496 - bridge->iolimit << 8 | 497 - bridge->iobase); 498 - break; 499 - 500 - case PCI_MEMORY_BASE: 501 - *value = (bridge->memlimit << 16 | bridge->membase); 502 - break; 503 - 504 - case PCI_PREF_MEMORY_BASE: 505 - *value = 0; 506 - break; 507 - 508 - case PCI_IO_BASE_UPPER16: 509 - *value = (bridge->iolimitupper << 16 | bridge->iobaseupper); 510 - break; 511 - 512 - case PCI_CAPABILITY_LIST: 513 - *value = PCISWCAP; 514 - break; 515 - 516 - case PCI_ROM_ADDRESS1: 517 - *value = 0; 518 - break; 519 - 520 - case PCI_INTERRUPT_LINE: 521 - /* LINE PIN MIN_GNT MAX_LAT */ 522 - *value = 0; 523 - break; 524 - 525 - case PCISWCAP_EXP_LIST_ID: 526 - /* Set PCIe v2, root port, slot support */ 527 - *value = (PCI_EXP_TYPE_ROOT_PORT << 4 | 2 | 528 - PCI_EXP_FLAGS_SLOT) << 16 | PCI_CAP_ID_EXP; 529 - break; 530 - 531 - case PCISWCAP_EXP_DEVCAP: 485 + switch (reg) { 486 + case PCI_EXP_DEVCAP: 532 487 *value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCAP); 533 488 break; 534 489 535 - case PCISWCAP_EXP_DEVCTL: 490 + case PCI_EXP_DEVCTL: 536 491 *value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL) & 537 492 ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE | 538 493 PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE); 539 - *value |= bridge->pcie_devctl; 540 494 break; 541 495 542 - case PCISWCAP_EXP_LNKCAP: 496 + case PCI_EXP_LNKCAP: 543 497 /* 544 498 * PCIe requires the clock power management capability to be 545 499 * hard-wired to zero for downstream ports ··· 450 600 ~PCI_EXP_LNKCAP_CLKPM; 451 601 break; 452 602 453 - case PCISWCAP_EXP_LNKCTL: 603 + case PCI_EXP_LNKCTL: 454 604 *value = mvebu_readl(port, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL); 455 605 break; 456 606 457 - case PCISWCAP_EXP_SLTCAP: 458 - *value = bridge->pcie_sltcap; 459 - break; 460 - 461 - case PCISWCAP_EXP_SLTCTL: 607 + case PCI_EXP_SLTCTL: 462 608 *value = PCI_EXP_SLTSTA_PDS << 16; 463 609 break; 464 610 465 - case PCISWCAP_EXP_RTCTL: 466 - *value = bridge->pcie_rtctl; 467 - break; 468 - 469 - case PCISWCAP_EXP_RTSTA: 611 + case PCI_EXP_RTSTA: 470 612 *value = mvebu_readl(port, PCIE_RC_RTSTA); 471 613 break; 472 614 473 - /* PCIe requires the v2 fields to be hard-wired to zero */ 474 - case PCISWCAP_EXP_DEVCAP2: 475 - case PCISWCAP_EXP_DEVCTL2: 476 - case PCISWCAP_EXP_LNKCAP2: 477 - case PCISWCAP_EXP_LNKCTL2: 
478 - case PCISWCAP_EXP_SLTCAP2: 479 - case PCISWCAP_EXP_SLTCTL2: 480 615 default: 481 - /* 482 - * PCI defines configuration read accesses to reserved or 483 - * unimplemented registers to read as zero and complete 484 - * normally. 485 - */ 486 - *value = 0; 487 - return PCIBIOS_SUCCESSFUL; 616 + return PCI_BRIDGE_EMUL_NOT_HANDLED; 488 617 } 489 618 490 - if (size == 2) 491 - *value = (*value >> (8 * (where & 3))) & 0xffff; 492 - else if (size == 1) 493 - *value = (*value >> (8 * (where & 3))) & 0xff; 494 - 495 - return PCIBIOS_SUCCESSFUL; 619 + return PCI_BRIDGE_EMUL_HANDLED; 496 620 } 497 621 498 - /* Write to the PCI-to-PCI bridge configuration space */ 499 - static int mvebu_sw_pci_bridge_write(struct mvebu_pcie_port *port, 500 - unsigned int where, int size, u32 value) 622 + static void 623 + mvebu_pci_bridge_emul_base_conf_write(struct pci_bridge_emul *bridge, 624 + int reg, u32 old, u32 new, u32 mask) 501 625 { 502 - struct mvebu_sw_pci_bridge *bridge = &port->bridge; 503 - u32 mask, reg; 504 - int err; 626 + struct mvebu_pcie_port *port = bridge->data; 627 + struct pci_bridge_emul_conf *conf = &bridge->conf; 505 628 506 - if (size == 4) 507 - mask = 0x0; 508 - else if (size == 2) 509 - mask = ~(0xffff << ((where & 3) * 8)); 510 - else if (size == 1) 511 - mask = ~(0xff << ((where & 3) * 8)); 512 - else 513 - return PCIBIOS_BAD_REGISTER_NUMBER; 514 - 515 - err = mvebu_sw_pci_bridge_read(port, where & ~3, 4, &reg); 516 - if (err) 517 - return err; 518 - 519 - value = (reg & mask) | value << ((where & 3) * 8); 520 - 521 - switch (where & ~3) { 629 + switch (reg) { 522 630 case PCI_COMMAND: 523 631 { 524 - u32 old = bridge->command; 525 - 526 632 if (!mvebu_has_ioport(port)) 527 - value &= ~PCI_COMMAND_IO; 633 + conf->command &= ~PCI_COMMAND_IO; 528 634 529 - bridge->command = value & 0xffff; 530 - if ((old ^ bridge->command) & PCI_COMMAND_IO) 635 + if ((old ^ new) & PCI_COMMAND_IO) 531 636 mvebu_pcie_handle_iobase_change(port); 532 - if ((old ^ bridge->command) & PCI_COMMAND_MEMORY) 637 + if ((old ^ new) & PCI_COMMAND_MEMORY) 533 638 mvebu_pcie_handle_membase_change(port); 639 + 534 640 break; 535 641 } 536 - 537 - case PCI_BASE_ADDRESS_0 ... 
PCI_BASE_ADDRESS_1: 538 - bridge->bar[((where & ~3) - PCI_BASE_ADDRESS_0) / 4] = value; 539 - break; 540 642 541 643 case PCI_IO_BASE: 542 644 /* 543 - * We also keep bit 1 set, it is a read-only bit that 645 + * We keep bit 1 set, it is a read-only bit that 544 646 * indicates we support 32 bits addressing for the 545 647 * I/O 546 648 */ 547 - bridge->iobase = (value & 0xff) | PCI_IO_RANGE_TYPE_32; 548 - bridge->iolimit = ((value >> 8) & 0xff) | PCI_IO_RANGE_TYPE_32; 649 + conf->iobase |= PCI_IO_RANGE_TYPE_32; 650 + conf->iolimit |= PCI_IO_RANGE_TYPE_32; 549 651 mvebu_pcie_handle_iobase_change(port); 550 652 break; 551 653 552 654 case PCI_MEMORY_BASE: 553 - bridge->membase = value & 0xffff; 554 - bridge->memlimit = value >> 16; 555 655 mvebu_pcie_handle_membase_change(port); 556 656 break; 557 657 558 658 case PCI_IO_BASE_UPPER16: 559 - bridge->iobaseupper = value & 0xffff; 560 - bridge->iolimitupper = value >> 16; 561 659 mvebu_pcie_handle_iobase_change(port); 562 660 break; 563 661 564 662 case PCI_PRIMARY_BUS: 565 - bridge->primary_bus = value & 0xff; 566 - bridge->secondary_bus = (value >> 8) & 0xff; 567 - bridge->subordinate_bus = (value >> 16) & 0xff; 568 - bridge->secondary_latency_timer = (value >> 24) & 0xff; 569 - mvebu_pcie_set_local_bus_nr(port, bridge->secondary_bus); 663 + mvebu_pcie_set_local_bus_nr(port, conf->secondary_bus); 570 664 break; 571 665 572 - case PCISWCAP_EXP_DEVCTL: 666 + default: 667 + break; 668 + } 669 + } 670 + 671 + static void 672 + mvebu_pci_bridge_emul_pcie_conf_write(struct pci_bridge_emul *bridge, 673 + int reg, u32 old, u32 new, u32 mask) 674 + { 675 + struct mvebu_pcie_port *port = bridge->data; 676 + 677 + switch (reg) { 678 + case PCI_EXP_DEVCTL: 573 679 /* 574 680 * Armada370 data says these bits must always 575 681 * be zero when in root complex mode. 576 682 */ 577 - value &= ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE | 578 - PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE); 683 + new &= ~(PCI_EXP_DEVCTL_URRE | PCI_EXP_DEVCTL_FERE | 684 + PCI_EXP_DEVCTL_NFERE | PCI_EXP_DEVCTL_CERE); 579 685 580 - /* 581 - * If the mask is 0xffff0000, then we only want to write 582 - * the device control register, rather than clearing the 583 - * RW1C bits in the device status register. Mask out the 584 - * status register bits. 585 - */ 586 - if (mask == 0xffff0000) 587 - value &= 0xffff; 588 - 589 - mvebu_writel(port, value, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL); 686 + mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_DEVCTL); 590 687 break; 591 688 592 - case PCISWCAP_EXP_LNKCTL: 689 + case PCI_EXP_LNKCTL: 593 690 /* 594 691 * If we don't support CLKREQ, we must ensure that the 595 692 * CLKREQ enable bit always reads zero. Since we haven't 596 693 * had this capability, and it's dependent on board wiring, 597 694 * disable it for the time being. 598 695 */ 599 - value &= ~PCI_EXP_LNKCTL_CLKREQ_EN; 696 + new &= ~PCI_EXP_LNKCTL_CLKREQ_EN; 600 697 601 - /* 602 - * If the mask is 0xffff0000, then we only want to write 603 - * the link control register, rather than clearing the 604 - * RW1C bits in the link status register. Mask out the 605 - * RW1C status register bits. 
606 - */ 607 - if (mask == 0xffff0000) 608 - value &= ~((PCI_EXP_LNKSTA_LABS | 609 - PCI_EXP_LNKSTA_LBMS) << 16); 610 - 611 - mvebu_writel(port, value, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL); 698 + mvebu_writel(port, new, PCIE_CAP_PCIEXP + PCI_EXP_LNKCTL); 612 699 break; 613 700 614 - case PCISWCAP_EXP_RTSTA: 615 - mvebu_writel(port, value, PCIE_RC_RTSTA); 616 - break; 617 - 618 - default: 701 + case PCI_EXP_RTSTA: 702 + mvebu_writel(port, new, PCIE_RC_RTSTA); 619 703 break; 620 704 } 705 + } 621 706 622 - return PCIBIOS_SUCCESSFUL; 707 + struct pci_bridge_emul_ops mvebu_pci_bridge_emul_ops = { 708 + .write_base = mvebu_pci_bridge_emul_base_conf_write, 709 + .read_pcie = mvebu_pci_bridge_emul_pcie_conf_read, 710 + .write_pcie = mvebu_pci_bridge_emul_pcie_conf_write, 711 + }; 712 + 713 + /* 714 + * Initialize the configuration space of the PCI-to-PCI bridge 715 + * associated with the given PCIe interface. 716 + */ 717 + static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port) 718 + { 719 + struct pci_bridge_emul *bridge = &port->bridge; 720 + 721 + bridge->conf.vendor = PCI_VENDOR_ID_MARVELL; 722 + bridge->conf.device = mvebu_readl(port, PCIE_DEV_ID_OFF) >> 16; 723 + bridge->conf.class_revision = 724 + mvebu_readl(port, PCIE_DEV_REV_OFF) & 0xff; 725 + 726 + if (mvebu_has_ioport(port)) { 727 + /* We support 32 bits I/O addressing */ 728 + bridge->conf.iobase = PCI_IO_RANGE_TYPE_32; 729 + bridge->conf.iolimit = PCI_IO_RANGE_TYPE_32; 730 + } 731 + 732 + bridge->has_pcie = true; 733 + bridge->data = port; 734 + bridge->ops = &mvebu_pci_bridge_emul_ops; 735 + 736 + pci_bridge_emul_init(bridge); 623 737 } 624 738 625 739 static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys) ··· 603 789 if (bus->number == 0 && port->devfn == devfn) 604 790 return port; 605 791 if (bus->number != 0 && 606 - bus->number >= port->bridge.secondary_bus && 607 - bus->number <= port->bridge.subordinate_bus) 792 + bus->number >= port->bridge.conf.secondary_bus && 793 + bus->number <= port->bridge.conf.subordinate_bus) 608 794 return port; 609 795 } 610 796 ··· 625 811 626 812 /* Access the emulated PCI-to-PCI bridge */ 627 813 if (bus->number == 0) 628 - return mvebu_sw_pci_bridge_write(port, where, size, val); 814 + return pci_bridge_emul_conf_write(&port->bridge, where, 815 + size, val); 629 816 630 817 if (!mvebu_pcie_link_up(port)) 631 818 return PCIBIOS_DEVICE_NOT_FOUND; ··· 654 839 655 840 /* Access the emulated PCI-to-PCI bridge */ 656 841 if (bus->number == 0) 657 - return mvebu_sw_pci_bridge_read(port, where, size, val); 842 + return pci_bridge_emul_conf_read(&port->bridge, where, 843 + size, val); 658 844 659 845 if (!mvebu_pcie_link_up(port)) { 660 846 *val = 0xffffffff; ··· 1113 1297 1114 1298 mvebu_pcie_setup_hw(port); 1115 1299 mvebu_pcie_set_local_dev_nr(port, 1); 1116 - mvebu_sw_pci_bridge_init(port); 1300 + mvebu_pci_bridge_emul_init(port); 1117 1301 } 1118 1302 1119 1303 pcie->nports = i;
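
The I/O window math that mvebu_pcie_handle_iobase_change() performs on the emulated bridge registers is easy to check by hand: iobase/iolimit carry bits 15:12 of a 4 KB-granular range, and the upper-16 registers carry bits 31:16. A worked standalone example with made-up register values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Invented bridge register values; the low nibble flags 32-bit I/O. */
	uint8_t  iobase = 0x10 | 0x01;
	uint8_t  iolimit = 0x20 | 0x01;
	uint16_t iobaseupper = 0x0000;
	uint16_t iolimitupper = 0x0000;

	uint32_t remap = ((iobase & 0xF0) << 8) |
			 ((uint32_t)iobaseupper << 16);
	uint32_t size = ((0xFFF | ((iolimit & 0xF0) << 8) |
			  ((uint32_t)iolimitupper << 16)) - remap) + 1;

	/* Prints: bus range 0x1000 - 0x2fff, size 0x2000 */
	printf("bus range 0x%x - 0x%x, size 0x%x\n",
	       (unsigned)remap, (unsigned)(remap + size - 1), (unsigned)size);
	return 0;
}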
+7 -6
drivers/pci/controller/pcie-cadence-ep.c
··· 258 258 u8 intx, bool is_asserted) 259 259 { 260 260 struct cdns_pcie *pcie = &ep->pcie; 261 - u32 r = ep->max_regions - 1; 262 261 u32 offset; 263 262 u16 status; 264 263 u8 msg_code; ··· 267 268 /* Set the outbound region if needed. */ 268 269 if (unlikely(ep->irq_pci_addr != CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY || 269 270 ep->irq_pci_fn != fn)) { 270 - /* Last region was reserved for IRQ writes. */ 271 - cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, r, 271 + /* First region was reserved for IRQ writes. */ 272 + cdns_pcie_set_outbound_region_for_normal_msg(pcie, fn, 0, 272 273 ep->irq_phys_addr); 273 274 ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_LEGACY; 274 275 ep->irq_pci_fn = fn; ··· 346 347 /* Set the outbound region if needed. */ 347 348 if (unlikely(ep->irq_pci_addr != (pci_addr & ~pci_addr_mask) || 348 349 ep->irq_pci_fn != fn)) { 349 - /* Last region was reserved for IRQ writes. */ 350 - cdns_pcie_set_outbound_region(pcie, fn, ep->max_regions - 1, 350 + /* First region was reserved for IRQ writes. */ 351 + cdns_pcie_set_outbound_region(pcie, fn, 0, 351 352 false, 352 353 ep->irq_phys_addr, 353 354 pci_addr & ~pci_addr_mask, ··· 355 356 ep->irq_pci_addr = (pci_addr & ~pci_addr_mask); 356 357 ep->irq_pci_fn = fn; 357 358 } 358 - writew(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask)); 359 + writel(data, ep->irq_cpu_addr + (pci_addr & pci_addr_mask)); 359 360 360 361 return 0; 361 362 } ··· 516 517 goto free_epc_mem; 517 518 } 518 519 ep->irq_pci_addr = CDNS_PCIE_EP_IRQ_PCI_ADDR_NONE; 520 + /* Reserve region 0 for IRQs */ 521 + set_bit(0, &ep->ob_region_map); 519 522 520 523 return 0; 521 524
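
The Cadence EP change reserves outbound region 0 for IRQ message writes by marking it busy in ob_region_map before any normal mapping is handed out. A standalone model of that reservation, using a plain bitmask in place of the kernel's bitmap helpers:

#include <stdio.h>

#define NUM_REGIONS 8

static unsigned long ob_region_map;	/* bit i set == region i in use */

static int alloc_region(void)
{
	int i;

	for (i = 0; i < NUM_REGIONS; i++) {
		if (!(ob_region_map & (1UL << i))) {
			ob_region_map |= 1UL << i;
			return i;
		}
	}
	return -1;	/* no free region */
}

int main(void)
{
	ob_region_map |= 1UL << 0;	/* reserve region 0 for IRQ writes */

	printf("first data region: %d\n", alloc_region());	/* prints 1 */
	return 0;
}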
-7
drivers/pci/controller/pcie-cadence-host.c
··· 235 235 236 236 static int cdns_pcie_host_probe(struct platform_device *pdev) 237 237 { 238 - const char *type; 239 238 struct device *dev = &pdev->dev; 240 239 struct device_node *np = dev->of_node; 241 240 struct pci_host_bridge *bridge; ··· 266 267 267 268 rc->device_id = 0xffff; 268 269 of_property_read_u16(np, "device-id", &rc->device_id); 269 - 270 - type = of_get_property(np, "device_type", NULL); 271 - if (!type || strcmp(type, "pci")) { 272 - dev_err(dev, "invalid \"device_type\" %s\n", type); 273 - return -EINVAL; 274 - } 275 270 276 271 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg"); 277 272 pcie->reg_base = devm_ioremap_resource(dev, res);
+12 -8
drivers/pci/controller/pcie-cadence.c
··· 190 190 191 191 for (i = 0; i < phy_count; i++) { 192 192 of_property_read_string_index(np, "phy-names", i, &name); 193 - phy[i] = devm_phy_optional_get(dev, name); 194 - if (IS_ERR(phy)) 195 - return PTR_ERR(phy); 196 - 193 + phy[i] = devm_phy_get(dev, name); 194 + if (IS_ERR(phy[i])) { 195 + ret = PTR_ERR(phy[i]); 196 + goto err_phy; 197 + } 197 198 link[i] = device_link_add(dev, &phy[i]->dev, DL_FLAG_STATELESS); 198 199 if (!link[i]) { 200 + devm_phy_put(dev, phy[i]); 199 201 ret = -EINVAL; 200 - goto err_link; 202 + goto err_phy; 201 203 } 202 204 } 203 205 ··· 209 207 210 208 ret = cdns_pcie_enable_phy(pcie); 211 209 if (ret) 212 - goto err_link; 210 + goto err_phy; 213 211 214 212 return 0; 215 213 216 - err_link: 217 - while (--i >= 0) 214 + err_phy: 215 + while (--i >= 0) { 218 216 device_link_del(link[i]); 217 + devm_phy_put(dev, phy[i]); 218 + } 219 219 220 220 return ret; 221 221 }
-8
drivers/pci/controller/pcie-iproc.c
··· 630 630 return (pcie->base + offset); 631 631 } 632 632 633 - /* 634 - * PAXC is connected to an internally emulated EP within the SoC. It 635 - * allows only one device. 636 - */ 637 - if (pcie->ep_is_internal) 638 - if (slot > 0) 639 - return NULL; 640 - 641 633 return iproc_pcie_map_ep_cfg_reg(pcie, busno, slot, fn, where); 642 634 } 643 635
+205 -116
drivers/pci/controller/pcie-mediatek.c
··· 15 15 #include <linux/irqdomain.h> 16 16 #include <linux/kernel.h> 17 17 #include <linux/msi.h> 18 + #include <linux/module.h> 18 19 #include <linux/of_address.h> 19 20 #include <linux/of_pci.h> 20 21 #include <linux/of_platform.h> ··· 163 162 * @phy: pointer to PHY control block 164 163 * @lane: lane count 165 164 * @slot: port slot 165 + * @irq: GIC irq 166 166 * @irq_domain: legacy INTx IRQ domain 167 167 * @inner_domain: inner IRQ domain 168 168 * @msi_domain: MSI IRQ domain ··· 184 182 struct phy *phy; 185 183 u32 lane; 186 184 u32 slot; 185 + int irq; 187 186 struct irq_domain *irq_domain; 188 187 struct irq_domain *inner_domain; 189 188 struct irq_domain *msi_domain; ··· 228 225 229 226 clk_disable_unprepare(pcie->free_ck); 230 227 231 - if (dev->pm_domain) { 232 - pm_runtime_put_sync(dev); 233 - pm_runtime_disable(dev); 234 - } 228 + pm_runtime_put_sync(dev); 229 + pm_runtime_disable(dev); 235 230 } 236 231 237 232 static void mtk_pcie_port_free(struct mtk_pcie_port *port) ··· 338 337 { 339 338 struct mtk_pcie *pcie = bus->sysdata; 340 339 struct mtk_pcie_port *port; 340 + struct pci_dev *dev = NULL; 341 + 342 + /* 343 + * Walk the bus hierarchy to get the devfn value 344 + * of the port in the root bus. 345 + */ 346 + while (bus && bus->number) { 347 + dev = bus->self; 348 + bus = dev->bus; 349 + devfn = dev->devfn; 350 + } 341 351 342 352 list_for_each_entry(port, &pcie->ports, list) 343 353 if (port->slot == PCI_SLOT(devfn)) ··· 394 382 .read = mtk_pcie_config_read, 395 383 .write = mtk_pcie_config_write, 396 384 }; 397 - 398 - static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port) 399 - { 400 - struct mtk_pcie *pcie = port->pcie; 401 - struct resource *mem = &pcie->mem; 402 - const struct mtk_pcie_soc *soc = port->pcie->soc; 403 - u32 val; 404 - size_t size; 405 - int err; 406 - 407 - /* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */ 408 - if (pcie->base) { 409 - val = readl(pcie->base + PCIE_SYS_CFG_V2); 410 - val |= PCIE_CSR_LTSSM_EN(port->slot) | 411 - PCIE_CSR_ASPM_L1_EN(port->slot); 412 - writel(val, pcie->base + PCIE_SYS_CFG_V2); 413 - } 414 - 415 - /* Assert all reset signals */ 416 - writel(0, port->base + PCIE_RST_CTRL); 417 - 418 - /* 419 - * Enable PCIe link down reset, if link status changed from link up to 420 - * link down, this will reset MAC control registers and configuration 421 - * space. 
422 - */ 423 - writel(PCIE_LINKDOWN_RST_EN, port->base + PCIE_RST_CTRL); 424 - 425 - /* De-assert PHY, PE, PIPE, MAC and configuration reset */ 426 - val = readl(port->base + PCIE_RST_CTRL); 427 - val |= PCIE_PHY_RSTB | PCIE_PERSTB | PCIE_PIPE_SRSTB | 428 - PCIE_MAC_SRSTB | PCIE_CRSTB; 429 - writel(val, port->base + PCIE_RST_CTRL); 430 - 431 - /* Set up vendor ID and class code */ 432 - if (soc->need_fix_class_id) { 433 - val = PCI_VENDOR_ID_MEDIATEK; 434 - writew(val, port->base + PCIE_CONF_VEND_ID); 435 - 436 - val = PCI_CLASS_BRIDGE_HOST; 437 - writew(val, port->base + PCIE_CONF_CLASS_ID); 438 - } 439 - 440 - /* 100ms timeout value should be enough for Gen1/2 training */ 441 - err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val, 442 - !!(val & PCIE_PORT_LINKUP_V2), 20, 443 - 100 * USEC_PER_MSEC); 444 - if (err) 445 - return -ETIMEDOUT; 446 - 447 - /* Set INTx mask */ 448 - val = readl(port->base + PCIE_INT_MASK); 449 - val &= ~INTX_MASK; 450 - writel(val, port->base + PCIE_INT_MASK); 451 - 452 - /* Set AHB to PCIe translation windows */ 453 - size = mem->end - mem->start; 454 - val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size)); 455 - writel(val, port->base + PCIE_AHB_TRANS_BASE0_L); 456 - 457 - val = upper_32_bits(mem->start); 458 - writel(val, port->base + PCIE_AHB_TRANS_BASE0_H); 459 - 460 - /* Set PCIe to AXI translation memory space.*/ 461 - val = fls(0xffffffff) | WIN_ENABLE; 462 - writel(val, port->base + PCIE_AXI_WINDOW0); 463 - 464 - return 0; 465 - } 466 385 467 386 static void mtk_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) 468 387 { ··· 533 590 writel(val, port->base + PCIE_INT_MASK); 534 591 } 535 592 593 + static void mtk_pcie_irq_teardown(struct mtk_pcie *pcie) 594 + { 595 + struct mtk_pcie_port *port, *tmp; 596 + 597 + list_for_each_entry_safe(port, tmp, &pcie->ports, list) { 598 + irq_set_chained_handler_and_data(port->irq, NULL, NULL); 599 + 600 + if (port->irq_domain) 601 + irq_domain_remove(port->irq_domain); 602 + 603 + if (IS_ENABLED(CONFIG_PCI_MSI)) { 604 + if (port->msi_domain) 605 + irq_domain_remove(port->msi_domain); 606 + if (port->inner_domain) 607 + irq_domain_remove(port->inner_domain); 608 + } 609 + 610 + irq_dispose_mapping(port->irq); 611 + } 612 + } 613 + 536 614 static int mtk_pcie_intx_map(struct irq_domain *domain, unsigned int irq, 537 615 irq_hw_number_t hwirq) 538 616 { ··· 592 628 ret = mtk_pcie_allocate_msi_domains(port); 593 629 if (ret) 594 630 return ret; 595 - 596 - mtk_pcie_enable_msi(port); 597 631 } 598 632 599 633 return 0; ··· 644 682 struct mtk_pcie *pcie = port->pcie; 645 683 struct device *dev = pcie->dev; 646 684 struct platform_device *pdev = to_platform_device(dev); 647 - int err, irq; 685 + int err; 648 686 649 687 err = mtk_pcie_init_irq_domain(port, node); 650 688 if (err) { ··· 652 690 return err; 653 691 } 654 692 655 - irq = platform_get_irq(pdev, port->slot); 656 - irq_set_chained_handler_and_data(irq, mtk_pcie_intr_handler, port); 693 + port->irq = platform_get_irq(pdev, port->slot); 694 + irq_set_chained_handler_and_data(port->irq, 695 + mtk_pcie_intr_handler, port); 696 + 697 + return 0; 698 + } 699 + 700 + static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port) 701 + { 702 + struct mtk_pcie *pcie = port->pcie; 703 + struct resource *mem = &pcie->mem; 704 + const struct mtk_pcie_soc *soc = port->pcie->soc; 705 + u32 val; 706 + size_t size; 707 + int err; 708 + 709 + /* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */ 710 + if (pcie->base) { 711 + val = 
readl(pcie->base + PCIE_SYS_CFG_V2); 712 + val |= PCIE_CSR_LTSSM_EN(port->slot) | 713 + PCIE_CSR_ASPM_L1_EN(port->slot); 714 + writel(val, pcie->base + PCIE_SYS_CFG_V2); 715 + } 716 + 717 + /* Assert all reset signals */ 718 + writel(0, port->base + PCIE_RST_CTRL); 719 + 720 + /* 721 + * Enable PCIe link down reset, if link status changed from link up to 722 + * link down, this will reset MAC control registers and configuration 723 + * space. 724 + */ 725 + writel(PCIE_LINKDOWN_RST_EN, port->base + PCIE_RST_CTRL); 726 + 727 + /* De-assert PHY, PE, PIPE, MAC and configuration reset */ 728 + val = readl(port->base + PCIE_RST_CTRL); 729 + val |= PCIE_PHY_RSTB | PCIE_PERSTB | PCIE_PIPE_SRSTB | 730 + PCIE_MAC_SRSTB | PCIE_CRSTB; 731 + writel(val, port->base + PCIE_RST_CTRL); 732 + 733 + /* Set up vendor ID and class code */ 734 + if (soc->need_fix_class_id) { 735 + val = PCI_VENDOR_ID_MEDIATEK; 736 + writew(val, port->base + PCIE_CONF_VEND_ID); 737 + 738 + val = PCI_CLASS_BRIDGE_PCI; 739 + writew(val, port->base + PCIE_CONF_CLASS_ID); 740 + } 741 + 742 + /* 100ms timeout value should be enough for Gen1/2 training */ 743 + err = readl_poll_timeout(port->base + PCIE_LINK_STATUS_V2, val, 744 + !!(val & PCIE_PORT_LINKUP_V2), 20, 745 + 100 * USEC_PER_MSEC); 746 + if (err) 747 + return -ETIMEDOUT; 748 + 749 + /* Set INTx mask */ 750 + val = readl(port->base + PCIE_INT_MASK); 751 + val &= ~INTX_MASK; 752 + writel(val, port->base + PCIE_INT_MASK); 753 + 754 + if (IS_ENABLED(CONFIG_PCI_MSI)) 755 + mtk_pcie_enable_msi(port); 756 + 757 + /* Set AHB to PCIe translation windows */ 758 + size = mem->end - mem->start; 759 + val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size)); 760 + writel(val, port->base + PCIE_AHB_TRANS_BASE0_L); 761 + 762 + val = upper_32_bits(mem->start); 763 + writel(val, port->base + PCIE_AHB_TRANS_BASE0_H); 764 + 765 + /* Set PCIe to AXI translation memory space.*/ 766 + val = fls(0xffffffff) | WIN_ENABLE; 767 + writel(val, port->base + PCIE_AXI_WINDOW0); 657 768 658 769 return 0; 659 770 } ··· 1022 987 pcie->free_ck = NULL; 1023 988 } 1024 989 1025 - if (dev->pm_domain) { 1026 - pm_runtime_enable(dev); 1027 - pm_runtime_get_sync(dev); 1028 - } 990 + pm_runtime_enable(dev); 991 + pm_runtime_get_sync(dev); 1029 992 1030 993 /* enable top level clock */ 1031 994 err = clk_prepare_enable(pcie->free_ck); ··· 1035 1002 return 0; 1036 1003 1037 1004 err_free_ck: 1038 - if (dev->pm_domain) { 1039 - pm_runtime_put_sync(dev); 1040 - pm_runtime_disable(dev); 1041 - } 1005 + pm_runtime_put_sync(dev); 1006 + pm_runtime_disable(dev); 1042 1007 1043 1008 return err; 1044 1009 } ··· 1140 1109 if (err < 0) 1141 1110 return err; 1142 1111 1143 - devm_pci_remap_iospace(dev, &pcie->pio, pcie->io.start); 1144 - 1145 - return 0; 1146 - } 1147 - 1148 - static int mtk_pcie_register_host(struct pci_host_bridge *host) 1149 - { 1150 - struct mtk_pcie *pcie = pci_host_bridge_priv(host); 1151 - struct pci_bus *child; 1152 - int err; 1153 - 1154 - host->busnr = pcie->busn.start; 1155 - host->dev.parent = pcie->dev; 1156 - host->ops = pcie->soc->ops; 1157 - host->map_irq = of_irq_parse_and_map_pci; 1158 - host->swizzle_irq = pci_common_swizzle; 1159 - host->sysdata = pcie; 1160 - 1161 - err = pci_scan_root_bus_bridge(host); 1162 - if (err < 0) 1112 + err = devm_pci_remap_iospace(dev, &pcie->pio, pcie->io.start); 1113 + if (err) 1163 1114 return err; 1164 - 1165 - pci_bus_size_bridges(host->bus); 1166 - pci_bus_assign_resources(host->bus); 1167 - 1168 - list_for_each_entry(child, &host->bus->children, node) 
1169 - pcie_bus_configure_settings(child); 1170 - 1171 - pci_bus_add_devices(host->bus); 1172 1115 1173 1116 return 0; 1174 1117 } ··· 1173 1168 if (err) 1174 1169 goto put_resources; 1175 1170 1176 - err = mtk_pcie_register_host(host); 1171 + host->busnr = pcie->busn.start; 1172 + host->dev.parent = pcie->dev; 1173 + host->ops = pcie->soc->ops; 1174 + host->map_irq = of_irq_parse_and_map_pci; 1175 + host->swizzle_irq = pci_common_swizzle; 1176 + host->sysdata = pcie; 1177 + 1178 + err = pci_host_probe(host); 1177 1179 if (err) 1178 1180 goto put_resources; 1179 1181 ··· 1192 1180 1193 1181 return err; 1194 1182 } 1183 + 1184 + 1185 + static void mtk_pcie_free_resources(struct mtk_pcie *pcie) 1186 + { 1187 + struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); 1188 + struct list_head *windows = &host->windows; 1189 + 1190 + pci_free_resource_list(windows); 1191 + } 1192 + 1193 + static int mtk_pcie_remove(struct platform_device *pdev) 1194 + { 1195 + struct mtk_pcie *pcie = platform_get_drvdata(pdev); 1196 + struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie); 1197 + 1198 + pci_stop_root_bus(host->bus); 1199 + pci_remove_root_bus(host->bus); 1200 + mtk_pcie_free_resources(pcie); 1201 + 1202 + mtk_pcie_irq_teardown(pcie); 1203 + 1204 + mtk_pcie_put_resources(pcie); 1205 + 1206 + return 0; 1207 + } 1208 + 1209 + static int __maybe_unused mtk_pcie_suspend_noirq(struct device *dev) 1210 + { 1211 + struct mtk_pcie *pcie = dev_get_drvdata(dev); 1212 + struct mtk_pcie_port *port; 1213 + 1214 + if (list_empty(&pcie->ports)) 1215 + return 0; 1216 + 1217 + list_for_each_entry(port, &pcie->ports, list) { 1218 + clk_disable_unprepare(port->pipe_ck); 1219 + clk_disable_unprepare(port->obff_ck); 1220 + clk_disable_unprepare(port->axi_ck); 1221 + clk_disable_unprepare(port->aux_ck); 1222 + clk_disable_unprepare(port->ahb_ck); 1223 + clk_disable_unprepare(port->sys_ck); 1224 + phy_power_off(port->phy); 1225 + phy_exit(port->phy); 1226 + } 1227 + 1228 + clk_disable_unprepare(pcie->free_ck); 1229 + 1230 + return 0; 1231 + } 1232 + 1233 + static int __maybe_unused mtk_pcie_resume_noirq(struct device *dev) 1234 + { 1235 + struct mtk_pcie *pcie = dev_get_drvdata(dev); 1236 + struct mtk_pcie_port *port, *tmp; 1237 + 1238 + if (list_empty(&pcie->ports)) 1239 + return 0; 1240 + 1241 + clk_prepare_enable(pcie->free_ck); 1242 + 1243 + list_for_each_entry_safe(port, tmp, &pcie->ports, list) 1244 + mtk_pcie_enable_port(port); 1245 + 1246 + /* In case of EP was removed while system suspend. */ 1247 + if (list_empty(&pcie->ports)) 1248 + clk_disable_unprepare(pcie->free_ck); 1249 + 1250 + return 0; 1251 + } 1252 + 1253 + static const struct dev_pm_ops mtk_pcie_pm_ops = { 1254 + SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mtk_pcie_suspend_noirq, 1255 + mtk_pcie_resume_noirq) 1256 + }; 1195 1257 1196 1258 static const struct mtk_pcie_soc mtk_pcie_soc_v1 = { 1197 1259 .ops = &mtk_pcie_ops, ··· 1295 1209 1296 1210 static struct platform_driver mtk_pcie_driver = { 1297 1211 .probe = mtk_pcie_probe, 1212 + .remove = mtk_pcie_remove, 1298 1213 .driver = { 1299 1214 .name = "mtk-pcie", 1300 1215 .of_match_table = mtk_pcie_ids, 1301 1216 .suppress_bind_attrs = true, 1217 + .pm = &mtk_pcie_pm_ops, 1302 1218 }, 1303 1219 }; 1304 - builtin_platform_driver(mtk_pcie_driver); 1220 + module_platform_driver(mtk_pcie_driver); 1221 + MODULE_LICENSE("GPL v2");
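
The bus walk added to mtk_pcie_find_port() above resolves which root port a config access belongs to: starting from the access's bus, follow bus->self (the bridge) upward until the root bus is reached, and use the devfn observed there. A standalone model with simplified stand-in structures:

#include <stdio.h>

struct bus;

struct dev {
	int devfn;
	struct bus *bus;	/* the bus this device sits on */
};

struct bus {
	int number;		/* 0 == root bus */
	struct dev *self;	/* bridge leading down to this bus */
};

/* Climb toward the root bus; the last devfn seen is the root port's. */
static int root_port_devfn(struct bus *bus, int devfn)
{
	while (bus && bus->number) {
		devfn = bus->self->devfn;
		bus = bus->self->bus;
	}
	return devfn;
}

int main(void)
{
	struct bus root = { 0, NULL };
	struct dev port = { 8, &root };		/* root port at devfn 8 */
	struct bus child = { 1, &port };
	struct dev ep = { 0, &child };		/* endpoint behind the port */

	printf("root port devfn: %d\n", root_port_devfn(ep.bus, ep.devfn));
	return 0;
}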
-7
drivers/pci/controller/pcie-mobiveil.c
···
 	struct platform_device *pdev = pcie->pdev;
 	struct device_node *node = dev->of_node;
 	struct resource *res;
-	const char *type;
-
-	type = of_get_property(node, "device_type", NULL);
-	if (!type || strcmp(type, "pci")) {
-		dev_err(dev, "invalid \"device_type\" %s\n", type);
-		return -EINVAL;
-	}

 	/* map config resource */
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
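Editor's note: this hunk (and the matching ones in the two Xilinx drivers below) drops an open-coded device_type string check; validating device_type = "pci" belongs to the devicetree bindings, not to each host driver. Where a driver did still want the check, a hedged sketch using the generic of_node_is_type() accessor (assuming that helper is available in the tree) replaces the of_get_property()/strcmp() pair:

#include <linux/device.h>
#include <linux/of.h>

/* Sketch only: equivalent of the deleted check via of_node_is_type(). */
static int foo_check_device_type(struct device *dev)
{
	if (!of_node_is_type(dev->of_node, "pci")) {
		dev_err(dev, "node is not a PCI bridge\n");
		return -EINVAL;
	}
	return 0;
}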
-9
drivers/pci/controller/pcie-xilinx-nwl.c
···
 					   struct platform_device *pdev)
 {
 	struct device *dev = pcie->dev;
-	struct device_node *node = dev->of_node;
 	struct resource *res;
-	const char *type;
-
-	/* Check for device type */
-	type = of_get_property(node, "device_type", NULL);
-	if (!type || strcmp(type, "pci")) {
-		dev_err(dev, "invalid \"device_type\" %s\n", type);
-		return -EINVAL;
-	}

 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "breg");
 	pcie->breg_base = devm_ioremap_resource(dev, res);
-7
drivers/pci/controller/pcie-xilinx.c
···
 	struct device *dev = port->dev;
 	struct device_node *node = dev->of_node;
 	struct resource regs;
-	const char *type;
 	int err;
-
-	type = of_get_property(node, "device_type", NULL);
-	if (!type || strcmp(type, "pci")) {
-		dev_err(dev, "invalid \"device_type\" %s\n", type);
-		return -EINVAL;
-	}

 	err = of_address_to_resource(node, 0, &regs);
 	if (err) {
+1 -1
drivers/pci/controller/vmd.c
···
 {
 	struct vmd_dev *vmd = pci_get_drvdata(dev);

-	vmd_detach_resources(vmd);
 	sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
 	pci_stop_root_bus(vmd->bus);
 	pci_remove_root_bus(vmd->bus);
 	vmd_cleanup_srcu(vmd);
 	vmd_teardown_dma_ops(vmd);
+	vmd_detach_resources(vmd);
 	irq_domain_remove(vmd->irq_domain);
 }
+74
drivers/pci/hotplug/TODO
···
+Contributions are solicited in particular to remedy the following issues:
+
+cpcihp:
+
+* There are no implementations of the ->hardware_test, ->get_power and
+  ->set_power callbacks in struct cpci_hp_controller_ops.  Why were they
+  introduced?  Can they be removed from the struct?
+
+cpqphp:
+
+* The driver spawns a kthread cpqhp_event_thread() which is woken by the
+  hardirq handler cpqhp_ctrl_intr().  Convert this to threaded IRQ handling.
+  The kthread is also woken from the timer pushbutton_helper_thread(),
+  convert it to call irq_wake_thread().  Use pciehp as a template.
+
+* A large portion of cpqphp_ctrl.c and cpqphp_pci.c concerns resource
+  management.  Doesn't this duplicate functionality in the core?
+
+ibmphp:
+
+* Implementations of hotplug_slot_ops callbacks such as get_adapter_present()
+  in ibmphp_core.c create a copy of the struct slot on the stack, then perform
+  the actual operation on that copy.  Determine if this overhead is necessary,
+  delete it if not.  The functions also perform a NULL pointer check on the
+  struct hotplug_slot, this seems superfluous.
+
+* Several functions access the pci_slot member in struct hotplug_slot even
+  though pci_hotplug.h declares it private.  See get_max_bus_speed() for an
+  example.  Either the pci_slot member should no longer be declared private
+  or ibmphp should store a pointer to its bus in struct slot.  Probably the
+  former.
+
+* The functions get_max_adapter_speed() and get_bus_name() are commented out.
+  Can they be deleted?  There are also forward declarations at the top of
+  ibmphp_core.c as well as pointers in ibmphp_hotplug_slot_ops, likewise
+  commented out.
+
+* ibmphp_init_devno() takes a struct slot **, it could instead take a
+  struct slot *.
+
+* The return value of pci_hp_register() is not checked.
+
+* iounmap(io_mem) is called in the error path of ebda_rsrc_controller()
+  and once more in the error path of its caller ibmphp_access_ebda().
+
+* The various slot data structures are difficult to follow and need to be
+  simplified.  A lot of functions are too large and too complex, they need
+  to be broken up into smaller, manageable pieces.  Negative examples are
+  ebda_rsrc_controller() and configure_bridge().
+
+* A large portion of ibmphp_res.c and ibmphp_pci.c concerns resource
+  management.  Doesn't this duplicate functionality in the core?
+
+sgi_hotplug:
+
+* Several functions access the pci_slot member in struct hotplug_slot even
+  though pci_hotplug.h declares it private.  See sn_hp_destroy() for an
+  example.  Either the pci_slot member should no longer be declared private
+  or sgi_hotplug should store a pointer to it in struct slot.  Probably the
+  former.
+
+shpchp:
+
+* There is only a single implementation of struct hpc_ops.  Can the struct be
+  removed and its functions invoked directly?  This has already been done in
+  pciehp with commit 82a9e79ef132 ("PCI: pciehp: remove hpc_ops").  Clarify
+  if there was a specific reason not to apply the same change to shpchp.
+
+* The ->get_mode1_ECC_cap callback in shpchp_hpc_ops is never invoked.
+  Why was it introduced?  Can it be removed?
+
+* The hardirq handler shpc_isr() queues events on a workqueue.  It can be
+  simplified by converting it to threaded IRQ handling.  Use pciehp as a
+  template.
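Editor's note: the cpqphp and shpchp items in the TODO above both ask for a conversion from hand-rolled kthreads/workqueues to threaded IRQ handling. A minimal sketch of that pattern, with hypothetical names (not cpqphp or shpchp code): the hardirq half acknowledges the hardware and returns IRQ_WAKE_THREAD; the kernel then runs the thread half in process context.

#include <linux/interrupt.h>
#include <linux/io.h>

struct foo_ctrl {			/* hypothetical controller state */
	void __iomem *regs;
};

static bool foo_irq_pending(struct foo_ctrl *ctrl)
{
	return readl(ctrl->regs) & 0x1;	/* hypothetical status bit */
}

static irqreturn_t foo_hardirq(int irq, void *data)
{
	struct foo_ctrl *ctrl = data;

	if (!foo_irq_pending(ctrl))
		return IRQ_NONE;	/* not ours (shared line) */

	writel(0x1, ctrl->regs);	/* hypothetical: ack/silence source */
	return IRQ_WAKE_THREAD;		/* defer the real work */
}

static irqreturn_t foo_thread_fn(int irq, void *data)
{
	/* process context: may sleep, take mutexes, touch config space */
	return IRQ_HANDLED;
}

/* at probe time:
 *	request_threaded_irq(irq, foo_hardirq, foo_thread_fn,
 *			     IRQF_SHARED, "foo", ctrl);
 */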
+7 -3
drivers/pci/hotplug/acpiphp.h
···
  * struct slot - slot information for each *physical* slot
  */
 struct slot {
-	struct hotplug_slot *hotplug_slot;
+	struct hotplug_slot hotplug_slot;
 	struct acpiphp_slot *acpi_slot;
-	struct hotplug_slot_info info;
 	unsigned int sun;	/* ACPI _SUN (Slot User Number) value */
 };

 static inline const char *slot_name(struct slot *slot)
 {
-	return hotplug_slot_name(slot->hotplug_slot);
+	return hotplug_slot_name(&slot->hotplug_slot);
+}
+
+static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
+{
+	return container_of(hotplug_slot, struct slot, hotplug_slot);
 }

 /*
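Editor's note: this is the first of several conversions in the series from a separately allocated struct hotplug_slot (plus hotplug_slot_info) to an embedded member recovered with container_of(). A generic sketch of the pattern, with hypothetical names:

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/pci_hotplug.h>

struct foo_slot {				/* hypothetical driver struct */
	struct hotplug_slot hotplug_slot;	/* embedded, not a pointer */
	int private_state;
};

static inline struct foo_slot *to_foo_slot(struct hotplug_slot *hs)
{
	/* recover the containing struct from the member's address */
	return container_of(hs, struct foo_slot, hotplug_slot);
}

/* One allocation now covers both structs, and callbacks no longer need
 * the hotplug_slot->private pointer this series removes:
 */
static int foo_enable_slot(struct hotplug_slot *hs)
{
	struct foo_slot *slot = to_foo_slot(hs);

	slot->private_state = 1;
	return 0;
}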
+11 -25
drivers/pci/hotplug/acpiphp_core.c
···
 static int get_latch_status(struct hotplug_slot *slot, u8 *value);
 static int get_adapter_status(struct hotplug_slot *slot, u8 *value);

-static struct hotplug_slot_ops acpi_hotplug_slot_ops = {
+static const struct hotplug_slot_ops acpi_hotplug_slot_ops = {
 	.enable_slot		= enable_slot,
 	.disable_slot		= disable_slot,
 	.set_attention_status	= set_attention_status,
···
  */
 static int enable_slot(struct hotplug_slot *hotplug_slot)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);

 	pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
···
  */
 static int disable_slot(struct hotplug_slot *hotplug_slot)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);

 	pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
···
  */
 static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);

 	pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
···
  */
 static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);

 	pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
···
  */
 static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);

 	pr_debug("%s - physical_slot = %s\n", __func__, slot_name(slot));
···
 	if (!slot)
 		goto error;

-	slot->hotplug_slot = kzalloc(sizeof(*slot->hotplug_slot), GFP_KERNEL);
-	if (!slot->hotplug_slot)
-		goto error_slot;
-
-	slot->hotplug_slot->info = &slot->info;
-
-	slot->hotplug_slot->private = slot;
-	slot->hotplug_slot->ops = &acpi_hotplug_slot_ops;
+	slot->hotplug_slot.ops = &acpi_hotplug_slot_ops;

 	slot->acpi_slot = acpiphp_slot;
-	slot->hotplug_slot->info->power_status = acpiphp_get_power_status(slot->acpi_slot);
-	slot->hotplug_slot->info->attention_status = 0;
-	slot->hotplug_slot->info->latch_status = acpiphp_get_latch_status(slot->acpi_slot);
-	slot->hotplug_slot->info->adapter_status = acpiphp_get_adapter_status(slot->acpi_slot);

 	acpiphp_slot->slot = slot;
 	slot->sun = sun;
 	snprintf(name, SLOT_NAME_SIZE, "%u", sun);

-	retval = pci_hp_register(slot->hotplug_slot, acpiphp_slot->bus,
+	retval = pci_hp_register(&slot->hotplug_slot, acpiphp_slot->bus,
 				 acpiphp_slot->device, name);
 	if (retval == -EBUSY)
-		goto error_hpslot;
+		goto error_slot;
 	if (retval) {
 		pr_err("pci_hp_register failed with error %d\n", retval);
-		goto error_hpslot;
+		goto error_slot;
 	}

 	pr_info("Slot [%s] registered\n", slot_name(slot));

 	return 0;
-error_hpslot:
-	kfree(slot->hotplug_slot);
 error_slot:
 	kfree(slot);
 error:
···
 	pr_info("Slot [%s] unregistered\n", slot_name(slot));

-	pci_hp_deregister(slot->hotplug_slot);
-	kfree(slot->hotplug_slot);
+	pci_hp_deregister(&slot->hotplug_slot);
 	kfree(slot);
 }
+1 -1
drivers/pci/hotplug/acpiphp_ibm.c
···
 #define IBM_HARDWARE_ID1 "IBM37D0"
 #define IBM_HARDWARE_ID2 "IBM37D4"

-#define hpslot_to_sun(A) (((struct slot *)((A)->private))->sun)
+#define hpslot_to_sun(A) (to_slot(A)->sun)

 /* union apci_descriptor - allows access to the
  * various device descriptors that are embedded in the
+9 -2
drivers/pci/hotplug/cpci_hotplug.h
···
 	unsigned int devfn;
 	struct pci_bus *bus;
 	struct pci_dev *dev;
+	unsigned int latch_status:1;
+	unsigned int adapter_status:1;
 	unsigned int extracting;
-	struct hotplug_slot *hotplug_slot;
+	struct hotplug_slot hotplug_slot;
 	struct list_head slot_list;
 };
···
 static inline const char *slot_name(struct slot *slot)
 {
-	return hotplug_slot_name(slot->hotplug_slot);
+	return hotplug_slot_name(&slot->hotplug_slot);
+}
+
+static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
+{
+	return container_of(hotplug_slot, struct slot, hotplug_slot);
 }

 int cpci_hp_register_controller(struct cpci_hp_controller *controller);
+24 -81
drivers/pci/hotplug/cpci_hotplug_core.c
···
 static int get_adapter_status(struct hotplug_slot *slot, u8 *value);
 static int get_latch_status(struct hotplug_slot *slot, u8 *value);

-static struct hotplug_slot_ops cpci_hotplug_slot_ops = {
+static const struct hotplug_slot_ops cpci_hotplug_slot_ops = {
 	.enable_slot = enable_slot,
 	.disable_slot = disable_slot,
 	.set_attention_status = set_attention_status,
···
 };

 static int
-update_latch_status(struct hotplug_slot *hotplug_slot, u8 value)
-{
-	struct hotplug_slot_info info;
-
-	memcpy(&info, hotplug_slot->info, sizeof(struct hotplug_slot_info));
-	info.latch_status = value;
-	return pci_hp_change_slot_info(hotplug_slot, &info);
-}
-
-static int
-update_adapter_status(struct hotplug_slot *hotplug_slot, u8 value)
-{
-	struct hotplug_slot_info info;
-
-	memcpy(&info, hotplug_slot->info, sizeof(struct hotplug_slot_info));
-	info.adapter_status = value;
-	return pci_hp_change_slot_info(hotplug_slot, &info);
-}
-
-static int
 enable_slot(struct hotplug_slot *hotplug_slot)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	int retval = 0;

 	dbg("%s - physical_slot = %s", __func__, slot_name(slot));
···
 static int
 disable_slot(struct hotplug_slot *hotplug_slot)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	int retval = 0;

 	dbg("%s - physical_slot = %s", __func__, slot_name(slot));
···
 		goto disable_error;
 	}

-	if (update_adapter_status(slot->hotplug_slot, 0))
-		warn("failure to update adapter file");
+	slot->adapter_status = 0;

 	if (slot->extracting) {
 		slot->extracting = 0;
···
 static int
 get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);

 	*value = cpci_get_power_status(slot);
 	return 0;
···
 static int
 get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);

 	*value = cpci_get_attention_status(slot);
 	return 0;
···
 static int
 set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
 {
-	return cpci_set_attention_status(hotplug_slot->private, status);
+	return cpci_set_attention_status(to_slot(hotplug_slot), status);
 }

 static int
 get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	*value = hotplug_slot->info->adapter_status;
+	struct slot *slot = to_slot(hotplug_slot);
+
+	*value = slot->adapter_status;
 	return 0;
 }

 static int
 get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	*value = hotplug_slot->info->latch_status;
+	struct slot *slot = to_slot(hotplug_slot);
+
+	*value = slot->latch_status;
 	return 0;
 }

 static void release_slot(struct slot *slot)
 {
-	kfree(slot->hotplug_slot->info);
-	kfree(slot->hotplug_slot);
 	pci_dev_put(slot->dev);
 	kfree(slot);
 }
···
 cpci_hp_register_bus(struct pci_bus *bus, u8 first, u8 last)
 {
 	struct slot *slot;
-	struct hotplug_slot *hotplug_slot;
-	struct hotplug_slot_info *info;
 	char name[SLOT_NAME_SIZE];
 	int status;
 	int i;
···
 			goto error;
 		}

-		hotplug_slot =
-			kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL);
-		if (!hotplug_slot) {
-			status = -ENOMEM;
-			goto error_slot;
-		}
-		slot->hotplug_slot = hotplug_slot;
-
-		info = kzalloc(sizeof(struct hotplug_slot_info), GFP_KERNEL);
-		if (!info) {
-			status = -ENOMEM;
-			goto error_hpslot;
-		}
-		hotplug_slot->info = info;
-
 		slot->bus = bus;
 		slot->number = i;
 		slot->devfn = PCI_DEVFN(i, 0);

 		snprintf(name, SLOT_NAME_SIZE, "%02x:%02x", bus->number, i);

-		hotplug_slot->private = slot;
-		hotplug_slot->ops = &cpci_hotplug_slot_ops;
-
-		/*
-		 * Initialize the slot info structure with some known
-		 * good values.
-		 */
-		dbg("initializing slot %s", name);
-		info->power_status = cpci_get_power_status(slot);
-		info->attention_status = cpci_get_attention_status(slot);
+		slot->hotplug_slot.ops = &cpci_hotplug_slot_ops;

 		dbg("registering slot %s", name);
-		status = pci_hp_register(slot->hotplug_slot, bus, i, name);
+		status = pci_hp_register(&slot->hotplug_slot, bus, i, name);
 		if (status) {
 			err("pci_hp_register failed with error %d", status);
-			goto error_info;
+			goto error_slot;
 		}
 		dbg("slot registered with name: %s", slot_name(slot));
···
 		up_write(&list_rwsem);
 	}
 	return 0;
-error_info:
-	kfree(info);
-error_hpslot:
-	kfree(hotplug_slot);
 error_slot:
 	kfree(slot);
 error:
···
 			slots--;

 			dbg("deregistering slot %s", slot_name(slot));
-			pci_hp_deregister(slot->hotplug_slot);
+			pci_hp_deregister(&slot->hotplug_slot);
 			release_slot(slot);
 		}
 	}
···
 			  __func__, slot_name(slot));
 		dev = pci_get_slot(slot->bus, PCI_DEVFN(slot->number, 0));
 		if (dev) {
-			if (update_adapter_status(slot->hotplug_slot, 1))
-				warn("failure to update adapter file");
-			if (update_latch_status(slot->hotplug_slot, 1))
-				warn("failure to update latch file");
+			slot->adapter_status = 1;
+			slot->latch_status = 1;
 			slot->dev = dev;
 		}
 	}
···
 			dbg("%s - slot %s HS_CSR (2) = %04x",
 			    __func__, slot_name(slot), hs_csr);

-			if (update_latch_status(slot->hotplug_slot, 1))
-				warn("failure to update latch file");
-
-			if (update_adapter_status(slot->hotplug_slot, 1))
-				warn("failure to update adapter file");
+			slot->latch_status = 1;
+			slot->adapter_status = 1;

 			cpci_led_off(slot);
···
 			    __func__, slot_name(slot), hs_csr);

 			if (!slot->extracting) {
-				if (update_latch_status(slot->hotplug_slot, 0))
-					warn("failure to update latch file");
-
+				slot->latch_status = 0;
 				slot->extracting = 1;
 				atomic_inc(&extracting);
 			}
···
 				 */
 				err("card in slot %s was improperly removed",
 				    slot_name(slot));
-				if (update_adapter_status(slot->hotplug_slot, 0))
-					warn("failure to update adapter file");
+				slot->adapter_status = 0;
 				slot->extracting = 0;
 				atomic_dec(&extracting);
 			}
···
 		goto cleanup_null;
 	list_for_each_entry_safe(slot, tmp, &slot_list, slot_list) {
 		list_del(&slot->slot_list);
-		pci_hp_deregister(slot->hotplug_slot);
+		pci_hp_deregister(&slot->hotplug_slot);
 		release_slot(slot);
 	}
 cleanup_null:
+2 -4
drivers/pci/hotplug/cpci_hotplug_pci.c
···
 				      slot->devfn,
 				      hs_cap + 2,
 				      hs_csr)) {
-			err("Could not set LOO for slot %s",
-			    hotplug_slot_name(slot->hotplug_slot));
+			err("Could not set LOO for slot %s", slot_name(slot));
 			return -ENODEV;
 		}
 	}
···
 				      slot->devfn,
 				      hs_cap + 2,
 				      hs_csr)) {
-			err("Could not clear LOO for slot %s",
-			    hotplug_slot_name(slot->hotplug_slot));
+			err("Could not clear LOO for slot %s", slot_name(slot));
 			return -ENODEV;
 		}
 	}
+7 -2
drivers/pci/hotplug/cpqphp.h
···
 	u8 hp_slot;
 	struct controller *ctrl;
 	void __iomem *p_sm_slot;
-	struct hotplug_slot *hotplug_slot;
+	struct hotplug_slot hotplug_slot;
 };

 struct pci_resource {
···
 static inline const char *slot_name(struct slot *slot)
 {
-	return hotplug_slot_name(slot->hotplug_slot);
+	return hotplug_slot_name(&slot->hotplug_slot);
+}
+
+static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
+{
+	return container_of(hotplug_slot, struct slot, hotplug_slot);
 }

 /*
+13 -48
drivers/pci/hotplug/cpqphp_core.c
···
 {
 	u32 tempdword;
 	u32 number_of_slots;
-	u8 physical_slot;

 	if (!ctrl)
 		return 1;
···
 	number_of_slots = readb(ctrl->hpc_reg + SLOT_MASK) & 0x0F;
 	/* Loop through slots */
 	while (number_of_slots) {
-		physical_slot = tempdword;
 		writeb(0, ctrl->hpc_reg + SLOT_SERR);
 		tempdword++;
 		number_of_slots--;
···
 	while (old_slot) {
 		next_slot = old_slot->next;
-		pci_hp_deregister(old_slot->hotplug_slot);
-		kfree(old_slot->hotplug_slot->info);
-		kfree(old_slot->hotplug_slot);
+		pci_hp_deregister(&old_slot->hotplug_slot);
 		kfree(old_slot);
 		old_slot = next_slot;
 	}
···
 static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status)
 {
 	struct pci_func *slot_func;
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	struct controller *ctrl = slot->ctrl;
 	u8 bus;
 	u8 devfn;
···
 static int process_SI(struct hotplug_slot *hotplug_slot)
 {
 	struct pci_func *slot_func;
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	struct controller *ctrl = slot->ctrl;
 	u8 bus;
 	u8 devfn;
···
 static int process_SS(struct hotplug_slot *hotplug_slot)
 {
 	struct pci_func *slot_func;
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	struct controller *ctrl = slot->ctrl;
 	u8 bus;
 	u8 devfn;
···
 static int hardware_test(struct hotplug_slot *hotplug_slot, u32 value)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	struct controller *ctrl = slot->ctrl;

 	dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
···
 static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	struct controller *ctrl = slot->ctrl;

 	dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
···
 static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	struct controller *ctrl = slot->ctrl;

 	dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
···
 static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	struct controller *ctrl = slot->ctrl;

 	dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
···
 static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	struct controller *ctrl = slot->ctrl;

 	dbg("%s - physical_slot = %s\n", __func__, slot_name(slot));
···
 	return 0;
 }

-static struct hotplug_slot_ops cpqphp_hotplug_slot_ops = {
+static const struct hotplug_slot_ops cpqphp_hotplug_slot_ops = {
 	.set_attention_status	= set_attention_status,
 	.enable_slot		= process_SI,
 	.disable_slot		= process_SS,
···
 			       void __iomem *smbios_table)
 {
 	struct slot *slot;
-	struct hotplug_slot *hotplug_slot;
-	struct hotplug_slot_info *hotplug_slot_info;
 	struct pci_bus *bus = ctrl->pci_bus;
 	u8 number_of_slots;
 	u8 slot_device;
···
 			result = -ENOMEM;
 			goto error;
 		}
-
-		slot->hotplug_slot = kzalloc(sizeof(*(slot->hotplug_slot)),
-					     GFP_KERNEL);
-		if (!slot->hotplug_slot) {
-			result = -ENOMEM;
-			goto error_slot;
-		}
-		hotplug_slot = slot->hotplug_slot;
-
-		hotplug_slot->info = kzalloc(sizeof(*(hotplug_slot->info)),
-					     GFP_KERNEL);
-		if (!hotplug_slot->info) {
-			result = -ENOMEM;
-			goto error_hpslot;
-		}
-		hotplug_slot_info = hotplug_slot->info;

 		slot->ctrl = ctrl;
 		slot->bus = ctrl->bus;
···
 			((read_slot_enable(ctrl) << 2) >> ctrl_slot) & 0x04;

 		/* register this slot with the hotplug pci core */
-		hotplug_slot->private = slot;
 		snprintf(name, SLOT_NAME_SIZE, "%u", slot->number);
-		hotplug_slot->ops = &cpqphp_hotplug_slot_ops;
-
-		hotplug_slot_info->power_status = get_slot_enabled(ctrl, slot);
-		hotplug_slot_info->attention_status =
-			cpq_get_attention_status(ctrl, slot);
-		hotplug_slot_info->latch_status =
-			cpq_get_latch_status(ctrl, slot);
-		hotplug_slot_info->adapter_status =
-			get_presence_status(ctrl, slot);
+		slot->hotplug_slot.ops = &cpqphp_hotplug_slot_ops;

 		dbg("registering bus %d, dev %d, number %d, ctrl->slot_device_offset %d, slot %d\n",
 		    slot->bus, slot->device,
 		    slot->number, ctrl->slot_device_offset,
 		    slot_number);
-		result = pci_hp_register(hotplug_slot,
+		result = pci_hp_register(&slot->hotplug_slot,
 					 ctrl->pci_dev->bus,
 					 slot->device,
 					 name);
 		if (result) {
 			err("pci_hp_register failed with error %d\n", result);
-			goto error_info;
+			goto error_slot;
 		}

 		slot->next = ctrl->slot;
···
 	}

 	return 0;
-error_info:
-	kfree(hotplug_slot_info);
-error_hpslot:
-	kfree(hotplug_slot);
 error_slot:
 	kfree(slot);
 error:
+1 -30
drivers/pci/hotplug/cpqphp_ctrl.c
···
 	for (slot = ctrl->slot; slot; slot = slot->next) {
 		if (slot->device == (hp_slot + ctrl->slot_device_offset))
 			continue;
-		if (!slot->hotplug_slot || !slot->hotplug_slot->info)
-			continue;
-		if (slot->hotplug_slot->info->adapter_status == 0)
+		if (get_presence_status(ctrl, slot) == 0)
 			continue;
 		/* If another adapter is running on the same segment but at a
 		 * lower speed/mode, we allow the new adapter to function at
···
 }

-static int update_slot_info(struct controller *ctrl, struct slot *slot)
-{
-	struct hotplug_slot_info *info;
-	int result;
-
-	info = kmalloc(sizeof(*info), GFP_KERNEL);
-	if (!info)
-		return -ENOMEM;
-
-	info->power_status = get_slot_enabled(ctrl, slot);
-	info->attention_status = cpq_get_attention_status(ctrl, slot);
-	info->latch_status = cpq_get_latch_status(ctrl, slot);
-	info->adapter_status = get_presence_status(ctrl, slot);
-	result = pci_hp_change_slot_info(slot->hotplug_slot, info);
-	kfree(info);
-	return result;
-}
-
 static void interrupt_event_handler(struct controller *ctrl)
 {
 	int loop = 0;
···
 			/***********POWER FAULT */
 			else if (ctrl->event_queue[loop].event_type == INT_POWER_FAULT) {
 				dbg("power fault\n");
-			} else {
-				/* refresh notification */
-				update_slot_info(ctrl, p_slot);
 			}

 			ctrl->event_queue[loop].event_type = 0;
···
 	if (rc)
 		dbg("%s: rc = %d\n", __func__, rc);

-	if (p_slot)
-		update_slot_info(ctrl, p_slot);
-
 	return rc;
 }
···
 	} else if (!rc) {
 		rc = 1;
 	}
-
-	if (p_slot)
-		update_slot_info(ctrl, p_slot);

 	return rc;
 }
+7 -2
drivers/pci/hotplug/ibmphp.h
···
 	u8 supported_bus_mode;
 	u8 flag;		/* this is for disable slot and polling */
 	u8 ctlr_index;
-	struct hotplug_slot *hotplug_slot;
+	struct hotplug_slot hotplug_slot;
 	struct controller *ctrl;
 	struct pci_func *func;
 	u8 irq[4];
···
 int ibmphp_update_slot_info(struct slot *);	/* This function is called from HPC, so we need it to not be be static */
 int ibmphp_configure_card(struct pci_func *, u8);
 int ibmphp_unconfigure_card(struct slot **, int);
-extern struct hotplug_slot_ops ibmphp_hotplug_slot_ops;
+extern const struct hotplug_slot_ops ibmphp_hotplug_slot_ops;
+
+static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
+{
+	return container_of(hotplug_slot, struct slot, hotplug_slot);
+}

 #endif				//__IBMPHP_H
+42 -79
drivers/pci/hotplug/ibmphp_core.c
···
 		break;
 	}
 	if (rc == 0) {
-		pslot = hotplug_slot->private;
-		if (pslot)
-			rc = ibmphp_hpc_writeslot(pslot, cmd);
-		else
-			rc = -ENODEV;
+		pslot = to_slot(hotplug_slot);
+		rc = ibmphp_hpc_writeslot(pslot, cmd);
 	}
 } else
 	rc = -ENODEV;
···
 	ibmphp_lock_operations();
 	if (hotplug_slot) {
-		pslot = hotplug_slot->private;
-		if (pslot) {
-			memcpy(&myslot, pslot, sizeof(struct slot));
-			rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
-						 &(myslot.status));
-			if (!rc)
-				rc = ibmphp_hpc_readslot(pslot,
-							 READ_EXTSLOTSTATUS,
-							 &(myslot.ext_status));
-			if (!rc)
-				*value = SLOT_ATTN(myslot.status,
-						   myslot.ext_status);
-		}
+		pslot = to_slot(hotplug_slot);
+		memcpy(&myslot, pslot, sizeof(struct slot));
+		rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
+					 &myslot.status);
+		if (!rc)
+			rc = ibmphp_hpc_readslot(pslot, READ_EXTSLOTSTATUS,
+						 &myslot.ext_status);
+		if (!rc)
+			*value = SLOT_ATTN(myslot.status, myslot.ext_status);
 	}

 	ibmphp_unlock_operations();
···
 		(ulong) hotplug_slot, (ulong) value);
 	ibmphp_lock_operations();
 	if (hotplug_slot) {
-		pslot = hotplug_slot->private;
-		if (pslot) {
-			memcpy(&myslot, pslot, sizeof(struct slot));
-			rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
-						 &(myslot.status));
-			if (!rc)
-				*value = SLOT_LATCH(myslot.status);
-		}
+		pslot = to_slot(hotplug_slot);
+		memcpy(&myslot, pslot, sizeof(struct slot));
+		rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
+					 &myslot.status);
+		if (!rc)
+			*value = SLOT_LATCH(myslot.status);
 	}

 	ibmphp_unlock_operations();
···
 		(ulong) hotplug_slot, (ulong) value);
 	ibmphp_lock_operations();
 	if (hotplug_slot) {
-		pslot = hotplug_slot->private;
-		if (pslot) {
-			memcpy(&myslot, pslot, sizeof(struct slot));
-			rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
-						 &(myslot.status));
-			if (!rc)
-				*value = SLOT_PWRGD(myslot.status);
-		}
+		pslot = to_slot(hotplug_slot);
+		memcpy(&myslot, pslot, sizeof(struct slot));
+		rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
+					 &myslot.status);
+		if (!rc)
+			*value = SLOT_PWRGD(myslot.status);
 	}

 	ibmphp_unlock_operations();
···
 		(ulong) hotplug_slot, (ulong) value);
 	ibmphp_lock_operations();
 	if (hotplug_slot) {
-		pslot = hotplug_slot->private;
-		if (pslot) {
-			memcpy(&myslot, pslot, sizeof(struct slot));
-			rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
-						 &(myslot.status));
-			if (!rc) {
-				present = SLOT_PRESENT(myslot.status);
-				if (present == HPC_SLOT_EMPTY)
-					*value = 0;
-				else
-					*value = 1;
-			}
+		pslot = to_slot(hotplug_slot);
+		memcpy(&myslot, pslot, sizeof(struct slot));
+		rc = ibmphp_hpc_readslot(pslot, READ_SLOTSTATUS,
+					 &myslot.status);
+		if (!rc) {
+			present = SLOT_PRESENT(myslot.status);
+			if (present == HPC_SLOT_EMPTY)
+				*value = 0;
+			else
+				*value = 1;
 		}
 	}
···
 	int rc = 0;
 	u8 mode = 0;
 	enum pci_bus_speed speed;
-	struct pci_bus *bus = slot->hotplug_slot->pci_slot->bus;
+	struct pci_bus *bus = slot->hotplug_slot.pci_slot->bus;

 	debug("%s - Entry slot[%p]\n", __func__, slot);
···
 ****************************************************************************/
 int ibmphp_update_slot_info(struct slot *slot_cur)
 {
-	struct hotplug_slot_info *info;
-	struct pci_bus *bus = slot_cur->hotplug_slot->pci_slot->bus;
-	int rc;
+	struct pci_bus *bus = slot_cur->hotplug_slot.pci_slot->bus;
 	u8 bus_speed;
 	u8 mode;
-
-	info = kmalloc(sizeof(struct hotplug_slot_info), GFP_KERNEL);
-	if (!info)
-		return -ENOMEM;
-
-	info->power_status = SLOT_PWRGD(slot_cur->status);
-	info->attention_status = SLOT_ATTN(slot_cur->status,
-					   slot_cur->ext_status);
-	info->latch_status = SLOT_LATCH(slot_cur->status);
-	if (!SLOT_PRESENT(slot_cur->status)) {
-		info->adapter_status = 0;
-		/* info->max_adapter_speed_status = MAX_ADAPTER_NONE; */
-	} else {
-		info->adapter_status = 1;
-		/* get_max_adapter_speed_1(slot_cur->hotplug_slot,
-					  &info->max_adapter_speed_status, 0); */
-	}

 	bus_speed = slot_cur->bus_on->current_speed;
 	mode = slot_cur->bus_on->current_bus_mode;
···
 	bus->cur_bus_speed = bus_speed;
 	// To do: bus_names

-	rc = pci_hp_change_slot_info(slot_cur->hotplug_slot, info);
-	kfree(info);
-	return rc;
+	return 0;
 }
···
 	list_for_each_entry_safe(slot_cur, next, &ibmphp_slot_head,
 				 ibm_slot_list) {
-		pci_hp_del(slot_cur->hotplug_slot);
+		pci_hp_del(&slot_cur->hotplug_slot);
 		slot_cur->ctrl = NULL;
 		slot_cur->bus_on = NULL;
···
 		 */
 		ibmphp_unconfigure_card(&slot_cur, -1);

-		pci_hp_destroy(slot_cur->hotplug_slot);
-		kfree(slot_cur->hotplug_slot->info);
-		kfree(slot_cur->hotplug_slot);
+		pci_hp_destroy(&slot_cur->hotplug_slot);
 		kfree(slot_cur);
 	}
 	debug("%s -- exit\n", __func__);
···
 	ibmphp_lock_operations();

 	debug("ENABLING SLOT........\n");
-	slot_cur = hs->private;
+	slot_cur = to_slot(hs);

 	rc = validate(slot_cur, ENABLE);
 	if (rc) {
···
 	slot_cur->func = kzalloc(sizeof(struct pci_func), GFP_KERNEL);
 	if (!slot_cur->func) {
-		/* We cannot do update_slot_info here, since no memory for
-		 * kmalloc n.e.ways, and update_slot_info allocates some */
+		/* do update_slot_info here? */
 		rc = -ENOMEM;
 		goto error_power;
 	}
···
 **************************************************************/
 static int ibmphp_disable_slot(struct hotplug_slot *hotplug_slot)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	int rc;

 	ibmphp_lock_operations();
···
 	goto exit;
 }

-struct hotplug_slot_ops ibmphp_hotplug_slot_ops = {
+const struct hotplug_slot_ops ibmphp_hotplug_slot_ops = {
 	.set_attention_status = set_attention_status,
 	.enable_slot = enable_slot,
 	.disable_slot = ibmphp_disable_slot,
+10 -60
drivers/pci/hotplug/ibmphp_ebda.c
···
 	struct slot *slot;
 	int rc = 0;

-	if (!hotplug_slot || !hotplug_slot->private)
-		return -EINVAL;
-
-	slot = hotplug_slot->private;
+	slot = to_slot(hotplug_slot);
 	rc = ibmphp_hpc_readslot(slot, READ_ALLSTAT, NULL);
-	if (rc)
-		return rc;
-
-	// power - enabled:1  not:0
-	hotplug_slot->info->power_status = SLOT_POWER(slot->status);
-
-	// attention - off:0, on:1, blinking:2
-	hotplug_slot->info->attention_status = SLOT_ATTN(slot->status, slot->ext_status);
-
-	// latch - open:1 closed:0
-	hotplug_slot->info->latch_status = SLOT_LATCH(slot->status);
-
-	// pci board - present:1 not:0
-	if (SLOT_PRESENT(slot->status))
-		hotplug_slot->info->adapter_status = 1;
-	else
-		hotplug_slot->info->adapter_status = 0;
-	/*
-	if (slot->bus_on->supported_bus_mode
-	    && (slot->bus_on->supported_speed == BUS_SPEED_66))
-		hotplug_slot->info->max_bus_speed_status = BUS_SPEED_66PCIX;
-	else
-		hotplug_slot->info->max_bus_speed_status = slot->bus_on->supported_speed;
-	*/
-
 	return rc;
 }
···
 	u8 ctlr_id, temp, bus_index;
 	u16 ctlr, slot, bus;
 	u16 slot_num, bus_num, index;
-	struct hotplug_slot *hp_slot_ptr;
 	struct controller *hpc_ptr;
 	struct ebda_hpc_bus *bus_ptr;
 	struct ebda_hpc_slot *slot_ptr;
···
 			bus_info_ptr1 = kzalloc(sizeof(struct bus_info), GFP_KERNEL);
 			if (!bus_info_ptr1) {
 				rc = -ENOMEM;
-				goto error_no_hp_slot;
+				goto error_no_slot;
 			}
 			bus_info_ptr1->slot_min = slot_ptr->slot_num;
 			bus_info_ptr1->slot_max = slot_ptr->slot_num;
···
 					     (hpc_ptr->u.isa_ctlr.io_end - hpc_ptr->u.isa_ctlr.io_start + 1),
 					     "ibmphp")) {
 				rc = -ENODEV;
-				goto error_no_hp_slot;
+				goto error_no_slot;
 			}
 			hpc_ptr->irq = readb(io_mem + addr + 4);
 			addr += 5;
···
 			break;
 		default:
 			rc = -ENODEV;
-			goto error_no_hp_slot;
+			goto error_no_slot;
 		}

 		//reorganize chassis' linked list
···
 		// register slots with hpc core as well as create linked list of ibm slot
 		for (index = 0; index < hpc_ptr->slot_count; index++) {
-
-			hp_slot_ptr = kzalloc(sizeof(*hp_slot_ptr), GFP_KERNEL);
-			if (!hp_slot_ptr) {
-				rc = -ENOMEM;
-				goto error_no_hp_slot;
-			}
-
-			hp_slot_ptr->info = kzalloc(sizeof(struct hotplug_slot_info), GFP_KERNEL);
-			if (!hp_slot_ptr->info) {
-				rc = -ENOMEM;
-				goto error_no_hp_info;
-			}
-
 			tmp_slot = kzalloc(sizeof(*tmp_slot), GFP_KERNEL);
 			if (!tmp_slot) {
 				rc = -ENOMEM;
···
 			bus_info_ptr1 = ibmphp_find_same_bus_num(hpc_ptr->slots[index].slot_bus_num);
 			if (!bus_info_ptr1) {
-				kfree(tmp_slot);
 				rc = -ENODEV;
 				goto error;
 			}
···
 			tmp_slot->ctlr_index = hpc_ptr->slots[index].ctl_index;
 			tmp_slot->number = hpc_ptr->slots[index].slot_num;
-			tmp_slot->hotplug_slot = hp_slot_ptr;

-			hp_slot_ptr->private = tmp_slot;
-
-			rc = fillslotinfo(hp_slot_ptr);
+			rc = fillslotinfo(&tmp_slot->hotplug_slot);
 			if (rc)
 				goto error;

-			rc = ibmphp_init_devno((struct slot **) &hp_slot_ptr->private);
+			rc = ibmphp_init_devno(&tmp_slot);
 			if (rc)
 				goto error;
-			hp_slot_ptr->ops = &ibmphp_hotplug_slot_ops;
+			tmp_slot->hotplug_slot.ops = &ibmphp_hotplug_slot_ops;

 			// end of registering ibm slot with hotplug core

-			list_add(&((struct slot *)(hp_slot_ptr->private))->ibm_slot_list, &ibmphp_slot_head);
+			list_add(&tmp_slot->ibm_slot_list, &ibmphp_slot_head);
 		}

 		print_bus_info();
···
 	list_for_each_entry(tmp_slot, &ibmphp_slot_head, ibm_slot_list) {
 		snprintf(name, SLOT_NAME_SIZE, "%s", create_file_name(tmp_slot));
-		pci_hp_register(tmp_slot->hotplug_slot,
+		pci_hp_register(&tmp_slot->hotplug_slot,
 			pci_find_bus(0, tmp_slot->bus), tmp_slot->device, name);
 	}
···
 	return 0;

 error:
-	kfree(hp_slot_ptr->private);
+	kfree(tmp_slot);
 error_no_slot:
-	kfree(hp_slot_ptr->info);
-error_no_hp_info:
-	kfree(hp_slot_ptr);
-error_no_hp_slot:
 	free_ebda_hpc(hpc_ptr);
 error_no_hpc:
 	iounmap(io_mem);
+15 -38
drivers/pci/hotplug/pci_hotplug_core.c
···
 #define GET_STATUS(name, type)	\
 static int get_##name(struct hotplug_slot *slot, type *value)		\
 {									\
-	struct hotplug_slot_ops *ops = slot->ops;			\
+	const struct hotplug_slot_ops *ops = slot->ops;			\
 	int retval = 0;							\
-	if (!try_module_get(ops->owner))				\
+	if (!try_module_get(slot->owner))				\
 		return -ENODEV;						\
 	if (ops->get_##name)						\
 		retval = ops->get_##name(slot, value);			\
-	else								\
-		*value = slot->info->name;				\
-	module_put(ops->owner);						\
+	module_put(slot->owner);					\
 	return retval;							\
 }
···
 	power = (u8)(lpower & 0xff);
 	dbg("power = %d\n", power);

-	if (!try_module_get(slot->ops->owner)) {
+	if (!try_module_get(slot->owner)) {
 		retval = -ENODEV;
 		goto exit;
 	}
···
 		err("Illegal value specified for power\n");
 		retval = -EINVAL;
 	}
-	module_put(slot->ops->owner);
+	module_put(slot->owner);

 exit:
 	if (retval)
···
 static ssize_t attention_write_file(struct pci_slot *pci_slot, const char *buf,
 				    size_t count)
 {
-	struct hotplug_slot_ops *ops = pci_slot->hotplug->ops;
+	struct hotplug_slot *slot = pci_slot->hotplug;
+	const struct hotplug_slot_ops *ops = slot->ops;
 	unsigned long lattention;
 	u8 attention;
 	int retval = 0;
···
 	attention = (u8)(lattention & 0xff);
 	dbg(" - attention = %d\n", attention);

-	if (!try_module_get(ops->owner)) {
+	if (!try_module_get(slot->owner)) {
 		retval = -ENODEV;
 		goto exit;
 	}
 	if (ops->set_attention_status)
-		retval = ops->set_attention_status(pci_slot->hotplug, attention);
-	module_put(ops->owner);
+		retval = ops->set_attention_status(slot, attention);
+	module_put(slot->owner);

 exit:
 	if (retval)
···
 	test = (u32)(ltest & 0xffffffff);
 	dbg("test = %d\n", test);

-	if (!try_module_get(slot->ops->owner)) {
+	if (!try_module_get(slot->owner)) {
 		retval = -ENODEV;
 		goto exit;
 	}
 	if (slot->ops->hardware_test)
 		retval = slot->ops->hardware_test(slot, test);
-	module_put(slot->ops->owner);
+	module_put(slot->owner);

 exit:
 	if (retval)
···
 	if (slot == NULL)
 		return -ENODEV;
-	if ((slot->info == NULL) || (slot->ops == NULL))
+	if (slot->ops == NULL)
 		return -EINVAL;

-	slot->ops->owner = owner;
-	slot->ops->mod_name = mod_name;
+	slot->owner = owner;
+	slot->mod_name = mod_name;

 	/*
 	 * No problems if we call this interface from both ACPI_PCI_SLOT
···
 	pci_destroy_slot(pci_slot);
 }
 EXPORT_SYMBOL_GPL(pci_hp_destroy);
-
-/**
- * pci_hp_change_slot_info - changes the slot's information structure in the core
- * @slot: pointer to the slot whose info has changed
- * @info: pointer to the info copy into the slot's info structure
- *
- * @slot must have been registered with the pci
- * hotplug subsystem previously with a call to pci_hp_register().
- *
- * Returns 0 if successful, anything else for an error.
- */
-int pci_hp_change_slot_info(struct hotplug_slot *slot,
-			    struct hotplug_slot_info *info)
-{
-	if (!slot || !info)
-		return -ENODEV;
-
-	memcpy(slot->info, info, sizeof(struct hotplug_slot_info));
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(pci_hp_change_slot_info);

 static int __init pci_hotplug_init(void)
 {
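Editor's note: two related API changes meet in this hunk. The @owner and @mod_name bookkeeping moves from struct hotplug_slot_ops into struct hotplug_slot, which is what lets every driver in this series declare its ops const, and the fallback to a cached slot->info value disappears along with hotplug_slot_info, so reads always call the driver. A sketch of what a driver now provides (hypothetical names):

/* Sketch: with owner/mod_name stored in struct hotplug_slot, the ops
 * table is immutable and can live in rodata.
 */
#include <linux/pci_hotplug.h>

static int foo_get_power_status(struct hotplug_slot *hs, u8 *value)
{
	*value = 1;	/* hypothetical: read live hardware state */
	return 0;
}

static const struct hotplug_slot_ops foo_hotplug_slot_ops = {
	.get_power_status = foo_get_power_status,
};

/* pci_hp_register() is (to my understanding) a macro wrapping
 * __pci_hp_register() with THIS_MODULE/KBUILD_MODNAME, which the core
 * now stores in slot->owner and slot->mod_name instead of writing
 * them into the (now const) ops.
 */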
+65 -66
drivers/pci/hotplug/pciehp.h
···
 #include <linux/pci.h>
 #include <linux/pci_hotplug.h>
 #include <linux/delay.h>
-#include <linux/sched/signal.h>		/* signal_pending() */
 #include <linux/mutex.h>
 #include <linux/rwsem.h>
 #include <linux/workqueue.h>
···
 #define SLOT_NAME_SIZE 10

 /**
- * struct slot - PCIe hotplug slot
- * @state: current state machine position
- * @ctrl: pointer to the slot's controller structure
- * @hotplug_slot: pointer to the structure registered with the PCI hotplug core
- * @work: work item to turn the slot on or off after 5 seconds in response to
- *	an Attention Button press
- * @lock: protects reads and writes of @state;
- *	protects scheduling, execution and cancellation of @work
- */
-struct slot {
-	u8 state;
-	struct controller *ctrl;
-	struct hotplug_slot *hotplug_slot;
-	struct delayed_work work;
-	struct mutex lock;
-};
-
-/**
  * struct controller - PCIe hotplug controller
- * @ctrl_lock: serializes writes to the Slot Control register
  * @pcie: pointer to the controller's PCIe port service device
- * @reset_lock: prevents access to the Data Link Layer Link Active bit in the
- *	Link Status register and to the Presence Detect State bit in the Slot
- *	Status register during a slot reset which may cause them to flap
- * @slot: pointer to the controller's slot structure
- * @queue: wait queue to wake up on reception of a Command Completed event,
- *	used for synchronous writes to the Slot Control register
  * @slot_cap: cached copy of the Slot Capabilities register
  * @slot_ctrl: cached copy of the Slot Control register
- * @poll_thread: thread to poll for slot events if no IRQ is available,
- *	enabled with pciehp_poll_mode module parameter
+ * @ctrl_lock: serializes writes to the Slot Control register
  * @cmd_started: jiffies when the Slot Control register was last written;
  *	the next write is allowed 1 second later, absent a Command Completed
  *	interrupt (PCIe r4.0, sec 6.7.3.2)
  * @cmd_busy: flag set on Slot Control register write, cleared by IRQ handler
  *	on reception of a Command Completed event
- * @link_active_reporting: cached copy of Data Link Layer Link Active Reporting
- *	Capable bit in Link Capabilities register; if this bit is zero, the
- *	Data Link Layer Link Active bit in the Link Status register will never
- *	be set and the driver is thus confined to wait 1 second before assuming
- *	the link to a hotplugged device is up and accessing it
+ * @queue: wait queue to wake up on reception of a Command Completed event,
+ *	used for synchronous writes to the Slot Control register
+ * @pending_events: used by the IRQ handler to save events retrieved from the
+ *	Slot Status register for later consumption by the IRQ thread
  * @notification_enabled: whether the IRQ was requested successfully
  * @power_fault_detected: whether a power fault was detected by the hardware
  *	that has not yet been cleared by the user
- * @pending_events: used by the IRQ handler to save events retrieved from the
- *	Slot Status register for later consumption by the IRQ thread
+ * @poll_thread: thread to poll for slot events if no IRQ is available,
+ *	enabled with pciehp_poll_mode module parameter
+ * @state: current state machine position
+ * @state_lock: protects reads and writes of @state;
+ *	protects scheduling, execution and cancellation of @button_work
+ * @button_work: work item to turn the slot on or off after 5 seconds
+ *	in response to an Attention Button press
+ * @hotplug_slot: structure registered with the PCI hotplug core
+ * @reset_lock: prevents access to the Data Link Layer Link Active bit in the
+ *	Link Status register and to the Presence Detect State bit in the Slot
+ *	Status register during a slot reset which may cause them to flap
  * @request_result: result of last user request submitted to the IRQ thread
  * @requester: wait queue to wake up on completion of user request,
  *	used for synchronous slot enable/disable request via sysfs
+ *
+ * PCIe hotplug has a 1:1 relationship between controller and slot, hence
+ * unlike other drivers, the two aren't represented by separate structures.
  */
 struct controller {
-	struct mutex ctrl_lock;
 	struct pcie_device *pcie;
-	struct rw_semaphore reset_lock;
-	struct slot *slot;
-	wait_queue_head_t queue;
-	u32 slot_cap;
-	u16 slot_ctrl;
-	struct task_struct *poll_thread;
-	unsigned long cmd_started;	/* jiffies */
+
+	u32 slot_cap;				/* capabilities and quirks */
+
+	u16 slot_ctrl;				/* control register access */
+	struct mutex ctrl_lock;
+	unsigned long cmd_started;
 	unsigned int cmd_busy:1;
-	unsigned int link_active_reporting:1;
+	wait_queue_head_t queue;
+
+	atomic_t pending_events;		/* event handling */
 	unsigned int notification_enabled:1;
 	unsigned int power_fault_detected;
-	atomic_t pending_events;
+	struct task_struct *poll_thread;
+
+	u8 state;				/* state machine */
+	struct mutex state_lock;
+	struct delayed_work button_work;
+
+	struct hotplug_slot hotplug_slot;	/* hotplug core interface */
+	struct rw_semaphore reset_lock;
 	int request_result;
 	wait_queue_head_t requester;
 };
···
 #define NO_CMD_CMPL(ctrl)	((ctrl)->slot_cap & PCI_EXP_SLTCAP_NCCS)
 #define PSN(ctrl)		(((ctrl)->slot_cap & PCI_EXP_SLTCAP_PSN) >> 19)

-int pciehp_sysfs_enable_slot(struct slot *slot);
-int pciehp_sysfs_disable_slot(struct slot *slot);
 void pciehp_request(struct controller *ctrl, int action);
-void pciehp_handle_button_press(struct slot *slot);
-void pciehp_handle_disable_request(struct slot *slot);
-void pciehp_handle_presence_or_link_change(struct slot *slot, u32 events);
-int pciehp_configure_device(struct slot *p_slot);
-void pciehp_unconfigure_device(struct slot *p_slot);
+void pciehp_handle_button_press(struct controller *ctrl);
+void pciehp_handle_disable_request(struct controller *ctrl);
+void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events);
+int pciehp_configure_device(struct controller *ctrl);
+void pciehp_unconfigure_device(struct controller *ctrl, bool presence);
 void pciehp_queue_pushbutton_work(struct work_struct *work);
 struct controller *pcie_init(struct pcie_device *dev);
 int pcie_init_notification(struct controller *ctrl);
 void pcie_shutdown_notification(struct controller *ctrl);
 void pcie_clear_hotplug_events(struct controller *ctrl);
-int pciehp_power_on_slot(struct slot *slot);
-void pciehp_power_off_slot(struct slot *slot);
-void pciehp_get_power_status(struct slot *slot, u8 *status);
-void pciehp_get_attention_status(struct slot *slot, u8 *status);
+void pcie_enable_interrupt(struct controller *ctrl);
+void pcie_disable_interrupt(struct controller *ctrl);
+int pciehp_power_on_slot(struct controller *ctrl);
+void pciehp_power_off_slot(struct controller *ctrl);
+void pciehp_get_power_status(struct controller *ctrl, u8 *status);

-void pciehp_set_attention_status(struct slot *slot, u8 status);
-void pciehp_get_latch_status(struct slot *slot, u8 *status);
-void pciehp_get_adapter_status(struct slot *slot, u8 *status);
-int pciehp_query_power_fault(struct slot *slot);
-void pciehp_green_led_on(struct slot *slot);
-void pciehp_green_led_off(struct slot *slot);
-void pciehp_green_led_blink(struct slot *slot);
+void pciehp_set_attention_status(struct controller *ctrl, u8 status);
+void pciehp_get_latch_status(struct controller *ctrl, u8 *status);
+int pciehp_query_power_fault(struct controller *ctrl);
+void pciehp_green_led_on(struct controller *ctrl);
+void pciehp_green_led_off(struct controller *ctrl);
+void pciehp_green_led_blink(struct controller *ctrl);
+bool pciehp_card_present(struct controller *ctrl);
+bool pciehp_card_present_or_link_active(struct controller *ctrl);
 int pciehp_check_link_status(struct controller *ctrl);
 bool pciehp_check_link_active(struct controller *ctrl);
 void pciehp_release_ctrl(struct controller *ctrl);
-int pciehp_reset_slot(struct slot *slot, int probe);

+int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot);
+int pciehp_sysfs_disable_slot(struct hotplug_slot *hotplug_slot);
+int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe);
+int pciehp_get_attention_status(struct hotplug_slot *hotplug_slot, u8 *status);
 int pciehp_set_raw_indicator_status(struct hotplug_slot *h_slot, u8 status);
 int pciehp_get_raw_indicator_status(struct hotplug_slot *h_slot, u8 *status);

-static inline const char *slot_name(struct slot *slot)
+static inline const char *slot_name(struct controller *ctrl)
 {
-	return hotplug_slot_name(slot->hotplug_slot);
+	return hotplug_slot_name(&ctrl->hotplug_slot);
+}
+
+static inline struct controller *to_ctrl(struct hotplug_slot *hotplug_slot)
+{
+	return container_of(hotplug_slot, struct controller, hotplug_slot);
 }

 #endif				/* _PCIEHP_H */
+71 -97
drivers/pci/hotplug/pciehp_core.c
··· 23 23 #include <linux/types.h> 24 24 #include <linux/pci.h> 25 25 #include "pciehp.h" 26 - #include <linux/interrupt.h> 27 - #include <linux/time.h> 28 26 29 27 #include "../pci.h" 30 28 ··· 45 47 #define PCIE_MODULE_NAME "pciehp" 46 48 47 49 static int set_attention_status(struct hotplug_slot *slot, u8 value); 48 - static int enable_slot(struct hotplug_slot *slot); 49 - static int disable_slot(struct hotplug_slot *slot); 50 50 static int get_power_status(struct hotplug_slot *slot, u8 *value); 51 - static int get_attention_status(struct hotplug_slot *slot, u8 *value); 52 51 static int get_latch_status(struct hotplug_slot *slot, u8 *value); 53 52 static int get_adapter_status(struct hotplug_slot *slot, u8 *value); 54 - static int reset_slot(struct hotplug_slot *slot, int probe); 55 53 56 54 static int init_slot(struct controller *ctrl) 57 55 { 58 - struct slot *slot = ctrl->slot; 59 - struct hotplug_slot *hotplug = NULL; 60 - struct hotplug_slot_info *info = NULL; 61 - struct hotplug_slot_ops *ops = NULL; 56 + struct hotplug_slot_ops *ops; 62 57 char name[SLOT_NAME_SIZE]; 63 - int retval = -ENOMEM; 64 - 65 - hotplug = kzalloc(sizeof(*hotplug), GFP_KERNEL); 66 - if (!hotplug) 67 - goto out; 68 - 69 - info = kzalloc(sizeof(*info), GFP_KERNEL); 70 - if (!info) 71 - goto out; 58 + int retval; 72 59 73 60 /* Setup hotplug slot ops */ 74 61 ops = kzalloc(sizeof(*ops), GFP_KERNEL); 75 62 if (!ops) 76 - goto out; 63 + return -ENOMEM; 77 64 78 - ops->enable_slot = enable_slot; 79 - ops->disable_slot = disable_slot; 65 + ops->enable_slot = pciehp_sysfs_enable_slot; 66 + ops->disable_slot = pciehp_sysfs_disable_slot; 80 67 ops->get_power_status = get_power_status; 81 68 ops->get_adapter_status = get_adapter_status; 82 - ops->reset_slot = reset_slot; 69 + ops->reset_slot = pciehp_reset_slot; 83 70 if (MRL_SENS(ctrl)) 84 71 ops->get_latch_status = get_latch_status; 85 72 if (ATTN_LED(ctrl)) { 86 - ops->get_attention_status = get_attention_status; 73 + ops->get_attention_status = pciehp_get_attention_status; 87 74 ops->set_attention_status = set_attention_status; 88 75 } else if (ctrl->pcie->port->hotplug_user_indicators) { 89 76 ops->get_attention_status = pciehp_get_raw_indicator_status; ··· 76 93 } 77 94 78 95 /* register this slot with the hotplug pci core */ 79 - hotplug->info = info; 80 - hotplug->private = slot; 81 - hotplug->ops = ops; 82 - slot->hotplug_slot = hotplug; 96 + ctrl->hotplug_slot.ops = ops; 83 97 snprintf(name, SLOT_NAME_SIZE, "%u", PSN(ctrl)); 84 98 85 - retval = pci_hp_initialize(hotplug, 99 + retval = pci_hp_initialize(&ctrl->hotplug_slot, 86 100 ctrl->pcie->port->subordinate, 0, name); 87 - if (retval) 88 - ctrl_err(ctrl, "pci_hp_initialize failed: error %d\n", retval); 89 - out: 90 101 if (retval) { 102 + ctrl_err(ctrl, "pci_hp_initialize failed: error %d\n", retval); 91 103 kfree(ops); 92 - kfree(info); 93 - kfree(hotplug); 94 104 } 95 105 return retval; 96 106 } 97 107 98 108 static void cleanup_slot(struct controller *ctrl) 99 109 { 100 - struct hotplug_slot *hotplug_slot = ctrl->slot->hotplug_slot; 110 + struct hotplug_slot *hotplug_slot = &ctrl->hotplug_slot; 101 111 102 112 pci_hp_destroy(hotplug_slot); 103 113 kfree(hotplug_slot->ops); 104 - kfree(hotplug_slot->info); 105 - kfree(hotplug_slot); 106 114 } 107 115 108 116 /* ··· 101 127 */ 102 128 static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 status) 103 129 { 104 - struct slot *slot = hotplug_slot->private; 105 - struct pci_dev *pdev = slot->ctrl->pcie->port; 130 + struct controller *ctrl = 
to_ctrl(hotplug_slot); 131 + struct pci_dev *pdev = ctrl->pcie->port; 106 132 107 133 pci_config_pm_runtime_get(pdev); 108 - pciehp_set_attention_status(slot, status); 134 + pciehp_set_attention_status(ctrl, status); 109 135 pci_config_pm_runtime_put(pdev); 110 136 return 0; 111 - } 112 - 113 - 114 - static int enable_slot(struct hotplug_slot *hotplug_slot) 115 - { 116 - struct slot *slot = hotplug_slot->private; 117 - 118 - return pciehp_sysfs_enable_slot(slot); 119 - } 120 - 121 - 122 - static int disable_slot(struct hotplug_slot *hotplug_slot) 123 - { 124 - struct slot *slot = hotplug_slot->private; 125 - 126 - return pciehp_sysfs_disable_slot(slot); 127 137 } 128 138 129 139 static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value) 130 140 { 131 - struct slot *slot = hotplug_slot->private; 132 - struct pci_dev *pdev = slot->ctrl->pcie->port; 141 + struct controller *ctrl = to_ctrl(hotplug_slot); 142 + struct pci_dev *pdev = ctrl->pcie->port; 133 143 134 144 pci_config_pm_runtime_get(pdev); 135 - pciehp_get_power_status(slot, value); 145 + pciehp_get_power_status(ctrl, value); 136 146 pci_config_pm_runtime_put(pdev); 137 - return 0; 138 - } 139 - 140 - static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value) 141 - { 142 - struct slot *slot = hotplug_slot->private; 143 - 144 - pciehp_get_attention_status(slot, value); 145 147 return 0; 146 148 } 147 149 148 150 static int get_latch_status(struct hotplug_slot *hotplug_slot, u8 *value) 149 151 { 150 - struct slot *slot = hotplug_slot->private; 151 - struct pci_dev *pdev = slot->ctrl->pcie->port; 152 + struct controller *ctrl = to_ctrl(hotplug_slot); 153 + struct pci_dev *pdev = ctrl->pcie->port; 152 154 153 155 pci_config_pm_runtime_get(pdev); 154 - pciehp_get_latch_status(slot, value); 156 + pciehp_get_latch_status(ctrl, value); 155 157 pci_config_pm_runtime_put(pdev); 156 158 return 0; 157 159 } 158 160 159 161 static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value) 160 162 { 161 - struct slot *slot = hotplug_slot->private; 162 - struct pci_dev *pdev = slot->ctrl->pcie->port; 163 + struct controller *ctrl = to_ctrl(hotplug_slot); 164 + struct pci_dev *pdev = ctrl->pcie->port; 163 165 164 166 pci_config_pm_runtime_get(pdev); 165 - pciehp_get_adapter_status(slot, value); 167 + *value = pciehp_card_present_or_link_active(ctrl); 166 168 pci_config_pm_runtime_put(pdev); 167 169 return 0; 168 - } 169 - 170 - static int reset_slot(struct hotplug_slot *hotplug_slot, int probe) 171 - { 172 - struct slot *slot = hotplug_slot->private; 173 - 174 - return pciehp_reset_slot(slot, probe); 175 170 } 176 171 177 172 /** ··· 155 212 */ 156 213 static void pciehp_check_presence(struct controller *ctrl) 157 214 { 158 - struct slot *slot = ctrl->slot; 159 - u8 occupied; 215 + bool occupied; 160 216 161 217 down_read(&ctrl->reset_lock); 162 - mutex_lock(&slot->lock); 218 + mutex_lock(&ctrl->state_lock); 163 219 164 - pciehp_get_adapter_status(slot, &occupied); 165 - if ((occupied && (slot->state == OFF_STATE || 166 - slot->state == BLINKINGON_STATE)) || 167 - (!occupied && (slot->state == ON_STATE || 168 - slot->state == BLINKINGOFF_STATE))) 220 + occupied = pciehp_card_present_or_link_active(ctrl); 221 + if ((occupied && (ctrl->state == OFF_STATE || 222 + ctrl->state == BLINKINGON_STATE)) || 223 + (!occupied && (ctrl->state == ON_STATE || 224 + ctrl->state == BLINKINGOFF_STATE))) 169 225 pciehp_request(ctrl, PCI_EXP_SLTSTA_PDC); 170 226 171 - mutex_unlock(&slot->lock); 227 + 
+	mutex_unlock(&ctrl->state_lock);
 	up_read(&ctrl->reset_lock);
 }

···
 {
 	int rc;
 	struct controller *ctrl;
-	struct slot *slot;

 	/* If this is not a "hotplug" service, we have no business here. */
 	if (dev->service != PCIE_PORT_SERVICE_HP)
···
 	}

 	/* Publish to user space */
-	slot = ctrl->slot;
-	rc = pci_hp_add(slot->hotplug_slot);
+	rc = pci_hp_add(&ctrl->hotplug_slot);
 	if (rc) {
 		ctrl_err(ctrl, "Publication to user space failed (%d)\n", rc);
 		goto err_out_shutdown_notification;
···
 {
 	struct controller *ctrl = get_service_data(dev);

-	pci_hp_del(ctrl->slot->hotplug_slot);
+	pci_hp_del(&ctrl->hotplug_slot);
 	pcie_shutdown_notification(ctrl);
 	cleanup_slot(ctrl);
 	pciehp_release_ctrl(ctrl);
 }

 #ifdef CONFIG_PM
+static bool pme_is_native(struct pcie_device *dev)
+{
+	const struct pci_host_bridge *host;
+
+	host = pci_find_host_bridge(dev->port->bus);
+	return pcie_ports_native || host->native_pme;
+}
+
 static int pciehp_suspend(struct pcie_device *dev)
 {
+	/*
+	 * Disable hotplug interrupt so that it does not trigger
+	 * immediately when the downstream link goes down.
+	 */
+	if (pme_is_native(dev))
+		pcie_disable_interrupt(get_service_data(dev));
+
 	return 0;
 }

 static int pciehp_resume_noirq(struct pcie_device *dev)
 {
 	struct controller *ctrl = get_service_data(dev);
-	struct slot *slot = ctrl->slot;

 	/* pci_restore_state() just wrote to the Slot Control register */
 	ctrl->cmd_started = jiffies;
 	ctrl->cmd_busy = true;

 	/* clear spurious events from rediscovery of inserted card */
-	if (slot->state == ON_STATE || slot->state == BLINKINGOFF_STATE)
+	if (ctrl->state == ON_STATE || ctrl->state == BLINKINGOFF_STATE)
 		pcie_clear_hotplug_events(ctrl);

 	return 0;
···
 {
 	struct controller *ctrl = get_service_data(dev);

+	if (pme_is_native(dev))
+		pcie_enable_interrupt(ctrl);
+
 	pciehp_check_presence(ctrl);

 	return 0;
+}
+
+static int pciehp_runtime_resume(struct pcie_device *dev)
+{
+	struct controller *ctrl = get_service_data(dev);
+
+	/* pci_restore_state() just wrote to the Slot Control register */
+	ctrl->cmd_started = jiffies;
+	ctrl->cmd_busy = true;
+
+	/* clear spurious events from rediscovery of inserted card */
+	if ((ctrl->state == ON_STATE || ctrl->state == BLINKINGOFF_STATE) &&
+	    pme_is_native(dev))
+		pcie_clear_hotplug_events(ctrl);
+
+	return pciehp_resume(dev);
 }
 #endif /* PM */
···
 	.suspend	= pciehp_suspend,
 	.resume_noirq	= pciehp_resume_noirq,
 	.resume		= pciehp_resume,
+	.runtime_suspend = pciehp_suspend,
+	.runtime_resume	= pciehp_runtime_resume,
 #endif /* PM */
 };

-static int __init pcied_init(void)
+int __init pcie_hp_init(void)
 {
 	int retval = 0;

···
 	return retval;
 }
-device_initcall(pcied_init);
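Editor's note: the conversion above hinges on embedding struct hotplug_slot directly in struct controller and recovering the container with to_ctrl(), dropping the old ->private back-pointer. A minimal, self-contained userspace C sketch of that container_of() idiom follows; the struct members here are hypothetical stand-ins, not the driver's real definitions.

	#include <stddef.h>
	#include <stdio.h>

	/* Same definition the kernel uses. */
	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

	struct hotplug_slot { const void *ops; };	/* hypothetical stand-in */

	struct controller {
		int state;
		struct hotplug_slot hotplug_slot;	/* embedded, not a pointer */
	};

	/* Equivalent of the new to_ctrl() helper. */
	static struct controller *to_ctrl(struct hotplug_slot *slot)
	{
		return container_of(slot, struct controller, hotplug_slot);
	}

	int main(void)
	{
		struct controller ctrl = { .state = 42 };
		struct hotplug_slot *slot = &ctrl.hotplug_slot;

		/* Recover the enclosing controller from the embedded member. */
		printf("state = %d\n", to_ctrl(slot)->state);
		return 0;
	}

Because the offset of the embedded member is known at compile time, no extra pointer needs to be stored or freed, which is what lets the allocations and kfree() calls above disappear.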
drivers/pci/hotplug/pciehp_ctrl.c (+123, -140)
···
  *
  */

-#include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/types.h>
-#include <linux/slab.h>
 #include <linux/pm_runtime.h>
 #include <linux/pci.h>
-#include "../pci.h"
 #include "pciehp.h"

 /* The following routines constitute the bulk of the
    hotplug controller logic
  */

-static void set_slot_off(struct controller *ctrl, struct slot *pslot)
+#define SAFE_REMOVAL	 true
+#define SURPRISE_REMOVAL false
+
+static void set_slot_off(struct controller *ctrl)
 {
 	/* turn off slot, turn on Amber LED, turn off Green LED if supported*/
 	if (POWER_CTRL(ctrl)) {
-		pciehp_power_off_slot(pslot);
+		pciehp_power_off_slot(ctrl);

 		/*
 		 * After turning power off, we must wait for at least 1 second
···
 		msleep(1000);
 	}

-	pciehp_green_led_off(pslot);
-	pciehp_set_attention_status(pslot, 1);
+	pciehp_green_led_off(ctrl);
+	pciehp_set_attention_status(ctrl, 1);
 }

 /**
  * board_added - Called after a board has been added to the system.
- * @p_slot: &slot where board is added
+ * @ctrl: PCIe hotplug controller where board is added
  *
  * Turns power on for the board.
  * Configures board.
  */
-static int board_added(struct slot *p_slot)
+static int board_added(struct controller *ctrl)
 {
 	int retval = 0;
-	struct controller *ctrl = p_slot->ctrl;
 	struct pci_bus *parent = ctrl->pcie->port->subordinate;

 	if (POWER_CTRL(ctrl)) {
 		/* Power on slot */
-		retval = pciehp_power_on_slot(p_slot);
+		retval = pciehp_power_on_slot(ctrl);
 		if (retval)
 			return retval;
 	}

-	pciehp_green_led_blink(p_slot);
+	pciehp_green_led_blink(ctrl);

 	/* Check link training status */
 	retval = pciehp_check_link_status(ctrl);
···
 	}

 	/* Check for a power fault */
-	if (ctrl->power_fault_detected || pciehp_query_power_fault(p_slot)) {
-		ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(p_slot));
+	if (ctrl->power_fault_detected || pciehp_query_power_fault(ctrl)) {
+		ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl));
 		retval = -EIO;
 		goto err_exit;
 	}

-	retval = pciehp_configure_device(p_slot);
+	retval = pciehp_configure_device(ctrl);
 	if (retval) {
 		if (retval != -EEXIST) {
 			ctrl_err(ctrl, "Cannot add device at %04x:%02x:00\n",
···
 		}
 	}

-	pciehp_green_led_on(p_slot);
-	pciehp_set_attention_status(p_slot, 0);
+	pciehp_green_led_on(ctrl);
+	pciehp_set_attention_status(ctrl, 0);
 	return 0;

 err_exit:
-	set_slot_off(ctrl, p_slot);
+	set_slot_off(ctrl);
 	return retval;
 }

 /**
  * remove_board - Turns off slot and LEDs
- * @p_slot: slot where board is being removed
+ * @ctrl: PCIe hotplug controller where board is being removed
+ * @safe_removal: whether the board is safely removed (versus surprise removed)
  */
-static void remove_board(struct slot *p_slot)
+static void remove_board(struct controller *ctrl, bool safe_removal)
 {
-	struct controller *ctrl = p_slot->ctrl;
-
-	pciehp_unconfigure_device(p_slot);
+	pciehp_unconfigure_device(ctrl, safe_removal);

 	if (POWER_CTRL(ctrl)) {
-		pciehp_power_off_slot(p_slot);
+		pciehp_power_off_slot(ctrl);

 		/*
 		 * After turning power off, we must wait for at least 1 second
···
 	}

 	/* turn off Green LED */
-	pciehp_green_led_off(p_slot);
+	pciehp_green_led_off(ctrl);
 }

-static int pciehp_enable_slot(struct slot *slot);
-static int pciehp_disable_slot(struct slot *slot);
+static int pciehp_enable_slot(struct controller *ctrl);
+static int pciehp_disable_slot(struct controller *ctrl, bool safe_removal);

 void pciehp_request(struct controller *ctrl, int action)
 {
···

 void pciehp_queue_pushbutton_work(struct work_struct *work)
 {
-	struct slot *p_slot = container_of(work, struct slot, work.work);
-	struct controller *ctrl = p_slot->ctrl;
+	struct controller *ctrl = container_of(work, struct controller,
+						button_work.work);

-	mutex_lock(&p_slot->lock);
-	switch (p_slot->state) {
+	mutex_lock(&ctrl->state_lock);
+	switch (ctrl->state) {
 	case BLINKINGOFF_STATE:
 		pciehp_request(ctrl, DISABLE_SLOT);
 		break;
···
 	default:
 		break;
 	}
-	mutex_unlock(&p_slot->lock);
+	mutex_unlock(&ctrl->state_lock);
 }

-void pciehp_handle_button_press(struct slot *p_slot)
+void pciehp_handle_button_press(struct controller *ctrl)
 {
-	struct controller *ctrl = p_slot->ctrl;
-
-	mutex_lock(&p_slot->lock);
-	switch (p_slot->state) {
+	mutex_lock(&ctrl->state_lock);
+	switch (ctrl->state) {
 	case OFF_STATE:
 	case ON_STATE:
-		if (p_slot->state == ON_STATE) {
-			p_slot->state = BLINKINGOFF_STATE;
+		if (ctrl->state == ON_STATE) {
+			ctrl->state = BLINKINGOFF_STATE;
 			ctrl_info(ctrl, "Slot(%s): Powering off due to button press\n",
-				  slot_name(p_slot));
+				  slot_name(ctrl));
 		} else {
-			p_slot->state = BLINKINGON_STATE;
+			ctrl->state = BLINKINGON_STATE;
 			ctrl_info(ctrl, "Slot(%s) Powering on due to button press\n",
-				  slot_name(p_slot));
+				  slot_name(ctrl));
 		}
 		/* blink green LED and turn off amber */
-		pciehp_green_led_blink(p_slot);
-		pciehp_set_attention_status(p_slot, 0);
-		schedule_delayed_work(&p_slot->work, 5 * HZ);
+		pciehp_green_led_blink(ctrl);
+		pciehp_set_attention_status(ctrl, 0);
+		schedule_delayed_work(&ctrl->button_work, 5 * HZ);
 		break;
 	case BLINKINGOFF_STATE:
 	case BLINKINGON_STATE:
···
		 * press the attention again before the 5 sec. limit
		 * expires to cancel hot-add or hot-remove
		 */
-		ctrl_info(ctrl, "Slot(%s): Button cancel\n", slot_name(p_slot));
-		cancel_delayed_work(&p_slot->work);
-		if (p_slot->state == BLINKINGOFF_STATE) {
-			p_slot->state = ON_STATE;
-			pciehp_green_led_on(p_slot);
+		ctrl_info(ctrl, "Slot(%s): Button cancel\n", slot_name(ctrl));
+		cancel_delayed_work(&ctrl->button_work);
+		if (ctrl->state == BLINKINGOFF_STATE) {
+			ctrl->state = ON_STATE;
+			pciehp_green_led_on(ctrl);
 		} else {
-			p_slot->state = OFF_STATE;
-			pciehp_green_led_off(p_slot);
+			ctrl->state = OFF_STATE;
+			pciehp_green_led_off(ctrl);
 		}
-		pciehp_set_attention_status(p_slot, 0);
+		pciehp_set_attention_status(ctrl, 0);
 		ctrl_info(ctrl, "Slot(%s): Action canceled due to button press\n",
-			  slot_name(p_slot));
+			  slot_name(ctrl));
 		break;
 	default:
 		ctrl_err(ctrl, "Slot(%s): Ignoring invalid state %#x\n",
-			 slot_name(p_slot), p_slot->state);
+			 slot_name(ctrl), ctrl->state);
 		break;
 	}
-	mutex_unlock(&p_slot->lock);
+	mutex_unlock(&ctrl->state_lock);
 }

-void pciehp_handle_disable_request(struct slot *slot)
+void pciehp_handle_disable_request(struct controller *ctrl)
 {
-	struct controller *ctrl = slot->ctrl;
-
-	mutex_lock(&slot->lock);
-	switch (slot->state) {
+	mutex_lock(&ctrl->state_lock);
+	switch (ctrl->state) {
 	case BLINKINGON_STATE:
 	case BLINKINGOFF_STATE:
-		cancel_delayed_work(&slot->work);
+		cancel_delayed_work(&ctrl->button_work);
 		break;
 	}
-	slot->state = POWEROFF_STATE;
-	mutex_unlock(&slot->lock);
+	ctrl->state = POWEROFF_STATE;
+	mutex_unlock(&ctrl->state_lock);

-	ctrl->request_result = pciehp_disable_slot(slot);
+	ctrl->request_result = pciehp_disable_slot(ctrl, SAFE_REMOVAL);
 }

-void pciehp_handle_presence_or_link_change(struct slot *slot, u32 events)
+void pciehp_handle_presence_or_link_change(struct controller *ctrl, u32 events)
 {
-	struct controller *ctrl = slot->ctrl;
-	bool link_active;
-	u8 present;
+	bool present, link_active;

 	/*
 	 * If the slot is on and presence or link has changed, turn it off.
 	 * Even if it's occupied again, we cannot assume the card is the same.
 	 */
-	mutex_lock(&slot->lock);
-	switch (slot->state) {
+	mutex_lock(&ctrl->state_lock);
+	switch (ctrl->state) {
 	case BLINKINGOFF_STATE:
-		cancel_delayed_work(&slot->work);
+		cancel_delayed_work(&ctrl->button_work);
 		/* fall through */
 	case ON_STATE:
-		slot->state = POWEROFF_STATE;
-		mutex_unlock(&slot->lock);
+		ctrl->state = POWEROFF_STATE;
+		mutex_unlock(&ctrl->state_lock);
 		if (events & PCI_EXP_SLTSTA_DLLSC)
 			ctrl_info(ctrl, "Slot(%s): Link Down\n",
-				  slot_name(slot));
+				  slot_name(ctrl));
 		if (events & PCI_EXP_SLTSTA_PDC)
 			ctrl_info(ctrl, "Slot(%s): Card not present\n",
-				  slot_name(slot));
-		pciehp_disable_slot(slot);
+				  slot_name(ctrl));
+		pciehp_disable_slot(ctrl, SURPRISE_REMOVAL);
 		break;
 	default:
-		mutex_unlock(&slot->lock);
+		mutex_unlock(&ctrl->state_lock);
 		break;
 	}

 	/* Turn the slot on if it's occupied or link is up */
-	mutex_lock(&slot->lock);
-	pciehp_get_adapter_status(slot, &present);
+	mutex_lock(&ctrl->state_lock);
+	present = pciehp_card_present(ctrl);
 	link_active = pciehp_check_link_active(ctrl);
 	if (!present && !link_active) {
-		mutex_unlock(&slot->lock);
+		mutex_unlock(&ctrl->state_lock);
 		return;
 	}

-	switch (slot->state) {
+	switch (ctrl->state) {
 	case BLINKINGON_STATE:
-		cancel_delayed_work(&slot->work);
+		cancel_delayed_work(&ctrl->button_work);
 		/* fall through */
 	case OFF_STATE:
-		slot->state = POWERON_STATE;
-		mutex_unlock(&slot->lock);
+		ctrl->state = POWERON_STATE;
+		mutex_unlock(&ctrl->state_lock);
 		if (present)
 			ctrl_info(ctrl, "Slot(%s): Card present\n",
-				  slot_name(slot));
+				  slot_name(ctrl));
 		if (link_active)
 			ctrl_info(ctrl, "Slot(%s): Link Up\n",
-				  slot_name(slot));
-		ctrl->request_result = pciehp_enable_slot(slot);
+				  slot_name(ctrl));
+		ctrl->request_result = pciehp_enable_slot(ctrl);
 		break;
 	default:
-		mutex_unlock(&slot->lock);
+		mutex_unlock(&ctrl->state_lock);
 		break;
 	}
 }

-static int __pciehp_enable_slot(struct slot *p_slot)
+static int __pciehp_enable_slot(struct controller *ctrl)
 {
 	u8 getstatus = 0;
-	struct controller *ctrl = p_slot->ctrl;

-	pciehp_get_adapter_status(p_slot, &getstatus);
-	if (!getstatus) {
-		ctrl_info(ctrl, "Slot(%s): No adapter\n", slot_name(p_slot));
-		return -ENODEV;
-	}
-	if (MRL_SENS(p_slot->ctrl)) {
-		pciehp_get_latch_status(p_slot, &getstatus);
+	if (MRL_SENS(ctrl)) {
+		pciehp_get_latch_status(ctrl, &getstatus);
 		if (getstatus) {
 			ctrl_info(ctrl, "Slot(%s): Latch open\n",
-				  slot_name(p_slot));
+				  slot_name(ctrl));
 			return -ENODEV;
 		}
 	}

-	if (POWER_CTRL(p_slot->ctrl)) {
-		pciehp_get_power_status(p_slot, &getstatus);
+	if (POWER_CTRL(ctrl)) {
+		pciehp_get_power_status(ctrl, &getstatus);
 		if (getstatus) {
 			ctrl_info(ctrl, "Slot(%s): Already enabled\n",
-				  slot_name(p_slot));
+				  slot_name(ctrl));
 			return 0;
 		}
 	}

-	return board_added(p_slot);
+	return board_added(ctrl);
 }

-static int pciehp_enable_slot(struct slot *slot)
+static int pciehp_enable_slot(struct controller *ctrl)
 {
-	struct controller *ctrl = slot->ctrl;
 	int ret;

 	pm_runtime_get_sync(&ctrl->pcie->port->dev);
-	ret = __pciehp_enable_slot(slot);
+	ret = __pciehp_enable_slot(ctrl);
 	if (ret && ATTN_BUTTN(ctrl))
-		pciehp_green_led_off(slot);	/* may be blinking */
+		pciehp_green_led_off(ctrl);	/* may be blinking */
 	pm_runtime_put(&ctrl->pcie->port->dev);

-	mutex_lock(&slot->lock);
-	slot->state = ret ? OFF_STATE : ON_STATE;
-	mutex_unlock(&slot->lock);
+	mutex_lock(&ctrl->state_lock);
+	ctrl->state = ret ? OFF_STATE : ON_STATE;
+	mutex_unlock(&ctrl->state_lock);

 	return ret;
 }

-static int __pciehp_disable_slot(struct slot *p_slot)
+static int __pciehp_disable_slot(struct controller *ctrl, bool safe_removal)
 {
 	u8 getstatus = 0;
-	struct controller *ctrl = p_slot->ctrl;

-	if (POWER_CTRL(p_slot->ctrl)) {
-		pciehp_get_power_status(p_slot, &getstatus);
+	if (POWER_CTRL(ctrl)) {
+		pciehp_get_power_status(ctrl, &getstatus);
 		if (!getstatus) {
 			ctrl_info(ctrl, "Slot(%s): Already disabled\n",
-				  slot_name(p_slot));
+				  slot_name(ctrl));
 			return -EINVAL;
 		}
 	}

-	remove_board(p_slot);
+	remove_board(ctrl, safe_removal);
 	return 0;
 }

-static int pciehp_disable_slot(struct slot *slot)
+static int pciehp_disable_slot(struct controller *ctrl, bool safe_removal)
 {
-	struct controller *ctrl = slot->ctrl;
 	int ret;

 	pm_runtime_get_sync(&ctrl->pcie->port->dev);
-	ret = __pciehp_disable_slot(slot);
+	ret = __pciehp_disable_slot(ctrl, safe_removal);
 	pm_runtime_put(&ctrl->pcie->port->dev);

-	mutex_lock(&slot->lock);
-	slot->state = OFF_STATE;
-	mutex_unlock(&slot->lock);
+	mutex_lock(&ctrl->state_lock);
+	ctrl->state = OFF_STATE;
+	mutex_unlock(&ctrl->state_lock);

 	return ret;
 }

-int pciehp_sysfs_enable_slot(struct slot *p_slot)
+int pciehp_sysfs_enable_slot(struct hotplug_slot *hotplug_slot)
 {
-	struct controller *ctrl = p_slot->ctrl;
+	struct controller *ctrl = to_ctrl(hotplug_slot);

-	mutex_lock(&p_slot->lock);
-	switch (p_slot->state) {
+	mutex_lock(&ctrl->state_lock);
+	switch (ctrl->state) {
 	case BLINKINGON_STATE:
 	case OFF_STATE:
-		mutex_unlock(&p_slot->lock);
+		mutex_unlock(&ctrl->state_lock);
 		/*
 		 * The IRQ thread becomes a no-op if the user pulls out the
 		 * card before the thread wakes up, so initialize to -ENODEV.
···
 		return ctrl->request_result;
 	case POWERON_STATE:
 		ctrl_info(ctrl, "Slot(%s): Already in powering on state\n",
-			  slot_name(p_slot));
+			  slot_name(ctrl));
 		break;
 	case BLINKINGOFF_STATE:
 	case ON_STATE:
 	case POWEROFF_STATE:
 		ctrl_info(ctrl, "Slot(%s): Already enabled\n",
-			  slot_name(p_slot));
+			  slot_name(ctrl));
 		break;
 	default:
 		ctrl_err(ctrl, "Slot(%s): Invalid state %#x\n",
-			 slot_name(p_slot), p_slot->state);
+			 slot_name(ctrl), ctrl->state);
 		break;
 	}
-	mutex_unlock(&p_slot->lock);
+	mutex_unlock(&ctrl->state_lock);

 	return -ENODEV;
 }

-int pciehp_sysfs_disable_slot(struct slot *p_slot)
+int pciehp_sysfs_disable_slot(struct hotplug_slot *hotplug_slot)
 {
-	struct controller *ctrl = p_slot->ctrl;
+	struct controller *ctrl = to_ctrl(hotplug_slot);

-	mutex_lock(&p_slot->lock);
-	switch (p_slot->state) {
+	mutex_lock(&ctrl->state_lock);
+	switch (ctrl->state) {
 	case BLINKINGOFF_STATE:
 	case ON_STATE:
-		mutex_unlock(&p_slot->lock);
+		mutex_unlock(&ctrl->state_lock);
 		pciehp_request(ctrl, DISABLE_SLOT);
 		wait_event(ctrl->requester,
 			   !atomic_read(&ctrl->pending_events));
 		return ctrl->request_result;
 	case POWEROFF_STATE:
 		ctrl_info(ctrl, "Slot(%s): Already in powering off state\n",
-			  slot_name(p_slot));
+			  slot_name(ctrl));
 		break;
 	case BLINKINGON_STATE:
 	case OFF_STATE:
 	case POWERON_STATE:
 		ctrl_info(ctrl, "Slot(%s): Already disabled\n",
-			  slot_name(p_slot));
+			  slot_name(ctrl));
 		break;
 	default:
 		ctrl_err(ctrl, "Slot(%s): Invalid state %#x\n",
-			 slot_name(p_slot), p_slot->state);
+			 slot_name(ctrl), ctrl->state);
 		break;
 	}
-	mutex_unlock(&p_slot->lock);
+	mutex_unlock(&ctrl->state_lock);

 	return -ENODEV;
 }
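Editor's note: pciehp_handle_button_press() above is a small state machine in which a button press arms a 5-second countdown and a second press cancels it. A compressed userspace sketch of just the transition logic; the real driver additionally blinks LEDs, holds ctrl->state_lock, and schedules ctrl->button_work.

	#include <stdio.h>

	/* Simplified subset of the pciehp slot states. */
	enum { OFF_STATE, BLINKINGON_STATE, ON_STATE, BLINKINGOFF_STATE };

	/*
	 * A press either arms a 5-second countdown (OFF->BLINKINGON,
	 * ON->BLINKINGOFF) or, if one is already armed, cancels it and
	 * returns to the settled state.
	 */
	static int press_button(int state)
	{
		switch (state) {
		case OFF_STATE:		return BLINKINGON_STATE;  /* arm hot-add */
		case ON_STATE:		return BLINKINGOFF_STATE; /* arm hot-remove */
		case BLINKINGON_STATE:	return OFF_STATE;	  /* cancel */
		case BLINKINGOFF_STATE:	return ON_STATE;	  /* cancel */
		default:		return state;
		}
	}

	int main(void)
	{
		int s = ON_STATE;

		s = press_button(s);	/* ON -> BLINKINGOFF: removal armed */
		s = press_button(s);	/* second press within 5 s cancels it */
		printf("final state: %s\n", s == ON_STATE ? "ON" : "other");
		return 0;
	}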
drivers/pci/hotplug/pciehp_hpc.c (+74, -110)
···
  */

 #include <linux/kernel.h>
-#include <linux/module.h>
 #include <linux/types.h>
-#include <linux/signal.h>
 #include <linux/jiffies.h>
 #include <linux/kthread.h>
 #include <linux/pci.h>
 #include <linux/pm_runtime.h>
 #include <linux/interrupt.h>
-#include <linux/time.h>
 #include <linux/slab.h>

 #include "../pci.h"
···
 	if (pciehp_poll_mode) {
 		ctrl->poll_thread = kthread_run(&pciehp_poll, ctrl,
 					      "pciehp_poll-%s",
-					      slot_name(ctrl->slot));
+					      slot_name(ctrl));
 		return PTR_ERR_OR_ZERO(ctrl->poll_thread);
 	}

···
 	return ret;
 }

-static void pcie_wait_link_active(struct controller *ctrl)
-{
-	struct pci_dev *pdev = ctrl_dev(ctrl);
-
-	pcie_wait_for_link(pdev, true);
-}
-
 static bool pci_bus_check_dev(struct pci_bus *bus, int devfn)
 {
 	u32 l;
···
 	bool found;
 	u16 lnk_status;

-	/*
-	 * Data Link Layer Link Active Reporting must be capable for
-	 * hot-plug capable downstream port. But old controller might
-	 * not implement it. In this case, we wait for 1000 ms.
-	 */
-	if (ctrl->link_active_reporting)
-		pcie_wait_link_active(ctrl);
-	else
-		msleep(1000);
+	if (!pcie_wait_for_link(pdev, true))
+		return -1;

-	/* wait 100ms before read pci conf, and try in 1s */
-	msleep(100);
 	found = pci_bus_check_dev(ctrl->pcie->port->subordinate,
 				  PCI_DEVFN(0, 0));

···
 int pciehp_get_raw_indicator_status(struct hotplug_slot *hotplug_slot,
 				    u8 *status)
 {
-	struct slot *slot = hotplug_slot->private;
-	struct pci_dev *pdev = ctrl_dev(slot->ctrl);
+	struct controller *ctrl = to_ctrl(hotplug_slot);
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_ctrl;

 	pci_config_pm_runtime_get(pdev);
···
 	return 0;
 }

-void pciehp_get_attention_status(struct slot *slot, u8 *status)
+int pciehp_get_attention_status(struct hotplug_slot *hotplug_slot, u8 *status)
 {
-	struct controller *ctrl = slot->ctrl;
+	struct controller *ctrl = to_ctrl(hotplug_slot);
 	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_ctrl;

···
 		*status = 0xFF;
 		break;
 	}
+
+	return 0;
 }

-void pciehp_get_power_status(struct slot *slot, u8 *status)
+void pciehp_get_power_status(struct controller *ctrl, u8 *status)
 {
-	struct controller *ctrl = slot->ctrl;
 	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_ctrl;

···
 	}
 }

-void pciehp_get_latch_status(struct slot *slot, u8 *status)
+void pciehp_get_latch_status(struct controller *ctrl, u8 *status)
 {
-	struct pci_dev *pdev = ctrl_dev(slot->ctrl);
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_status;

 	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
 	*status = !!(slot_status & PCI_EXP_SLTSTA_MRLSS);
 }

-void pciehp_get_adapter_status(struct slot *slot, u8 *status)
+bool pciehp_card_present(struct controller *ctrl)
 {
-	struct pci_dev *pdev = ctrl_dev(slot->ctrl);
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_status;

 	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
-	*status = !!(slot_status & PCI_EXP_SLTSTA_PDS);
+	return slot_status & PCI_EXP_SLTSTA_PDS;
 }

-int pciehp_query_power_fault(struct slot *slot)
+/**
+ * pciehp_card_present_or_link_active() - whether given slot is occupied
+ * @ctrl: PCIe hotplug controller
+ *
+ * Unlike pciehp_card_present(), which determines presence solely from the
+ * Presence Detect State bit, this helper also returns true if the Link Active
+ * bit is set.  This is a concession to broken hotplug ports which hardwire
+ * Presence Detect State to zero, such as Wilocity's [1ae9:0200].
+ */
+bool pciehp_card_present_or_link_active(struct controller *ctrl)
 {
-	struct pci_dev *pdev = ctrl_dev(slot->ctrl);
+	return pciehp_card_present(ctrl) || pciehp_check_link_active(ctrl);
+}
+
+int pciehp_query_power_fault(struct controller *ctrl)
+{
+	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_status;

 	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &slot_status);
···
 int pciehp_set_raw_indicator_status(struct hotplug_slot *hotplug_slot,
 				    u8 status)
 {
-	struct slot *slot = hotplug_slot->private;
-	struct controller *ctrl = slot->ctrl;
+	struct controller *ctrl = to_ctrl(hotplug_slot);
 	struct pci_dev *pdev = ctrl_dev(ctrl);

 	pci_config_pm_runtime_get(pdev);
···
 	return 0;
 }

-void pciehp_set_attention_status(struct slot *slot, u8 value)
+void pciehp_set_attention_status(struct controller *ctrl, u8 value)
 {
-	struct controller *ctrl = slot->ctrl;
 	u16 slot_cmd;

 	if (!ATTN_LED(ctrl))
···
 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd);
 }

-void pciehp_green_led_on(struct slot *slot)
+void pciehp_green_led_on(struct controller *ctrl)
 {
-	struct controller *ctrl = slot->ctrl;
-
 	if (!PWR_LED(ctrl))
 		return;

···
 			      PCI_EXP_SLTCTL_PWR_IND_ON);
 }

-void pciehp_green_led_off(struct slot *slot)
+void pciehp_green_led_off(struct controller *ctrl)
 {
-	struct controller *ctrl = slot->ctrl;
-
 	if (!PWR_LED(ctrl))
 		return;

···
 			      PCI_EXP_SLTCTL_PWR_IND_OFF);
 }

-void pciehp_green_led_blink(struct slot *slot)
+void pciehp_green_led_blink(struct controller *ctrl)
 {
-	struct controller *ctrl = slot->ctrl;
-
 	if (!PWR_LED(ctrl))
 		return;

···
 			      PCI_EXP_SLTCTL_PWR_IND_BLINK);
 }

-int pciehp_power_on_slot(struct slot *slot)
+int pciehp_power_on_slot(struct controller *ctrl)
 {
-	struct controller *ctrl = slot->ctrl;
 	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 slot_status;
 	int retval;
···
 	return retval;
 }

-void pciehp_power_off_slot(struct slot *slot)
+void pciehp_power_off_slot(struct controller *ctrl)
 {
-	struct controller *ctrl = slot->ctrl;
-
 	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_OFF, PCI_EXP_SLTCTL_PCC);
 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
···
 	u16 status, events;

 	/*
-	 * Interrupts only occur in D3hot or shallower (PCIe r4.0, sec 6.7.3.4).
+	 * Interrupts only occur in D3hot or shallower and only if enabled
+	 * in the Slot Control register (PCIe r4.0, sec 6.7.3.4).
 	 */
-	if (pdev->current_state == PCI_D3cold)
+	if (pdev->current_state == PCI_D3cold ||
+	    (!(ctrl->slot_ctrl & PCI_EXP_SLTCTL_HPIE) && !pciehp_poll_mode))
 		return IRQ_NONE;

 	/*
···
 {
 	struct controller *ctrl = (struct controller *)dev_id;
 	struct pci_dev *pdev = ctrl_dev(ctrl);
-	struct slot *slot = ctrl->slot;
 	irqreturn_t ret;
 	u32 events;

···
 	/* Check Attention Button Pressed */
 	if (events & PCI_EXP_SLTSTA_ABP) {
 		ctrl_info(ctrl, "Slot(%s): Attention button pressed\n",
-			  slot_name(slot));
-		pciehp_handle_button_press(slot);
+			  slot_name(ctrl));
+		pciehp_handle_button_press(ctrl);
 	}

 	/* Check Power Fault Detected */
 	if ((events & PCI_EXP_SLTSTA_PFD) && !ctrl->power_fault_detected) {
 		ctrl->power_fault_detected = 1;
-		ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(slot));
-		pciehp_set_attention_status(slot, 1);
-		pciehp_green_led_off(slot);
+		ctrl_err(ctrl, "Slot(%s): Power fault\n", slot_name(ctrl));
+		pciehp_set_attention_status(ctrl, 1);
+		pciehp_green_led_off(ctrl);
 	}

 	/*
···
 	 */
 	down_read(&ctrl->reset_lock);
 	if (events & DISABLE_SLOT)
-		pciehp_handle_disable_request(slot);
+		pciehp_handle_disable_request(ctrl);
 	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
-		pciehp_handle_presence_or_link_change(slot, events);
+		pciehp_handle_presence_or_link_change(ctrl, events);
 	up_read(&ctrl->reset_lock);

 	pci_config_pm_runtime_put(pdev);
···
 			      PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC);
 }

+void pcie_enable_interrupt(struct controller *ctrl)
+{
+	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_HPIE, PCI_EXP_SLTCTL_HPIE);
+}
+
+void pcie_disable_interrupt(struct controller *ctrl)
+{
+	pcie_write_cmd(ctrl, 0, PCI_EXP_SLTCTL_HPIE);
+}
+
 /*
  * pciehp has a 1:1 bus:slot relationship so we ultimately want a secondary
  * bus reset of the bridge, but at the same time we want to ensure that it is
···
  * momentarily, if we see that they could interfere. Also, clear any spurious
  * events after.
  */
-int pciehp_reset_slot(struct slot *slot, int probe)
+int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe)
 {
-	struct controller *ctrl = slot->ctrl;
+	struct controller *ctrl = to_ctrl(hotplug_slot);
 	struct pci_dev *pdev = ctrl_dev(ctrl);
 	u16 stat_mask = 0, ctrl_mask = 0;
 	int rc;
···
 	}
 }

-static int pcie_init_slot(struct controller *ctrl)
-{
-	struct pci_bus *subordinate = ctrl_dev(ctrl)->subordinate;
-	struct slot *slot;
-
-	slot = kzalloc(sizeof(*slot), GFP_KERNEL);
-	if (!slot)
-		return -ENOMEM;
-
-	down_read(&pci_bus_sem);
-	slot->state = list_empty(&subordinate->devices) ? OFF_STATE : ON_STATE;
-	up_read(&pci_bus_sem);
-
-	slot->ctrl = ctrl;
-	mutex_init(&slot->lock);
-	INIT_DELAYED_WORK(&slot->work, pciehp_queue_pushbutton_work);
-	ctrl->slot = slot;
-	return 0;
-}
-
-static void pcie_cleanup_slot(struct controller *ctrl)
-{
-	struct slot *slot = ctrl->slot;
-
-	cancel_delayed_work_sync(&slot->work);
-	kfree(slot);
-}
-
 static inline void dbg_ctrl(struct controller *ctrl)
 {
 	struct pci_dev *pdev = ctrl->pcie->port;
···
 {
 	struct controller *ctrl;
 	u32 slot_cap, link_cap;
-	u8 occupied, poweron;
+	u8 poweron;
 	struct pci_dev *pdev = dev->port;
+	struct pci_bus *subordinate = pdev->subordinate;

 	ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
 	if (!ctrl)
-		goto abort;
+		return NULL;

 	ctrl->pcie = dev;
 	pcie_capability_read_dword(pdev, PCI_EXP_SLTCAP, &slot_cap);
···
 	ctrl->slot_cap = slot_cap;
 	mutex_init(&ctrl->ctrl_lock);
+	mutex_init(&ctrl->state_lock);
 	init_rwsem(&ctrl->reset_lock);
 	init_waitqueue_head(&ctrl->requester);
 	init_waitqueue_head(&ctrl->queue);
+	INIT_DELAYED_WORK(&ctrl->button_work, pciehp_queue_pushbutton_work);
 	dbg_ctrl(ctrl);
+
+	down_read(&pci_bus_sem);
+	ctrl->state = list_empty(&subordinate->devices) ? OFF_STATE : ON_STATE;
+	up_read(&pci_bus_sem);

 	/* Check if Data Link Layer Link Active Reporting is implemented */
 	pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &link_cap);
-	if (link_cap & PCI_EXP_LNKCAP_DLLLARC)
-		ctrl->link_active_reporting = 1;

 	/* Clear all remaining event bits in Slot Status register. */
 	pcie_capability_write_word(pdev, PCI_EXP_SLTSTA,
···
 		FLAG(link_cap, PCI_EXP_LNKCAP_DLLLARC),
 		pdev->broken_cmd_compl ? " (with Cmd Compl erratum)" : "");

-	if (pcie_init_slot(ctrl))
-		goto abort_ctrl;
-
 	/*
 	 * If empty slot's power status is on, turn power off.  The IRQ isn't
 	 * requested yet, so avoid triggering a notification with this command.
 	 */
 	if (POWER_CTRL(ctrl)) {
-		pciehp_get_adapter_status(ctrl->slot, &occupied);
-		pciehp_get_power_status(ctrl->slot, &poweron);
-		if (!occupied && poweron) {
+		pciehp_get_power_status(ctrl, &poweron);
+		if (!pciehp_card_present_or_link_active(ctrl) && poweron) {
 			pcie_disable_notification(ctrl);
-			pciehp_power_off_slot(ctrl->slot);
+			pciehp_power_off_slot(ctrl);
 		}
 	}

 	return ctrl;
-
-abort_ctrl:
-	kfree(ctrl);
-abort:
-	return NULL;
 }

 void pciehp_release_ctrl(struct controller *ctrl)
 {
-	pcie_cleanup_slot(ctrl);
+	cancel_delayed_work_sync(&ctrl->button_work);
 	kfree(ctrl);
 }
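Editor's note: the new pcie_enable_interrupt()/pcie_disable_interrupt() helpers above both funnel through pcie_write_cmd(ctrl, cmd, mask), a read-modify-write of the Slot Control register. A standalone sketch of that masked update; the HPIE bit value is from the PCIe spec, the starting register value is made up.

	#include <stdint.h>
	#include <stdio.h>

	#define PCI_EXP_SLTCTL_HPIE 0x0020	/* Hot-Plug Interrupt Enable */

	/* Only the bits selected by @mask change; the rest are preserved. */
	static uint16_t slot_ctrl_update(uint16_t reg, uint16_t cmd, uint16_t mask)
	{
		return (reg & ~mask) | (cmd & mask);
	}

	int main(void)
	{
		uint16_t sltctl = 0x1011;	/* arbitrary starting value */

		/* pcie_enable_interrupt(): set HPIE */
		sltctl = slot_ctrl_update(sltctl, PCI_EXP_SLTCTL_HPIE,
					  PCI_EXP_SLTCTL_HPIE);	/* 0x1031 */
		/* pcie_disable_interrupt(): clear HPIE, leave other bits alone */
		sltctl = slot_ctrl_update(sltctl, 0, PCI_EXP_SLTCTL_HPIE);

		printf("SLTCTL = %#x\n", sltctl);	/* back to 0x1011 */
		return 0;
	}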
drivers/pci/hotplug/pciehp_pci.c (+26, -15)
···
  *
  */

-#include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/pci.h>
 #include "../pci.h"
 #include "pciehp.h"

-int pciehp_configure_device(struct slot *p_slot)
+/**
+ * pciehp_configure_device() - enumerate PCI devices below a hotplug bridge
+ * @ctrl: PCIe hotplug controller
+ *
+ * Enumerate PCI devices below a hotplug bridge and add them to the system.
+ * Return 0 on success, %-EEXIST if the devices are already enumerated or
+ * %-ENODEV if enumeration failed.
+ */
+int pciehp_configure_device(struct controller *ctrl)
 {
 	struct pci_dev *dev;
-	struct pci_dev *bridge = p_slot->ctrl->pcie->port;
+	struct pci_dev *bridge = ctrl->pcie->port;
 	struct pci_bus *parent = bridge->subordinate;
 	int num, ret = 0;
-	struct controller *ctrl = p_slot->ctrl;

 	pci_lock_rescan_remove();

···
 	return ret;
 }

-void pciehp_unconfigure_device(struct slot *p_slot)
+/**
+ * pciehp_unconfigure_device() - remove PCI devices below a hotplug bridge
+ * @ctrl: PCIe hotplug controller
+ * @presence: whether the card is still present in the slot;
+ *	true for safe removal via sysfs or an Attention Button press,
+ *	false for surprise removal
+ *
+ * Unbind PCI devices below a hotplug bridge from their drivers and remove
+ * them from the system.  Safely removed devices are quiesced.  Surprise
+ * removed devices are marked as such to prevent further accesses.
+ */
+void pciehp_unconfigure_device(struct controller *ctrl, bool presence)
 {
-	u8 presence = 0;
 	struct pci_dev *dev, *temp;
-	struct pci_bus *parent = p_slot->ctrl->pcie->port->subordinate;
+	struct pci_bus *parent = ctrl->pcie->port->subordinate;
 	u16 command;
-	struct controller *ctrl = p_slot->ctrl;

 	ctrl_dbg(ctrl, "%s: domain:bus:dev = %04x:%02x:00\n",
 		 __func__, pci_domain_nr(parent), parent->number);
-	pciehp_get_adapter_status(p_slot, &presence);
+
+	if (!presence)
+		pci_walk_bus(parent, pci_dev_set_disconnected, NULL);

 	pci_lock_rescan_remove();

···
 	list_for_each_entry_safe_reverse(dev, temp, &parent->devices,
 					 bus_list) {
 		pci_dev_get(dev);
-		if (!presence) {
-			pci_dev_set_disconnected(dev, NULL);
-			if (pci_has_subordinate(dev))
-				pci_walk_bus(dev->subordinate,
-					     pci_dev_set_disconnected, NULL);
-		}
 		pci_stop_and_remove_bus_device(dev);
 		/*
 		 * Ensure that no new Requests will be generated from
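Editor's note: the hunk above replaces a per-device loop with a single pci_walk_bus() over the whole subtree. pci_walk_bus() applies an int (*)(struct pci_dev *, void *) callback to every device below a bus and stops early if the callback returns nonzero. A hedged kernel-style sketch with a hypothetical callback (counting devices rather than marking them disconnected); it illustrates the API shape, not code from this patch.

	#include <linux/pci.h>

	/* Hypothetical example: count the devices below a bridge. */
	static int count_one(struct pci_dev *pdev, void *data)
	{
		(*(int *)data)++;
		return 0;		/* nonzero would stop the walk early */
	}

	static int count_devices_below(struct pci_dev *bridge)
	{
		int count = 0;

		if (bridge->subordinate)
			pci_walk_bus(bridge->subordinate, count_one, &count);
		return count;
	}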
drivers/pci/hotplug/pnv_php.c (+24, -14)
···
 		goto free_fdt1;
 	}

-	fdt = kzalloc(fdt_totalsize(fdt1), GFP_KERNEL);
+	fdt = kmemdup(fdt1, fdt_totalsize(fdt1), GFP_KERNEL);
 	if (!fdt) {
 		ret = -ENOMEM;
 		goto free_fdt1;
 	}

 	/* Unflatten device tree blob */
-	memcpy(fdt, fdt1, fdt_totalsize(fdt1));
 	dt = of_fdt_unflatten_tree(fdt, php_slot->dn, NULL);
 	if (!dt) {
 		ret = -EINVAL;
···
 	return ret;
 }

+static inline struct pnv_php_slot *to_pnv_php_slot(struct hotplug_slot *slot)
+{
+	return container_of(slot, struct pnv_php_slot, slot);
+}
+
 int pnv_php_set_slot_power_state(struct hotplug_slot *slot,
 				 uint8_t state)
 {
-	struct pnv_php_slot *php_slot = slot->private;
+	struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
 	struct opal_msg msg;
 	int ret;

···
 static int pnv_php_get_power_state(struct hotplug_slot *slot, u8 *state)
 {
-	struct pnv_php_slot *php_slot = slot->private;
+	struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
 	uint8_t power_state = OPAL_PCI_SLOT_POWER_ON;
 	int ret;

···
 			ret);
 	} else {
 		*state = power_state;
-		slot->info->power_status = power_state;
 	}

 	return 0;
···
 static int pnv_php_get_adapter_state(struct hotplug_slot *slot, u8 *state)
 {
-	struct pnv_php_slot *php_slot = slot->private;
+	struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
 	uint8_t presence = OPAL_PCI_SLOT_EMPTY;
 	int ret;

···
 	ret = pnv_pci_get_presence_state(php_slot->id, &presence);
 	if (ret >= 0) {
 		*state = presence;
-		slot->info->adapter_status = presence;
 		ret = 0;
 	} else {
 		pci_warn(php_slot->pdev, "Error %d getting presence\n", ret);
···
 	return ret;
 }

+static int pnv_php_get_attention_state(struct hotplug_slot *slot, u8 *state)
+{
+	struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
+
+	*state = php_slot->attention_state;
+	return 0;
+}
+
 static int pnv_php_set_attention_state(struct hotplug_slot *slot, u8 state)
 {
+	struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
+
 	/* FIXME: Make it real once firmware supports it */
-	slot->info->attention_status = state;
+	php_slot->attention_state = state;

 	return 0;
 }
···
 static int pnv_php_enable_slot(struct hotplug_slot *slot)
 {
-	struct pnv_php_slot *php_slot = container_of(slot,
-						     struct pnv_php_slot, slot);
+	struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);

 	return pnv_php_enable(php_slot, true);
 }

 static int pnv_php_disable_slot(struct hotplug_slot *slot)
 {
-	struct pnv_php_slot *php_slot = slot->private;
+	struct pnv_php_slot *php_slot = to_pnv_php_slot(slot);
 	int ret;

 	if (php_slot->state != PNV_PHP_STATE_POPULATED)
···
 	return ret;
 }

-static struct hotplug_slot_ops php_slot_ops = {
+static const struct hotplug_slot_ops php_slot_ops = {
 	.get_power_status	= pnv_php_get_power_state,
 	.get_adapter_status	= pnv_php_get_adapter_state,
+	.get_attention_status	= pnv_php_get_attention_state,
 	.set_attention_status	= pnv_php_set_attention_state,
 	.enable_slot		= pnv_php_enable_slot,
 	.disable_slot		= pnv_php_disable_slot,
···
 	php_slot->id = id;
 	php_slot->power_state_check = false;
 	php_slot->slot.ops = &php_slot_ops;
-	php_slot->slot.info = &php_slot->slot_info;
-	php_slot->slot.private = php_slot;

 	INIT_LIST_HEAD(&php_slot->children);
 	INIT_LIST_HEAD(&php_slot->link);
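Editor's note: the first pnv_php.c hunk folds a kzalloc() + memcpy() pair into kmemdup(), which allocates and copies in one step. A side-by-side kernel-style sketch of the two shapes, with hypothetical helper names:

	#include <linux/slab.h>
	#include <linux/string.h>

	/* Before: zero the buffer, then immediately overwrite all of it. */
	static void *dup_blob_old(const void *src, size_t len)
	{
		void *dst = kzalloc(len, GFP_KERNEL);

		if (dst)
			memcpy(dst, src, len);
		return dst;
	}

	/* After: kmemdup() allocates and copies in one call. */
	static void *dup_blob_new(const void *src, size_t len)
	{
		return kmemdup(src, len, GFP_KERNEL);
	}

Besides being shorter, the new form avoids the pointless zeroing that kzalloc() performs on memory that is fully overwritten right afterwards.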
drivers/pci/hotplug/rpaphp.h (+8, -2)
···
 	u32				index;
 	u32				type;
 	u32				power_domain;
+	u8				attention_status;
 	char				*name;
 	struct device_node		*dn;
 	struct pci_bus			*bus;
 	struct list_head		*pci_devs;
-	struct hotplug_slot		*hotplug_slot;
+	struct hotplug_slot		hotplug_slot;
 };

-extern struct hotplug_slot_ops rpaphp_hotplug_slot_ops;
+extern const struct hotplug_slot_ops rpaphp_hotplug_slot_ops;
 extern struct list_head rpaphp_slot_head;
+
+static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
+{
+	return container_of(hotplug_slot, struct slot, hotplug_slot);
+}

 /* function prototypes */
drivers/pci/hotplug/rpaphp_core.c (+10, -10)
···
 static int set_attention_status(struct hotplug_slot *hotplug_slot, u8 value)
 {
 	int rc;
-	struct slot *slot = (struct slot *)hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);

 	switch (value) {
 	case 0:
···

 	rc = rtas_set_indicator(DR_INDICATOR, slot->index, value);
 	if (!rc)
-		hotplug_slot->info->attention_status = value;
+		slot->attention_status = value;

 	return rc;
 }
···
 static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
 	int retval, level;
-	struct slot *slot = (struct slot *)hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);

 	retval = rtas_get_power_level(slot->power_domain, &level);
 	if (!retval)
···
  */
 static int get_attention_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	struct slot *slot = (struct slot *)hotplug_slot->private;
-	*value = slot->hotplug_slot->info->attention_status;
+	struct slot *slot = to_slot(hotplug_slot);
+	*value = slot->attention_status;
 	return 0;
 }

 static int get_adapter_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	struct slot *slot = (struct slot *)hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	int rc, state;

 	rc = rpaphp_get_sensor_state(slot, &state);
···
 	list_for_each_entry_safe(slot, next, &rpaphp_slot_head,
 				 rpaphp_slot_list) {
 		list_del(&slot->rpaphp_slot_list);
-		pci_hp_deregister(slot->hotplug_slot);
+		pci_hp_deregister(&slot->hotplug_slot);
 		dealloc_slot_struct(slot);
 	}
 	return;
···
 static int enable_slot(struct hotplug_slot *hotplug_slot)
 {
-	struct slot *slot = (struct slot *)hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	int state;
 	int retval;

···
 static int disable_slot(struct hotplug_slot *hotplug_slot)
 {
-	struct slot *slot = (struct slot *)hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	if (slot->state == NOT_CONFIGURED)
 		return -EINVAL;

···
 	return 0;
 }

-struct hotplug_slot_ops rpaphp_hotplug_slot_ops = {
+const struct hotplug_slot_ops rpaphp_hotplug_slot_ops = {
 	.enable_slot = enable_slot,
 	.disable_slot = disable_slot,
 	.set_attention_status = set_attention_status,
drivers/pci/hotplug/rpaphp_pci.c (+2, -9)
···
  * rpaphp_enable_slot - record slot state, config pci device
  * @slot: target &slot
  *
- * Initialize values in the slot, and the hotplug_slot info
- * structures to indicate if there is a pci card plugged into
- * the slot. If the slot is not empty, run the pcibios routine
+ * Initialize values in the slot structure to indicate if there is a pci card
+ * plugged into the slot. If the slot is not empty, run the pcibios routine
  * to get pcibios stuff correctly set up.
  */
 int rpaphp_enable_slot(struct slot *slot)
 {
 	int rc, level, state;
 	struct pci_bus *bus;
-	struct hotplug_slot_info *info = slot->hotplug_slot->info;

-	info->adapter_status = NOT_VALID;
 	slot->state = EMPTY;

 	/* Find out if the power is turned on for the slot */
 	rc = rtas_get_power_level(slot->power_domain, &level);
 	if (rc)
 		return rc;
-	info->power_status = level;

 	/* Figure out if there is an adapter in the slot */
 	rc = rpaphp_get_sensor_state(slot, &state);
···
 		return -EINVAL;
 	}

-	info->adapter_status = EMPTY;
 	slot->bus = bus;
 	slot->pci_devs = &bus->devices;

 	/* if there's an adapter in the slot, go add the pci devices */
 	if (state == PRESENT) {
-		info->adapter_status = NOT_CONFIGURED;
 		slot->state = NOT_CONFIGURED;

 		/* non-empty slot has to have child */
···
 			pci_hp_add_devices(bus);

 		if (!list_empty(&bus->devices)) {
-			info->adapter_status = CONFIGURED;
 			slot->state = CONFIGURED;
 		}
drivers/pci/hotplug/rpaphp_slot.c (+4, -18)
···
 /* free up the memory used by a slot */
 void dealloc_slot_struct(struct slot *slot)
 {
-	kfree(slot->hotplug_slot->info);
 	kfree(slot->name);
-	kfree(slot->hotplug_slot);
 	kfree(slot);
 }

···
 	slot = kzalloc(sizeof(struct slot), GFP_KERNEL);
 	if (!slot)
 		goto error_nomem;
-	slot->hotplug_slot = kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL);
-	if (!slot->hotplug_slot)
-		goto error_slot;
-	slot->hotplug_slot->info = kzalloc(sizeof(struct hotplug_slot_info),
-					   GFP_KERNEL);
-	if (!slot->hotplug_slot->info)
-		goto error_hpslot;
 	slot->name = kstrdup(drc_name, GFP_KERNEL);
 	if (!slot->name)
-		goto error_info;
+		goto error_slot;
 	slot->dn = dn;
 	slot->index = drc_index;
 	slot->power_domain = power_domain;
-	slot->hotplug_slot->private = slot;
-	slot->hotplug_slot->ops = &rpaphp_hotplug_slot_ops;
+	slot->hotplug_slot.ops = &rpaphp_hotplug_slot_ops;

 	return (slot);

-error_info:
-	kfree(slot->hotplug_slot->info);
-error_hpslot:
-	kfree(slot->hotplug_slot);
 error_slot:
 	kfree(slot);
 error_nomem:
···
 int rpaphp_deregister_slot(struct slot *slot)
 {
 	int retval = 0;
-	struct hotplug_slot *php_slot = slot->hotplug_slot;
+	struct hotplug_slot *php_slot = &slot->hotplug_slot;

 	dbg("%s - Entry: deregistering slot=%s\n",
 		__func__, slot->name);
···
 int rpaphp_register_slot(struct slot *slot)
 {
-	struct hotplug_slot *php_slot = slot->hotplug_slot;
+	struct hotplug_slot *php_slot = &slot->hotplug_slot;
 	struct device_node *child;
 	u32 my_index;
 	int retval;
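Editor's note: this hunk shows the pattern driving most of the series in its purest form. With struct hotplug_slot embedded in the driver's slot struct, the nested allocations and their unwind labels collapse into a single allocation and a single free. A userspace C sketch with hypothetical fields:

	#include <stdlib.h>

	struct hotplug_slot { const void *ops; };	/* hypothetical stand-in */

	/* After the conversion: one object, one allocation, one free(). */
	struct slot {
		char *name;
		struct hotplug_slot hotplug_slot;	/* embedded */
	};

	static struct slot *alloc_slot(void)
	{
		/* No separate hotplug_slot/info allocations can fail here. */
		return calloc(1, sizeof(struct slot));
	}

	static void dealloc_slot(struct slot *slot)
	{
		free(slot->name);	/* free(NULL) is a safe no-op */
		free(slot);		/* hotplug_slot goes away with it */
	}

	int main(void)
	{
		struct slot *s = alloc_slot();

		if (s)
			dealloc_slot(s);
		return 0;
	}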
drivers/pci/hotplug/s390_pci_hpc.c (+13, -31)
···
  */
 struct slot {
 	struct list_head slot_list;
-	struct hotplug_slot *hotplug_slot;
+	struct hotplug_slot hotplug_slot;
 	struct zpci_dev *zdev;
 };
+
+static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
+{
+	return container_of(hotplug_slot, struct slot, hotplug_slot);
+}

 static inline int slot_configure(struct slot *slot)
 {
···
 static int enable_slot(struct hotplug_slot *hotplug_slot)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	int rc;

 	if (slot->zdev->state != ZPCI_FN_STATE_STANDBY)
···
 static int disable_slot(struct hotplug_slot *hotplug_slot)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);
 	struct pci_dev *pdev;
 	int rc;

···
 static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
 {
-	struct slot *slot = hotplug_slot->private;
+	struct slot *slot = to_slot(hotplug_slot);

 	switch (slot->zdev->state) {
 	case ZPCI_FN_STATE_STANDBY:
···
 	return 0;
 }

-static struct hotplug_slot_ops s390_hotplug_slot_ops = {
+static const struct hotplug_slot_ops s390_hotplug_slot_ops = {
 	.enable_slot = enable_slot,
 	.disable_slot = disable_slot,
 	.get_power_status = get_power_status,
···
 int zpci_init_slot(struct zpci_dev *zdev)
 {
-	struct hotplug_slot *hotplug_slot;
-	struct hotplug_slot_info *info;
 	char name[SLOT_NAME_SIZE];
 	struct slot *slot;
 	int rc;
···
 	if (!slot)
 		goto error;

-	hotplug_slot = kzalloc(sizeof(*hotplug_slot), GFP_KERNEL);
-	if (!hotplug_slot)
-		goto error_hp;
-	hotplug_slot->private = slot;
-
-	slot->hotplug_slot = hotplug_slot;
 	slot->zdev = zdev;
-
-	info = kzalloc(sizeof(*info), GFP_KERNEL);
-	if (!info)
-		goto error_info;
-	hotplug_slot->info = info;
-
-	hotplug_slot->ops = &s390_hotplug_slot_ops;
-
-	get_power_status(hotplug_slot, &info->power_status);
-	get_adapter_status(hotplug_slot, &info->adapter_status);
+	slot->hotplug_slot.ops = &s390_hotplug_slot_ops;

 	snprintf(name, SLOT_NAME_SIZE, "%08x", zdev->fid);
-	rc = pci_hp_register(slot->hotplug_slot, zdev->bus,
+	rc = pci_hp_register(&slot->hotplug_slot, zdev->bus,
 			     ZPCI_DEVFN, name);
 	if (rc)
 		goto error_reg;
···
 	return 0;

 error_reg:
-	kfree(info);
-error_info:
-	kfree(hotplug_slot);
-error_hp:
 	kfree(slot);
 error:
 	return -ENOMEM;
···
 		if (slot->zdev != zdev)
 			continue;
 		list_del(&slot->slot_list);
-		pci_hp_deregister(slot->hotplug_slot);
-		kfree(slot->hotplug_slot->info);
-		kfree(slot->hotplug_slot);
+		pci_hp_deregister(&slot->hotplug_slot);
 		kfree(slot);
 	}
 }
drivers/pci/hotplug/sgi_hotplug.c (+23, -40)
···
 	int device_num;
 	struct pci_bus *pci_bus;
 	/* this struct for glue internal only */
-	struct hotplug_slot *hotplug_slot;
+	struct hotplug_slot hotplug_slot;
 	struct list_head hp_list;
 	char physical_path[SN_SLOT_NAME_SIZE];
 };
···
 static int disable_slot(struct hotplug_slot *slot);
 static inline int get_power_status(struct hotplug_slot *slot, u8 *value);

-static struct hotplug_slot_ops sn_hotplug_slot_ops = {
+static const struct hotplug_slot_ops sn_hotplug_slot_ops = {
 	.enable_slot            = enable_slot,
 	.disable_slot           = disable_slot,
 	.get_power_status       = get_power_status,
···

 static DEFINE_MUTEX(sn_hotplug_mutex);

+static struct slot *to_slot(struct hotplug_slot *bss_hotplug_slot)
+{
+	return container_of(bss_hotplug_slot, struct slot, hotplug_slot);
+}
+
 static ssize_t path_show(struct pci_slot *pci_slot, char *buf)
 {
 	int retval = -ENOENT;
-	struct slot *slot = pci_slot->hotplug->private;
+	struct slot *slot = to_slot(pci_slot->hotplug);

 	if (!slot)
 		return retval;
···
 	return -EIO;
 }

-static int sn_hp_slot_private_alloc(struct hotplug_slot *bss_hotplug_slot,
+static int sn_hp_slot_private_alloc(struct hotplug_slot **bss_hotplug_slot,
 				    struct pci_bus *pci_bus, int device,
 				    char *name)
 {
···
 	slot = kzalloc(sizeof(*slot), GFP_KERNEL);
 	if (!slot)
 		return -ENOMEM;
-	bss_hotplug_slot->private = slot;

 	slot->device_num = device;
 	slot->pci_bus = pci_bus;
···

 	sn_generate_path(pci_bus, slot->physical_path);

-	slot->hotplug_slot = bss_hotplug_slot;
 	list_add(&slot->hp_list, &sn_hp_list);
+	*bss_hotplug_slot = &slot->hotplug_slot;

 	return 0;
 }
···
 	struct hotplug_slot *bss_hotplug_slot = NULL;

 	list_for_each_entry(slot, &sn_hp_list, hp_list) {
-		bss_hotplug_slot = slot->hotplug_slot;
+		bss_hotplug_slot = &slot->hotplug_slot;
 		pci_slot = bss_hotplug_slot->pci_slot;
-		list_del(&((struct slot *)bss_hotplug_slot->private)->
-			 hp_list);
+		list_del(&slot->hp_list);
 		sysfs_remove_file(&pci_slot->kobj,
 				  &sn_slot_path_attr.attr);
 		break;
···
 static int sn_slot_enable(struct hotplug_slot *bss_hotplug_slot,
 			  int device_num, char **ssdt)
 {
-	struct slot *slot = bss_hotplug_slot->private;
+	struct slot *slot = to_slot(bss_hotplug_slot);
 	struct pcibus_info *pcibus_info;
 	struct pcibr_slot_enable_resp resp;
 	int rc;
···
 static int sn_slot_disable(struct hotplug_slot *bss_hotplug_slot,
 			   int device_num, int action)
 {
-	struct slot *slot = bss_hotplug_slot->private;
+	struct slot *slot = to_slot(bss_hotplug_slot);
 	struct pcibus_info *pcibus_info;
 	struct pcibr_slot_disable_resp resp;
 	int rc;
···
  */
 static int enable_slot(struct hotplug_slot *bss_hotplug_slot)
 {
-	struct slot *slot = bss_hotplug_slot->private;
+	struct slot *slot = to_slot(bss_hotplug_slot);
 	struct pci_bus *new_bus = NULL;
 	struct pci_dev *dev;
 	int num_funcs;
···
 static int disable_slot(struct hotplug_slot *bss_hotplug_slot)
 {
-	struct slot *slot = bss_hotplug_slot->private;
+	struct slot *slot = to_slot(bss_hotplug_slot);
 	struct pci_dev *dev, *temp;
 	int rc;
 	acpi_handle ssdt_hdl = NULL;
···
 static inline int get_power_status(struct hotplug_slot *bss_hotplug_slot,
 				   u8 *value)
 {
-	struct slot *slot = bss_hotplug_slot->private;
+	struct slot *slot = to_slot(bss_hotplug_slot);
 	struct pcibus_info *pcibus_info;
 	u32 power;

···
 static void sn_release_slot(struct hotplug_slot *bss_hotplug_slot)
 {
-	kfree(bss_hotplug_slot->info);
-	kfree(bss_hotplug_slot->private);
-	kfree(bss_hotplug_slot);
+	kfree(to_slot(bss_hotplug_slot));
 }

 static int sn_hotplug_slot_register(struct pci_bus *pci_bus)
···
 		if (sn_pci_slot_valid(pci_bus, device) != 1)
 			continue;

-		bss_hotplug_slot = kzalloc(sizeof(*bss_hotplug_slot),
-					   GFP_KERNEL);
-		if (!bss_hotplug_slot) {
-			rc = -ENOMEM;
-			goto alloc_err;
-		}
-
-		bss_hotplug_slot->info =
-			kzalloc(sizeof(struct hotplug_slot_info),
-				GFP_KERNEL);
-		if (!bss_hotplug_slot->info) {
-			rc = -ENOMEM;
-			goto alloc_err;
-		}
-
-		if (sn_hp_slot_private_alloc(bss_hotplug_slot,
+		if (sn_hp_slot_private_alloc(&bss_hotplug_slot,
 					     pci_bus, device, name)) {
 			rc = -ENOMEM;
 			goto alloc_err;
···
 		rc = sysfs_create_file(&pci_slot->kobj,
 				       &sn_slot_path_attr.attr);
 		if (rc)
-			goto register_err;
+			goto alloc_err;
 	}
 	pci_dbg(pci_bus->self, "Registered bus with hotplug\n");
 	return rc;
···
 	pci_dbg(pci_bus->self, "bus failed to register with err = %d\n",
 		rc);

-alloc_err:
-	if (rc == -ENOMEM)
-		pci_dbg(pci_bus->self, "Memory allocation error\n");
-
 	/* destroy THIS element */
-	if (bss_hotplug_slot)
-		sn_release_slot(bss_hotplug_slot);
+	sn_hp_destroy();
+	sn_release_slot(bss_hotplug_slot);

+alloc_err:
 	/* destroy anything else on the list */
 	while ((bss_hotplug_slot = sn_hp_destroy())) {
 		pci_hp_deregister(bss_hotplug_slot);
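Editor's note: sn_hp_slot_private_alloc() now takes a struct hotplug_slot ** and hands the embedded slot back through it, so the caller never touches struct slot directly. A small userspace sketch of that out-parameter shape, with hypothetical names:

	#include <stdlib.h>

	struct hotplug_slot { const void *ops; };	/* hypothetical stand-in */

	struct slot {
		int device_num;
		struct hotplug_slot hotplug_slot;	/* embedded */
	};

	/* The function owns the allocation; the caller only sees the member. */
	static int slot_alloc(struct hotplug_slot **out, int device)
	{
		struct slot *slot = calloc(1, sizeof(*slot));

		if (!slot)
			return -1;
		slot->device_num = device;
		*out = &slot->hotplug_slot;
		return 0;
	}

	int main(void)
	{
		struct hotplug_slot *hs = NULL;

		return slot_alloc(&hs, 3);
	}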
drivers/pci/hotplug/shpchp.h (+5, -3)
···
 	u32 number;
 	u8 is_a_board;
 	u8 state;
+	u8 attention_save;
 	u8 presence_save;
+	u8 latch_save;
 	u8 pwr_save;
 	struct controller *ctrl;
 	const struct hpc_ops *hpc_ops;
-	struct hotplug_slot *hotplug_slot;
+	struct hotplug_slot hotplug_slot;
 	struct list_head	slot_list;
 	struct delayed_work work;	/* work for button event */
 	struct mutex lock;
···

 static inline const char *slot_name(struct slot *slot)
 {
-	return hotplug_slot_name(slot->hotplug_slot);
+	return hotplug_slot_name(&slot->hotplug_slot);
 }

 struct ctrl_reg {
···

 static inline struct slot *get_slot(struct hotplug_slot *hotplug_slot)
 {
-	return hotplug_slot->private;
+	return container_of(hotplug_slot, struct slot, hotplug_slot);
 }

 static inline struct slot *shpchp_find_slot(struct controller *ctrl, u8 device)
drivers/pci/hotplug/shpchp_core.c (+14, -34)
···
 static int get_latch_status(struct hotplug_slot *slot, u8 *value);
 static int get_adapter_status(struct hotplug_slot *slot, u8 *value);

-static struct hotplug_slot_ops shpchp_hotplug_slot_ops = {
+static const struct hotplug_slot_ops shpchp_hotplug_slot_ops = {
 	.set_attention_status =	set_attention_status,
 	.enable_slot =		enable_slot,
 	.disable_slot =		disable_slot,
···
 {
 	struct slot *slot;
 	struct hotplug_slot *hotplug_slot;
-	struct hotplug_slot_info *info;
 	char name[SLOT_NAME_SIZE];
 	int retval;
 	int i;
···
 			goto error;
 		}

-		hotplug_slot = kzalloc(sizeof(*hotplug_slot), GFP_KERNEL);
-		if (!hotplug_slot) {
-			retval = -ENOMEM;
-			goto error_slot;
-		}
-		slot->hotplug_slot = hotplug_slot;
-
-		info = kzalloc(sizeof(*info), GFP_KERNEL);
-		if (!info) {
-			retval = -ENOMEM;
-			goto error_hpslot;
-		}
-		hotplug_slot->info = info;
+		hotplug_slot = &slot->hotplug_slot;

 		slot->hp_slot = i;
 		slot->ctrl = ctrl;
···
 		slot->wq = alloc_workqueue("shpchp-%d", 0, 0, slot->number);
 		if (!slot->wq) {
 			retval = -ENOMEM;
-			goto error_info;
+			goto error_slot;
 		}

 		mutex_init(&slot->lock);
 		INIT_DELAYED_WORK(&slot->work, shpchp_queue_pushbutton_work);

 		/* register this slot with the hotplug pci core */
-		hotplug_slot->private = slot;
 		snprintf(name, SLOT_NAME_SIZE, "%d", slot->number);
 		hotplug_slot->ops = &shpchp_hotplug_slot_ops;

···
 			 pci_domain_nr(ctrl->pci_dev->subordinate),
 			 slot->bus, slot->device, slot->hp_slot, slot->number,
 			 ctrl->slot_device_offset);
-		retval = pci_hp_register(slot->hotplug_slot,
+		retval = pci_hp_register(hotplug_slot,
 				ctrl->pci_dev->subordinate, slot->device, name);
 		if (retval) {
 			ctrl_err(ctrl, "pci_hp_register failed with error %d\n",
···
 			goto error_slotwq;
 		}

-		get_power_status(hotplug_slot, &info->power_status);
-		get_attention_status(hotplug_slot, &info->attention_status);
-		get_latch_status(hotplug_slot, &info->latch_status);
-		get_adapter_status(hotplug_slot, &info->adapter_status);
+		get_power_status(hotplug_slot, &slot->pwr_save);
+		get_attention_status(hotplug_slot, &slot->attention_save);
+		get_latch_status(hotplug_slot, &slot->latch_save);
+		get_adapter_status(hotplug_slot, &slot->presence_save);

 		list_add(&slot->slot_list, &ctrl->slot_list);
 	}
···
 	return 0;
 error_slotwq:
 	destroy_workqueue(slot->wq);
-error_info:
-	kfree(info);
-error_hpslot:
-	kfree(hotplug_slot);
 error_slot:
 	kfree(slot);
 error:
···
 		list_del(&slot->slot_list);
 		cancel_delayed_work(&slot->work);
 		destroy_workqueue(slot->wq);
-		pci_hp_deregister(slot->hotplug_slot);
-		kfree(slot->hotplug_slot->info);
-		kfree(slot->hotplug_slot);
+		pci_hp_deregister(&slot->hotplug_slot);
 		kfree(slot);
 	}
 }
···
 	ctrl_dbg(slot->ctrl, "%s: physical_slot = %s\n",
 		 __func__, slot_name(slot));

-	hotplug_slot->info->attention_status = status;
+	slot->attention_save = status;
 	slot->hpc_ops->set_attention_status(slot, status);

 	return 0;
···

 	retval = slot->hpc_ops->get_power_status(slot, value);
 	if (retval < 0)
-		*value = hotplug_slot->info->power_status;
+		*value = slot->pwr_save;
= slot->pwr_save; 190 210 191 211 return 0; 192 212 } ··· 201 221 202 222 retval = slot->hpc_ops->get_attention_status(slot, value); 203 223 if (retval < 0) 204 - *value = hotplug_slot->info->attention_status; 224 + *value = slot->attention_save; 205 225 206 226 return 0; 207 227 } ··· 216 236 217 237 retval = slot->hpc_ops->get_latch_status(slot, value); 218 238 if (retval < 0) 219 - *value = hotplug_slot->info->latch_status; 239 + *value = slot->latch_save; 220 240 221 241 return 0; 222 242 } ··· 231 251 232 252 retval = slot->hpc_ops->get_adapter_status(slot, value); 233 253 if (retval < 0) 234 - *value = hotplug_slot->info->adapter_status; 254 + *value = slot->presence_save; 235 255 236 256 return 0; 237 257 }
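The shpchp hunks above follow the series-wide pattern visible in the diff: struct hotplug_slot is embedded in the driver's own slot structure instead of being allocated (together with a hotplug_slot_info) on the side, and the cached status values move into struct slot itself. Callbacks then recover the driver data with container_of() rather than through the removed ->private pointer. A minimal sketch of that pattern, assuming a to_slot() helper whose name is purely illustrative:

#include <linux/kernel.h>
#include <linux/pci_hotplug.h>

struct slot {
	u8 pwr_save;
	struct hotplug_slot hotplug_slot;	/* embedded, not allocated separately */
	/* ... other driver-specific fields ... */
};

/* Replaces the old hotplug_slot->private back-pointer. */
static inline struct slot *to_slot(struct hotplug_slot *hotplug_slot)
{
	return container_of(hotplug_slot, struct slot, hotplug_slot);
}

static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)
{
	struct slot *slot = to_slot(hotplug_slot);

	*value = slot->pwr_save;	/* cached state now lives in struct slot */
	return 0;
}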
+5 -16
drivers/pci/hotplug/shpchp_ctrl.c
··· 446 446 mutex_unlock(&p_slot->lock); 447 447 } 448 448 449 - static int update_slot_info (struct slot *slot) 449 + static void update_slot_info(struct slot *slot) 450 450 { 451 - struct hotplug_slot_info *info; 452 - int result; 453 - 454 - info = kmalloc(sizeof(*info), GFP_KERNEL); 455 - if (!info) 456 - return -ENOMEM; 457 - 458 - slot->hpc_ops->get_power_status(slot, &(info->power_status)); 459 - slot->hpc_ops->get_attention_status(slot, &(info->attention_status)); 460 - slot->hpc_ops->get_latch_status(slot, &(info->latch_status)); 461 - slot->hpc_ops->get_adapter_status(slot, &(info->adapter_status)); 462 - 463 - result = pci_hp_change_slot_info(slot->hotplug_slot, info); 464 - kfree (info); 465 - return result; 451 + slot->hpc_ops->get_power_status(slot, &slot->pwr_save); 452 + slot->hpc_ops->get_attention_status(slot, &slot->attention_save); 453 + slot->hpc_ops->get_latch_status(slot, &slot->latch_save); 454 + slot->hpc_ops->get_adapter_status(slot, &slot->presence_save); 466 455 } 467 456 468 457 /*
+2 -1
drivers/pci/iov.c
··· 13 13 #include <linux/export.h> 14 14 #include <linux/string.h> 15 15 #include <linux/delay.h> 16 - #include <linux/pci-ats.h> 17 16 #include "pci.h" 18 17 19 18 #define VIRTFN_ID_LEN 16 ··· 132 133 &physfn->sriov->subsystem_vendor); 133 134 pci_read_config_word(virtfn, PCI_SUBSYSTEM_ID, 134 135 &physfn->sriov->subsystem_device); 136 + 137 + physfn->sriov->cfg_size = pci_cfg_space_size(virtfn); 135 138 } 136 139 137 140 int pci_iov_add_virtfn(struct pci_dev *dev, int id)
+6 -3
drivers/pci/msi.c
··· 958 958 } 959 959 } 960 960 } 961 - WARN_ON(!!dev->msix_enabled); 962 961 963 962 /* Check whether driver already requested for MSI irq */ 964 963 if (dev->msi_enabled) { ··· 1027 1028 if (!pci_msi_supported(dev, minvec)) 1028 1029 return -EINVAL; 1029 1030 1030 - WARN_ON(!!dev->msi_enabled); 1031 - 1032 1031 /* Check whether driver already requested MSI-X irqs */ 1033 1032 if (dev->msix_enabled) { 1034 1033 pci_info(dev, "can't enable MSI (MSI-X already enabled)\n"); ··· 1035 1038 1036 1039 if (maxvec < minvec) 1037 1040 return -ERANGE; 1041 + 1042 + if (WARN_ON_ONCE(dev->msi_enabled)) 1043 + return -EINVAL; 1038 1044 1039 1045 nvec = pci_msi_vec_count(dev); 1040 1046 if (nvec < 0) ··· 1086 1086 1087 1087 if (maxvec < minvec) 1088 1088 return -ERANGE; 1089 + 1090 + if (WARN_ON_ONCE(dev->msix_enabled)) 1091 + return -EINVAL; 1089 1092 1090 1093 for (;;) { 1091 1094 if (affd) {
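The msi.c hunks above replace the bare WARN_ON() checks with WARN_ON_ONCE() paths that return -EINVAL, so requesting the same interrupt type twice in the range functions now fails cleanly instead of only warning. For context, a hedged sketch of how a driver typically reaches these functions through the pci_alloc_irq_vectors() wrapper; the example_* names are made up:

#include <linux/pci.h>
#include <linux/interrupt.h>

static irqreturn_t example_irq_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int example_setup_irqs(struct pci_dev *pdev)
{
	int nvec;

	/* Ask for 1-8 vectors, falling back from MSI-X to MSI to INTx. */
	nvec = pci_alloc_irq_vectors(pdev, 1, 8,
				     PCI_IRQ_MSIX | PCI_IRQ_MSI | PCI_IRQ_LEGACY);
	if (nvec < 0)
		return nvec;

	/* Re-enabling the same IRQ type without pci_free_irq_vectors()
	 * is now rejected by the range functions above. */

	return request_irq(pci_irq_vector(pdev, 0), example_irq_handler, 0,
			   "example", pdev);
}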
+805
drivers/pci/p2pdma.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * PCI Peer 2 Peer DMA support. 4 + * 5 + * Copyright (c) 2016-2018, Logan Gunthorpe 6 + * Copyright (c) 2016-2017, Microsemi Corporation 7 + * Copyright (c) 2017, Christoph Hellwig 8 + * Copyright (c) 2018, Eideticom Inc. 9 + */ 10 + 11 + #define pr_fmt(fmt) "pci-p2pdma: " fmt 12 + #include <linux/ctype.h> 13 + #include <linux/pci-p2pdma.h> 14 + #include <linux/module.h> 15 + #include <linux/slab.h> 16 + #include <linux/genalloc.h> 17 + #include <linux/memremap.h> 18 + #include <linux/percpu-refcount.h> 19 + #include <linux/random.h> 20 + #include <linux/seq_buf.h> 21 + 22 + struct pci_p2pdma { 23 + struct percpu_ref devmap_ref; 24 + struct completion devmap_ref_done; 25 + struct gen_pool *pool; 26 + bool p2pmem_published; 27 + }; 28 + 29 + static ssize_t size_show(struct device *dev, struct device_attribute *attr, 30 + char *buf) 31 + { 32 + struct pci_dev *pdev = to_pci_dev(dev); 33 + size_t size = 0; 34 + 35 + if (pdev->p2pdma->pool) 36 + size = gen_pool_size(pdev->p2pdma->pool); 37 + 38 + return snprintf(buf, PAGE_SIZE, "%zd\n", size); 39 + } 40 + static DEVICE_ATTR_RO(size); 41 + 42 + static ssize_t available_show(struct device *dev, struct device_attribute *attr, 43 + char *buf) 44 + { 45 + struct pci_dev *pdev = to_pci_dev(dev); 46 + size_t avail = 0; 47 + 48 + if (pdev->p2pdma->pool) 49 + avail = gen_pool_avail(pdev->p2pdma->pool); 50 + 51 + return snprintf(buf, PAGE_SIZE, "%zd\n", avail); 52 + } 53 + static DEVICE_ATTR_RO(available); 54 + 55 + static ssize_t published_show(struct device *dev, struct device_attribute *attr, 56 + char *buf) 57 + { 58 + struct pci_dev *pdev = to_pci_dev(dev); 59 + 60 + return snprintf(buf, PAGE_SIZE, "%d\n", 61 + pdev->p2pdma->p2pmem_published); 62 + } 63 + static DEVICE_ATTR_RO(published); 64 + 65 + static struct attribute *p2pmem_attrs[] = { 66 + &dev_attr_size.attr, 67 + &dev_attr_available.attr, 68 + &dev_attr_published.attr, 69 + NULL, 70 + }; 71 + 72 + static const struct attribute_group p2pmem_group = { 73 + .attrs = p2pmem_attrs, 74 + .name = "p2pmem", 75 + }; 76 + 77 + static void pci_p2pdma_percpu_release(struct percpu_ref *ref) 78 + { 79 + struct pci_p2pdma *p2p = 80 + container_of(ref, struct pci_p2pdma, devmap_ref); 81 + 82 + complete_all(&p2p->devmap_ref_done); 83 + } 84 + 85 + static void pci_p2pdma_percpu_kill(void *data) 86 + { 87 + struct percpu_ref *ref = data; 88 + 89 + /* 90 + * pci_p2pdma_add_resource() may be called multiple times 91 + * by a driver and may register the percpu_kill devm action multiple 92 + * times. We only want the first action to actually kill the 93 + * percpu_ref. 
94 + */ 95 + if (percpu_ref_is_dying(ref)) 96 + return; 97 + 98 + percpu_ref_kill(ref); 99 + } 100 + 101 + static void pci_p2pdma_release(void *data) 102 + { 103 + struct pci_dev *pdev = data; 104 + 105 + if (!pdev->p2pdma) 106 + return; 107 + 108 + wait_for_completion(&pdev->p2pdma->devmap_ref_done); 109 + percpu_ref_exit(&pdev->p2pdma->devmap_ref); 110 + 111 + gen_pool_destroy(pdev->p2pdma->pool); 112 + sysfs_remove_group(&pdev->dev.kobj, &p2pmem_group); 113 + pdev->p2pdma = NULL; 114 + } 115 + 116 + static int pci_p2pdma_setup(struct pci_dev *pdev) 117 + { 118 + int error = -ENOMEM; 119 + struct pci_p2pdma *p2p; 120 + 121 + p2p = devm_kzalloc(&pdev->dev, sizeof(*p2p), GFP_KERNEL); 122 + if (!p2p) 123 + return -ENOMEM; 124 + 125 + p2p->pool = gen_pool_create(PAGE_SHIFT, dev_to_node(&pdev->dev)); 126 + if (!p2p->pool) 127 + goto out; 128 + 129 + init_completion(&p2p->devmap_ref_done); 130 + error = percpu_ref_init(&p2p->devmap_ref, 131 + pci_p2pdma_percpu_release, 0, GFP_KERNEL); 132 + if (error) 133 + goto out_pool_destroy; 134 + 135 + error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_release, pdev); 136 + if (error) 137 + goto out_pool_destroy; 138 + 139 + pdev->p2pdma = p2p; 140 + 141 + error = sysfs_create_group(&pdev->dev.kobj, &p2pmem_group); 142 + if (error) 143 + goto out_pool_destroy; 144 + 145 + return 0; 146 + 147 + out_pool_destroy: 148 + pdev->p2pdma = NULL; 149 + gen_pool_destroy(p2p->pool); 150 + out: 151 + devm_kfree(&pdev->dev, p2p); 152 + return error; 153 + } 154 + 155 + /** 156 + * pci_p2pdma_add_resource - add memory for use as p2p memory 157 + * @pdev: the device to add the memory to 158 + * @bar: PCI BAR to add 159 + * @size: size of the memory to add, may be zero to use the whole BAR 160 + * @offset: offset into the PCI BAR 161 + * 162 + * The memory will be given ZONE_DEVICE struct pages so that it may 163 + * be used with any DMA request. 
164 + */ 165 + int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, 166 + u64 offset) 167 + { 168 + struct dev_pagemap *pgmap; 169 + void *addr; 170 + int error; 171 + 172 + if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM)) 173 + return -EINVAL; 174 + 175 + if (offset >= pci_resource_len(pdev, bar)) 176 + return -EINVAL; 177 + 178 + if (!size) 179 + size = pci_resource_len(pdev, bar) - offset; 180 + 181 + if (size + offset > pci_resource_len(pdev, bar)) 182 + return -EINVAL; 183 + 184 + if (!pdev->p2pdma) { 185 + error = pci_p2pdma_setup(pdev); 186 + if (error) 187 + return error; 188 + } 189 + 190 + pgmap = devm_kzalloc(&pdev->dev, sizeof(*pgmap), GFP_KERNEL); 191 + if (!pgmap) 192 + return -ENOMEM; 193 + 194 + pgmap->res.start = pci_resource_start(pdev, bar) + offset; 195 + pgmap->res.end = pgmap->res.start + size - 1; 196 + pgmap->res.flags = pci_resource_flags(pdev, bar); 197 + pgmap->ref = &pdev->p2pdma->devmap_ref; 198 + pgmap->type = MEMORY_DEVICE_PCI_P2PDMA; 199 + pgmap->pci_p2pdma_bus_offset = pci_bus_address(pdev, bar) - 200 + pci_resource_start(pdev, bar); 201 + 202 + addr = devm_memremap_pages(&pdev->dev, pgmap); 203 + if (IS_ERR(addr)) { 204 + error = PTR_ERR(addr); 205 + goto pgmap_free; 206 + } 207 + 208 + error = gen_pool_add_virt(pdev->p2pdma->pool, (unsigned long)addr, 209 + pci_bus_address(pdev, bar) + offset, 210 + resource_size(&pgmap->res), dev_to_node(&pdev->dev)); 211 + if (error) 212 + goto pgmap_free; 213 + 214 + error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_percpu_kill, 215 + &pdev->p2pdma->devmap_ref); 216 + if (error) 217 + goto pgmap_free; 218 + 219 + pci_info(pdev, "added peer-to-peer DMA memory %pR\n", 220 + &pgmap->res); 221 + 222 + return 0; 223 + 224 + pgmap_free: 225 + devm_kfree(&pdev->dev, pgmap); 226 + return error; 227 + } 228 + EXPORT_SYMBOL_GPL(pci_p2pdma_add_resource); 229 + 230 + /* 231 + * Note this function returns the parent PCI device with a 232 + * reference taken. It is the caller's responsibily to drop 233 + * the reference. 234 + */ 235 + static struct pci_dev *find_parent_pci_dev(struct device *dev) 236 + { 237 + struct device *parent; 238 + 239 + dev = get_device(dev); 240 + 241 + while (dev) { 242 + if (dev_is_pci(dev)) 243 + return to_pci_dev(dev); 244 + 245 + parent = get_device(dev->parent); 246 + put_device(dev); 247 + dev = parent; 248 + } 249 + 250 + return NULL; 251 + } 252 + 253 + /* 254 + * Check if a PCI bridge has its ACS redirection bits set to redirect P2P 255 + * TLPs upstream via ACS. Returns 1 if the packets will be redirected 256 + * upstream, 0 otherwise. 257 + */ 258 + static int pci_bridge_has_acs_redir(struct pci_dev *pdev) 259 + { 260 + int pos; 261 + u16 ctrl; 262 + 263 + pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ACS); 264 + if (!pos) 265 + return 0; 266 + 267 + pci_read_config_word(pdev, pos + PCI_ACS_CTRL, &ctrl); 268 + 269 + if (ctrl & (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_EC)) 270 + return 1; 271 + 272 + return 0; 273 + } 274 + 275 + static void seq_buf_print_bus_devfn(struct seq_buf *buf, struct pci_dev *pdev) 276 + { 277 + if (!buf) 278 + return; 279 + 280 + seq_buf_printf(buf, "%s;", pci_name(pdev)); 281 + } 282 + 283 + /* 284 + * Find the distance through the nearest common upstream bridge between 285 + * two PCI devices. 286 + * 287 + * If the two devices are the same device then 0 will be returned. 
288 + * 289 + * If there are two virtual functions of the same device behind the same 290 + * bridge port then 2 will be returned (one step down to the PCIe switch, 291 + * then one step back to the same device). 292 + * 293 + * In the case where two devices are connected to the same PCIe switch, the 294 + * value 4 will be returned. This corresponds to the following PCI tree: 295 + * 296 + * -+ Root Port 297 + * \+ Switch Upstream Port 298 + * +-+ Switch Downstream Port 299 + * + \- Device A 300 + * \-+ Switch Downstream Port 301 + * \- Device B 302 + * 303 + * The distance is 4 because we traverse from Device A through the downstream 304 + * port of the switch, to the common upstream port, back up to the second 305 + * downstream port and then to Device B. 306 + * 307 + * Any two devices that don't have a common upstream bridge will return -1. 308 + * In this way devices on separate PCIe root ports will be rejected, which 309 + * is what we want for peer-to-peer seeing each PCIe root port defines a 310 + * separate hierarchy domain and there's no way to determine whether the root 311 + * complex supports forwarding between them. 312 + * 313 + * In the case where two devices are connected to different PCIe switches, 314 + * this function will still return a positive distance as long as both 315 + * switches eventually have a common upstream bridge. Note this covers 316 + * the case of using multiple PCIe switches to achieve a desired level of 317 + * fan-out from a root port. The exact distance will be a function of the 318 + * number of switches between Device A and Device B. 319 + * 320 + * If a bridge which has any ACS redirection bits set is in the path 321 + * then this functions will return -2. This is so we reject any 322 + * cases where the TLPs are forwarded up into the root complex. 323 + * In this case, a list of all infringing bridge addresses will be 324 + * populated in acs_list (assuming it's non-null) for printk purposes. 325 + */ 326 + static int upstream_bridge_distance(struct pci_dev *a, 327 + struct pci_dev *b, 328 + struct seq_buf *acs_list) 329 + { 330 + int dist_a = 0; 331 + int dist_b = 0; 332 + struct pci_dev *bb = NULL; 333 + int acs_cnt = 0; 334 + 335 + /* 336 + * Note, we don't need to take references to devices returned by 337 + * pci_upstream_bridge() seeing we hold a reference to a child 338 + * device which will already hold a reference to the upstream bridge. 
339 + */ 340 + 341 + while (a) { 342 + dist_b = 0; 343 + 344 + if (pci_bridge_has_acs_redir(a)) { 345 + seq_buf_print_bus_devfn(acs_list, a); 346 + acs_cnt++; 347 + } 348 + 349 + bb = b; 350 + 351 + while (bb) { 352 + if (a == bb) 353 + goto check_b_path_acs; 354 + 355 + bb = pci_upstream_bridge(bb); 356 + dist_b++; 357 + } 358 + 359 + a = pci_upstream_bridge(a); 360 + dist_a++; 361 + } 362 + 363 + return -1; 364 + 365 + check_b_path_acs: 366 + bb = b; 367 + 368 + while (bb) { 369 + if (a == bb) 370 + break; 371 + 372 + if (pci_bridge_has_acs_redir(bb)) { 373 + seq_buf_print_bus_devfn(acs_list, bb); 374 + acs_cnt++; 375 + } 376 + 377 + bb = pci_upstream_bridge(bb); 378 + } 379 + 380 + if (acs_cnt) 381 + return -2; 382 + 383 + return dist_a + dist_b; 384 + } 385 + 386 + static int upstream_bridge_distance_warn(struct pci_dev *provider, 387 + struct pci_dev *client) 388 + { 389 + struct seq_buf acs_list; 390 + int ret; 391 + 392 + seq_buf_init(&acs_list, kmalloc(PAGE_SIZE, GFP_KERNEL), PAGE_SIZE); 393 + if (!acs_list.buffer) 394 + return -ENOMEM; 395 + 396 + ret = upstream_bridge_distance(provider, client, &acs_list); 397 + if (ret == -2) { 398 + pci_warn(client, "cannot be used for peer-to-peer DMA as ACS redirect is set between the client and provider (%s)\n", 399 + pci_name(provider)); 400 + /* Drop final semicolon */ 401 + acs_list.buffer[acs_list.len-1] = 0; 402 + pci_warn(client, "to disable ACS redirect for this path, add the kernel parameter: pci=disable_acs_redir=%s\n", 403 + acs_list.buffer); 404 + 405 + } else if (ret < 0) { 406 + pci_warn(client, "cannot be used for peer-to-peer DMA as the client and provider (%s) do not share an upstream bridge\n", 407 + pci_name(provider)); 408 + } 409 + 410 + kfree(acs_list.buffer); 411 + 412 + return ret; 413 + } 414 + 415 + /** 416 + * pci_p2pdma_distance_many - Determive the cumulative distance between 417 + * a p2pdma provider and the clients in use. 418 + * @provider: p2pdma provider to check against the client list 419 + * @clients: array of devices to check (NULL-terminated) 420 + * @num_clients: number of clients in the array 421 + * @verbose: if true, print warnings for devices when we return -1 422 + * 423 + * Returns -1 if any of the clients are not compatible (behind the same 424 + * root port as the provider), otherwise returns a positive number where 425 + * a lower number is the preferrable choice. (If there's one client 426 + * that's the same as the provider it will return 0, which is best choice). 427 + * 428 + * For now, "compatible" means the provider and the clients are all behind 429 + * the same PCI root port. This cuts out cases that may work but is safest 430 + * for the user. Future work can expand this to white-list root complexes that 431 + * can safely forward between each ports. 
432 + */ 433 + int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients, 434 + int num_clients, bool verbose) 435 + { 436 + bool not_supported = false; 437 + struct pci_dev *pci_client; 438 + int distance = 0; 439 + int i, ret; 440 + 441 + if (num_clients == 0) 442 + return -1; 443 + 444 + for (i = 0; i < num_clients; i++) { 445 + pci_client = find_parent_pci_dev(clients[i]); 446 + if (!pci_client) { 447 + if (verbose) 448 + dev_warn(clients[i], 449 + "cannot be used for peer-to-peer DMA as it is not a PCI device\n"); 450 + return -1; 451 + } 452 + 453 + if (verbose) 454 + ret = upstream_bridge_distance_warn(provider, 455 + pci_client); 456 + else 457 + ret = upstream_bridge_distance(provider, pci_client, 458 + NULL); 459 + 460 + pci_dev_put(pci_client); 461 + 462 + if (ret < 0) 463 + not_supported = true; 464 + 465 + if (not_supported && !verbose) 466 + break; 467 + 468 + distance += ret; 469 + } 470 + 471 + if (not_supported) 472 + return -1; 473 + 474 + return distance; 475 + } 476 + EXPORT_SYMBOL_GPL(pci_p2pdma_distance_many); 477 + 478 + /** 479 + * pci_has_p2pmem - check if a given PCI device has published any p2pmem 480 + * @pdev: PCI device to check 481 + */ 482 + bool pci_has_p2pmem(struct pci_dev *pdev) 483 + { 484 + return pdev->p2pdma && pdev->p2pdma->p2pmem_published; 485 + } 486 + EXPORT_SYMBOL_GPL(pci_has_p2pmem); 487 + 488 + /** 489 + * pci_p2pmem_find - find a peer-to-peer DMA memory device compatible with 490 + * the specified list of clients and shortest distance (as determined 491 + * by pci_p2pmem_dma()) 492 + * @clients: array of devices to check (NULL-terminated) 493 + * @num_clients: number of client devices in the list 494 + * 495 + * If multiple devices are behind the same switch, the one "closest" to the 496 + * client devices in use will be chosen first. (So if one of the providers are 497 + * the same as one of the clients, that provider will be used ahead of any 498 + * other providers that are unrelated). If multiple providers are an equal 499 + * distance away, one will be chosen at random. 500 + * 501 + * Returns a pointer to the PCI device with a reference taken (use pci_dev_put 502 + * to return the reference) or NULL if no compatible device is found. The 503 + * found provider will also be assigned to the client list. 
504 + */ 505 + struct pci_dev *pci_p2pmem_find_many(struct device **clients, int num_clients) 506 + { 507 + struct pci_dev *pdev = NULL; 508 + int distance; 509 + int closest_distance = INT_MAX; 510 + struct pci_dev **closest_pdevs; 511 + int dev_cnt = 0; 512 + const int max_devs = PAGE_SIZE / sizeof(*closest_pdevs); 513 + int i; 514 + 515 + closest_pdevs = kmalloc(PAGE_SIZE, GFP_KERNEL); 516 + if (!closest_pdevs) 517 + return NULL; 518 + 519 + while ((pdev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, pdev))) { 520 + if (!pci_has_p2pmem(pdev)) 521 + continue; 522 + 523 + distance = pci_p2pdma_distance_many(pdev, clients, 524 + num_clients, false); 525 + if (distance < 0 || distance > closest_distance) 526 + continue; 527 + 528 + if (distance == closest_distance && dev_cnt >= max_devs) 529 + continue; 530 + 531 + if (distance < closest_distance) { 532 + for (i = 0; i < dev_cnt; i++) 533 + pci_dev_put(closest_pdevs[i]); 534 + 535 + dev_cnt = 0; 536 + closest_distance = distance; 537 + } 538 + 539 + closest_pdevs[dev_cnt++] = pci_dev_get(pdev); 540 + } 541 + 542 + if (dev_cnt) 543 + pdev = pci_dev_get(closest_pdevs[prandom_u32_max(dev_cnt)]); 544 + 545 + for (i = 0; i < dev_cnt; i++) 546 + pci_dev_put(closest_pdevs[i]); 547 + 548 + kfree(closest_pdevs); 549 + return pdev; 550 + } 551 + EXPORT_SYMBOL_GPL(pci_p2pmem_find_many); 552 + 553 + /** 554 + * pci_alloc_p2p_mem - allocate peer-to-peer DMA memory 555 + * @pdev: the device to allocate memory from 556 + * @size: number of bytes to allocate 557 + * 558 + * Returns the allocated memory or NULL on error. 559 + */ 560 + void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size) 561 + { 562 + void *ret; 563 + 564 + if (unlikely(!pdev->p2pdma)) 565 + return NULL; 566 + 567 + if (unlikely(!percpu_ref_tryget_live(&pdev->p2pdma->devmap_ref))) 568 + return NULL; 569 + 570 + ret = (void *)gen_pool_alloc(pdev->p2pdma->pool, size); 571 + 572 + if (unlikely(!ret)) 573 + percpu_ref_put(&pdev->p2pdma->devmap_ref); 574 + 575 + return ret; 576 + } 577 + EXPORT_SYMBOL_GPL(pci_alloc_p2pmem); 578 + 579 + /** 580 + * pci_free_p2pmem - free peer-to-peer DMA memory 581 + * @pdev: the device the memory was allocated from 582 + * @addr: address of the memory that was allocated 583 + * @size: number of bytes that was allocated 584 + */ 585 + void pci_free_p2pmem(struct pci_dev *pdev, void *addr, size_t size) 586 + { 587 + gen_pool_free(pdev->p2pdma->pool, (uintptr_t)addr, size); 588 + percpu_ref_put(&pdev->p2pdma->devmap_ref); 589 + } 590 + EXPORT_SYMBOL_GPL(pci_free_p2pmem); 591 + 592 + /** 593 + * pci_virt_to_bus - return the PCI bus address for a given virtual 594 + * address obtained with pci_alloc_p2pmem() 595 + * @pdev: the device the memory was allocated from 596 + * @addr: address of the memory that was allocated 597 + */ 598 + pci_bus_addr_t pci_p2pmem_virt_to_bus(struct pci_dev *pdev, void *addr) 599 + { 600 + if (!addr) 601 + return 0; 602 + if (!pdev->p2pdma) 603 + return 0; 604 + 605 + /* 606 + * Note: when we added the memory to the pool we used the PCI 607 + * bus address as the physical address. So gen_pool_virt_to_phys() 608 + * actually returns the bus address despite the misleading name. 
609 + */ 610 + return gen_pool_virt_to_phys(pdev->p2pdma->pool, (unsigned long)addr); 611 + } 612 + EXPORT_SYMBOL_GPL(pci_p2pmem_virt_to_bus); 613 + 614 + /** 615 + * pci_p2pmem_alloc_sgl - allocate peer-to-peer DMA memory in a scatterlist 616 + * @pdev: the device to allocate memory from 617 + * @nents: the number of SG entries in the list 618 + * @length: number of bytes to allocate 619 + * 620 + * Returns 0 on success 621 + */ 622 + struct scatterlist *pci_p2pmem_alloc_sgl(struct pci_dev *pdev, 623 + unsigned int *nents, u32 length) 624 + { 625 + struct scatterlist *sg; 626 + void *addr; 627 + 628 + sg = kzalloc(sizeof(*sg), GFP_KERNEL); 629 + if (!sg) 630 + return NULL; 631 + 632 + sg_init_table(sg, 1); 633 + 634 + addr = pci_alloc_p2pmem(pdev, length); 635 + if (!addr) 636 + goto out_free_sg; 637 + 638 + sg_set_buf(sg, addr, length); 639 + *nents = 1; 640 + return sg; 641 + 642 + out_free_sg: 643 + kfree(sg); 644 + return NULL; 645 + } 646 + EXPORT_SYMBOL_GPL(pci_p2pmem_alloc_sgl); 647 + 648 + /** 649 + * pci_p2pmem_free_sgl - free a scatterlist allocated by pci_p2pmem_alloc_sgl() 650 + * @pdev: the device to allocate memory from 651 + * @sgl: the allocated scatterlist 652 + */ 653 + void pci_p2pmem_free_sgl(struct pci_dev *pdev, struct scatterlist *sgl) 654 + { 655 + struct scatterlist *sg; 656 + int count; 657 + 658 + for_each_sg(sgl, sg, INT_MAX, count) { 659 + if (!sg) 660 + break; 661 + 662 + pci_free_p2pmem(pdev, sg_virt(sg), sg->length); 663 + } 664 + kfree(sgl); 665 + } 666 + EXPORT_SYMBOL_GPL(pci_p2pmem_free_sgl); 667 + 668 + /** 669 + * pci_p2pmem_publish - publish the peer-to-peer DMA memory for use by 670 + * other devices with pci_p2pmem_find() 671 + * @pdev: the device with peer-to-peer DMA memory to publish 672 + * @publish: set to true to publish the memory, false to unpublish it 673 + * 674 + * Published memory can be used by other PCI device drivers for 675 + * peer-2-peer DMA operations. Non-published memory is reserved for 676 + * exlusive use of the device driver that registers the peer-to-peer 677 + * memory. 678 + */ 679 + void pci_p2pmem_publish(struct pci_dev *pdev, bool publish) 680 + { 681 + if (pdev->p2pdma) 682 + pdev->p2pdma->p2pmem_published = publish; 683 + } 684 + EXPORT_SYMBOL_GPL(pci_p2pmem_publish); 685 + 686 + /** 687 + * pci_p2pdma_map_sg - map a PCI peer-to-peer scatterlist for DMA 688 + * @dev: device doing the DMA request 689 + * @sg: scatter list to map 690 + * @nents: elements in the scatterlist 691 + * @dir: DMA direction 692 + * 693 + * Scatterlists mapped with this function should not be unmapped in any way. 694 + * 695 + * Returns the number of SG entries mapped or 0 on error. 696 + */ 697 + int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, int nents, 698 + enum dma_data_direction dir) 699 + { 700 + struct dev_pagemap *pgmap; 701 + struct scatterlist *s; 702 + phys_addr_t paddr; 703 + int i; 704 + 705 + /* 706 + * p2pdma mappings are not compatible with devices that use 707 + * dma_virt_ops. 
If the upper layers do the right thing 708 + * this should never happen because it will be prevented 709 + * by the check in pci_p2pdma_add_client() 710 + */ 711 + if (WARN_ON_ONCE(IS_ENABLED(CONFIG_DMA_VIRT_OPS) && 712 + dev->dma_ops == &dma_virt_ops)) 713 + return 0; 714 + 715 + for_each_sg(sg, s, nents, i) { 716 + pgmap = sg_page(s)->pgmap; 717 + paddr = sg_phys(s); 718 + 719 + s->dma_address = paddr - pgmap->pci_p2pdma_bus_offset; 720 + sg_dma_len(s) = s->length; 721 + } 722 + 723 + return nents; 724 + } 725 + EXPORT_SYMBOL_GPL(pci_p2pdma_map_sg); 726 + 727 + /** 728 + * pci_p2pdma_enable_store - parse a configfs/sysfs attribute store 729 + * to enable p2pdma 730 + * @page: contents of the value to be stored 731 + * @p2p_dev: returns the PCI device that was selected to be used 732 + * (if one was specified in the stored value) 733 + * @use_p2pdma: returns whether to enable p2pdma or not 734 + * 735 + * Parses an attribute value to decide whether to enable p2pdma. 736 + * The value can select a PCI device (using it's full BDF device 737 + * name) or a boolean (in any format strtobool() accepts). A false 738 + * value disables p2pdma, a true value expects the caller 739 + * to automatically find a compatible device and specifying a PCI device 740 + * expects the caller to use the specific provider. 741 + * 742 + * pci_p2pdma_enable_show() should be used as the show operation for 743 + * the attribute. 744 + * 745 + * Returns 0 on success 746 + */ 747 + int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev, 748 + bool *use_p2pdma) 749 + { 750 + struct device *dev; 751 + 752 + dev = bus_find_device_by_name(&pci_bus_type, NULL, page); 753 + if (dev) { 754 + *use_p2pdma = true; 755 + *p2p_dev = to_pci_dev(dev); 756 + 757 + if (!pci_has_p2pmem(*p2p_dev)) { 758 + pci_err(*p2p_dev, 759 + "PCI device has no peer-to-peer memory: %s\n", 760 + page); 761 + pci_dev_put(*p2p_dev); 762 + return -ENODEV; 763 + } 764 + 765 + return 0; 766 + } else if ((page[0] == '0' || page[0] == '1') && !iscntrl(page[1])) { 767 + /* 768 + * If the user enters a PCI device that doesn't exist 769 + * like "0000:01:00.1", we don't want strtobool to think 770 + * it's a '0' when it's clearly not what the user wanted. 771 + * So we require 0's and 1's to be exactly one character. 772 + */ 773 + } else if (!strtobool(page, use_p2pdma)) { 774 + return 0; 775 + } 776 + 777 + pr_err("No such PCI device: %.*s\n", (int)strcspn(page, "\n"), page); 778 + return -ENODEV; 779 + } 780 + EXPORT_SYMBOL_GPL(pci_p2pdma_enable_store); 781 + 782 + /** 783 + * pci_p2pdma_enable_show - show a configfs/sysfs attribute indicating 784 + * whether p2pdma is enabled 785 + * @page: contents of the stored value 786 + * @p2p_dev: the selected p2p device (NULL if no device is selected) 787 + * @use_p2pdma: whether p2pdme has been enabled 788 + * 789 + * Attributes that use pci_p2pdma_enable_store() should use this function 790 + * to show the value of the attribute. 791 + * 792 + * Returns 0 on success 793 + */ 794 + ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev, 795 + bool use_p2pdma) 796 + { 797 + if (!use_p2pdma) 798 + return sprintf(page, "0\n"); 799 + 800 + if (!p2p_dev) 801 + return sprintf(page, "1\n"); 802 + 803 + return sprintf(page, "%s\n", pci_name(p2p_dev)); 804 + } 805 + EXPORT_SYMBOL_GPL(pci_p2pdma_enable_show);
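Taken together, the new p2pdma.c above is a provider/consumer API: a driver that owns suitable BAR memory registers it with pci_p2pdma_add_resource() and optionally publishes it, while an orchestrating driver selects the closest published provider for its clients and allocates buffers whose bus addresses the peers can DMA to. A hedged sketch of both sides, using only functions added in this file; the BAR number, sizes and example_* names are placeholders:

#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/pci-p2pdma.h>

/* Provider side: expose all of BAR 4 as peer-to-peer memory and publish it. */
static int example_provider_probe(struct pci_dev *pdev)
{
	int rc;

	rc = pci_p2pdma_add_resource(pdev, 4, 0 /* whole BAR */, 0);
	if (rc)
		return rc;

	pci_p2pmem_publish(pdev, true);
	return 0;
}

/* Consumer side: pick the closest provider for two client devices and
 * allocate a buffer from it. The caller owns a reference on *provider
 * (drop with pci_dev_put()) and frees the buffer with pci_free_p2pmem(). */
static void *example_alloc_p2p_buffer(struct device *client_a,
				      struct device *client_b,
				      struct pci_dev **provider, size_t size)
{
	struct device *clients[] = { client_a, client_b };

	*provider = pci_p2pmem_find_many(clients, ARRAY_SIZE(clients));
	if (!*provider)
		return NULL;

	/* The address a peer should DMA to comes from pci_p2pmem_virt_to_bus(). */
	return pci_alloc_p2pmem(*provider, size);
}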
+62 -1
drivers/pci/pci-acpi.c
··· 519 519 return PCI_POWER_ERROR; 520 520 } 521 521 522 + static struct acpi_device *acpi_pci_find_companion(struct device *dev); 523 + 524 + static bool acpi_pci_bridge_d3(struct pci_dev *dev) 525 + { 526 + const struct fwnode_handle *fwnode; 527 + struct acpi_device *adev; 528 + struct pci_dev *root; 529 + u8 val; 530 + 531 + if (!dev->is_hotplug_bridge) 532 + return false; 533 + 534 + /* 535 + * Look for a special _DSD property for the root port and if it 536 + * is set we know the hierarchy behind it supports D3 just fine. 537 + */ 538 + root = pci_find_pcie_root_port(dev); 539 + if (!root) 540 + return false; 541 + 542 + adev = ACPI_COMPANION(&root->dev); 543 + if (root == dev) { 544 + /* 545 + * It is possible that the ACPI companion is not yet bound 546 + * for the root port so look it up manually here. 547 + */ 548 + if (!adev && !pci_dev_is_added(root)) 549 + adev = acpi_pci_find_companion(&root->dev); 550 + } 551 + 552 + if (!adev) 553 + return false; 554 + 555 + fwnode = acpi_fwnode_handle(adev); 556 + if (fwnode_property_read_u8(fwnode, "HotPlugSupportInD3", &val)) 557 + return false; 558 + 559 + return val == 1; 560 + } 561 + 522 562 static bool acpi_pci_power_manageable(struct pci_dev *dev) 523 563 { 524 564 struct acpi_device *adev = ACPI_COMPANION(&dev->dev); ··· 588 548 error = -EBUSY; 589 549 break; 590 550 } 551 + /* Fall through */ 591 552 case PCI_D0: 592 553 case PCI_D1: 593 554 case PCI_D2: ··· 676 635 } 677 636 678 637 static const struct pci_platform_pm_ops acpi_pci_platform_pm = { 638 + .bridge_d3 = acpi_pci_bridge_d3, 679 639 .is_manageable = acpi_pci_power_manageable, 680 640 .set_state = acpi_pci_set_power_state, 681 641 .get_state = acpi_pci_get_power_state, ··· 793 751 { 794 752 struct pci_dev *pci_dev = to_pci_dev(dev); 795 753 struct acpi_device *adev = ACPI_COMPANION(dev); 754 + int node; 796 755 797 756 if (!adev) 798 757 return; 758 + 759 + node = acpi_get_node(adev->handle); 760 + if (node != NUMA_NO_NODE) 761 + set_dev_node(dev, node); 799 762 800 763 pci_acpi_optimize_delay(pci_dev, adev->handle); 801 764 ··· 809 762 return; 810 763 811 764 device_set_wakeup_capable(dev, true); 765 + /* 766 + * For bridges that can do D3 we enable wake automatically (as 767 + * we do for the power management itself in that case). The 768 + * reason is that the bridge may have additional methods such as 769 + * _DSW that need to be called. 770 + */ 771 + if (pci_dev->bridge_d3) 772 + device_wakeup_enable(dev); 773 + 812 774 acpi_pci_wakeup(pci_dev, false); 813 775 } 814 776 815 777 static void pci_acpi_cleanup(struct device *dev) 816 778 { 817 779 struct acpi_device *adev = ACPI_COMPANION(dev); 780 + struct pci_dev *pci_dev = to_pci_dev(dev); 818 781 819 782 if (!adev) 820 783 return; 821 784 822 785 pci_acpi_remove_pm_notifier(adev); 823 - if (adev->wakeup.flags.valid) 786 + if (adev->wakeup.flags.valid) { 787 + if (pci_dev->bridge_d3) 788 + device_wakeup_disable(dev); 789 + 824 790 device_set_wakeup_capable(dev, false); 791 + } 825 792 } 826 793 827 794 static bool pci_acpi_bus_match(struct device *dev)
+408
drivers/pci/pci-bridge-emul.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2018 Marvell 4 + * 5 + * Author: Thomas Petazzoni <thomas.petazzoni@bootlin.com> 6 + * 7 + * This file helps PCI controller drivers implement a fake root port 8 + * PCI bridge when the HW doesn't provide such a root port PCI 9 + * bridge. 10 + * 11 + * It emulates a PCI bridge by providing a fake PCI configuration 12 + * space (and optionally a PCIe capability configuration space) in 13 + * memory. By default the read/write operations simply read and update 14 + * this fake configuration space in memory. However, PCI controller 15 + * drivers can provide through the 'struct pci_sw_bridge_ops' 16 + * structure a set of operations to override or complement this 17 + * default behavior. 18 + */ 19 + 20 + #include <linux/pci.h> 21 + #include "pci-bridge-emul.h" 22 + 23 + #define PCI_BRIDGE_CONF_END PCI_STD_HEADER_SIZEOF 24 + #define PCI_CAP_PCIE_START PCI_BRIDGE_CONF_END 25 + #define PCI_CAP_PCIE_END (PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2) 26 + 27 + /* 28 + * Initialize a pci_bridge_emul structure to represent a fake PCI 29 + * bridge configuration space. The caller needs to have initialized 30 + * the PCI configuration space with whatever values make sense 31 + * (typically at least vendor, device, revision), the ->ops pointer, 32 + * and optionally ->data and ->has_pcie. 33 + */ 34 + void pci_bridge_emul_init(struct pci_bridge_emul *bridge) 35 + { 36 + bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16; 37 + bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE; 38 + bridge->conf.cache_line_size = 0x10; 39 + bridge->conf.status = PCI_STATUS_CAP_LIST; 40 + 41 + if (bridge->has_pcie) { 42 + bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START; 43 + bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP; 44 + /* Set PCIe v2, root port, slot support */ 45 + bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 | 46 + PCI_EXP_FLAGS_SLOT; 47 + } 48 + } 49 + 50 + struct pci_bridge_reg_behavior { 51 + /* Read-only bits */ 52 + u32 ro; 53 + 54 + /* Read-write bits */ 55 + u32 rw; 56 + 57 + /* Write-1-to-clear bits */ 58 + u32 w1c; 59 + 60 + /* Reserved bits (hardwired to 0) */ 61 + u32 rsvd; 62 + }; 63 + 64 + const static struct pci_bridge_reg_behavior pci_regs_behavior[] = { 65 + [PCI_VENDOR_ID / 4] = { .ro = ~0 }, 66 + [PCI_COMMAND / 4] = { 67 + .rw = (PCI_COMMAND_IO | PCI_COMMAND_MEMORY | 68 + PCI_COMMAND_MASTER | PCI_COMMAND_PARITY | 69 + PCI_COMMAND_SERR), 70 + .ro = ((PCI_COMMAND_SPECIAL | PCI_COMMAND_INVALIDATE | 71 + PCI_COMMAND_VGA_PALETTE | PCI_COMMAND_WAIT | 72 + PCI_COMMAND_FAST_BACK) | 73 + (PCI_STATUS_CAP_LIST | PCI_STATUS_66MHZ | 74 + PCI_STATUS_FAST_BACK | PCI_STATUS_DEVSEL_MASK) << 16), 75 + .rsvd = GENMASK(15, 10) | ((BIT(6) | GENMASK(3, 0)) << 16), 76 + .w1c = (PCI_STATUS_PARITY | 77 + PCI_STATUS_SIG_TARGET_ABORT | 78 + PCI_STATUS_REC_TARGET_ABORT | 79 + PCI_STATUS_REC_MASTER_ABORT | 80 + PCI_STATUS_SIG_SYSTEM_ERROR | 81 + PCI_STATUS_DETECTED_PARITY) << 16, 82 + }, 83 + [PCI_CLASS_REVISION / 4] = { .ro = ~0 }, 84 + 85 + /* 86 + * Cache Line Size register: implement as read-only, we do not 87 + * pretend implementing "Memory Write and Invalidate" 88 + * transactions" 89 + * 90 + * Latency Timer Register: implemented as read-only, as "A 91 + * bridge that is not capable of a burst transfer of more than 92 + * two data phases on its primary interface is permitted to 93 + * hardwire the Latency Timer to a value of 16 or less" 94 + * 95 + * Header Type: always read-only 96 + * 97 + * BIST register: implemented as 
read-only, as "A bridge that 98 + * does not support BIST must implement this register as a 99 + * read-only register that returns 0 when read" 100 + */ 101 + [PCI_CACHE_LINE_SIZE / 4] = { .ro = ~0 }, 102 + 103 + /* 104 + * Base Address registers not used must be implemented as 105 + * read-only registers that return 0 when read. 106 + */ 107 + [PCI_BASE_ADDRESS_0 / 4] = { .ro = ~0 }, 108 + [PCI_BASE_ADDRESS_1 / 4] = { .ro = ~0 }, 109 + 110 + [PCI_PRIMARY_BUS / 4] = { 111 + /* Primary, secondary and subordinate bus are RW */ 112 + .rw = GENMASK(24, 0), 113 + /* Secondary latency is read-only */ 114 + .ro = GENMASK(31, 24), 115 + }, 116 + 117 + [PCI_IO_BASE / 4] = { 118 + /* The high four bits of I/O base/limit are RW */ 119 + .rw = (GENMASK(15, 12) | GENMASK(7, 4)), 120 + 121 + /* The low four bits of I/O base/limit are RO */ 122 + .ro = (((PCI_STATUS_66MHZ | PCI_STATUS_FAST_BACK | 123 + PCI_STATUS_DEVSEL_MASK) << 16) | 124 + GENMASK(11, 8) | GENMASK(3, 0)), 125 + 126 + .w1c = (PCI_STATUS_PARITY | 127 + PCI_STATUS_SIG_TARGET_ABORT | 128 + PCI_STATUS_REC_TARGET_ABORT | 129 + PCI_STATUS_REC_MASTER_ABORT | 130 + PCI_STATUS_SIG_SYSTEM_ERROR | 131 + PCI_STATUS_DETECTED_PARITY) << 16, 132 + 133 + .rsvd = ((BIT(6) | GENMASK(4, 0)) << 16), 134 + }, 135 + 136 + [PCI_MEMORY_BASE / 4] = { 137 + /* The high 12-bits of mem base/limit are RW */ 138 + .rw = GENMASK(31, 20) | GENMASK(15, 4), 139 + 140 + /* The low four bits of mem base/limit are RO */ 141 + .ro = GENMASK(19, 16) | GENMASK(3, 0), 142 + }, 143 + 144 + [PCI_PREF_MEMORY_BASE / 4] = { 145 + /* The high 12-bits of pref mem base/limit are RW */ 146 + .rw = GENMASK(31, 20) | GENMASK(15, 4), 147 + 148 + /* The low four bits of pref mem base/limit are RO */ 149 + .ro = GENMASK(19, 16) | GENMASK(3, 0), 150 + }, 151 + 152 + [PCI_PREF_BASE_UPPER32 / 4] = { 153 + .rw = ~0, 154 + }, 155 + 156 + [PCI_PREF_LIMIT_UPPER32 / 4] = { 157 + .rw = ~0, 158 + }, 159 + 160 + [PCI_IO_BASE_UPPER16 / 4] = { 161 + .rw = ~0, 162 + }, 163 + 164 + [PCI_CAPABILITY_LIST / 4] = { 165 + .ro = GENMASK(7, 0), 166 + .rsvd = GENMASK(31, 8), 167 + }, 168 + 169 + [PCI_ROM_ADDRESS1 / 4] = { 170 + .rw = GENMASK(31, 11) | BIT(0), 171 + .rsvd = GENMASK(10, 1), 172 + }, 173 + 174 + /* 175 + * Interrupt line (bits 7:0) are RW, interrupt pin (bits 15:8) 176 + * are RO, and bridge control (31:16) are a mix of RW, RO, 177 + * reserved and W1C bits 178 + */ 179 + [PCI_INTERRUPT_LINE / 4] = { 180 + /* Interrupt line is RW */ 181 + .rw = (GENMASK(7, 0) | 182 + ((PCI_BRIDGE_CTL_PARITY | 183 + PCI_BRIDGE_CTL_SERR | 184 + PCI_BRIDGE_CTL_ISA | 185 + PCI_BRIDGE_CTL_VGA | 186 + PCI_BRIDGE_CTL_MASTER_ABORT | 187 + PCI_BRIDGE_CTL_BUS_RESET | 188 + BIT(8) | BIT(9) | BIT(11)) << 16)), 189 + 190 + /* Interrupt pin is RO */ 191 + .ro = (GENMASK(15, 8) | ((PCI_BRIDGE_CTL_FAST_BACK) << 16)), 192 + 193 + .w1c = BIT(10) << 16, 194 + 195 + .rsvd = (GENMASK(15, 12) | BIT(4)) << 16, 196 + }, 197 + }; 198 + 199 + const static struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = { 200 + [PCI_CAP_LIST_ID / 4] = { 201 + /* 202 + * Capability ID, Next Capability Pointer and 203 + * Capabilities register are all read-only. 
204 + */ 205 + .ro = ~0, 206 + }, 207 + 208 + [PCI_EXP_DEVCAP / 4] = { 209 + .ro = ~0, 210 + }, 211 + 212 + [PCI_EXP_DEVCTL / 4] = { 213 + /* Device control register is RW */ 214 + .rw = GENMASK(15, 0), 215 + 216 + /* 217 + * Device status register has 4 bits W1C, then 2 bits 218 + * RO, the rest is reserved 219 + */ 220 + .w1c = GENMASK(19, 16), 221 + .ro = GENMASK(20, 19), 222 + .rsvd = GENMASK(31, 21), 223 + }, 224 + 225 + [PCI_EXP_LNKCAP / 4] = { 226 + /* All bits are RO, except bit 23 which is reserved */ 227 + .ro = lower_32_bits(~BIT(23)), 228 + .rsvd = BIT(23), 229 + }, 230 + 231 + [PCI_EXP_LNKCTL / 4] = { 232 + /* 233 + * Link control has bits [1:0] and [11:3] RW, the 234 + * other bits are reserved. 235 + * Link status has bits [13:0] RO, and bits [14:15] 236 + * W1C. 237 + */ 238 + .rw = GENMASK(11, 3) | GENMASK(1, 0), 239 + .ro = GENMASK(13, 0) << 16, 240 + .w1c = GENMASK(15, 14) << 16, 241 + .rsvd = GENMASK(15, 12) | BIT(2), 242 + }, 243 + 244 + [PCI_EXP_SLTCAP / 4] = { 245 + .ro = ~0, 246 + }, 247 + 248 + [PCI_EXP_SLTCTL / 4] = { 249 + /* 250 + * Slot control has bits [12:0] RW, the rest is 251 + * reserved. 252 + * 253 + * Slot status has a mix of W1C and RO bits, as well 254 + * as reserved bits. 255 + */ 256 + .rw = GENMASK(12, 0), 257 + .w1c = (PCI_EXP_SLTSTA_ABP | PCI_EXP_SLTSTA_PFD | 258 + PCI_EXP_SLTSTA_MRLSC | PCI_EXP_SLTSTA_PDC | 259 + PCI_EXP_SLTSTA_CC | PCI_EXP_SLTSTA_DLLSC) << 16, 260 + .ro = (PCI_EXP_SLTSTA_MRLSS | PCI_EXP_SLTSTA_PDS | 261 + PCI_EXP_SLTSTA_EIS) << 16, 262 + .rsvd = GENMASK(15, 12) | (GENMASK(15, 9) << 16), 263 + }, 264 + 265 + [PCI_EXP_RTCTL / 4] = { 266 + /* 267 + * Root control has bits [4:0] RW, the rest is 268 + * reserved. 269 + * 270 + * Root status has bit 0 RO, the rest is reserved. 271 + */ 272 + .rw = (PCI_EXP_RTCTL_SECEE | PCI_EXP_RTCTL_SENFEE | 273 + PCI_EXP_RTCTL_SEFEE | PCI_EXP_RTCTL_PMEIE | 274 + PCI_EXP_RTCTL_CRSSVE), 275 + .ro = PCI_EXP_RTCAP_CRSVIS << 16, 276 + .rsvd = GENMASK(15, 5) | (GENMASK(15, 1) << 16), 277 + }, 278 + 279 + [PCI_EXP_RTSTA / 4] = { 280 + .ro = GENMASK(15, 0) | PCI_EXP_RTSTA_PENDING, 281 + .w1c = PCI_EXP_RTSTA_PME, 282 + .rsvd = GENMASK(31, 18), 283 + }, 284 + }; 285 + 286 + /* 287 + * Should be called by the PCI controller driver when reading the PCI 288 + * configuration space of the fake bridge. It will call back the 289 + * ->ops->read_base or ->ops->read_pcie operations. 
290 + */ 291 + int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where, 292 + int size, u32 *value) 293 + { 294 + int ret; 295 + int reg = where & ~3; 296 + pci_bridge_emul_read_status_t (*read_op)(struct pci_bridge_emul *bridge, 297 + int reg, u32 *value); 298 + u32 *cfgspace; 299 + const struct pci_bridge_reg_behavior *behavior; 300 + 301 + if (bridge->has_pcie && reg >= PCI_CAP_PCIE_END) { 302 + *value = 0; 303 + return PCIBIOS_SUCCESSFUL; 304 + } 305 + 306 + if (!bridge->has_pcie && reg >= PCI_BRIDGE_CONF_END) { 307 + *value = 0; 308 + return PCIBIOS_SUCCESSFUL; 309 + } 310 + 311 + if (bridge->has_pcie && reg >= PCI_CAP_PCIE_START) { 312 + reg -= PCI_CAP_PCIE_START; 313 + read_op = bridge->ops->read_pcie; 314 + cfgspace = (u32 *) &bridge->pcie_conf; 315 + behavior = pcie_cap_regs_behavior; 316 + } else { 317 + read_op = bridge->ops->read_base; 318 + cfgspace = (u32 *) &bridge->conf; 319 + behavior = pci_regs_behavior; 320 + } 321 + 322 + if (read_op) 323 + ret = read_op(bridge, reg, value); 324 + else 325 + ret = PCI_BRIDGE_EMUL_NOT_HANDLED; 326 + 327 + if (ret == PCI_BRIDGE_EMUL_NOT_HANDLED) 328 + *value = cfgspace[reg / 4]; 329 + 330 + /* 331 + * Make sure we never return any reserved bit with a value 332 + * different from 0. 333 + */ 334 + *value &= ~behavior[reg / 4].rsvd; 335 + 336 + if (size == 1) 337 + *value = (*value >> (8 * (where & 3))) & 0xff; 338 + else if (size == 2) 339 + *value = (*value >> (8 * (where & 3))) & 0xffff; 340 + else if (size != 4) 341 + return PCIBIOS_BAD_REGISTER_NUMBER; 342 + 343 + return PCIBIOS_SUCCESSFUL; 344 + } 345 + 346 + /* 347 + * Should be called by the PCI controller driver when writing the PCI 348 + * configuration space of the fake bridge. It will call back the 349 + * ->ops->write_base or ->ops->write_pcie operations. 
350 + */ 351 + int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where, 352 + int size, u32 value) 353 + { 354 + int reg = where & ~3; 355 + int mask, ret, old, new, shift; 356 + void (*write_op)(struct pci_bridge_emul *bridge, int reg, 357 + u32 old, u32 new, u32 mask); 358 + u32 *cfgspace; 359 + const struct pci_bridge_reg_behavior *behavior; 360 + 361 + if (bridge->has_pcie && reg >= PCI_CAP_PCIE_END) 362 + return PCIBIOS_SUCCESSFUL; 363 + 364 + if (!bridge->has_pcie && reg >= PCI_BRIDGE_CONF_END) 365 + return PCIBIOS_SUCCESSFUL; 366 + 367 + shift = (where & 0x3) * 8; 368 + 369 + if (size == 4) 370 + mask = 0xffffffff; 371 + else if (size == 2) 372 + mask = 0xffff << shift; 373 + else if (size == 1) 374 + mask = 0xff << shift; 375 + else 376 + return PCIBIOS_BAD_REGISTER_NUMBER; 377 + 378 + ret = pci_bridge_emul_conf_read(bridge, reg, 4, &old); 379 + if (ret != PCIBIOS_SUCCESSFUL) 380 + return ret; 381 + 382 + if (bridge->has_pcie && reg >= PCI_CAP_PCIE_START) { 383 + reg -= PCI_CAP_PCIE_START; 384 + write_op = bridge->ops->write_pcie; 385 + cfgspace = (u32 *) &bridge->pcie_conf; 386 + behavior = pcie_cap_regs_behavior; 387 + } else { 388 + write_op = bridge->ops->write_base; 389 + cfgspace = (u32 *) &bridge->conf; 390 + behavior = pci_regs_behavior; 391 + } 392 + 393 + /* Keep all bits, except the RW bits */ 394 + new = old & (~mask | ~behavior[reg / 4].rw); 395 + 396 + /* Update the value of the RW bits */ 397 + new |= (value << shift) & (behavior[reg / 4].rw & mask); 398 + 399 + /* Clear the W1C bits */ 400 + new &= ~((value << shift) & (behavior[reg / 4].w1c & mask)); 401 + 402 + cfgspace[reg / 4] = new; 403 + 404 + if (write_op) 405 + write_op(bridge, reg, old, new, mask); 406 + 407 + return PCIBIOS_SUCCESSFUL; 408 + }
+124
drivers/pci/pci-bridge-emul.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __PCI_BRIDGE_EMUL_H__ 3 + #define __PCI_BRIDGE_EMUL_H__ 4 + 5 + #include <linux/kernel.h> 6 + 7 + /* PCI configuration space of a PCI-to-PCI bridge. */ 8 + struct pci_bridge_emul_conf { 9 + u16 vendor; 10 + u16 device; 11 + u16 command; 12 + u16 status; 13 + u32 class_revision; 14 + u8 cache_line_size; 15 + u8 latency_timer; 16 + u8 header_type; 17 + u8 bist; 18 + u32 bar[2]; 19 + u8 primary_bus; 20 + u8 secondary_bus; 21 + u8 subordinate_bus; 22 + u8 secondary_latency_timer; 23 + u8 iobase; 24 + u8 iolimit; 25 + u16 secondary_status; 26 + u16 membase; 27 + u16 memlimit; 28 + u16 pref_mem_base; 29 + u16 pref_mem_limit; 30 + u32 prefbaseupper; 31 + u32 preflimitupper; 32 + u16 iobaseupper; 33 + u16 iolimitupper; 34 + u8 capabilities_pointer; 35 + u8 reserve[3]; 36 + u32 romaddr; 37 + u8 intline; 38 + u8 intpin; 39 + u16 bridgectrl; 40 + }; 41 + 42 + /* PCI configuration space of the PCIe capabilities */ 43 + struct pci_bridge_emul_pcie_conf { 44 + u8 cap_id; 45 + u8 next; 46 + u16 cap; 47 + u32 devcap; 48 + u16 devctl; 49 + u16 devsta; 50 + u32 lnkcap; 51 + u16 lnkctl; 52 + u16 lnksta; 53 + u32 slotcap; 54 + u16 slotctl; 55 + u16 slotsta; 56 + u16 rootctl; 57 + u16 rsvd; 58 + u32 rootsta; 59 + u32 devcap2; 60 + u16 devctl2; 61 + u16 devsta2; 62 + u32 lnkcap2; 63 + u16 lnkctl2; 64 + u16 lnksta2; 65 + u32 slotcap2; 66 + u16 slotctl2; 67 + u16 slotsta2; 68 + }; 69 + 70 + struct pci_bridge_emul; 71 + 72 + typedef enum { PCI_BRIDGE_EMUL_HANDLED, 73 + PCI_BRIDGE_EMUL_NOT_HANDLED } pci_bridge_emul_read_status_t; 74 + 75 + struct pci_bridge_emul_ops { 76 + /* 77 + * Called when reading from the regular PCI bridge 78 + * configuration space. Return PCI_BRIDGE_EMUL_HANDLED when the 79 + * operation has handled the read operation and filled in the 80 + * *value, or PCI_BRIDGE_EMUL_NOT_HANDLED when the read should 81 + * be emulated by the common code by reading from the 82 + * in-memory copy of the configuration space. 83 + */ 84 + pci_bridge_emul_read_status_t (*read_base)(struct pci_bridge_emul *bridge, 85 + int reg, u32 *value); 86 + 87 + /* 88 + * Same as ->read_base(), except it is for reading from the 89 + * PCIe capability configuration space. 90 + */ 91 + pci_bridge_emul_read_status_t (*read_pcie)(struct pci_bridge_emul *bridge, 92 + int reg, u32 *value); 93 + /* 94 + * Called when writing to the regular PCI bridge configuration 95 + * space. old is the current value, new is the new value being 96 + * written, and mask indicates which parts of the value are 97 + * being changed. 98 + */ 99 + void (*write_base)(struct pci_bridge_emul *bridge, int reg, 100 + u32 old, u32 new, u32 mask); 101 + 102 + /* 103 + * Same as ->write_base(), except it is for writing from the 104 + * PCIe capability configuration space. 105 + */ 106 + void (*write_pcie)(struct pci_bridge_emul *bridge, int reg, 107 + u32 old, u32 new, u32 mask); 108 + }; 109 + 110 + struct pci_bridge_emul { 111 + struct pci_bridge_emul_conf conf; 112 + struct pci_bridge_emul_pcie_conf pcie_conf; 113 + struct pci_bridge_emul_ops *ops; 114 + void *data; 115 + bool has_pcie; 116 + }; 117 + 118 + void pci_bridge_emul_init(struct pci_bridge_emul *bridge); 119 + int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where, 120 + int size, u32 *value); 121 + int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where, 122 + int size, u32 value); 123 + 124 + #endif /* __PCI_BRIDGE_EMUL_H__ */
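pci-bridge-emul.[ch] above give host controller drivers an in-memory root port config space (plus an optional PCIe capability) with per-register read-only/read-write/W1C semantics, and ops hooks for the few registers that must touch real hardware. A rough sketch of how a controller driver might wire it up, assuming made-up my_pcie names and trivial register handling:

#include <linux/pci.h>
#include "pci-bridge-emul.h"

struct my_pcie {
	struct pci_bridge_emul bridge;
	/* ... controller state ... */
};

/* Let the emulation handle everything except the Link Control register,
 * which a real driver would back with its controller's link registers. */
static pci_bridge_emul_read_status_t
my_pcie_read_pcie(struct pci_bridge_emul *bridge, int reg, u32 *value)
{
	if (reg == PCI_EXP_LNKCTL) {
		*value = 0;	/* read the controller's link state here */
		return PCI_BRIDGE_EMUL_HANDLED;
	}
	return PCI_BRIDGE_EMUL_NOT_HANDLED;
}

static struct pci_bridge_emul_ops my_pcie_ops = {
	.read_pcie = my_pcie_read_pcie,
};

static void my_pcie_setup_bridge(struct my_pcie *port)
{
	port->bridge.conf.vendor = 0x1234;	/* placeholder IDs */
	port->bridge.conf.device = 0x5678;
	port->bridge.has_pcie = true;
	port->bridge.data = port;
	port->bridge.ops = &my_pcie_ops;

	pci_bridge_emul_init(&port->bridge);
}

The controller's pci_ops accessors for the root bus then simply forward to pci_bridge_emul_conf_read() and pci_bridge_emul_conf_write() on &port->bridge.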
+97 -15
drivers/pci/pci.c
··· 35 35 #include <linux/aer.h> 36 36 #include "pci.h" 37 37 38 + DEFINE_MUTEX(pci_slot_mutex); 39 + 38 40 const char *pci_power_names[] = { 39 41 "error", "D0", "D1", "D2", "D3hot", "D3cold", "unknown", 40 42 }; ··· 198 196 /** 199 197 * pci_dev_str_match_path - test if a path string matches a device 200 198 * @dev: the PCI device to test 201 - * @p: string to match the device against 199 + * @path: string to match the device against 202 200 * @endptr: pointer to the string after the match 203 201 * 204 202 * Test if a string (typically from a kernel parameter) formatted as a ··· 793 791 return pci_platform_pm ? pci_platform_pm->need_resume(dev) : false; 794 792 } 795 793 794 + static inline bool platform_pci_bridge_d3(struct pci_dev *dev) 795 + { 796 + return pci_platform_pm ? pci_platform_pm->bridge_d3(dev) : false; 797 + } 798 + 796 799 /** 797 800 * pci_raw_set_power_state - Use PCI PM registers to set the power state of 798 801 * given PCI device ··· 1006 999 * because have already delayed for the bridge. 1007 1000 */ 1008 1001 if (dev->runtime_d3cold) { 1009 - if (dev->d3cold_delay) 1002 + if (dev->d3cold_delay && !dev->imm_ready) 1010 1003 msleep(dev->d3cold_delay); 1011 1004 /* 1012 1005 * When powering on a bridge from D3cold, the ··· 1291 1284 if (i != 0) 1292 1285 return i; 1293 1286 1287 + pci_save_dpc_state(dev); 1294 1288 return pci_save_vc_state(dev); 1295 1289 } 1296 1290 EXPORT_SYMBOL(pci_save_state); ··· 1397 1389 pci_restore_ats_state(dev); 1398 1390 pci_restore_vc_state(dev); 1399 1391 pci_restore_rebar_state(dev); 1392 + pci_restore_dpc_state(dev); 1400 1393 1401 1394 pci_cleanup_aer_error_status_regs(dev); 1402 1395 ··· 2153 2144 int ret = 0; 2154 2145 2155 2146 /* 2156 - * Bridges can only signal wakeup on behalf of subordinate devices, 2157 - * but that is set up elsewhere, so skip them. 2147 + * Bridges that are not power-manageable directly only signal 2148 + * wakeup on behalf of subordinate devices which is set up 2149 + * elsewhere, so skip them. However, bridges that are 2150 + * power-manageable may signal wakeup for themselves (for example, 2151 + * on a hotplug event) and they need to be covered here. 2158 2152 */ 2159 - if (pci_has_subordinate(dev)) 2153 + if (!pci_power_manageable(dev)) 2160 2154 return 0; 2161 2155 2162 2156 /* Don't do the same thing twice in a row for one device. 
*/ ··· 2534 2522 if (bridge->is_thunderbolt) 2535 2523 return true; 2536 2524 2525 + /* Platform might know better if the bridge supports D3 */ 2526 + if (platform_pci_bridge_d3(bridge)) 2527 + return true; 2528 + 2537 2529 /* 2538 2530 * Hotplug ports handled natively by the OS were not validated 2539 2531 * by vendors for runtime D3 at least until 2018 because there ··· 2671 2655 void pci_pm_init(struct pci_dev *dev) 2672 2656 { 2673 2657 int pm; 2658 + u16 status; 2674 2659 u16 pmc; 2675 2660 2676 2661 pm_runtime_forbid(&dev->dev); ··· 2734 2717 /* Disable the PME# generation functionality */ 2735 2718 pci_pme_active(dev, false); 2736 2719 } 2720 + 2721 + pci_read_config_word(dev, PCI_STATUS, &status); 2722 + if (status & PCI_STATUS_IMM_READY) 2723 + dev->imm_ready = 1; 2737 2724 } 2738 2725 2739 2726 static unsigned long pci_ea_flags(struct pci_dev *dev, u8 prop) ··· 4408 4387 4409 4388 pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_BCR_FLR); 4410 4389 4390 + if (dev->imm_ready) 4391 + return 0; 4392 + 4411 4393 /* 4412 4394 * Per PCIe r4.0, sec 6.6.2, a device must complete an FLR within 4413 4395 * 100ms, but may silently discard requests while the FLR is in ··· 4451 4427 pci_err(dev, "timed out waiting for pending transaction; performing AF function level reset anyway\n"); 4452 4428 4453 4429 pci_write_config_byte(dev, pos + PCI_AF_CTRL, PCI_AF_CTRL_FLR); 4430 + 4431 + if (dev->imm_ready) 4432 + return 0; 4454 4433 4455 4434 /* 4456 4435 * Per Advanced Capabilities for Conventional PCI ECN, 13 April 2006, ··· 4523 4496 bool ret; 4524 4497 u16 lnk_status; 4525 4498 4499 + /* 4500 + * Some controllers might not implement link active reporting. In this 4501 + * case, we wait for 1000 + 100 ms. 4502 + */ 4503 + if (!pdev->link_active_reporting) { 4504 + msleep(1100); 4505 + return true; 4506 + } 4507 + 4508 + /* 4509 + * PCIe r4.0 sec 6.6.1, a component must enter LTSSM Detect within 20ms, 4510 + * after which we should expect an link active if the reset was 4511 + * successful. If so, software must wait a minimum 100ms before sending 4512 + * configuration requests to devices downstream this port. 4513 + * 4514 + * If the link fails to activate, either the device was physically 4515 + * removed or the link is permanently failed. 4516 + */ 4517 + if (active) 4518 + msleep(20); 4526 4519 for (;;) { 4527 4520 pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnk_status); 4528 4521 ret = !!(lnk_status & PCI_EXP_LNKSTA_DLLLA); 4529 4522 if (ret == active) 4530 - return true; 4523 + break; 4531 4524 if (timeout <= 0) 4532 4525 break; 4533 4526 msleep(10); 4534 4527 timeout -= 10; 4535 4528 } 4536 - 4537 - pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n", 4538 - active ? "set" : "cleared"); 4539 - 4540 - return false; 4529 + if (active && ret) 4530 + msleep(100); 4531 + else if (ret != active) 4532 + pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n", 4533 + active ? 
"set" : "cleared"); 4534 + return ret == active; 4541 4535 } 4542 4536 4543 4537 void pci_reset_secondary_bus(struct pci_dev *dev) ··· 4630 4582 { 4631 4583 int rc = -ENOTTY; 4632 4584 4633 - if (!hotplug || !try_module_get(hotplug->ops->owner)) 4585 + if (!hotplug || !try_module_get(hotplug->owner)) 4634 4586 return rc; 4635 4587 4636 4588 if (hotplug->ops->reset_slot) 4637 4589 rc = hotplug->ops->reset_slot(hotplug, probe); 4638 4590 4639 - module_put(hotplug->ops->owner); 4591 + module_put(hotplug->owner); 4640 4592 4641 4593 return rc; 4642 4594 } ··· 5213 5165 } 5214 5166 5215 5167 /** 5168 + * pci_bus_error_reset - reset the bridge's subordinate bus 5169 + * @bridge: The parent device that connects to the bus to reset 5170 + * 5171 + * This function will first try to reset the slots on this bus if the method is 5172 + * available. If slot reset fails or is not available, this will fall back to a 5173 + * secondary bus reset. 5174 + */ 5175 + int pci_bus_error_reset(struct pci_dev *bridge) 5176 + { 5177 + struct pci_bus *bus = bridge->subordinate; 5178 + struct pci_slot *slot; 5179 + 5180 + if (!bus) 5181 + return -ENOTTY; 5182 + 5183 + mutex_lock(&pci_slot_mutex); 5184 + if (list_empty(&bus->slots)) 5185 + goto bus_reset; 5186 + 5187 + list_for_each_entry(slot, &bus->slots, list) 5188 + if (pci_probe_reset_slot(slot)) 5189 + goto bus_reset; 5190 + 5191 + list_for_each_entry(slot, &bus->slots, list) 5192 + if (pci_slot_reset(slot, 0)) 5193 + goto bus_reset; 5194 + 5195 + mutex_unlock(&pci_slot_mutex); 5196 + return 0; 5197 + bus_reset: 5198 + mutex_unlock(&pci_slot_mutex); 5199 + return pci_bus_reset(bridge->subordinate, 0); 5200 + } 5201 + 5202 + /** 5216 5203 * pci_probe_reset_bus - probe whether a PCI bus can be reset 5217 5204 * @bus: PCI bus to probe 5218 5205 * ··· 5784 5701 void pci_add_dma_alias(struct pci_dev *dev, u8 devfn) 5785 5702 { 5786 5703 if (!dev->dma_alias_mask) 5787 - dev->dma_alias_mask = kcalloc(BITS_TO_LONGS(U8_MAX), 5788 - sizeof(long), GFP_KERNEL); 5704 + dev->dma_alias_mask = bitmap_zalloc(U8_MAX, GFP_KERNEL); 5789 5705 if (!dev->dma_alias_mask) { 5790 5706 pci_warn(dev, "Unable to allocate DMA alias mask\n"); 5791 5707 return;
+71 -7
drivers/pci/pci.h
···
 
 int pci_probe_reset_function(struct pci_dev *dev);
 int pci_bridge_secondary_bus_reset(struct pci_dev *dev);
+int pci_bus_error_reset(struct pci_dev *dev);
 
 /**
  * struct pci_platform_pm_ops - Firmware PM callbacks
+ *
+ * @bridge_d3: Does the bridge allow entering into D3
  *
  * @is_manageable: returns 'true' if given device is power manageable by the
  *	platform firmware
···
  *	these callbacks are mandatory.
  */
 struct pci_platform_pm_ops {
+	bool (*bridge_d3)(struct pci_dev *dev);
 	bool (*is_manageable)(struct pci_dev *dev);
 	int (*set_state)(struct pci_dev *dev, pci_power_t state);
 	pci_power_t (*get_state)(struct pci_dev *dev);
···
 
 /* Lock for read/write access to pci device and bus lists */
 extern struct rw_semaphore pci_bus_sem;
+extern struct mutex pci_slot_mutex;
 
 extern raw_spinlock_t pci_lock;
 
···
 	u16		driver_max_VFs;	/* Max num VFs driver supports */
 	struct pci_dev	*dev;		/* Lowest numbered PF */
 	struct pci_dev	*self;		/* This PF */
+	u32		cfg_size;	/* VF config space size */
 	u32		class;		/* VF device */
 	u8		hdr_type;	/* VF header type */
 	u16		subsystem_vendor; /* VF subsystem vendor */
···
 	bool		drivers_autoprobe; /* Auto probing of VFs by driver */
 };
 
-/* pci_dev priv_flags */
-#define PCI_DEV_DISCONNECTED 0
-#define PCI_DEV_ADDED 1
+/**
+ * pci_dev_set_io_state - Set the new error state if possible.
+ *
+ * @dev - pci device to set new error_state
+ * @new - the state we want dev to be in
+ *
+ * Must be called with device_lock held.
+ *
+ * Returns true if state has been changed to the requested state.
+ */
+static inline bool pci_dev_set_io_state(struct pci_dev *dev,
+					pci_channel_state_t new)
+{
+	bool changed = false;
+
+	device_lock_assert(&dev->dev);
+	switch (new) {
+	case pci_channel_io_perm_failure:
+		switch (dev->error_state) {
+		case pci_channel_io_frozen:
+		case pci_channel_io_normal:
+		case pci_channel_io_perm_failure:
+			changed = true;
+			break;
+		}
+		break;
+	case pci_channel_io_frozen:
+		switch (dev->error_state) {
+		case pci_channel_io_frozen:
+		case pci_channel_io_normal:
+			changed = true;
+			break;
+		}
+		break;
+	case pci_channel_io_normal:
+		switch (dev->error_state) {
+		case pci_channel_io_frozen:
+		case pci_channel_io_normal:
+			changed = true;
+			break;
+		}
+		break;
+	}
+	if (changed)
+		dev->error_state = new;
+	return changed;
+}
 
 static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused)
 {
-	set_bit(PCI_DEV_DISCONNECTED, &dev->priv_flags);
+	device_lock(&dev->dev);
+	pci_dev_set_io_state(dev, pci_channel_io_perm_failure);
+	device_unlock(&dev->dev);
+
 	return 0;
 }
 
 static inline bool pci_dev_is_disconnected(const struct pci_dev *dev)
 {
-	return test_bit(PCI_DEV_DISCONNECTED, &dev->priv_flags);
+	return dev->error_state == pci_channel_io_perm_failure;
 }
+
+/* pci_dev priv_flags */
+#define PCI_DEV_ADDED 0
 
 static inline void pci_dev_assign_added(struct pci_dev *dev, bool added)
 {
···
 int aer_get_device_error_info(struct pci_dev *dev, struct aer_err_info *info);
 void aer_print_error(struct pci_dev *dev, struct aer_err_info *info);
 #endif	/* CONFIG_PCIEAER */
+
+#ifdef CONFIG_PCIE_DPC
+void pci_save_dpc_state(struct pci_dev *dev);
+void pci_restore_dpc_state(struct pci_dev *dev);
+#else
+static inline void pci_save_dpc_state(struct pci_dev *dev) {}
+static inline void pci_restore_dpc_state(struct pci_dev *dev) {}
+#endif
 
 #ifdef CONFIG_PCI_ATS
 void pci_restore_ats_state(struct pci_dev *dev);
···
 #endif
 
 /* PCI error reporting and recovery */
-void pcie_do_fatal_recovery(struct pci_dev *dev, u32 service);
-void pcie_do_nonfatal_recovery(struct pci_dev *dev);
+void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state,
+		      u32 service);
 
 bool pcie_wait_for_link(struct pci_dev *pdev, bool active);
 #ifdef CONFIG_PCIEASPM
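The pci_dev_set_io_state() helper above is a small one-way state machine: normal and frozen may replace each other, while perm_failure is terminal and can never be left. A minimal userspace model of the same transition rules (the harness itself is illustrative, not kernel code):

	/* Userspace model of pci_dev_set_io_state(): normal <-> frozen are
	 * interchangeable, perm_failure is a sink state.
	 * Build with: cc -o io_state io_state.c
	 */
	#include <stdbool.h>
	#include <stdio.h>

	enum ch_state { io_normal, io_frozen, io_perm_failure };

	static bool set_io_state(enum ch_state *cur, enum ch_state new)
	{
		bool changed = false;

		switch (new) {
		case io_perm_failure:
			changed = true;			/* reachable from any state */
			break;
		case io_frozen:
		case io_normal:
			changed = (*cur != io_perm_failure); /* never leave perm_failure */
			break;
		}
		if (changed)
			*cur = new;
		return changed;
	}

	int main(void)
	{
		enum ch_state s = io_normal;

		printf("normal->frozen: %d\n", set_io_state(&s, io_frozen));       /* 1 */
		printf("frozen->perm:   %d\n", set_io_state(&s, io_perm_failure)); /* 1 */
		printf("perm->normal:   %d\n", set_io_state(&s, io_normal));       /* 0 */
		return 0;
	}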
-4
drivers/pci/pcie/Kconfig
···
 config PCIEAER_INJECT
 	tristate "PCI Express error injection support"
 	depends on PCIEAER
-	default n
 	help
 	  This enables PCI Express Root Port Advanced Error Reporting
 	  (AER) software error injector.
···
 config PCIEASPM_DEBUG
 	bool "Debug PCI Express ASPM"
 	depends on PCIEASPM
-	default n
 	help
 	  This enables PCI Express ASPM debug support. It will add per-device
 	  interface to control ASPM.
···
 config PCIE_DPC
 	bool "PCI Express Downstream Port Containment support"
 	depends on PCIEPORTBUS && PCIEAER
-	default n
 	help
 	  This enables PCI Express Downstream Port Containment (DPC)
 	  driver support. DPC events from Root and Downstream ports
···
 config PCIE_PTM
 	bool "PCI Express Precision Time Measurement support"
-	default n
 	depends on PCIEPORTBUS
 	help
 	  This enables PCI Express Precision Time Measurement (PTM)
+60 -179
drivers/pci/pcie/aer.c
···
 #include "../pci.h"
 #include "portdrv.h"
 
-#define AER_ERROR_SOURCES_MAX		100
+#define AER_ERROR_SOURCES_MAX		128
 
 #define AER_MAX_TYPEOF_COR_ERRS		16	/* as per PCI_ERR_COR_STATUS */
 #define AER_MAX_TYPEOF_UNCOR_ERRS	26	/* as per PCI_ERR_UNCOR_STATUS*/
···
 struct aer_rpc {
 	struct pci_dev *rpd;		/* Root Port device */
-	struct work_struct dpc_handler;
-	struct aer_err_source e_sources[AER_ERROR_SOURCES_MAX];
-	struct aer_err_info e_info;
-	unsigned short prod_idx;	/* Error Producer Index */
-	unsigned short cons_idx;	/* Error Consumer Index */
-	int isr;
-	spinlock_t e_lock;		/*
-					 * Lock access to Error Status/ID Regs
-					 * and error producer/consumer index
-					 */
-	struct mutex rpc_mutex;		/*
-					 * only one thread could do
-					 * recovery on the same
-					 * root port hierarchy
-					 */
+	DECLARE_KFIFO(aer_fifo, struct aer_err_source, AER_ERROR_SOURCES_MAX);
 };
 
 /* AER stats for the device */
···
 static int add_error_device(struct aer_err_info *e_info, struct pci_dev *dev)
 {
 	if (e_info->error_dev_num < AER_MAX_MULTI_ERR_DEVICES) {
-		e_info->dev[e_info->error_dev_num] = dev;
+		e_info->dev[e_info->error_dev_num] = pci_dev_get(dev);
 		e_info->error_dev_num++;
 		return 0;
 	}
···
 			info->status);
 		pci_aer_clear_device_status(dev);
 	} else if (info->severity == AER_NONFATAL)
-		pcie_do_nonfatal_recovery(dev);
+		pcie_do_recovery(dev, pci_channel_io_normal,
+				 PCIE_PORT_SERVICE_AER);
 	else if (info->severity == AER_FATAL)
-		pcie_do_fatal_recovery(dev, PCIE_PORT_SERVICE_AER);
+		pcie_do_recovery(dev, pci_channel_io_frozen,
+				 PCIE_PORT_SERVICE_AER);
+	pci_dev_put(dev);
 }
 
 #ifdef CONFIG_ACPI_APEI_PCIEAER
···
 	}
 	cper_print_aer(pdev, entry.severity, entry.regs);
 	if (entry.severity == AER_NONFATAL)
-		pcie_do_nonfatal_recovery(pdev);
+		pcie_do_recovery(pdev, pci_channel_io_normal,
+				 PCIE_PORT_SERVICE_AER);
 	else if (entry.severity == AER_FATAL)
-		pcie_do_fatal_recovery(pdev, PCIE_PORT_SERVICE_AER);
+		pcie_do_recovery(pdev, pci_channel_io_frozen,
+				 PCIE_PORT_SERVICE_AER);
 	pci_dev_put(pdev);
 }
···
 void aer_recover_queue(int domain, unsigned int bus, unsigned int devfn,
 		       int severity, struct aer_capability_regs *aer_regs)
 {
-	unsigned long flags;
 	struct aer_recover_entry entry = {
 		.bus		= bus,
 		.devfn		= devfn,
···
 		.regs		= aer_regs,
 	};
 
-	spin_lock_irqsave(&aer_recover_ring_lock, flags);
-	if (kfifo_put(&aer_recover_ring, entry))
+	if (kfifo_in_spinlocked(&aer_recover_ring, &entry, sizeof(entry),
+				&aer_recover_ring_lock))
 		schedule_work(&aer_recover_work);
 	else
 		pr_err("AER recover: Buffer overflow when recovering AER for %04x:%02x:%02x:%x\n",
 		       domain, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
-	spin_unlock_irqrestore(&aer_recover_ring_lock, flags);
 }
 EXPORT_SYMBOL_GPL(aer_recover_queue);
 #endif
···
 				      &info->mask);
 		if (!(info->status & ~info->mask))
 			return 0;
-	} else if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
-		   info->severity == AER_NONFATAL) {
+	} else if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
+		   pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
+		   info->severity == AER_NONFATAL) {
 
 		/* Link is still healthy for IO reads */
 		pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS,
···
 				  struct aer_err_source *e_src)
 {
 	struct pci_dev *pdev = rpc->rpd;
-	struct aer_err_info *e_info = &rpc->e_info;
+	struct aer_err_info e_info;
 
 	pci_rootport_aer_stats_incr(pdev, e_src);
 
···
 	 * uncorrectable error being logged. Report correctable error first.
 	 */
 	if (e_src->status & PCI_ERR_ROOT_COR_RCV) {
-		e_info->id = ERR_COR_ID(e_src->id);
-		e_info->severity = AER_CORRECTABLE;
+		e_info.id = ERR_COR_ID(e_src->id);
+		e_info.severity = AER_CORRECTABLE;
 
 		if (e_src->status & PCI_ERR_ROOT_MULTI_COR_RCV)
-			e_info->multi_error_valid = 1;
+			e_info.multi_error_valid = 1;
 		else
-			e_info->multi_error_valid = 0;
-		aer_print_port_info(pdev, e_info);
+			e_info.multi_error_valid = 0;
+		aer_print_port_info(pdev, &e_info);
 
-		if (find_source_device(pdev, e_info))
-			aer_process_err_devices(e_info);
+		if (find_source_device(pdev, &e_info))
+			aer_process_err_devices(&e_info);
 	}
 
 	if (e_src->status & PCI_ERR_ROOT_UNCOR_RCV) {
-		e_info->id = ERR_UNCOR_ID(e_src->id);
+		e_info.id = ERR_UNCOR_ID(e_src->id);
 
 		if (e_src->status & PCI_ERR_ROOT_FATAL_RCV)
-			e_info->severity = AER_FATAL;
+			e_info.severity = AER_FATAL;
 		else
-			e_info->severity = AER_NONFATAL;
+			e_info.severity = AER_NONFATAL;
 
 		if (e_src->status & PCI_ERR_ROOT_MULTI_UNCOR_RCV)
-			e_info->multi_error_valid = 1;
+			e_info.multi_error_valid = 1;
 		else
-			e_info->multi_error_valid = 0;
+			e_info.multi_error_valid = 0;
 
-		aer_print_port_info(pdev, e_info);
+		aer_print_port_info(pdev, &e_info);
 
-		if (find_source_device(pdev, e_info))
-			aer_process_err_devices(e_info);
+		if (find_source_device(pdev, &e_info))
+			aer_process_err_devices(&e_info);
 	}
-}
-
-/**
- * get_e_source - retrieve an error source
- * @rpc: pointer to the root port which holds an error
- * @e_src: pointer to store retrieved error source
- *
- * Return 1 if an error source is retrieved, otherwise 0.
- *
- * Invoked by DPC handler to consume an error.
- */
-static int get_e_source(struct aer_rpc *rpc, struct aer_err_source *e_src)
-{
-	unsigned long flags;
-
-	/* Lock access to Root error producer/consumer index */
-	spin_lock_irqsave(&rpc->e_lock, flags);
-	if (rpc->prod_idx == rpc->cons_idx) {
-		spin_unlock_irqrestore(&rpc->e_lock, flags);
-		return 0;
-	}
-
-	*e_src = rpc->e_sources[rpc->cons_idx];
-	rpc->cons_idx++;
-	if (rpc->cons_idx == AER_ERROR_SOURCES_MAX)
-		rpc->cons_idx = 0;
-	spin_unlock_irqrestore(&rpc->e_lock, flags);
-
-	return 1;
-}
 }
 
 /**
···
  *
  * Invoked, as DPC, when root port records new detected error
  */
-static void aer_isr(struct work_struct *work)
+static irqreturn_t aer_isr(int irq, void *context)
 {
-	struct aer_rpc *rpc = container_of(work, struct aer_rpc, dpc_handler);
+	struct pcie_device *dev = (struct pcie_device *)context;
+	struct aer_rpc *rpc = get_service_data(dev);
 	struct aer_err_source uninitialized_var(e_src);
 
-	mutex_lock(&rpc->rpc_mutex);
-	while (get_e_source(rpc, &e_src))
+	if (kfifo_is_empty(&rpc->aer_fifo))
+		return IRQ_NONE;
+
+	while (kfifo_get(&rpc->aer_fifo, &e_src))
 		aer_isr_one_error(rpc, &e_src);
-	mutex_unlock(&rpc->rpc_mutex);
+	return IRQ_HANDLED;
 }
 
 /**
···
  *
  * Invoked when Root Port detects AER messages.
  */
-irqreturn_t aer_irq(int irq, void *context)
+static irqreturn_t aer_irq(int irq, void *context)
 {
-	unsigned int status, id;
 	struct pcie_device *pdev = (struct pcie_device *)context;
 	struct aer_rpc *rpc = get_service_data(pdev);
-	int next_prod_idx;
-	unsigned long flags;
-	int pos;
+	struct pci_dev *rp = rpc->rpd;
+	struct aer_err_source e_src = {};
+	int pos = rp->aer_cap;
 
-	pos = pdev->port->aer_cap;
-	/*
-	 * Must lock access to Root Error Status Reg, Root Error ID Reg,
-	 * and Root error producer/consumer index
-	 */
-	spin_lock_irqsave(&rpc->e_lock, flags);
-
-	/* Read error status */
-	pci_read_config_dword(pdev->port, pos + PCI_ERR_ROOT_STATUS, &status);
-	if (!(status & (PCI_ERR_ROOT_UNCOR_RCV|PCI_ERR_ROOT_COR_RCV))) {
-		spin_unlock_irqrestore(&rpc->e_lock, flags);
+	pci_read_config_dword(rp, pos + PCI_ERR_ROOT_STATUS, &e_src.status);
+	if (!(e_src.status & (PCI_ERR_ROOT_UNCOR_RCV|PCI_ERR_ROOT_COR_RCV)))
 		return IRQ_NONE;
-	}
 
-	/* Read error source and clear error status */
-	pci_read_config_dword(pdev->port, pos + PCI_ERR_ROOT_ERR_SRC, &id);
-	pci_write_config_dword(pdev->port, pos + PCI_ERR_ROOT_STATUS, status);
+	pci_read_config_dword(rp, pos + PCI_ERR_ROOT_ERR_SRC, &e_src.id);
+	pci_write_config_dword(rp, pos + PCI_ERR_ROOT_STATUS, e_src.status);
 
-	/* Store error source for later DPC handler */
-	next_prod_idx = rpc->prod_idx + 1;
-	if (next_prod_idx == AER_ERROR_SOURCES_MAX)
-		next_prod_idx = 0;
-	if (next_prod_idx == rpc->cons_idx) {
-		/*
-		 * Error Storm Condition - possibly the same error occurred.
-		 * Drop the error.
-		 */
-		spin_unlock_irqrestore(&rpc->e_lock, flags);
+	if (!kfifo_put(&rpc->aer_fifo, e_src))
 		return IRQ_HANDLED;
-	}
-	rpc->e_sources[rpc->prod_idx].status = status;
-	rpc->e_sources[rpc->prod_idx].id = id;
-	rpc->prod_idx = next_prod_idx;
-	spin_unlock_irqrestore(&rpc->e_lock, flags);
 
-	/* Invoke DPC handler */
-	schedule_work(&rpc->dpc_handler);
-
-	return IRQ_HANDLED;
+	return IRQ_WAKE_THREAD;
 }
-EXPORT_SYMBOL_GPL(aer_irq);
 
 static int set_device_error_reporting(struct pci_dev *dev, void *data)
 {
···
 }
 
 /**
- * aer_alloc_rpc - allocate Root Port data structure
- * @dev: pointer to the pcie_dev data structure
- *
- * Invoked when Root Port's AER service is loaded.
- */
-static struct aer_rpc *aer_alloc_rpc(struct pcie_device *dev)
-{
-	struct aer_rpc *rpc;
-
-	rpc = kzalloc(sizeof(struct aer_rpc), GFP_KERNEL);
-	if (!rpc)
-		return NULL;
-
-	/* Initialize Root lock access, e_lock, to Root Error Status Reg */
-	spin_lock_init(&rpc->e_lock);
-
-	rpc->rpd = dev->port;
-	INIT_WORK(&rpc->dpc_handler, aer_isr);
-	mutex_init(&rpc->rpc_mutex);
-
-	/* Use PCIe bus function to store rpc into PCIe device */
-	set_service_data(dev, rpc);
-
-	return rpc;
-}
-
-/**
  * aer_remove - clean up resources
  * @dev: pointer to the pcie_dev data structure
  *
···
 {
 	struct aer_rpc *rpc = get_service_data(dev);
 
-	if (rpc) {
-		/* If register interrupt service, it must be free. */
-		if (rpc->isr)
-			free_irq(dev->irq, dev);
-
-		flush_work(&rpc->dpc_handler);
-		aer_disable_rootport(rpc);
-		kfree(rpc);
-		set_service_data(dev, NULL);
-	}
+	aer_disable_rootport(rpc);
 }
 
 /**
···
 {
 	int status;
 	struct aer_rpc *rpc;
-	struct device *device = &dev->port->dev;
+	struct device *device = &dev->device;
 
-	/* Alloc rpc data structure */
-	rpc = aer_alloc_rpc(dev);
+	rpc = devm_kzalloc(device, sizeof(struct aer_rpc), GFP_KERNEL);
 	if (!rpc) {
 		dev_printk(KERN_DEBUG, device, "alloc AER rpc failed\n");
-		aer_remove(dev);
 		return -ENOMEM;
 	}
+	rpc->rpd = dev->port;
+	set_service_data(dev, rpc);
 
-	/* Request IRQ ISR */
-	status = request_irq(dev->irq, aer_irq, IRQF_SHARED, "aerdrv", dev);
+	status = devm_request_threaded_irq(device, dev->irq, aer_irq, aer_isr,
+					   IRQF_SHARED, "aerdrv", dev);
 	if (status) {
 		dev_printk(KERN_DEBUG, device, "request AER IRQ %d failed\n",
 			   dev->irq);
-		aer_remove(dev);
 		return status;
 	}
-
-	rpc->isr = 1;
 
 	aer_enable_rootport(rpc);
 	dev_info(device, "AER enabled with IRQ %d\n", dev->irq);
···
 	reg32 &= ~ROOT_PORT_INTR_ON_MESG_MASK;
 	pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32);
 
-	rc = pci_bridge_secondary_bus_reset(dev);
+	rc = pci_bus_error_reset(dev);
 	pci_printk(KERN_DEBUG, dev, "Root Port link has been reset\n");
 
 	/* Clear Root Error Status */
···
 	return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
 }
 
-/**
- * aer_error_resume - clean up corresponding error status bits
- * @dev: pointer to Root Port's pci_dev data structure
- *
- * Invoked by Port Bus driver during nonfatal recovery.
- */
-static void aer_error_resume(struct pci_dev *dev)
-{
-	pci_aer_clear_device_status(dev);
-	pci_cleanup_aer_uncorrect_error_status(dev);
-}
-
 static struct pcie_port_service_driver aerdriver = {
 	.name		= "aer",
 	.port_type	= PCI_EXP_TYPE_ROOT_PORT,
···
 
 	.probe		= aer_probe,
 	.remove		= aer_remove,
-	.error_resume	= aer_error_resume,
 	.reset_link	= aer_root_reset,
 };
 
···
  *
  * Invoked when AER root service driver is loaded.
  */
-static int __init aer_service_init(void)
+int __init pcie_aer_init(void)
 {
 	if (!pci_aer_available() || aer_acpi_firmware_first())
 		return -ENXIO;
 	return pcie_port_service_register(&aerdriver);
 }
-device_initcall(aer_service_init);
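The AER rework above replaces a hand-rolled, spinlock-guarded ring with a kfifo filled from the hard IRQ and drained by a threaded handler; kfifo needs no lock for exactly this single-producer/single-consumer split. A condensed sketch of the pattern follows. It is a module fragment, not a drop-in driver: it assumes a valid shared IRQ line and a kfifo initialized with INIT_KFIFO() at probe time, the same way aer_probe() does:

	/* Hardirq-producer / threaded-consumer kfifo sketch. Register with:
	 *   devm_request_threaded_irq(dev, irq, demo_irq, demo_thread,
	 *                             IRQF_SHARED, "demo", data);
	 * and call INIT_KFIFO(evt_fifo) once before the IRQ can fire.
	 */
	#include <linux/interrupt.h>
	#include <linux/kfifo.h>
	#include <linux/types.h>

	struct evt { u32 status; u32 id; };

	static DECLARE_KFIFO(evt_fifo, struct evt, 128); /* size: power of two */

	static irqreturn_t demo_irq(int irq, void *data)
	{
		struct evt e = { .status = 1 };

		/* hardirq context: only enqueue, never process */
		if (!kfifo_put(&evt_fifo, e))	/* full: drop, but ack the IRQ */
			return IRQ_HANDLED;
		return IRQ_WAKE_THREAD;		/* defer work to thread context */
	}

	static irqreturn_t demo_thread(int irq, void *data)
	{
		struct evt e;

		/* thread context: drain everything the hardirq queued */
		while (kfifo_get(&evt_fifo, &e))
			pr_info("event status=%u id=%u\n", e.status, e.id);
		return IRQ_HANDLED;
	}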
+46 -50
drivers/pci/pcie/aer_inject.c
···
 
 #include <linux/module.h>
 #include <linux/init.h>
+#include <linux/irq.h>
 #include <linux/miscdevice.h>
 #include <linux/pci.h>
 #include <linux/slab.h>
···
 	return target;
 }
 
+static int aer_inj_read(struct pci_bus *bus, unsigned int devfn, int where,
+			int size, u32 *val)
+{
+	struct pci_ops *ops, *my_ops;
+	int rv;
+
+	ops = __find_pci_bus_ops(bus);
+	if (!ops)
+		return -1;
+
+	my_ops = bus->ops;
+	bus->ops = ops;
+	rv = ops->read(bus, devfn, where, size, val);
+	bus->ops = my_ops;
+
+	return rv;
+}
+
+static int aer_inj_write(struct pci_bus *bus, unsigned int devfn, int where,
+			 int size, u32 val)
+{
+	struct pci_ops *ops, *my_ops;
+	int rv;
+
+	ops = __find_pci_bus_ops(bus);
+	if (!ops)
+		return -1;
+
+	my_ops = bus->ops;
+	bus->ops = ops;
+	rv = ops->write(bus, devfn, where, size, val);
+	bus->ops = my_ops;
+
+	return rv;
+}
+
 static int aer_inj_read_config(struct pci_bus *bus, unsigned int devfn,
 			       int where, int size, u32 *val)
 {
 	u32 *sim;
 	struct aer_error *err;
 	unsigned long flags;
-	struct pci_ops *ops;
-	struct pci_ops *my_ops;
 	int domain;
 	int rv;
 
···
 		return 0;
 	}
 out:
-	ops = __find_pci_bus_ops(bus);
-	/*
-	 * pci_lock must already be held, so we can directly
-	 * manipulate bus->ops.  Many config access functions,
-	 * including pci_generic_config_read() require the original
-	 * bus->ops be installed to function, so temporarily put them
-	 * back.
-	 */
-	my_ops = bus->ops;
-	bus->ops = ops;
-	rv = ops->read(bus, devfn, where, size, val);
-	bus->ops = my_ops;
+	rv = aer_inj_read(bus, devfn, where, size, val);
 	spin_unlock_irqrestore(&inject_lock, flags);
 	return rv;
 }
···
 	struct aer_error *err;
 	unsigned long flags;
 	int rw1cs;
-	struct pci_ops *ops;
-	struct pci_ops *my_ops;
 	int domain;
 	int rv;
 
···
 		return 0;
 	}
 out:
-	ops = __find_pci_bus_ops(bus);
-	/*
-	 * pci_lock must already be held, so we can directly
-	 * manipulate bus->ops.  Many config access functions,
-	 * including pci_generic_config_write() require the original
-	 * bus->ops be installed to function, so temporarily put them
-	 * back.
-	 */
-	my_ops = bus->ops;
-	bus->ops = ops;
-	rv = ops->write(bus, devfn, where, size, val);
-	bus->ops = my_ops;
+	rv = aer_inj_write(bus, devfn, where, size, val);
 	spin_unlock_irqrestore(&inject_lock, flags);
 	return rv;
 }
···
 	return 0;
 }
 
-static int find_aer_device_iter(struct device *device, void *data)
-{
-	struct pcie_device **result = data;
-	struct pcie_device *pcie_dev;
-
-	if (device->bus == &pcie_port_bus_type) {
-		pcie_dev = to_pcie_device(device);
-		if (pcie_dev->service & PCIE_PORT_SERVICE_AER) {
-			*result = pcie_dev;
-			return 1;
-		}
-	}
-	return 0;
-}
-
-static int find_aer_device(struct pci_dev *dev, struct pcie_device **result)
-{
-	return device_for_each_child(&dev->dev, result, find_aer_device_iter);
-}
-
 static int aer_inject(struct aer_error_inj *einj)
 {
 	struct aer_error *err, *rperr;
 	struct aer_error *err_alloc = NULL, *rperr_alloc = NULL;
 	struct pci_dev *dev, *rpdev;
 	struct pcie_device *edev;
+	struct device *device;
 	unsigned long flags;
 	unsigned int devfn = PCI_DEVFN(einj->dev, einj->fn);
 	int pos_cap_err, rp_pos_cap_err;
···
 	if (ret)
 		goto out_put;
 
-	if (find_aer_device(rpdev, &edev)) {
+	device = pcie_port_find_device(rpdev, PCIE_PORT_SERVICE_AER);
+	if (device) {
+		edev = to_pcie_device(device);
 		if (!get_service_data(edev)) {
 			dev_warn(&edev->device,
 				 "aer_inject: AER service is not initialized\n");
···
 		dev_info(&edev->device,
 			 "aer_inject: Injecting errors %08x/%08x into device %s\n",
 			 einj->cor_status, einj->uncor_status, pci_name(dev));
-		aer_irq(-1, edev);
+		local_irq_disable();
+		generic_handle_irq(edev->irq);
+		local_irq_enable();
 	} else {
 		pci_err(rpdev, "aer_inject: AER device not found\n");
 		ret = -ENODEV;
+2 -2
drivers/pci/pcie/aspm.c
···
 	struct pcie_link_state *link;
 	int blacklist = !!pcie_aspm_sanity_check(pdev);
 
-	if (!aspm_support_enabled)
+	if (!aspm_support_enabled || aspm_disabled)
 		return;
 
 	if (pdev->link_state)
···
 	 * All PCIe functions are in one slot, remove one function will remove
 	 * the whole slot, so just wait until we are the last function left.
 	 */
-	if (!list_is_last(&pdev->bus_list, &parent->subordinate->devices))
+	if (!list_empty(&parent->subordinate->devices))
 		goto out;
 
 	link = parent->link_state;
+61 -11
drivers/pci/pcie/dpc.c
···
 	"Memory Request Completion Timeout",		 /* Bit Position 18 */
 };
 
+static struct dpc_dev *to_dpc_dev(struct pci_dev *dev)
+{
+	struct device *device;
+
+	device = pcie_port_find_device(dev, PCIE_PORT_SERVICE_DPC);
+	if (!device)
+		return NULL;
+	return get_service_data(to_pcie_device(device));
+}
+
+void pci_save_dpc_state(struct pci_dev *dev)
+{
+	struct dpc_dev *dpc;
+	struct pci_cap_saved_state *save_state;
+	u16 *cap;
+
+	if (!pci_is_pcie(dev))
+		return;
+
+	dpc = to_dpc_dev(dev);
+	if (!dpc)
+		return;
+
+	save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_DPC);
+	if (!save_state)
+		return;
+
+	cap = (u16 *)&save_state->cap.data[0];
+	pci_read_config_word(dev, dpc->cap_pos + PCI_EXP_DPC_CTL, cap);
+}
+
+void pci_restore_dpc_state(struct pci_dev *dev)
+{
+	struct dpc_dev *dpc;
+	struct pci_cap_saved_state *save_state;
+	u16 *cap;
+
+	if (!pci_is_pcie(dev))
+		return;
+
+	dpc = to_dpc_dev(dev);
+	if (!dpc)
+		return;
+
+	save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_DPC);
+	if (!save_state)
+		return;
+
+	cap = (u16 *)&save_state->cap.data[0];
+	pci_write_config_word(dev, dpc->cap_pos + PCI_EXP_DPC_CTL, *cap);
+}
+
 static int dpc_wait_rp_inactive(struct dpc_dev *dpc)
 {
 	unsigned long timeout = jiffies + HZ;
···
 static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
 {
 	struct dpc_dev *dpc;
-	struct pcie_device *pciedev;
-	struct device *devdpc;
-
 	u16 cap;
 
 	/*
 	 * DPC disables the Link automatically in hardware, so it has
 	 * already been reset by the time we get here.
 	 */
-	devdpc = pcie_port_find_device(pdev, PCIE_PORT_SERVICE_DPC);
-	pciedev = to_pcie_device(devdpc);
-	dpc = get_service_data(pciedev);
+	dpc = to_dpc_dev(pdev);
 	cap = dpc->cap_pos;
 
 	/*
···
 	pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
 			      PCI_EXP_DPC_STATUS_TRIGGER);
 
+	if (!pcie_wait_for_link(pdev, true))
+		return PCI_ERS_RESULT_DISCONNECT;
+
 	return PCI_ERS_RESULT_RECOVERED;
 }
-
 
 static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
 {
···
 
 	reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN) >> 1;
 	ext_reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT) >> 5;
-	dev_warn(dev, "DPC %s detected, remove downstream devices\n",
+	dev_warn(dev, "DPC %s detected\n",
 		 (reason == 0) ? "unmasked uncorrectable error" :
 		 (reason == 1) ? "ERR_NONFATAL" :
 		 (reason == 2) ? "ERR_FATAL" :
···
 	}
 
 	/* We configure DPC so it only triggers on ERR_FATAL */
-	pcie_do_fatal_recovery(pdev, PCIE_PORT_SERVICE_DPC);
+	pcie_do_recovery(pdev, pci_channel_io_frozen, PCIE_PORT_SERVICE_DPC);
 
 	return IRQ_HANDLED;
 }
···
 		 FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP),
 		 FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), dpc->rp_log_size,
 		 FLAG(cap, PCI_EXP_DPC_CAP_DL_ACTIVE));
+
+	pci_add_ext_cap_save_buffer(pdev, PCI_EXT_CAP_ID_DPC, sizeof(u16));
 	return status;
 }
···
 	.reset_link	= dpc_reset_link,
 };
 
-static int __init dpc_service_init(void)
+int __init pcie_dpc_init(void)
 {
 	return pcie_port_service_register(&dpcdriver);
 }
-device_initcall(dpc_service_init);
+68 -213
drivers/pci/pcie/err.c
···
 
 #include <linux/pci.h>
 #include <linux/module.h>
-#include <linux/pci.h>
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/aer.h>
 #include "portdrv.h"
 #include "../pci.h"
-
-struct aer_broadcast_data {
-	enum pci_channel_state state;
-	enum pci_ers_result result;
-};
 
 static pci_ers_result_t merge_result(enum pci_ers_result orig,
 				  enum pci_ers_result new)
···
 	return orig;
 }
 
-static int report_error_detected(struct pci_dev *dev, void *data)
+static int report_error_detected(struct pci_dev *dev,
+				 enum pci_channel_state state,
+				 enum pci_ers_result *result)
 {
 	pci_ers_result_t vote;
 	const struct pci_error_handlers *err_handler;
-	struct aer_broadcast_data *result_data;
-
-	result_data = (struct aer_broadcast_data *) data;
 
 	device_lock(&dev->dev);
-	dev->error_state = result_data->state;
-
-	if (!dev->driver ||
+	if (!pci_dev_set_io_state(dev, state) ||
+	    !dev->driver ||
 	    !dev->driver->err_handler ||
 	    !dev->driver->err_handler->error_detected) {
-		if (result_data->state == pci_channel_io_frozen &&
-		    dev->hdr_type != PCI_HEADER_TYPE_BRIDGE) {
-			/*
-			 * In case of fatal recovery, if one of down-
-			 * stream device has no driver. We might be
-			 * unable to recover because a later insmod
-			 * of a driver for this device is unaware of
-			 * its hw state.
-			 */
-			pci_printk(KERN_DEBUG, dev, "device has %s\n",
-				   dev->driver ?
-				   "no AER-aware driver" : "no driver");
-		}
-
 		/*
-		 * If there's any device in the subtree that does not
-		 * have an error_detected callback, returning
-		 * PCI_ERS_RESULT_NO_AER_DRIVER prevents calling of
-		 * the subsequent mmio_enabled/slot_reset/resume
-		 * callbacks of "any" device in the subtree. All the
-		 * devices in the subtree are left in the error state
-		 * without recovery.
+		 * If any device in the subtree does not have an error_detected
+		 * callback, PCI_ERS_RESULT_NO_AER_DRIVER prevents subsequent
+		 * error callbacks of "any" device in the subtree, and will
+		 * exit in the disconnected error state.
 		 */
-
 		if (dev->hdr_type != PCI_HEADER_TYPE_BRIDGE)
 			vote = PCI_ERS_RESULT_NO_AER_DRIVER;
 		else
 			vote = PCI_ERS_RESULT_NONE;
 	} else {
 		err_handler = dev->driver->err_handler;
-		vote = err_handler->error_detected(dev, result_data->state);
-		pci_uevent_ers(dev, PCI_ERS_RESULT_NONE);
+		vote = err_handler->error_detected(dev, state);
 	}
-
-	result_data->result = merge_result(result_data->result, vote);
+	pci_uevent_ers(dev, vote);
+	*result = merge_result(*result, vote);
 	device_unlock(&dev->dev);
 	return 0;
 }
 
+static int report_frozen_detected(struct pci_dev *dev, void *data)
+{
+	return report_error_detected(dev, pci_channel_io_frozen, data);
+}
+
+static int report_normal_detected(struct pci_dev *dev, void *data)
+{
+	return report_error_detected(dev, pci_channel_io_normal, data);
+}
+
 static int report_mmio_enabled(struct pci_dev *dev, void *data)
 {
-	pci_ers_result_t vote;
+	pci_ers_result_t vote, *result = data;
 	const struct pci_error_handlers *err_handler;
-	struct aer_broadcast_data *result_data;
-
-	result_data = (struct aer_broadcast_data *) data;
 
 	device_lock(&dev->dev);
 	if (!dev->driver ||
···
 
 	err_handler = dev->driver->err_handler;
 	vote = err_handler->mmio_enabled(dev);
-	result_data->result = merge_result(result_data->result, vote);
+	*result = merge_result(*result, vote);
 out:
 	device_unlock(&dev->dev);
 	return 0;
···
 
 static int report_slot_reset(struct pci_dev *dev, void *data)
 {
-	pci_ers_result_t vote;
+	pci_ers_result_t vote, *result = data;
 	const struct pci_error_handlers *err_handler;
-	struct aer_broadcast_data *result_data;
-
-	result_data = (struct aer_broadcast_data *) data;
 
 	device_lock(&dev->dev);
 	if (!dev->driver ||
···
 
 	err_handler = dev->driver->err_handler;
 	vote = err_handler->slot_reset(dev);
-	result_data->result = merge_result(result_data->result, vote);
+	*result = merge_result(*result, vote);
 out:
 	device_unlock(&dev->dev);
 	return 0;
···
 	const struct pci_error_handlers *err_handler;
 
 	device_lock(&dev->dev);
-	dev->error_state = pci_channel_io_normal;
-
-	if (!dev->driver ||
+	if (!pci_dev_set_io_state(dev, pci_channel_io_normal) ||
+	    !dev->driver ||
 	    !dev->driver->err_handler ||
 	    !dev->driver->err_handler->resume)
 		goto out;
 
 	err_handler = dev->driver->err_handler;
 	err_handler->resume(dev);
-	pci_uevent_ers(dev, PCI_ERS_RESULT_RECOVERED);
 out:
+	pci_uevent_ers(dev, PCI_ERS_RESULT_RECOVERED);
 	device_unlock(&dev->dev);
 	return 0;
 }
···
 {
 	int rc;
 
-	rc = pci_bridge_secondary_bus_reset(dev);
+	rc = pci_bus_error_reset(dev);
 	pci_printk(KERN_DEBUG, dev, "downstream link has been reset\n");
 	return rc ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
 }
 
 static pci_ers_result_t reset_link(struct pci_dev *dev, u32 service)
 {
-	struct pci_dev *udev;
 	pci_ers_result_t status;
 	struct pcie_port_service_driver *driver = NULL;
 
-	if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
-		/* Reset this port for all subordinates */
-		udev = dev;
-	} else {
-		/* Reset the upstream component (likely downstream port) */
-		udev = dev->bus->self;
-	}
-
-	/* Use the aer driver of the component firstly */
-	driver = pcie_port_find_service(udev, service);
-
+	driver = pcie_port_find_service(dev, service);
 	if (driver && driver->reset_link) {
-		status = driver->reset_link(udev);
-	} else if (udev->has_secondary_link) {
-		status = default_reset_link(udev);
+		status = driver->reset_link(dev);
+	} else if (dev->has_secondary_link) {
+		status = default_reset_link(dev);
 	} else {
 		pci_printk(KERN_DEBUG, dev, "no link-reset support at upstream device %s\n",
-			pci_name(udev));
+			pci_name(dev));
 		return PCI_ERS_RESULT_DISCONNECT;
 	}
 
 	if (status != PCI_ERS_RESULT_RECOVERED) {
 		pci_printk(KERN_DEBUG, dev, "link reset at upstream device %s failed\n",
-			pci_name(udev));
+			pci_name(dev));
 		return PCI_ERS_RESULT_DISCONNECT;
 	}
 
 	return status;
 }
 
-/**
- * broadcast_error_message - handle message broadcast to downstream drivers
- * @dev: pointer to from where in a hierarchy message is broadcasted down
- * @state: error state
- * @error_mesg: message to print
- * @cb: callback to be broadcasted
- *
- * Invoked during error recovery process. Once being invoked, the content
- * of error severity will be broadcasted to all downstream drivers in a
- * hierarchy in question.
- */
-static pci_ers_result_t broadcast_error_message(struct pci_dev *dev,
-	enum pci_channel_state state,
-	char *error_mesg,
-	int (*cb)(struct pci_dev *, void *))
+void pcie_do_recovery(struct pci_dev *dev, enum pci_channel_state state,
+		      u32 service)
 {
-	struct aer_broadcast_data result_data;
+	pci_ers_result_t status = PCI_ERS_RESULT_CAN_RECOVER;
+	struct pci_bus *bus;
 
-	pci_printk(KERN_DEBUG, dev, "broadcast %s message\n", error_mesg);
-	result_data.state = state;
-	if (cb == report_error_detected)
-		result_data.result = PCI_ERS_RESULT_CAN_RECOVER;
+	/*
+	 * Error recovery runs on all subordinates of the first downstream port.
+	 * If the downstream port detected the error, it is cleared at the end.
+	 */
+	if (!(pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
+	      pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM))
+		dev = dev->bus->self;
+	bus = dev->subordinate;
+
+	pci_dbg(dev, "broadcast error_detected message\n");
+	if (state == pci_channel_io_frozen)
+		pci_walk_bus(bus, report_frozen_detected, &status);
 	else
-		result_data.result = PCI_ERS_RESULT_RECOVERED;
+		pci_walk_bus(bus, report_normal_detected, &status);
 
-	if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE) {
-		/*
-		 * If the error is reported by a bridge, we think this error
-		 * is related to the downstream link of the bridge, so we
-		 * do error recovery on all subordinates of the bridge instead
-		 * of the bridge and clear the error status of the bridge.
-		 */
-		if (cb == report_error_detected)
-			dev->error_state = state;
-		pci_walk_bus(dev->subordinate, cb, &result_data);
-		if (cb == report_resume) {
-			pci_aer_clear_device_status(dev);
-			pci_cleanup_aer_uncorrect_error_status(dev);
-			dev->error_state = pci_channel_io_normal;
-		}
-	} else {
-		/*
-		 * If the error is reported by an end point, we think this
-		 * error is related to the upstream link of the end point.
-		 * The error is non fatal so the bus is ok; just invoke
-		 * the callback for the function that logged the error.
-		 */
-		cb(dev, &result_data);
+	if (state == pci_channel_io_frozen &&
+	    reset_link(dev, service) != PCI_ERS_RESULT_RECOVERED)
+		goto failed;
+
+	if (status == PCI_ERS_RESULT_CAN_RECOVER) {
+		status = PCI_ERS_RESULT_RECOVERED;
+		pci_dbg(dev, "broadcast mmio_enabled message\n");
+		pci_walk_bus(bus, report_mmio_enabled, &status);
 	}
-
-	return result_data.result;
-}
-
-/**
- * pcie_do_fatal_recovery - handle fatal error recovery process
- * @dev: pointer to a pci_dev data structure of agent detecting an error
- *
- * Invoked when an error is fatal. Once being invoked, removes the devices
- * beneath this AER agent, followed by reset link e.g. secondary bus reset
- * followed by re-enumeration of devices.
- */
-void pcie_do_fatal_recovery(struct pci_dev *dev, u32 service)
-{
-	struct pci_dev *udev;
-	struct pci_bus *parent;
-	struct pci_dev *pdev, *temp;
-	pci_ers_result_t result;
-
-	if (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE)
-		udev = dev;
-	else
-		udev = dev->bus->self;
-
-	parent = udev->subordinate;
-	pci_lock_rescan_remove();
-	pci_dev_get(dev);
-	list_for_each_entry_safe_reverse(pdev, temp, &parent->devices,
-					 bus_list) {
-		pci_dev_get(pdev);
-		pci_dev_set_disconnected(pdev, NULL);
-		if (pci_has_subordinate(pdev))
-			pci_walk_bus(pdev->subordinate,
-				     pci_dev_set_disconnected, NULL);
-		pci_stop_and_remove_bus_device(pdev);
-		pci_dev_put(pdev);
-	}
-
-	result = reset_link(udev, service);
-
-	if ((service == PCIE_PORT_SERVICE_AER) &&
-	    (dev->hdr_type == PCI_HEADER_TYPE_BRIDGE)) {
-		/*
-		 * If the error is reported by a bridge, we think this error
-		 * is related to the downstream link of the bridge, so we
-		 * do error recovery on all subordinates of the bridge instead
-		 * of the bridge and clear the error status of the bridge.
-		 */
-		pci_aer_clear_fatal_status(dev);
-		pci_aer_clear_device_status(dev);
-	}
-
-	if (result == PCI_ERS_RESULT_RECOVERED) {
-		if (pcie_wait_for_link(udev, true))
-			pci_rescan_bus(udev->bus);
-		pci_info(dev, "Device recovery from fatal error successful\n");
-	} else {
-		pci_uevent_ers(dev, PCI_ERS_RESULT_DISCONNECT);
-		pci_info(dev, "Device recovery from fatal error failed\n");
-	}
-
-	pci_dev_put(dev);
-	pci_unlock_rescan_remove();
-}
-
-/**
- * pcie_do_nonfatal_recovery - handle nonfatal error recovery process
- * @dev: pointer to a pci_dev data structure of agent detecting an error
- *
- * Invoked when an error is nonfatal/fatal. Once being invoked, broadcast
- * error detected message to all downstream drivers within a hierarchy in
- * question and return the returned code.
- */
-void pcie_do_nonfatal_recovery(struct pci_dev *dev)
-{
-	pci_ers_result_t status;
-	enum pci_channel_state state;
-
-	state = pci_channel_io_normal;
-
-	status = broadcast_error_message(dev,
-			state,
-			"error_detected",
-			report_error_detected);
-
-	if (status == PCI_ERS_RESULT_CAN_RECOVER)
-		status = broadcast_error_message(dev,
-				state,
-				"mmio_enabled",
-				report_mmio_enabled);
 
 	if (status == PCI_ERS_RESULT_NEED_RESET) {
 		/*
···
 		 * functions to reset slot before calling
 		 * drivers' slot_reset callbacks?
 		 */
-		status = broadcast_error_message(dev,
-				state,
-				"slot_reset",
-				report_slot_reset);
+		status = PCI_ERS_RESULT_RECOVERED;
+		pci_dbg(dev, "broadcast slot_reset message\n");
+		pci_walk_bus(bus, report_slot_reset, &status);
 	}
 
 	if (status != PCI_ERS_RESULT_RECOVERED)
 		goto failed;
 
-	broadcast_error_message(dev,
-				state,
-				"resume",
-				report_resume);
+	pci_dbg(dev, "broadcast resume message\n");
+	pci_walk_bus(bus, report_resume, &status);
 
+	pci_aer_clear_device_status(dev);
+	pci_cleanup_aer_uncorrect_error_status(dev);
 	pci_info(dev, "AER: Device recovery successful\n");
 	return;
 
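pcie_do_recovery() above folds every device's error_detected() vote into a single status via merge_result(). The userspace sketch below models that vote-merging; the precedence used (disconnect worst, then need_reset, then can_recover) is a simplified assumption, since merge_result()'s body is not part of this diff:

	/* Userspace model of the per-device vote merge in pcie_do_recovery().
	 * The ordering below is an illustrative simplification of the
	 * kernel's merge_result().
	 */
	#include <stdio.h>

	enum ers { ERS_NONE, ERS_CAN_RECOVER, ERS_NEED_RESET, ERS_DISCONNECT };

	static enum ers merge(enum ers orig, enum ers new)
	{
		if (orig == ERS_NONE)
			return new;		/* first real vote replaces NONE */
		return new > orig ? new : orig;	/* otherwise keep the worst vote */
	}

	int main(void)
	{
		enum ers votes[] = { ERS_CAN_RECOVER, ERS_NEED_RESET, ERS_CAN_RECOVER };
		enum ers status = ERS_NONE;
		unsigned i;

		for (i = 0; i < sizeof(votes) / sizeof(votes[0]); i++)
			status = merge(status, votes[i]);
		printf("merged vote: %d (ERS_NEED_RESET)\n", status);
		return 0;
	}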
+28 -2
drivers/pci/pcie/pme.c
···
 	kfree(get_service_data(srv));
 }
 
+static int pcie_pme_runtime_suspend(struct pcie_device *srv)
+{
+	struct pcie_pme_service_data *data = get_service_data(srv);
+
+	spin_lock_irq(&data->lock);
+	pcie_pme_interrupt_enable(srv->port, false);
+	pcie_clear_root_pme_status(srv->port);
+	data->noirq = true;
+	spin_unlock_irq(&data->lock);
+
+	return 0;
+}
+
+static int pcie_pme_runtime_resume(struct pcie_device *srv)
+{
+	struct pcie_pme_service_data *data = get_service_data(srv);
+
+	spin_lock_irq(&data->lock);
+	pcie_pme_interrupt_enable(srv->port, true);
+	data->noirq = false;
+	spin_unlock_irq(&data->lock);
+
+	return 0;
+}
+
 static struct pcie_port_service_driver pcie_pme_driver = {
 	.name		= "pcie_pme",
 	.port_type	= PCI_EXP_TYPE_ROOT_PORT,
···
 
 	.probe		= pcie_pme_probe,
 	.suspend	= pcie_pme_suspend,
+	.runtime_suspend = pcie_pme_runtime_suspend,
+	.runtime_resume	= pcie_pme_runtime_resume,
 	.resume		= pcie_pme_resume,
 	.remove		= pcie_pme_remove,
 };
···
 /**
  * pcie_pme_service_init - Register the PCIe PME service driver.
  */
-static int __init pcie_pme_service_init(void)
+int __init pcie_pme_init(void)
 {
 	return pcie_port_service_register(&pcie_pme_driver);
 }
-device_initcall(pcie_pme_service_init);
+28 -4
drivers/pci/pcie/portdrv.h
···
 
 #define PCIE_PORT_DEVICE_MAXSERVICES	4
 
+#ifdef CONFIG_PCIEAER
+int pcie_aer_init(void);
+#else
+static inline int pcie_aer_init(void) { return 0; }
+#endif
+
+#ifdef CONFIG_HOTPLUG_PCI_PCIE
+int pcie_hp_init(void);
+#else
+static inline int pcie_hp_init(void) { return 0; }
+#endif
+
+#ifdef CONFIG_PCIE_PME
+int pcie_pme_init(void);
+#else
+static inline int pcie_pme_init(void) { return 0; }
+#endif
+
+#ifdef CONFIG_PCIE_DPC
+int pcie_dpc_init(void);
+#else
+static inline int pcie_dpc_init(void) { return 0; }
+#endif
+
 /* Port Type */
 #define PCIE_ANY_PORT	(~0)
 
···
 	int (*suspend) (struct pcie_device *dev);
 	int (*resume_noirq) (struct pcie_device *dev);
 	int (*resume) (struct pcie_device *dev);
+	int (*runtime_suspend) (struct pcie_device *dev);
+	int (*runtime_resume) (struct pcie_device *dev);
 
 	/* Device driver may resume normal operations */
 	void (*error_resume)(struct pci_dev *dev);
···
 int pcie_port_device_suspend(struct device *dev);
 int pcie_port_device_resume_noirq(struct device *dev);
 int pcie_port_device_resume(struct device *dev);
+int pcie_port_device_runtime_suspend(struct device *dev);
+int pcie_port_device_runtime_resume(struct device *dev);
 #endif
 void pcie_port_device_remove(struct pci_dev *dev);
 int __must_check pcie_port_bus_register(void);
···
 		return pci_dev->__aer_firmware_first;
 	return 0;
 }
-#endif
-
-#ifdef CONFIG_PCIEAER
-irqreturn_t aer_irq(int irq, void *context);
 #endif
 
 struct pcie_port_service_driver *pcie_port_find_service(struct pci_dev *dev,
+21
drivers/pci/pcie/portdrv_core.c
···
 	size_t off = offsetof(struct pcie_port_service_driver, resume);
 	return device_for_each_child(dev, &off, pm_iter);
 }
+
+/**
+ * pcie_port_device_runtime_suspend - runtime suspend port services
+ * @dev: PCI Express port to handle
+ */
+int pcie_port_device_runtime_suspend(struct device *dev)
+{
+	size_t off = offsetof(struct pcie_port_service_driver, runtime_suspend);
+	return device_for_each_child(dev, &off, pm_iter);
+}
+
+/**
+ * pcie_port_device_runtime_resume - runtime resume port services
+ * @dev: PCI Express port to handle
+ */
+int pcie_port_device_runtime_resume(struct device *dev)
+{
+	size_t off = offsetof(struct pcie_port_service_driver, runtime_resume);
+	return device_for_each_child(dev, &off, pm_iter);
+}
 #endif /* PM */
 
 static int remove_iter(struct device *dev, void *data)
···
 	device = pdrvs.dev;
 	return device;
 }
+EXPORT_SYMBOL_GPL(pcie_port_find_device);
 
 /**
  * pcie_port_device_remove - unregister PCI Express port service devices
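The new runtime callbacks reuse pm_iter() by passing the byte offset of the wanted slot inside struct pcie_port_service_driver, so one iterator serves every PM entry point. A userspace sketch of that dispatch-by-offsetof trick (all names below are hypothetical):

	/* Userspace sketch of pm_iter()-style dispatch: the caller passes the
	 * byte offset of a callback slot, and the iterator invokes whatever
	 * function pointer lives there, treating a NULL slot as success.
	 */
	#include <stddef.h>
	#include <stdio.h>

	struct svc_driver {
		int (*suspend)(void);
		int (*runtime_suspend)(void);
	};

	static int call_at_offset(struct svc_driver *drv, size_t off)
	{
		int (**cb)(void) = (int (**)(void))((char *)drv + off);

		return *cb ? (*cb)() : 0;	/* missing callback: no-op */
	}

	static int rt_suspend(void) { puts("runtime_suspend"); return 0; }

	int main(void)
	{
		struct svc_driver drv = { .runtime_suspend = rt_suspend };

		return call_at_offset(&drv,
				      offsetof(struct svc_driver, runtime_suspend));
	}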
+23 -8
drivers/pci/pcie/portdrv_pci.c
···
 #ifdef CONFIG_PM
 static int pcie_port_runtime_suspend(struct device *dev)
 {
-	return to_pci_dev(dev)->bridge_d3 ? 0 : -EBUSY;
-}
+	if (!to_pci_dev(dev)->bridge_d3)
+		return -EBUSY;
 
-static int pcie_port_runtime_resume(struct device *dev)
-{
-	return 0;
+	return pcie_port_device_runtime_suspend(dev);
 }
 
 static int pcie_port_runtime_idle(struct device *dev)
···
 	.restore_noirq	= pcie_port_device_resume_noirq,
 	.restore	= pcie_port_device_resume,
 	.runtime_suspend = pcie_port_runtime_suspend,
-	.runtime_resume	= pcie_port_runtime_resume,
+	.runtime_resume	= pcie_port_device_runtime_resume,
 	.runtime_idle	= pcie_port_runtime_idle,
 };
 
···
 
 	pci_save_state(dev);
 
-	dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_SMART_SUSPEND |
-					   DPM_FLAG_LEAVE_SUSPENDED);
+	dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_NEVER_SKIP |
+					   DPM_FLAG_SMART_SUSPEND);
 
 	if (pci_bridge_d3_possible(dev)) {
 		/*
···
 {
 	/* Root Port has no impact. Always recovers. */
 	return PCI_ERS_RESULT_CAN_RECOVER;
 }
 
+static pci_ers_result_t pcie_portdrv_slot_reset(struct pci_dev *dev)
+{
+	pci_restore_state(dev);
+	pci_save_state(dev);
+	return PCI_ERS_RESULT_RECOVERED;
+}
+
 static pci_ers_result_t pcie_portdrv_mmio_enabled(struct pci_dev *dev)
···
 
 static const struct pci_error_handlers pcie_portdrv_err_handler = {
 	.error_detected = pcie_portdrv_error_detected,
+	.slot_reset = pcie_portdrv_slot_reset,
 	.mmio_enabled = pcie_portdrv_mmio_enabled,
 	.resume = pcie_portdrv_err_resume,
 };
···
 	{}
 };
 
+static void __init pcie_init_services(void)
+{
+	pcie_aer_init();
+	pcie_pme_init();
+	pcie_dpc_init();
+	pcie_hp_init();
+}
+
 static int __init pcie_portdrv_init(void)
 {
 	if (pcie_ports_disabled)
 		return -EACCES;
 
+	pcie_init_services();
 	dmi_check_system(pcie_portdrv_dmi_table);
 
 	return pci_register_driver(&pcie_portdriver);
+21 -3
drivers/pci/probe.c
···
 
 	pcie_capability_read_dword(bridge, PCI_EXP_LNKCAP, &linkcap);
 	bus->max_bus_speed = pcie_link_speed[linkcap & PCI_EXP_LNKCAP_SLS];
+	bridge->link_active_reporting = !!(linkcap & PCI_EXP_LNKCAP_DLLLARC);
 
 	pcie_capability_read_word(bridge, PCI_EXP_LNKSTA, &linksta);
 	pcie_update_link_speed(bus, linksta);
···
 	return PCI_CFG_SPACE_EXP_SIZE;
 }
 
+#ifdef CONFIG_PCI_IOV
+static bool is_vf0(struct pci_dev *dev)
+{
+	if (pci_iov_virtfn_devfn(dev->physfn, 0) == dev->devfn &&
+	    pci_iov_virtfn_bus(dev->physfn, 0) == dev->bus->number)
+		return true;
+
+	return false;
+}
+#endif
+
 int pci_cfg_space_size(struct pci_dev *dev)
 {
 	int pos;
 	u32 status;
 	u16 class;
+
+#ifdef CONFIG_PCI_IOV
+	/* Read cached value for all VFs except for VF0 */
+	if (dev->is_virtfn && !is_vf0(dev))
+		return dev->physfn->sriov->cfg_size;
+#endif
 
 	if (dev->bus->bus_flags & PCI_BUS_FLAGS_NO_EXTCFG)
 		return PCI_CFG_SPACE_SIZE;
···
 	pcibios_release_device(pci_dev);
 	pci_bus_put(pci_dev->bus);
 	kfree(pci_dev->driver_override);
-	kfree(pci_dev->dma_alias_mask);
+	bitmap_free(pci_dev->dma_alias_mask);
 	kfree(pci_dev);
 }
···
 	dev->dev.dma_parms = &dev->dma_parms;
 	dev->dev.coherent_dma_mask = 0xffffffffull;
 
-	pci_set_dma_max_seg_size(dev, 65536);
-	pci_set_dma_seg_boundary(dev, 0xffffffff);
+	dma_set_max_seg_size(&dev->dev, 65536);
+	dma_set_seg_boundary(&dev->dev, 0xffffffff);
 
 	/* Fix up broken headers */
 	pci_fixup_device(pci_fixup_header, dev);
+38 -58
drivers/pci/quirks.c
···
 
 	pci_iounmap(dev, regs);
 }
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0042, disable_igfx_irq);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0046, disable_igfx_irq);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x004a, disable_igfx_irq);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0106, disable_igfx_irq);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0152, disable_igfx_irq);
···
 	void __iomem *mmio;
 	struct ntb_info_regs __iomem *mmio_ntb;
 	struct ntb_ctrl_regs __iomem *mmio_ctrl;
-	struct sys_info_regs __iomem *mmio_sys_info;
 	u64 partition_map;
 	u8 partition;
 	int pp;
···
 
 	mmio_ntb = mmio + SWITCHTEC_GAS_NTB_OFFSET;
 	mmio_ctrl = (void __iomem *) mmio_ntb + SWITCHTEC_NTB_REG_CTRL_OFFSET;
-	mmio_sys_info = mmio + SWITCHTEC_GAS_SYS_INFO_OFFSET;
 
 	partition = ioread8(&mmio_ntb->partition_id);
 
···
 	pci_iounmap(pdev, mmio);
 	pci_disable_device(pdev);
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8531,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8532,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8533,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8534,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8535,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8536,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8543,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8544,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8545,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8546,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8551,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8552,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8553,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8554,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8555,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8556,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8561,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8562,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8563,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8564,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8565,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8566,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8571,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8572,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8573,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8574,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8575,
-			quirk_switchtec_ntb_dma_alias);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_MICROSEMI, 0x8576,
-			quirk_switchtec_ntb_dma_alias);
+#define SWITCHTEC_QUIRK(vid) \
+	DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_MICROSEMI, vid, \
+		PCI_CLASS_BRIDGE_OTHER, 8, quirk_switchtec_ntb_dma_alias)
+
+SWITCHTEC_QUIRK(0x8531);  /* PFX 24xG3 */
+SWITCHTEC_QUIRK(0x8532);  /* PFX 32xG3 */
+SWITCHTEC_QUIRK(0x8533);  /* PFX 48xG3 */
+SWITCHTEC_QUIRK(0x8534);  /* PFX 64xG3 */
+SWITCHTEC_QUIRK(0x8535);  /* PFX 80xG3 */
+SWITCHTEC_QUIRK(0x8536);  /* PFX 96xG3 */
+SWITCHTEC_QUIRK(0x8541);  /* PSX 24xG3 */
+SWITCHTEC_QUIRK(0x8542);  /* PSX 32xG3 */
+SWITCHTEC_QUIRK(0x8543);  /* PSX 48xG3 */
+SWITCHTEC_QUIRK(0x8544);  /* PSX 64xG3 */
+SWITCHTEC_QUIRK(0x8545);  /* PSX 80xG3 */
+SWITCHTEC_QUIRK(0x8546);  /* PSX 96xG3 */
+SWITCHTEC_QUIRK(0x8551);  /* PAX 24XG3 */
+SWITCHTEC_QUIRK(0x8552);  /* PAX 32XG3 */
+SWITCHTEC_QUIRK(0x8553);  /* PAX 48XG3 */
+SWITCHTEC_QUIRK(0x8554);  /* PAX 64XG3 */
+SWITCHTEC_QUIRK(0x8555);  /* PAX 80XG3 */
+SWITCHTEC_QUIRK(0x8556);  /* PAX 96XG3 */
+SWITCHTEC_QUIRK(0x8561);  /* PFXL 24XG3 */
+SWITCHTEC_QUIRK(0x8562);  /* PFXL 32XG3 */
+SWITCHTEC_QUIRK(0x8563);  /* PFXL 48XG3 */
+SWITCHTEC_QUIRK(0x8564);  /* PFXL 64XG3 */
+SWITCHTEC_QUIRK(0x8565);  /* PFXL 80XG3 */
+SWITCHTEC_QUIRK(0x8566);  /* PFXL 96XG3 */
+SWITCHTEC_QUIRK(0x8571);  /* PFXI 24XG3 */
+SWITCHTEC_QUIRK(0x8572);  /* PFXI 32XG3 */
+SWITCHTEC_QUIRK(0x8573);  /* PFXI 48XG3 */
+SWITCHTEC_QUIRK(0x8574);  /* PFXI 64XG3 */
+SWITCHTEC_QUIRK(0x8575);  /* PFXI 80XG3 */
+SWITCHTEC_QUIRK(0x8576);  /* PFXI 96XG3 */
+1 -3
drivers/pci/remove.c
···
 
 		pci_dev_assign_added(dev, false);
 	}
-
-	if (dev->bus->self)
-		pcie_aspm_exit_link_state(dev);
 }
 
 static void pci_destroy_dev(struct pci_dev *dev)
···
 	list_del(&dev->bus_list);
 	up_write(&pci_bus_sem);
 
+	pcie_aspm_exit_link_state(dev);
 	pci_bridge_d3_update(dev);
 	pci_free_resources(dev);
 	put_device(&dev->dev);
+15 -13
drivers/pci/setup-bus.c
···
 static resource_size_t calculate_iosize(resource_size_t size,
 					resource_size_t min_size,
 					resource_size_t size1,
+					resource_size_t add_size,
+					resource_size_t children_add_size,
 					resource_size_t old_size,
 					resource_size_t align)
 {
···
 #if defined(CONFIG_ISA) || defined(CONFIG_EISA)
 	size = (size & 0xff) + ((size & ~0xffUL) << 2);
 #endif
-	size = ALIGN(size + size1, align);
+	size = size + size1;
 	if (size < old_size)
 		size = old_size;
+
+	size = ALIGN(max(size, add_size) + children_add_size, align);
 	return size;
 }
 
 static resource_size_t calculate_memsize(resource_size_t size,
 					resource_size_t min_size,
-					resource_size_t size1,
+					resource_size_t add_size,
+					resource_size_t children_add_size,
 					resource_size_t old_size,
 					resource_size_t align)
 {
···
 		old_size = 0;
 	if (size < old_size)
 		size = old_size;
-	size = ALIGN(size + size1, align);
+
+	size = ALIGN(max(size, add_size) + children_add_size, align);
 	return size;
 }
 
···
 		}
 	}
 
-	size0 = calculate_iosize(size, min_size, size1,
+	size0 = calculate_iosize(size, min_size, size1, 0, 0,
 			resource_size(b_res), min_align);
-	if (children_add_size > add_size)
-		add_size = children_add_size;
-	size1 = (!realloc_head || (realloc_head && !add_size)) ? size0 :
-		calculate_iosize(size, min_size, add_size + size1,
+	size1 = (!realloc_head || (realloc_head && !add_size && !children_add_size)) ? size0 :
+		calculate_iosize(size, min_size, size1, add_size, children_add_size,
 			resource_size(b_res), min_align);
 	if (!size0 && !size1) {
 		if (b_res->start || b_res->end)
···
 
 	min_align = calculate_mem_align(aligns, max_order);
 	min_align = max(min_align, window_alignment(bus, b_res->flags));
-	size0 = calculate_memsize(size, min_size, 0, resource_size(b_res), min_align);
+	size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), min_align);
 	add_align = max(min_align, add_align);
-	if (children_add_size > add_size)
-		add_size = children_add_size;
-	size1 = (!realloc_head || (realloc_head && !add_size)) ? size0 :
-		calculate_memsize(size, min_size, add_size,
+	size1 = (!realloc_head || (realloc_head && !add_size && !children_add_size)) ? size0 :
+		calculate_memsize(size, min_size, add_size, children_add_size,
 			resource_size(b_res), add_align);
 	if (!size0 && !size1) {
 		if (b_res->start || b_res->end)
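The sizing change above means an optional add_size no longer inflates the mandatory window: it only raises the floor via max(), and the children's own optional space is added before the final alignment. A made-up numeric check of the new formula against the old behaviour:

	/* Numeric sketch of the new bridge-window sizing. The values below
	 * are invented purely to show the two formulas diverging.
	 */
	#include <stdio.h>

	#define ALIGN_UP(x, a)	(((x) + (a) - 1) / (a) * (a))
	#define MAX(a, b)	((a) > (b) ? (a) : (b))

	int main(void)
	{
		unsigned long size = 0x3000, add_size = 0x8000;
		unsigned long children_add_size = 0x2000, align = 0x1000;

		/* old: ALIGN(size + add_size, align) -> 0xb000 */
		printf("old: %#lx\n", ALIGN_UP(size + add_size, align));
		/* new: ALIGN(max(size, add_size) + children_add_size, align)
		 * -> 0xa000: the optional request is a floor, not an addend */
		printf("new: %#lx\n",
		       ALIGN_UP(MAX(size, add_size) + children_add_size, align));
		return 0;
	}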
+1 -2
drivers/pci/slot.c
··· 14 14 15 15 struct kset *pci_slots_kset; 16 16 EXPORT_SYMBOL_GPL(pci_slots_kset); 17 - static DEFINE_MUTEX(pci_slot_mutex); 18 17 19 18 static ssize_t pci_slot_attr_show(struct kobject *kobj, 20 19 struct attribute *attr, char *buf) ··· 370 371 371 372 if (!slot || !slot->ops) 372 373 return; 373 - kobj = kset_find_obj(module_kset, slot->ops->mod_name); 374 + kobj = kset_find_obj(module_kset, slot->mod_name); 374 375 if (!kobj) 375 376 return; 376 377 ret = sysfs_create_link(&pci_slot->kobj, kobj, "module");
+10 -29
drivers/platform/x86/asus-wmi.c
··· 254 254 int asus_hwmon_num_fans; 255 255 int asus_hwmon_pwm; 256 256 257 - struct hotplug_slot *hotplug_slot; 257 + struct hotplug_slot hotplug_slot; 258 258 struct mutex hotplug_lock; 259 259 struct mutex wmi_lock; 260 260 struct workqueue_struct *hotplug_workqueue; ··· 753 753 if (asus->wlan.rfkill) 754 754 rfkill_set_sw_state(asus->wlan.rfkill, blocked); 755 755 756 - if (asus->hotplug_slot) { 756 + if (asus->hotplug_slot.ops) { 757 757 bus = pci_find_bus(0, 1); 758 758 if (!bus) { 759 759 pr_warn("Unable to find PCI bus 1?\n"); ··· 858 858 static int asus_get_adapter_status(struct hotplug_slot *hotplug_slot, 859 859 u8 *value) 860 860 { 861 - struct asus_wmi *asus = hotplug_slot->private; 861 + struct asus_wmi *asus = container_of(hotplug_slot, 862 + struct asus_wmi, hotplug_slot); 862 863 int result = asus_wmi_get_devstate_simple(asus, ASUS_WMI_DEVID_WLAN); 863 864 864 865 if (result < 0) ··· 869 868 return 0; 870 869 } 871 870 872 - static struct hotplug_slot_ops asus_hotplug_slot_ops = { 873 - .owner = THIS_MODULE, 871 + static const struct hotplug_slot_ops asus_hotplug_slot_ops = { 874 872 .get_adapter_status = asus_get_adapter_status, 875 873 .get_power_status = asus_get_adapter_status, 876 874 }; ··· 899 899 900 900 INIT_WORK(&asus->hotplug_work, asus_hotplug_work); 901 901 902 - asus->hotplug_slot = kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL); 903 - if (!asus->hotplug_slot) 904 - goto error_slot; 902 + asus->hotplug_slot.ops = &asus_hotplug_slot_ops; 905 903 906 - asus->hotplug_slot->info = kzalloc(sizeof(struct hotplug_slot_info), 907 - GFP_KERNEL); 908 - if (!asus->hotplug_slot->info) 909 - goto error_info; 910 - 911 - asus->hotplug_slot->private = asus; 912 - asus->hotplug_slot->ops = &asus_hotplug_slot_ops; 913 - asus_get_adapter_status(asus->hotplug_slot, 914 - &asus->hotplug_slot->info->adapter_status); 915 - 916 - ret = pci_hp_register(asus->hotplug_slot, bus, 0, "asus-wifi"); 904 + ret = pci_hp_register(&asus->hotplug_slot, bus, 0, "asus-wifi"); 917 905 if (ret) { 918 906 pr_err("Unable to register hotplug slot - %d\n", ret); 919 907 goto error_register; ··· 910 922 return 0; 911 923 912 924 error_register: 913 - kfree(asus->hotplug_slot->info); 914 - error_info: 915 - kfree(asus->hotplug_slot); 916 - asus->hotplug_slot = NULL; 917 - error_slot: 925 + asus->hotplug_slot.ops = NULL; 918 926 destroy_workqueue(asus->hotplug_workqueue); 919 927 error_workqueue: 920 928 return ret; ··· 1038 1054 * asus_unregister_rfkill_notifier() 1039 1055 */ 1040 1056 asus_rfkill_hotplug(asus); 1041 - if (asus->hotplug_slot) { 1042 - pci_hp_deregister(asus->hotplug_slot); 1043 - kfree(asus->hotplug_slot->info); 1044 - kfree(asus->hotplug_slot); 1045 - } 1057 + if (asus->hotplug_slot.ops) 1058 + pci_hp_deregister(&asus->hotplug_slot); 1046 1059 if (asus->hotplug_workqueue) 1047 1060 destroy_workqueue(asus->hotplug_workqueue); 1048 1061
+13 -30
drivers/platform/x86/eeepc-laptop.c
··· 177 177 struct rfkill *wwan3g_rfkill; 178 178 struct rfkill *wimax_rfkill; 179 179 180 - struct hotplug_slot *hotplug_slot; 180 + struct hotplug_slot hotplug_slot; 181 181 struct mutex hotplug_lock; 182 182 183 183 struct led_classdev tpd_led; ··· 582 582 mutex_lock(&eeepc->hotplug_lock); 583 583 pci_lock_rescan_remove(); 584 584 585 - if (!eeepc->hotplug_slot) 585 + if (!eeepc->hotplug_slot.ops) 586 586 goto out_unlock; 587 587 588 588 port = acpi_get_pci_dev(handle); ··· 715 715 static int eeepc_get_adapter_status(struct hotplug_slot *hotplug_slot, 716 716 u8 *value) 717 717 { 718 - struct eeepc_laptop *eeepc = hotplug_slot->private; 719 - int val = get_acpi(eeepc, CM_ASL_WLAN); 718 + struct eeepc_laptop *eeepc; 719 + int val; 720 + 721 + eeepc = container_of(hotplug_slot, struct eeepc_laptop, hotplug_slot); 722 + val = get_acpi(eeepc, CM_ASL_WLAN); 720 723 721 724 if (val == 1 || val == 0) 722 725 *value = val; ··· 729 726 return 0; 730 727 } 731 728 732 - static struct hotplug_slot_ops eeepc_hotplug_slot_ops = { 733 - .owner = THIS_MODULE, 729 + static const struct hotplug_slot_ops eeepc_hotplug_slot_ops = { 734 730 .get_adapter_status = eeepc_get_adapter_status, 735 731 .get_power_status = eeepc_get_adapter_status, 736 732 }; ··· 744 742 return -ENODEV; 745 743 } 746 744 747 - eeepc->hotplug_slot = kzalloc(sizeof(struct hotplug_slot), GFP_KERNEL); 748 - if (!eeepc->hotplug_slot) 749 - goto error_slot; 745 + eeepc->hotplug_slot.ops = &eeepc_hotplug_slot_ops; 750 746 751 - eeepc->hotplug_slot->info = kzalloc(sizeof(struct hotplug_slot_info), 752 - GFP_KERNEL); 753 - if (!eeepc->hotplug_slot->info) 754 - goto error_info; 755 - 756 - eeepc->hotplug_slot->private = eeepc; 757 - eeepc->hotplug_slot->ops = &eeepc_hotplug_slot_ops; 758 - eeepc_get_adapter_status(eeepc->hotplug_slot, 759 - &eeepc->hotplug_slot->info->adapter_status); 760 - 761 - ret = pci_hp_register(eeepc->hotplug_slot, bus, 0, "eeepc-wifi"); 747 + ret = pci_hp_register(&eeepc->hotplug_slot, bus, 0, "eeepc-wifi"); 762 748 if (ret) { 763 749 pr_err("Unable to register hotplug slot - %d\n", ret); 764 750 goto error_register; ··· 755 765 return 0; 756 766 757 767 error_register: 758 - kfree(eeepc->hotplug_slot->info); 759 - error_info: 760 - kfree(eeepc->hotplug_slot); 761 - eeepc->hotplug_slot = NULL; 762 - error_slot: 768 + eeepc->hotplug_slot.ops = NULL; 763 769 return ret; 764 770 } 765 771 ··· 816 830 eeepc->wlan_rfkill = NULL; 817 831 } 818 832 819 - if (eeepc->hotplug_slot) { 820 - pci_hp_deregister(eeepc->hotplug_slot); 821 - kfree(eeepc->hotplug_slot->info); 822 - kfree(eeepc->hotplug_slot); 823 - } 833 + if (eeepc->hotplug_slot.ops) 834 + pci_hp_deregister(&eeepc->hotplug_slot); 824 835 825 836 if (eeepc->bluetooth_rfkill) { 826 837 rfkill_unregister(eeepc->bluetooth_rfkill);
+1
drivers/reset/reset-imx7.c
··· 67 67 [IMX7_RESET_PCIEPHY] = { SRC_PCIEPHY_RCR, BIT(2) | BIT(1) }, 68 68 [IMX7_RESET_PCIEPHY_PERST] = { SRC_PCIEPHY_RCR, BIT(3) }, 69 69 [IMX7_RESET_PCIE_CTRL_APPS_EN] = { SRC_PCIEPHY_RCR, BIT(6) }, 70 + [IMX7_RESET_PCIE_CTRL_APPS_TURNOFF] = { SRC_PCIEPHY_RCR, BIT(11) }, 70 71 [IMX7_RESET_DDRC_PRST] = { SRC_DDRC_RCR, BIT(0) }, 71 72 [IMX7_RESET_DDRC_CORE_RST] = { SRC_DDRC_RCR, BIT(1) }, 72 73 };
+2 -2
drivers/s390/net/ism_drv.c
··· 515 515 if (ret) 516 516 goto err_unmap; 517 517 518 - pci_set_dma_seg_boundary(pdev, SZ_1M - 1); 519 - pci_set_dma_max_seg_size(pdev, SZ_1M); 518 + dma_set_seg_boundary(&pdev->dev, SZ_1M - 1); 519 + dma_set_max_seg_size(&pdev->dev, SZ_1M); 520 520 pci_set_master(pdev); 521 521 522 522 ism->smcd = smcd_alloc_dev(&pdev->dev, dev_name(&pdev->dev), &ism_ops,
+1 -3
drivers/scsi/aacraid/linit.c
··· 1747 1747 shost->max_sectors = (shost->sg_tablesize * 8) + 112; 1748 1748 } 1749 1749 1750 - error = pci_set_dma_max_seg_size(pdev, 1750 + error = dma_set_max_seg_size(&pdev->dev, 1751 1751 (aac->adapter_info.options & AAC_OPT_NEW_COMM) ? 1752 1752 (shost->max_sectors << 9) : 65536); 1753 1753 if (error) ··· 2054 2054 struct Scsi_Host *shost = pci_get_drvdata(pdev); 2055 2055 struct scsi_device *sdev = NULL; 2056 2056 struct aac_dev *aac = (struct aac_dev *)shost_priv(shost); 2057 - 2058 - pci_cleanup_aer_uncorrect_error_status(pdev); 2059 2057 2060 2058 if (aac_adapter_ioremap(aac, aac->base_size)) { 2061 2059
-1
drivers/scsi/be2iscsi/be_main.c
··· 5529 5529 return PCI_ERS_RESULT_DISCONNECT; 5530 5530 } 5531 5531 5532 - pci_cleanup_aer_uncorrect_error_status(pdev); 5533 5532 return PCI_ERS_RESULT_RECOVERED; 5534 5533 } 5535 5534
-2
drivers/scsi/bfa/bfad.c
··· 1569 1569 if (pci_set_dma_mask(bfad->pcidev, DMA_BIT_MASK(32)) != 0) 1570 1570 goto out_disable_device; 1571 1571 1572 - pci_cleanup_aer_uncorrect_error_status(pdev); 1573 - 1574 1572 if (restart_bfa(bfad) == -1) 1575 1573 goto out_disable_device; 1576 1574
-1
drivers/scsi/csiostor/csio_init.c
··· 1102 1102 pci_set_master(pdev); 1103 1103 pci_restore_state(pdev); 1104 1104 pci_save_state(pdev); 1105 - pci_cleanup_aer_uncorrect_error_status(pdev); 1106 1105 1107 1106 /* Bring HW s/m to ready state. 1108 1107 * but don't resume IOs.
-8
drivers/scsi/lpfc/lpfc_init.c
··· 11329 11329 11330 11330 /* Bring device online, it will be no-op for non-fatal error resume */ 11331 11331 lpfc_online(phba); 11332 - 11333 - /* Clean up Advanced Error Reporting (AER) if needed */ 11334 - if (phba->hba_flag & HBA_AER_ENABLED) 11335 - pci_cleanup_aer_uncorrect_error_status(pdev); 11336 11332 } 11337 11333 11338 11334 /** ··· 12140 12144 /* Bring the device back online */ 12141 12145 lpfc_online(phba); 12142 12146 } 12143 - 12144 - /* Clean up Advanced Error Reporting (AER) if needed */ 12145 - if (phba->hba_flag & HBA_AER_ENABLED) 12146 - pci_cleanup_aer_uncorrect_error_status(pdev); 12147 12147 } 12148 12148 12149 12149 /**
-1
drivers/scsi/mpt3sas/mpt3sas_scsih.c
··· 10828 10828 10829 10829 pr_info(MPT3SAS_FMT "PCI error: resume callback!!\n", ioc->name); 10830 10830 10831 - pci_cleanup_aer_uncorrect_error_status(pdev); 10832 10831 mpt3sas_base_start_watchdog(ioc); 10833 10832 scsi_unblock_requests(ioc->shost); 10834 10833 }
-2
drivers/scsi/qla2xxx/qla_os.c
··· 6839 6839 "The device failed to resume I/O from slot/link_reset.\n"); 6840 6840 } 6841 6841 6842 - pci_cleanup_aer_uncorrect_error_status(pdev); 6843 - 6844 6842 ha->flags.eeh_busy = 0; 6845 6843 } 6846 6844
-1
drivers/scsi/qla4xxx/ql4_os.c
··· 9824 9824 __func__); 9825 9825 } 9826 9826 9827 - pci_cleanup_aer_uncorrect_error_status(pdev); 9828 9827 clear_bit(AF_EEH_BUSY, &ha->flags); 9829 9828 } 9830 9829
+7 -1
include/acpi/acpi_bus.h
··· 346 346 bool put_online:1; 347 347 }; 348 348 349 + struct acpi_device_properties { 350 + const guid_t *guid; 351 + const union acpi_object *properties; 352 + struct list_head list; 353 + }; 354 + 349 355 /* ACPI Device Specific Data (_DSD) */ 350 356 struct acpi_device_data { 351 357 const union acpi_object *pointer; 352 - const union acpi_object *properties; 358 + struct list_head properties; 353 359 const union acpi_object *of_compatible; 354 360 struct list_head subnodes; 355 361 };
+3 -1
include/dt-bindings/reset/imx7-reset.h
··· 56 56 #define IMX7_RESET_DDRC_PRST 23 57 57 #define IMX7_RESET_DDRC_CORE_RST 24 58 58 59 - #define IMX7_RESET_NUM 25 59 + #define IMX7_RESET_PCIE_CTRL_APPS_TURNOFF 25 60 + 61 + #define IMX7_RESET_NUM 26 60 62 61 63 #endif 62 64
+9
include/linux/acpi.h
··· 1072 1072 NR_FWNODE_REFERENCE_ARGS, args); 1073 1073 } 1074 1074 1075 + static inline bool acpi_dev_has_props(const struct acpi_device *adev) 1076 + { 1077 + return !list_empty(&adev->data.properties); 1078 + } 1079 + 1080 + struct acpi_device_properties * 1081 + acpi_data_add_props(struct acpi_device_data *data, const guid_t *guid, 1082 + const union acpi_object *properties); 1083 + 1075 1084 int acpi_node_prop_get(const struct fwnode_handle *fwnode, const char *propname, 1076 1085 void **valptr); 1077 1086 int acpi_dev_prop_read_single(struct acpi_device *adev,
+3
include/linux/blkdev.h
··· 704 704 #define QUEUE_FLAG_REGISTERED 26 /* queue has been registered to a disk */ 705 705 #define QUEUE_FLAG_SCSI_PASSTHROUGH 27 /* queue supports SCSI commands */ 706 706 #define QUEUE_FLAG_QUIESCED 28 /* queue has been quiesced */ 707 + #define QUEUE_FLAG_PCI_P2PDMA 29 /* device supports PCI p2p requests */ 707 708 708 709 #define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \ 709 710 (1 << QUEUE_FLAG_SAME_COMP) | \ ··· 737 736 #define blk_queue_dax(q) test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags) 738 737 #define blk_queue_scsi_passthrough(q) \ 739 738 test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags) 739 + #define blk_queue_pci_p2pdma(q) \ 740 + test_bit(QUEUE_FLAG_PCI_P2PDMA, &(q)->queue_flags) 740 741 741 742 #define blk_noretry_request(rq) \ 742 743 ((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
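A minimal sketch, assuming a driver-owned request queue q, of how the new flag is meant to be used; the function name is made up and not part of this merge:

    static int example_submit_p2p(struct request_queue *q)
    {
        /* The driver owning the queue sets the flag once at init time:
         *   blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, q);
         * A submitter refuses to build P2P requests against other queues. */
        if (!blk_queue_pci_p2pdma(q))
            return -EINVAL;

        /* ... safe to put pages backed by a PCI BAR into this queue's bios ... */
        return 0;
    }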
+6
include/linux/memremap.h
··· 53 53 * wakeup event whenever a page is unpinned and becomes idle. This 54 54 * wakeup is used to coordinate physical address space management (ex: 55 55 * fs truncate/hole punch) vs pinned pages (ex: device dma). 56 + * 57 + * MEMORY_DEVICE_PCI_P2PDMA: 58 + * Device memory residing in a PCI BAR intended for use with Peer-to-Peer 59 + * transactions. 56 60 */ 57 61 enum memory_type { 58 62 MEMORY_DEVICE_PRIVATE = 1, 59 63 MEMORY_DEVICE_PUBLIC, 60 64 MEMORY_DEVICE_FS_DAX, 65 + MEMORY_DEVICE_PCI_P2PDMA, 61 66 }; 62 67 63 68 /* ··· 125 120 struct device *dev; 126 121 void *data; 127 122 enum memory_type type; 123 + u64 pci_p2pdma_bus_offset; 128 124 }; 129 125 130 126 #ifdef CONFIG_ZONE_DEVICE
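A provider-side sketch (hypothetical BAR number and function name) of how a driver would publish a BAR as this new memory type through the helpers declared in the new include/linux/pci-p2pdma.h further below:

    static int example_publish_bar(struct pci_dev *pdev)
    {
        int rc;

        /* Expose BAR 4 as peer-to-peer memory; the helper is expected to set
         * up a dev_pagemap of type MEMORY_DEVICE_PCI_P2PDMA and record the
         * bus offset shown above. */
        rc = pci_p2pdma_add_resource(pdev, 4, pci_resource_len(pdev, 4), 0);
        if (rc)
            return rc;

        pci_p2pmem_publish(pdev, true);  /* make it discoverable by consumers */
        return 0;
    }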
+18
include/linux/mm.h
··· 890 890 page->pgmap->type == MEMORY_DEVICE_PUBLIC; 891 891 } 892 892 893 + #ifdef CONFIG_PCI_P2PDMA 894 + static inline bool is_pci_p2pdma_page(const struct page *page) 895 + { 896 + return is_zone_device_page(page) && 897 + page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA; 898 + } 899 + #else /* CONFIG_PCI_P2PDMA */ 900 + static inline bool is_pci_p2pdma_page(const struct page *page) 901 + { 902 + return false; 903 + } 904 + #endif /* CONFIG_PCI_P2PDMA */ 905 + 893 906 #else /* CONFIG_DEV_PAGEMAP_OPS */ 894 907 static inline void dev_pagemap_get_ops(void) 895 908 { ··· 923 910 } 924 911 925 912 static inline bool is_device_public_page(const struct page *page) 913 + { 914 + return false; 915 + } 916 + 917 + static inline bool is_pci_p2pdma_page(const struct page *page) 926 918 { 927 919 return false; 928 920 }
-18
include/linux/pci-dma-compat.h
··· 119 119 { 120 120 return dma_set_coherent_mask(&dev->dev, mask); 121 121 } 122 - 123 - static inline int pci_set_dma_max_seg_size(struct pci_dev *dev, 124 - unsigned int size) 125 - { 126 - return dma_set_max_seg_size(&dev->dev, size); 127 - } 128 - 129 - static inline int pci_set_dma_seg_boundary(struct pci_dev *dev, 130 - unsigned long mask) 131 - { 132 - return dma_set_seg_boundary(&dev->dev, mask); 133 - } 134 122 #else 135 123 static inline int pci_set_dma_mask(struct pci_dev *dev, u64 mask) 136 124 { return -EIO; } 137 125 static inline int pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask) 138 - { return -EIO; } 139 - static inline int pci_set_dma_max_seg_size(struct pci_dev *dev, 140 - unsigned int size) 141 - { return -EIO; } 142 - static inline int pci_set_dma_seg_boundary(struct pci_dev *dev, 143 - unsigned long mask) 144 126 { return -EIO; } 145 127 #endif 146 128
-12
include/linux/pci-dma.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef _LINUX_PCI_DMA_H 3 - #define _LINUX_PCI_DMA_H 4 - 5 - #define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME) DEFINE_DMA_UNMAP_ADDR(ADDR_NAME); 6 - #define DECLARE_PCI_UNMAP_LEN(LEN_NAME) DEFINE_DMA_UNMAP_LEN(LEN_NAME); 7 - #define pci_unmap_addr dma_unmap_addr 8 - #define pci_unmap_addr_set dma_unmap_addr_set 9 - #define pci_unmap_len dma_unmap_len 10 - #define pci_unmap_len_set dma_unmap_len_set 11 - 12 - #endif
+114
include/linux/pci-p2pdma.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * PCI Peer 2 Peer DMA support. 4 + * 5 + * Copyright (c) 2016-2018, Logan Gunthorpe 6 + * Copyright (c) 2016-2017, Microsemi Corporation 7 + * Copyright (c) 2017, Christoph Hellwig 8 + * Copyright (c) 2018, Eideticom Inc. 9 + */ 10 + 11 + #ifndef _LINUX_PCI_P2PDMA_H 12 + #define _LINUX_PCI_P2PDMA_H 13 + 14 + #include <linux/pci.h> 15 + 16 + struct block_device; 17 + struct scatterlist; 18 + 19 + #ifdef CONFIG_PCI_P2PDMA 20 + int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, 21 + u64 offset); 22 + int pci_p2pdma_distance_many(struct pci_dev *provider, struct device **clients, 23 + int num_clients, bool verbose); 24 + bool pci_has_p2pmem(struct pci_dev *pdev); 25 + struct pci_dev *pci_p2pmem_find_many(struct device **clients, int num_clients); 26 + void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size); 27 + void pci_free_p2pmem(struct pci_dev *pdev, void *addr, size_t size); 28 + pci_bus_addr_t pci_p2pmem_virt_to_bus(struct pci_dev *pdev, void *addr); 29 + struct scatterlist *pci_p2pmem_alloc_sgl(struct pci_dev *pdev, 30 + unsigned int *nents, u32 length); 31 + void pci_p2pmem_free_sgl(struct pci_dev *pdev, struct scatterlist *sgl); 32 + void pci_p2pmem_publish(struct pci_dev *pdev, bool publish); 33 + int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, int nents, 34 + enum dma_data_direction dir); 35 + int pci_p2pdma_enable_store(const char *page, struct pci_dev **p2p_dev, 36 + bool *use_p2pdma); 37 + ssize_t pci_p2pdma_enable_show(char *page, struct pci_dev *p2p_dev, 38 + bool use_p2pdma); 39 + #else /* CONFIG_PCI_P2PDMA */ 40 + static inline int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, 41 + size_t size, u64 offset) 42 + { 43 + return -EOPNOTSUPP; 44 + } 45 + static inline int pci_p2pdma_distance_many(struct pci_dev *provider, 46 + struct device **clients, int num_clients, bool verbose) 47 + { 48 + return -1; 49 + } 50 + static inline bool pci_has_p2pmem(struct pci_dev *pdev) 51 + { 52 + return false; 53 + } 54 + static inline struct pci_dev *pci_p2pmem_find_many(struct device **clients, 55 + int num_clients) 56 + { 57 + return NULL; 58 + } 59 + static inline void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size) 60 + { 61 + return NULL; 62 + } 63 + static inline void pci_free_p2pmem(struct pci_dev *pdev, void *addr, 64 + size_t size) 65 + { 66 + } 67 + static inline pci_bus_addr_t pci_p2pmem_virt_to_bus(struct pci_dev *pdev, 68 + void *addr) 69 + { 70 + return 0; 71 + } 72 + static inline struct scatterlist *pci_p2pmem_alloc_sgl(struct pci_dev *pdev, 73 + unsigned int *nents, u32 length) 74 + { 75 + return NULL; 76 + } 77 + static inline void pci_p2pmem_free_sgl(struct pci_dev *pdev, 78 + struct scatterlist *sgl) 79 + { 80 + } 81 + static inline void pci_p2pmem_publish(struct pci_dev *pdev, bool publish) 82 + { 83 + } 84 + static inline int pci_p2pdma_map_sg(struct device *dev, 85 + struct scatterlist *sg, int nents, enum dma_data_direction dir) 86 + { 87 + return 0; 88 + } 89 + static inline int pci_p2pdma_enable_store(const char *page, 90 + struct pci_dev **p2p_dev, bool *use_p2pdma) 91 + { 92 + *use_p2pdma = false; 93 + return 0; 94 + } 95 + static inline ssize_t pci_p2pdma_enable_show(char *page, 96 + struct pci_dev *p2p_dev, bool use_p2pdma) 97 + { 98 + return sprintf(page, "none\n"); 99 + } 100 + #endif /* CONFIG_PCI_P2PDMA */ 101 + 102 + 103 + static inline int pci_p2pdma_distance(struct pci_dev *provider, 104 + struct device *client, bool verbose) 105 + { 106 + return pci_p2pdma_distance_many(provider, &client, 1, verbose); 107 + } 108 + 109 + static inline struct pci_dev *pci_p2pmem_find(struct device *client) 110 + { 111 + return pci_p2pmem_find_many(&client, 1); 112 + } 113 + 114 + #endif /* _LINUX_PCI_P2P_H */
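A consumer-side sketch using only the functions declared above; the function name, transfer size and error handling are illustrative, not taken from this merge:

    static int example_p2p_transfer(struct device *dma_dev,
                                    struct device **clients, int num_clients)
    {
        struct pci_dev *provider;
        struct scatterlist *sgl;
        unsigned int nents;

        /* Pick a provider whose p2pmem is usable by every client device. */
        provider = pci_p2pmem_find_many(clients, num_clients);
        if (!provider)
            return -ENODEV;

        /* Carve 64KB of the provider's BAR memory out as a scatterlist. */
        sgl = pci_p2pmem_alloc_sgl(provider, &nents, SZ_64K);
        if (!sgl)
            return -ENOMEM;

        /* Map for DMA: for P2PDMA pages this applies the recorded bus
         * offset instead of going through the normal dma_map_sg() path. */
        if (!pci_p2pdma_map_sg(dma_dev, sgl, nents, DMA_BIDIRECTIONAL)) {
            pci_p2pmem_free_sgl(provider, sgl);
            return -EIO;
        }

        /* ... hand the mapped scatterlist to the DMA engine ... */
        return 0;
    }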
+6 -1
include/linux/pci.h
··· 281 281 struct pci_vpd; 282 282 struct pci_sriov; 283 283 struct pci_ats; 284 + struct pci_p2pdma; 284 285 285 286 /* The pci_dev structure describes PCI devices */ 286 287 struct pci_dev { ··· 326 325 pci_power_t current_state; /* Current operating state. In ACPI, 327 326 this is D0-D3, D0 being fully 328 327 functional, and D3 being off. */ 328 + unsigned int imm_ready:1; /* Supports Immediate Readiness */ 329 329 u8 pm_cap; /* PM capability offset */ 330 330 unsigned int pme_support:5; /* Bitmask of states from which PME# 331 331 can be generated */ ··· 404 402 unsigned int has_secondary_link:1; 405 403 unsigned int non_compliant_bars:1; /* Broken BARs; ignore them */ 406 404 unsigned int is_probed:1; /* Device probing in progress */ 405 + unsigned int link_active_reporting:1;/* Device capable of reporting link active */ 407 406 pci_dev_flags_t dev_flags; 408 407 atomic_t enable_cnt; /* pci_enable_device has been called */ 409 408 ··· 441 438 #endif 442 439 #ifdef CONFIG_PCI_PASID 443 440 u16 pasid_features; 441 + #endif 442 + #ifdef CONFIG_PCI_P2PDMA 443 + struct pci_p2pdma *p2pdma; 444 444 #endif 445 445 phys_addr_t rom; /* Physical address if not from BAR */ 446 446 size_t romlen; /* Length if not from BAR */ ··· 1348 1342 1349 1343 /* kmem_cache style wrapper around pci_alloc_consistent() */ 1350 1344 1351 - #include <linux/pci-dma.h> 1352 1345 #include <linux/dmapool.h> 1353 1346 1354 1347 #define pci_pool dma_pool
+5 -38
include/linux/pci_hotplug.h
··· 16 16 17 17 /** 18 18 * struct hotplug_slot_ops -the callbacks that the hotplug pci core can use 19 - * @owner: The module owner of this structure 20 - * @mod_name: The module name (KBUILD_MODNAME) of this structure 21 19 * @enable_slot: Called when the user wants to enable a specific pci slot 22 20 * @disable_slot: Called when the user wants to disable a specific pci slot 23 21 * @set_attention_status: Called to set the specific slot's attention LED to ··· 23 25 * @hardware_test: Called to run a specified hardware test on the specified 24 26 * slot. 25 27 * @get_power_status: Called to get the current power status of a slot. 26 - * If this field is NULL, the value passed in the struct hotplug_slot_info 27 - * will be used when this value is requested by a user. 28 28 * @get_attention_status: Called to get the current attention status of a slot. 29 - * If this field is NULL, the value passed in the struct hotplug_slot_info 30 - * will be used when this value is requested by a user. 31 29 * @get_latch_status: Called to get the current latch status of a slot. 32 - * If this field is NULL, the value passed in the struct hotplug_slot_info 33 - * will be used when this value is requested by a user. 34 30 * @get_adapter_status: Called to get see if an adapter is present in the slot or not. 35 - * If this field is NULL, the value passed in the struct hotplug_slot_info 36 - * will be used when this value is requested by a user. 37 31 * @reset_slot: Optional interface to allow override of a bus reset for the 38 32 * slot for cases where a secondary bus reset can result in spurious 39 33 * hotplug events or where a slot can be reset independent of the bus. ··· 36 46 * set an LED, enable / disable power, etc.) 37 47 */ 38 48 struct hotplug_slot_ops { 39 - struct module *owner; 40 - const char *mod_name; 41 49 int (*enable_slot) (struct hotplug_slot *slot); 42 50 int (*disable_slot) (struct hotplug_slot *slot); 43 51 int (*set_attention_status) (struct hotplug_slot *slot, u8 value); ··· 48 60 }; 49 61 50 62 /** 51 - * struct hotplug_slot_info - used to notify the hotplug pci core of the state of the slot 52 - * @power_status: if power is enabled or not (1/0) 53 - * @attention_status: if the attention light is enabled or not (1/0) 54 - * @latch_status: if the latch (if any) is open or closed (1/0) 55 - * @adapter_status: if there is a pci board present in the slot or not (1/0) 56 - * 57 - * Used to notify the hotplug pci core of the status of a specific slot. 58 - */ 59 - struct hotplug_slot_info { 60 - u8 power_status; 61 - u8 attention_status; 62 - u8 latch_status; 63 - u8 adapter_status; 64 - }; 65 - 66 - /** 67 63 * struct hotplug_slot - used to register a physical slot with the hotplug pci core 68 64 * @ops: pointer to the &struct hotplug_slot_ops to be used for this slot 69 - * @info: pointer to the &struct hotplug_slot_info for the initial values for 70 - * this slot. 71 - * @private: used by the hotplug pci controller driver to store whatever it 72 - * needs. 65 + * @owner: The module owner of this structure 66 + * @mod_name: The module name (KBUILD_MODNAME) of this structure 73 67 */ 74 68 struct hotplug_slot { 75 - struct hotplug_slot_ops *ops; 76 - struct hotplug_slot_info *info; 77 - void *private; 69 + const struct hotplug_slot_ops *ops; 78 70 79 71 /* Variables below this are for use only by the hotplug pci core. */ 80 72 struct list_head slot_list; 81 73 struct pci_slot *pci_slot; 74 + struct module *owner; 75 + const char *mod_name; 82 76 }; 83 77 84 78 static inline const char *hotplug_slot_name(const struct hotplug_slot *slot) ··· 79 109 void pci_hp_del(struct hotplug_slot *slot); 80 110 void pci_hp_destroy(struct hotplug_slot *slot); 81 111 void pci_hp_deregister(struct hotplug_slot *slot); 82 - 83 - int __must_check pci_hp_change_slot_info(struct hotplug_slot *slot, 84 - struct hotplug_slot_info *info); 85 112 86 113 /* use a define to avoid include chaining to get THIS_MODULE & friends */ 87 114 #define pci_hp_register(slot, pbus, devnr, name) \
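The asus-wmi and eeepc-laptop conversions above already show the pattern this header now expects; a generic sketch with made-up names:

    struct example_ctrl {
        struct pci_dev *pdev;
        struct hotplug_slot slot;            /* embedded, no longer kzalloc'd */
    };

    static int example_get_adapter_status(struct hotplug_slot *slot, u8 *value)
    {
        /* The former slot->private pointer is replaced by container_of(). */
        struct example_ctrl *ctrl = container_of(slot, struct example_ctrl, slot);

        *value = example_card_present(ctrl); /* hypothetical helper */
        return 0;
    }

    static const struct hotplug_slot_ops example_hotplug_slot_ops = {
        .get_adapter_status = example_get_adapter_status,
    };

    /* Registration: no hotplug_slot_info to allocate or keep in sync.
     *   ctrl->slot.ops = &example_hotplug_slot_ops;
     *   ret = pci_hp_register(&ctrl->slot, bus, devnr, "example-slot");
     */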
-2
include/linux/pci_ids.h
··· 2543 2543 #define PCI_VENDOR_ID_HUAWEI 0x19e5 2544 2544 2545 2545 #define PCI_VENDOR_ID_NETRONOME 0x19ee 2546 - #define PCI_DEVICE_ID_NETRONOME_NFP3200 0x3200 2547 - #define PCI_DEVICE_ID_NETRONOME_NFP3240 0x3240 2548 2546 #define PCI_DEVICE_ID_NETRONOME_NFP4000 0x4000 2549 2547 #define PCI_DEVICE_ID_NETRONOME_NFP5000 0x5000 2550 2548 #define PCI_DEVICE_ID_NETRONOME_NFP6000 0x6000
+1
include/uapi/linux/pci_regs.h
··· 52 52 #define PCI_COMMAND_INTX_DISABLE 0x400 /* INTx Emulation Disable */ 53 53 54 54 #define PCI_STATUS 0x06 /* 16 bits */ 55 + #define PCI_STATUS_IMM_READY 0x01 /* Immediate Readiness */ 55 56 #define PCI_STATUS_INTERRUPT 0x08 /* Interrupt status */ 56 57 #define PCI_STATUS_CAP_LIST 0x10 /* Support Capability List */ 57 58 #define PCI_STATUS_66MHZ 0x20 /* Support 66 MHz PCI 2.1 bus */
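A small sketch, with an illustrative function name, of how the new bit pairs with the imm_ready field added to struct pci_dev above; pci_read_config_word() and PCI_STATUS are long-standing PCI core APIs, only the PCI_STATUS_IMM_READY test is new:

    static void example_probe_imm_ready(struct pci_dev *dev)
    {
        u16 status;

        pci_read_config_word(dev, PCI_STATUS, &status);
        if (status & PCI_STATUS_IMM_READY)
            dev->imm_ready = 1;  /* matches the new bitfield in struct pci_dev */
    }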
+7 -6
tools/Makefile
··· 21 21 @echo ' leds - LEDs tools' 22 22 @echo ' liblockdep - user-space wrapper for kernel locking-validator' 23 23 @echo ' bpf - misc BPF tools' 24 + @echo ' pci - PCI tools' 24 25 @echo ' perf - Linux performance measurement and analysis tool' 25 26 @echo ' selftests - various kernel selftests' 26 27 @echo ' spi - spi tools' ··· 60 59 cpupower: FORCE 61 60 $(call descend,power/$@) 62 61 63 - cgroup firewire hv guest spi usb virtio vm bpf iio gpio objtool leds wmi: FORCE 62 + cgroup firewire hv guest spi usb virtio vm bpf iio gpio objtool leds wmi pci: FORCE 64 63 $(call descend,$@) 65 64 66 65 liblockdep: FORCE ··· 95 94 all: acpi cgroup cpupower gpio hv firewire liblockdep \ 96 95 perf selftests spi turbostat usb \ 97 96 virtio vm bpf x86_energy_perf_policy \ 98 - tmon freefall iio objtool kvm_stat wmi 97 + tmon freefall iio objtool kvm_stat wmi pci 99 98 100 99 acpi_install: 101 100 $(call descend,power/$(@:_install=),install) ··· 103 102 cpupower_install: 104 103 $(call descend,power/$(@:_install=),install) 105 104 106 - cgroup_install firewire_install gpio_install hv_install iio_install perf_install spi_install usb_install virtio_install vm_install bpf_install objtool_install wmi_install: 105 + cgroup_install firewire_install gpio_install hv_install iio_install perf_install spi_install usb_install virtio_install vm_install bpf_install objtool_install wmi_install pci_install: 107 106 $(call descend,$(@:_install=),install) 108 107 109 108 liblockdep_install: ··· 129 128 perf_install selftests_install turbostat_install usb_install \ 130 129 virtio_install vm_install bpf_install x86_energy_perf_policy_install \ 131 130 tmon_install freefall_install objtool_install kvm_stat_install \ 132 - wmi_install 131 + wmi_install pci_install 133 132 134 133 acpi_clean: 135 134 $(call descend,power/acpi,clean) ··· 137 136 cpupower_clean: 138 137 $(call descend,power/cpupower,clean) 139 138 140 - cgroup_clean hv_clean firewire_clean spi_clean usb_clean virtio_clean vm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean: 139 + cgroup_clean hv_clean firewire_clean spi_clean usb_clean virtio_clean vm_clean wmi_clean bpf_clean iio_clean gpio_clean objtool_clean leds_clean pci_clean: 141 140 $(call descend,$(@:_clean=),clean) 142 141 143 142 liblockdep_clean: ··· 175 174 perf_clean selftests_clean turbostat_clean spi_clean usb_clean virtio_clean \ 176 175 vm_clean bpf_clean iio_clean x86_energy_perf_policy_clean tmon_clean \ 177 176 freefall_clean build_clean libbpf_clean libsubcmd_clean liblockdep_clean \ 178 - gpio_clean objtool_clean leds_clean wmi_clean 177 + gpio_clean objtool_clean leds_clean wmi_clean pci_clean 179 178 180 179 .PHONY: FORCE
+1
tools/pci/Build
··· 1 + pcitest-y += pcitest.o
+53
tools/pci/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + include ../scripts/Makefile.include 3 + 4 + bindir ?= /usr/bin 5 + 6 + ifeq ($(srctree),) 7 + srctree := $(patsubst %/,%,$(dir $(CURDIR))) 8 + srctree := $(patsubst %/,%,$(dir $(srctree))) 9 + endif 10 + 11 + # Do not use make's built-in rules 12 + # (this improves performance and avoids hard-to-debug behaviour); 13 + MAKEFLAGS += -r 14 + 15 + CFLAGS += -O2 -Wall -g -D_GNU_SOURCE -I$(OUTPUT)include 16 + 17 + ALL_TARGETS := pcitest pcitest.sh 18 + ALL_PROGRAMS := $(patsubst %,$(OUTPUT)%,$(ALL_TARGETS)) 19 + 20 + all: $(ALL_PROGRAMS) 21 + 22 + export srctree OUTPUT CC LD CFLAGS 23 + include $(srctree)/tools/build/Makefile.include 24 + 25 + # 26 + # We need the following to be outside of kernel tree 27 + # 28 + $(OUTPUT)include/linux/: ../../include/uapi/linux/ 29 + mkdir -p $(OUTPUT)include/linux/ 2>&1 || true 30 + ln -sf $(CURDIR)/../../include/uapi/linux/pcitest.h $@ 31 + 32 + prepare: $(OUTPUT)include/linux/ 33 + 34 + PCITEST_IN := $(OUTPUT)pcitest-in.o 35 + $(PCITEST_IN): prepare FORCE 36 + $(Q)$(MAKE) $(build)=pcitest 37 + $(OUTPUT)pcitest: $(PCITEST_IN) 38 + $(QUIET_LINK)$(CC) $(CFLAGS) $(LDFLAGS) $< -o $@ 39 + 40 + clean: 41 + rm -f $(ALL_PROGRAMS) 42 + rm -rf $(OUTPUT)include/ 43 + find $(if $(OUTPUT),$(OUTPUT),.) -name '*.o' -delete -o -name '\.*.d' -delete 44 + 45 + install: $(ALL_PROGRAMS) 46 + install -d -m 755 $(DESTDIR)$(bindir); \ 47 + for program in $(ALL_PROGRAMS); do \ 48 + install $$program $(DESTDIR)$(bindir); \ 49 + done 50 + 51 + FORCE: 52 + 53 + .PHONY: all install clean FORCE prepare
+2 -5
tools/pci/pcitest.c
··· 23 23 #include <stdio.h> 24 24 #include <stdlib.h> 25 25 #include <sys/ioctl.h> 26 - #include <time.h> 27 26 #include <unistd.h> 28 27 29 28 #include <linux/pcitest.h> ··· 47 48 unsigned long size; 48 49 }; 49 50 50 - static int run_test(struct pci_test *test) 51 + static void run_test(struct pci_test *test) 51 52 { 52 53 long ret; 53 54 int fd; 54 - struct timespec start, end; 55 - double time; 56 55 57 56 fd = open(test->device, O_RDWR); 58 57 if (fd < 0) { 59 58 perror("can't open PCI Endpoint Test device"); 60 - return fd; 59 + return; 61 60 } 62 61 63 62 if (test->barnum >= 0 && test->barnum <= 5) {